Feb 25 11:13:52 localhost kernel: Linux version 5.14.0-686.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026
Feb 25 11:13:52 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 25 11:13:52 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 11:13:52 localhost kernel: BIOS-provided physical RAM map:
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 25 11:13:52 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 25 11:13:52 localhost kernel: NX (Execute Disable) protection: active
Feb 25 11:13:52 localhost kernel: APIC: Static calls initialized
Feb 25 11:13:52 localhost kernel: SMBIOS 2.8 present.
Feb 25 11:13:52 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 25 11:13:52 localhost kernel: Hypervisor detected: KVM
Feb 25 11:13:52 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 25 11:13:52 localhost kernel: kvm-clock: using sched offset of 8201244639 cycles
Feb 25 11:13:52 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 25 11:13:52 localhost kernel: tsc: Detected 2800.000 MHz processor
Feb 25 11:13:52 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 25 11:13:52 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 25 11:13:52 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 25 11:13:52 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 25 11:13:52 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 25 11:13:52 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 25 11:13:52 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 25 11:13:52 localhost kernel: Using GB pages for direct mapping
Feb 25 11:13:52 localhost kernel: RAMDISK: [mem 0x1b6ca000-0x29b5cfff]
Feb 25 11:13:52 localhost kernel: ACPI: Early table checksum verification disabled
Feb 25 11:13:52 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 25 11:13:52 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 11:13:52 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 11:13:52 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 11:13:52 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 25 11:13:52 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 11:13:52 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 11:13:52 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 25 11:13:52 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 25 11:13:52 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 25 11:13:52 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 25 11:13:52 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 25 11:13:52 localhost kernel: No NUMA configuration found
Feb 25 11:13:52 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 25 11:13:52 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 25 11:13:52 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 25 11:13:52 localhost kernel: Zone ranges:
Feb 25 11:13:52 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 25 11:13:52 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 25 11:13:52 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 25 11:13:52 localhost kernel:   Device   empty
Feb 25 11:13:52 localhost kernel: Movable zone start for each node
Feb 25 11:13:52 localhost kernel: Early memory node ranges
Feb 25 11:13:52 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 25 11:13:52 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 25 11:13:52 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 25 11:13:52 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 25 11:13:52 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 25 11:13:52 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 25 11:13:52 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 25 11:13:52 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 25 11:13:52 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 25 11:13:52 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 25 11:13:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 25 11:13:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 25 11:13:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 25 11:13:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 25 11:13:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 25 11:13:52 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 25 11:13:52 localhost kernel: TSC deadline timer available
Feb 25 11:13:52 localhost kernel: CPU topo: Max. logical packages:   8
Feb 25 11:13:52 localhost kernel: CPU topo: Max. logical dies:       8
Feb 25 11:13:52 localhost kernel: CPU topo: Max. dies per package:   1
Feb 25 11:13:52 localhost kernel: CPU topo: Max. threads per core:   1
Feb 25 11:13:52 localhost kernel: CPU topo: Num. cores per package:     1
Feb 25 11:13:52 localhost kernel: CPU topo: Num. threads per package:   1
Feb 25 11:13:52 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 25 11:13:52 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 25 11:13:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 25 11:13:52 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 25 11:13:52 localhost kernel: Booting paravirtualized kernel on KVM
Feb 25 11:13:52 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 25 11:13:52 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 25 11:13:52 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 25 11:13:52 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 25 11:13:52 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 25 11:13:52 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 25 11:13:52 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 11:13:52 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64", will be passed to user space.
Feb 25 11:13:52 localhost kernel: random: crng init done
Feb 25 11:13:52 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 25 11:13:52 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 25 11:13:52 localhost kernel: Fallback order for Node 0: 0 
Feb 25 11:13:52 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 25 11:13:52 localhost kernel: Policy zone: Normal
Feb 25 11:13:52 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 25 11:13:52 localhost kernel: software IO TLB: area num 8.
Feb 25 11:13:52 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 25 11:13:52 localhost kernel: ftrace: allocating 49605 entries in 194 pages
Feb 25 11:13:52 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 25 11:13:52 localhost kernel: Dynamic Preempt: voluntary
Feb 25 11:13:52 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 25 11:13:52 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 25 11:13:52 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 25 11:13:52 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 25 11:13:52 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 25 11:13:52 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 25 11:13:52 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 25 11:13:52 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 25 11:13:52 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 11:13:52 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 11:13:52 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 11:13:52 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 25 11:13:52 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 25 11:13:52 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 25 11:13:52 localhost kernel: Console: colour VGA+ 80x25
Feb 25 11:13:52 localhost kernel: printk: console [ttyS0] enabled
Feb 25 11:13:52 localhost kernel: ACPI: Core revision 20230331
Feb 25 11:13:52 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 25 11:13:52 localhost kernel: x2apic enabled
Feb 25 11:13:52 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 25 11:13:52 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 25 11:13:52 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 25 11:13:52 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 25 11:13:52 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 25 11:13:52 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 25 11:13:52 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 25 11:13:52 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 25 11:13:52 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 25 11:13:52 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 25 11:13:52 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 25 11:13:52 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 25 11:13:52 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 25 11:13:52 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 25 11:13:52 localhost kernel: active return thunk: retbleed_return_thunk
Feb 25 11:13:52 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 25 11:13:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 25 11:13:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 25 11:13:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 25 11:13:52 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 25 11:13:52 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 25 11:13:52 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 25 11:13:52 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 25 11:13:52 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 25 11:13:52 localhost kernel: landlock: Up and running.
Feb 25 11:13:52 localhost kernel: Yama: becoming mindful.
Feb 25 11:13:52 localhost kernel: SELinux:  Initializing.
Feb 25 11:13:52 localhost kernel: LSM support for eBPF active
Feb 25 11:13:52 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 25 11:13:52 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 25 11:13:52 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 25 11:13:52 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 25 11:13:52 localhost kernel: ... version:                0
Feb 25 11:13:52 localhost kernel: ... bit width:              48
Feb 25 11:13:52 localhost kernel: ... generic registers:      6
Feb 25 11:13:52 localhost kernel: ... value mask:             0000ffffffffffff
Feb 25 11:13:52 localhost kernel: ... max period:             00007fffffffffff
Feb 25 11:13:52 localhost kernel: ... fixed-purpose events:   0
Feb 25 11:13:52 localhost kernel: ... event mask:             000000000000003f
Feb 25 11:13:52 localhost kernel: signal: max sigframe size: 1776
Feb 25 11:13:52 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 25 11:13:52 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 25 11:13:52 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 25 11:13:52 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 25 11:13:52 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 25 11:13:52 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 25 11:13:52 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 25 11:13:52 localhost kernel: node 0 deferred pages initialised in 9ms
Feb 25 11:13:52 localhost kernel: Memory: 7617604K/8388068K available (16384K kernel code, 5797K rwdata, 13956K rodata, 4204K init, 7172K bss, 764464K reserved, 0K cma-reserved)
Feb 25 11:13:52 localhost kernel: devtmpfs: initialized
Feb 25 11:13:52 localhost kernel: x86/mm: Memory block size: 128MB
Feb 25 11:13:52 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 25 11:13:52 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 25 11:13:52 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 25 11:13:52 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 25 11:13:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 25 11:13:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 25 11:13:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 25 11:13:52 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 25 11:13:52 localhost kernel: audit: type=2000 audit(1772018031.184:1): state=initialized audit_enabled=0 res=1
Feb 25 11:13:52 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 25 11:13:52 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 25 11:13:52 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 25 11:13:52 localhost kernel: cpuidle: using governor menu
Feb 25 11:13:52 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 25 11:13:52 localhost kernel: PCI: Using configuration type 1 for base access
Feb 25 11:13:52 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 25 11:13:52 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 25 11:13:52 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 25 11:13:52 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 25 11:13:52 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 25 11:13:52 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 25 11:13:52 localhost kernel: Demotion targets for Node 0: null
Feb 25 11:13:52 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 25 11:13:52 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 25 11:13:52 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 25 11:13:52 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 25 11:13:52 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 25 11:13:52 localhost kernel: ACPI: Interpreter enabled
Feb 25 11:13:52 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 25 11:13:52 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 25 11:13:52 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 25 11:13:52 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 25 11:13:52 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 25 11:13:52 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 25 11:13:52 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [3] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [4] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [5] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [6] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [7] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [8] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [9] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [10] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [11] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [12] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [13] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [14] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [15] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [16] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [17] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [18] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [19] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [20] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [21] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [22] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [23] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [24] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [25] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [26] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [27] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [28] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [29] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [30] registered
Feb 25 11:13:52 localhost kernel: acpiphp: Slot [31] registered
Feb 25 11:13:52 localhost kernel: PCI host bridge to bus 0000:00
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 25 11:13:52 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 25 11:13:52 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 25 11:13:52 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 25 11:13:52 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 25 11:13:52 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 25 11:13:52 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 25 11:13:52 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 25 11:13:52 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 25 11:13:52 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 25 11:13:52 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 25 11:13:52 localhost kernel: iommu: Default domain type: Translated
Feb 25 11:13:52 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 25 11:13:52 localhost kernel: SCSI subsystem initialized
Feb 25 11:13:52 localhost kernel: ACPI: bus type USB registered
Feb 25 11:13:52 localhost kernel: usbcore: registered new interface driver usbfs
Feb 25 11:13:52 localhost kernel: usbcore: registered new interface driver hub
Feb 25 11:13:52 localhost kernel: usbcore: registered new device driver usb
Feb 25 11:13:52 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 25 11:13:52 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 25 11:13:52 localhost kernel: PTP clock support registered
Feb 25 11:13:52 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 25 11:13:52 localhost kernel: NetLabel: Initializing
Feb 25 11:13:52 localhost kernel: NetLabel:  domain hash size = 128
Feb 25 11:13:52 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 25 11:13:52 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 25 11:13:52 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 25 11:13:52 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 25 11:13:52 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 25 11:13:52 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 25 11:13:52 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 25 11:13:52 localhost kernel: vgaarb: loaded
Feb 25 11:13:52 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 25 11:13:52 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 25 11:13:52 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 25 11:13:52 localhost kernel: pnp: PnP ACPI init
Feb 25 11:13:52 localhost kernel: pnp 00:03: [dma 2]
Feb 25 11:13:52 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 25 11:13:52 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 25 11:13:52 localhost kernel: NET: Registered PF_INET protocol family
Feb 25 11:13:52 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 25 11:13:52 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 25 11:13:52 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 25 11:13:52 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 25 11:13:52 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 25 11:13:52 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 25 11:13:52 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 25 11:13:52 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 25 11:13:52 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 25 11:13:52 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 25 11:13:52 localhost kernel: NET: Registered PF_XDP protocol family
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 25 11:13:52 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 25 11:13:52 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 25 11:13:52 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 25 11:13:52 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 27141 usecs
Feb 25 11:13:52 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 25 11:13:52 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 25 11:13:52 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 25 11:13:52 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 25 11:13:52 localhost kernel: ACPI: bus type thunderbolt registered
Feb 25 11:13:52 localhost kernel: Initialise system trusted keyrings
Feb 25 11:13:52 localhost kernel: Key type blacklist registered
Feb 25 11:13:52 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 25 11:13:52 localhost kernel: zbud: loaded
Feb 25 11:13:52 localhost kernel: integrity: Platform Keyring initialized
Feb 25 11:13:52 localhost kernel: integrity: Machine keyring initialized
Feb 25 11:13:52 localhost kernel: Freeing initrd memory: 234060K
Feb 25 11:13:52 localhost kernel: NET: Registered PF_ALG protocol family
Feb 25 11:13:52 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 25 11:13:52 localhost kernel: Key type asymmetric registered
Feb 25 11:13:52 localhost kernel: Asymmetric key parser 'x509' registered
Feb 25 11:13:52 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 25 11:13:52 localhost kernel: io scheduler mq-deadline registered
Feb 25 11:13:52 localhost kernel: io scheduler kyber registered
Feb 25 11:13:52 localhost kernel: io scheduler bfq registered
Feb 25 11:13:52 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 25 11:13:52 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 25 11:13:52 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 25 11:13:52 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 25 11:13:52 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 25 11:13:52 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 25 11:13:52 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 25 11:13:52 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 25 11:13:52 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 25 11:13:52 localhost kernel: Non-volatile memory driver v1.3
Feb 25 11:13:52 localhost kernel: rdac: device handler registered
Feb 25 11:13:52 localhost kernel: hp_sw: device handler registered
Feb 25 11:13:52 localhost kernel: emc: device handler registered
Feb 25 11:13:52 localhost kernel: alua: device handler registered
Feb 25 11:13:52 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 25 11:13:52 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 25 11:13:52 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 25 11:13:52 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 25 11:13:52 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 25 11:13:52 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 25 11:13:52 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 25 11:13:52 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-686.el9.x86_64 uhci_hcd
Feb 25 11:13:52 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 25 11:13:52 localhost kernel: hub 1-0:1.0: USB hub found
Feb 25 11:13:52 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 25 11:13:52 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 25 11:13:52 localhost kernel: usbserial: USB Serial support registered for generic
Feb 25 11:13:52 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 25 11:13:52 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 25 11:13:52 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 25 11:13:52 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 25 11:13:52 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 25 11:13:52 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 25 11:13:52 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 25 11:13:52 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-25T11:13:51 UTC (1772018031)
Feb 25 11:13:52 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 25 11:13:52 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 25 11:13:52 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 25 11:13:52 localhost kernel: usbcore: registered new interface driver usbhid
Feb 25 11:13:52 localhost kernel: usbhid: USB HID core driver
Feb 25 11:13:52 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 25 11:13:52 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 25 11:13:52 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 25 11:13:52 localhost kernel: Initializing XFRM netlink socket
Feb 25 11:13:52 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 25 11:13:52 localhost kernel: Segment Routing with IPv6
Feb 25 11:13:52 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 25 11:13:52 localhost kernel: mpls_gso: MPLS GSO support
Feb 25 11:13:52 localhost kernel: IPI shorthand broadcast: enabled
Feb 25 11:13:52 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 25 11:13:52 localhost kernel: AES CTR mode by8 optimization enabled
Feb 25 11:13:52 localhost kernel: sched_clock: Marking stable (992002360, 139645520)->(1251863250, -120215370)
Feb 25 11:13:52 localhost kernel: registered taskstats version 1
Feb 25 11:13:52 localhost kernel: Loading compiled-in X.509 certificates
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 25 11:13:52 localhost kernel: Demotion targets for Node 0: null
Feb 25 11:13:52 localhost kernel: page_owner is disabled
Feb 25 11:13:52 localhost kernel: Key type .fscrypt registered
Feb 25 11:13:52 localhost kernel: Key type fscrypt-provisioning registered
Feb 25 11:13:52 localhost kernel: Key type big_key registered
Feb 25 11:13:52 localhost kernel: Key type encrypted registered
Feb 25 11:13:52 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 25 11:13:52 localhost kernel: Loading compiled-in module X.509 certificates
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 25 11:13:52 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 25 11:13:52 localhost kernel: ima: No architecture policies found
Feb 25 11:13:52 localhost kernel: evm: Initialising EVM extended attributes:
Feb 25 11:13:52 localhost kernel: evm: security.selinux
Feb 25 11:13:52 localhost kernel: evm: security.SMACK64 (disabled)
Feb 25 11:13:52 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 25 11:13:52 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 25 11:13:52 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 25 11:13:52 localhost kernel: evm: security.apparmor (disabled)
Feb 25 11:13:52 localhost kernel: evm: security.ima
Feb 25 11:13:52 localhost kernel: evm: security.capability
Feb 25 11:13:52 localhost kernel: evm: HMAC attrs: 0x1
Feb 25 11:13:52 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 25 11:13:52 localhost kernel: Running certificate verification RSA selftest
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 25 11:13:52 localhost kernel: Running certificate verification ECDSA selftest
Feb 25 11:13:52 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 25 11:13:52 localhost kernel: clk: Disabling unused clocks
Feb 25 11:13:52 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 25 11:13:52 localhost kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 25 11:13:52 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 25 11:13:52 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 380K
Feb 25 11:13:52 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 25 11:13:52 localhost kernel: Run /init as init process
Feb 25 11:13:52 localhost kernel:   with arguments:
Feb 25 11:13:52 localhost kernel:     /init
Feb 25 11:13:52 localhost kernel:   with environment:
Feb 25 11:13:52 localhost kernel:     HOME=/
Feb 25 11:13:52 localhost kernel:     TERM=linux
Feb 25 11:13:52 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64
Feb 25 11:13:52 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 25 11:13:52 localhost systemd[1]: Detected virtualization kvm.
Feb 25 11:13:52 localhost systemd[1]: Detected architecture x86-64.
Feb 25 11:13:52 localhost systemd[1]: Running in initrd.
Feb 25 11:13:52 localhost systemd[1]: No hostname configured, using default hostname.
Feb 25 11:13:52 localhost systemd[1]: Hostname set to <localhost>.
Feb 25 11:13:52 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 25 11:13:52 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 25 11:13:52 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 25 11:13:52 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 25 11:13:52 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 25 11:13:52 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 25 11:13:52 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 25 11:13:52 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 25 11:13:52 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 25 11:13:52 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 25 11:13:52 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 25 11:13:52 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 25 11:13:52 localhost systemd[1]: Reached target Local File Systems.
Feb 25 11:13:52 localhost systemd[1]: Reached target Path Units.
Feb 25 11:13:52 localhost systemd[1]: Reached target Slice Units.
Feb 25 11:13:52 localhost systemd[1]: Reached target Swaps.
Feb 25 11:13:52 localhost systemd[1]: Reached target Timer Units.
Feb 25 11:13:52 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 25 11:13:52 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 25 11:13:52 localhost systemd[1]: Listening on Journal Socket.
Feb 25 11:13:52 localhost systemd[1]: Listening on udev Control Socket.
Feb 25 11:13:52 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 25 11:13:52 localhost systemd[1]: Reached target Socket Units.
Feb 25 11:13:52 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 25 11:13:52 localhost systemd[1]: Starting Journal Service...
Feb 25 11:13:52 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 25 11:13:52 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 25 11:13:52 localhost systemd[1]: Starting Create System Users...
Feb 25 11:13:52 localhost systemd[1]: Starting Setup Virtual Console...
Feb 25 11:13:52 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 25 11:13:52 localhost systemd[1]: Finished Create System Users.
Feb 25 11:13:52 localhost systemd-journald[309]: Journal started
Feb 25 11:13:52 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/aaf3b8852d7c41fda043b293ce60ba6a) is 8.0M, max 153.6M, 145.6M free.
Feb 25 11:13:52 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Feb 25 11:13:52 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Feb 25 11:13:52 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 25 11:13:52 localhost systemd[1]: Started Journal Service.
Feb 25 11:13:52 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 25 11:13:52 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 25 11:13:52 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 25 11:13:52 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 25 11:13:52 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 25 11:13:52 localhost systemd[1]: Finished Setup Virtual Console.
Feb 25 11:13:52 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 25 11:13:52 localhost systemd[1]: Starting dracut cmdline hook...
Feb 25 11:13:52 localhost dracut-cmdline[330]: dracut-9 dracut-057-110.git20260130.el9
Feb 25 11:13:52 localhost dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 11:13:52 localhost systemd[1]: Finished dracut cmdline hook.
Feb 25 11:13:52 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 25 11:13:52 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 25 11:13:52 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 25 11:13:52 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 25 11:13:52 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 25 11:13:52 localhost kernel: RPC: Registered udp transport module.
Feb 25 11:13:52 localhost kernel: RPC: Registered tcp transport module.
Feb 25 11:13:52 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 25 11:13:52 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 25 11:13:52 localhost rpc.statd[446]: Version 2.5.4 starting
Feb 25 11:13:52 localhost rpc.statd[446]: Initializing NSM state
Feb 25 11:13:52 localhost rpc.idmapd[451]: Setting log level to 0
Feb 25 11:13:52 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 25 11:13:52 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 25 11:13:52 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Feb 25 11:13:52 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 25 11:13:52 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 25 11:13:52 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 25 11:13:52 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 25 11:13:52 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 25 11:13:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 25 11:13:53 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 25 11:13:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 25 11:13:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 25 11:13:53 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 25 11:13:53 localhost systemd[1]: Reached target Network.
Feb 25 11:13:53 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 25 11:13:53 localhost systemd[1]: Starting dracut initqueue hook...
Feb 25 11:13:53 localhost kernel: libata version 3.00 loaded.
Feb 25 11:13:53 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 25 11:13:53 localhost kernel: scsi host0: ata_piix
Feb 25 11:13:53 localhost kernel: scsi host1: ata_piix
Feb 25 11:13:53 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 25 11:13:53 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 25 11:13:53 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 25 11:13:53 localhost systemd-udevd[506]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 11:13:53 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 25 11:13:53 localhost kernel:  vda: vda1
Feb 25 11:13:53 localhost kernel: ACPI: bus type drm_connector registered
Feb 25 11:13:53 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 25 11:13:53 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 25 11:13:53 localhost systemd[1]: Reached target System Initialization.
Feb 25 11:13:53 localhost systemd[1]: Reached target Basic System.
Feb 25 11:13:53 localhost kernel: ata1: found unknown device (class 0)
Feb 25 11:13:53 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 25 11:13:53 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 25 11:13:53 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 25 11:13:53 localhost systemd[1]: Found device /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 25 11:13:53 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 25 11:13:53 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 25 11:13:53 localhost systemd[1]: Reached target Initrd Root Device.
Feb 25 11:13:53 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 25 11:13:53 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 25 11:13:53 localhost kernel: Console: switching to colour dummy device 80x25
Feb 25 11:13:53 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 25 11:13:53 localhost kernel: [drm] features: -context_init
Feb 25 11:13:53 localhost kernel: [drm] number of scanouts: 1
Feb 25 11:13:53 localhost kernel: [drm] number of cap sets: 0
Feb 25 11:13:53 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 25 11:13:53 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 25 11:13:53 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 25 11:13:53 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 25 11:13:53 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 25 11:13:53 localhost systemd[1]: Finished dracut initqueue hook.
Feb 25 11:13:53 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 25 11:13:53 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 25 11:13:53 localhost systemd[1]: Reached target Remote File Systems.
Feb 25 11:13:53 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 25 11:13:53 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 25 11:13:53 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b...
Feb 25 11:13:53 localhost systemd-fsck[571]: /usr/sbin/fsck.xfs: XFS file system.
Feb 25 11:13:53 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 25 11:13:53 localhost systemd[1]: Mounting /sysroot...
Feb 25 11:13:53 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 25 11:13:53 localhost kernel: XFS (vda1): Mounting V5 Filesystem 37391a25-080d-4723-8b0c-cb88a559875b
Feb 25 11:13:53 localhost kernel: XFS (vda1): Ending clean mount
Feb 25 11:13:53 localhost systemd[1]: Mounted /sysroot.
Feb 25 11:13:53 localhost systemd[1]: Reached target Initrd Root File System.
Feb 25 11:13:54 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 25 11:13:54 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 25 11:13:54 localhost systemd[1]: Reached target Initrd File Systems.
Feb 25 11:13:54 localhost systemd[1]: Reached target Initrd Default Target.
Feb 25 11:13:54 localhost systemd[1]: Starting dracut mount hook...
Feb 25 11:13:54 localhost systemd[1]: Finished dracut mount hook.
Feb 25 11:13:54 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 25 11:13:54 localhost rpc.idmapd[451]: exiting on signal 15
Feb 25 11:13:54 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 25 11:13:54 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 25 11:13:54 localhost systemd[1]: Stopped target Network.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Timer Units.
Feb 25 11:13:54 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 25 11:13:54 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Basic System.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Path Units.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Remote File Systems.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Slice Units.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Socket Units.
Feb 25 11:13:54 localhost systemd[1]: Stopped target System Initialization.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Local File Systems.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Swaps.
Feb 25 11:13:54 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut mount hook.
Feb 25 11:13:54 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 25 11:13:54 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 25 11:13:54 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 25 11:13:54 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 25 11:13:54 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 25 11:13:54 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 25 11:13:54 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 25 11:13:54 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 25 11:13:54 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 25 11:13:54 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 25 11:13:54 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 25 11:13:54 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 25 11:13:54 localhost systemd[1]: systemd-udevd.service: Consumed 1.032s CPU time.
Feb 25 11:13:54 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Closed udev Control Socket.
Feb 25 11:13:54 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Closed udev Kernel Socket.
Feb 25 11:13:54 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 25 11:13:54 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 25 11:13:54 localhost systemd[1]: Starting Cleanup udev Database...
Feb 25 11:13:54 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 25 11:13:54 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 25 11:13:54 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Stopped Create System Users.
Feb 25 11:13:54 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 25 11:13:54 localhost systemd[1]: Finished Cleanup udev Database.
Feb 25 11:13:54 localhost systemd[1]: Reached target Switch Root.
Feb 25 11:13:54 localhost systemd[1]: Starting Switch Root...
Feb 25 11:13:54 localhost systemd[1]: Switching root.
Feb 25 11:13:54 localhost systemd-journald[309]: Journal stopped
Feb 25 12:06:44 compute-0 sudo[243747]: pam_unix(sudo:session): session closed for user root
Feb 25 12:06:44 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:06:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1921492880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:45 compute-0 sudo[243924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozkegaervoegfjivgvvdgrngxwlqxgwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1772021204.8300242-1452-260763261181864/AnsiballZ_systemd.py'
Feb 25 12:06:45 compute-0 sudo[243924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 12:06:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:45 compute-0 nova_compute[242312]: 2026-02-25 12:06:45.228 242318 INFO nova.scheduler.client.report [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] [req-574aae66-9f9e-4b08-b327-f36776734c41] Created resource provider record via placement API for resource provider with UUID cb4dae98-2ac3-4218-9445-2320139e12ad and name compute-0.ctlplane.example.com.
Feb 25 12:06:45 compute-0 python3.9[243927]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 12:06:45 compute-0 nova_compute[242312]: 2026-02-25 12:06:45.608 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:06:45 compute-0 ceph-mon[76335]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:06:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569430836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.146 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.154 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.155 242318 INFO nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] kernel doesn't support AMD SEV
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.157 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.157 242318 DEBUG nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.scheduler.client.report [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updated inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating resource provider cb4dae98-2ac3-4218-9445-2320139e12ad generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.283 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating resource provider cb4dae98-2ac3-4218-9445-2320139e12ad generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG nova.service [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.374 242318 DEBUG nova.service [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.374 242318 DEBUG nova.servicegroup.drivers.db [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 25 12:06:46 compute-0 systemd[1]: Stopping nova_compute container...
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.640 242318 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:06:46 compute-0 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:06:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3569430836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:46 compute-0 systemd[1]: libpod-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739.scope: Deactivated successfully.
Feb 25 12:06:46 compute-0 virtqemud[243235]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 25 12:06:46 compute-0 virtqemud[243235]: hostname: compute-0
Feb 25 12:06:46 compute-0 virtqemud[243235]: End of file while reading data: Input/output error
Feb 25 12:06:46 compute-0 systemd[1]: libpod-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739.scope: Consumed 4.006s CPU time.
Feb 25 12:06:46 compute-0 podman[243953]: 2026-02-25 12:06:46.992597466 +0000 UTC m=+0.524419020 container died b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 25 12:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c-merged.mount: Deactivated successfully.
Feb 25 12:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739-userdata-shm.mount: Deactivated successfully.
Feb 25 12:06:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:47 compute-0 ceph-mon[76335]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:48 compute-0 podman[243953]: 2026-02-25 12:06:48.713274588 +0000 UTC m=+2.245096162 container cleanup b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:06:48 compute-0 podman[243953]: nova_compute
Feb 25 12:06:48 compute-0 podman[243985]: nova_compute
Feb 25 12:06:48 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 25 12:06:48 compute-0 systemd[1]: Stopped nova_compute container.
Feb 25 12:06:48 compute-0 systemd[1]: Starting nova_compute container...
Feb 25 12:06:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:06:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:48 compute-0 podman[243998]: 2026-02-25 12:06:48.934264349 +0000 UTC m=+0.117406307 container init b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2)
Feb 25 12:06:48 compute-0 podman[243998]: 2026-02-25 12:06:48.947216846 +0000 UTC m=+0.130358764 container start b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:06:48 compute-0 podman[243998]: nova_compute
Feb 25 12:06:48 compute-0 systemd[1]: Started nova_compute container.
Feb 25 12:06:48 compute-0 nova_compute[244014]: + sudo -E kolla_set_configs
Feb 25 12:06:48 compute-0 sudo[243924]: pam_unix(sudo:session): session closed for user root
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Validating config file
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying service configuration files
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /etc/ceph
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Creating directory /etc/ceph
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Writing out command to execute
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 12:06:49 compute-0 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 25 12:06:49 compute-0 nova_compute[244014]: ++ cat /run_command
Feb 25 12:06:49 compute-0 nova_compute[244014]: + CMD=nova-compute
Feb 25 12:06:49 compute-0 nova_compute[244014]: + ARGS=
Feb 25 12:06:49 compute-0 nova_compute[244014]: + sudo kolla_copy_cacerts
Feb 25 12:06:49 compute-0 nova_compute[244014]: + [[ ! -n '' ]]
Feb 25 12:06:49 compute-0 nova_compute[244014]: + . kolla_extend_start
Feb 25 12:06:49 compute-0 nova_compute[244014]: Running command: 'nova-compute'
Feb 25 12:06:49 compute-0 nova_compute[244014]: + echo 'Running command: '\''nova-compute'\'''
Feb 25 12:06:49 compute-0 nova_compute[244014]: + umask 0022
Feb 25 12:06:49 compute-0 nova_compute[244014]: + exec nova-compute
Feb 25 12:06:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:49 compute-0 sudo[244175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfgrdjegpvncgrxrtreljoinqolpqbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1772021209.174148-1461-83498701014774/AnsiballZ_podman_container.py'
Feb 25 12:06:49 compute-0 sudo[244175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 12:06:49 compute-0 python3.9[244178]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 25 12:06:49 compute-0 systemd[1]: Started libpod-conmon-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope.
Feb 25 12:06:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 25 12:06:49 compute-0 podman[244204]: 2026-02-25 12:06:49.966114555 +0000 UTC m=+0.135954613 container init 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:06:49 compute-0 podman[244204]: 2026-02-25 12:06:49.974176084 +0000 UTC m=+0.144016112 container start 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Applying nova statedir ownership
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 25 12:06:50 compute-0 nova_compute_init[244225]: INFO:nova_statedir:Nova statedir ownership complete
Feb 25 12:06:50 compute-0 systemd[1]: libpod-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope: Deactivated successfully.
Feb 25 12:06:50 compute-0 podman[244226]: 2026-02-25 12:06:50.075528375 +0000 UTC m=+0.027415818 container died 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Feb 25 12:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5-userdata-shm.mount: Deactivated successfully.
Feb 25 12:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d-merged.mount: Deactivated successfully.
Feb 25 12:06:50 compute-0 podman[244226]: 2026-02-25 12:06:50.110062534 +0000 UTC m=+0.061949897 container cleanup 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:06:50 compute-0 systemd[1]: libpod-conmon-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope: Deactivated successfully.
Feb 25 12:06:50 compute-0 ceph-mon[76335]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.844 244018 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.866 244018 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:06:50 compute-0 nova_compute[244014]: 2026-02-25 12:06:50.867 244018 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 25 12:06:50 compute-0 python3.9[244178]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 25 12:06:51 compute-0 sudo[244175]: pam_unix(sudo:session): session closed for user root
Feb 25 12:06:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.273 244018 INFO nova.virt.driver [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.393 244018 INFO nova.compute.provider_config [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 WARNING oslo_config.cfg [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 25 12:06:51 compute-0 nova_compute[244014]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 25 12:06:51 compute-0 nova_compute[244014]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 25 12:06:51 compute-0 nova_compute[244014]: and ``live_migration_inbound_addr`` respectively.
Feb 25 12:06:51 compute-0 nova_compute[244014]: ).  Its value may be silently ignored in the future.
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_secret_uuid        = 8ac33163-6221-5d58-9a39-8b6933fe7762 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
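The run of lines above is oslo.config's startup dump: once nova-compute has parsed its configuration, ConfigOpts.log_opt_values() walks every registered option and emits one DEBUG line per "<group>.<option> = <value>" pair, which is why every entry cites log_opt_values in oslo_config/cfg.py:2609. A minimal sketch of that mechanism, using a made-up two-option [filter_scheduler] group rather than Nova's real option set:

    # Sketch only: reproduces the dump format seen above, not Nova's
    # startup code. Option names here are illustrative.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    opts = [
        cfg.IntOpt('host_subset_size', default=1),
        cfg.ListOpt('enabled_filters',
                    default=['ComputeFilter', 'ImagePropertiesFilter']),
    ]
    CONF = cfg.CONF
    CONF.register_opts(opts, group='filter_scheduler')

    CONF([], project='demo')                 # parse CLI args/config files
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option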
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
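The [service_user] values above (auth_type = password, send_service_user_token = True) are the credentials a service uses to attach a service token alongside the user's token on outgoing requests. A hedged sketch of loading such a group with keystoneauth1; the wiring is illustrative, not Nova's actual code path:

    # Hedged sketch: load auth/session settings from a [service_user]
    # group with keystoneauth1. With the dumped values (auth_type =
    # password) this yields a password auth plugin; with no auth_type
    # configured, load_auth_from_conf_options() returns None.
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'service_user')
    ks_loading.register_session_conf_options(CONF, 'service_user')
    CONF([], project='demo')

    auth = ks_loading.load_auth_from_conf_options(CONF, 'service_user')
    sess = ks_loading.load_session_from_conf_options(
        CONF, 'service_user', auth=auth)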
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
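From the [vnc] group above, this compute node advertises 192.168.122.100 as the address the console proxy should connect back to, while end users are handed URLs under vnc.novncproxy_base_url. A purely illustrative sketch of reading those values back out of oslo.config (the dict layout is invented, not Nova's internal connect-info structure):

    # Illustrative only: register and read the [vnc] options dumped above.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.BoolOpt('enabled', default=True),
         cfg.StrOpt('server_proxyclient_address', default='127.0.0.1'),
         cfg.StrOpt('novncproxy_base_url',
                    default='http://127.0.0.1:6080/vnc_lite.html')],
        group='vnc')
    CONF([], project='demo')

    if CONF.vnc.enabled:
        console = {
            'host': CONF.vnc.server_proxyclient_address,  # proxy target
            'base_url': CONF.vnc.novncproxy_base_url,     # client-facing URL
        }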
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
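The [oslo_policy] block above shows the newer RBAC posture (enforce_new_defaults = True, enforce_scope = True) with rules loaded from policy.yaml plus any overrides in policy.d. A hedged sketch of the oslo.policy enforcer these options configure; the rule name and credentials are invented for illustration:

    # Hedged sketch of an oslo.policy enforcer fed by the [oslo_policy]
    # options above. 'demo:read' and the creds dict are made up.
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([], project='demo')

    enforcer = policy.Enforcer(CONF)  # picks up oslo_policy.* settings
    enforcer.register_default(
        policy.RuleDefault('demo:read', 'role:reader'))

    creds = {'roles': ['reader'], 'project_id': 'p1'}
    allowed = enforcer.enforce('demo:read', {}, creds)  # -> True/False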
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
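The [oslo_messaging_rabbit] values above (amqp_durable_queues = True, rabbit_quorum_queue = True, heartbeat_timeout_threshold = 60) tune the RabbitMQ driver behind oslo.messaging RPC. A minimal sketch of building an RPC transport against such a broker; the URL is a placeholder, since the real transport_url is masked as **** in this log:

    # Minimal sketch: construct an RPC transport that honours the
    # oslo_messaging_rabbit.* options. The broker URL is a placeholder.
    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    CONF([], project='demo')

    transport = messaging.get_rpc_transport(
        CONF, url='rabbit://user:pass@rabbit-host:5672/')
    target = messaging.Target(topic='demo-topic', server='compute-0')
    # A real service would wrap these in an RPC client or server.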
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)
Feb 25 12:06:51 compute-0 sshd-session[217067]: Connection closed by 192.168.122.30 port 42810
Feb 25 12:06:51 compute-0 sshd-session[217064]: pam_unix(sshd:session): session closed for user zuul
Feb 25 12:06:51 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Feb 25 12:06:51 compute-0 systemd[1]: session-50.scope: Consumed 1min 58.675s CPU time.
Feb 25 12:06:51 compute-0 systemd-logind[811]: Session 50 logged out. Waiting for processes to exit.
Feb 25 12:06:51 compute-0 systemd-logind[811]: Removed session 50.
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.581 244018 INFO nova.virt.node [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Determined node identity cb4dae98-2ac3-4218-9445-2320139e12ad from /var/lib/nova/compute_id
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.583 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.593 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f675e3bb6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.595 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f675e3bb6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.595 244018 INFO nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Connection event '1' reason 'None'
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.603 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host capabilities <capabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]: 
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <host>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <uuid>aaf3b885-2d7c-41fd-a043-b293ce60ba6a</uuid>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <arch>x86_64</arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model>EPYC-Rome-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <vendor>AMD</vendor>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <microcode version='16777317'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <signature family='23' model='49' stepping='0'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='x2apic'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='tsc-deadline'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='osxsave'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='hypervisor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='tsc_adjust'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='spec-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='stibp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='arch-capabilities'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='cmp_legacy'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='topoext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='virt-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='lbrv'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='tsc-scale'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='vmcb-clean'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='pause-filter'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='pfthreshold'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='svme-addr-chk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='rdctl-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='skip-l1dfl-vmentry'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='mds-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature name='pschange-mc-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <pages unit='KiB' size='4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <pages unit='KiB' size='2048'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <pages unit='KiB' size='1048576'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <power_management>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <suspend_mem/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </power_management>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <iommu support='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <migration_features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <live/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <uri_transports>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <uri_transport>tcp</uri_transport>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <uri_transport>rdma</uri_transport>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </uri_transports>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </migration_features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <topology>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <cells num='1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <cell id='0'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <memory unit='KiB'>7864276</memory>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <pages unit='KiB' size='4'>1966069</pages>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <pages unit='KiB' size='2048'>0</pages>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <distances>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <sibling id='0' value='10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           </distances>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           <cpus num='8'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:           </cpus>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         </cell>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </cells>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </topology>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <cache>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </cache>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <secmodel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model>selinux</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <doi>0</doi>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </secmodel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <secmodel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model>dac</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <doi>0</doi>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </secmodel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </host>
Feb 25 12:06:51 compute-0 nova_compute[244014]: 
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <guest>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <os_type>hvm</os_type>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <arch name='i686'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <wordsize>32</wordsize>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <domain type='qemu'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <domain type='kvm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <pae/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <nonpae/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <acpi default='on' toggle='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <apic default='on' toggle='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <cpuselection/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <deviceboot/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <disksnapshot default='on' toggle='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <externalSnapshot/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </guest>
Feb 25 12:06:51 compute-0 nova_compute[244014]: 
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <guest>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <os_type>hvm</os_type>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <arch name='x86_64'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <wordsize>64</wordsize>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <domain type='qemu'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <domain type='kvm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <acpi default='on' toggle='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <apic default='on' toggle='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <cpuselection/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <deviceboot/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <disksnapshot default='on' toggle='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <externalSnapshot/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </guest>
Feb 25 12:06:51 compute-0 nova_compute[244014]: 
Feb 25 12:06:51 compute-0 nova_compute[244014]: </capabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]: 
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.606 244018 DEBUG nova.virt.libvirt.volume.mount [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.610 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.617 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 25 12:06:51 compute-0 nova_compute[244014]: <domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <path>/usr/libexec/qemu-kvm</path>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <domain>kvm</domain>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <arch>i686</arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <vcpu max='4096'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <iothreads supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <os supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='firmware'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <loader supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>rom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pflash</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='readonly'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>yes</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='secure'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </loader>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-passthrough' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='hostPassthroughMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='maximum' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='maximumMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-model' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <vendor>AMD</vendor>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='x2apic'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-deadline'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='hypervisor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc_adjust'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='spec-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='stibp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='cmp_legacy'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='overflow-recov'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='succor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='amd-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='virt-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lbrv'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-scale'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='vmcb-clean'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='flushbyasid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pause-filter'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pfthreshold'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='svme-addr-chk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='disable' name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='custom' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Dhyana-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v6'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v7'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <memoryBacking supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='sourceType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>anonymous</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>memfd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </memoryBacking>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <disk supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='diskDevice'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>disk</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cdrom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>floppy</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>lun</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>fdc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>sata</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <graphics supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vnc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egl-headless</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <video supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='modelType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vga</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cirrus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>none</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>bochs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ramfb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hostdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='mode'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>subsystem</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='startupPolicy'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>mandatory</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>requisite</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>optional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='subsysType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pci</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='capsType'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='pciBackend'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hostdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <rng supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>random</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <filesystem supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='driverType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>path</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>handle</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtiofs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </filesystem>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tpm supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-tis</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-crb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emulator</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>external</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendVersion'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>2.0</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </tpm>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <redirdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </redirdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <channel supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </channel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <crypto supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </crypto>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <interface supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>passt</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <panic supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>isa</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>hyperv</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </panic>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <console supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>null</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dev</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pipe</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stdio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>udp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tcp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu-vdagent</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </console>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <gic supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <vmcoreinfo supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <genid supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backingStoreInput supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backup supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <async-teardown supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <s390-pv supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <ps2 supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tdx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sev supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sgx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hyperv supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='features'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>relaxed</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vapic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>spinlocks</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vpindex</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>runtime</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>synic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stimer</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reset</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vendor_id</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>frequencies</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reenlightenment</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tlbflush</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ipi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>avic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emsr_bitmap</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>xmm_input</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <spinlocks>4095</spinlocks>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <stimer_direct>on</stimer_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_direct>on</tlbflush_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_extended>on</tlbflush_extended>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hyperv>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <launchSecurity supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]: </domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.625 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 25 12:06:51 compute-0 nova_compute[244014]: <domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <path>/usr/libexec/qemu-kvm</path>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <domain>kvm</domain>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <arch>i686</arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <vcpu max='240'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <iothreads supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <os supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='firmware'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <loader supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>rom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pflash</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='readonly'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>yes</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='secure'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </loader>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-passthrough' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='hostPassthroughMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='maximum' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='maximumMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-model' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <vendor>AMD</vendor>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='x2apic'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-deadline'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='hypervisor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc_adjust'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='spec-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='stibp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='cmp_legacy'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='overflow-recov'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='succor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='amd-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='virt-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lbrv'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-scale'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='vmcb-clean'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='flushbyasid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pause-filter'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pfthreshold'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='svme-addr-chk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='disable' name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='custom' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Dhyana-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v6'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v7'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <memoryBacking supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='sourceType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>anonymous</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>memfd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </memoryBacking>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <disk supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='diskDevice'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>disk</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cdrom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>floppy</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>lun</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ide</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>fdc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>sata</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <graphics supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vnc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egl-headless</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <video supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='modelType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vga</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cirrus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>none</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>bochs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ramfb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hostdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='mode'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>subsystem</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='startupPolicy'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>mandatory</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>requisite</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>optional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='subsysType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pci</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='capsType'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='pciBackend'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hostdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <rng supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>random</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <filesystem supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='driverType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>path</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>handle</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtiofs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </filesystem>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tpm supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-tis</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-crb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emulator</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>external</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendVersion'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>2.0</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </tpm>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <redirdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </redirdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <channel supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </channel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <crypto supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </crypto>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <interface supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>passt</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <panic supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>isa</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>hyperv</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </panic>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <console supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>null</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dev</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pipe</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stdio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>udp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tcp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu-vdagent</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </console>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <gic supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <vmcoreinfo supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <genid supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backingStoreInput supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backup supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <async-teardown supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <s390-pv supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <ps2 supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tdx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sev supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sgx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hyperv supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='features'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>relaxed</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vapic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>spinlocks</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vpindex</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>runtime</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>synic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stimer</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reset</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vendor_id</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>frequencies</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reenlightenment</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tlbflush</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ipi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>avic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emsr_bitmap</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>xmm_input</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <spinlocks>4095</spinlocks>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <stimer_direct>on</stimer_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_direct>on</tlbflush_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_extended>on</tlbflush_extended>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hyperv>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <launchSecurity supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]: </domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.663 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.668 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 25 12:06:51 compute-0 nova_compute[244014]: <domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <path>/usr/libexec/qemu-kvm</path>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <domain>kvm</domain>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <arch>x86_64</arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <vcpu max='4096'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <iothreads supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <os supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='firmware'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>efi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <loader supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>rom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pflash</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='readonly'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>yes</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='secure'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>yes</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </loader>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-passthrough' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='hostPassthroughMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='maximum' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='maximumMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-model' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <vendor>AMD</vendor>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='x2apic'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-deadline'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='hypervisor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc_adjust'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='spec-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='stibp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='cmp_legacy'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='overflow-recov'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='succor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='amd-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='virt-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lbrv'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-scale'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='vmcb-clean'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='flushbyasid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pause-filter'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pfthreshold'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='svme-addr-chk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='disable' name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='custom' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Dhyana-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v6'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v7'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <memoryBacking supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='sourceType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>anonymous</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>memfd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </memoryBacking>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <disk supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='diskDevice'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>disk</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cdrom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>floppy</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>lun</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>fdc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>sata</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <graphics supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vnc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egl-headless</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <video supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='modelType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vga</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cirrus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>none</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>bochs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ramfb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hostdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='mode'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>subsystem</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='startupPolicy'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>mandatory</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>requisite</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>optional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='subsysType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pci</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='capsType'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='pciBackend'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hostdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <rng supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>random</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <filesystem supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='driverType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>path</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>handle</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtiofs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </filesystem>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tpm supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-tis</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-crb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emulator</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>external</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendVersion'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>2.0</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </tpm>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <redirdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </redirdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <channel supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </channel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <crypto supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </crypto>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <interface supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>passt</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <panic supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>isa</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>hyperv</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </panic>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <console supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>null</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dev</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pipe</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stdio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>udp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tcp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu-vdagent</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </console>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <gic supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <vmcoreinfo supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <genid supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backingStoreInput supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backup supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <async-teardown supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <s390-pv supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <ps2 supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tdx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sev supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sgx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hyperv supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='features'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>relaxed</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vapic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>spinlocks</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vpindex</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>runtime</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>synic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stimer</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reset</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vendor_id</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>frequencies</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reenlightenment</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tlbflush</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ipi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>avic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emsr_bitmap</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>xmm_input</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <spinlocks>4095</spinlocks>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <stimer_direct>on</stimer_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_direct>on</tlbflush_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_extended>on</tlbflush_extended>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hyperv>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <launchSecurity supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]: </domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
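For reference, a record like the one above can be reproduced directly against libvirt with "virsh domcapabilities" (optionally passing --virttype, --arch and --machine to match the record), and the <blockers> elements already explain each usable='no' verdict: they name the CPU features the hypervisor cannot supply for that named model. What follows is a minimal sketch, not part of the captured log, for summarising those verdicts from a saved copy of the XML; the file name domcaps.xml is illustrative, not a path taken from this host.

    import xml.etree.ElementTree as ET

    # Parse a saved <domainCapabilities> document (file name is illustrative).
    root = ET.parse("domcaps.xml").getroot()

    custom = root.find("./cpu/mode[@name='custom']")
    if custom is not None:
        # libvirt emits one <blockers model='...'> element per unusable model,
        # listing the feature names that block it.
        blockers = {
            b.get("model"): [f.get("name") for f in b.findall("feature")]
            for b in custom.findall("blockers")
        }
        for model in custom.findall("model"):
            if model.get("usable") == "no":
                missing = ", ".join(blockers.get(model.text, []))
                print(f"{model.text}: blocked by {missing or '(not reported)'}")

Run against these records, such a summary would show the Intel-defined models from Broadwell through Skylake, Cascadelake and Snowridge blocked by features such as erms, pcid/invpcid, hle/rtm and the avx512* group, while the Westmere variants and the generic kvm64/qemu64 models remain usable on this EPYC-Rome host (whose host-model block in the second record below disables xsaves, matching the xsaves blocker on the -v5 model variants).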
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.726 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 25 12:06:51 compute-0 nova_compute[244014]: <domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <path>/usr/libexec/qemu-kvm</path>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <domain>kvm</domain>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <arch>x86_64</arch>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <vcpu max='240'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <iothreads supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <os supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='firmware'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <loader supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>rom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pflash</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='readonly'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>yes</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='secure'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>no</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </loader>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-passthrough' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='hostPassthroughMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='maximum' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='maximumMigratable'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>on</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>off</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='host-model' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <vendor>AMD</vendor>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='x2apic'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-deadline'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='hypervisor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc_adjust'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='spec-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='stibp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='cmp_legacy'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='overflow-recov'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='succor'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='amd-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='virt-ssbd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lbrv'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='tsc-scale'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='vmcb-clean'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='flushbyasid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pause-filter'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='pfthreshold'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='svme-addr-chk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <feature policy='disable' name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <mode name='custom' supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Broadwell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cascadelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='ClearwaterForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ddpd-u'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sha512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm3'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sm4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Cooperlake-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Denverton-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Dhyana-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Genoa-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Milan-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Rome-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-Turin-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amd-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='auto-ibrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vp2intersect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fs-gs-base-ns'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibpb-brtype'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='no-nested-data-bp'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='null-sel-clr-base'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='perfmon-v2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbpb'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='srso-user-kernel-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='stibp-always-on'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='EPYC-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='GraniteRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-128'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-256'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx10-512'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='prefetchiti'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Haswell-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-noTSX'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v6'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Icelake-Server-v7'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='IvyBridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='KnightsMill-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4fmaps'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-4vnniw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512er'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512pf'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G4-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Opteron_G5-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fma4'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tbm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xop'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SapphireRapids-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='amx-tile'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-bf16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-fp16'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512-vpopcntdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bitalg'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vbmi2'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrc'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fzrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='la57'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='taa-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='tsx-ldtrk'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='SierraForest-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ifma'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-ne-convert'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx-vnni-int8'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bhi-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='bus-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cmpccxadd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fbsdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='fsrs'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ibrs-all'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='intel-psfd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ipred-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='lam'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mcdt-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pbrsb-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='psdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rrsba-ctrl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='sbdr-ssdp-no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='serialize'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vaes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='vpclmulqdq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Client-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='hle'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='rtm'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Skylake-Server-v5'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512bw'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512cd'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512dq'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512f'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='avx512vl'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='invpcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pcid'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='pku'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='mpx'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v2'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v3'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='core-capability'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='split-lock-detect'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='Snowridge-v4'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='cldemote'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='erms'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='gfni'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdir64b'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='movdiri'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='xsaves'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='athlon-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='core2duo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='coreduo-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='n270-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='ss'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <blockers model='phenom-v1'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnow'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <feature name='3dnowext'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </blockers>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </mode>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <memoryBacking supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <enum name='sourceType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>anonymous</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <value>memfd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </memoryBacking>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <disk supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='diskDevice'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>disk</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cdrom</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>floppy</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>lun</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ide</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>fdc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>sata</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <graphics supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vnc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egl-headless</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <video supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='modelType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vga</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>cirrus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>none</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>bochs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ramfb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hostdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='mode'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>subsystem</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='startupPolicy'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>mandatory</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>requisite</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>optional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='subsysType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pci</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>scsi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='capsType'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='pciBackend'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hostdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <rng supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtio-non-transitional</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>random</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>egd</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <filesystem supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='driverType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>path</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>handle</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>virtiofs</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </filesystem>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tpm supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-tis</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tpm-crb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emulator</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>external</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendVersion'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>2.0</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </tpm>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <redirdev supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='bus'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>usb</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </redirdev>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <channel supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </channel>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <crypto supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendModel'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>builtin</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </crypto>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <interface supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='backendType'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>default</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>passt</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <panic supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='model'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>isa</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>hyperv</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </panic>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <console supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='type'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>null</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vc</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pty</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dev</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>file</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>pipe</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stdio</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>udp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tcp</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>unix</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>qemu-vdagent</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>dbus</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </console>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <gic supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <vmcoreinfo supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <genid supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backingStoreInput supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <backup supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <async-teardown supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <s390-pv supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <ps2 supported='yes'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <tdx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sev supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <sgx supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <hyperv supported='yes'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <enum name='features'>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>relaxed</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vapic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>spinlocks</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vpindex</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>runtime</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>synic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>stimer</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reset</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>vendor_id</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>frequencies</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>reenlightenment</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>tlbflush</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>ipi</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>avic</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>emsr_bitmap</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <value>xmm_input</value>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </enum>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       <defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <spinlocks>4095</spinlocks>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <stimer_direct>on</stimer_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_direct>on</tlbflush_direct>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <tlbflush_extended>on</tlbflush_extended>
Feb 25 12:06:51 compute-0 nova_compute[244014]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 12:06:51 compute-0 nova_compute[244014]:       </defaults>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     </hyperv>
Feb 25 12:06:51 compute-0 nova_compute[244014]:     <launchSecurity supported='no'/>
Feb 25 12:06:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:06:51 compute-0 nova_compute[244014]: </domainCapabilities>
Feb 25 12:06:51 compute-0 nova_compute[244014]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
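The <domainCapabilities> dump that nova's _get_domain_capabilities logs above is mechanical to post-process: each <model> element carries usable='yes|no', and unusable models get a sibling <blockers> element listing the host CPU features they are missing. A minimal sketch of extracting that summary from a saved copy of the XML — assuming the model list sits under the usual <mode name='custom'> element, and with the file name 'domcaps.xml' purely hypothetical (nova's own parsing is elsewhere in host.py):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    # The per-model usability list lives under <cpu><mode name='custom'> (assumption).
    mode = root.find(".//cpu/mode[@name='custom']")
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in mode.findall('blockers')
    }
    for model in mode.findall('model'):
        name, usable = model.text, model.get('usable')
        if usable == 'yes':
            print(f"{name}: usable")
        else:
            print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")

On this host that would print every Skylake-Server and Snowridge variant as blocked (mostly on the missing avx512*/pcid/erms features), while the Westmere models come out usable.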
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.780 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.780 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Secure Boot support detected
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.782 244018 INFO nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.790 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.806 244018 INFO nova.virt.node [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Determined node identity cb4dae98-2ac3-4218-9445-2320139e12ad from /var/lib/nova/compute_id
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.822 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Verified node cb4dae98-2ac3-4218-9445-2320139e12ad matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.842 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.944 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:06:51 compute-0 nova_compute[244014]: 2026-02-25 12:06:51.944 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:06:52 compute-0 ceph-mon[76335]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:06:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411204322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.488 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
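The ceph-mon audit lines above bracket the resource tracker's storage probe: nova shells out to exactly the ceph df command shown in the DEBUG lines and reads cluster totals from the JSON reply. A stand-alone sketch of the same probe — the 'stats'/'total_avail_bytes' keys follow the usual ceph df JSON schema, and error handling is elided:

    import json, subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    # Should match the pgmap lines above: ~60 GiB available in the cluster.
    print('avail GiB: %.1f' % (stats['total_avail_bytes'] / 2**30))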
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.710 244018 WARNING nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.712 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.712 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.713 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.821 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.822 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.879 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.904 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.905 244018 DEBUG nova.compute.provider_tree [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.920 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.942 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:06:52 compute-0 nova_compute[244014]: 2026-02-25 12:06:52.969 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:06:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3411204322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:06:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275610153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.528 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.534 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.534 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] kernel doesn't support AMD SEV
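The SEV probe is just a sysfs read: the kvm_amd module parameter held N on this Intel/KVM guest, hence the "kernel doesn't support AMD SEV" result. A minimal re-implementation of the check — treating 'Y' or '1' as enabled is an assumption here; nova's actual logic lives in host.py's _kernel_supports_amd_sev:

    from pathlib import Path

    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        # A missing file means the kvm_amd module (or its SEV knob) is absent.
        p = Path(path)
        return p.exists() and p.read_text().strip() in ('Y', '1')

    print(kernel_supports_amd_sev())  # False on this host: the file held 'N'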
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.535 244018 DEBUG nova.compute.provider_tree [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.536 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.557 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
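The inventory record repeated in these DEBUG lines is what the placement service schedules against, and the effective capacity it implies follows placement's standard formula, (total - reserved) * allocation_ratio. Worked through for the numbers logged above:

    # Schedulable capacity implied by the logged inventory record.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So this 8-vCPU, ~7.5 GiB node advertises 32 schedulable vCPUs (4x overcommit), 7167 MB of RAM after the 512 MB reservation, and about 53 GB of disk after the 0.9 ratio.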
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.583 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.584 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.584 244018 DEBUG nova.service [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.645 244018 DEBUG nova.service [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 25 12:06:53 compute-0 nova_compute[244014]: 2026-02-25 12:06:53.646 244018 DEBUG nova.servicegroup.drivers.db [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 25 12:06:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:06:54 compute-0 ceph-mon[76335]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4275610153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:06:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:06:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:06:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:06:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:56 compute-0 ceph-mon[76335]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:57 compute-0 podman[244360]: 2026-02-25 12:06:57.761869793 +0000 UTC m=+0.102690170 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 12:06:57 compute-0 podman[244361]: 2026-02-25 12:06:57.770031959 +0000 UTC m=+0.109592479 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
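The two podman records are the periodic health checks for the ovn_metadata_agent and ovn_controller containers, each running the mounted /openstack/healthcheck test and reporting health_status=healthy. The same check can be replayed on demand with podman healthcheck run, which exits 0 when the configured test passes — a small sketch, with the container name taken from the log:

    import subprocess

    # Re-run the configured healthcheck for one of the containers above.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else 'unhealthy (exit %d)' % rc)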
Feb 25 12:06:58 compute-0 ceph-mon[76335]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:06:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:06:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:00 compute-0 ceph-mon[76335]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:01 compute-0 ceph-mon[76335]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:04 compute-0 ceph-mon[76335]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:06 compute-0 ceph-mon[76335]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:08 compute-0 ceph-mon[76335]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:10 compute-0 ceph-mon[76335]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:12 compute-0 ceph-mon[76335]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:14 compute-0 ceph-mon[76335]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:07:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:07:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:07:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:07:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:07:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:07:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:07:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:07:18 compute-0 ceph-mon[76335]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:19 compute-0 ceph-mon[76335]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:22 compute-0 ceph-mon[76335]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:24 compute-0 ceph-mon[76335]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:25 compute-0 ceph-mon[76335]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:28 compute-0 ceph-mon[76335]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:28 compute-0 podman[244405]: 2026-02-25 12:07:28.754654256 +0000 UTC m=+0.097752987 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:07:28 compute-0 podman[244406]: 2026-02-25 12:07:28.788878875 +0000 UTC m=+0.129310629 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:07:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:29 compute-0 ceph-mon[76335]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:07:30
Feb 25 12:07:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:07:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:07:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control']
Feb 25 12:07:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:31 compute-0 ceph-mon[76335]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:07:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:07:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:34 compute-0 ceph-mon[76335]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:35 compute-0 ceph-mon[76335]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:38 compute-0 ceph-mon[76335]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:39 compute-0 ceph-mon[76335]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:41 compute-0 sudo[244451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:07:41 compute-0 sudo[244451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:41 compute-0 sudo[244451]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:41 compute-0 sudo[244476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:07:41 compute-0 sudo[244476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:41 compute-0 nova_compute[244014]: 2026-02-25 12:07:41.647 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:41 compute-0 ceph-mon[76335]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:41 compute-0 nova_compute[244014]: 2026-02-25 12:07:41.736 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:07:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:07:42 compute-0 sudo[244476]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:07:42 compute-0 sudo[244531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:07:42 compute-0 sudo[244531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:42 compute-0 sudo[244531]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:42 compute-0 sudo[244556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:07:42 compute-0 sudo[244556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:42 compute-0 podman[244592]: 2026-02-25 12:07:42.602052234 +0000 UTC m=+0.033872011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:42 compute-0 podman[244592]: 2026-02-25 12:07:42.763326436 +0000 UTC m=+0.195146173 container create 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:07:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:07:42 compute-0 systemd[1]: Started libpod-conmon-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope.
Feb 25 12:07:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:43 compute-0 podman[244592]: 2026-02-25 12:07:43.107033502 +0000 UTC m=+0.538853229 container init 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:07:43 compute-0 podman[244592]: 2026-02-25 12:07:43.116358842 +0000 UTC m=+0.548178579 container start 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:07:43 compute-0 podman[244592]: 2026-02-25 12:07:43.120200493 +0000 UTC m=+0.552020230 container attach 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:07:43 compute-0 keen_dewdney[244608]: 167 167
Feb 25 12:07:43 compute-0 systemd[1]: libpod-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope: Deactivated successfully.
Feb 25 12:07:43 compute-0 podman[244592]: 2026-02-25 12:07:43.123272002 +0000 UTC m=+0.555091739 container died 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa00eb16be33a65a10b778f36de1d4815062531df3030dde03ff4fd0290ed311-merged.mount: Deactivated successfully.
Feb 25 12:07:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:43 compute-0 podman[244592]: 2026-02-25 12:07:43.373231348 +0000 UTC m=+0.805051085 container remove 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:07:43 compute-0 systemd[1]: libpod-conmon-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope: Deactivated successfully.
Feb 25 12:07:43 compute-0 podman[244633]: 2026-02-25 12:07:43.537821206 +0000 UTC m=+0.030763280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:43 compute-0 podman[244633]: 2026-02-25 12:07:43.638986651 +0000 UTC m=+0.131928725 container create 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:07:43 compute-0 systemd[1]: Started libpod-conmon-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope.
Feb 25 12:07:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:43 compute-0 ceph-mon[76335]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:43 compute-0 podman[244633]: 2026-02-25 12:07:43.807825052 +0000 UTC m=+0.300767166 container init 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:07:43 compute-0 podman[244633]: 2026-02-25 12:07:43.823489335 +0000 UTC m=+0.316431419 container start 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:07:43 compute-0 podman[244633]: 2026-02-25 12:07:43.829923321 +0000 UTC m=+0.322865405 container attach 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:07:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.907026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021263907066, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 987, "num_deletes": 508, "total_data_size": 889144, "memory_usage": 906888, "flush_reason": "Manual Compaction"}
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021263929928, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 880138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13674, "largest_seqno": 14660, "table_properties": {"data_size": 875773, "index_size": 1505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 12234, "raw_average_key_size": 17, "raw_value_size": 865004, "raw_average_value_size": 1241, "num_data_blocks": 70, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021204, "oldest_key_time": 1772021204, "file_creation_time": 1772021263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 22957 microseconds, and 3536 cpu microseconds.
Feb 25 12:07:43 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.929981) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 880138 bytes OK
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.930004) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011221) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011277) EVENT_LOG_v1 {"time_micros": 1772021264011263, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 883335, prev total WAL file size 883335, number of live WAL files 2.
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.012454) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(859KB)], [32(7624KB)]
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264012486, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8687927, "oldest_snapshot_seqno": -1}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3737 keys, 6808210 bytes, temperature: kUnknown
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264179280, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6808210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6782077, "index_size": 15720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 91712, "raw_average_key_size": 24, "raw_value_size": 6713224, "raw_average_value_size": 1796, "num_data_blocks": 665, "num_entries": 3737, "num_filter_entries": 3737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.179777) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6808210 bytes
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.211595) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.0 rd, 40.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.4 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(17.6) write-amplify(7.7) OK, records in: 4763, records dropped: 1026 output_compression: NoCompression
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.211661) EVENT_LOG_v1 {"time_micros": 1772021264211634, "job": 14, "event": "compaction_finished", "compaction_time_micros": 167059, "compaction_time_cpu_micros": 10678, "output_level": 6, "num_output_files": 1, "total_output_size": 6808210, "num_input_records": 4763, "num_output_records": 3737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264212084, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264213372, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.012062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:07:44 compute-0 xenodochial_poitras[244650]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:07:44 compute-0 xenodochial_poitras[244650]: --> All data devices are unavailable
Feb 25 12:07:44 compute-0 systemd[1]: libpod-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope: Deactivated successfully.
Feb 25 12:07:44 compute-0 podman[244633]: 2026-02-25 12:07:44.287944682 +0000 UTC m=+0.780886766 container died 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b-merged.mount: Deactivated successfully.
Feb 25 12:07:45 compute-0 podman[244633]: 2026-02-25 12:07:45.200261827 +0000 UTC m=+1.693203901 container remove 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:07:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:45 compute-0 sudo[244556]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:45 compute-0 systemd[1]: libpod-conmon-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope: Deactivated successfully.
Feb 25 12:07:45 compute-0 sudo[244682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:07:45 compute-0 sudo[244682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:45 compute-0 sudo[244682]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:45 compute-0 sudo[244707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:07:45 compute-0 sudo[244707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:45 compute-0 rsyslogd[1020]: imjournal from <np0005629333:sudo>: begin to drop messages due to rate-limiting
Feb 25 12:07:45 compute-0 ceph-mon[76335]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:45 compute-0 podman[244744]: 2026-02-25 12:07:45.661310314 +0000 UTC m=+0.036146505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.007016359 +0000 UTC m=+0.381852510 container create 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:07:46 compute-0 systemd[1]: Started libpod-conmon-45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b.scope.
Feb 25 12:07:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.494516852 +0000 UTC m=+0.869353063 container init 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.504252443 +0000 UTC m=+0.879088564 container start 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:07:46 compute-0 systemd[1]: libpod-45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b.scope: Deactivated successfully.
Feb 25 12:07:46 compute-0 youthful_bardeen[244761]: 167 167
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.545979689 +0000 UTC m=+0.920815900 container attach 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.547008579 +0000 UTC m=+0.921844720 container died 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bd4bdb0df8504b0bbdfad7e73e43e30d8bd56500b4ab163c0978829c0ddb9e-merged.mount: Deactivated successfully.
Feb 25 12:07:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 25 12:07:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025670761' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 25 12:07:46 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 25 12:07:46 compute-0 ceph-mgr[76641]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 25 12:07:46 compute-0 ceph-mgr[76641]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 25 12:07:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1025670761' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 25 12:07:46 compute-0 podman[244744]: 2026-02-25 12:07:46.891318223 +0000 UTC m=+1.266154364 container remove 45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bardeen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:07:46 compute-0 systemd[1]: libpod-conmon-45c3f7d50abbb9cc779d18d0d4aa6ab6eb32caf88d6126a34621532c50066b2b.scope: Deactivated successfully.
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.040367912 +0000 UTC m=+0.032235663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.137660145 +0000 UTC m=+0.129527866 container create 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:07:47 compute-0 systemd[1]: Started libpod-conmon-3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10.scope.
Feb 25 12:07:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e386726cf4785b302c0b878cb41673a98eec4be839adbb7ec5c0feb2ba18704b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e386726cf4785b302c0b878cb41673a98eec4be839adbb7ec5c0feb2ba18704b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e386726cf4785b302c0b878cb41673a98eec4be839adbb7ec5c0feb2ba18704b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e386726cf4785b302c0b878cb41673a98eec4be839adbb7ec5c0feb2ba18704b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.292401348 +0000 UTC m=+0.284269069 container init 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.300959675 +0000 UTC m=+0.292827406 container start 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.330326594 +0000 UTC m=+0.322194385 container attach 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:07:47 compute-0 compassionate_tu[244805]: {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     "0": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "devices": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "/dev/loop3"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             ],
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_name": "ceph_lv0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_size": "21470642176",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "name": "ceph_lv0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "tags": {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_name": "ceph",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.crush_device_class": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.encrypted": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.objectstore": "bluestore",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_id": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.vdo": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.with_tpm": "0"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             },
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "vg_name": "ceph_vg0"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         }
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     ],
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     "1": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "devices": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "/dev/loop4"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             ],
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_name": "ceph_lv1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_size": "21470642176",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "name": "ceph_lv1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "tags": {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_name": "ceph",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.crush_device_class": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.encrypted": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.objectstore": "bluestore",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_id": "1",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.vdo": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.with_tpm": "0"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             },
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "vg_name": "ceph_vg1"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         }
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     ],
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     "2": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "devices": [
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "/dev/loop5"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             ],
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_name": "ceph_lv2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_size": "21470642176",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "name": "ceph_lv2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "tags": {
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.cluster_name": "ceph",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.crush_device_class": "",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.encrypted": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.objectstore": "bluestore",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osd_id": "2",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.vdo": "0",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:                 "ceph.with_tpm": "0"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             },
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "type": "block",
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:             "vg_name": "ceph_vg2"
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:         }
Feb 25 12:07:47 compute-0 compassionate_tu[244805]:     ]
Feb 25 12:07:47 compute-0 compassionate_tu[244805]: }
Feb 25 12:07:47 compute-0 systemd[1]: libpod-3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10.scope: Deactivated successfully.
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.599353492 +0000 UTC m=+0.591221213 container died 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 12:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e386726cf4785b302c0b878cb41673a98eec4be839adbb7ec5c0feb2ba18704b-merged.mount: Deactivated successfully.
Feb 25 12:07:47 compute-0 podman[244788]: 2026-02-25 12:07:47.904812722 +0000 UTC m=+0.896680443 container remove 3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:07:47 compute-0 ceph-mon[76335]: from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 25 12:07:47 compute-0 ceph-mon[76335]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:47 compute-0 sudo[244707]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:47 compute-0 systemd[1]: libpod-conmon-3ed7b66adc1bdc93996661a30f5bd66d9b6fd12a58c169909f51e7e07f063a10.scope: Deactivated successfully.
Feb 25 12:07:48 compute-0 sudo[244826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:07:48 compute-0 sudo[244826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:48 compute-0 sudo[244826]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:48 compute-0 sudo[244851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:07:48 compute-0 sudo[244851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.347442158 +0000 UTC m=+0.032464019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.476923041 +0000 UTC m=+0.161944902 container create a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:07:48 compute-0 systemd[1]: Started libpod-conmon-a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb.scope.
Feb 25 12:07:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.812148171 +0000 UTC m=+0.497170062 container init a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.82041189 +0000 UTC m=+0.505433711 container start a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:07:48 compute-0 systemd[1]: libpod-a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb.scope: Deactivated successfully.
Feb 25 12:07:48 compute-0 practical_hellman[244905]: 167 167
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.848472492 +0000 UTC m=+0.533494413 container attach a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:07:48 compute-0 podman[244889]: 2026-02-25 12:07:48.849321166 +0000 UTC m=+0.534343037 container died a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:07:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa7f5d9828433d47450a5daf2fc548d78df03c5d38c3e23c3be8f5e405cc58f5-merged.mount: Deactivated successfully.
Feb 25 12:07:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:49 compute-0 podman[244889]: 2026-02-25 12:07:49.422181397 +0000 UTC m=+1.107203258 container remove a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:07:49 compute-0 systemd[1]: libpod-conmon-a835f415dbcaf4e502a4aa82e1e69974ed8262b6cd7bd05b9950fd32a61940cb.scope: Deactivated successfully.
Feb 25 12:07:49 compute-0 ceph-mon[76335]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:49 compute-0 podman[244931]: 2026-02-25 12:07:49.570504195 +0000 UTC m=+0.023161481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:07:49 compute-0 podman[244931]: 2026-02-25 12:07:49.675722556 +0000 UTC m=+0.128379782 container create 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:07:49 compute-0 systemd[1]: Started libpod-conmon-605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708.scope.
Feb 25 12:07:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97503fd19e35c9c60bb96818df66db50b59a1b4629ccece217a1b0ee01dc151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97503fd19e35c9c60bb96818df66db50b59a1b4629ccece217a1b0ee01dc151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97503fd19e35c9c60bb96818df66db50b59a1b4629ccece217a1b0ee01dc151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97503fd19e35c9c60bb96818df66db50b59a1b4629ccece217a1b0ee01dc151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:07:49 compute-0 podman[244931]: 2026-02-25 12:07:49.890137535 +0000 UTC m=+0.342794801 container init 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:07:49 compute-0 podman[244931]: 2026-02-25 12:07:49.898361133 +0000 UTC m=+0.351018359 container start 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:07:49 compute-0 podman[244931]: 2026-02-25 12:07:49.945010212 +0000 UTC m=+0.397667488 container attach 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:07:50 compute-0 lvm[245025]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:07:50 compute-0 lvm[245026]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:07:50 compute-0 lvm[245026]: VG ceph_vg1 finished
Feb 25 12:07:50 compute-0 lvm[245025]: VG ceph_vg0 finished
Feb 25 12:07:50 compute-0 lvm[245028]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:07:50 compute-0 lvm[245028]: VG ceph_vg2 finished
Feb 25 12:07:50 compute-0 condescending_hermann[244947]: {}
Feb 25 12:07:50 compute-0 systemd[1]: libpod-605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708.scope: Deactivated successfully.
Feb 25 12:07:50 compute-0 systemd[1]: libpod-605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708.scope: Consumed 1.177s CPU time.
Feb 25 12:07:50 compute-0 podman[244931]: 2026-02-25 12:07:50.685651583 +0000 UTC m=+1.138308829 container died 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.880 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.880 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.905 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.906 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.906 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.908 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.908 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.942 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.943 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.943 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.943 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:07:50 compute-0 nova_compute[244014]: 2026-02-25 12:07:50.944 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b97503fd19e35c9c60bb96818df66db50b59a1b4629ccece217a1b0ee01dc151-merged.mount: Deactivated successfully.
Feb 25 12:07:51 compute-0 podman[244931]: 2026-02-25 12:07:51.081381984 +0000 UTC m=+1.534039200 container remove 605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hermann, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:07:51 compute-0 sudo[244851]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:07:51 compute-0 systemd[1]: libpod-conmon-605bbbe2bcb82ee0cada854d292eaafa801b1396acfeabad2c7627ecab984708.scope: Deactivated successfully.
Feb 25 12:07:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:07:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:51 compute-0 sudo[245063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:07:51 compute-0 sudo[245063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:07:51 compute-0 sudo[245063]: pam_unix(sudo:session): session closed for user root
Feb 25 12:07:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:07:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039135797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.542 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.746 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.748 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.748 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.748 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.803 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.803 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:07:51 compute-0 nova_compute[244014]: 2026-02-25 12:07:51.840 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:07:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:07:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086968878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:07:52 compute-0 nova_compute[244014]: 2026-02-25 12:07:52.375 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:07:52 compute-0 nova_compute[244014]: 2026-02-25 12:07:52.383 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:07:52 compute-0 nova_compute[244014]: 2026-02-25 12:07:52.405 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:07:52 compute-0 nova_compute[244014]: 2026-02-25 12:07:52.407 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:07:52 compute-0 nova_compute[244014]: 2026-02-25 12:07:52.408 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:07:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:52 compute-0 ceph-mon[76335]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:07:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1039135797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:07:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3086968878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:07:53 compute-0 ceph-mon[76335]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:07:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:07:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:07:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:07:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:07:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:07:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:55 compute-0 ceph-mon[76335]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:58 compute-0 ceph-mon[76335]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:07:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:59 compute-0 ceph-mon[76335]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:07:59 compute-0 podman[245112]: 2026-02-25 12:07:59.757570582 +0000 UTC m=+0.097227049 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 12:07:59 compute-0 podman[245113]: 2026-02-25 12:07:59.761964826 +0000 UTC m=+0.101746527 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 25 12:08:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 25 12:08:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756717510' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 25 12:08:00 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 25 12:08:00 compute-0 ceph-mgr[76641]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 25 12:08:00 compute-0 ceph-mgr[76641]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 25 12:08:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2756717510' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:01 compute-0 ceph-mon[76335]: from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 25 12:08:01 compute-0 ceph-mon[76335]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:04 compute-0 ceph-mon[76335]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:05 compute-0 ceph-mon[76335]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:08 compute-0 ceph-mon[76335]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:09 compute-0 ceph-mon[76335]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:11 compute-0 ceph-mon[76335]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:14 compute-0 ceph-mon[76335]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:15 compute-0 ceph-mon[76335]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:18 compute-0 ceph-mon[76335]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:19 compute-0 ceph-mon[76335]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:21 compute-0 ceph-mon[76335]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:24 compute-0 ceph-mon[76335]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:25 compute-0 ceph-mon[76335]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:28 compute-0 ceph-mon[76335]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:29 compute-0 ceph-mon[76335]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:30 compute-0 podman[245155]: 2026-02-25 12:08:30.735806735 +0000 UTC m=+0.076843975 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 25 12:08:30 compute-0 podman[245156]: 2026-02-25 12:08:30.770686638 +0000 UTC m=+0.110450052 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:08:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:08:30
Feb 25 12:08:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:08:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:08:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'images']
Feb 25 12:08:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:08:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:08:32 compute-0 ceph-mon[76335]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:33 compute-0 ceph-mon[76335]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:35 compute-0 ceph-mon[76335]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:37 compute-0 ceph-mon[76335]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:39 compute-0 ceph-mon[76335]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:41 compute-0 ceph-mon[76335]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:08:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:08:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:43 compute-0 ceph-mon[76335]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:46 compute-0 ceph-mon[76335]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:47 compute-0 ceph-mon[76335]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3673938645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3673938645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3673938645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3673938645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:08:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:49 compute-0 ceph-mon[76335]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:51 compute-0 sudo[245197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:08:51 compute-0 sudo[245197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:51 compute-0 sudo[245197]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:51 compute-0 sudo[245222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:08:51 compute-0 sudo[245222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:52 compute-0 sudo[245222]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:08:52 compute-0 sudo[245278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:08:52 compute-0 sudo[245278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:52 compute-0 sudo[245278]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:52 compute-0 sudo[245303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:08:52 compute-0 sudo[245303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:52 compute-0 ceph-mon[76335]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:08:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.400 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.423 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.423 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.423 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.439 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.440 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.440 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.441 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.441 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.441 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.602880037 +0000 UTC m=+0.078601365 container create f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.554817854 +0000 UTC m=+0.030539252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:08:52 compute-0 systemd[1]: Started libpod-conmon-f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e.scope.
Feb 25 12:08:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.823978935 +0000 UTC m=+0.299700303 container init f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.833679708 +0000 UTC m=+0.309401046 container start f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:08:52 compute-0 systemd[1]: libpod-f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e.scope: Deactivated successfully.
Feb 25 12:08:52 compute-0 laughing_varahamihira[245355]: 167 167
Feb 25 12:08:52 compute-0 conmon[245355]: conmon f09b9c436b9e9462d4d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e.scope/container/memory.events
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:08:52 compute-0 nova_compute[244014]: 2026-02-25 12:08:52.900 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.912015025 +0000 UTC m=+0.387736383 container attach f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:08:52 compute-0 podman[245339]: 2026-02-25 12:08:52.913089455 +0000 UTC m=+0.388810783 container died f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b26ea33dac98ef8a401b5aba1dde04293fb64cded2a8857801dd9fb7f0bf5d-merged.mount: Deactivated successfully.
Feb 25 12:08:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:53 compute-0 podman[245339]: 2026-02-25 12:08:53.295873546 +0000 UTC m=+0.771594914 container remove f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_varahamihira, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:08:53 compute-0 systemd[1]: libpod-conmon-f09b9c436b9e9462d4d95f819ec42f2a11fc2c52920a6d240ff968c2787a8e3e.scope: Deactivated successfully.
Feb 25 12:08:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:08:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2924470378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:08:53 compute-0 ceph-mon[76335]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.454 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:08:53 compute-0 podman[245400]: 2026-02-25 12:08:53.520571605 +0000 UTC m=+0.102694614 container create 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:08:53 compute-0 podman[245400]: 2026-02-25 12:08:53.451757586 +0000 UTC m=+0.033880605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.604 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.607 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.608 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.608 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:08:53 compute-0 systemd[1]: Started libpod-conmon-3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839.scope.
Feb 25 12:08:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.691 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.692 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:08:53 compute-0 nova_compute[244014]: 2026-02-25 12:08:53.711 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:08:53 compute-0 podman[245400]: 2026-02-25 12:08:53.808591687 +0000 UTC m=+0.390714656 container init 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 12:08:53 compute-0 podman[245400]: 2026-02-25 12:08:53.817594401 +0000 UTC m=+0.399717370 container start 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:08:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:53 compute-0 podman[245400]: 2026-02-25 12:08:53.943296291 +0000 UTC m=+0.525419300 container attach 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:08:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:08:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220422329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:08:54 compute-0 nova_compute[244014]: 2026-02-25 12:08:54.224 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:08:54 compute-0 nova_compute[244014]: 2026-02-25 12:08:54.232 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:08:54 compute-0 sharp_noyce[245418]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:08:54 compute-0 sharp_noyce[245418]: --> All data devices are unavailable
Feb 25 12:08:54 compute-0 nova_compute[244014]: 2026-02-25 12:08:54.253 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:08:54 compute-0 nova_compute[244014]: 2026-02-25 12:08:54.256 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:08:54 compute-0 nova_compute[244014]: 2026-02-25 12:08:54.256 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:08:54 compute-0 systemd[1]: libpod-3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839.scope: Deactivated successfully.
Feb 25 12:08:54 compute-0 podman[245400]: 2026-02-25 12:08:54.290359107 +0000 UTC m=+0.872482116 container died 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e059b5ae77317d165fc62eec96b9f48e5d75394840920944c2a6d339d749f1e-merged.mount: Deactivated successfully.
Feb 25 12:08:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2924470378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:08:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1220422329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:08:54 compute-0 podman[245400]: 2026-02-25 12:08:54.758431891 +0000 UTC m=+1.340554900 container remove 3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:08:54 compute-0 sudo[245303]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:54 compute-0 systemd[1]: libpod-conmon-3817274506f11dc2907dad2b788f364c3e5533cbdf518c636e07d68468ec5839.scope: Deactivated successfully.
Feb 25 12:08:54 compute-0 sudo[245473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:08:54 compute-0 sudo[245473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:54 compute-0 sudo[245473]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:54 compute-0 sudo[245498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:08:54 compute-0 sudo[245498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
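This sudo line is cephadm's usual host-inspection pattern: the per-cluster copy of the cephadm binary under /var/lib/ceph/<fsid>/ runs `ceph-volume lvm list --format json` inside a short-lived container (the surrounding podman create/start/died/remove events). A hedged sketch of issuing the same call and capturing its JSON, with the argument vector copied verbatim from the log; requires root, and error handling is omitted:

    import json, subprocess

    # Command copied from the sudo audit line above.
    cmd = [
        "/bin/python3",
        "/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "--image", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        "--timeout", "895",
        "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
        "--", "lvm", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    osds = json.loads(out)   # same shape as the JSON dumped further down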
Feb 25 12:08:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:54.990 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:08:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:54.990 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:08:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:54.990 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:08:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.270068542 +0000 UTC m=+0.087203887 container create a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.202061007 +0000 UTC m=+0.019196372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:08:55 compute-0 systemd[1]: Started libpod-conmon-a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113.scope.
Feb 25 12:08:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.661415255 +0000 UTC m=+0.478550630 container init a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.669848113 +0000 UTC m=+0.486983458 container start a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:08:55 compute-0 nice_beaver[245551]: 167 167
Feb 25 12:08:55 compute-0 systemd[1]: libpod-a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113.scope: Deactivated successfully.
Feb 25 12:08:55 compute-0 ceph-mon[76335]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.864940878 +0000 UTC m=+0.682076273 container attach a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:08:55 compute-0 podman[245535]: 2026-02-25 12:08:55.865572766 +0000 UTC m=+0.682708111 container died a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-acf07e8a2de0973fe2790fc8721d91e924a1bac8306c96fbccd6fb35df8da736-merged.mount: Deactivated successfully.
Feb 25 12:08:56 compute-0 podman[245535]: 2026-02-25 12:08:56.753663531 +0000 UTC m=+1.570798876 container remove a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:08:56 compute-0 systemd[1]: libpod-conmon-a1a7cdf39c1c235637766ff87b790df8cbb89941fefe6be36b1a4619c23ae113.scope: Deactivated successfully.
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:56.947211242 +0000 UTC m=+0.043272680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:57.13922986 +0000 UTC m=+0.235291228 container create 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:08:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:57 compute-0 systemd[1]: Started libpod-conmon-07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757.scope.
Feb 25 12:08:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50db827c1c49b3fa2c0cab6b3b456e30166a5fbd14750901303acc9f5395c60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50db827c1c49b3fa2c0cab6b3b456e30166a5fbd14750901303acc9f5395c60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50db827c1c49b3fa2c0cab6b3b456e30166a5fbd14750901303acc9f5395c60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50db827c1c49b3fa2c0cab6b3b456e30166a5fbd14750901303acc9f5395c60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
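The four kernel lines above are informational rather than errors: this XFS filesystem appears to lack the bigtime feature, so its inode timestamps are 32-bit and top out at 0x7fffffff seconds. A one-liner showing where that limit lands:

    import datetime
    # 0x7fffffff seconds after the Unix epoch -- the cap the kernel
    # messages above are referring to.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00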
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:57.405196032 +0000 UTC m=+0.501257470 container init 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:57.413468145 +0000 UTC m=+0.509529523 container start 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:08:57 compute-0 ceph-mon[76335]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:57.609209428 +0000 UTC m=+0.705270776 container attach 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]: {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     "0": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "devices": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "/dev/loop3"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             ],
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_name": "ceph_lv0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_size": "21470642176",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "name": "ceph_lv0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "tags": {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_name": "ceph",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.crush_device_class": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.encrypted": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.objectstore": "bluestore",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_id": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.vdo": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.with_tpm": "0"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             },
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "vg_name": "ceph_vg0"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         }
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     ],
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     "1": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "devices": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "/dev/loop4"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             ],
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_name": "ceph_lv1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_size": "21470642176",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "name": "ceph_lv1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "tags": {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_name": "ceph",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.crush_device_class": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.encrypted": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.objectstore": "bluestore",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_id": "1",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.vdo": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.with_tpm": "0"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             },
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "vg_name": "ceph_vg1"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         }
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     ],
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     "2": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "devices": [
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "/dev/loop5"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             ],
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_name": "ceph_lv2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_size": "21470642176",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "name": "ceph_lv2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "tags": {
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.cluster_name": "ceph",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.crush_device_class": "",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.encrypted": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.objectstore": "bluestore",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osd_id": "2",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.vdo": "0",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:                 "ceph.with_tpm": "0"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             },
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "type": "block",
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:             "vg_name": "ceph_vg2"
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:         }
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]:     ]
Feb 25 12:08:57 compute-0 eager_kowalevski[245591]: }
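The JSON block that just ended maps each OSD id to its backing logical volume. A small sketch of the reduction a consumer of this output might perform, using only keys that appear above (the function name is illustrative):

    import json

    def osd_map(lvm_list_json: str):
        """Map osd_id -> (lv_path, physical device) from `ceph-volume lvm list`."""
        data = json.loads(lvm_list_json)
        return {
            osd_id: (lv["lv_path"], lv["devices"][0])
            for osd_id, lvs in data.items()
            for lv in lvs
            if lv["type"] == "block"
        }

    # Applied to the output above, this yields:
    # {'0': ('/dev/ceph_vg0/ceph_lv0', '/dev/loop3'),
    #  '1': ('/dev/ceph_vg1/ceph_lv1', '/dev/loop4'),
    #  '2': ('/dev/ceph_vg2/ceph_lv2', '/dev/loop5')}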
Feb 25 12:08:57 compute-0 systemd[1]: libpod-07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757.scope: Deactivated successfully.
Feb 25 12:08:57 compute-0 podman[245575]: 2026-02-25 12:08:57.697105304 +0000 UTC m=+0.793166642 container died 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:08:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:57.736 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:08:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:57.740 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:08:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:08:57.741 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c50db827c1c49b3fa2c0cab6b3b456e30166a5fbd14750901303acc9f5395c60-merged.mount: Deactivated successfully.
Feb 25 12:08:58 compute-0 podman[245575]: 2026-02-25 12:08:58.333095528 +0000 UTC m=+1.429156896 container remove 07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:08:58 compute-0 sudo[245498]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:58 compute-0 systemd[1]: libpod-conmon-07e3cb3a84c07fb970eb4b594d43f18c5f688f4703f1c8d7a4f2d34adcf10757.scope: Deactivated successfully.
Feb 25 12:08:58 compute-0 sudo[245612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:08:58 compute-0 sudo[245612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:58 compute-0 sudo[245612]: pam_unix(sudo:session): session closed for user root
Feb 25 12:08:58 compute-0 sudo[245637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:08:58 compute-0 sudo[245637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:08:58 compute-0 podman[245675]: 2026-02-25 12:08:58.79292544 +0000 UTC m=+0.037949780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:08:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:08:58 compute-0 podman[245675]: 2026-02-25 12:08:58.937896553 +0000 UTC m=+0.182920823 container create bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:08:59 compute-0 systemd[1]: Started libpod-conmon-bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421.scope.
Feb 25 12:08:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:08:59 compute-0 podman[245675]: 2026-02-25 12:08:59.080947903 +0000 UTC m=+0.325972193 container init bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:08:59 compute-0 podman[245675]: 2026-02-25 12:08:59.088496235 +0000 UTC m=+0.333520505 container start bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:08:59 compute-0 hopeful_heyrovsky[245691]: 167 167
Feb 25 12:08:59 compute-0 systemd[1]: libpod-bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421.scope: Deactivated successfully.
Feb 25 12:08:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:59 compute-0 podman[245675]: 2026-02-25 12:08:59.360457496 +0000 UTC m=+0.605481826 container attach bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:08:59 compute-0 podman[245675]: 2026-02-25 12:08:59.361014782 +0000 UTC m=+0.606039062 container died bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:08:59 compute-0 ceph-mon[76335]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1f8db5ac4ed4411510a8c33685545e507da88606accc88186c3629d75196d3-merged.mount: Deactivated successfully.
Feb 25 12:08:59 compute-0 podman[245675]: 2026-02-25 12:08:59.818852068 +0000 UTC m=+1.063876348 container remove bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:08:59 compute-0 systemd[1]: libpod-conmon-bd4628238dce194d1b6e779e87c0b0fbced1f8c1d7a637e951062c5a93bc3421.scope: Deactivated successfully.
Feb 25 12:09:00 compute-0 podman[245715]: 2026-02-25 12:09:00.079042037 +0000 UTC m=+0.119094196 container create 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:09:00 compute-0 podman[245715]: 2026-02-25 12:08:59.993909579 +0000 UTC m=+0.033961758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:09:00 compute-0 systemd[1]: Started libpod-conmon-7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f.scope.
Feb 25 12:09:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513252c409278f02352a7e3adcdf6634d577d204558fd5c6f85238e4d9b00897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513252c409278f02352a7e3adcdf6634d577d204558fd5c6f85238e4d9b00897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513252c409278f02352a7e3adcdf6634d577d204558fd5c6f85238e4d9b00897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513252c409278f02352a7e3adcdf6634d577d204558fd5c6f85238e4d9b00897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:09:00 compute-0 podman[245715]: 2026-02-25 12:09:00.349607468 +0000 UTC m=+0.389659627 container init 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 12:09:00 compute-0 podman[245715]: 2026-02-25 12:09:00.358047005 +0000 UTC m=+0.398099144 container start 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:09:00 compute-0 podman[245715]: 2026-02-25 12:09:00.426235945 +0000 UTC m=+0.466288154 container attach 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:09:01 compute-0 lvm[245828]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:09:01 compute-0 lvm[245831]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:09:01 compute-0 lvm[245831]: VG ceph_vg2 finished
Feb 25 12:09:01 compute-0 lvm[245828]: VG ceph_vg0 finished
Feb 25 12:09:01 compute-0 lvm[245832]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:09:01 compute-0 lvm[245832]: VG ceph_vg1 finished
Feb 25 12:09:01 compute-0 podman[245807]: 2026-02-25 12:09:01.157271316 +0000 UTC m=+0.075130747 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 12:09:01 compute-0 podman[245809]: 2026-02-25 12:09:01.192009424 +0000 UTC m=+0.109226587 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:09:01 compute-0 suspicious_chatterjee[245732]: {}
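`raw list` returning an empty object is consistent with the earlier ceph-volume messages ("passed data devices: 0 physical, 3 LVM"): all three OSDs on this host were prepared in lvm mode, so only `lvm list` has entries. A trivial guard a caller might apply (illustrative only):

    import json

    raw_osds = json.loads("{}")   # the `raw list` output above, verbatim
    if not raw_osds:
        # No raw-mode OSDs on this host; the `lvm list` JSON further up
        # is the authoritative device inventory.
        print("no raw-mode OSDs on this host")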
Feb 25 12:09:01 compute-0 systemd[1]: libpod-7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f.scope: Deactivated successfully.
Feb 25 12:09:01 compute-0 systemd[1]: libpod-7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f.scope: Consumed 1.302s CPU time.
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:01 compute-0 podman[245862]: 2026-02-25 12:09:01.314774652 +0000 UTC m=+0.039484393 container died 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:09:01 compute-0 ceph-mon[76335]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-513252c409278f02352a7e3adcdf6634d577d204558fd5c6f85238e4d9b00897-merged.mount: Deactivated successfully.
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:01 compute-0 podman[245862]: 2026-02-25 12:09:01.684813635 +0000 UTC m=+0.409523296 container remove 7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:09:01 compute-0 systemd[1]: libpod-conmon-7e87b5272c4aa0dd9ef1e3c745f883eee15c175bc3b6a008025c31b5e3345a8f.scope: Deactivated successfully.
Feb 25 12:09:01 compute-0 sudo[245637]: pam_unix(sudo:session): session closed for user root
Feb 25 12:09:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:09:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:09:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:09:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:09:01 compute-0 sudo[245877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:09:01 compute-0 sudo[245877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:09:01 compute-0 sudo[245877]: pam_unix(sudo:session): session closed for user root
Feb 25 12:09:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:09:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:09:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:03 compute-0 ceph-mon[76335]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:05 compute-0 ceph-mon[76335]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:08 compute-0 ceph-mon[76335]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:09 compute-0 ceph-mon[76335]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:11 compute-0 ceph-mon[76335]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3420 writes, 15K keys, 3420 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3420 writes, 3420 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1302 writes, 5655 keys, 1302 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s
                                           Interval WAL: 1302 writes, 1302 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    120.6      0.13              0.04         7    0.018       0      0       0.0       0.0
                                             L6      1/0    6.49 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    113.8     93.6      0.44              0.12         6    0.073     24K   3192       0.0       0.0
                                            Sum      1/0    6.49 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     88.2     99.7      0.57              0.16        13    0.043     24K   3192       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     77.5     80.1      0.34              0.07         6    0.057     13K   1949       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    113.8     93.6      0.44              0.12         6    0.073     24K   3192       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    124.5      0.12              0.04         6    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 308.00 MB usage: 1.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(107,1.73 MB,0.561434%) FilterBlock(14,74.73 KB,0.0236957%) IndexBlock(14,143.80 KB,0.0455931%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
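[annotation] The DUMPING STATS block above is RocksDB's periodic self-report from the mon store; on an idle cluster like this one the lines worth watching are the cumulative/interval write and stall counters. A minimal parsing sketch (the regex is an assumption, matched against the exact line format above):

    import re

    line = ("Cumulative writes: 3420 writes, 15K keys, 3420 commit groups, "
            "1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s")
    m = re.search(r"(\d+) writes, (\S+) keys.*ingest: ([\d.]+) GB, ([\d.]+) MB/s",
                  line)
    if m:
        writes, keys, ingest_gb, rate_mbs = m.groups()
        # -> 3420 writes, 15K keys, 0.02 GB ingested at 0.02 MB/s
        print(writes, keys, ingest_gb, rate_mbs)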
Feb 25 12:09:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:13 compute-0 ceph-mon[76335]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:15 compute-0 ceph-mon[76335]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:17 compute-0 ceph-mon[76335]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:19 compute-0 ceph-mon[76335]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:22 compute-0 ceph-mon[76335]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:23 compute-0 ceph-mon[76335]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:26 compute-0 ceph-mon[76335]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:27 compute-0 ceph-mon[76335]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:29 compute-0 ceph-mon[76335]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:09:30
Feb 25 12:09:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:09:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:09:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data']
Feb 25 12:09:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
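[annotation] "prepared 0/10 upmap changes" means the balancer evaluated up to 10 candidate pg-upmap-items entries across the listed pools and found nothing worth moving, consistent with all 305 PGs being active+clean. One way to confirm from a node with admin credentials (a sketch; assumes the stock ceph CLI and that balancer status honors -f json like most mon/mgr commands):

    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # expect: True upmap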
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:31 compute-0 ceph-mon[76335]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:09:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:09:31 compute-0 podman[245902]: 2026-02-25 12:09:31.713274644 +0000 UTC m=+0.058598620 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:09:31 compute-0 podman[245903]: 2026-02-25 12:09:31.778982092 +0000 UTC m=+0.119696509 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
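[annotation] The podman health_status=healthy events for ovn_metadata_agent and ovn_controller come from podman's healthcheck timers running the /openstack/healthcheck script mounted into each container (see the 'healthcheck' key in config_data above). The same check can be run by hand (a sketch; container names taken from these log lines):

    import subprocess

    # 'podman healthcheck run NAME' executes the configured healthcheck
    # once; exit status 0 means healthy.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")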
Feb 25 12:09:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:33 compute-0 ceph-mon[76335]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:36 compute-0 sshd-session[245947]: Received disconnect from 45.148.10.147 port 4358:11:  [preauth]
Feb 25 12:09:36 compute-0 sshd-session[245947]: Disconnected from authenticating user root 45.148.10.147 port 4358 [preauth]
Feb 25 12:09:36 compute-0 ceph-mon[76335]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:37 compute-0 ceph-mon[76335]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:40 compute-0 ceph-mon[76335]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:41 compute-0 ceph-mon[76335]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:09:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
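[annotation] Each pg_autoscaler pair above is "how much of the root's capacity does this pool use" followed by the resulting PG target. The printed targets are reproducible as usage_ratio x bias x a cluster PG budget of 300, a figure inferred from these lines rather than read from any config (a reconstruction sketch):

    # (usage_ratio, bias, current_pg_num) copied from the autoscaler lines.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 1),
        "cephfs.cephfs.meta": (2.6988014389607423e-06, 4.0, 16),
        "default.rgw.log":    (4.1969867161554995e-06, 1.0, 32),
    }
    PG_BUDGET = 300  # inferred: it reproduces every pg target printed above
    for name, (ratio, bias, current) in pools.items():
        target = ratio * bias * PG_BUDGET
        print(f"{name}: pg target {target:.16g} (current {current})")
    # The autoscaler rounds targets to a power of two and by default only
    # resizes a pool when target and pg_num differ by more than a 3x
    # threshold, which is why every pool here stays at its current value.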
Feb 25 12:09:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:44 compute-0 ceph-mon[76335]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:45 compute-0 ceph-mon[76335]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3727181192' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3727181192' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:09:47 compute-0 ceph-mon[76335]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3727181192' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3727181192' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:09:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:50 compute-0 ceph-mon[76335]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:51 compute-0 ceph-mon[76335]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.257 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.257 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.258 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.258 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.278 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.278 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.279 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.280 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.280 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.280 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
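[annotation] The "CONF.reclaim_instance_interval <= 0, skipping" line means deferred delete is disabled, so deleted instances are purged immediately rather than parked in SOFT_DELETED awaiting reclaim. If soft delete were wanted, the knob is a standard [DEFAULT] option in nova.conf (hypothetical value shown; 0 is the default and matches this log):

    [DEFAULT]
    # Keep deleted instances restorable for an hour before the
    # _reclaim_queued_deletes periodic task purges them.
    reclaim_instance_interval = 3600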
Feb 25 12:09:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:53 compute-0 ceph-mon[76335]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 nova_compute[244014]: 2026-02-25 12:09:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:54 compute-0 sshd-session[245949]: Invalid user jupyter from 80.94.92.186 port 37672
Feb 25 12:09:54 compute-0 sshd-session[245949]: Connection closed by invalid user jupyter 80.94.92.186 port 37672 [preauth]
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.931 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.932 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:09:54 compute-0 nova_compute[244014]: 2026-02-25 12:09:54.933 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:09:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:09:54.991 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:09:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:09:54.991 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:09:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:09:54.992 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:09:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:55 compute-0 ceph-mon[76335]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:09:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432480044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.568 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
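[annotation] Nova's resource tracker sizes the RBD-backed disk pool by shelling out to ceph df, exactly as captured in the Running cmd / returned pair above (a 0.636s round trip, with the matching audit entry on the mon). An equivalent standalone check (a sketch; assumes /etc/ceph/ceph.conf and the client.openstack keyring are present, and the usual 'stats' keys in ceph df JSON output):

    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    stats = df["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])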
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.727 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.728 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.728 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.729 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.922 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.923 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:09:55 compute-0 nova_compute[244014]: 2026-02-25 12:09:55.937 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:09:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:09:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715134099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:09:56 compute-0 nova_compute[244014]: 2026-02-25 12:09:56.451 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:09:56 compute-0 nova_compute[244014]: 2026-02-25 12:09:56.455 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:09:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3432480044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:09:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2715134099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:09:56 compute-0 nova_compute[244014]: 2026-02-25 12:09:56.556 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
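[annotation] Placement treats each inventory as (total - reserved) x allocation_ratio schedulable units, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM and 53.1 GB of disk. Worked arithmetic from the dict in the log line:

    # Effective capacity = (total - reserved) * allocation_ratio.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1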
Feb 25 12:09:56 compute-0 nova_compute[244014]: 2026-02-25 12:09:56.558 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:09:56 compute-0 nova_compute[244014]: 2026-02-25 12:09:56.559 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:09:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:57 compute-0 ceph-mon[76335]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:09:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:09:59 compute-0 ceph-mon[76335]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:01 compute-0 ceph-mon[76335]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:01 compute-0 sudo[245995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:10:01 compute-0 sudo[245995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:01 compute-0 sudo[245995]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:02 compute-0 sudo[246032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 12:10:02 compute-0 sudo[246032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:02 compute-0 podman[246019]: 2026-02-25 12:10:02.060125529 +0000 UTC m=+0.078610650 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:10:02 compute-0 podman[246020]: 2026-02-25 12:10:02.088680887 +0000 UTC m=+0.105548833 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:10:02 compute-0 sudo[246032]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:10:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:10:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:02 compute-0 sudo[246109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:10:02 compute-0 sudo[246109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:02 compute-0 sudo[246109]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:02 compute-0 sudo[246134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:10:02 compute-0 sudo[246134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:03 compute-0 sudo[246134]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:10:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:10:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:10:03 compute-0 ceph-mon[76335]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:03 compute-0 sudo[246190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:10:03 compute-0 sudo[246190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:03 compute-0 sudo[246190]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:03 compute-0 sudo[246215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:10:03 compute-0 sudo[246215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
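[annotation] This sudo line is cephadm (driven by the mgr at 192.168.122.100) preparing three OSDs: it wraps ceph-volume in the ceph container and hands it the pre-made LVs ceph_vg{0,1,2}/ceph_lv{0,1,2} as a bluestore batch, which is what the podman container create/start/died lines below correspond to. Once it completes, the result can be inspected through cephadm's ceph-volume passthrough (a sketch; needs root on this host):

    import subprocess

    # List the OSDs ceph-volume created from those LVs.
    subprocess.run(["cephadm", "ceph-volume",
                    "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
                    "--", "lvm", "list"], check=True)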
Feb 25 12:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:03.941908938 +0000 UTC m=+0.028253411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:04.193365761 +0000 UTC m=+0.279710184 container create 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:10:04 compute-0 systemd[1]: Started libpod-conmon-54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0.scope.
Feb 25 12:10:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:04.666000379 +0000 UTC m=+0.752344882 container init 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:04.674460286 +0000 UTC m=+0.760804749 container start 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:10:04 compute-0 silly_elion[246270]: 167 167
Feb 25 12:10:04 compute-0 systemd[1]: libpod-54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0.scope: Deactivated successfully.
Feb 25 12:10:04 compute-0 conmon[246270]: conmon 54d4ca4879be7aa48699 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0.scope/container/memory.events
Feb 25 12:10:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:10:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:10:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:04.899985972 +0000 UTC m=+0.986330475 container attach 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:10:04 compute-0 podman[246253]: 2026-02-25 12:10:04.900517227 +0000 UTC m=+0.986861680 container died 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:10:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1542cf5a30cefb62f227aa1e7df2137c8fc5f6d9b590ba023f5400986fe1ffe-merged.mount: Deactivated successfully.
Feb 25 12:10:05 compute-0 podman[246253]: 2026-02-25 12:10:05.701052956 +0000 UTC m=+1.787397419 container remove 54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 12:10:05 compute-0 systemd[1]: libpod-conmon-54d4ca4879be7aa48699827ea0baf5f94e129c57b5759642355de6775483bee0.scope: Deactivated successfully.
Feb 25 12:10:05 compute-0 ceph-mon[76335]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:05 compute-0 podman[246295]: 2026-02-25 12:10:05.868465849 +0000 UTC m=+0.029576319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:06 compute-0 podman[246295]: 2026-02-25 12:10:06.068522594 +0000 UTC m=+0.229633084 container create ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:10:06 compute-0 systemd[1]: Started libpod-conmon-ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd.scope.
Feb 25 12:10:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:06 compute-0 podman[246295]: 2026-02-25 12:10:06.453406419 +0000 UTC m=+0.614516949 container init ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:10:06 compute-0 podman[246295]: 2026-02-25 12:10:06.489591181 +0000 UTC m=+0.650701661 container start ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:10:06 compute-0 podman[246295]: 2026-02-25 12:10:06.590806771 +0000 UTC m=+0.751917281 container attach ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:10:06 compute-0 epic_nash[246312]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:10:06 compute-0 epic_nash[246312]: --> All data devices are unavailable
Feb 25 12:10:06 compute-0 systemd[1]: libpod-ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd.scope: Deactivated successfully.
Feb 25 12:10:07 compute-0 podman[246332]: 2026-02-25 12:10:07.045619492 +0000 UTC m=+0.050523204 container died ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 12:10:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:08 compute-0 ceph-mon[76335]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-31ac27ffefa1eac21de39d704c21816cf74308896cba7120e0377d6151a1dd51-merged.mount: Deactivated successfully.
Feb 25 12:10:08 compute-0 podman[246332]: 2026-02-25 12:10:08.728021263 +0000 UTC m=+1.732924975 container remove ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:10:08 compute-0 systemd[1]: libpod-conmon-ed4c16704aaa7d557adc5e873591663c2c1e39bcfd6910fdca72ee370798f1fd.scope: Deactivated successfully.
Feb 25 12:10:08 compute-0 sudo[246215]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:08 compute-0 sudo[246348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:10:08 compute-0 sudo[246348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:08 compute-0 sudo[246348]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:08 compute-0 sudo[246373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:10:08 compute-0 sudo[246373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.219796757 +0000 UTC m=+0.029666951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.316906713 +0000 UTC m=+0.126776877 container create c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:10:09 compute-0 systemd[1]: Started libpod-conmon-c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58.scope.
Feb 25 12:10:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:09 compute-0 ceph-mon[76335]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.621516863 +0000 UTC m=+0.431387107 container init c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.629592888 +0000 UTC m=+0.439463042 container start c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:10:09 compute-0 sad_jepsen[246427]: 167 167
Feb 25 12:10:09 compute-0 systemd[1]: libpod-c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58.scope: Deactivated successfully.
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.706845999 +0000 UTC m=+0.516716173 container attach c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.707277071 +0000 UTC m=+0.517147245 container died c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-467573c5325d15ff357d015300d403200bd99ab5d86adcc2ca2f76f388853225-merged.mount: Deactivated successfully.
Feb 25 12:10:09 compute-0 podman[246410]: 2026-02-25 12:10:09.936232464 +0000 UTC m=+0.746102638 container remove c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:10:09 compute-0 systemd[1]: libpod-conmon-c7f0b95f51bc378d0f0e5761ab62b7cf075a802a61510d8b83be419600257e58.scope: Deactivated successfully.
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.179874889 +0000 UTC m=+0.100795040 container create 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.112963027 +0000 UTC m=+0.033883198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:10 compute-0 systemd[1]: Started libpod-conmon-02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c.scope.
Feb 25 12:10:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f44dff6d9420eeb078c8187c8372d45156dda64a94c00a90331360b43ac7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f44dff6d9420eeb078c8187c8372d45156dda64a94c00a90331360b43ac7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f44dff6d9420eeb078c8187c8372d45156dda64a94c00a90331360b43ac7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950f44dff6d9420eeb078c8187c8372d45156dda64a94c00a90331360b43ac7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.359965805 +0000 UTC m=+0.280885966 container init 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.369216604 +0000 UTC m=+0.290136755 container start 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.399271165 +0000 UTC m=+0.320191366 container attach 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:10:10 compute-0 sweet_leakey[246467]: {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     "0": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "devices": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "/dev/loop3"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             ],
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_name": "ceph_lv0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_size": "21470642176",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "name": "ceph_lv0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "tags": {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_name": "ceph",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.crush_device_class": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.encrypted": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.objectstore": "bluestore",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_id": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.vdo": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.with_tpm": "0"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             },
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "vg_name": "ceph_vg0"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         }
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     ],
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     "1": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "devices": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "/dev/loop4"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             ],
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_name": "ceph_lv1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_size": "21470642176",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "name": "ceph_lv1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "tags": {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_name": "ceph",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.crush_device_class": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.encrypted": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.objectstore": "bluestore",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_id": "1",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.vdo": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.with_tpm": "0"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             },
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "vg_name": "ceph_vg1"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         }
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     ],
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     "2": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "devices": [
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "/dev/loop5"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             ],
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_name": "ceph_lv2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_size": "21470642176",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "name": "ceph_lv2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "tags": {
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.cluster_name": "ceph",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.crush_device_class": "",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.encrypted": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.objectstore": "bluestore",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osd_id": "2",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.vdo": "0",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:                 "ceph.with_tpm": "0"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             },
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "type": "block",
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:             "vg_name": "ceph_vg2"
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:         }
Feb 25 12:10:10 compute-0 sweet_leakey[246467]:     ]
Feb 25 12:10:10 compute-0 sweet_leakey[246467]: }
Feb 25 12:10:10 compute-0 podman[246451]: 2026-02-25 12:10:10.672660941 +0000 UTC m=+0.593581102 container died 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:10:10 compute-0 systemd[1]: libpod-02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c.scope: Deactivated successfully.
Feb 25 12:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-950f44dff6d9420eeb078c8187c8372d45156dda64a94c00a90331360b43ac7f-merged.mount: Deactivated successfully.
Feb 25 12:10:11 compute-0 podman[246451]: 2026-02-25 12:10:11.03662834 +0000 UTC m=+0.957548491 container remove 02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leakey, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:10:11 compute-0 systemd[1]: libpod-conmon-02e96b382a641abc16a939e8d82711a440a6b1204b15975308390a0d3c73283c.scope: Deactivated successfully.
Feb 25 12:10:11 compute-0 sudo[246373]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:11 compute-0 sudo[246490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:10:11 compute-0 sudo[246490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:11 compute-0 sudo[246490]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:11 compute-0 sudo[246515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:10:11 compute-0 sudo[246515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:11 compute-0 ceph-mon[76335]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.580375377 +0000 UTC m=+0.077404685 container create fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:10:11 compute-0 systemd[1]: Started libpod-conmon-fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09.scope.
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.533325442 +0000 UTC m=+0.030354740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.723864211 +0000 UTC m=+0.220893479 container init fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.732553534 +0000 UTC m=+0.229582832 container start fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:10:11 compute-0 blissful_hellman[246568]: 167 167
Feb 25 12:10:11 compute-0 systemd[1]: libpod-fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09.scope: Deactivated successfully.
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.766063971 +0000 UTC m=+0.263093239 container attach fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.76926595 +0000 UTC m=+0.266295218 container died fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96d4a7399ba0f0d08c977c47ed74233d0934e2d0669f7af426e9bd9d3f92ca7-merged.mount: Deactivated successfully.
Feb 25 12:10:11 compute-0 podman[246552]: 2026-02-25 12:10:11.908796133 +0000 UTC m=+0.405825421 container remove fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:10:11 compute-0 systemd[1]: libpod-conmon-fe706daf858056b04697c9029d033e366987445962c35a15f3becdbe69fbea09.scope: Deactivated successfully.
Feb 25 12:10:12 compute-0 podman[246596]: 2026-02-25 12:10:12.102991313 +0000 UTC m=+0.061939523 container create dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:10:12 compute-0 podman[246596]: 2026-02-25 12:10:12.068442397 +0000 UTC m=+0.027390627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:10:12 compute-0 systemd[1]: Started libpod-conmon-dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98.scope.
Feb 25 12:10:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac79fc7c065c03e36738c3315b88ee6cd9f0002ed5e8c19a3ee4fb90b1f79ccb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac79fc7c065c03e36738c3315b88ee6cd9f0002ed5e8c19a3ee4fb90b1f79ccb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac79fc7c065c03e36738c3315b88ee6cd9f0002ed5e8c19a3ee4fb90b1f79ccb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac79fc7c065c03e36738c3315b88ee6cd9f0002ed5e8c19a3ee4fb90b1f79ccb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:10:12 compute-0 podman[246596]: 2026-02-25 12:10:12.27556753 +0000 UTC m=+0.234515790 container init dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:10:12 compute-0 podman[246596]: 2026-02-25 12:10:12.28628615 +0000 UTC m=+0.245234320 container start dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:10:12 compute-0 podman[246596]: 2026-02-25 12:10:12.311565557 +0000 UTC m=+0.270513777 container attach dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:10:13 compute-0 lvm[246691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:10:13 compute-0 lvm[246691]: VG ceph_vg1 finished
Feb 25 12:10:13 compute-0 lvm[246690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:10:13 compute-0 lvm[246690]: VG ceph_vg0 finished
Feb 25 12:10:13 compute-0 lvm[246693]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:10:13 compute-0 lvm[246693]: VG ceph_vg2 finished
Feb 25 12:10:13 compute-0 upbeat_ganguly[246612]: {}
Feb 25 12:10:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:13 compute-0 systemd[1]: libpod-dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98.scope: Deactivated successfully.
Feb 25 12:10:13 compute-0 systemd[1]: libpod-dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98.scope: Consumed 1.393s CPU time.
Feb 25 12:10:13 compute-0 podman[246596]: 2026-02-25 12:10:13.317356526 +0000 UTC m=+1.276304706 container died dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac79fc7c065c03e36738c3315b88ee6cd9f0002ed5e8c19a3ee4fb90b1f79ccb-merged.mount: Deactivated successfully.
Feb 25 12:10:13 compute-0 podman[246596]: 2026-02-25 12:10:13.473046511 +0000 UTC m=+1.431994701 container remove dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:10:13 compute-0 systemd[1]: libpod-conmon-dd9c73cbcb92f3aa299bd2b2f30f55e0ec4a96c2d4c20411148dbf53bbd7ed98.scope: Deactivated successfully.
Feb 25 12:10:13 compute-0 sudo[246515]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:10:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:10:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:13 compute-0 sudo[246710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:10:13 compute-0 sudo[246710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:10:13 compute-0 sudo[246710]: pam_unix(sudo:session): session closed for user root
Feb 25 12:10:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:14 compute-0 ceph-mon[76335]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:10:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5830 writes, 24K keys, 5830 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5830 writes, 996 syncs, 5.85 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
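
Each ceph-osd process emits a dump like the one above every 600 seconds, and with the store this idle the per-column-family blocks are near-identical; the figures worth scanning are the cumulative/interval write, sync, and stall counters in the "** DB Stats **" header block. A minimal extraction sketch in Python, assuming the journal text has been saved to a plain-text file (the name osd-stats.txt is hypothetical); the regular expressions follow the exact line shapes printed above:

    import re

    # Matches e.g. "Cumulative writes: 5830 writes, 24K keys, ..."
    WRITES = re.compile(r"(Cumulative|Interval) writes: (\d+) writes")
    # Matches e.g. "Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent"
    STALL = re.compile(r"(Cumulative|Interval) stall: ([\d:.]+) H:M:S, ([\d.]+) percent")

    with open("osd-stats.txt") as f:
        for line in f:
            if (m := WRITES.search(line)):
                print(f"{m.group(1)} writes: {m.group(2)}")
            if (m := STALL.search(line)) and float(m.group(3)) > 0.0:
                print(f"stall: {m.group(1)} {m.group(2)} ({m.group(3)} percent)")

One cosmetic oddity in the block-cache lines above: occupancy: 18446744073709551615 is 2**64 - 1, which reads as a -1 sentinel printed as an unsigned 64-bit integer rather than a genuine entry count, so it can be ignored when eyeballing cache pressure.
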
Feb 25 12:10:15 compute-0 ceph-mon[76335]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:17 compute-0 ceph-mon[76335]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
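
The paired ceph-mgr/ceph-mon pgmap lines repeat the same health summary at each map epoch. A quick scripted check, under the assumption that lines of exactly the healthy shape shown above are piped in on stdin (e.g. from journalctl); a degraded cluster prints a different state list that this pattern deliberately does not match, so unmatched pgmap lines are flagged for a closer look:

    import re, sys

    # Matches e.g. "pgmap v729: 305 pgs: 305 active+clean; 461 KiB data,
    #               136 MiB used, 60 GiB / 60 GiB avail"
    PGMAP = re.compile(
        r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean; "
        r"(\S+ \S+) data, (\S+ \S+) used, (\S+ \S+) / (\S+ \S+) avail"
    )

    for line in sys.stdin:
        m = PGMAP.search(line)
        if m:
            print(f"pgmap v{m.group(1)}: {m.group(3)}/{m.group(2)} active+clean, "
                  f"{m.group(5)} used of {m.group(7)}")
        elif "pgmap v" in line:
            print("non-clean pgmap line, inspect:", line.strip())
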
Feb 25 12:10:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
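
The _set_new_cache_sizes line above is the monitor's cache autotuner reporting its current allocations in raw bytes; the policy behind the split is not visible in this log, but the figures convert cleanly to MiB (a throwaway snippet, values copied verbatim from the line above):

    for name, nbytes in [("cache_size", 1020054731),
                         ("inc_alloc",   348127232),
                         ("full_alloc",  348127232),
                         ("kv_alloc",    322961408)]:
        print(f"{name:>10}: {nbytes / 2**20:7.1f} MiB")

which prints cache_size at roughly 972.8 MiB, inc_alloc and full_alloc at exactly 332.0 MiB, and kv_alloc at exactly 308.0 MiB.
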
Feb 25 12:10:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7119 writes, 29K keys, 7119 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7119 writes, 1413 syncs, 5.04 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 25 12:10:20 compute-0 ceph-mon[76335]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:21 compute-0 ceph-mon[76335]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:23 compute-0 ceph-mon[76335]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Cumulative writes: 5669 writes, 24K keys, 5669 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5669 writes, 899 syncs, 6.31 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
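
The statistics dump above is a single RocksDB report repeated once per column family ([p-1], [p-2], [O-0] through [O-2], [L], [P]); only [O-2] and [L] record any flush/compaction work, the rest are all zeros. The block-cache "occupancy: 18446744073709551615" is 2^64-1, i.e. an unsigned counter printed at (uint64_t)-1, which appears to be an underflowed or unpopulated gauge rather than a real entry count. The per-type cache lines are regular enough to parse mechanically; a minimal Python sketch (the sample line is copied verbatim from the dump above; the regex is an assumption about the printed format, not a RocksDB API):

    import re

    # Line copied verbatim from the dump above (one appears per column family).
    line = ("Block cache entry stats(count,size,portion): "
            "DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) "
            "IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)")

    # Each entry has the shape Name(count,size KB,portion%).
    entry = re.compile(r"(\w+)\((\d+),([\d.]+) KB,([\d.eE+-]+)%\)")

    for name, count, size_kb, portion in entry.findall(line):
        print(f"{name}: count={count} size={size_kb} KB portion={portion}%")
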
Feb 25 12:10:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:25 compute-0 ceph-mon[76335]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 12:10:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:27 compute-0 ceph-mon[76335]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:29 compute-0 ceph-mon[76335]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:10:30
Feb 25 12:10:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:10:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:10:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root']
Feb 25 12:10:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:10:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:10:31 compute-0 ceph-mon[76335]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:32 compute-0 podman[246735]: 2026-02-25 12:10:32.765463333 +0000 UTC m=+0.108514986 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:10:32 compute-0 podman[246736]: 2026-02-25 12:10:32.773839667 +0000 UTC m=+0.117021984 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
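
The two podman records above are periodic container health probes for ovn_metadata_agent and ovn_controller; both report health_status=healthy with health_failing_streak=0, and the configured probe is the 'test': '/openstack/healthcheck' script bind-mounted from /var/lib/openstack/healthchecks/<container>. The same probe can be fired on demand; a small sketch, assuming podman is on PATH and using the container names from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when it passes, non-zero otherwise.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'} (rc={rc})")
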
Feb 25 12:10:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:33 compute-0 ceph-mon[76335]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:35 compute-0 ceph-mon[76335]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:37 compute-0 ceph-mon[76335]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:39 compute-0 ceph-mon[76335]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.587122) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441587214, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1654, "num_deletes": 251, "total_data_size": 2730111, "memory_usage": 2778544, "flush_reason": "Manual Compaction"}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb 25 12:10:41 compute-0 ceph-mon[76335]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441649022, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2671622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14661, "largest_seqno": 16314, "table_properties": {"data_size": 2664044, "index_size": 4521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15361, "raw_average_key_size": 19, "raw_value_size": 2648870, "raw_average_value_size": 3395, "num_data_blocks": 206, "num_entries": 780, "num_filter_entries": 780, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021265, "oldest_key_time": 1772021265, "file_creation_time": 1772021441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 61961 microseconds, and 6434 cpu microseconds.
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.649097) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2671622 bytes OK
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.649130) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.689844) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.689873) EVENT_LOG_v1 {"time_micros": 1772021441689864, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.689901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2722986, prev total WAL file size 2722986, number of live WAL files 2.
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.690978) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2609KB)], [35(6648KB)]
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441691044, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9479832, "oldest_snapshot_seqno": -1}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4003 keys, 7683989 bytes, temperature: kUnknown
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441831868, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7683989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7655352, "index_size": 17519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97646, "raw_average_key_size": 24, "raw_value_size": 7581055, "raw_average_value_size": 1893, "num_data_blocks": 742, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:10:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
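
Each pg_autoscaler pair above reports a pool's share of the raw capacity (64411926528 bytes, about 60 GiB) and derives a PG target before quantizing to the current pg_num. The logged numbers are consistent with target = usage_fraction x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs; that multiplier is inferred from the values, the log never states it. A check against the two pools with non-zero usage:

    # Reproduce the pg_autoscaler arithmetic from the values logged above.
    # ASSUMPTION: 300 = mon_target_pg_per_osd (100) * 3 OSDs; only usage,
    # bias and the resulting target actually appear in the log.
    TARGET_PGS = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (2.6988014389607423e-06, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(f"{name}: pg target {usage * bias * TARGET_PGS}")
        # .mgr               -> 0.0021557249951162337 (matches the log)
        # cephfs.cephfs.meta -> 0.0032385617267528906 (matches the log)
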
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.832173) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7683989 bytes
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.898805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.3 rd, 54.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 6.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 4517, records dropped: 514 output_compression: NoCompression
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.898848) EVENT_LOG_v1 {"time_micros": 1772021441898828, "job": 16, "event": "compaction_finished", "compaction_time_micros": 140911, "compaction_time_cpu_micros": 15492, "output_level": 6, "num_output_files": 1, "total_output_size": 7683989, "num_input_records": 4517, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441899399, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021441900291, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.690790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.900408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.900416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.900418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.900419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:10:41 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:10:41.900421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
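
The JOB 15/16 lines carry enough raw numbers to re-derive the summary printed on the "compacted to:" line: 9479832 input bytes and 7683989 output bytes over 140911 microseconds give the logged 67.3 MB/s read and 54.5 MB/s write (decimal MB, so bytes per microsecond equals MB/s), and both amplification factors are ratios against the 2671622-byte L0 table that was just flushed. A quick check:

    # Values copied from the JOB 15/16 event lines above.
    input_bytes  = 9_479_832   # compaction_started: input_data_size
    output_bytes = 7_683_989   # compaction_finished: total_output_size
    l0_bytes     = 2_671_622   # table #37, the freshly flushed L0 input
    micros       = 140_911     # compaction_time_micros

    print(f"rd {input_bytes / micros:.1f} MB/s")           # 67.3
    print(f"wr {output_bytes / micros:.1f} MB/s")          # 54.5
    print(f"write-amplify {output_bytes / l0_bytes:.1f}")  # 2.9
    print(f"read-write-amplify {(input_bytes + output_bytes) / l0_bytes:.1f}")  # 6.4
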
Feb 25 12:10:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:43 compute-0 ceph-mon[76335]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:45 compute-0 ceph-mon[76335]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303775642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303775642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:10:47 compute-0 ceph-mon[76335]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1303775642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:10:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1303775642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
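
The audit entries above show client.openstack polling the monitor with "df" and "osd pool get-quota" (the usual Cinder/Nova capacity checks). The same JSON commands can be issued from Python through librados; a minimal sketch, assuming python3-rados is installed and a keyring exists for the identity seen in the log:

    import json
    import rados

    # Connect as the client identity from the audit lines above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    # mon_command(cmd_json, inbuf) -> (ret, outbuf, outs); these two
    # commands match the audited prefixes exactly.
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], ret, outbuf[:60])

    cluster.shutdown()
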
Feb 25 12:10:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:49 compute-0 ceph-mon[76335]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:51 compute-0 ceph-mon[76335]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:53 compute-0 ceph-mon[76335]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.559 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.560 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.560 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.560 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.574 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.575 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.575 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.575 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:53 compute-0 nova_compute[244014]: 2026-02-25 12:10:53.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:10:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:54 compute-0 nova_compute[244014]: 2026-02-25 12:10:54.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:54 compute-0 nova_compute[244014]: 2026-02-25 12:10:54.898 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:10:54.992 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:10:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:10:54.993 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:10:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:10:54.993 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:10:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:55 compute-0 ceph-mon[76335]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:10:56 compute-0 nova_compute[244014]: 2026-02-25 12:10:56.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:10:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:10:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765421903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.458 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:10:57 compute-0 ceph-mon[76335]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3765421903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.676 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
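Annotation: the Hypervisor/Node resource view above embeds the host's full PCI inventory as a JSON list; all 11 virtual devices report "numa_node": null on this KVM guest. A quick way to summarize such a dump, assuming it was captured to a hypothetical pci_devices.json:

    # Tally the pci_devices list from the resource view above by vendor.
    import json
    from collections import Counter

    with open("pci_devices.json") as f:   # hypothetical capture of the log's list
        pci_devices = json.load(f)
    print(Counter(d["vendor_id"] for d in pci_devices))
    # Counter({'1af4': 6, '8086': 5}) for the inventory above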
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.787 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.788 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:10:57 compute-0 nova_compute[244014]: 2026-02-25 12:10:57.811 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:10:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:10:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583942846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:10:58 compute-0 nova_compute[244014]: 2026-02-25 12:10:58.342 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:10:58 compute-0 nova_compute[244014]: 2026-02-25 12:10:58.348 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:10:58 compute-0 nova_compute[244014]: 2026-02-25 12:10:58.377 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
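Annotation: placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above advertises 32 VCPU, 7167 MB of RAM and 53 GB of disk despite the 8 / 7679 / 59 physical figures. Worked out directly from the logged dict:

    # Effective capacity implied by the inventory data logged above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, int((v["total"] - v["reserved"]) * v["allocation_ratio"]))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 53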
Feb 25 12:10:58 compute-0 nova_compute[244014]: 2026-02-25 12:10:58.379 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:10:58 compute-0 nova_compute[244014]: 2026-02-25 12:10:58.380 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:10:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2583942846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:10:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:10:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:10:59 compute-0 ceph-mon[76335]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:01 compute-0 ceph-mon[76335]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:03 compute-0 podman[246825]: 2026-02-25 12:11:03.728717697 +0000 UTC m=+0.063846236 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:11:03 compute-0 podman[246826]: 2026-02-25 12:11:03.782469471 +0000 UTC m=+0.113393372 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:11:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:04 compute-0 ceph-mon[76335]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:05 compute-0 ceph-mon[76335]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:08 compute-0 ceph-mon[76335]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:09 compute-0 ceph-mon[76335]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:11 compute-0 ceph-mon[76335]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:13 compute-0 ceph-mon[76335]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:13 compute-0 sudo[246869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:11:13 compute-0 sudo[246869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:13 compute-0 sudo[246869]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:13 compute-0 sudo[246894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:11:13 compute-0 sudo[246894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:14 compute-0 sudo[246894]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:11:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:11:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
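Annotation: the handle_command/audit pairs above are the mgr's cephadm module driving the mon over the structured mon_command interface: clearing the per-host osd_memory_target override, regenerating a minimal ceph.conf and fetching keyrings before it deploys OSDs. The same interface is exposed to any client through the librados Python binding; a minimal sketch, assuming client.admin credentials are present on the host:

    # Issue one of the mon commands audited above through python3-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(ret, outbuf.decode())
    cluster.shutdown()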
Feb 25 12:11:14 compute-0 sudo[246951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:11:14 compute-0 sudo[246951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:14 compute-0 sudo[246951]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:14 compute-0 sudo[246976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:11:14 compute-0 sudo[246976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
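Annotation: cephadm runs ceph-volume inside the ceph container it pulls just below; "lvm batch --no-auto" with explicit LV paths prepares bluestore OSDs on pre-created logical volumes, and --no-systemd is passed because cephadm manages the OSD units itself. A hypothetical helper that assembles the same argv for a given drive group (a sketch, not cephadm's code):

    # Build the ceph-volume argv that cephadm invoked above (sketch).
    def lvm_batch_argv(lv_paths):
        return (["ceph-volume", "lvm", "batch", "--no-auto"]
                + list(lv_paths)
                + ["--objectstore", "bluestore", "--yes", "--no-systemd"])

    print(" ".join(lvm_batch_argv([
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ])))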
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:14.970280193 +0000 UTC m=+0.035134814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.109058467 +0000 UTC m=+0.173913028 container create ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:11:15 compute-0 systemd[1]: Started libpod-conmon-ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd.scope.
Feb 25 12:11:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.357426479 +0000 UTC m=+0.422281090 container init ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.366333559 +0000 UTC m=+0.431188090 container start ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:11:15 compute-0 systemd[1]: libpod-ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd.scope: Deactivated successfully.
Feb 25 12:11:15 compute-0 infallible_mestorf[247030]: 167 167
Feb 25 12:11:15 compute-0 conmon[247030]: conmon ebb0cff48b97ebed73b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd.scope/container/memory.events
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.450674889 +0000 UTC m=+0.515529520 container attach ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.451448601 +0000 UTC m=+0.516303172 container died ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd1b95cd1568ac4e90ddb0f271cc239928eaeca407462df7afff741ec95db595-merged.mount: Deactivated successfully.
Feb 25 12:11:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:11:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:11:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:11:15 compute-0 ceph-mon[76335]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:15 compute-0 podman[247014]: 2026-02-25 12:11:15.763153095 +0000 UTC m=+0.828007656 container remove ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:11:15 compute-0 systemd[1]: libpod-conmon-ebb0cff48b97ebed73b9660636ccb6f9f5512d22b43dc94bf00d902927eb42dd.scope: Deactivated successfully.
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.038902273 +0000 UTC m=+0.092912031 container create 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:15.983132992 +0000 UTC m=+0.037142750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:16 compute-0 systemd[1]: Started libpod-conmon-3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891.scope.
Feb 25 12:11:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.223535081 +0000 UTC m=+0.277544839 container init 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.232626526 +0000 UTC m=+0.286636254 container start 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.26494316 +0000 UTC m=+0.318952888 container attach 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:11:16 compute-0 sharp_montalcini[247074]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:11:16 compute-0 sharp_montalcini[247074]: --> All data devices are unavailable
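Annotation: "All data devices are unavailable" is not a failure here: ceph-volume treats an LV that already carries ceph lv_tags (an existing OSD) as unavailable, so this batch run is an idempotent no-op; the lvm list report further below confirms all three LVs are already tagged as OSDs 0-2. A sketch of that availability test against the tag format shown in the JSON that follows:

    # ceph-volume-style availability check (sketch): an LV whose tags
    # already include ceph.osd_id= belongs to an existing OSD.
    def lv_is_available(lv_tags: str) -> bool:
        return "ceph.osd_id=" not in lv_tags

    print(lv_is_available(""))                                      # True
    print(lv_is_available("ceph.cluster_name=ceph,ceph.osd_id=0"))  # False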
Feb 25 12:11:16 compute-0 systemd[1]: libpod-3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891.scope: Deactivated successfully.
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.715916732 +0000 UTC m=+0.769926490 container died 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a30246b4a0f37b4bec008c707cc7edbec12267e86ef37cb95883e8d1a27dd3b-merged.mount: Deactivated successfully.
Feb 25 12:11:16 compute-0 podman[247057]: 2026-02-25 12:11:16.986854235 +0000 UTC m=+1.040863983 container remove 3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_montalcini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:11:16 compute-0 systemd[1]: libpod-conmon-3caec0a8cc61d658b07738d9ec67113c3b58653403ee3cc8e0c3b45bf1cd8891.scope: Deactivated successfully.
Feb 25 12:11:17 compute-0 sudo[246976]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:17 compute-0 sudo[247108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:11:17 compute-0 sudo[247108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:17 compute-0 sudo[247108]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:17 compute-0 sudo[247133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:11:17 compute-0 sudo[247133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.422087607 +0000 UTC m=+0.025694920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.679491251 +0000 UTC m=+0.283098494 container create d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:11:17 compute-0 ceph-mon[76335]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:17 compute-0 systemd[1]: Started libpod-conmon-d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115.scope.
Feb 25 12:11:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.958671146 +0000 UTC m=+0.562278449 container init d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.967890314 +0000 UTC m=+0.571497567 container start d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:11:17 compute-0 goofy_heisenberg[247186]: 167 167
Feb 25 12:11:17 compute-0 systemd[1]: libpod-d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115.scope: Deactivated successfully.
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.984984432 +0000 UTC m=+0.588591685 container attach d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:11:17 compute-0 podman[247170]: 2026-02-25 12:11:17.985874167 +0000 UTC m=+0.589481420 container died d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bcecef97c2c9e6c29124e90cdf9e29aeb2317f81be4b93c46f36aaac1c3623-merged.mount: Deactivated successfully.
Feb 25 12:11:18 compute-0 podman[247170]: 2026-02-25 12:11:18.289052633 +0000 UTC m=+0.892659876 container remove d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:11:18 compute-0 systemd[1]: libpod-conmon-d00be61d352039d5aac04f84b88d096740e484945b4ef5be8137cc5893273115.scope: Deactivated successfully.
Feb 25 12:11:18 compute-0 podman[247210]: 2026-02-25 12:11:18.54083028 +0000 UTC m=+0.109315821 container create 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:11:18 compute-0 podman[247210]: 2026-02-25 12:11:18.464235906 +0000 UTC m=+0.032721507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:18 compute-0 systemd[1]: Started libpod-conmon-053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd.scope.
Feb 25 12:11:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b4b1f5a1304c39d168c44491533b5d7faa70296e0b6ffa19ea1fa4400d2a41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b4b1f5a1304c39d168c44491533b5d7faa70296e0b6ffa19ea1fa4400d2a41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b4b1f5a1304c39d168c44491533b5d7faa70296e0b6ffa19ea1fa4400d2a41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b4b1f5a1304c39d168c44491533b5d7faa70296e0b6ffa19ea1fa4400d2a41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:18 compute-0 podman[247210]: 2026-02-25 12:11:18.763914734 +0000 UTC m=+0.332400325 container init 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:11:18 compute-0 podman[247210]: 2026-02-25 12:11:18.770602531 +0000 UTC m=+0.339088072 container start 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:11:18 compute-0 podman[247210]: 2026-02-25 12:11:18.844598022 +0000 UTC m=+0.413083603 container attach 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:11:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]: {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     "0": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "devices": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "/dev/loop3"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             ],
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_name": "ceph_lv0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_size": "21470642176",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "name": "ceph_lv0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "tags": {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_name": "ceph",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.crush_device_class": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.encrypted": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.objectstore": "bluestore",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_id": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.vdo": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.with_tpm": "0"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             },
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "vg_name": "ceph_vg0"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         }
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     ],
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     "1": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "devices": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "/dev/loop4"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             ],
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_name": "ceph_lv1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_size": "21470642176",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "name": "ceph_lv1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "tags": {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_name": "ceph",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.crush_device_class": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.encrypted": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.objectstore": "bluestore",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_id": "1",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.vdo": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.with_tpm": "0"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             },
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "vg_name": "ceph_vg1"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         }
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     ],
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     "2": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "devices": [
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "/dev/loop5"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             ],
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_name": "ceph_lv2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_size": "21470642176",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "name": "ceph_lv2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "tags": {
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.cluster_name": "ceph",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.crush_device_class": "",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.encrypted": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.objectstore": "bluestore",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osd_id": "2",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.vdo": "0",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:                 "ceph.with_tpm": "0"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             },
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "type": "block",
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:             "vg_name": "ceph_vg2"
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:         }
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]:     ]
Feb 25 12:11:19 compute-0 xenodochial_tu[247226]: }
Feb 25 12:11:19 compute-0 systemd[1]: libpod-053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd.scope: Deactivated successfully.
Feb 25 12:11:19 compute-0 podman[247210]: 2026-02-25 12:11:19.08891526 +0000 UTC m=+0.657400801 container died 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b4b1f5a1304c39d168c44491533b5d7faa70296e0b6ffa19ea1fa4400d2a41-merged.mount: Deactivated successfully.
Feb 25 12:11:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:19 compute-0 ceph-mon[76335]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:19 compute-0 podman[247210]: 2026-02-25 12:11:19.514205754 +0000 UTC m=+1.082691305 container remove 053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_tu, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:11:19 compute-0 systemd[1]: libpod-conmon-053f7b3e9c5d6c9ac54b622359641c8db95601d52b2e4dccd8f913b77ae732dd.scope: Deactivated successfully.
Feb 25 12:11:19 compute-0 sudo[247133]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:19 compute-0 sudo[247250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:11:19 compute-0 sudo[247250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:19 compute-0 sudo[247250]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:19 compute-0 sudo[247275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:11:19 compute-0 sudo[247275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:20.035803642 +0000 UTC m=+0.074928177 container create 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:19.993624302 +0000 UTC m=+0.032748857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:20 compute-0 systemd[1]: Started libpod-conmon-70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5.scope.
Feb 25 12:11:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:20.278783683 +0000 UTC m=+0.317908278 container init 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:20.285645535 +0000 UTC m=+0.324770070 container start 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:11:20 compute-0 systemd[1]: libpod-70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5.scope: Deactivated successfully.
Feb 25 12:11:20 compute-0 gallant_greider[247328]: 167 167
Feb 25 12:11:20 compute-0 conmon[247328]: conmon 70d23b774c524b24dee8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5.scope/container/memory.events
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:20.410273343 +0000 UTC m=+0.449397878 container attach 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:11:20 compute-0 podman[247312]: 2026-02-25 12:11:20.410772447 +0000 UTC m=+0.449896982 container died 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-988ccce75e8e6c26768c41616ff0170388dca38071f7658d6ee61f0fbb101220-merged.mount: Deactivated successfully.
Feb 25 12:11:21 compute-0 podman[247312]: 2026-02-25 12:11:21.281515968 +0000 UTC m=+1.320640513 container remove 70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:11:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:11:21 compute-0 systemd[1]: libpod-conmon-70d23b774c524b24dee89db3e9a0de18302fd2dd68de5db5bc5e1b00a21024c5.scope: Deactivated successfully.
Feb 25 12:11:21 compute-0 podman[247352]: 2026-02-25 12:11:21.52095199 +0000 UTC m=+0.089329041 container create 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:11:21 compute-0 ceph-mon[76335]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:11:21 compute-0 podman[247352]: 2026-02-25 12:11:21.46557823 +0000 UTC m=+0.033955331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:11:21 compute-0 systemd[1]: Started libpod-conmon-17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1.scope.
Feb 25 12:11:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1baf4c7638473fbb3b5e861a018ce416b9d5aabc8446c97a3cfa392ddb4508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1baf4c7638473fbb3b5e861a018ce416b9d5aabc8446c97a3cfa392ddb4508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1baf4c7638473fbb3b5e861a018ce416b9d5aabc8446c97a3cfa392ddb4508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1baf4c7638473fbb3b5e861a018ce416b9d5aabc8446c97a3cfa392ddb4508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:11:21 compute-0 podman[247352]: 2026-02-25 12:11:21.696622867 +0000 UTC m=+0.264999988 container init 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:11:21 compute-0 podman[247352]: 2026-02-25 12:11:21.705799684 +0000 UTC m=+0.274176765 container start 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:11:21 compute-0 podman[247352]: 2026-02-25 12:11:21.735160286 +0000 UTC m=+0.303537417 container attach 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:11:22 compute-0 lvm[247445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:11:22 compute-0 lvm[247445]: VG ceph_vg0 finished
Feb 25 12:11:22 compute-0 lvm[247448]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:11:22 compute-0 lvm[247448]: VG ceph_vg1 finished
Feb 25 12:11:22 compute-0 lvm[247450]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:11:22 compute-0 lvm[247450]: VG ceph_vg2 finished
Feb 25 12:11:22 compute-0 clever_poincare[247369]: {}
Feb 25 12:11:22 compute-0 systemd[1]: libpod-17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1.scope: Deactivated successfully.
Feb 25 12:11:22 compute-0 systemd[1]: libpod-17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1.scope: Consumed 1.044s CPU time.
Feb 25 12:11:22 compute-0 podman[247453]: 2026-02-25 12:11:22.455109217 +0000 UTC m=+0.030018961 container died 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e1baf4c7638473fbb3b5e861a018ce416b9d5aabc8446c97a3cfa392ddb4508-merged.mount: Deactivated successfully.
Feb 25 12:11:22 compute-0 podman[247453]: 2026-02-25 12:11:22.883016264 +0000 UTC m=+0.457925988 container remove 17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:11:22 compute-0 systemd[1]: libpod-conmon-17cbc027358d896dbf6442101d3fff47f255fdac49d520314546e833974684e1.scope: Deactivated successfully.
Feb 25 12:11:22 compute-0 sudo[247275]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:11:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:11:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:23 compute-0 sudo[247468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:11:23 compute-0 sudo[247468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:11:23 compute-0 sudo[247468]: pam_unix(sudo:session): session closed for user root
Feb 25 12:11:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 12:11:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:11:24 compute-0 ceph-mon[76335]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 12:11:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 12:11:25 compute-0 ceph-mon[76335]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 12:11:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:27 compute-0 ceph-mon[76335]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:29 compute-0 ceph-mon[76335]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:11:30
Feb 25 12:11:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:11:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:11:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Feb 25 12:11:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:31 compute-0 ceph-mon[76335]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:11:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:11:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Feb 25 12:11:33 compute-0 ceph-mon[76335]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Feb 25 12:11:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:34 compute-0 podman[247493]: 2026-02-25 12:11:34.766505241 +0000 UTC m=+0.099358722 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:11:34 compute-0 podman[247494]: 2026-02-25 12:11:34.775241586 +0000 UTC m=+0.108089887 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:11:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 12:11:35 compute-0 ceph-mon[76335]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 12:11:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 12:11:37 compute-0 ceph-mon[76335]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 12:11:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:39 compute-0 ceph-mon[76335]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:41 compute-0 ceph-mon[76335]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:11:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:11:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:43 compute-0 ceph-mon[76335]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:45 compute-0 ceph-mon[76335]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357942651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357942651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:11:47 compute-0 ceph-mon[76335]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3357942651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:11:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3357942651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:11:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:49 compute-0 ceph-mon[76335]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.908 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:11:50 compute-0 nova_compute[244014]: 2026-02-25 12:11:50.924 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:51 compute-0 ceph-mon[76335]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:51 compute-0 nova_compute[244014]: 2026-02-25 12:11:51.938 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:51 compute-0 nova_compute[244014]: 2026-02-25 12:11:51.938 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:11:51 compute-0 nova_compute[244014]: 2026-02-25 12:11:51.938 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:11:51 compute-0 nova_compute[244014]: 2026-02-25 12:11:51.956 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:11:52 compute-0 nova_compute[244014]: 2026-02-25 12:11:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:52 compute-0 nova_compute[244014]: 2026-02-25 12:11:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:52 compute-0 nova_compute[244014]: 2026-02-25 12:11:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:53 compute-0 ceph-mon[76335]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:54 compute-0 nova_compute[244014]: 2026-02-25 12:11:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:54 compute-0 nova_compute[244014]: 2026-02-25 12:11:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:11:54.994 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:11:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:11:54.994 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:11:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:11:54.994 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:11:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:55 compute-0 ceph-mon[76335]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:55 compute-0 nova_compute[244014]: 2026-02-25 12:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:55 compute-0 nova_compute[244014]: 2026-02-25 12:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:55 compute-0 nova_compute[244014]: 2026-02-25 12:11:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:11:56 compute-0 nova_compute[244014]: 2026-02-25 12:11:56.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:11:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:11:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541659882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.474 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:11:57 compute-0 ceph-mon[76335]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3541659882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.585 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.586 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.586 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.587 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.810 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.810 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.871 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.955 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.955 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:11:57 compute-0 nova_compute[244014]: 2026-02-25 12:11:57.973 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.005 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.021 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:11:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:11:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/966609413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.513 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.518 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.534 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.537 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:11:58 compute-0 nova_compute[244014]: 2026-02-25 12:11:58.538 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:11:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/966609413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:11:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:11:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:11:59 compute-0 ceph-mon[76335]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb 25 12:12:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb 25 12:12:00 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb 25 12:12:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb 25 12:12:01 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb 25 12:12:01 compute-0 ceph-mon[76335]: osdmap e116: 3 total, 3 up, 3 in
Feb 25 12:12:01 compute-0 ceph-mon[76335]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Feb 25 12:12:02 compute-0 ceph-mon[76335]: osdmap e117: 3 total, 3 up, 3 in
Feb 25 12:12:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb 25 12:12:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb 25 12:12:02 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb 25 12:12:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 511 B/s wr, 7 op/s
Feb 25 12:12:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:04 compute-0 ceph-mon[76335]: osdmap e118: 3 total, 3 up, 3 in
Feb 25 12:12:04 compute-0 ceph-mon[76335]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 511 B/s wr, 7 op/s
Feb 25 12:12:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb 25 12:12:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb 25 12:12:05 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb 25 12:12:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 683 B/s wr, 10 op/s
Feb 25 12:12:05 compute-0 podman[247581]: 2026-02-25 12:12:05.728492709 +0000 UTC m=+0.068611681 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 12:12:05 compute-0 podman[247582]: 2026-02-25 12:12:05.750963628 +0000 UTC m=+0.090012490 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:12:06 compute-0 ceph-mon[76335]: osdmap e119: 3 total, 3 up, 3 in
Feb 25 12:12:06 compute-0 ceph-mon[76335]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 683 B/s wr, 10 op/s
Feb 25 12:12:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 25 12:12:07 compute-0 ceph-mon[76335]: pgmap v788: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 25 12:12:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb 25 12:12:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb 25 12:12:08 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb 25 12:12:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 MiB/s wr, 52 op/s
Feb 25 12:12:09 compute-0 ceph-mon[76335]: osdmap e120: 3 total, 3 up, 3 in
Feb 25 12:12:09 compute-0 ceph-mon[76335]: pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 MiB/s wr, 52 op/s
Feb 25 12:12:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Feb 25 12:12:11 compute-0 ceph-mon[76335]: pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Feb 25 12:12:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Feb 25 12:12:13 compute-0 ceph-mon[76335]: pgmap v792: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Feb 25 12:12:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb 25 12:12:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb 25 12:12:14 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb 25 12:12:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:15 compute-0 ceph-mon[76335]: osdmap e121: 3 total, 3 up, 3 in
Feb 25 12:12:15 compute-0 ceph-mon[76335]: pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Feb 25 12:12:17 compute-0 ceph-mon[76335]: pgmap v795: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Feb 25 12:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb 25 12:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb 25 12:12:18 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb 25 12:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb 25 12:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb 25 12:12:18 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb 25 12:12:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Feb 25 12:12:19 compute-0 ceph-mon[76335]: osdmap e122: 3 total, 3 up, 3 in
Feb 25 12:12:19 compute-0 ceph-mon[76335]: osdmap e123: 3 total, 3 up, 3 in
Feb 25 12:12:19 compute-0 ceph-mon[76335]: pgmap v798: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Feb 25 12:12:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 71 op/s
Feb 25 12:12:21 compute-0 ceph-mon[76335]: pgmap v799: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 71 op/s
Feb 25 12:12:23 compute-0 sudo[247630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:12:23 compute-0 sudo[247630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:23 compute-0 sudo[247630]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:23 compute-0 sudo[247655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 12:12:23 compute-0 sudo[247655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Feb 25 12:12:23 compute-0 ceph-mon[76335]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Feb 25 12:12:23 compute-0 podman[247722]: 2026-02-25 12:12:23.81767996 +0000 UTC m=+0.117860990 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 12:12:23 compute-0 podman[247722]: 2026-02-25 12:12:23.932308379 +0000 UTC m=+0.232489359 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:12:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb 25 12:12:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb 25 12:12:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb 25 12:12:24 compute-0 sudo[247655]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:12:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:12:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:24 compute-0 sudo[247911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:12:24 compute-0 sudo[247911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:24 compute-0 sudo[247911]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:24 compute-0 ceph-mon[76335]: osdmap e124: 3 total, 3 up, 3 in
Feb 25 12:12:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:25 compute-0 sudo[247936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:12:25 compute-0 sudo[247936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 KiB/s wr, 37 op/s
Feb 25 12:12:25 compute-0 sudo[247936]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:12:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:12:25 compute-0 sudo[247992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:12:25 compute-0 sudo[247992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:25 compute-0 sudo[247992]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:25 compute-0 sudo[248017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:12:25 compute-0 sudo[248017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:26 compute-0 ceph-mon[76335]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 KiB/s wr, 37 op/s
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:12:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.086849872 +0000 UTC m=+0.059099315 container create a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:12:26 compute-0 systemd[1]: Started libpod-conmon-a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562.scope.
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.062680326 +0000 UTC m=+0.034929539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.203558669 +0000 UTC m=+0.175807942 container init a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.210652197 +0000 UTC m=+0.182901400 container start a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:12:26 compute-0 blissful_gould[248071]: 167 167
Feb 25 12:12:26 compute-0 systemd[1]: libpod-a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562.scope: Deactivated successfully.
Feb 25 12:12:26 compute-0 conmon[248071]: conmon a15464923ea54c82e729 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562.scope/container/memory.events
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.227305883 +0000 UTC m=+0.199555166 container attach a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.227810287 +0000 UTC m=+0.200059530 container died a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ece78a79e3e50184af5d42fc69b6cbd8bad6a6d264a819291f801fe441f59174-merged.mount: Deactivated successfully.
Feb 25 12:12:26 compute-0 podman[248055]: 2026-02-25 12:12:26.546842636 +0000 UTC m=+0.519091879 container remove a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:12:26 compute-0 systemd[1]: libpod-conmon-a15464923ea54c82e7295dd8a092451306777402520bb8e8a3f534c1459f7562.scope: Deactivated successfully.
Feb 25 12:12:26 compute-0 podman[248097]: 2026-02-25 12:12:26.754617082 +0000 UTC m=+0.072414897 container create c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:12:26 compute-0 podman[248097]: 2026-02-25 12:12:26.712193685 +0000 UTC m=+0.029991510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:26 compute-0 systemd[1]: Started libpod-conmon-c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2.scope.
Feb 25 12:12:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:26 compute-0 podman[248097]: 2026-02-25 12:12:26.922329307 +0000 UTC m=+0.240127112 container init c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:12:26 compute-0 podman[248097]: 2026-02-25 12:12:26.932482911 +0000 UTC m=+0.250280716 container start c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:12:26 compute-0 podman[248097]: 2026-02-25 12:12:26.970433433 +0000 UTC m=+0.288231258 container attach c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:12:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Feb 25 12:12:27 compute-0 quirky_chatterjee[248114]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:12:27 compute-0 quirky_chatterjee[248114]: --> All data devices are unavailable
Feb 25 12:12:27 compute-0 systemd[1]: libpod-c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2.scope: Deactivated successfully.
Feb 25 12:12:27 compute-0 podman[248097]: 2026-02-25 12:12:27.43442545 +0000 UTC m=+0.752223265 container died c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:12:27 compute-0 ceph-mon[76335]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Feb 25 12:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-37e8dce2417d70dc6ce3b1e113c33cabc6dac931e631f11d203ac4acd55e2834-merged.mount: Deactivated successfully.
Feb 25 12:12:27 compute-0 podman[248097]: 2026-02-25 12:12:27.67237638 +0000 UTC m=+0.990174185 container remove c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chatterjee, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:12:27 compute-0 systemd[1]: libpod-conmon-c10ab9ab35d642911aed150eef6f267f123c2bfcd9c1680b9006aa36e75398a2.scope: Deactivated successfully.
Feb 25 12:12:27 compute-0 sudo[248017]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:27 compute-0 sudo[248146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:12:27 compute-0 sudo[248146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:27 compute-0 sudo[248146]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:27 compute-0 sudo[248171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:12:27 compute-0 sudo[248171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.238565266 +0000 UTC m=+0.098338923 container create 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.173402642 +0000 UTC m=+0.033176339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:28 compute-0 systemd[1]: Started libpod-conmon-2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4.scope.
Feb 25 12:12:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.398509173 +0000 UTC m=+0.258282870 container init 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.408270946 +0000 UTC m=+0.268044603 container start 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:12:28 compute-0 pensive_taussig[248224]: 167 167
Feb 25 12:12:28 compute-0 systemd[1]: libpod-2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4.scope: Deactivated successfully.
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.435051676 +0000 UTC m=+0.294825373 container attach 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.435870309 +0000 UTC m=+0.295643956 container died 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1313d89c4b364fc16631099f631504daa35c031ee42e6aa969f442eb4f9f3e64-merged.mount: Deactivated successfully.
Feb 25 12:12:28 compute-0 podman[248208]: 2026-02-25 12:12:28.701183315 +0000 UTC m=+0.560956962 container remove 2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_taussig, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:12:28 compute-0 systemd[1]: libpod-conmon-2d62c549af31d50a37f4c53bb2b9ad1a8c15c4b13a6be90cec21e00d6a0c4df4.scope: Deactivated successfully.
Feb 25 12:12:28 compute-0 podman[248249]: 2026-02-25 12:12:28.923061365 +0000 UTC m=+0.072105289 container create bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:12:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:28 compute-0 podman[248249]: 2026-02-25 12:12:28.883646442 +0000 UTC m=+0.032690406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:28 compute-0 systemd[1]: Started libpod-conmon-bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55.scope.
Feb 25 12:12:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a572197d483167633707591bb44084629202bf8a407953cde0c7c2aee64b27f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a572197d483167633707591bb44084629202bf8a407953cde0c7c2aee64b27f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a572197d483167633707591bb44084629202bf8a407953cde0c7c2aee64b27f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a572197d483167633707591bb44084629202bf8a407953cde0c7c2aee64b27f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:29 compute-0 podman[248249]: 2026-02-25 12:12:29.036061078 +0000 UTC m=+0.185105102 container init bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:12:29 compute-0 podman[248249]: 2026-02-25 12:12:29.046387907 +0000 UTC m=+0.195431861 container start bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:12:29 compute-0 podman[248249]: 2026-02-25 12:12:29.060219084 +0000 UTC m=+0.209263038 container attach bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]: {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     "0": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "devices": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "/dev/loop3"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             ],
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_name": "ceph_lv0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_size": "21470642176",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "name": "ceph_lv0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "tags": {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_name": "ceph",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.crush_device_class": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.encrypted": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.objectstore": "bluestore",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_id": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.vdo": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.with_tpm": "0"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             },
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "vg_name": "ceph_vg0"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         }
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     ],
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     "1": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "devices": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "/dev/loop4"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             ],
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_name": "ceph_lv1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_size": "21470642176",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "name": "ceph_lv1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "tags": {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_name": "ceph",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.crush_device_class": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.encrypted": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.objectstore": "bluestore",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_id": "1",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.vdo": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.with_tpm": "0"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             },
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "vg_name": "ceph_vg1"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         }
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     ],
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     "2": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "devices": [
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "/dev/loop5"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             ],
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_name": "ceph_lv2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_size": "21470642176",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "name": "ceph_lv2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "tags": {
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.cluster_name": "ceph",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.crush_device_class": "",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.encrypted": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.objectstore": "bluestore",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osd_id": "2",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.vdo": "0",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:                 "ceph.with_tpm": "0"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             },
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "type": "block",
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:             "vg_name": "ceph_vg2"
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:         }
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]:     ]
Feb 25 12:12:29 compute-0 suspicious_chaplygin[248266]: }
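The JSON printed by suspicious_chaplygin above is the per-OSD inventory in the format of `ceph-volume lvm list --format json`: the top-level keys are OSD ids ("0", "1", "2"), each backed by one logical volume on a loop device (/dev/loop3..5), with the LVM tags carried both as the flat lv_tags string and as the tags object. A minimal consumer of such output, assuming it has been captured to a file (the filename and the print format are illustrative, not from the log):

    import json

    # Hypothetical capture, e.g. redirecting the container's stdout:
    #   ... ceph-volume --fsid ... -- lvm list --format json > lvm_list.json
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")
            # The flat "lv_tags" string carries the same key=value pairs;
            # it splits cleanly here because no value contains a comma.
            flat = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            assert flat == tags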
Feb 25 12:12:29 compute-0 systemd[1]: libpod-bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55.scope: Deactivated successfully.
Feb 25 12:12:29 compute-0 podman[248249]: 2026-02-25 12:12:29.346203859 +0000 UTC m=+0.495247813 container died bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:12:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 12:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a572197d483167633707591bb44084629202bf8a407953cde0c7c2aee64b27f-merged.mount: Deactivated successfully.
Feb 25 12:12:29 compute-0 ceph-mon[76335]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 12:12:29 compute-0 podman[248249]: 2026-02-25 12:12:29.527259426 +0000 UTC m=+0.676303370 container remove bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:12:29 compute-0 systemd[1]: libpod-conmon-bc147336b5c5376456dd1e5f756f7ff18d0a41bc12f8dc75b9dbcf7777f5af55.scope: Deactivated successfully.
Feb 25 12:12:29 compute-0 sudo[248171]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:29 compute-0 sudo[248287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:12:29 compute-0 sudo[248287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:29 compute-0 sudo[248287]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:29 compute-0 sudo[248312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:12:29 compute-0 sudo[248312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:29.985213854 +0000 UTC m=+0.035960407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.128483554 +0000 UTC m=+0.179230067 container create 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:12:30 compute-0 systemd[1]: Started libpod-conmon-99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda.scope.
Feb 25 12:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.342079773 +0000 UTC m=+0.392826336 container init 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.350163319 +0000 UTC m=+0.400909832 container start 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:12:30 compute-0 stupefied_darwin[248365]: 167 167
Feb 25 12:12:30 compute-0 systemd[1]: libpod-99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda.scope: Deactivated successfully.
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.389036107 +0000 UTC m=+0.439782690 container attach 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.390374284 +0000 UTC m=+0.441120807 container died 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc5e88031cd5a72359d5508d06f98ceee8ad3050d149a441d20d2c1949555c4-merged.mount: Deactivated successfully.
Feb 25 12:12:30 compute-0 podman[248349]: 2026-02-25 12:12:30.500933439 +0000 UTC m=+0.551679952 container remove 99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_darwin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:12:30 compute-0 systemd[1]: libpod-conmon-99f063c62523150ed60646e60eea751df80a9e278a2cd67e0bef0395baa80fda.scope: Deactivated successfully.
Feb 25 12:12:30 compute-0 podman[248391]: 2026-02-25 12:12:30.705222447 +0000 UTC m=+0.068024995 container create 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:12:30 compute-0 podman[248391]: 2026-02-25 12:12:30.67175305 +0000 UTC m=+0.034555668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:12:30 compute-0 systemd[1]: Started libpod-conmon-9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003.scope.
Feb 25 12:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778b643b8d4113a197e898f8521fc3c771bac2e6623016db800d98e6c0987a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778b643b8d4113a197e898f8521fc3c771bac2e6623016db800d98e6c0987a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778b643b8d4113a197e898f8521fc3c771bac2e6623016db800d98e6c0987a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778b643b8d4113a197e898f8521fc3c771bac2e6623016db800d98e6c0987a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:12:30 compute-0 podman[248391]: 2026-02-25 12:12:30.835173284 +0000 UTC m=+0.197975892 container init 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:12:30 compute-0 podman[248391]: 2026-02-25 12:12:30.840529434 +0000 UTC m=+0.203331982 container start 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:12:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:12:30
Feb 25 12:12:30 compute-0 podman[248391]: 2026-02-25 12:12:30.859995649 +0000 UTC m=+0.222798167 container attach 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:12:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:12:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:12:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms']
Feb 25 12:12:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 307 B/s wr, 2 op/s
Feb 25 12:12:31 compute-0 ceph-mon[76335]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 307 B/s wr, 2 op/s
Feb 25 12:12:31 compute-0 lvm[248489]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:12:31 compute-0 lvm[248489]: VG ceph_vg2 finished
Feb 25 12:12:31 compute-0 lvm[248486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:12:31 compute-0 lvm[248486]: VG ceph_vg0 finished
Feb 25 12:12:31 compute-0 lvm[248490]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:12:31 compute-0 lvm[248490]: VG ceph_vg1 finished
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:12:31 compute-0 dreamy_ride[248408]: {}
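The {} above is dreamy_ride's entire stdout; it is the container run for the `ceph-volume ... raw list --format json` call issued by the sudo session at 12:12:29, and the empty object is consistent with all three OSDs being LVM-backed here (the lvm listing found them; the raw-device scan reports nothing). A caller telling the two cases apart needs no more than a sketch like this (variable names are illustrative):

    import json

    raw = json.loads("{}")  # stdout of `raw list --format json`, as logged above
    if not raw:
        # Empty object: no raw-mode OSDs reported on this host; fall back
        # to the earlier `lvm list` inventory instead.
        print("no raw OSDs reported")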
Feb 25 12:12:31 compute-0 systemd[1]: libpod-9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003.scope: Deactivated successfully.
Feb 25 12:12:31 compute-0 systemd[1]: libpod-9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003.scope: Consumed 1.117s CPU time.
Feb 25 12:12:31 compute-0 podman[248391]: 2026-02-25 12:12:31.694640579 +0000 UTC m=+1.057443137 container died 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:12:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-778b643b8d4113a197e898f8521fc3c771bac2e6623016db800d98e6c0987a96-merged.mount: Deactivated successfully.
Feb 25 12:12:31 compute-0 podman[248391]: 2026-02-25 12:12:31.91618916 +0000 UTC m=+1.278991718 container remove 9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:12:31 compute-0 systemd[1]: libpod-conmon-9fafa756ae2599ee8dd187615d44af43062867bbdbf9a52536e3dd2800add003.scope: Deactivated successfully.
Feb 25 12:12:31 compute-0 sudo[248312]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:12:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:12:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:32 compute-0 sudo[248505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:12:32 compute-0 sudo[248505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:12:32 compute-0 sudo[248505]: pam_unix(sudo:session): session closed for user root
Feb 25 12:12:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:12:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 16 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Feb 25 12:12:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb 25 12:12:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb 25 12:12:34 compute-0 ceph-mon[76335]: pgmap v806: 305 pgs: 305 active+clean; 16 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Feb 25 12:12:34 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb 25 12:12:35 compute-0 ceph-mon[76335]: osdmap e125: 3 total, 3 up, 3 in
Feb 25 12:12:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 16 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Feb 25 12:12:36 compute-0 ceph-mon[76335]: pgmap v808: 305 pgs: 305 active+clean; 16 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Feb 25 12:12:36 compute-0 podman[248531]: 2026-02-25 12:12:36.768582545 +0000 UTC m=+0.099742633 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:12:36 compute-0 podman[248530]: 2026-02-25 12:12:36.768169714 +0000 UTC m=+0.102825989 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
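A note on reading the two health_status entries above: the config_data label podman logs for each container is a Python-literal dict (single quotes, bare True), not JSON, so json.loads rejects it while ast.literal_eval parses it directly. A sketch, using a short excerpt of the ovn_controller label rather than the full value:

    import ast

    # Excerpt of the config_data label as logged above; single quotes and
    # bare True make this a Python literal, not JSON.
    label = "{'net': 'host', 'privileged': True, 'restart': 'always'}"
    config = ast.literal_eval(label)
    print(config["net"], config["privileged"])  # -> host True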
Feb 25 12:12:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:37 compute-0 ceph-mon[76335]: pgmap v809: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:39 compute-0 ceph-mon[76335]: pgmap v810: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:41 compute-0 ceph-mon[76335]: pgmap v811: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018237140127913424 of space, bias 1.0, pg target 0.5471142038374027 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.611534991534169e-06 of space, bias 4.0, pg target 0.0031338419898410026 quantized to 16 (current 16)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:12:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
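The pg_autoscaler output above is internally consistent: with 3 OSDs and (assuming the default, which is not shown in the log) mon_target_pg_per_osd = 100, the cluster-wide PG budget is 300, and every logged "pg target" equals usage × bias × 300 before quantization, e.g. 0.0018237 × 1.0 × 300 ≈ 0.547 for 'images' and 2.6115e-06 × 4.0 × 300 ≈ 0.00313 for 'cephfs.cephfs.meta'. Recomputing a few of the logged values:

    osds = 3
    target_pg_per_osd = 100  # mon_target_pg_per_osd default (assumed)
    budget = osds * target_pg_per_osd  # 300

    pools = {  # (usage fraction, bias) copied from the lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0018237140127913424, 1.0),
        "cephfs.cephfs.meta": (2.611534991534169e-06, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        # Matches each logged "pg target" before quantization:
        print(name, usage * bias * budget)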
Feb 25 12:12:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 9.6 MiB/s wr, 18 op/s
Feb 25 12:12:43 compute-0 ceph-mon[76335]: pgmap v812: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 9.6 MiB/s wr, 18 op/s
Feb 25 12:12:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 8.5 MiB/s wr, 16 op/s
Feb 25 12:12:45 compute-0 ceph-mon[76335]: pgmap v813: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 8.5 MiB/s wr, 16 op/s
Feb 25 12:12:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 8.0 MiB/s wr, 15 op/s
Feb 25 12:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2648371590' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2648371590' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:12:47 compute-0 ceph-mon[76335]: pgmap v814: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 8.0 MiB/s wr, 15 op/s
Feb 25 12:12:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2648371590' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:12:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2648371590' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:12:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:49 compute-0 ceph-mon[76335]: pgmap v815: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:51 compute-0 ceph-mon[76335]: pgmap v816: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:53 compute-0 ceph-mon[76335]: pgmap v817: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:53 compute-0 nova_compute[244014]: 2026-02-25 12:12:53.538 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:53 compute-0 nova_compute[244014]: 2026-02-25 12:12:53.538 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:12:53 compute-0 nova_compute[244014]: 2026-02-25 12:12:53.539 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:12:53 compute-0 nova_compute[244014]: 2026-02-25 12:12:53.555 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:12:53 compute-0 nova_compute[244014]: 2026-02-25 12:12:53.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:54 compute-0 nova_compute[244014]: 2026-02-25 12:12:54.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:54 compute-0 nova_compute[244014]: 2026-02-25 12:12:54.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:54 compute-0 nova_compute[244014]: 2026-02-25 12:12:54.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:54 compute-0 nova_compute[244014]: 2026-02-25 12:12:54.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:54 compute-0 nova_compute[244014]: 2026-02-25 12:12:54.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:12:54.995 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:12:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:12:54.996 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:12:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:12:54.996 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
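The Acquiring/acquired/released triplet above (and the compute_resources triplets from nova_compute below) is the standard trace of oslo.concurrency's lock wrapper: the inner() function in lockutils.py cited in each line logs those three messages around the decorated call. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named in-process lock held; the surrounding
        # "Acquiring lock ... acquired ... released" DEBUG lines are
        # emitted by the inner() wrapper in lockutils.py.
        pass

    check_child_processes()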
Feb 25 12:12:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:55 compute-0 ceph-mon[76335]: pgmap v818: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:57 compute-0 ceph-mon[76335]: pgmap v819: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:57 compute-0 nova_compute[244014]: 2026-02-25 12:12:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:57 compute-0 nova_compute[244014]: 2026-02-25 12:12:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:57 compute-0 nova_compute[244014]: 2026-02-25 12:12:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:12:58 compute-0 nova_compute[244014]: 2026-02-25 12:12:58.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:12:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:12:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:12:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427908589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.532 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:12:59 compute-0 ceph-mon[76335]: pgmap v820: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.691 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.692 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.692 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.693 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
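The acquire/acquired pair above is oslo.concurrency's lock pattern: nova serializes every resource-tracker update behind the "compute_resources" semaphore. A sketch of the idiom, assuming only oslo.concurrency's public API rather than nova's actual code:

    from oslo_concurrency import lockutils

    # nova builds its decorator with a service prefix, roughly:
    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def _update_available_resource():
        # Inventory is recomputed while the lock is held; the release
        # line further down reports it was held for 0.782s.
        pass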
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.868 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.869 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:12:59 compute-0 nova_compute[244014]: 2026-02-25 12:12:59.890 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:13:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:13:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389144477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:13:00 compute-0 nova_compute[244014]: 2026-02-25 12:13:00.443 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:13:00 compute-0 nova_compute[244014]: 2026-02-25 12:13:00.449 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:13:00 compute-0 nova_compute[244014]: 2026-02-25 12:13:00.471 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:13:00 compute-0 nova_compute[244014]: 2026-02-25 12:13:00.474 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:13:00 compute-0 nova_compute[244014]: 2026-02-25 12:13:00.474 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
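The allocation_ratio fields in the inventory record above determine what placement will actually let the scheduler consume: capacity is (total - reserved) * allocation_ratio per resource class. Worked out for this host:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1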
Feb 25 12:13:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3427908589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:13:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2389144477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:01 compute-0 ceph-mon[76335]: pgmap v821: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:03 compute-0 ceph-mon[76335]: pgmap v822: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
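_set_new_cache_sizes is the mon's periodic rebalancing of its memory budget between the inc/full osdmap caches and the rocksdb cache; the split heuristic is internal to ceph-mon, but the logged byte values convert to:

    for name, nbytes in [("cache_size", 1020054731),
                         ("inc/full_alloc", 348127232),
                         ("kv_alloc", 322961408)]:
        print(name, round(nbytes / 2**20), "MiB")
    # cache_size ~973 MiB, inc/full ~332 MiB, kv ~308 MiB

The identical line recurs every ~5 seconds below because the tuner runs on a timer and its inputs are not changing.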
Feb 25 12:13:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:05 compute-0 ceph-mon[76335]: pgmap v823: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:07 compute-0 ceph-mon[76335]: pgmap v824: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:07 compute-0 podman[248620]: 2026-02-25 12:13:07.731659945 +0000 UTC m=+0.074472946 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:13:07 compute-0 podman[248621]: 2026-02-25 12:13:07.772059225 +0000 UTC m=+0.111885862 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
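Both health_status events come from podman's healthcheck timer running the mounted /openstack/healthcheck script named in config_data. The same status can be read back ad hoc; the Go template path here is an assumption (the field layout has moved between podman releases):

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        status = subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name], text=True).strip()
        print(name, status)  # expect "healthy", matching the events above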
Feb 25 12:13:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:09 compute-0 ceph-mon[76335]: pgmap v825: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:11 compute-0 ceph-mon[76335]: pgmap v826: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:13 compute-0 ceph-mon[76335]: pgmap v827: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:15 compute-0 ceph-mon[76335]: pgmap v828: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:17 compute-0 ceph-mon[76335]: pgmap v829: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:19 compute-0 ceph-mon[76335]: pgmap v830: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:21 compute-0 ceph-mon[76335]: pgmap v831: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:23 compute-0 ceph-mon[76335]: pgmap v832: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:24 compute-0 sshd-session[248663]: Invalid user grafana from 80.94.92.186 port 40860
Feb 25 12:13:24 compute-0 sshd-session[248663]: Connection closed by invalid user grafana 80.94.92.186 port 40860 [preauth]
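These two sshd-session lines are a routine internet brute-force probe against a guessable service account, closed before authentication. A quick triage sketch; the journal unit name and message format are assumptions about this host's OpenSSH:

    import re, subprocess
    from collections import Counter

    log = subprocess.check_output(
        ["journalctl", "-u", "sshd", "--no-pager", "-o", "cat"], text=True)
    probes = Counter(re.findall(r"Invalid user (\S+) from (\S+)", log))
    print(probes.most_common(5))  # e.g. (('grafana', '80.94.92.186'), 1)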
Feb 25 12:13:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:25 compute-0 ceph-mon[76335]: pgmap v833: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:27 compute-0 ceph-mon[76335]: pgmap v834: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:29 compute-0 ceph-mon[76335]: pgmap v835: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:13:30
Feb 25 12:13:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:13:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:13:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root']
Feb 25 12:13:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
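"max misplaced 0.050000" caps how much data the balancer may put in motion per plan, and "prepared 0/10 upmap changes" means the optimizer found nothing to improve across the listed pools. For this cluster the cap works out to:

    pgs, max_misplaced = 305, 0.05
    print(int(pgs * max_misplaced))  # at most ~15 PGs misplaced at once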
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:13:31 compute-0 ceph-mon[76335]: pgmap v836: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:13:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:13:32 compute-0 sudo[248665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:13:32 compute-0 sudo[248665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:32 compute-0 sudo[248665]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:32 compute-0 sudo[248690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:13:32 compute-0 sudo[248690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:32 compute-0 sudo[248690]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:13:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:13:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:13:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:13:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:13:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:13:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:13:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:13:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:13:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:13:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:13:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:13:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:13:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:13:33 compute-0 sudo[248747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:13:33 compute-0 sudo[248747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:33 compute-0 sudo[248747]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:33 compute-0 sudo[248772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:13:33 compute-0 sudo[248772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
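The sudo line above is cephadm re-applying the default_drive_group OSD spec: it wraps ceph-volume's batch mode around the three pre-built LVs. A sketch of the wrapped call with --report substituted so it previews rather than creates (flags per ceph-volume's documented batch mode; normally this runs inside the ceph container, not on the host):

    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    subprocess.run(["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
                    "--objectstore", "bluestore", "--yes", "--no-systemd",
                    "--report"],  # preview only; the real run omits --report
                   check=True)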
Feb 25 12:13:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.525901832 +0000 UTC m=+0.051100158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.68346317 +0000 UTC m=+0.208661446 container create b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:13:33 compute-0 systemd[1]: Started libpod-conmon-b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69.scope.
Feb 25 12:13:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.841826862 +0000 UTC m=+0.367025128 container init b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.850867036 +0000 UTC m=+0.376065302 container start b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:13:33 compute-0 happy_rubin[248827]: 167 167
Feb 25 12:13:33 compute-0 systemd[1]: libpod-b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69.scope: Deactivated successfully.
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.942433939 +0000 UTC m=+0.467632215 container attach b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:33 compute-0 podman[248809]: 2026-02-25 12:13:33.94352937 +0000 UTC m=+0.468727626 container died b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:13:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-62b07f21db2e6d698ce2abc0ee56f6202b07abd10a5dd68bb6747ae3a39a727e-merged.mount: Deactivated successfully.
Feb 25 12:13:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:13:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:13:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:13:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:13:34 compute-0 ceph-mon[76335]: pgmap v837: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:34 compute-0 podman[248809]: 2026-02-25 12:13:34.40387671 +0000 UTC m=+0.929074986 container remove b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_rubin, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:13:34 compute-0 systemd[1]: libpod-conmon-b2866336417123feff05f4c58c131d27a27057e479161bea1bfc21752a9fae69.scope: Deactivated successfully.
Feb 25 12:13:34 compute-0 podman[248854]: 2026-02-25 12:13:34.580971548 +0000 UTC m=+0.035536570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:34 compute-0 podman[248854]: 2026-02-25 12:13:34.682940234 +0000 UTC m=+0.137505196 container create c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:13:34 compute-0 systemd[1]: Started libpod-conmon-c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3.scope.
Feb 25 12:13:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:34 compute-0 podman[248854]: 2026-02-25 12:13:34.947666185 +0000 UTC m=+0.402231157 container init c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:13:34 compute-0 podman[248854]: 2026-02-25 12:13:34.95745653 +0000 UTC m=+0.412021492 container start c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:13:35 compute-0 podman[248854]: 2026-02-25 12:13:35.085610343 +0000 UTC m=+0.540175305 container attach c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:13:35 compute-0 relaxed_bohr[248870]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:13:35 compute-0 relaxed_bohr[248870]: --> All data devices are unavailable
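"All data devices are unavailable" is the expected outcome here, not a failure: each LV already carries ceph.osd_id tags (the lvm list output further down shows OSDs 0-2), so batch declines to reuse them and the re-apply is a no-op. A tag-check sketch using LVM2's standard JSON reporting:

    import json, subprocess

    out = subprocess.check_output(
        ["lvs", "-o", "lv_path,lv_tags", "--reportformat", "json"], text=True)
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["lv_path"], "already provisioned as an OSD")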
Feb 25 12:13:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:35 compute-0 systemd[1]: libpod-c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3.scope: Deactivated successfully.
Feb 25 12:13:35 compute-0 podman[248891]: 2026-02-25 12:13:35.489598918 +0000 UTC m=+0.051026435 container died c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:35 compute-0 ceph-mon[76335]: pgmap v838: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebbd88235f3b3bbf1b3c84557d46cd36815966dc58562d622095d0df5ff88671-merged.mount: Deactivated successfully.
Feb 25 12:13:36 compute-0 podman[248891]: 2026-02-25 12:13:36.076063612 +0000 UTC m=+0.637491079 container remove c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:36 compute-0 systemd[1]: libpod-conmon-c9e4c5561ea6a44c64ddb2c66a1e9fdb125d11b8bbef63b5151594e457ba52f3.scope: Deactivated successfully.
Feb 25 12:13:36 compute-0 sudo[248772]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:36 compute-0 sudo[248907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:13:36 compute-0 sudo[248907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:36 compute-0 sudo[248907]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:36 compute-0 sudo[248932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:13:36 compute-0 sudo[248932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
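This second cephadm call inventories the existing OSDs: ceph-volume lvm list --format json emits a map of osd_id to device records, printed by the practical_brattain container below. A parsing sketch over a captured copy of that output (the filename is hypothetical):

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the JSON below
        report = json.load(f)
    for osd_id, entries in report.items():
        for e in entries:
            print(osd_id, e["lv_path"], e["tags"]["ceph.osd_fsid"])
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 d19afe3c-7923-4776-bcc2-88886150b441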
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.639769826 +0000 UTC m=+0.124709176 container create fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.550125777 +0000 UTC m=+0.035065137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:36 compute-0 systemd[1]: Started libpod-conmon-fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112.scope.
Feb 25 12:13:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.790963456 +0000 UTC m=+0.275902796 container init fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.80000681 +0000 UTC m=+0.284946160 container start fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:13:36 compute-0 lucid_swanson[248985]: 167 167
Feb 25 12:13:36 compute-0 systemd[1]: libpod-fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112.scope: Deactivated successfully.
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.843485803 +0000 UTC m=+0.328425123 container attach fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:13:36 compute-0 podman[248969]: 2026-02-25 12:13:36.844145801 +0000 UTC m=+0.329085111 container died fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6b9e7347c0630c90b9d3f0c3b7bb5311801e98dab2b32afb3298fca5488098-merged.mount: Deactivated successfully.
Feb 25 12:13:37 compute-0 podman[248969]: 2026-02-25 12:13:37.103995835 +0000 UTC m=+0.588935185 container remove fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:13:37 compute-0 systemd[1]: libpod-conmon-fd643881c544352ad58ec412022ebc690d88303791a5b163bee8d0fa9c2ab112.scope: Deactivated successfully.
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.35784897 +0000 UTC m=+0.116096874 container create b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.267068629 +0000 UTC m=+0.025316583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:37 compute-0 systemd[1]: Started libpod-conmon-b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc.scope.
Feb 25 12:13:37 compute-0 ceph-mon[76335]: pgmap v839: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb74f7cb042e4fddfbf683845563120cb7311dfd955a1c2cb38f9ce789ac5fea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb74f7cb042e4fddfbf683845563120cb7311dfd955a1c2cb38f9ce789ac5fea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb74f7cb042e4fddfbf683845563120cb7311dfd955a1c2cb38f9ce789ac5fea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb74f7cb042e4fddfbf683845563120cb7311dfd955a1c2cb38f9ce789ac5fea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.634291701 +0000 UTC m=+0.392539645 container init b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.643983213 +0000 UTC m=+0.402231107 container start b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.654106668 +0000 UTC m=+0.412354572 container attach b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:13:37 compute-0 practical_brattain[249026]: {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     "0": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "devices": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "/dev/loop3"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             ],
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_name": "ceph_lv0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_size": "21470642176",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "name": "ceph_lv0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "tags": {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_name": "ceph",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.crush_device_class": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.encrypted": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.objectstore": "bluestore",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_id": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.vdo": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.with_tpm": "0"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             },
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "vg_name": "ceph_vg0"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         }
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     ],
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     "1": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "devices": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "/dev/loop4"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             ],
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_name": "ceph_lv1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_size": "21470642176",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "name": "ceph_lv1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "tags": {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_name": "ceph",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.crush_device_class": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.encrypted": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.objectstore": "bluestore",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_id": "1",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.vdo": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.with_tpm": "0"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             },
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "vg_name": "ceph_vg1"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         }
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     ],
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     "2": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "devices": [
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "/dev/loop5"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             ],
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_name": "ceph_lv2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_size": "21470642176",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "name": "ceph_lv2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "tags": {
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.cluster_name": "ceph",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.crush_device_class": "",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.encrypted": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.objectstore": "bluestore",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osd_id": "2",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.vdo": "0",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:                 "ceph.with_tpm": "0"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             },
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "type": "block",
Feb 25 12:13:37 compute-0 practical_brattain[249026]:             "vg_name": "ceph_vg2"
Feb 25 12:13:37 compute-0 practical_brattain[249026]:         }
Feb 25 12:13:37 compute-0 practical_brattain[249026]:     ]
Feb 25 12:13:37 compute-0 practical_brattain[249026]: }
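The JSON block above is the output of `ceph-volume lvm list --format json`, captured from the practical_brattain container: a map of OSD id ("0", "1", "2") to the logical volumes backing that OSD, with the same ceph.* metadata carried twice, once as the flat lv_tags string and once as the parsed tags object. A minimal sketch of consuming such a report, assuming it has been saved to a file (lvm_list.json is a hypothetical path; the log shows the same JSON inline):

    import json

    # Parse a report produced by: ceph-volume lvm list --format json
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Top level maps OSD id -> list of logical volumes backing that OSD.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]  # parsed form of the flat "lv_tags" string
            print(
                f"osd.{osd_id}: {lv['lv_path']} "
                f"({int(lv['lv_size']) / 2**30:.1f} GiB) "
                f"on {','.join(lv['devices'])} "
                f"objectstore={tags['ceph.objectstore']} "
                f"osd_fsid={tags['ceph.osd_fsid']}"
            )

Run against the report above this prints three ~20 GiB bluestore LVs, one per OSD, each on a single loop device.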
Feb 25 12:13:37 compute-0 systemd[1]: libpod-b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc.scope: Deactivated successfully.
Feb 25 12:13:37 compute-0 podman[249010]: 2026-02-25 12:13:37.959256175 +0000 UTC m=+0.717504079 container died b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb74f7cb042e4fddfbf683845563120cb7311dfd955a1c2cb38f9ce789ac5fea-merged.mount: Deactivated successfully.
Feb 25 12:13:38 compute-0 podman[249010]: 2026-02-25 12:13:38.17714561 +0000 UTC m=+0.935393514 container remove b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:13:38 compute-0 systemd[1]: libpod-conmon-b7ea3f9eeaffec2c1d91f1ae8500c3e2a67cc43515959274a02d1e71625755cc.scope: Deactivated successfully.
Feb 25 12:13:38 compute-0 podman[249035]: 2026-02-25 12:13:38.228919935 +0000 UTC m=+0.237572629 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:13:38 compute-0 sudo[248932]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:38 compute-0 podman[249041]: 2026-02-25 12:13:38.281369489 +0000 UTC m=+0.284985132 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:13:38 compute-0 sudo[249089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:13:38 compute-0 sudo[249089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:38 compute-0 sudo[249089]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:38 compute-0 sudo[249115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:13:38 compute-0 sudo[249115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.664488008 +0000 UTC m=+0.053522855 container create 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:13:38 compute-0 systemd[1]: Started libpod-conmon-116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c.scope.
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.634800003 +0000 UTC m=+0.023834950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.752346107 +0000 UTC m=+0.141381024 container init 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.761312489 +0000 UTC m=+0.150347366 container start 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:13:38 compute-0 hungry_mestorf[249171]: 167 167
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.767555715 +0000 UTC m=+0.156590592 container attach 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:13:38 compute-0 systemd[1]: libpod-116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c.scope: Deactivated successfully.
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.76879001 +0000 UTC m=+0.157824887 container died 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dfebfa4371e097912292c55b31c2e354be25143a221f3318f3aafd527d490f8-merged.mount: Deactivated successfully.
Feb 25 12:13:38 compute-0 podman[249154]: 2026-02-25 12:13:38.844075636 +0000 UTC m=+0.233110513 container remove 116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mestorf, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:13:38 compute-0 systemd[1]: libpod-conmon-116a294d016d9207cfc91310ec35439c84cebc99da099b77b00710f3b1c6ed3c.scope: Deactivated successfully.
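The hungry_mestorf lifecycle above (create, init, start, attach, died, remove, all within roughly 100 ms) is cephadm's usual probe pattern: each check runs in a throwaway container from the pinned image digest. The single line it printed, "167 167", looks like cephadm's uid/gid probe of the ceph user inside the image. A rough equivalent as a sketch (flags trimmed to the essentials; cephadm mounts far more, and the stat target is an assumption based on the "167 167" output):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm yields the same create/start/died/remove sequence seen in the journal.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(uid_gid)  # expected "167 167", as logged by hungry_mestorf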
Feb 25 12:13:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.040928929 +0000 UTC m=+0.061479569 container create d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:13:39 compute-0 systemd[1]: Started libpod-conmon-d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269.scope.
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.013961961 +0000 UTC m=+0.034512641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:13:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e475c7ec4a0eb6d06d66f72775293291b9ce8194948d5929783443a6a7127dcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e475c7ec4a0eb6d06d66f72775293291b9ce8194948d5929783443a6a7127dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e475c7ec4a0eb6d06d66f72775293291b9ce8194948d5929783443a6a7127dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e475c7ec4a0eb6d06d66f72775293291b9ce8194948d5929783443a6a7127dcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.133940893 +0000 UTC m=+0.154491583 container init d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.143214434 +0000 UTC m=+0.163765064 container start d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.15055166 +0000 UTC m=+0.171102300 container attach d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:13:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:39 compute-0 ceph-mon[76335]: pgmap v840: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:39 compute-0 lvm[249292]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:13:39 compute-0 lvm[249292]: VG ceph_vg0 finished
Feb 25 12:13:39 compute-0 lvm[249293]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:13:39 compute-0 lvm[249293]: VG ceph_vg1 finished
Feb 25 12:13:39 compute-0 lvm[249295]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:13:39 compute-0 lvm[249295]: VG ceph_vg2 finished
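The pvscan activations above ("PV /dev/loopN online, VG ceph_vgN is complete") confirm each OSD volume group sits on exactly one loop device, matching the "devices" arrays in the JSON report. The same ceph.* tags can be read straight from lvm2 with `lvs --reportformat json`; a sketch, assuming the lvm2 JSON report layout ("report"/"lv" keys) and root privileges:

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            # lv_tags is the same flat key=value list shown in the log above
            tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(","))
            print(lv["vg_name"], lv["lv_name"], "-> osd." + tags["ceph.osd_id"])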
Feb 25 12:13:39 compute-0 admiring_nobel[249214]: {}
Feb 25 12:13:39 compute-0 systemd[1]: libpod-d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269.scope: Deactivated successfully.
Feb 25 12:13:39 compute-0 podman[249197]: 2026-02-25 12:13:39.97775368 +0000 UTC m=+0.998304320 container died d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 12:13:39 compute-0 systemd[1]: libpod-d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269.scope: Consumed 1.239s CPU time.
Feb 25 12:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e475c7ec4a0eb6d06d66f72775293291b9ce8194948d5929783443a6a7127dcb-merged.mount: Deactivated successfully.
Feb 25 12:13:40 compute-0 podman[249197]: 2026-02-25 12:13:40.214335861 +0000 UTC m=+1.234886501 container remove d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:13:40 compute-0 systemd[1]: libpod-conmon-d37902c02414e2a8ab9771e2941d4fe638d327a51c83984fd19e1bcbcbc03269.scope: Deactivated successfully.
Feb 25 12:13:40 compute-0 sudo[249115]: pam_unix(sudo:session): session closed for user root
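For contrast with the LVM report, the second cephadm call (sudo[249115] above) runs `ceph-volume ... raw list --format json`, and its container (admiring_nobel) printed only `{}`: no raw-mode OSDs exist on this host, consistent with all three OSDs being LVM-backed. A sketch of driving the same probe from Python, with the fsid taken from the logged command line (here 'cephadm' is assumed to be on PATH and run as root, whereas the log shows the mgr invoking a copied wrapper under /var/lib/ceph/<fsid>/):

    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"  # from the logged command line

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    raw_osds = json.loads(out)
    print(raw_osds or "no raw-mode OSDs on this host")  # the log shows {}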
Feb 25 12:13:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:13:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:13:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:13:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
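The two mon_command lines above show the cephadm mgr module persisting the freshly gathered inventory into the mon config-key store, under per-host keys such as mgr/cephadm/host.compute-0.devices.0. The stored blob can be read back with the standard `ceph config-key get` command; a sketch, with the key name copied from the logged command:

    import subprocess

    # Key name as logged by mon.compute-0 in the config-key set command above.
    KEY = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        check=True, capture_output=True, text=True,
    ).stdout
    print(blob[:200], "...")  # JSON payload describing this host's devices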
Feb 25 12:13:40 compute-0 sudo[249311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:13:40 compute-0 sudo[249311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:13:40 compute-0 sudo[249311]: pam_unix(sudo:session): session closed for user root
Feb 25 12:13:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:13:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018237140127913424 of space, bias 1.0, pg target 0.5471142038374027 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.611534991534169e-06 of space, bias 4.0, pg target 0.0031338419898410026 quantized to 16 (current 16)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:13:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
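The pg_autoscaler lines above are reproducible from the numbers they print: pg target = capacity_ratio x bias x (mon_target_pg_per_osd x OSD count), with the Ceph default mon_target_pg_per_osd of 100 and the 3 OSDs on this host giving a factor of 300 (the 64411926528 in the effective_target_ratio lines is the ~60 GiB raw capacity). The result is then quantized to a power of two, subject to per-pool minimums and the autoscaler's reluctance to shrink, which is why 32 stays 32 here. A worked check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    MON_TARGET_PG_PER_OSD = 100  # Ceph default; not printed in the log
    NUM_OSDS = 3                 # osd.0-2 on this host, per the LVM report above

    def pg_target(capacity_ratio, bias):
        # Matches the "pg target" figure in the pg_autoscaler lines.
        return capacity_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS

    # '.mgr': using 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # -> 0.0021557249951162337
    # 'cephfs.cephfs.meta': using 2.611534991534169e-06 of space, bias 4.0
    print(pg_target(2.611534991534169e-06, 4.0))  # ~0.00313384..., as logged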
Feb 25 12:13:42 compute-0 ceph-mon[76335]: pgmap v841: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:43 compute-0 ceph-mon[76335]: pgmap v842: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:45 compute-0 ceph-mon[76335]: pgmap v843: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627558951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627558951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:13:47 compute-0 ceph-mon[76335]: pgmap v844: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2627558951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:13:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2627558951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:13:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:49 compute-0 ceph-mon[76335]: pgmap v845: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:50 compute-0 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.000139 took=0.000019s
Feb 25 12:13:50 compute-0 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000058s
Feb 25 12:13:50 compute-0 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.000132 took=0.000030s
Feb 25 12:13:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:51 compute-0 ceph-mon[76335]: pgmap v846: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:53 compute-0 nova_compute[244014]: 2026-02-25 12:13:53.475 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:53 compute-0 nova_compute[244014]: 2026-02-25 12:13:53.476 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:13:53 compute-0 nova_compute[244014]: 2026-02-25 12:13:53.477 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:13:53 compute-0 ceph-mon[76335]: pgmap v847: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:53 compute-0 nova_compute[244014]: 2026-02-25 12:13:53.496 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:13:53 compute-0 nova_compute[244014]: 2026-02-25 12:13:53.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:54 compute-0 nova_compute[244014]: 2026-02-25 12:13:54.870 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:13:54.997 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:13:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:13:54.997 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:13:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:13:54.997 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:13:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:55 compute-0 ceph-mon[76335]: pgmap v848: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:55 compute-0 nova_compute[244014]: 2026-02-25 12:13:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:56 compute-0 nova_compute[244014]: 2026-02-25 12:13:56.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:56 compute-0 nova_compute[244014]: 2026-02-25 12:13:56.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:57 compute-0 ceph-mon[76335]: pgmap v849: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:57 compute-0 nova_compute[244014]: 2026-02-25 12:13:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:57 compute-0 nova_compute[244014]: 2026-02-25 12:13:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:13:58 compute-0 nova_compute[244014]: 2026-02-25 12:13:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:13:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:59 compute-0 ceph-mon[76335]: pgmap v850: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:13:59 compute-0 nova_compute[244014]: 2026-02-25 12:13:59.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:14:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096815815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.494 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:14:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4096815815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
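Each resource-audit pass shells out to the ceph CLI exactly as logged (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`), and the mon's audit channel records the matching dispatch from client.openstack. The call is plain subprocess work; a sketch that mirrors it, assuming the same client keyring is readable and the usual `ceph df` JSON layout (a top-level "stats" object):

    import json
    import subprocess

    CMD = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    # "248 MiB used, 60 GiB / 60 GiB avail" in the pgmap lines corresponds to
    # total_used_bytes / total_bytes here.
    print(stats["total_used_bytes"], "/", stats["total_bytes"], "bytes used")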
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.702 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.704 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5090MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.705 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.705 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.805 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.805 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:14:00 compute-0 nova_compute[244014]: 2026-02-25 12:14:00.841 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:14:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819153540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:14:01 compute-0 nova_compute[244014]: 2026-02-25 12:14:01.485 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:14:01 compute-0 nova_compute[244014]: 2026-02-25 12:14:01.494 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:14:01 compute-0 nova_compute[244014]: 2026-02-25 12:14:01.516 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:14:01 compute-0 nova_compute[244014]: 2026-02-25 12:14:01.519 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:14:01 compute-0 nova_compute[244014]: 2026-02-25 12:14:01.520 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
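The inventory nova reports to placement at 12:14:01.516 translates into schedulable capacity as (total - reserved) x allocation_ratio: 8 x 4.0 = 32 VCPU, (7679 - 512) x 1.0 = 7167 MB of RAM, and 59 x 0.9 = 53.1 GB of disk. A one-line check on the logged inventory dict (trimmed to the fields the formula uses):

    # Inventory values exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1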
Feb 25 12:14:01 compute-0 ceph-mon[76335]: pgmap v851: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1819153540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:03 compute-0 ceph-mon[76335]: pgmap v852: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:14:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:05 compute-0 ceph-mon[76335]: pgmap v853: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:07 compute-0 ceph-mon[76335]: pgmap v854: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:08 compute-0 podman[249380]: 2026-02-25 12:14:08.765556352 +0000 UTC m=+0.101972688 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:14:08 compute-0 podman[249381]: 2026-02-25 12:14:08.815327291 +0000 UTC m=+0.147496547 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 12:14:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:14:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:09 compute-0 ceph-mon[76335]: pgmap v855: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:11 compute-0 ceph-mon[76335]: pgmap v856: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:13 compute-0 ceph-mon[76335]: pgmap v857: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:14:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:15 compute-0 ceph-mon[76335]: pgmap v858: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:17 compute-0 ceph-mon[76335]: pgmap v859: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:14:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:19 compute-0 ceph-mon[76335]: pgmap v860: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:21 compute-0 ceph-mon[76335]: pgmap v861: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.615976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661616025, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2098, "num_deletes": 253, "total_data_size": 3495059, "memory_usage": 3552720, "flush_reason": "Manual Compaction"}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661663947, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3427552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16315, "largest_seqno": 18412, "table_properties": {"data_size": 3417954, "index_size": 6092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19084, "raw_average_key_size": 20, "raw_value_size": 3398839, "raw_average_value_size": 3570, "num_data_blocks": 274, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021443, "oldest_key_time": 1772021443, "file_creation_time": 1772021661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 48061 microseconds, and 9944 cpu microseconds.
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.664033) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3427552 bytes OK
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.664067) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.671300) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.671345) EVENT_LOG_v1 {"time_micros": 1772021661671331, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.671377) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3486264, prev total WAL file size 3486264, number of live WAL files 2.
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.672542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3347KB)], [38(7503KB)]
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661672624, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11111541, "oldest_snapshot_seqno": -1}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4435 keys, 9327796 bytes, temperature: kUnknown
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661742672, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9327796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9294645, "index_size": 20945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107101, "raw_average_key_size": 24, "raw_value_size": 9211113, "raw_average_value_size": 2076, "num_data_blocks": 889, "num_entries": 4435, "num_filter_entries": 4435, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.743030) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9327796 bytes
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.757646) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4955, records dropped: 520 output_compression: NoCompression
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.757681) EVENT_LOG_v1 {"time_micros": 1772021661757665, "job": 18, "event": "compaction_finished", "compaction_time_micros": 70227, "compaction_time_cpu_micros": 30853, "output_level": 6, "num_output_files": 1, "total_output_size": 9327796, "num_input_records": 4955, "num_output_records": 4435, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661758599, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021661759984, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.672409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.760178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.760188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.760195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.760198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:14:21.760201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:14:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:23 compute-0 ceph-mon[76335]: pgmap v862: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 12:14:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb 25 12:14:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb 25 12:14:24 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb 25 12:14:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:25 compute-0 ceph-mon[76335]: osdmap e126: 3 total, 3 up, 3 in
Feb 25 12:14:25 compute-0 ceph-mon[76335]: pgmap v864: 305 pgs: 305 active+clean; 112 MiB data, 248 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb 25 12:14:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb 25 12:14:26 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb 25 12:14:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Feb 25 12:14:27 compute-0 ceph-mon[76335]: osdmap e127: 3 total, 3 up, 3 in
Feb 25 12:14:27 compute-0 ceph-mon[76335]: pgmap v866: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Feb 25 12:14:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Feb 25 12:14:29 compute-0 ceph-mon[76335]: pgmap v867: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s
Feb 25 12:14:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:14:30
Feb 25 12:14:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:14:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:14:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'vms', 'default.rgw.log']
Feb 25 12:14:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 25 12:14:31 compute-0 ceph-mon[76335]: pgmap v868: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:14:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:14:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Feb 25 12:14:33 compute-0 ceph-mon[76335]: pgmap v869: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Feb 25 12:14:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Feb 25 12:14:35 compute-0 ceph-mon[76335]: pgmap v870: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Feb 25 12:14:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.9 MiB/s wr, 21 op/s
Feb 25 12:14:37 compute-0 ceph-mon[76335]: pgmap v871: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.9 MiB/s wr, 21 op/s
Feb 25 12:14:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 12:14:39 compute-0 ceph-mon[76335]: pgmap v872: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 12:14:39 compute-0 podman[249424]: 2026-02-25 12:14:39.717900117 +0000 UTC m=+0.060272325 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:14:39 compute-0 podman[249425]: 2026-02-25 12:14:39.749486765 +0000 UTC m=+0.084342752 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 12:14:40 compute-0 sudo[249467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:14:40 compute-0 sudo[249467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:40 compute-0 sudo[249467]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:40 compute-0 sudo[249492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:14:40 compute-0 sudo[249492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:41 compute-0 sudo[249492]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:14:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:14:41 compute-0 sudo[249548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:14:41 compute-0 sudo[249548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:41 compute-0 sudo[249548]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:41 compute-0 sudo[249573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:14:41 compute-0 sudo[249573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.556744282 +0000 UTC m=+0.058722741 container create d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:14:41 compute-0 systemd[1]: Started libpod-conmon-d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342.scope.
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.53067412 +0000 UTC m=+0.032652539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.649484819 +0000 UTC m=+0.151463248 container init d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.656852686 +0000 UTC m=+0.158831105 container start d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:14:41 compute-0 zen_jennings[249627]: 167 167
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.663435391 +0000 UTC m=+0.165413810 container attach d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:14:41 compute-0 systemd[1]: libpod-d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342.scope: Deactivated successfully.
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.664859781 +0000 UTC m=+0.166838240 container died d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6037528b1c2f8bfc650dd17936202f55a56dfe3660973e773e1a491d1cc44f-merged.mount: Deactivated successfully.
Feb 25 12:14:41 compute-0 podman[249610]: 2026-02-25 12:14:41.742434092 +0000 UTC m=+0.244412541 container remove d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:14:41 compute-0 systemd[1]: libpod-conmon-d79f98a6b19c38201bea32c3592ab780a7849bdfdbbf8c4aa873db8811942342.scope: Deactivated successfully.
Feb 25 12:14:41 compute-0 podman[249651]: 2026-02-25 12:14:41.943482803 +0000 UTC m=+0.062878188 container create b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024895419939084235 of space, bias 1.0, pg target 0.7468625981725271 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6152454844953773e-06 of space, bias 4.0, pg target 0.003138294581394453 quantized to 16 (current 16)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:14:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:14:41 compute-0 systemd[1]: Started libpod-conmon-b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67.scope.
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:41.915190918 +0000 UTC m=+0.034586363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:42.055407419 +0000 UTC m=+0.174802824 container init b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:42.069761182 +0000 UTC m=+0.189156567 container start b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:42.076811251 +0000 UTC m=+0.196206716 container attach b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:14:42 compute-0 ceph-mon[76335]: pgmap v873: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 12:14:42 compute-0 goofy_lovelace[249668]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:14:42 compute-0 goofy_lovelace[249668]: --> All data devices are unavailable
Feb 25 12:14:42 compute-0 systemd[1]: libpod-b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67.scope: Deactivated successfully.
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:42.54604412 +0000 UTC m=+0.665439475 container died b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6886b7ed457f3a744857daeb99be5f1999d843e35ac77d30670a0527a4dd371d-merged.mount: Deactivated successfully.
Feb 25 12:14:42 compute-0 podman[249651]: 2026-02-25 12:14:42.835873226 +0000 UTC m=+0.955268571 container remove b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lovelace, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:14:42 compute-0 systemd[1]: libpod-conmon-b0f0d1f6a50f2198c4e243e0d46a6ae533447223b1a5d6c6c6a42ff13ec45c67.scope: Deactivated successfully.
Feb 25 12:14:42 compute-0 sudo[249573]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:42 compute-0 sudo[249702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:14:42 compute-0 sudo[249702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:42 compute-0 sudo[249702]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:43 compute-0 sudo[249727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:14:43 compute-0 sudo[249727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.333896805 +0000 UTC m=+0.051931241 container create f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:14:43 compute-0 systemd[1]: Started libpod-conmon-f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45.scope.
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.305278491 +0000 UTC m=+0.023312997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.42799014 +0000 UTC m=+0.146024626 container init f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.436872009 +0000 UTC m=+0.154906465 container start f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:14:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:43 compute-0 exciting_chaplygin[249782]: 167 167
Feb 25 12:14:43 compute-0 systemd[1]: libpod-f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45.scope: Deactivated successfully.
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.444976817 +0000 UTC m=+0.163011283 container attach f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.445972985 +0000 UTC m=+0.164007501 container died f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-77e8653836f8c78578aff4db131ebd7ad40d03fb1195551f0886c5a645765788-merged.mount: Deactivated successfully.
Feb 25 12:14:43 compute-0 ceph-mon[76335]: pgmap v874: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:43 compute-0 podman[249766]: 2026-02-25 12:14:43.5204923 +0000 UTC m=+0.238526746 container remove f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:14:43 compute-0 systemd[1]: libpod-conmon-f6754cfe507e29addb394d9c1b00aab6a2598632ca4bc41d4127ec935ff47d45.scope: Deactivated successfully.
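The short-lived exciting_chaplygin container above exits immediately after printing "167 167", which matches cephadm's uid/gid probe: it runs a throwaway container against the Ceph image to learn which uid/gid owns the ceph data directories, so host-side files get the right ownership. A minimal sketch of such a probe, assuming (as cephadm does for these images) that /var/lib/ceph inside the image is owned by the ceph user:

    import subprocess

    # Image digest copied from the podman log lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Run a throwaway container whose only job is to stat a ceph-owned path;
    # 167:167 is the ceph uid/gid on CentOS Stream based Ceph images.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = out.stdout.split()
    print(uid, gid)  # expected: 167 167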
Feb 25 12:14:43 compute-0 podman[249808]: 2026-02-25 12:14:43.688667117 +0000 UTC m=+0.054468402 container create 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 12:14:43 compute-0 systemd[1]: Started libpod-conmon-36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c.scope.
Feb 25 12:14:43 compute-0 podman[249808]: 2026-02-25 12:14:43.656879444 +0000 UTC m=+0.022680769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3379dfc2e91154a241f7bd31e5d2fbb15f334e6d59c75b8cd2a9642aa8ae9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3379dfc2e91154a241f7bd31e5d2fbb15f334e6d59c75b8cd2a9642aa8ae9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3379dfc2e91154a241f7bd31e5d2fbb15f334e6d59c75b8cd2a9642aa8ae9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3379dfc2e91154a241f7bd31e5d2fbb15f334e6d59c75b8cd2a9642aa8ae9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
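The four xfs warnings above are the kernel noting that these filesystems were created without the bigtime feature, so their inode timestamps stop at the 32-bit time_t limit. The 0x7fffffff it prints converts to the familiar 2038 cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second, the limit
    # the kernel message above refers to.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00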
Feb 25 12:14:43 compute-0 podman[249808]: 2026-02-25 12:14:43.792939588 +0000 UTC m=+0.158740893 container init 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 12:14:43 compute-0 podman[249808]: 2026-02-25 12:14:43.802888508 +0000 UTC m=+0.168689793 container start 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:14:43 compute-0 podman[249808]: 2026-02-25 12:14:43.808637179 +0000 UTC m=+0.174438464 container attach 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:14:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]: {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     "0": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "devices": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "/dev/loop3"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             ],
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_name": "ceph_lv0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_size": "21470642176",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "name": "ceph_lv0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "tags": {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_name": "ceph",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.crush_device_class": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.encrypted": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.objectstore": "bluestore",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_id": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.vdo": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.with_tpm": "0"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             },
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "vg_name": "ceph_vg0"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         }
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     ],
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     "1": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "devices": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "/dev/loop4"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             ],
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_name": "ceph_lv1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_size": "21470642176",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "name": "ceph_lv1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "tags": {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_name": "ceph",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.crush_device_class": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.encrypted": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.objectstore": "bluestore",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_id": "1",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.vdo": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.with_tpm": "0"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             },
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "vg_name": "ceph_vg1"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         }
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     ],
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     "2": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "devices": [
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "/dev/loop5"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             ],
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_name": "ceph_lv2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_size": "21470642176",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "name": "ceph_lv2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "tags": {
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.cluster_name": "ceph",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.crush_device_class": "",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.encrypted": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.objectstore": "bluestore",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osd_id": "2",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.vdo": "0",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:                 "ceph.with_tpm": "0"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             },
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "type": "block",
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:             "vg_name": "ceph_vg2"
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:         }
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]:     ]
Feb 25 12:14:44 compute-0 youthful_hodgkin[249825]: }
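The JSON block that youthful_hodgkin just emitted is `ceph-volume lvm list --format json` output relayed through cephadm: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". A small sketch of consuming that report, assuming it has been saved to a file (the filename is illustrative):

    import json

    def summarize(report: dict) -> None:
        # report maps OSD id ("0", "1", ...) to a list of LV records.
        for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                size_gib = int(lv["lv_size"]) / 1024 ** 3
                print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
                      f"objectstore={tags.get('ceph.objectstore')} "
                      f"devices={','.join(lv.get('devices', []))}")

    with open("ceph-volume-lvm-list.json") as f:
        summarize(json.load(f))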
Feb 25 12:14:44 compute-0 systemd[1]: libpod-36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c.scope: Deactivated successfully.
Feb 25 12:14:44 compute-0 podman[249808]: 2026-02-25 12:14:44.10183987 +0000 UTC m=+0.467641145 container died 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa3379dfc2e91154a241f7bd31e5d2fbb15f334e6d59c75b8cd2a9642aa8ae9a-merged.mount: Deactivated successfully.
Feb 25 12:14:44 compute-0 podman[249808]: 2026-02-25 12:14:44.180246073 +0000 UTC m=+0.546047348 container remove 36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:14:44 compute-0 systemd[1]: libpod-conmon-36057c875a635a95cb2d3a3500e162da0ebf5db04acf50680f8d785bd29b728c.scope: Deactivated successfully.
Feb 25 12:14:44 compute-0 sudo[249727]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:44 compute-0 sudo[249849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:14:44 compute-0 sudo[249849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:44 compute-0 sudo[249849]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:44 compute-0 sudo[249874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:14:44 compute-0 sudo[249874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.704227842 +0000 UTC m=+0.056332925 container create 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:14:44 compute-0 systemd[1]: Started libpod-conmon-180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d.scope.
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.679789365 +0000 UTC m=+0.031894418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.79524161 +0000 UTC m=+0.147346733 container init 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.805768666 +0000 UTC m=+0.157873739 container start 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:14:44 compute-0 gifted_visvesvaraya[249927]: 167 167
Feb 25 12:14:44 compute-0 systemd[1]: libpod-180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d.scope: Deactivated successfully.
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.815058857 +0000 UTC m=+0.167164000 container attach 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.815820188 +0000 UTC m=+0.167925271 container died 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1afcdbe8f726f3bf4a73702e4586f40ed903bce1e645406475dbcb79f48d2e6d-merged.mount: Deactivated successfully.
Feb 25 12:14:44 compute-0 podman[249911]: 2026-02-25 12:14:44.889847849 +0000 UTC m=+0.241952932 container remove 180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:14:44 compute-0 systemd[1]: libpod-conmon-180d6726962557018736f0a8ad89400cfd94bd2827a88850ca6dbebf54e4c11d.scope: Deactivated successfully.
Feb 25 12:14:45 compute-0 podman[249953]: 2026-02-25 12:14:45.086222439 +0000 UTC m=+0.061788608 container create 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:14:45 compute-0 systemd[1]: Started libpod-conmon-84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398.scope.
Feb 25 12:14:45 compute-0 podman[249953]: 2026-02-25 12:14:45.059785336 +0000 UTC m=+0.035351555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:14:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255a09e25776d78a098b0d85a2c89dd115655bce914bf48dc3e3a8be30a0ac1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255a09e25776d78a098b0d85a2c89dd115655bce914bf48dc3e3a8be30a0ac1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255a09e25776d78a098b0d85a2c89dd115655bce914bf48dc3e3a8be30a0ac1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255a09e25776d78a098b0d85a2c89dd115655bce914bf48dc3e3a8be30a0ac1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:14:45 compute-0 podman[249953]: 2026-02-25 12:14:45.212231711 +0000 UTC m=+0.187797890 container init 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:14:45 compute-0 podman[249953]: 2026-02-25 12:14:45.2217959 +0000 UTC m=+0.197362069 container start 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 12:14:45 compute-0 podman[249953]: 2026-02-25 12:14:45.229410804 +0000 UTC m=+0.204976973 container attach 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:14:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:45 compute-0 ceph-mon[76335]: pgmap v875: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:45 compute-0 lvm[250049]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:14:45 compute-0 lvm[250049]: VG ceph_vg1 finished
Feb 25 12:14:45 compute-0 lvm[250048]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:14:45 compute-0 lvm[250048]: VG ceph_vg0 finished
Feb 25 12:14:45 compute-0 lvm[250051]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:14:45 compute-0 lvm[250051]: VG ceph_vg2 finished
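The lvm pvscan messages above are most likely udev event activation reacting to the loop devices being opened and released by the ceph-volume scan; each VG here has exactly one PV, so it is reported complete as soon as that single PV is seen. The same PV-to-VG pairing can be read back with lvm's JSON reporting mode (root required; a sketch, not part of the deployment flow):

    import json
    import subprocess

    # Ask lvm for the PV -> VG pairing in machine-readable form.
    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True)
    for pv in json.loads(out.stdout)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])
    # expected here: /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1, ...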
Feb 25 12:14:46 compute-0 jolly_mclaren[249970]: {}
Feb 25 12:14:46 compute-0 systemd[1]: libpod-84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398.scope: Deactivated successfully.
Feb 25 12:14:46 compute-0 systemd[1]: libpod-84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398.scope: Consumed 1.248s CPU time.
Feb 25 12:14:46 compute-0 podman[249953]: 2026-02-25 12:14:46.08630818 +0000 UTC m=+1.061874349 container died 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-255a09e25776d78a098b0d85a2c89dd115655bce914bf48dc3e3a8be30a0ac1a-merged.mount: Deactivated successfully.
Feb 25 12:14:46 compute-0 podman[249953]: 2026-02-25 12:14:46.153957071 +0000 UTC m=+1.129523230 container remove 84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:14:46 compute-0 systemd[1]: libpod-conmon-84e912d7ce880a22ef2048df96de743084d32f8a15e6083e875af26c93bf8398.scope: Deactivated successfully.
Feb 25 12:14:46 compute-0 sudo[249874]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:14:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:14:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:14:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
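jolly_mclaren was the same cephadm wrapper running `ceph-volume raw list` (the sudo line at 12:14:44 shows the full command); its "{}" output means no raw-mode OSDs exist on this host, consistent with the all-LVM inventory above. The mgr then caches the refreshed per-host device inventory in the mon config-key store, which the two config-key set audit lines record. The cached blob can be read back directly; a sketch, with the key name copied from the audit lines above:

    import subprocess

    # Key name copied from the mon audit log above.
    KEY = "mgr/cephadm/host.compute-0.devices.0"

    out = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # JSON blob holding the cached device inventory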
Feb 25 12:14:46 compute-0 sudo[250066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:14:46 compute-0 sudo[250066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:14:46 compute-0 sudo[250066]: pam_unix(sudo:session): session closed for user root
Feb 25 12:14:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:14:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:14:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4193629684' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4193629684' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:14:48 compute-0 ceph-mon[76335]: pgmap v876: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4193629684' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:14:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4193629684' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:14:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:49 compute-0 ceph-mon[76335]: pgmap v877: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:49.746 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:14:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:49.748 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.222 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.222 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.367 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:14:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:51 compute-0 ceph-mon[76335]: pgmap v878: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.568 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.569 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.578 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.579 244018 INFO nova.compute.claims [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:14:51 compute-0 nova_compute[244014]: 2026-02-25 12:14:51.805 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:14:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146525197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.348 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
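The `ceph df --format=json` subprocess that nova-compute just ran (0.544s) feeds its capacity reporting; the fields of interest are the cluster-wide totals under "stats". A sketch of the same query and the fields a capacity check typically reads (command line copied from the log; field names are the standard `ceph df` JSON layout):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)["stats"]
    print("total:", stats["total_bytes"],
          "avail:", stats["total_avail_bytes"])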
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.357 244018 DEBUG nova.compute.provider_tree [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.416 244018 DEBUG nova.scheduler.client.report [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
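The inventory dict logged above determines what placement will schedule onto this node: capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged numbers:

    # Values copied from the inventory dict logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1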
Feb 25 12:14:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4146525197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.598 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.599 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:14:52 compute-0 nova_compute[244014]: 2026-02-25 12:14:52.942 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.070 244018 INFO nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.164 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:14:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:53 compute-0 ceph-mon[76335]: pgmap v879: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.573 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.574 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.575 244018 INFO nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Creating image(s)
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.605 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.636 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.666 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
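The repeated "rbd image ... does not exist" lines are not errors: rbd_utils probes for the instance disk before deciding how to build it. The probe amounts to an `rbd info` whose non-zero exit means the image is absent; a sketch, where the image name is copied from the log but the "vms" pool name is an assumption (the log does not name the target pool):

    import subprocess

    # Probe for the instance disk; non-zero exit means it does not exist yet.
    r = subprocess.run(
        ["rbd", "info", "vms/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True)
    print("exists" if r.returncode == 0 else "does not exist")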
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.670 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:14:53 compute-0 nova_compute[244014]: 2026-02-25 12:14:53.672 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:14:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:53.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:14:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:54 compute-0 nova_compute[244014]: 2026-02-25 12:14:54.986 244018 DEBUG nova.virt.libvirt.imagebackend [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
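The rbd:// locations above describe the Glance image as fsid/pool/image/snapshot; Nova can COW-clone that snapshot directly only when the image format is raw. Here the image turns out to be qcow2 (see the 12:14:57 "converting to raw" line below), so the direct clone is rejected and Nova falls back to download-and-convert. Parsing the location string into its parts (URL copied from the log):

    from urllib.parse import urlparse

    url = ("rbd://8ac33163-6221-5d58-9a39-8b6933fe7762"
           "/images/c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6/snap")
    parts = urlparse(url)
    fsid = parts.netloc
    pool, image, snapshot = parts.path.strip("/").split("/")
    print(fsid, pool, image, snapshot)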
Feb 25 12:14:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:54.998 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:54.999 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:14:55.000 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:14:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.521 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.522 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.522 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.548 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.549 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:14:55 compute-0 ceph-mon[76335]: pgmap v880: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:14:55 compute-0 nova_compute[244014]: 2026-02-25 12:14:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:56 compute-0 sshd-session[250167]: Received disconnect from 45.148.10.151 port 24486:11:  [preauth]
Feb 25 12:14:56 compute-0 sshd-session[250167]: Disconnected from authenticating user root 45.148.10.151 port 24486 [preauth]
Feb 25 12:14:56 compute-0 nova_compute[244014]: 2026-02-25 12:14:56.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.254 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.335 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.part --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.337 244018 DEBUG nova.virt.images [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.341 244018 DEBUG nova.privsep.utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.342 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.part /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6 op/s
Feb 25 12:14:57 compute-0 ceph-mon[76335]: pgmap v881: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6 op/s
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.722 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.part /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.converted" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.725 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.796 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6.converted --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
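
The four commands above are nova's image-cache "fetch to raw" sequence: inspect the downloaded .part file with qemu-img, detect qcow2, convert it to raw with host caching disabled, then re-inspect the .converted result before the cache lock is released. A minimal Python sketch of the same sequence, assuming only that qemu-img is installed; the function and file names are illustrative, not nova's code:

    # Sketch of the inspect/convert/re-inspect sequence logged above.
    # Assumes qemu-img is on PATH; names are illustrative.
    import json
    import os
    import subprocess

    def qemu_img_info(path):
        # --force-share allows inspecting an image another process holds open.
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            env=dict(os.environ, LC_ALL="C", LANG="C"))
        return json.loads(out)

    def fetch_to_raw(part_path, converted_path):
        info = qemu_img_info(part_path)
        if info["format"] == "qcow2":
            # -t none disables host caching for the write, as in the log.
            subprocess.check_call(
                ["qemu-img", "convert", "-t", "none", "-O", "raw",
                 "-f", "qcow2", part_path, converted_path])
            info = qemu_img_info(converted_path)
        return info
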
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.797 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.826 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.830 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:14:57 compute-0 nova_compute[244014]: 2026-02-25 12:14:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb 25 12:14:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb 25 12:14:58 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb 25 12:14:58 compute-0 nova_compute[244014]: 2026-02-25 12:14:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:58 compute-0 nova_compute[244014]: 2026-02-25 12:14:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:58 compute-0 nova_compute[244014]: 2026-02-25 12:14:58.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:14:58 compute-0 nova_compute[244014]: 2026-02-25 12:14:58.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
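
The "Running periodic task ComputeManager._..." lines threaded through this section come from oslo.service's periodic-task runner, and the "skipping..." message shows a task bailing out at runtime because its interval is configured off. A hedged sketch of how such a task is declared; the class and spacing are illustrative, only the correspondence with the logged messages is intended:

    # Sketch of the periodic-task pattern behind the DEBUG lines above.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)  # base class stores conf as self.conf

        @periodic_task.periodic_task(spacing=60)  # spacing is illustrative
        def _reclaim_queued_deletes(self, context):
            if self.conf.reclaim_instance_interval <= 0:
                return  # mirrors "CONF.reclaim_instance_interval <= 0, skipping..."

The service then invokes run_periodic_tasks(context) on a timer, which is the frame (periodic_task.py:210) emitting each "Running periodic task" line above.
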
Feb 25 12:14:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:14:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 7 op/s
Feb 25 12:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb 25 12:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb 25 12:14:59 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb 25 12:14:59 compute-0 ceph-mon[76335]: osdmap e128: 3 total, 3 up, 3 in
Feb 25 12:14:59 compute-0 ceph-mon[76335]: pgmap v883: 305 pgs: 305 active+clean; 153 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 7 op/s
Feb 25 12:14:59 compute-0 nova_compute[244014]: 2026-02-25 12:14:59.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.004 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.053 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.134 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] resizing rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
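
Here the converted base image is pushed into the Ceph `vms` pool with `rbd import` (2.222 s) and then grown to the flavor's 1 GiB root-disk size. A sketch of the same two steps, shelling out for the import as the log does and using the python-rbd binding for the resize; pool, image names, and paths are copied from the log lines, and the cephx user is assumed to be the same `openstack` client:

    # Sketch: import the base image into RBD, then resize it, as logged above.
    import subprocess
    import rados
    import rbd

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    DISK = "16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk"

    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE, DISK,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, DISK) as image:
                image.resize(1073741824)  # 1 GiB, the size logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
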
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.295 244018 DEBUG nova.objects.instance [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lazy-loading 'migration_context' on Instance uuid 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.394 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.396 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Ensure instance console log exists: /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.396 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.397 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.398 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
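
The acquiring/acquired/released triple around "vgpu_resources" is the standard oslo.concurrency pattern; lockutils emits those three DEBUG lines itself. Declaratively it usually looks like the sketch below, where the decorated function is illustrative and only the lock name comes from the log:

    # Sketch of the lock pattern that produces the three lockutils lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def _allocate_mdevs():
        # Runs with the named in-process lock held; acquisition and
        # release are logged at DEBUG by lockutils.
        pass
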
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.401 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.408 244018 WARNING nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.415 244018 DEBUG nova.virt.libvirt.host [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.416 244018 DEBUG nova.virt.libvirt.host [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.420 244018 DEBUG nova.virt.libvirt.host [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.420 244018 DEBUG nova.virt.libvirt.host [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.421 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.421 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.422 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.423 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.423 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.423 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.424 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.424 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.425 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.425 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.425 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.426 244018 DEBUG nova.virt.hardware [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
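
The hardware.py lines above walk nova's CPU-topology selection: flavor and image impose no limits or preferences (all 0:0:0), the effective maxima default to 65536 per dimension, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1. A simplified sketch of the enumeration step; nova's real implementation adds preference-based ordering, this shows only the core idea:

    # Simplified sketch: enumerate (sockets, cores, threads) triples whose
    # product equals the vCPU count, under per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
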
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.431 244018 DEBUG nova.privsep.utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.432 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:00 compute-0 ceph-mon[76335]: osdmap e129: 3 total, 3 up, 3 in
Feb 25 12:15:00 compute-0 nova_compute[244014]: 2026-02-25 12:15:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:15:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080279345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.012 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.050 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.054 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.348 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.349 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.349 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.350 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.350 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 171 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 11 op/s
Feb 25 12:15:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119196030' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.542 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
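
The repeated `ceph mon dump --format=json` calls are how the driver learns the monitor endpoints that appear as <host name="..." port="..."/> entries in the guest disk XML dumped further down. A sketch of that lookup; the JSON field names for monitor addresses have shifted across Ceph releases, so the parsing here is deliberately defensive:

    # Sketch: derive monitor addresses from `ceph mon dump --format=json`,
    # the command logged above. Field names can vary by Ceph release.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))
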
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.544 244018 DEBUG nova.objects.instance [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.605 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <uuid>16c8e25c-54e5-4f4a-a188-5e5621ce23b2</uuid>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <name>instance-00000001</name>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:name>tempest-AutoAllocateNetworkTest-server-229177554</nova:name>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:15:00</nova:creationTime>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:user uuid="ddd8db22e5c0487dbbc394a46dcf24db">tempest-AutoAllocateNetworkTest-1070238077-project-member</nova:user>
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <nova:project uuid="0615b26acc544c6798f0df8252424d5f">tempest-AutoAllocateNetworkTest-1070238077</nova:project>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <system>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="serial">16c8e25c-54e5-4f4a-a188-5e5621ce23b2</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="uuid">16c8e25c-54e5-4f4a-a188-5e5621ce23b2</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </system>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <os>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </os>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <features>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </features>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk">
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config">
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:01 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/console.log" append="off"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <video>
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </video>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:15:01 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:15:01 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:15:01 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:15:01 compute-0 nova_compute[244014]: </domain>
Feb 25 12:15:01 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
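
The domain XML dumped above ties the earlier build steps together: both disks are type="network" RBD sources pointing at the images just imported, the serial console logs to the console.log path ensured earlier, and the instance metadata sits under the nova libvirt namespace. A small sketch of pulling those pieces back out with the standard library; xml_text is assumed to hold the dumped document:

    # Sketch: extract disk sources and nova metadata from a domain XML
    # like the one above; xml_text is assumed to hold that document.
    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.fromstring(xml_text)
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), src.get("protocol"), src.get("name"))
        # -> vda rbd vms/..._disk  /  sda rbd vms/..._disk.config

    flavor = root.find("./metadata/nova:instance/nova:flavor", NS)
    print(flavor.get("name"))  # m1.nano
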
Feb 25 12:15:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2080279345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:01 compute-0 ceph-mon[76335]: pgmap v885: 305 pgs: 305 active+clean; 171 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 11 op/s
Feb 25 12:15:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3119196030' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390137652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.885 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.955 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.956 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.957 244018 INFO nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Using config drive
Feb 25 12:15:01 compute-0 nova_compute[244014]: 2026-02-25 12:15:01.985 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.175 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.176 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.374 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.376 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5025MB free_disk=59.979007720947266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:02 compute-0 nova_compute[244014]: 2026-02-25 12:15:02.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1390137652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.145 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.145 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.146 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.190 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.419 244018 INFO nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Creating config drive at /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.426 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8vchwg4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.552 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8vchwg4d" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
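
The config drive is a mkisofs-built ISO9660 image with volume label config-2 (the label guests probe for); the following lines show it being imported into RBD and the local copy deleted. A sketch reusing the exact flags from the logged command; the staging directory and output path are placeholders for nova's temporary metadata tree:

    # Sketch of the config-drive build logged above; flags copied from the
    # logged mkisofs command, paths are placeholders.
    import subprocess

    def make_config_drive(output_iso, staging_dir, publisher):
        subprocess.check_call(
            ["/usr/bin/mkisofs", "-o", output_iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2",  # the volume label cloud-init looks for
             staging_dir])
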
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.583 244018 DEBUG nova.storage.rbd_utils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] rbd image 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.587 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337539348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.727 244018 DEBUG oslo_concurrency.processutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config 16c8e25c-54e5-4f4a-a188-5e5621ce23b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.728 244018 INFO nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deleting local config drive /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2/disk.config because it was imported into RBD.
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.744 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.751 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:15:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 25 12:15:03 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.825 244018 ERROR nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [req-cd233f0c-589f-4c56-a1ff-585cebee5741] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID cb4dae98-2ac3-4218-9445-2320139e12ad.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-cd233f0c-589f-4c56-a1ff-585cebee5741"}]}
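
The 409 here is placement's optimistic concurrency at work rather than a real failure: the PUT carried a resource provider generation that had already been bumped (the concurrent instance build claimed allocations against the same provider), so placement rejects the write and the next lines show nova refreshing its view before retrying. A hedged sketch of that retry loop against the placement REST API; the endpoint, token, and microversion are placeholders, while the URL layout and resource_provider_generation field follow the placement API:

    # Sketch: generation-based retry for placement inventory updates,
    # the pattern behind the 409 "concurrent_update" above.
    import requests

    PLACEMENT = "http://placement.example.com"  # placeholder endpoint
    HEADERS = {"X-Auth-Token": "...",           # placeholder token
               "OpenStack-API-Version": "placement 1.26"}

    def put_inventories(rp_uuid, inventories, attempts=3):
        url = f"{PLACEMENT}/resource_providers/{rp_uuid}/inventories"
        for _ in range(attempts):
            current = requests.get(url, headers=HEADERS).json()
            body = {"resource_provider_generation":
                        current["resource_provider_generation"],
                    "inventories": inventories}
            resp = requests.put(url, headers=HEADERS, json=body)
            if resp.status_code != 409:
                return resp
            # Stale generation: another writer got there first; refetch.
        raise RuntimeError("inventory update kept conflicting")
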
Feb 25 12:15:03 compute-0 systemd-machined[210048]: New machine qemu-1-instance-00000001.
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.852 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:15:03 compute-0 ceph-mon[76335]: pgmap v886: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Feb 25 12:15:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2337539348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:03 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.877 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.878 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.896 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.930 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:15:03 compute-0 nova_compute[244014]: 2026-02-25 12:15:03.965 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb 25 12:15:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb 25 12:15:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.024150) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704024193, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 667, "num_deletes": 250, "total_data_size": 744598, "memory_usage": 757520, "flush_reason": "Manual Compaction"}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704031168, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 537211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18413, "largest_seqno": 19079, "table_properties": {"data_size": 534020, "index_size": 1099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8209, "raw_average_key_size": 20, "raw_value_size": 527293, "raw_average_value_size": 1295, "num_data_blocks": 49, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021663, "oldest_key_time": 1772021663, "file_creation_time": 1772021704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 7061 microseconds, and 2401 cpu microseconds.
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.031212) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 537211 bytes OK
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.031235) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.035778) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.035796) EVENT_LOG_v1 {"time_micros": 1772021704035791, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.035822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 741059, prev total WAL file size 741059, number of live WAL files 2.
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.036467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(524KB)], [41(9109KB)]
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704036507, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9865007, "oldest_snapshot_seqno": -1}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4342 keys, 6722167 bytes, temperature: kUnknown
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704087316, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6722167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6693642, "index_size": 16575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105618, "raw_average_key_size": 24, "raw_value_size": 6615597, "raw_average_value_size": 1523, "num_data_blocks": 699, "num_entries": 4342, "num_filter_entries": 4342, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.087539) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6722167 bytes
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.091813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.8 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.9 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(30.9) write-amplify(12.5) OK, records in: 4842, records dropped: 500 output_compression: NoCompression
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.091830) EVENT_LOG_v1 {"time_micros": 1772021704091822, "job": 20, "event": "compaction_finished", "compaction_time_micros": 50899, "compaction_time_cpu_micros": 17536, "output_level": 6, "num_output_files": 1, "total_output_size": 6722167, "num_input_records": 4842, "num_output_records": 4342, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704092000, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021704092616, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.036359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.092820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.092831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.092834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.092837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:04.092841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.395 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.398 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.399 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021704.3966153, 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.399 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] VM Resumed (Lifecycle Event)
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.409 244018 INFO nova.virt.libvirt.driver [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance spawned successfully.
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.410 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.493 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.509 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.510 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.511 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.512 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.513 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.514 244018 DEBUG nova.virt.libvirt.driver [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536845434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.580 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.586 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.627 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.628 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021704.3968046, 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.629 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] VM Started (Lifecycle Event)
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.761 244018 INFO nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 11.19 seconds to spawn the instance on the hypervisor.
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.765 244018 DEBUG nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.885 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.891 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:04 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.929 244018 INFO nova.compute.manager [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 13.40 seconds to build instance.
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:04.999 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updated inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:05.000 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating resource provider cb4dae98-2ac3-4218-9445-2320139e12ad generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:05.001 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:15:05 compute-0 ceph-mon[76335]: osdmap e130: 3 total, 3 up, 3 in
Feb 25 12:15:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3536845434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:05.094 244018 DEBUG oslo_concurrency.lockutils [None req-431b342d-c139-446e-a85c-a9ba2f9ee6e4 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:05.189 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:15:05 compute-0 nova_compute[244014]: 2026-02-25 12:15:05.190 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 691 KiB/s rd, 3.1 MiB/s wr, 47 op/s
Feb 25 12:15:06 compute-0 ceph-mon[76335]: pgmap v888: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 691 KiB/s rd, 3.1 MiB/s wr, 47 op/s
Feb 25 12:15:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.098 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.098 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.100 244018 INFO nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Terminating instance
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquired lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:15:08 compute-0 rsyslogd[1020]: imjournal: 3448 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 12:15:08 compute-0 nova_compute[244014]: 2026-02-25 12:15:08.888 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:09 compute-0 ceph-mon[76335]: pgmap v889: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 25 12:15:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:09 compute-0 nova_compute[244014]: 2026-02-25 12:15:09.170 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:09 compute-0 nova_compute[244014]: 2026-02-25 12:15:09.219 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Releasing lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:15:09 compute-0 nova_compute[244014]: 2026-02-25 12:15:09.220 244018 DEBUG nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:15:09 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 25 12:15:09 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5.357s CPU time.
Feb 25 12:15:09 compute-0 systemd-machined[210048]: Machine qemu-1-instance-00000001 terminated.
Feb 25 12:15:09 compute-0 nova_compute[244014]: 2026-02-25 12:15:09.437 244018 INFO nova.virt.libvirt.driver [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance destroyed successfully.
Feb 25 12:15:09 compute-0 nova_compute[244014]: 2026-02-25 12:15:09.438 244018 DEBUG nova.objects.instance [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lazy-loading 'resources' on Instance uuid 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.384 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.384 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.519 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:15:10 compute-0 podman[250577]: 2026-02-25 12:15:10.74663412 +0000 UTC m=+0.079366242 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 12:15:10 compute-0 podman[250579]: 2026-02-25 12:15:10.791944424 +0000 UTC m=+0.124800459 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.800 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.800 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.809 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:15:10 compute-0 nova_compute[244014]: 2026-02-25 12:15:10.810 244018 INFO nova.compute.claims [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:15:11 compute-0 ceph-mon[76335]: pgmap v890: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb 25 12:15:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 182 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Feb 25 12:15:11 compute-0 nova_compute[244014]: 2026-02-25 12:15:11.563 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:11 compute-0 nova_compute[244014]: 2026-02-25 12:15:11.748 244018 INFO nova.virt.libvirt.driver [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deleting instance files /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_del
Feb 25 12:15:11 compute-0 nova_compute[244014]: 2026-02-25 12:15:11.750 244018 INFO nova.virt.libvirt.driver [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deletion of /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_del complete
Feb 25 12:15:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221663491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.083 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.090 244018 DEBUG nova.compute.provider_tree [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.119 244018 DEBUG nova.virt.libvirt.host [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.119 244018 INFO nova.virt.libvirt.host [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] UEFI support detected
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.121 244018 INFO nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 2.90 seconds to destroy the instance on the hypervisor.
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.122 244018 DEBUG oslo.service.loopingcall [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.122 244018 DEBUG nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.123 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.182 244018 DEBUG nova.scheduler.client.report [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/221663491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.403 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.483 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.565 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:12 compute-0 nova_compute[244014]: 2026-02-25 12:15:12.566 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.272 244018 INFO nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 1.15 seconds to deallocate network for instance.
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.284 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.284 244018 DEBUG nova.network.neutron [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:15:13 compute-0 ceph-mon[76335]: pgmap v891: 305 pgs: 305 active+clean; 182 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.391 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:15:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 119 op/s
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.480 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.511 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.512 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.598 244018 DEBUG oslo_concurrency.processutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.698 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.700 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.701 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating image(s)
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.738 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.765 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.793 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.796 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.854 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.856 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.857 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.858 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.886 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:13 compute-0 nova_compute[244014]: 2026-02-25 12:15:13.890 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288963850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.113 244018 DEBUG oslo_concurrency.processutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.121 244018 DEBUG nova.compute.provider_tree [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.145 244018 DEBUG nova.scheduler.client.report [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.152 244018 DEBUG nova.network.neutron [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.153 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.175 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.187907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714187959, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 331, "num_deletes": 255, "total_data_size": 128911, "memory_usage": 135816, "flush_reason": "Manual Compaction"}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.220 244018 INFO nova.scheduler.client.report [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Deleted allocations for instance 16c8e25c-54e5-4f4a-a188-5e5621ce23b2
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714228813, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 128145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19080, "largest_seqno": 19410, "table_properties": {"data_size": 126073, "index_size": 236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4861, "raw_average_key_size": 16, "raw_value_size": 121985, "raw_average_value_size": 420, "num_data_blocks": 11, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021705, "oldest_key_time": 1772021705, "file_creation_time": 1772021714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 40947 microseconds, and 941 cpu microseconds.
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.228860) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 128145 bytes OK
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.228879) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256245) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256281) EVENT_LOG_v1 {"time_micros": 1772021714256271, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 126603, prev total WAL file size 126603, number of live WAL files 2.
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(125KB)], [44(6564KB)]
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714256947, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 6850312, "oldest_snapshot_seqno": -1}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4115 keys, 6735502 bytes, temperature: kUnknown
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714332652, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6735502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6707849, "index_size": 16280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 102087, "raw_average_key_size": 24, "raw_value_size": 6633071, "raw_average_value_size": 1611, "num_data_blocks": 682, "num_entries": 4115, "num_filter_entries": 4115, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.333 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.332999) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6735502 bytes
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.353425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.3 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.4 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(106.0) write-amplify(52.6) OK, records in: 4632, records dropped: 517 output_compression: NoCompression
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.353476) EVENT_LOG_v1 {"time_micros": 1772021714353455, "job": 22, "event": "compaction_finished", "compaction_time_micros": 75836, "compaction_time_cpu_micros": 21880, "output_level": 6, "num_output_files": 1, "total_output_size": 6735502, "num_input_records": 4632, "num_output_records": 4115, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714353720, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714354788, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:15:14 compute-0 ceph-mon[76335]: pgmap v892: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 119 op/s
Feb 25 12:15:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1288963850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.442 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.517 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] resizing rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.629 244018 DEBUG nova.objects.instance [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'migration_context' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.666 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.666 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Ensure instance console log exists: /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.670 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.675 244018 WARNING nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.681 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.681 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.684 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.685 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.685 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.686 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.686 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:15:14 compute-0 nova_compute[244014]: 2026-02-25 12:15:14.693 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304538727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.271 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.303 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.309 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/304538727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 104 op/s
Feb 25 12:15:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927726857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.929 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.932 244018 DEBUG nova.objects.instance [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'pci_devices' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:15 compute-0 nova_compute[244014]: 2026-02-25 12:15:15.972 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <uuid>f3ac54ca-7761-47fb-a44f-5c64ff55e40c</uuid>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <name>instance-00000002</name>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-789124960</nova:name>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:15:14</nova:creationTime>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:user uuid="0b1ac75e114a4f7493006c0ffae0d4cf">tempest-DeleteServersAdminTestJSON-848319223-project-member</nova:user>
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <nova:project uuid="71ba161ac9034524bed0ed4918ac0d2d">tempest-DeleteServersAdminTestJSON-848319223</nova:project>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="serial">f3ac54ca-7761-47fb-a44f-5c64ff55e40c</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="uuid">f3ac54ca-7761-47fb-a44f-5c64ff55e40c</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk">
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config">
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/console.log" append="off"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:15:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:15:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:15:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:15:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:15:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.064 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.064 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.065 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Using config drive
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.093 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.449 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating config drive at /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.455 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3x8jvnwe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.579 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3x8jvnwe" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:16 compute-0 ceph-mon[76335]: pgmap v893: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 104 op/s
Feb 25 12:15:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/927726857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.650 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.656 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.700 244018 DEBUG oslo_concurrency.processutils [None req-f14ff759-3d81-4d95-857d-4e87714b74b2 872527ec067d4bf69631833e15dd7506 5c15d69b0f154e82aa1af966bc9537e3 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:16 compute-0 nova_compute[244014]: 2026-02-25 12:15:16.720 244018 DEBUG oslo_concurrency.processutils [None req-f14ff759-3d81-4d95-857d-4e87714b74b2 872527ec067d4bf69631833e15dd7506 5c15d69b0f154e82aa1af966bc9537e3 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:17 compute-0 nova_compute[244014]: 2026-02-25 12:15:17.087 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:17 compute-0 nova_compute[244014]: 2026-02-25 12:15:17.088 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deleting local config drive /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config because it was imported into RBD.
Feb 25 12:15:17 compute-0 systemd-machined[210048]: New machine qemu-2-instance-00000002.
Feb 25 12:15:17 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 25 12:15:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.175 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021718.1745355, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.175 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Resumed (Lifecycle Event)
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.179 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.180 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.184 244018 INFO nova.virt.libvirt.driver [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance spawned successfully.
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.184 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.210 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.211 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.212 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.212 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.213 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.214 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.225 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021718.1768045, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Started (Lifecycle Event)
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.259 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.264 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.288 244018 INFO nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 4.59 seconds to spawn the instance on the hypervisor.
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.288 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.344 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.375 244018 INFO nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 7.61 seconds to build instance.
Feb 25 12:15:18 compute-0 nova_compute[244014]: 2026-02-25 12:15:18.402 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:18 compute-0 ceph-mon[76335]: pgmap v894: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 12:15:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.915 244018 INFO nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Terminating instance
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.917 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.917 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquired lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:15:19 compute-0 nova_compute[244014]: 2026-02-25 12:15:19.918 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:15:20 compute-0 nova_compute[244014]: 2026-02-25 12:15:20.438 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:20 compute-0 ceph-mon[76335]: pgmap v895: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.050 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.170 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Releasing lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.171 244018 DEBUG nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:15:21 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 25 12:15:21 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 4.136s CPU time.
Feb 25 12:15:21 compute-0 systemd-machined[210048]: Machine qemu-2-instance-00000002 terminated.
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.392 244018 INFO nova.virt.libvirt.driver [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance destroyed successfully.
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.392 244018 DEBUG nova.objects.instance [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lazy-loading 'resources' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.834 244018 INFO nova.virt.libvirt.driver [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deleting instance files /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_del
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.835 244018 INFO nova.virt.libvirt.driver [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deletion of /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_del complete
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.923 244018 INFO nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.924 244018 DEBUG oslo.service.loopingcall [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.924 244018 DEBUG nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:15:21 compute-0 nova_compute[244014]: 2026-02-25 12:15:21.925 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.143 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.166 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.179 244018 INFO nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 0.25 seconds to deallocate network for instance.
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.231 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.231 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.290 244018 DEBUG oslo_concurrency.processutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:22 compute-0 ceph-mon[76335]: pgmap v896: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Feb 25 12:15:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893988309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.847 244018 DEBUG oslo_concurrency.processutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.855 244018 DEBUG nova.compute.provider_tree [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.878 244018 DEBUG nova.scheduler.client.report [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:22 compute-0 nova_compute[244014]: 2026-02-25 12:15:22.947 244018 INFO nova.scheduler.client.report [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Deleted allocations for instance f3ac54ca-7761-47fb-a44f-5c64ff55e40c
Feb 25 12:15:23 compute-0 nova_compute[244014]: 2026-02-25 12:15:23.013 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 12:15:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3893988309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:24 compute-0 nova_compute[244014]: 2026-02-25 12:15:24.436 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021709.43531, 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:24 compute-0 nova_compute[244014]: 2026-02-25 12:15:24.437 244018 INFO nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] VM Stopped (Lifecycle Event)
Feb 25 12:15:24 compute-0 nova_compute[244014]: 2026-02-25 12:15:24.454 244018 DEBUG nova.compute.manager [None req-b45c7e4c-8bdc-4d8b-ae3a-04a38ffae486 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:24 compute-0 ceph-mon[76335]: pgmap v897: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 12:15:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.469 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.470 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.505 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.584 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.585 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.594 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.595 244018 INFO nova.compute.claims [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:15:25 compute-0 nova_compute[244014]: 2026-02-25 12:15:25.711 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633691069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.209 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.216 244018 DEBUG nova.compute.provider_tree [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.241 244018 DEBUG nova.scheduler.client.report [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.279 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.280 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.341 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.342 244018 DEBUG nova.network.neutron [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.366 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.387 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.478 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.480 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.481 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating image(s)
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.509 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.541 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.572 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.576 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.647 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.671 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.674 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.931 244018 DEBUG nova.network.neutron [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.932 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:15:26 compute-0 ceph-mon[76335]: pgmap v898: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Feb 25 12:15:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2633691069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:26 compute-0 nova_compute[244014]: 2026-02-25 12:15:26.977 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.048 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] resizing rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.149 244018 DEBUG nova.objects.instance [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'migration_context' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.178 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.178 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Ensure instance console log exists: /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.179 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.179 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.180 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.181 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.185 244018 WARNING nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.190 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.190 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.193 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.195 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.195 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.198 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.201 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 140 op/s
Feb 25 12:15:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710387522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.766 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.786 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:27 compute-0 nova_compute[244014]: 2026-02-25 12:15:27.790 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3710387522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460758435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.299 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.302 244018 DEBUG nova.objects.instance [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.415 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <uuid>5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</uuid>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <name>instance-00000003</name>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-830310681</nova:name>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:15:27</nova:creationTime>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:user uuid="0b1ac75e114a4f7493006c0ffae0d4cf">tempest-DeleteServersAdminTestJSON-848319223-project-member</nova:user>
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <nova:project uuid="71ba161ac9034524bed0ed4918ac0d2d">tempest-DeleteServersAdminTestJSON-848319223</nova:project>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <system>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="serial">5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="uuid">5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </system>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <os>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </os>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <features>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </features>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk">
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config">
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:28 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/console.log" append="off"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <video>
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </video>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:15:28 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:15:28 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:15:28 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:15:28 compute-0 nova_compute[244014]: </domain>
Feb 25 12:15:28 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.523 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.524 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.525 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Using config drive
Feb 25 12:15:28 compute-0 nova_compute[244014]: 2026-02-25 12:15:28.556 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:28 compute-0 ceph-mon[76335]: pgmap v899: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 140 op/s
Feb 25 12:15:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2460758435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.011 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating config drive at /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.015 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnl65_st3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.135 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnl65_st3" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.159 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.163 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.312 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.314 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deleting local config drive /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config because it was imported into RBD.
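The records from 12:15:29.015 to 12:15:29.314 are the whole config-drive round trip: mkisofs packs the metadata staging directory into an ISO9660/Joliet image labelled config-2, rbd import copies it into the Ceph vms pool, and the local file is then deleted. A rough sketch of the same two steps, assuming the mkisofs and rbd binaries are on PATH and using the paths and UUID from this trace (Nova itself runs them through oslo_concurrency.processutils.execute):

    import subprocess

    uuid = "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # Pack the staging directory into an ISO9660/Joliet image labelled config-2.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpnl65_st3"],
        check=True)

    # Import the image into the 'vms' pool; the local copy is then redundant.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)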
Feb 25 12:15:29 compute-0 systemd-machined[210048]: New machine qemu-3-instance-00000003.
Feb 25 12:15:29 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 25 12:15:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 661 KiB/s wr, 111 op/s
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.734 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021729.7340386, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.735 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Resumed (Lifecycle Event)
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.738 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.738 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.743 244018 INFO nova.virt.libvirt.driver [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance spawned successfully.
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.743 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.770 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.771 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.771 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.772 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.772 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.773 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.819 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.820 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021729.737302, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.820 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Started (Lifecycle Event)
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.848 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.861 244018 INFO nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 3.38 seconds to spawn the instance on the hypervisor.
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.861 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.872 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] During sync_power_state the instance has a pending task (spawning). Skip.
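The two "Synchronizing instance power state" records above compare the database's power_state 0 against the hypervisor's power_state 1; because task_state is still spawning, sync_power_state deliberately skips the update rather than fight the in-flight build. The numeric codes come from nova/compute/power_state.py:

    # Power-state codes used in the sync messages above (nova.compute.power_state).
    POWER_STATE = {
        0: "NOSTATE",    # DB value: the build has not recorded a state yet
        1: "RUNNING",    # libvirt already reports the new domain as running
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }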
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.929 244018 INFO nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 4.36 seconds to build instance.
Feb 25 12:15:29 compute-0 nova_compute[244014]: 2026-02-25 12:15:29.947 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:15:30
Feb 25 12:15:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:15:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:15:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'default.rgw.meta']
Feb 25 12:15:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:15:30 compute-0 ceph-mon[76335]: pgmap v900: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 661 KiB/s wr, 111 op/s
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 194 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 165 op/s
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:15:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.867 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.868 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.868 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.869 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.869 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.871 244018 INFO nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Terminating instance
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.872 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.873 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquired lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:15:32 compute-0 nova_compute[244014]: 2026-02-25 12:15:32.873 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:15:33 compute-0 ceph-mon[76335]: pgmap v901: 305 pgs: 305 active+clean; 194 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 165 op/s
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.147 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.743 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.757 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Releasing lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.758 244018 DEBUG nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:15:33 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 25 12:15:33 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.511s CPU time.
Feb 25 12:15:33 compute-0 systemd-machined[210048]: Machine qemu-3-instance-00000003 terminated.
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.974 244018 INFO nova.virt.libvirt.driver [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance destroyed successfully.
Feb 25 12:15:33 compute-0 nova_compute[244014]: 2026-02-25 12:15:33.975 244018 DEBUG nova.objects.instance [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'resources' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.386 244018 INFO nova.virt.libvirt.driver [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deleting instance files /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_del
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.387 244018 INFO nova.virt.libvirt.driver [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deletion of /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_del complete
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.480 244018 INFO nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG oslo.service.loopingcall [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.610 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.628 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.645 244018 INFO nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 0.16 seconds to deallocate network for instance.
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.684 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.685 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:34 compute-0 nova_compute[244014]: 2026-02-25 12:15:34.752 244018 DEBUG oslo_concurrency.processutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:35 compute-0 ceph-mon[76335]: pgmap v902: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 25 12:15:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555577469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.270 244018 DEBUG oslo_concurrency.processutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.277 244018 DEBUG nova.compute.provider_tree [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.295 244018 DEBUG nova.scheduler.client.report [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.320 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.351 244018 INFO nova.scheduler.client.report [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Deleted allocations for instance 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892
Feb 25 12:15:35 compute-0 nova_compute[244014]: 2026-02-25 12:15:35.436 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 12:15:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1555577469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:36 compute-0 nova_compute[244014]: 2026-02-25 12:15:36.390 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021721.3885195, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:36 compute-0 nova_compute[244014]: 2026-02-25 12:15:36.390 244018 INFO nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Stopped (Lifecycle Event)
Feb 25 12:15:36 compute-0 nova_compute[244014]: 2026-02-25 12:15:36.412 244018 DEBUG nova.compute.manager [None req-b24e27f8-9520-4bb3-ab7a-adcc1f0285fd - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:37 compute-0 ceph-mon[76335]: pgmap v903: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 12:15:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Feb 25 12:15:39 compute-0 ceph-mon[76335]: pgmap v904: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Feb 25 12:15:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 12:15:41 compute-0 ceph-mon[76335]: pgmap v905: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 12:15:41 compute-0 podman[251468]: 2026-02-25 12:15:41.749423536 +0000 UTC m=+0.088800903 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 25 12:15:41 compute-0 podman[251469]: 2026-02-25 12:15:41.758196817 +0000 UTC m=+0.097574154 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.829490380927735e-06 of space, bias 1.0, pg target 0.0005488471142783205 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024895834148089277 of space, bias 1.0, pg target 0.7468750244426783 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6104160454649397e-06 of space, bias 4.0, pg target 0.0019324992545579278 quantized to 16 (current 16)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:15:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
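Each pg_autoscaler line above multiplies the pool's share of raw space by its bias and by the cluster's PG budget, then quantizes (targets below a pool's existing minimum are raised first, which is why tiny targets still read "quantized to 16" or 32). The logged numbers are consistent with a budget of 300 PGs, i.e. the default mon_target_pg_per_osd of 100 on a 3-OSD cluster; the budget itself is not logged, so that factor is an inference from this trace. A worked check:

    # pg_target = space_ratio * bias * (mon_target_pg_per_osd * num_osds)
    PG_BUDGET = 100 * 3   # assumed: default 100 PGs/OSD, 3 OSDs

    def pg_target(space_ratio, bias):
        return space_ratio * bias * PG_BUDGET

    pg_target(7.185749983720779e-06, 1.0)   # '.mgr'               -> 0.0021557249951...
    pg_target(1.6104160454649397e-06, 4.0)  # 'cephfs.cephfs.meta' -> 0.0019324992545...
    pg_target(0.0024895834148089277, 1.0)   # 'images'             -> 0.7468750244...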
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.149 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.149 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.190 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.269 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.270 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.278 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.278 244018 INFO nova.compute.claims [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.380 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643754188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.939 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.945 244018 DEBUG nova.compute.provider_tree [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.960 244018 DEBUG nova.scheduler.client.report [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.983 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
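The inventory dict logged above is what placement prices the new claim against; usable capacity per resource class is (total - reserved) * allocation_ratio, so this host schedules as 32 vCPUs, 7167 MB of RAM and about 52 GB of disk:

    # Capacity implied by the inventory reported for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2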
Feb 25 12:15:42 compute-0 nova_compute[244014]: 2026-02-25 12:15:42.984 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.034 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.034 244018 DEBUG nova.network.neutron [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.061 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.089 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:15:43 compute-0 ceph-mon[76335]: pgmap v906: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 12:15:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2643754188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.194 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.196 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.196 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating image(s)
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.228 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.260 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.288 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.291 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.370 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.371 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.372 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.372 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
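The a63dc6db... lock above is Nova's image-cache serialization: the base image is keyed by its hash, the first builder downloads it into _base under the lock, and later builders (as here) acquire and release it in under a millisecond because the cached file already exists. A minimal sketch of the pattern, using the same oslo.concurrency library that appears throughout this log (names illustrative, not Nova's actual code):

    from oslo_concurrency import lockutils

    IMAGE_HASH = "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    @lockutils.synchronized(IMAGE_HASH)
    def fetch_base_image():
        # First caller populates the _base cache; everyone else blocks on
        # the lock, then finds the file present and returns immediately.
        ...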
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.392 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.396 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 814 KiB/s rd, 188 KiB/s wr, 60 op/s
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.564 244018 DEBUG nova.network.neutron [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.565 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.739 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.820 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] resizing rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.929 244018 DEBUG nova.objects.instance [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.973 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.973 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Ensure instance console log exists: /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.974 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.974 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.975 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.976 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.981 244018 WARNING nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.986 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.986 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.989 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.990 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
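The cgroups probe logged here (V1 CPU controller missing, V2 controller found) boils down to reading the unified hierarchy's controller list. A minimal sketch of the equivalent cgroup v2 check, assuming the host mounts the unified hierarchy at /sys/fs/cgroup as RHEL 9 does:

    from pathlib import Path

    # cgroup v2 lists available controllers in one space-separated file;
    # "CPU controller found on host" corresponds to "cpu" appearing here.
    controllers = Path('/sys/fs/cgroup/cgroup.controllers').read_text().split()
    print('cpu controller present:', 'cpu' in controllers)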
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.990 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.991 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.992 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.993 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.994 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.994 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.995 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.995 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.996 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.996 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.997 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:15:43 compute-0 nova_compute[244014]: 2026-02-25 12:15:43.997 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
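With no flavor or image topology constraints (all limits and preferences logged as 0:0:0), the only admissible topology for one vCPU is 1:1:1. A simplified, illustrative re-implementation of the enumeration step — not Nova's actual _get_possible_cpu_topologies, which also handles preferences and NUMA — under the 65536 per-dimension maxima shown above:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product equals
        # the vCPU count and that respects the per-dimension maxima.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Possible topologies"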
Feb 25 12:15:44 compute-0 nova_compute[244014]: 2026-02-25 12:15:44.002 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61062265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:44 compute-0 nova_compute[244014]: 2026-02-25 12:15:44.533 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
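The "ceph mon dump --format=json" call above is how the driver discovers monitor endpoints for the RBD disk definitions emitted further down. A standalone equivalent using only the standard library — the command line is copied from the log, while the 'mons'/'public_addr' keys are the usual mon dump JSON fields and are accessed defensively:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True,
    ).stdout
    mon_map = json.loads(out)
    # These endpoints become <host name=... port=.../> in the guest XML below.
    for mon in mon_map.get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))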
Feb 25 12:15:44 compute-0 nova_compute[244014]: 2026-02-25 12:15:44.553 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:44 compute-0 nova_compute[244014]: 2026-02-25 12:15:44.556 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571584857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.035 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.038 244018 DEBUG nova.objects.instance [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.053 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <uuid>ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</uuid>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <name>instance-00000004</name>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1560092483</nova:name>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:15:43</nova:creationTime>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:user uuid="0da29cefe0e94220a8d9cf895454b55c">tempest-LiveMigrationNegativeTest-1055227276-project-member</nova:user>
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <nova:project uuid="6bddec791c4c46b7bef787d3d6634b12">tempest-LiveMigrationNegativeTest-1055227276</nova:project>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="serial">ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="uuid">ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk">
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config">
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/console.log" append="off"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:15:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:15:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:15:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:15:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:15:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
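The domain XML dumped above can be inspected offline with the standard library. The sketch below assumes it has been saved to a hypothetical domain.xml, and pulls out each RBD-backed disk with its target device, image name, and monitor endpoint:

    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()
    for disk in root.findall('./devices/disk'):
        source = disk.find('source')
        if source is None or source.get('protocol') != 'rbd':
            continue
        host = source.find('host')
        target = disk.find('target')
        print(target.get('dev'),    # vda / sda
              source.get('name'),   # vms/<uuid>_disk or vms/<uuid>_disk.config
              host.get('name'), host.get('port'))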
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.105 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.105 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.106 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Using config drive
Feb 25 12:15:45 compute-0 ceph-mon[76335]: pgmap v907: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 814 KiB/s rd, 188 KiB/s wr, 60 op/s
Feb 25 12:15:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/61062265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3571584857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.134 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.515 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating config drive at /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.521 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvvv9m4v2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.641 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvvv9m4v2" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.672 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.675 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.833 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:45 compute-0 nova_compute[244014]: 2026-02-25 12:15:45.834 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deleting local config drive /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config because it was imported into RBD.
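The config drive flow in the preceding lines is: build an ISO 9660 image with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A hedged replication with subprocess — the mkisofs flags and rbd arguments are copied from the log, while /tmp/metadata-dir is a placeholder for the ephemeral temp directory (/tmp/tmpvvv9m4v2) Nova generated:

    import subprocess
    from pathlib import Path

    inst = '/var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534'
    iso = f'{inst}/disk.config'

    # Build the config drive ISO (volume label config-2 is what cloud-init probes).
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata-dir'],
        check=True)

    # Import into RBD, then drop the local file, matching "Deleting local config
    # drive ... because it was imported into RBD."
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         'ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    Path(iso).unlink()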
Feb 25 12:15:45 compute-0 systemd-machined[210048]: New machine qemu-4-instance-00000004.
Feb 25 12:15:45 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 25 12:15:46 compute-0 sudo[251844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:15:46 compute-0 sudo[251844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:46 compute-0 sudo[251844]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:46 compute-0 sudo[251887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:15:46 compute-0 sudo[251887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.522 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021746.522241, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.524 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Resumed (Lifecycle Event)
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.528 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.529 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.533 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance spawned successfully.
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.533 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.559 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.565 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.571 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.572 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.572 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.573 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.574 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.574 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.603 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.603 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021746.5278625, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.604 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Started (Lifecycle Event)
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.637 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.648 244018 INFO nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 3.45 seconds to spawn the instance on the hypervisor.
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.648 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.663 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.711 244018 INFO nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 4.48 seconds to build instance.
Feb 25 12:15:46 compute-0 nova_compute[244014]: 2026-02-25 12:15:46.732 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
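At this point the build is complete: the spawn took 3.45 s, the whole build 4.48 s, and the per-instance lock was held 4.583 s. The resulting guest can be confirmed against libvirt directly with the libvirt Python bindings — a read-only sketch, using the domain name from the XML above:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000004')
        state, _reason = dom.state()
        # "VM Started (Lifecycle Event)" should correspond to a running domain.
        print('running:', state == libvirt.VIR_DOMAIN_RUNNING)
    finally:
        conn.close()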
Feb 25 12:15:46 compute-0 sudo[251887]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:15:47 compute-0 sudo[251967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:15:47 compute-0 sudo[251967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:47 compute-0 sudo[251967]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:47 compute-0 ceph-mon[76335]: pgmap v908: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:15:47 compute-0 sudo[251992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:15:47 compute-0 sudo[251992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:15:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.490989902 +0000 UTC m=+0.060103640 container create 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 12:15:47 compute-0 systemd[1]: Started libpod-conmon-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope.
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.458208186 +0000 UTC m=+0.027321974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.599818521 +0000 UTC m=+0.168932279 container init 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.609512469 +0000 UTC m=+0.178626207 container start 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:15:47 compute-0 systemd[1]: libpod-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope: Deactivated successfully.
Feb 25 12:15:47 compute-0 vigilant_gould[252043]: 167 167
Feb 25 12:15:47 compute-0 conmon[252043]: conmon 8a5c21dc5b2607124b78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope/container/memory.events
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.617743074 +0000 UTC m=+0.186856802 container attach 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.618391073 +0000 UTC m=+0.187504801 container died 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 12:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3d102529d860b6729deb8f406b21449ba7be641334dfbdc2801a08827bc973-merged.mount: Deactivated successfully.
Feb 25 12:15:47 compute-0 podman[252027]: 2026-02-25 12:15:47.706337021 +0000 UTC m=+0.275450719 container remove 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:15:47 compute-0 systemd[1]: libpod-conmon-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope: Deactivated successfully.
Feb 25 12:15:47 compute-0 podman[252065]: 2026-02-25 12:15:47.895328946 +0000 UTC m=+0.063989406 container create f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 12:15:47 compute-0 systemd[1]: Started libpod-conmon-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope.
Feb 25 12:15:47 compute-0 podman[252065]: 2026-02-25 12:15:47.86758943 +0000 UTC m=+0.036249950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:48 compute-0 podman[252065]: 2026-02-25 12:15:48.010088991 +0000 UTC m=+0.178749441 container init f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:15:48 compute-0 podman[252065]: 2026-02-25 12:15:48.018923424 +0000 UTC m=+0.187583874 container start f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:15:48 compute-0 podman[252065]: 2026-02-25 12:15:48.026739537 +0000 UTC m=+0.195399997 container attach f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:15:48 compute-0 fervent_chatelet[252082]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:15:48 compute-0 fervent_chatelet[252082]: --> All data devices are unavailable
Feb 25 12:15:48 compute-0 systemd[1]: libpod-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope: Deactivated successfully.
Feb 25 12:15:48 compute-0 podman[252065]: 2026-02-25 12:15:48.514014277 +0000 UTC m=+0.682674717 container died f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658-merged.mount: Deactivated successfully.
Feb 25 12:15:48 compute-0 podman[252065]: 2026-02-25 12:15:48.590996479 +0000 UTC m=+0.759656919 container remove f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:15:48 compute-0 systemd[1]: libpod-conmon-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope: Deactivated successfully.
Feb 25 12:15:48 compute-0 sudo[251992]: pam_unix(sudo:session): session closed for user root
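The sudo/podman/scope sequence above is cephadm probing this host's disks: it launches a throwaway container from the pinned ceph image, runs ceph-volume inside it (the fervent_chatelet lines are that container's stdout, here reporting that all three LVM data devices are already consumed), and tears the scope down again. A minimal Python sketch of the same call chain, using the wrapper path and image digest exactly as they appear in the COMMAND= fields in this log (host-specific values, shown for illustration only, not cephadm's own code):

    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Wrapper path copied verbatim from the sudo COMMAND= entries below;
    # the hash suffix is unique to this deployment.
    CEPHADM = ("/var/lib/ceph/" + FSID + "/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    def ceph_volume(*args: str) -> str:
        # cephadm starts a short-lived podman container from IMAGE and executes
        # `ceph-volume` inside it, which is what produces the libpod-conmon /
        # libcrun scope churn seen in this log.
        cmd = ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
               "--timeout", "895", "ceph-volume", "--fsid", FSID, "--", *args]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout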
Feb 25 12:15:48 compute-0 sudo[252112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:15:48 compute-0 sudo[252112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:48 compute-0 sudo[252112]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:48 compute-0 sudo[252137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:15:48 compute-0 sudo[252137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:48 compute-0 nova_compute[244014]: 2026-02-25 12:15:48.972 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021733.9709713, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:15:48 compute-0 nova_compute[244014]: 2026-02-25 12:15:48.973 244018 INFO nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Stopped (Lifecycle Event)
Feb 25 12:15:48 compute-0 nova_compute[244014]: 2026-02-25 12:15:48.990 244018 DEBUG nova.compute.manager [None req-bad64f4c-a272-42c2-bc47-3478abafbde5 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.045425763 +0000 UTC m=+0.067553901 container create 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:15:49 compute-0 systemd[1]: Started libpod-conmon-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope.
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.016723319 +0000 UTC m=+0.038851547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.131188756 +0000 UTC m=+0.153316914 container init 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.138190064 +0000 UTC m=+0.160318222 container start 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:15:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.144204193 +0000 UTC m=+0.166332341 container attach 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:15:49 compute-0 systemd[1]: libpod-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope: Deactivated successfully.
Feb 25 12:15:49 compute-0 cranky_wilbur[252192]: 167 167
Feb 25 12:15:49 compute-0 conmon[252192]: conmon 59e518c67b8a21d78789 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope/container/memory.events
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.145823461 +0000 UTC m=+0.167951609 container died 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:15:49 compute-0 ceph-mon[76335]: pgmap v909: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 25 12:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6feb36f09c3bf0d95e99a2ce42610ed1804e992fcbedbb6772b36c2057391abb-merged.mount: Deactivated successfully.
Feb 25 12:15:49 compute-0 podman[252175]: 2026-02-25 12:15:49.206311762 +0000 UTC m=+0.228439920 container remove 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:15:49 compute-0 systemd[1]: libpod-conmon-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope: Deactivated successfully.
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.35041532 +0000 UTC m=+0.051219135 container create 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:15:49 compute-0 systemd[1]: Started libpod-conmon-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope.
Feb 25 12:15:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.325424326 +0000 UTC m=+0.026228171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.439850552 +0000 UTC m=+0.140654387 container init 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.447973094 +0000 UTC m=+0.148776939 container start 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.455351563 +0000 UTC m=+0.156155428 container attach 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:15:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]: {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     "0": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "devices": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "/dev/loop3"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             ],
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_name": "ceph_lv0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_size": "21470642176",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "name": "ceph_lv0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "tags": {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_name": "ceph",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.crush_device_class": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.encrypted": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.objectstore": "bluestore",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_id": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.vdo": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.with_tpm": "0"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             },
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "vg_name": "ceph_vg0"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         }
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     ],
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     "1": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "devices": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "/dev/loop4"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             ],
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_name": "ceph_lv1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_size": "21470642176",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "name": "ceph_lv1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "tags": {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_name": "ceph",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.crush_device_class": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.encrypted": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.objectstore": "bluestore",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_id": "1",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.vdo": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.with_tpm": "0"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             },
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "vg_name": "ceph_vg1"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         }
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     ],
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     "2": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "devices": [
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "/dev/loop5"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             ],
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_name": "ceph_lv2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_size": "21470642176",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "name": "ceph_lv2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "tags": {
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.cluster_name": "ceph",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.crush_device_class": "",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.encrypted": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.objectstore": "bluestore",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osd_id": "2",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.vdo": "0",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:                 "ceph.with_tpm": "0"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             },
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "type": "block",
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:             "vg_name": "ceph_vg2"
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:         }
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]:     ]
Feb 25 12:15:49 compute-0 cranky_mccarthy[252232]: }
Feb 25 12:15:49 compute-0 systemd[1]: libpod-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope: Deactivated successfully.
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.738018476 +0000 UTC m=+0.438822301 container died 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80-merged.mount: Deactivated successfully.
Feb 25 12:15:49 compute-0 podman[252215]: 2026-02-25 12:15:49.819643065 +0000 UTC m=+0.520446910 container remove 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:15:49 compute-0 systemd[1]: libpod-conmon-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope: Deactivated successfully.
Feb 25 12:15:49 compute-0 sudo[252137]: pam_unix(sudo:session): session closed for user root
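The JSON block printed by cranky_mccarthy is the result of the `lvm list --format json` call dispatched above: one key per OSD id ("0", "1", "2"), each holding the backing logical volume plus its ceph.* LVM tags (cluster fsid, osd_fsid, objectstore, encryption flags, and so on). A small hypothetical helper, assuming only the fields visible in that output, to reduce it to an OSD-to-device map:

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, physical devices) from `lvm list` JSON."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return out

    # Applied to the listing above, this yields:
    # {0: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"]),
    #  1: ("/dev/ceph_vg1/ceph_lv1", ["/dev/loop4"]),
    #  2: ("/dev/ceph_vg2/ceph_lv2", ["/dev/loop5"])}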
Feb 25 12:15:49 compute-0 sudo[252254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:15:49 compute-0 sudo[252254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:49 compute-0 sudo[252254]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:49 compute-0 sudo[252279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:15:49 compute-0 sudo[252279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.269454022 +0000 UTC m=+0.056193143 container create 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:15:50 compute-0 systemd[1]: Started libpod-conmon-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope.
Feb 25 12:15:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.242869241 +0000 UTC m=+0.029608342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.346348851 +0000 UTC m=+0.133087962 container init 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.353236726 +0000 UTC m=+0.139975807 container start 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:15:50 compute-0 vibrant_thompson[252334]: 167 167
Feb 25 12:15:50 compute-0 systemd[1]: libpod-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope: Deactivated successfully.
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.360603175 +0000 UTC m=+0.147342266 container attach 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.361011487 +0000 UTC m=+0.147750578 container died 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ad1237404f82995611ba3187a7be09b8907a51e589fe743f5f98157763ff85-merged.mount: Deactivated successfully.
Feb 25 12:15:50 compute-0 podman[252317]: 2026-02-25 12:15:50.435293468 +0000 UTC m=+0.222032589 container remove 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:15:50 compute-0 systemd[1]: libpod-conmon-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope: Deactivated successfully.
Feb 25 12:15:50 compute-0 podman[252360]: 2026-02-25 12:15:50.597539626 +0000 UTC m=+0.052018599 container create 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:15:50 compute-0 systemd[1]: Started libpod-conmon-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope.
Feb 25 12:15:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:15:50 compute-0 podman[252360]: 2026-02-25 12:15:50.570758739 +0000 UTC m=+0.025237802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:15:50 compute-0 podman[252360]: 2026-02-25 12:15:50.690518764 +0000 UTC m=+0.144997767 container init 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:15:50 compute-0 podman[252360]: 2026-02-25 12:15:50.702787179 +0000 UTC m=+0.157266152 container start 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:15:50 compute-0 podman[252360]: 2026-02-25 12:15:50.711724265 +0000 UTC m=+0.166203338 container attach 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.055 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.057 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.073 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.148 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.149 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.155 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.155 244018 INFO nova.compute.claims [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:15:51 compute-0 ceph-mon[76335]: pgmap v910: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.283 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:51 compute-0 lvm[252457]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:15:51 compute-0 lvm[252455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:15:51 compute-0 lvm[252455]: VG ceph_vg0 finished
Feb 25 12:15:51 compute-0 lvm[252457]: VG ceph_vg1 finished
Feb 25 12:15:51 compute-0 lvm[252459]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:15:51 compute-0 lvm[252459]: VG ceph_vg2 finished
Feb 25 12:15:51 compute-0 confident_engelbart[252377]: {}
Feb 25 12:15:51 compute-0 systemd[1]: libpod-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Deactivated successfully.
Feb 25 12:15:51 compute-0 podman[252360]: 2026-02-25 12:15:51.461375256 +0000 UTC m=+0.915854229 container died 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:15:51 compute-0 systemd[1]: libpod-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Consumed 1.114s CPU time.
Feb 25 12:15:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72-merged.mount: Deactivated successfully.
Feb 25 12:15:51 compute-0 podman[252360]: 2026-02-25 12:15:51.531683328 +0000 UTC m=+0.986162301 container remove 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:15:51 compute-0 systemd[1]: libpod-conmon-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Deactivated successfully.
Feb 25 12:15:51 compute-0 sudo[252279]: pam_unix(sudo:session): session closed for user root
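Here cephadm ran the companion probe, `raw list --format json`, and the confident_engelbart container printed an empty object ({}): `raw list` reports only OSDs created directly on whole devices, and all three OSDs on this host are LVM-backed, so the two listings together cover the full inventory. Reusing the hypothetical ceph_volume() sketch from earlier:

    import json

    def list_osds():
        # `lvm list` covers LV-backed OSDs; `raw list` covers whole-device
        # OSDs. On this host the second call returns {} (all OSDs are LVM).
        lvm = json.loads(ceph_volume("lvm", "list", "--format", "json"))
        raw = json.loads(ceph_volume("raw", "list", "--format", "json"))
        return lvm, raw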
Feb 25 12:15:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:15:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:15:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:51 compute-0 sudo[252493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:15:51 compute-0 sudo[252493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:15:51 compute-0 sudo[252493]: pam_unix(sudo:session): session closed for user root
Feb 25 12:15:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573914449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.841 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
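Before claiming disk for the new instance, nova shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (the oslo_concurrency.processutils lines above) to read pool capacity from the cluster. A standalone sketch of that probe, not nova's rbd_utils code itself:

    import json
    import subprocess

    def ceph_df(conf: str = "/etc/ceph/ceph.conf",
                client: str = "openstack") -> dict:
        # Same command string logged by processutils above; returns the parsed
        # cluster and per-pool usage report.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        return json.loads(out)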
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.852 244018 DEBUG nova.compute.provider_tree [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:51 compute-0 nova_compute[244014]: 2026-02-25 12:15:51.880 244018 DEBUG nova.scheduler.client.report [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:52 compute-0 ceph-mon[76335]: pgmap v911: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:15:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:15:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/573914449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.852 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
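The Acquiring/acquired/released triplets around instance_claim show oslo.concurrency's lockutils at work: every builder on this host serializes on the shared "compute_resources" lock, and the release line even reports the hold time (1.703s here). The same pattern in isolation, as a generic example rather than nova's actual claim path:

    from oslo_concurrency import lockutils

    # Callers contend on the named lock, mirroring the
    # 'Lock "compute_resources" acquired by ... instance_claim' lines above.
    @lockutils.synchronized('compute_resources')
    def instance_claim(instance_uuid: str) -> None:
        # resource accounting for the instance happens under the lock
        print('claiming resources for %s' % instance_uuid)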
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.853 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.859 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.859 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.886 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.909 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.909 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.933 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.955 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.956 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.960 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.967 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:15:52 compute-0 nova_compute[244014]: 2026-02-25 12:15:52.967 244018 INFO nova.compute.claims [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.125 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.128 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.129 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating image(s)
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.158 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.183 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.206 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.210 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.243 244018 WARNING oslo_policy.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.244 244018 WARNING oslo_policy.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.246 244018 DEBUG nova.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee6c6e44a0624805afeb68a67c99f325', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.278 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.279 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.280 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.280 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.299 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.302 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6de08989-13cc-415b-adc9-04b338e13d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.315 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.549 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6de08989-13cc-415b-adc9-04b338e13d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.612 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] resizing rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.713 244018 DEBUG nova.objects.instance [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.732 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.735 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Ensure instance console log exists: /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.736 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.736 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.737 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:15:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331721725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.799 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.806 244018 DEBUG nova.compute.provider_tree [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.825 244018 DEBUG nova.scheduler.client.report [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.846 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.847 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.888 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:15:53 compute-0 nova_compute[244014]: 2026-02-25 12:15:53.889 244018 DEBUG nova.network.neutron [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:15:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:15:54.198 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:15:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:15:54.199 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:15:54 compute-0 nova_compute[244014]: 2026-02-25 12:15:54.222 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:15:54 compute-0 ceph-mon[76335]: pgmap v912: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:15:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1331721725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:15:54 compute-0 nova_compute[244014]: 2026-02-25 12:15:54.836 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.000 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.113 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.115 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.116 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating image(s)
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.166 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.199 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.232 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.236 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.253 244018 DEBUG nova.network.neutron [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.256 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.258 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Successfully created port: 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.294 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.296 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.296 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.297 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.326 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.330 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.797 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.875 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] resizing rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:15:55 compute-0 nova_compute[244014]: 2026-02-25 12:15:55.980 244018 DEBUG nova.objects.instance [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.616 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.616 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Ensure instance console log exists: /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.617 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.618 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.618 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.621 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.624 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Successfully updated port: 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.630 244018 WARNING nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.634 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.635 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.637 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.638 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.639 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.639 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.640 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.640 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.641 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.641 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.642 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.642 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.644 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.648 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:56 compute-0 ceph-mon[76335]: pgmap v913: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.724 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.725 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:15:56 compute-0 nova_compute[244014]: 2026-02-25 12:15:56.725 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.037 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:15:57 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 12:15:57 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 12:15:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074957029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.245 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.266 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.270 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.332 244018 DEBUG nova.compute.manager [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.332 244018 DEBUG nova.compute.manager [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing instance network info cache due to event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.333 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:57 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 12:15:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 164 op/s
Feb 25 12:15:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2074957029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:15:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810532628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.798 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:57 compute-0 nova_compute[244014]: 2026-02-25 12:15:57.802 244018 DEBUG nova.objects.instance [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.192 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.343 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <uuid>6d979dde-168d-4976-99a4-bb4e3eb22ae0</uuid>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <name>instance-00000006</name>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:name>tempest-LiveMigrationNegativeTest-server-340575447</nova:name>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:15:56</nova:creationTime>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:user uuid="0da29cefe0e94220a8d9cf895454b55c">tempest-LiveMigrationNegativeTest-1055227276-project-member</nova:user>
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <nova:project uuid="6bddec791c4c46b7bef787d3d6634b12">tempest-LiveMigrationNegativeTest-1055227276</nova:project>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <system>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="serial">6d979dde-168d-4976-99a4-bb4e3eb22ae0</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="uuid">6d979dde-168d-4976-99a4-bb4e3eb22ae0</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </system>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <os>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </os>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <features>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </features>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk">
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config">
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:15:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/console.log" append="off"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <video>
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </video>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:15:58 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:15:58 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:15:58 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:15:58 compute-0 nova_compute[244014]: </domain>
Feb 25 12:15:58 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
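The dump that ends above is the complete libvirt domain definition Nova's _get_guest_xml built for instance 6d979dde. A minimal sketch of retrieving the same XML back from the running guest with the python-libvirt bindings (instance UUID copied from the log; the qemu:///system URI is assumed, since this host runs KVM):

    import libvirt  # python-libvirt bindings

    # Connect to the local system hypervisor.
    conn = libvirt.open("qemu:///system")

    # Nova sets the libvirt domain UUID to the Nova instance UUID seen above.
    dom = conn.lookupByUUIDString("6d979dde-168d-4976-99a4-bb4e3eb22ae0")

    # XMLDesc(0) returns the live domain XML, the same output `virsh dumpxml` gives.
    print(dom.XMLDesc(0))
    conn.close()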
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.361 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.361 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.424 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.425 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.426 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Using config drive
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.457 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:58 compute-0 ceph-mon[76335]: pgmap v914: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 164 op/s
Feb 25 12:15:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/810532628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.746 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating config drive at /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.752 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y7iise3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.779 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.780 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.780 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.781 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.874 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y7iise3" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.900 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:15:58 compute-0 nova_compute[244014]: 2026-02-25 12:15:58.906 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:15:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:15:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 12:15:59 compute-0 nova_compute[244014]: 2026-02-25 12:15:59.751 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:00.201 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.508 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.509 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deleting local config drive /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config because it was imported into RBD.
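Because this deployment keeps instance disks in Ceph, the freshly built ISO is imported into the vms pool as <uuid>_disk.config and the local copy is then removed, exactly as the two lines above record. A sketch of that step (pool, image name, cephx user, and conf path copied from the log):

    import os
    import subprocess

    src = "/var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config"

    # Import the ISO as a format-2 RBD image, authenticating as client.openstack.
    subprocess.run(
        [
            "rbd", "import",
            "--pool", "vms",
            src,
            "6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config",
            "--image-format=2",
            "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf",
        ],
        check=True,
    )

    # Nova then deletes the local file, per the "Deleting local config drive
    # ... because it was imported into RBD" line above.
    os.remove(src)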
Feb 25 12:16:00 compute-0 systemd-machined[210048]: New machine qemu-5-instance-00000006.
Feb 25 12:16:00 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000006.
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.902 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.950 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021760.950319, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Resumed (Lifecycle Event)
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.956 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.956 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.957 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.958 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance network_info: |[{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.959 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.960 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.965 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start _get_guest_xml network_info=[{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.975 244018 INFO nova.virt.libvirt.driver [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance spawned successfully.
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.976 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.981 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.985 244018 WARNING nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.996 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:16:00 compute-0 nova_compute[244014]: 2026-02-25 12:16:00.997 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.001 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.002 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
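The two "Searching host ... for CPU controller" probes check cgroups v1 first (missing here) and then the v2 unified hierarchy (found). On a v2 host the available controllers are listed in a single file, which is roughly what the v2 probe amounts to. A minimal sketch of that check, assuming the standard unified-hierarchy mount point rather than any Nova-specific path:

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        """Return True if the unified cgroup hierarchy exposes the cpu controller."""
        try:
            with open(path) as f:
                # The file holds one space-separated list, e.g.
                # "cpuset cpu io memory hugetlb pids misc".
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No unified hierarchy mounted: the host is cgroups v1 only.
            return False

    print(has_cgroupsv2_cpu_controller())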
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.002 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.003 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:15:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='966515861',id=21,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1872382025',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.004 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.004 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.006 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.006 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.007 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.007 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.008 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
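The topology lines above show the selection funnel: with no flavor or image constraints the limits default to 65536 per dimension, and for one vCPU the only factorization is sockets=1, cores=1, threads=1. A toy re-derivation of that enumeration (illustrative arithmetic, not Nova's actual code in nova/virt/hardware.py):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    # For the 1-vCPU flavor in the log this yields exactly [(1, 1, 1)],
    # matching "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(list(possible_topologies(1)))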
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.012 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.035 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.036 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.036 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.037 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.037 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.038 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.051 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021760.9526653, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.052 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Started (Lifecycle Event)
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.109 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.112 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.154 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (spawning). Skip.
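"current DB power_state: 0, VM power_state: 1" compares Nova's stored power state against what libvirt reports. The integer codes come from nova.compute.power_state; the values below are reproduced from memory of the upstream module and are worth verifying against the installed tree:

    # Constants as defined in nova.compute.power_state (assumed values).
    NOSTATE = 0x00    # DB value before the first sync: nothing recorded yet
    RUNNING = 0x01    # what libvirt reports once the guest has started
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

    # So the log line means: the DB still says NOSTATE while the hypervisor
    # says RUNNING, and the sync is skipped because task_state is "spawning".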
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.180 244018 INFO nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 6.07 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.180 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.258 244018 INFO nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 8.33 seconds to build instance.
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.286 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:01 compute-0 ceph-mon[76335]: pgmap v915: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 309 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 148 op/s
Feb 25 12:16:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832437348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.529 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
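`ceph mon dump --format=json` is how the RBD driver discovers monitor addresses for the <host name="192.168.122.100" port="6789"/> elements in the disk XML. A sketch of running it and pulling the addresses out; the JSON field names are assumptions based on current Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    monmap = json.loads(out)
    # Each entry in "mons" carries the monitor's name and address; the
    # "addr" field looks like "192.168.122.100:6789/0".
    for mon in monmap["mons"]:
        print(mon["name"], mon["addr"])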
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.561 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.566 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.731 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.758 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.758 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.785 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.785 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
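The acquire/release pairs around "compute_resources" come from oslo.concurrency's lockutils, the module the log itself cites. A minimal sketch of the pattern, with the lock name copied from the log and a plain in-process lock assumed (no external file lock):

    from oslo_concurrency import lockutils

    # Equivalent of the "Acquiring lock ... / Lock ... released" pairs above:
    # lockutils.lock() is a context manager that serializes critical sections
    # such as the resource tracker's compute-node cache cleanup.
    with lockutils.lock("compute_resources"):
        pass  # audit or mutate tracked resources here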
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:16:01 compute-0 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
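The resource audit shells out to `ceph df --format=json` to size the RBD-backed disk pool. A sketch of reading the cluster totals from that output; the top-level field names are assumptions based on current Ceph JSON output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    df = json.loads(out)
    stats = df["stats"]
    # total_bytes / total_used_bytes / total_avail_bytes mirror the
    # "378 MiB used, 60 GiB / 60 GiB avail" figures ceph-mgr logs above.
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])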
Feb 25 12:16:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786276841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.148 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.150 244018 DEBUG nova.virt.libvirt.vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.150 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.151 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
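The "Converting VIF ... / Converted object VIFOpenVSwitch" pair is Nova handing the Neutron port dict to the os-vif library, which returns a typed VIF object that the named plugin later plugs into br-int. A rough sketch of driving os-vif directly, with field values copied from the converted object logged above; the object and call names are from the os_vif public API as I recall it, and this is illustrative rather than Nova's exact call path:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load plugin entry points (ovs, linux_bridge, ...)

    # Typed equivalents of the VIFOpenVSwitch object in the log.
    net = network.Network(id="d3fb36f1-0e88-43b4-a8a4-3844d55f1de8",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(id="277c556d-c41e-4d6d-9f29-56e96f6a65e2",
                              address="fa:16:3e:9f:6d:04",
                              vif_name="tap277c556d-c4",
                              bridge_name="br-int",
                              plugin="ovs",
                              network=net)
    inst = instance_info.InstanceInfo(uuid="6de08989-13cc-415b-adc9-04b338e13d0f",
                                      name="instance-00000005")

    # Dispatches to the plugin named in port.plugin; needs privileges and a
    # running Open vSwitch, so treat this as a sketch, not a safe no-op.
    os_vif.plug(port, inst)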
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.152 244018 DEBUG nova.objects.instance [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.188 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <uuid>6de08989-13cc-415b-adc9-04b338e13d0f</uuid>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <name>instance-00000005</name>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-685039122</nova:name>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:16:00</nova:creationTime>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1872382025">
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:user uuid="ee6c6e44a0624805afeb68a67c99f325">tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member</nova:user>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:project uuid="0cd0968a9a1b4b9e984b0a10a6ac77a8">tempest-ServersWithSpecificFlavorTestJSON-1630390846</nova:project>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <nova:port uuid="277c556d-c41e-4d6d-9f29-56e96f6a65e2">
Feb 25 12:16:02 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <system>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="serial">6de08989-13cc-415b-adc9-04b338e13d0f</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="uuid">6de08989-13cc-415b-adc9-04b338e13d0f</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </system>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <os>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </os>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <features>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </features>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6de08989-13cc-415b-adc9-04b338e13d0f_disk">
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6de08989-13cc-415b-adc9-04b338e13d0f_disk.config">
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:02 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:9f:6d:04"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <target dev="tap277c556d-c4"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/console.log" append="off"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <video>
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </video>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:16:02 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:16:02 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:16:02 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:16:02 compute-0 nova_compute[244014]: </domain>
Feb 25 12:16:02 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Preparing to wait for external event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.190 244018 DEBUG nova.virt.libvirt.vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.190 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.191 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.191 244018 DEBUG os_vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.232 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.233 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.233 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLOUT] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840390076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.351 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.364 244018 INFO oslo.privsep.daemon [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpi0flyv1m/privsep.sock']
Feb 25 12:16:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3832437348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1786276841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/840390076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.572 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.573 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.577 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.577 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.640 244018 DEBUG nova.objects.instance [None req-3f7ddc02-fe16-4b96-b871-08ac2002a35c 1175078d14ec4976bef47e03b7b1c6a4 1764ea8d72cf4177bc85b3771c21c585 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.667 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021762.6668618, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.667 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Paused (Lifecycle Event)
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.722 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.748 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4782MB free_disk=59.918520422652364GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.752 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6de08989-13cc-415b-adc9-04b338e13d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6d979dde-168d-4976-99a4-bb4e3eb22ae0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:16:02 compute-0 nova_compute[244014]: 2026-02-25 12:16:02.956 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:03 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 25 12:16:03 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Consumed 2.257s CPU time.
Feb 25 12:16:03 compute-0 systemd-machined[210048]: Machine qemu-5-instance-00000006 terminated.
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.260 244018 INFO oslo.privsep.daemon [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Spawned new privsep daemon via rootwrap
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.150 253165 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.155 253165 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.160 253165 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.160 253165 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253165
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.292 244018 DEBUG nova.compute.manager [None req-3f7ddc02-fe16-4b96-b871-08ac2002a35c 1175078d14ec4976bef47e03b7b1c6a4 1764ea8d72cf4177bc85b3771c21c585 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:03 compute-0 ceph-mon[76335]: pgmap v916: 305 pgs: 305 active+clean; 309 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 148 op/s
Feb 25 12:16:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 12:16:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239469560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.538 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.543 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap277c556d-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap277c556d-c4, col_values=(('external_ids', {'iface-id': '277c556d-c41e-4d6d-9f29-56e96f6a65e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:6d:04', 'vm-uuid': '6de08989-13cc-415b-adc9-04b338e13d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:03 compute-0 NetworkManager[49836]: <info>  [1772021763.6581] manager: (tap277c556d-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.664 244018 INFO os_vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4')
Feb 25 12:16:03 compute-0 nova_compute[244014]: 2026-02-25 12:16:03.824 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.415 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updated VIF entry in instance network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.416 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.461 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.462 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4239469560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.571 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.584 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No VIF found with MAC fa:16:3e:9f:6d:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.586 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Using config drive
Feb 25 12:16:04 compute-0 nova_compute[244014]: 2026-02-25 12:16:04.617 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.142 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:05 compute-0 ceph-mon[76335]: pgmap v917: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 12:16:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.7 MiB/s wr, 140 op/s
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.693 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating config drive at /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.701 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn1u5ne6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.828 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn1u5ne6j" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.880 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:05 compute-0 nova_compute[244014]: 2026-02-25 12:16:05.884 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.428 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.429 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deleting local config drive /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config because it was imported into RBD.
Feb 25 12:16:06 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 25 12:16:06 compute-0 kernel: tap277c556d-c4: entered promiscuous mode
Feb 25 12:16:06 compute-0 NetworkManager[49836]: <info>  [1772021766.5249] manager: (tap277c556d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb 25 12:16:06 compute-0 ovn_controller[147040]: 2026-02-25T12:16:06Z|00027|binding|INFO|Claiming lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 for this chassis.
Feb 25 12:16:06 compute-0 ovn_controller[147040]: 2026-02-25T12:16:06Z|00028|binding|INFO|277c556d-c41e-4d6d-9f29-56e96f6a65e2: Claiming fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.533 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.559 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:6d:04 10.100.0.6'], port_security=['fa:16:3e:9f:6d:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de08989-13cc-415b-adc9-04b338e13d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=277c556d-c41e-4d6d-9f29-56e96f6a65e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:16:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.562 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 bound to our chassis
Feb 25 12:16:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.566 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.567 157129 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp2ek8exws/privsep.sock']
Feb 25 12:16:06 compute-0 systemd-udevd[253249]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:16:06 compute-0 systemd-machined[210048]: New machine qemu-6-instance-00000005.
Feb 25 12:16:06 compute-0 NetworkManager[49836]: <info>  [1772021766.6087] device (tap277c556d-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:16:06 compute-0 NetworkManager[49836]: <info>  [1772021766.6095] device (tap277c556d-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:06 compute-0 ovn_controller[147040]: 2026-02-25T12:16:06Z|00029|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 ovn-installed in OVS
Feb 25 12:16:06 compute-0 ovn_controller[147040]: 2026-02-25T12:16:06Z|00030|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 up in Southbound
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:06 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000005.
Feb 25 12:16:06 compute-0 ceph-mon[76335]: pgmap v918: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.7 MiB/s wr, 140 op/s
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.958 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.959 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.963 244018 INFO nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Terminating instance
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquired lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:06 compute-0 nova_compute[244014]: 2026-02-25 12:16:06.965 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.174 157129 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.175 157129 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2ek8exws/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.076 253268 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.081 253268 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.084 253268 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.085 253268 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253268
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09de8cf9-5db8-4229-b592-bff9ceb82099]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.320 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.319154, 6de08989-13cc-415b-adc9-04b338e13d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.320 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Started (Lifecycle Event)
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.343 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.3194554, 6de08989-13cc-415b-adc9-04b338e13d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.348 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Paused (Lifecycle Event)
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.369 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.374 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.378 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.404 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.458 244018 DEBUG nova.compute.manager [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.459 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.459 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.460 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.460 244018 DEBUG nova.compute.manager [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Processing event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.461 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.466 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.4660094, 6de08989-13cc-415b-adc9-04b338e13d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.466 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Resumed (Lifecycle Event)
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.469 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.473 244018 INFO nova.virt.libvirt.driver [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance spawned successfully.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.473 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.497 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.505 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 197 op/s
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.510 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.510 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.511 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.512 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.513 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.513 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.560 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.601 244018 INFO nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 14.47 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.601 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.675 244018 INFO nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 16.56 seconds to build instance.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.686 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.700 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.707 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Releasing lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.708 244018 DEBUG nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.716 244018 INFO nova.virt.libvirt.driver [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance destroyed successfully.
Feb 25 12:16:07 compute-0 nova_compute[244014]: 2026-02-25 12:16:07.717 244018 DEBUG nova.objects.instance [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'resources' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.726 253268 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.727 253268 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.727 253268 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.184 244018 INFO nova.virt.libvirt.driver [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deleting instance files /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0_del
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.185 244018 INFO nova.virt.libvirt.driver [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deletion of /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0_del complete
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.255 244018 INFO nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 0.55 seconds to destroy the instance on the hypervisor.
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.256 244018 DEBUG oslo.service.loopingcall [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.257 244018 DEBUG nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.257 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e5cd66-6be3-418e-ba1b-36ce97e0927a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.330 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fb36f1-01 in ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.334 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fb36f1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[39053281-eb6b-43e3-9134-efce1e9fbae5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[979d9fea-bab6-4ab9-acc6-61f20ea855c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.365 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed7b61b-502a-4dc6-87a0-32b4bb4139b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9eb080-9739-4ad2-8bbc-ea11f2b59b8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.385 157129 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp1h0_cpqu/privsep.sock']
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:08 compute-0 ceph-mon[76335]: pgmap v919: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 197 op/s
Feb 25 12:16:08 compute-0 nova_compute[244014]: 2026-02-25 12:16:08.936 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.961 157129 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.963 157129 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1h0_cpqu/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.870 253343 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.875 253343 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.879 253343 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.879 253343 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253343
Feb 25 12:16:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.967 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0337b7-7500-4f2d-9113-80724f9c89d0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.202 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.227 244018 INFO nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 0.97 seconds to deallocate network for instance.
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 134 op/s
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.818 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.819 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.826 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:09 compute-0 nova_compute[244014]: 2026-02-25 12:16:09.827 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.876 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4cd20b-dd5c-4f38-a1b9-b325f9b44b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 NetworkManager[49836]: <info>  [1772021769.8960] manager: (tapd3fb36f1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6689e1ed-1851-4104-9e91-a8647e09c6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 systemd-udevd[253355]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.922 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[90b54755-c9a1-45b9-8254-72c86af1f8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.926 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[eca92c68-2365-41a6-83ee-8df38e93056a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 NetworkManager[49836]: <info>  [1772021769.9485] device (tapd3fb36f1-00): carrier: link connected
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.951 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d750fc87-7cc8-407d-bb04-5625da359b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.965 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[011b4c04-2b02-4b41-9b22-e608da8c14a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373935, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253373, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.981 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ec494f-050d-4f80-93ee-353b9c226265]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:f004'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373935, 'tstamp': 373935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253375, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[658085f5-8a7a-47a8-88d4-615db235503c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373935, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253376, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2960a571-bd55-46c1-b336-2c2b874dd024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6eacb5-ca8b-4c83-acaf-4f9ae1e13d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.056 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fb36f1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:10 compute-0 NetworkManager[49836]: <info>  [1772021770.0588] manager: (tapd3fb36f1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.058 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:10 compute-0 kernel: tapd3fb36f1-00: entered promiscuous mode
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fb36f1-00, col_values=(('external_ids', {'iface-id': '642d12d0-ef5c-4bc5-ba96-4b85e033986b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:10 compute-0 ovn_controller[147040]: 2026-02-25T12:16:10Z|00031|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.090 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd523c3-da9f-4cb1-8135-d708b066e2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.092 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:16:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.093 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'env', 'PROCESS_TAG=haproxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:16:10 compute-0 podman[253409]: 2026-02-25 12:16:10.428620273 +0000 UTC m=+0.072970193 container create 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.436 244018 DEBUG nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.437 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.438 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.438 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.439 244018 DEBUG nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.439 244018 WARNING nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received unexpected event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with vm_state active and task_state None.
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.454 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:16:10 compute-0 podman[253409]: 2026-02-25 12:16:10.378350257 +0000 UTC m=+0.022700217 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:16:10 compute-0 systemd[1]: Started libpod-conmon-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope.
Feb 25 12:16:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df85cc05f2dfb2a93e56b0f9c6c65dbdcac823808956d17e7a377f617bac8210/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:10 compute-0 podman[253409]: 2026-02-25 12:16:10.534749582 +0000 UTC m=+0.179099542 container init 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:16:10 compute-0 podman[253409]: 2026-02-25 12:16:10.546149521 +0000 UTC m=+0.190499431 container start 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.579 244018 DEBUG oslo_concurrency.processutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:10 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : New worker (253431) forked
Feb 25 12:16:10 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : Loading success.
Feb 25 12:16:10 compute-0 nova_compute[244014]: 2026-02-25 12:16:10.645 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:10 compute-0 ceph-mon[76335]: pgmap v920: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 134 op/s
Feb 25 12:16:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002540957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.087 244018 DEBUG oslo_concurrency.processutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.093 244018 DEBUG nova.compute.provider_tree [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.146 244018 DEBUG nova.scheduler.client.report [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1506] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1513] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <warn>  [1772021771.1515] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1524] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1529] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <warn>  [1772021771.1530] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1539] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1548] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1553] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 25 12:16:11 compute-0 NetworkManager[49836]: <info>  [1772021771.1556] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:11 compute-0 ovn_controller[147040]: 2026-02-25T12:16:11Z|00032|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.347 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.350 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.357 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.358 244018 INFO nova.compute.claims [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.404 244018 INFO nova.scheduler.client.report [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Deleted allocations for instance 6d979dde-168d-4976-99a4-bb4e3eb22ae0
Feb 25 12:16:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 292 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.567 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.587 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3002540957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.807 244018 DEBUG nova.compute.manager [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.807 244018 DEBUG nova.compute.manager [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing instance network info cache due to event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:11 compute-0 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:16:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664686447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.095 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.101 244018 DEBUG nova.compute.provider_tree [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.123 244018 DEBUG nova.scheduler.client.report [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
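The inventory above is what the resource tracker hands to Placement, and the schedulable capacity for each resource class follows directly from it: effective = (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the exact figures logged:

    # Effective capacity Placement derives from the inventory logged above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, effective)
    # -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2

So this 8-vCPU host advertises 32 schedulable vCPUs because of the 4.0 CPU overcommit ratio, while disk is undercommitted at 0.9.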
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.186 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.187 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.352 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.408 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.443 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.601 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.603 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.605 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating image(s)
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.635 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.669 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.707 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.713 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.815 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
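The two lines above show nova probing the cached base image through oslo.concurrency's prlimit guard, which caps the helper at 1 GiB of address space and 30 CPU-seconds so a corrupt or hostile image cannot wedge the compute host. A minimal sketch reproducing the same invocation, assuming oslo.concurrency and qemu-img are installed and the base file exists:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    # Same command line as logged: prlimit wrapper, sanitized locale, JSON output.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])  # e.g. the format and byte size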
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.816 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.817 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.818 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:12 compute-0 ceph-mon[76335]: pgmap v921: 305 pgs: 305 active+clean; 292 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Feb 25 12:16:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3664686447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.851 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.857 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 661b348d-5b73-45ed-8357-3aefed90d054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:12 compute-0 podman[253500]: 2026-02-25 12:16:12.892925295 +0000 UTC m=+0.066355536 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 12:16:12 compute-0 podman[253542]: 2026-02-25 12:16:12.932568345 +0000 UTC m=+0.106315606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.977 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.977 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.980 244018 INFO nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Terminating instance
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquired lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:12 compute-0 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.155 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 661b348d-5b73-45ed-8357-3aefed90d054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.192 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.244 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] resizing rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
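The import at 12:16:13.155 and the resize at 12:16:13.244 together build the instance's root disk: the flat base file becomes a format-2 RBD image in the vms pool, then grows to 1073741824 bytes (1 GiB), matching the m1.nano flavor's root_gb=1. Nova performs the resize through librbd (rbd_utils.resize); the equivalent CLI sequence is sketched below as an assumption-labeled stand-in, reusing the credentials from the log:

    import subprocess

    img = '661b348d-5b73-45ed-8357-3aefed90d054_disk'
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    # Import the flat base file as a format-2 RBD image (verbatim from the log).
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
         img, '--image-format=2'] + auth)
    # Grow it to the flavor's 1 GiB root disk (nova itself does this via librbd).
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', img,
                           '--size', '1G'] + auth)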
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.364 244018 DEBUG nova.objects.instance [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.441 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.442 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Ensure instance console log exists: /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.446 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.450 244018 WARNING nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.458 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.459 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.476 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.478 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.478 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.481 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
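The topology search above degenerates for a single vCPU: with no flavor or image constraints, the limits default to 65536 each, and the only factorization of 1 into sockets x cores x threads is 1:1:1. A simplified sketch of that enumeration (an illustration of the idea, not nova's exact implementation in nova/virt/hardware.py):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) factorization within the limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as logged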
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.485 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.506 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.579 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Releasing lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.580 244018 DEBUG nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:16:13 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 25 12:16:13 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 12.653s CPU time.
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:13 compute-0 systemd-machined[210048]: Machine qemu-4-instance-00000004 terminated.
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.743 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updated VIF entry in instance network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.744 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
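The refreshed cache entry above is plain JSON, so the addressing for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 can be recovered mechanically. A minimal sketch, where cache_entry is a hypothetical variable holding the bracketed list logged above:

    import json

    vif = json.loads(cache_entry)[0]        # single VIF in this cache entry
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            floats = [f['address'] for f in ip.get('floating_ips', [])]
            print('fixed', ip['address'], 'floating', floats)
    # -> fixed 10.100.0.6 floating ['192.168.122.192']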
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.769 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.798 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance destroyed successfully.
Feb 25 12:16:13 compute-0 nova_compute[244014]: 2026-02-25 12:16:13.799 244018 DEBUG nova.objects.instance [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'resources' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367881996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.101 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.134 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.139 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.310 244018 INFO nova.virt.libvirt.driver [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deleting instance files /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_del
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.311 244018 INFO nova.virt.libvirt.driver [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deletion of /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_del complete
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.455 244018 INFO nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 0.88 seconds to destroy the instance on the hypervisor.
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.456 244018 DEBUG oslo.service.loopingcall [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.456 244018 DEBUG nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.457 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:16:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913879078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.667 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.668 244018 DEBUG nova.objects.instance [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:14 compute-0 ceph-mon[76335]: pgmap v922: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Feb 25 12:16:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1367881996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2913879078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:14 compute-0 nova_compute[244014]: 2026-02-25 12:16:14.861 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <uuid>661b348d-5b73-45ed-8357-3aefed90d054</uuid>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <name>instance-00000007</name>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1233417614</nova:name>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:16:13</nova:creationTime>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:user uuid="6fff08b2934f41e8be5a7d014dc41013">tempest-ServerDiagnosticsV248Test-870949120-project-member</nova:user>
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <nova:project uuid="c661c7cccd024c7080e4a22a47c088b7">tempest-ServerDiagnosticsV248Test-870949120</nova:project>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <system>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="serial">661b348d-5b73-45ed-8357-3aefed90d054</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="uuid">661b348d-5b73-45ed-8357-3aefed90d054</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </system>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <os>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </os>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <features>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </features>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/661b348d-5b73-45ed-8357-3aefed90d054_disk">
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/661b348d-5b73-45ed-8357-3aefed90d054_disk.config">
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/console.log" append="off"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <video>
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </video>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:16:14 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:16:14 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:16:14 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:16:14 compute-0 nova_compute[244014]: </domain>
Feb 25 12:16:14 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
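The dump above is the complete libvirt guest definition nova generated for instance-00000007: both the root disk and the config-drive CD-ROM are network disks served over RBD from 192.168.122.100:6789 with cephx auth. The device mapping can be pulled back out of such a dump with the standard library; a minimal sketch, where domain_xml is a hypothetical variable holding the <domain> text above:

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(domain_xml)
    for disk in dom.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(disk.get('device'), src.get('protocol'), src.get('name'),
              '->', tgt.get('dev'))
    # -> disk  rbd vms/661b348d-..._disk        -> vda
    # -> cdrom rbd vms/661b348d-..._disk.config -> sda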
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.070 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.071 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.072 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Using config drive
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.101 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.215 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 152 op/s
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.554 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.618 244018 INFO nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 1.16 seconds to deallocate network for instance.
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.695 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating config drive at /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.702 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp28h6jrws execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.764 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.765 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
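The Acquiring/acquired/released triplets around "compute_resources" are oslo.concurrency's lockutils at work: update_usage runs under a named semaphore so concurrent resource-tracker updates serialize. A sketch of the same pattern (the lock name is taken from the log; the function body is hypothetical):

    from oslo_concurrency import lockutils

    # Produces the same "Acquiring lock ... / Lock ... acquired ... waited Ns"
    # DEBUG lines seen above when oslo debug logging is enabled.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Hypothetical critical section; the real one mutates tracked inventory.
        pass

    update_usage()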
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.827 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp28h6jrws" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
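That mkisofs run is what materializes the config drive as an ISO9660 image. A sketch reproducing the invocation with subprocess; the flags are copied from the command logged above, while the output path and staging directory are placeholders:

    import subprocess

    iso_path = "/tmp/disk.config"      # real run: /var/lib/nova/instances/<uuid>/disk.config
    staging = "/tmp/configdrive-src"   # real run: a tmpdir holding the metadata tree

    subprocess.run(
        ["mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute", "-quiet",
         "-J", "-r",         # Joliet + Rock Ridge extensions
         "-V", "config-2",   # the volume label cloud-init probes for
         staging],
        check=True,
    )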
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.866 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.870 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config 661b348d-5b73-45ed-8357-3aefed90d054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:15 compute-0 nova_compute[244014]: 2026-02-25 12:16:15.914 244018 DEBUG oslo_concurrency.processutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.029 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config 661b348d-5b73-45ed-8357-3aefed90d054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.030 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deleting local config drive /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config because it was imported into RBD.
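With RBD-backed instances the freshly built ISO is then pushed into the vms pool and the local copy dropped, which is exactly the import-then-delete pair logged above. A sketch of those two steps (CLI flags from the log; the source path and image name are placeholders):

    import os
    import subprocess

    src = "/tmp/disk.config"          # placeholder for the local config drive
    image = "myinstance_disk.config"  # placeholder; the real name is <uuid>_disk.config

    subprocess.run(
        ["rbd", "import", "--pool", "vms", src, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(src)  # "Deleting local config drive ... because it was imported into RBD."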
Feb 25 12:16:16 compute-0 systemd-machined[210048]: New machine qemu-7-instance-00000007.
Feb 25 12:16:16 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb 25 12:16:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135789209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.514 244018 DEBUG oslo_concurrency.processutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
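The "ceph df --format=json" call (0.599s here) is how the resource tracker samples cluster capacity. A sketch parsing the same output; the field names match recent Ceph releases but can vary across versions, so treat the keys as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(out)

    stats = df["stats"]
    print(f"cluster: {stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")
    for pool in df["pools"]:
        print(pool["name"], pool["stats"].get("bytes_used"))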
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.519 244018 DEBUG nova.compute.provider_tree [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.566 244018 DEBUG nova.scheduler.client.report [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
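The inventory dict above is what placement turns into schedulable capacity: for each resource class, capacity = (total - reserved) * allocation_ratio. Worked through with the logged values:

    # Values copied from the inventory line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2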
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.573 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021776.5727217, 661b348d-5b73-45ed-8357-3aefed90d054 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.573 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Resumed (Lifecycle Event)
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.575 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.575 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.578 244018 INFO nova.virt.libvirt.driver [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance spawned successfully.
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.579 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.646 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.650 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.651 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.651 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.652 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.652 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.653 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
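The six "Found default" lines record the buses and models the driver actually picked, so later operations (attach, rebuild) keep using them even though the image never set the properties. A sketch of the filtering idea; the helper below is hypothetical, not Nova's _register_undefined_instance_details itself:

    # Defaults the driver discovered for this guest, per the log lines above.
    discovered = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props, discovered):
        """Hypothetical helper: record only properties the image left unset."""
        return {k: v for k, v in discovered.items() if k not in image_props}

    print(register_undefined({"hw_disk_bus": "scsi"}, discovered))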
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.685 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.752 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.753 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021776.5746243, 661b348d-5b73-45ed-8357-3aefed90d054 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.753 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Started (Lifecycle Event)
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.795 244018 INFO nova.scheduler.client.report [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Deleted allocations for instance ee9e86a4-8a34-43a9-baf1-f9b2e7f85534
Feb 25 12:16:16 compute-0 ceph-mon[76335]: pgmap v923: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 152 op/s
Feb 25 12:16:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3135789209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.875 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.884 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.945 244018 INFO nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 4.34 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:16 compute-0 nova_compute[244014]: 2026-02-25 12:16:16.946 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:17 compute-0 nova_compute[244014]: 2026-02-25 12:16:17.325 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] During sync_power_state the instance has a pending task (spawning). Skip.
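Both "Skip" lines are the power-state sync guard: while a task (here 'spawning') is pending, the DB value (0, NOSTATE) is allowed to disagree with the hypervisor (1, RUNNING) and no correction is made. A condensed sketch of that decision, using the numeric constants from nova.compute.power_state; the function itself is a hypothetical simplification:

    # Numeric values as defined in nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return "skip"        # e.g. task_state='spawning' in the log above
        if db_power_state != vm_power_state:
            return "update-db"   # converge the DB toward the hypervisor
        return "in-sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip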
Feb 25 12:16:17 compute-0 nova_compute[244014]: 2026-02-25 12:16:17.385 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:17 compute-0 nova_compute[244014]: 2026-02-25 12:16:17.408 244018 INFO nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 6.79 seconds to build instance.
Feb 25 12:16:17 compute-0 nova_compute[244014]: 2026-02-25 12:16:17.431 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.294 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021763.2925382, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.294 244018 INFO nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Stopped (Lifecycle Event)
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:18 compute-0 ovn_controller[147040]: 2026-02-25T12:16:18Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 12:16:18 compute-0 ovn_controller[147040]: 2026-02-25T12:16:18Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.392 244018 DEBUG nova.compute.manager [None req-4709e7fe-bf25-46e9-a89e-9e3f30c8e97a - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.683 244018 DEBUG nova.compute.manager [None req-508a393b-b326-48b1-be60-4cd4705114a9 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:18 compute-0 nova_compute[244014]: 2026-02-25 12:16:18.686 244018 INFO nova.compute.manager [None req-508a393b-b326-48b1-be60-4cd4705114a9 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Retrieving diagnostics
Feb 25 12:16:18 compute-0 ceph-mon[76335]: pgmap v924: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Feb 25 12:16:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Feb 25 12:16:20 compute-0 ceph-mon[76335]: pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Feb 25 12:16:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 262 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Feb 25 12:16:22 compute-0 ceph-mon[76335]: pgmap v926: 305 pgs: 305 active+clean; 262 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Feb 25 12:16:23 compute-0 nova_compute[244014]: 2026-02-25 12:16:23.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 245 op/s
Feb 25 12:16:23 compute-0 nova_compute[244014]: 2026-02-25 12:16:23.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:24 compute-0 ceph-mon[76335]: pgmap v927: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 245 op/s
Feb 25 12:16:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb 25 12:16:27 compute-0 ceph-mon[76335]: pgmap v928: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb 25 12:16:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 215 op/s
Feb 25 12:16:28 compute-0 nova_compute[244014]: 2026-02-25 12:16:28.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:28 compute-0 nova_compute[244014]: 2026-02-25 12:16:28.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:28 compute-0 nova_compute[244014]: 2026-02-25 12:16:28.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021773.7963176, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:28 compute-0 nova_compute[244014]: 2026-02-25 12:16:28.797 244018 INFO nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Stopped (Lifecycle Event)
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.006 244018 DEBUG nova.compute.manager [None req-bf278eb8-5fb7-4543-a77d-3cccc287a7a9 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:29 compute-0 ceph-mon[76335]: pgmap v929: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 215 op/s
Feb 25 12:16:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.188 244018 DEBUG nova.compute.manager [None req-f4b28b12-c0f7-4224-a724-6b057459f09f 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.192 244018 INFO nova.compute.manager [None req-f4b28b12-c0f7-4224-a724-6b057459f09f 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Retrieving diagnostics
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.261 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.261 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.262 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.263 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.263 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.266 244018 INFO nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Terminating instance
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.268 244018 DEBUG nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:16:29 compute-0 kernel: tap277c556d-c4 (unregistering): left promiscuous mode
Feb 25 12:16:29 compute-0 NetworkManager[49836]: <info>  [1772021789.3208] device (tap277c556d-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 ovn_controller[147040]: 2026-02-25T12:16:29Z|00033|binding|INFO|Releasing lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 from this chassis (sb_readonly=0)
Feb 25 12:16:29 compute-0 ovn_controller[147040]: 2026-02-25T12:16:29Z|00034|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 down in Southbound
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 ovn_controller[147040]: 2026-02-25T12:16:29Z|00035|binding|INFO|Removing iface tap277c556d-c4 ovn-installed in OVS
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 25 12:16:29 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000005.scope: Consumed 12.369s CPU time.
Feb 25 12:16:29 compute-0 systemd-machined[210048]: Machine qemu-6-instance-00000005 terminated.
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.439 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:6d:04 10.100.0.6'], port_security=['fa:16:3e:9f:6d:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de08989-13cc-415b-adc9-04b338e13d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=277c556d-c41e-4d6d-9f29-56e96f6a65e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.441 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 unbound from our chassis
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.442 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e52ddb22-8381-4075-8898-965c8f9f3732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.443 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace which is not needed anymore
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.507 244018 INFO nova.virt.libvirt.driver [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance destroyed successfully.
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.508 244018 DEBUG nova.objects.instance [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'resources' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 145 op/s
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.530 244018 DEBUG nova.virt.libvirt.vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:16:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.531 244018 DEBUG nova.network.os_vif_util [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.532 244018 DEBUG nova.network.os_vif_util [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.533 244018 DEBUG os_vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap277c556d-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.544 244018 INFO os_vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4')
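The unplug path boils down to the DelPortCommand above: remove the tap interface from br-int via OVSDB. The same operation issued directly through ovsdbapp, as a sketch; the socket path is the usual local default (an assumption) and the port name is taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of DelPortCommand(port=tap277c556d-c4, bridge=br-int, if_exists=True).
    api.del_port("tap277c556d-c4", bridge="br-int",
                 if_exists=True).execute(check_error=True)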
Feb 25 12:16:29 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : haproxy version is 2.8.14-c23fe91
Feb 25 12:16:29 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : path to executable is /usr/sbin/haproxy
Feb 25 12:16:29 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [WARNING]  (253428) : Exiting Master process...
Feb 25 12:16:29 compute-0 systemd[1]: libpod-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope: Deactivated successfully.
Feb 25 12:16:29 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [ALERT]    (253428) : Current worker (253431) exited with code 143 (Terminated)
Feb 25 12:16:29 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [WARNING]  (253428) : All workers exited. Exiting... (0)
Feb 25 12:16:29 compute-0 conmon[253424]: conmon 4ac83e63c68ca85fc408 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope/container/memory.events
Feb 25 12:16:29 compute-0 podman[253948]: 2026-02-25 12:16:29.640661136 +0000 UTC m=+0.077892349 container died 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-df85cc05f2dfb2a93e56b0f9c6c65dbdcac823808956d17e7a377f617bac8210-merged.mount: Deactivated successfully.
Feb 25 12:16:29 compute-0 podman[253948]: 2026-02-25 12:16:29.706948479 +0000 UTC m=+0.144179672 container cleanup 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:16:29 compute-0 systemd[1]: libpod-conmon-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope: Deactivated successfully.
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.743 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.744 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.744 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.745 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.746 244018 INFO nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Terminating instance
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquired lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:16:29 compute-0 podman[253993]: 2026-02-25 12:16:29.808302386 +0000 UTC m=+0.082265300 container remove 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.815 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e876bc-ae39-4830-b8a3-e15333c7a202]: (4, ('Wed Feb 25 12:16:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c)\n4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c\nWed Feb 25 12:16:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c)\n4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.818 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6186b2b4-c147-47d0-9357-6d21073fe584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.820 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 kernel: tapd3fb36f1-00: left promiscuous mode
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.834 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e50d6905-05d8-4a7b-8669-13b43fc371ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.846 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56db67ab-6a38-42aa-a7a4-3e2c69e5a38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.848 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6c1ef0-72e9-4dff-820b-d8b64607d5e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.861 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4aa23a8-9d97-4a69-9ab3-1e6853af6ea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373928, 'reachable_time': 29959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254012, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dd3fb36f1\x2d0e88\x2d43b4\x2da8a4\x2d3844d55f1de8.mount: Deactivated successfully.
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.871 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:16:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.872 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddc24c6-bcf6-44ca-b3ab-37772efc2167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.972 244018 INFO nova.virt.libvirt.driver [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deleting instance files /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f_del
Feb 25 12:16:29 compute-0 nova_compute[244014]: 2026-02-25 12:16:29.973 244018 INFO nova.virt.libvirt.driver [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deletion of /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f_del complete
Feb 25 12:16:30 compute-0 nova_compute[244014]: 2026-02-25 12:16:30.365 244018 INFO nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 1.10 seconds to destroy the instance on the hypervisor.
Feb 25 12:16:30 compute-0 nova_compute[244014]: 2026-02-25 12:16:30.366 244018 DEBUG oslo.service.loopingcall [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:16:30 compute-0 nova_compute[244014]: 2026-02-25 12:16:30.367 244018 DEBUG nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:16:30 compute-0 nova_compute[244014]: 2026-02-25 12:16:30.367 244018 DEBUG nova.network.neutron [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:16:30 compute-0 nova_compute[244014]: 2026-02-25 12:16:30.738 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:16:30
Feb 25 12:16:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:16:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:16:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms']
Feb 25 12:16:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.012 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:16:31 compute-0 ceph-mon[76335]: pgmap v930: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 145 op/s
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.202 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.231 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Releasing lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.232 244018 DEBUG nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:16:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 25 12:16:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 11.601s CPU time.
Feb 25 12:16:31 compute-0 systemd-machined[210048]: Machine qemu-7-instance-00000007 terminated.
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.454 244018 INFO nova.virt.libvirt.driver [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance destroyed successfully.
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.455 244018 DEBUG nova.objects.instance [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'resources' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 265 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 157 op/s
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:16:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.895 244018 INFO nova.virt.libvirt.driver [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deleting instance files /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054_del
Feb 25 12:16:31 compute-0 nova_compute[244014]: 2026-02-25 12:16:31.896 244018 INFO nova.virt.libvirt.driver [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deletion of /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054_del complete
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.002 244018 INFO nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 0.77 seconds to destroy the instance on the hypervisor.
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.004 244018 DEBUG oslo.service.loopingcall [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.005 244018 DEBUG nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.005 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.278 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.349 244018 DEBUG nova.network.neutron [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.390 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.444 244018 INFO nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 0.44 seconds to deallocate network for instance.
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.450 244018 INFO nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 2.08 seconds to deallocate network for instance.
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.596 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.596 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.629 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:32 compute-0 nova_compute[244014]: 2026-02-25 12:16:32.692 244018 DEBUG oslo_concurrency.processutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.169 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.170 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.170 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 WARNING nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received unexpected event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with vm_state deleted and task_state None.
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-deleted-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557133748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:33 compute-0 ceph-mon[76335]: pgmap v931: 305 pgs: 305 active+clean; 265 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 157 op/s
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.225 244018 DEBUG oslo_concurrency.processutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.231 244018 DEBUG nova.compute.provider_tree [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.285 244018 DEBUG nova.scheduler.client.report [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.367 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.372 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 207 op/s
Feb 25 12:16:33 compute-0 nova_compute[244014]: 2026-02-25 12:16:33.990 244018 INFO nova.scheduler.client.report [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Deleted allocations for instance 661b348d-5b73-45ed-8357-3aefed90d054
Feb 25 12:16:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.189 244018 DEBUG oslo_concurrency.processutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.227 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3557133748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974601813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.774 244018 DEBUG oslo_concurrency.processutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.780 244018 DEBUG nova.compute.provider_tree [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:34 compute-0 nova_compute[244014]: 2026-02-25 12:16:34.802 244018 DEBUG nova.scheduler.client.report [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:35 compute-0 nova_compute[244014]: 2026-02-25 12:16:35.044 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:35 compute-0 nova_compute[244014]: 2026-02-25 12:16:35.148 244018 INFO nova.scheduler.client.report [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Deleted allocations for instance 6de08989-13cc-415b-adc9-04b338e13d0f
Feb 25 12:16:35 compute-0 ceph-mon[76335]: pgmap v932: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 207 op/s
Feb 25 12:16:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2974601813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.168 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.219 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.220 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.262 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.400 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.400 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.410 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.411 244018 INFO nova.compute.claims [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:16:36 compute-0 nova_compute[244014]: 2026-02-25 12:16:36.558 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226950202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.136 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.142 244018 DEBUG nova.compute.provider_tree [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.194 244018 DEBUG nova.scheduler.client.report [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.230 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.232 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:16:37 compute-0 ceph-mon[76335]: pgmap v933: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 25 12:16:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3226950202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.293 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.294 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.316 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.367 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.457 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.459 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.460 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating image(s)
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.494 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.530 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.564 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.568 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.644 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.645 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.646 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.647 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.675 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.679 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 20c8b8a1-1561-49f5-9fce-af2840195a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.924 244018 DEBUG nova.policy [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee6c6e44a0624805afeb68a67c99f325', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:16:37 compute-0 nova_compute[244014]: 2026-02-25 12:16:37.979 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 20c8b8a1-1561-49f5-9fce-af2840195a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.058 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] resizing rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.177 244018 DEBUG nova.objects.instance [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.407 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.458 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.463 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.464 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.465 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.485 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.486 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.540 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.541 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.572 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:38 compute-0 nova_compute[244014]: 2026-02-25 12:16:38.577 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:39 compute-0 ceph-mon[76335]: pgmap v934: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.388 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.505 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.506 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Ensure instance console log exists: /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.507 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.507 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.508 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 768 KiB/s wr, 99 op/s
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:39 compute-0 nova_compute[244014]: 2026-02-25 12:16:39.733 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Successfully created port: 5fae567b-e26e-4045-8efe-6d5fc03a1658 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:16:41 compute-0 ceph-mon[76335]: pgmap v935: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 768 KiB/s wr, 99 op/s
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 159 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 25 12:16:41 compute-0 nova_compute[244014]: 2026-02-25 12:16:41.853 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Successfully updated port: 5fae567b-e26e-4045-8efe-6d5fc03a1658 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:16:41 compute-0 nova_compute[244014]: 2026-02-25 12:16:41.874 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:41 compute-0 nova_compute[244014]: 2026-02-25 12:16:41.874 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:41 compute-0 nova_compute[244014]: 2026-02-25 12:16:41.875 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.158101789232669e-05 of space, bias 1.0, pg target 0.018474305367698007 quantized to 32 (current 32)
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024895927298540185 of space, bias 1.0, pg target 0.7468778189562055 quantized to 32 (current 32)
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:41 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3017775514531495e-06 of space, bias 4.0, pg target 0.0015621330617437794 quantized to 16 (current 16)
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:16:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
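The pg_autoscaler lines above are reproducible from the numbers they print: each pool's raw pg target is usage_ratio × bias × (target PGs per OSD × OSD count), then quantized to a power of two with a per-pool floor. The logged values imply a factor of 300, consistent with the default mon_target_pg_per_osd = 100 and 3 OSDs (an assumption; the OSD count is not in this excerpt):

    # Reproducing the pg_autoscaler arithmetic from the logged numbers.
    TARGET_PG_PER_OSD = 100   # Ceph default (assumed)
    NUM_OSDS = 3              # assumed; yields the factor of 300 seen above

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    def quantize(target, pool_min):
        # Next power of two >= target, never below the pool's floor
        # (1 for '.mgr', 16 for the CephFS metadata pool, 32 elsewhere here).
        n = 1
        while n < target:
            n *= 2
        return max(n, pool_min)

    print(raw_pg_target(0.0024895927298540185, 1.0))   # ~0.74688 -> 'images'
    print(raw_pg_target(1.3017775514531495e-06, 4.0))  # ~0.00156 -> cephfs meta
    print(quantize(0.7468778189562055, 32))            # 32, matching the log

No pool is resized because every quantized target already equals the current pg_num; by default the autoscaler only acts when the two diverge by roughly a factor of three.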
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.022 244018 DEBUG nova.compute.manager [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.023 244018 DEBUG nova.compute.manager [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing instance network info cache due to event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.023 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.058 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.639 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.640 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.658 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.787 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.788 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.794 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.795 244018 INFO nova.compute.claims [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:16:42 compute-0 nova_compute[244014]: 2026-02-25 12:16:42.946 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:43 compute-0 ceph-mon[76335]: pgmap v936: 305 pgs: 305 active+clean; 159 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 25 12:16:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990213171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.497 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
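The monitor audit lines show where the `ceph df` CLI run by Nova lands: it is dispatched as a mon_command with a JSON "prefix". The same query can be issued directly through the librados Python binding (client name and conf path copied from the log):

    import json
    import rados

    # Connect as the client the audit log shows ('client.openstack').
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()

    # Equivalent of "ceph df --format=json"; the monitor logs this as
    # mon_command({"prefix": "df", "format": "json"}).
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(outbuf)
    print(stats['stats']['total_bytes'])
    cluster.shutdown()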
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.502 244018 DEBUG nova.compute.provider_tree [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.522 244018 DEBUG nova.scheduler.client.report [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
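The inventory dict above is what bounds scheduling: for each resource class, Placement allows allocations up to (total − reserved) × allocation_ratio. Checking the logged numbers:

    # Schedulable capacity implied by the logged inventory:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)
    # MEMORY_MB 7167.0 -> 7167 MiB of guest RAM
    # VCPU 32.0        -> 32 schedulable vCPUs on 8 physical cores
    # DISK_GB 52.2     -> ~52 GiB of disk

So this 8-core host advertises 32 vCPUs (4x overcommit), while disk is slightly undercommitted at 0.9.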
Feb 25 12:16:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.555 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.555 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.692 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.692 244018 DEBUG nova.network.neutron [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:16:43 compute-0 podman[254420]: 2026-02-25 12:16:43.741468852 +0000 UTC m=+0.087005820 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.744 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:16:43 compute-0 podman[254421]: 2026-02-25 12:16:43.766519278 +0000 UTC m=+0.105846971 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
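Both health_status=healthy events come from podman executing the healthcheck configured for each container (test: /openstack/healthcheck, bind-mounted into the container). The check podman's timer runs can also be invoked by hand; a tiny wrapper around the CLI (container names from the log):

    import subprocess

    # Run a container's configured healthcheck, as podman's timer does.
    # True when the check script exits 0.
    def container_healthy(name):
        result = subprocess.run(['podman', 'healthcheck', 'run', name],
                                capture_output=True, text=True)
        return result.returncode == 0

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        print(name, 'healthy' if container_healthy(name) else 'unhealthy')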
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.782 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:16:43 compute-0 nova_compute[244014]: 2026-02-25 12:16:43.960 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.072 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.072 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance network_info: |[{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
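The network_info blob Nova caches (and prints twice above) is plain JSON; the fields the driver actually plugs into the guest are easy to pull out. A sketch over a trimmed copy of the logged structure:

    import json

    # Trimmed copy of the network_info structure from the log.
    network_info = json.loads('''[{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658",
      "address": "fa:16:3e:3b:e1:5a",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.8", "type": "fixed"}]}],
        "meta": {"mtu": 1442}},
      "devname": "tap5fae567b-e2", "vnic_type": "normal"}]''')

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips'] if ip['type'] == 'fixed']
        print(vif['devname'], vif['address'], ips, vif['network']['meta']['mtu'])
    # tap5fae567b-e2 fa:16:3e:3b:e1:5a ['10.100.0.8'] 1442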
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.073 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.073 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.078 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start _get_guest_xml network_info=[{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [{'disk_bus': 'virtio', 'size': 1, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.085 244018 WARNING nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.091 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.092 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.100 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.100 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
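The four host.py probes above look for a usable cpu controller, first under cgroups v1 (missing) and then v2 (found). On a cgroup2 host the v2 check boils down to reading the unified hierarchy's controller list; a stand-alone version under that assumption (standard cgroup2 mount path, not Nova's exact code):

    # Minimal cgroup v2 check: is the 'cpu' controller available on this host?
    # /sys/fs/cgroup/cgroup.controllers lists controllers on a cgroup2 mount.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            # Not a cgroup v2 unified hierarchy (e.g. a pure v1 host).
            return False

    print(has_cgroupsv2_cpu_controller())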
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.101 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.101 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:15:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='236815221',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-960178861',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.102 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.102 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.104 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.104 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
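With every flavor/image constraint at 0:0:0, the topology search for one vCPU can only produce 1 socket × 1 core × 1 thread, which is what the log shows. A simplified version of the enumeration (not Nova's exact code): candidate topologies are the factorizations of the vCPU count under the per-dimension maxima:

    # Enumerate (sockets, cores, threads) triples whose product equals vcpus,
    # subject to per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))      # [(1, 1, 1)] -- matches the log
    print(possible_topologies(4)[:3])  # [(1, 1, 4), (1, 2, 2), (1, 4, 1)]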
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.110 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.144 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.146 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.147 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating image(s)
Feb 25 12:16:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.172 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.203 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.235 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.240 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.306 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
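The qemu-img probe above runs under oslo's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a hostile or corrupt image cannot wedge the inspector. The same invocation via oslo.concurrency (path and limits copied from the log):

    import json

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])

The prlimit= keyword is what produces the "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ..." command line recorded above.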
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.307 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.308 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.309 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.340 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.344 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 29947b36-0ef7-49c6-974b-34891af8b99a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2990213171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.504 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021789.5033104, 6de08989-13cc-415b-adc9-04b338e13d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.506 244018 INFO nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Stopped (Lifecycle Event)
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.547 244018 DEBUG nova.compute.manager [None req-f31a8b76-cea1-4a4a-b80e-c31dbc081bb3 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
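The Stopped event for the other instance arrives asynchronously from libvirt's domain-event stream; the driver translates libvirt's STOPPED lifecycle event into the Nova LifecycleEvent above, and the manager then re-checks the power state. A bare-bones listener for the same event using libvirt-python (the event constants are libvirt's; the callback body is illustrative):

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Nova maps VIR_DOMAIN_EVENT_STOPPED to its "Stopped" lifecycle event.
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print('VM Stopped (Lifecycle Event):', dom.UUIDString())

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:                       # blocks; a sketch, not a service loop
        libvirt.virEventRunDefaultImpl()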
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385261770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.643 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.644 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.665 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 29947b36-0ef7-49c6-974b-34891af8b99a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.740 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] resizing rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
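After importing the cached base image into the vms pool, Nova grows it to the flavor's 1 GiB root disk. The resize step via the rbd Python binding (pool, image name, size, and client name all from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')

    # Grow the freshly imported disk to the flavor's 1 GiB root size.
    with rbd.Image(ioctx, '29947b36-0ef7-49c6-974b-34891af8b99a_disk') as image:
        image.resize(1073741824)  # bytes, as logged by nova.storage.rbd_utils

    ioctx.close()
    cluster.shutdown()

RBD resize is metadata-only (thin-provisioned), which is why the step completes in milliseconds here.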
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.778 244018 DEBUG nova.network.neutron [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.778 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.852 244018 DEBUG nova.objects.instance [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'migration_context' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.876 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.877 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Ensure instance console log exists: /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.878 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.879 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.879 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.882 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.887 244018 WARNING nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.892 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.892 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.897 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.898 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.898 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.899 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.900 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.900 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.901 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.901 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.902 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.902 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.903 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.904 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.904 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.905 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:16:44 compute-0 nova_compute[244014]: 2026-02-25 12:16:44.909 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475270521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.210 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.239 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.243 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:45 compute-0 ceph-mon[76335]: pgmap v937: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Feb 25 12:16:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1385261770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/475270521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248107500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.485 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.515 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.521 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Feb 25 12:16:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402641402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.751 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.754 244018 DEBUG nova.virt.libvirt.vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:16:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.755 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.757 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
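The Converting/Converted pair above is nova_to_osvif_vif translating Neutron's port dict into os-vif's typed object model before plugging. A sketch that builds the same VIFOpenVSwitch by hand, using field values from the log (register_all() is needed once so the versioned object classes are registered; treat this as illustrative use of the os-vif API, not nova's code path):

    from os_vif import objects

    objects.register_all()

    vif = objects.vif.VIFOpenVSwitch(
        id="5fae567b-e26e-4045-8efe-6d5fc03a1658",
        address="fa:16:3e:3b:e1:5a",
        vif_name="tap5fae567b-e2",
        bridge_name="br-int",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
    )
    print(vif.vif_name, vif.bridge_name)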
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.758 244018 DEBUG nova.objects.instance [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.780 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <uuid>20c8b8a1-1561-49f5-9fce-af2840195a57</uuid>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <name>instance-00000008</name>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1235856482</nova:name>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:16:44</nova:creationTime>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-960178861">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:ephemeral>1</nova:ephemeral>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:user uuid="ee6c6e44a0624805afeb68a67c99f325">tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member</nova:user>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:project uuid="0cd0968a9a1b4b9e984b0a10a6ac77a8">tempest-ServersWithSpecificFlavorTestJSON-1630390846</nova:project>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <nova:port uuid="5fae567b-e26e-4045-8efe-6d5fc03a1658">
Feb 25 12:16:45 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="serial">20c8b8a1-1561-49f5-9fce-af2840195a57</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="uuid">20c8b8a1-1561-49f5-9fce-af2840195a57</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <target dev="vdb" bus="virtio"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3b:e1:5a"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <target dev="tap5fae567b-e2"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/console.log" append="off"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:16:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:16:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:16:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:16:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:16:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
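The XML dumped above is what nova hands to libvirt to define the guest: everything the instance needs is declared as a device, including the RBD-backed vda/vdb disks, the config-drive cdrom on a SATA bus, and the tap interface. A quick sanity-check sketch with the stdlib parser, trimmed to the disk elements (the XML literal below abbreviates the full domain definition):

    import xml.etree.ElementTree as ET

    XML = """<domain type="kvm">
      <name>instance-00000008</name>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config"/>
          <target dev="sda" bus="sata"/>
        </disk>
      </devices>
    </domain>"""

    dom = ET.fromstring(XML)
    print(dom.findtext("name"))
    for disk in dom.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), "<-", src.get("protocol"), src.get("name"))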
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.790 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Preparing to wait for external event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.790 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.791 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.792 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
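The three lockutils lines above show the standard oslo pattern: a named in-process lock around the per-instance event bookkeeping, held for about a millisecond. The equivalent with the public helper (real API, illustrative use):

    from oslo_concurrency import lockutils

    instance_uuid = "20c8b8a1-1561-49f5-9fce-af2840195a57"

    # lockutils.lock() is a context manager; entry/exit produce exactly the
    # "acquired"/"released" debug lines seen above.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # create-or-get the pending network-vif-plugged event here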
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.793 244018 DEBUG nova.virt.libvirt.vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:16:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.794 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.795 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.796 244018 DEBUG os_vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.803 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fae567b-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fae567b-e2, col_values=(('external_ids', {'iface-id': '5fae567b-e26e-4045-8efe-6d5fc03a1658', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:e1:5a', 'vm-uuid': '20c8b8a1-1561-49f5-9fce-af2840195a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:45 compute-0 NetworkManager[49836]: <info>  [1772021805.8080] manager: (tap5fae567b-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.815 244018 INFO os_vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2')
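The AddBridgeCommand/AddPortCommand/DbSetCommand lines above are ovsdbapp batching OVSDB mutations into transactions: ensure br-int exists (a no-op here, hence "Transaction caused no change"), add the tap port, and stamp the Interface row with the external_ids that let OVN bind the port. A sketch of the same mutations through ovsdbapp's Open_vSwitch schema API (the db.sock path is the usual default and an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap5fae567b-e2", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap5fae567b-e2",
            ("external_ids", {
                "iface-id": "5fae567b-e26e-4045-8efe-6d5fc03a1658",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:3b:e1:5a",
                "vm-uuid": "20c8b8a1-1561-49f5-9fce-af2840195a57",
            })))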
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.887 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.888 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.888 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.889 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No VIF found with MAC fa:16:3e:3b:e1:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.890 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Using config drive
Feb 25 12:16:45 compute-0 nova_compute[244014]: 2026-02-25 12:16:45.920 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649575112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.026 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.028 244018 DEBUG nova.objects.instance [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.045 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <uuid>29947b36-0ef7-49c6-974b-34891af8b99a</uuid>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <name>instance-00000009</name>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiagnosticsTest-server-840556860</nova:name>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:16:44</nova:creationTime>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:user uuid="3b5b46d7dac249e288d46ca1056e55e2">tempest-ServerDiagnosticsTest-727848253-project-member</nova:user>
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <nova:project uuid="cbbbbb19c373469196c30db9a3ed4171">tempest-ServerDiagnosticsTest-727848253</nova:project>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <system>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="serial">29947b36-0ef7-49c6-974b-34891af8b99a</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="uuid">29947b36-0ef7-49c6-974b-34891af8b99a</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </system>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <os>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </os>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <features>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </features>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/29947b36-0ef7-49c6-974b-34891af8b99a_disk">
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/29947b36-0ef7-49c6-974b-34891af8b99a_disk.config">
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/console.log" append="off"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <video>
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </video>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:16:46 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:16:46 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:16:46 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:16:46 compute-0 nova_compute[244014]: </domain>
Feb 25 12:16:46 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.120 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.121 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.122 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Using config drive
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.151 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.235 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.236 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.263 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.276 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating config drive at /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.284 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqoh9bd35 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.304 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated VIF entry in instance network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.305 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.345 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.359 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating config drive at /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.364 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpurogm5al execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.403 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.404 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.412 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.412 244018 INFO nova.compute.claims [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Claim successful on node compute-0.ctlplane.example.com
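The hardware.py line above is a guard, not a failure: NUMA fitting only runs when both the host and the instance declare a NUMA topology, and this flavor declares none, so the claim proceeds without pinning. The shape of that guard (illustrative names, not nova's exact signature):

    def numa_fit_instance_to_host(host_topology, instance_topology):
        if not (host_topology and instance_topology):
            # Matches the debug line: both sides are required to fit.
            return None
        raise NotImplementedError("real cell-pairing logic goes here")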
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.417 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqoh9bd35" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
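Config-drive creation above is just mkisofs over a staged metadata directory, labelled config-2 so cloud-init can find the volume by label. The same invocation wrapped for reuse (all flags copied from the log; the staging directory contents are whatever nova's metadata builder wrote there):

    from oslo_concurrency import processutils

    def build_config_drive(output_iso, staging_dir, publisher):
        processutils.execute(
            "/usr/bin/mkisofs", "-o", output_iso,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", staging_dir)

    build_config_drive(
        "/var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config",
        "/tmp/tmpqoh9bd35",  # temp dir from the log; any staged metadata dir works
        "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9")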
Feb 25 12:16:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1248107500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3402641402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/649575112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.451 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.455 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.477 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021791.4522336, 661b348d-5b73-45ed-8357-3aefed90d054 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.477 244018 INFO nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Stopped (Lifecycle Event)
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.491 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpurogm5al" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.521 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.528 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.543 244018 DEBUG nova.compute.manager [None req-16d6bc58-3c7c-4f17-a575-83c41d122efa - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.611 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.771 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.773 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deleting local config drive /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config because it was imported into RBD.
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.808 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.809 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deleting local config drive /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config because it was imported into RBD.
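Both config drives follow the same pattern: build the ISO locally, import it into the vms RBD pool as <uuid>_disk.config, then delete the local file, so the only persistent copy lives in Ceph. Assuming the same client id and conf path the log shows, the imported images can be verified with stock rbd commands:

    rbd --id openstack --conf /etc/ceph/ceph.conf -p vms ls | grep disk.config
    rbd --id openstack --conf /etc/ceph/ceph.conf -p vms info 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config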
Feb 25 12:16:46 compute-0 kernel: tap5fae567b-e2: entered promiscuous mode
Feb 25 12:16:46 compute-0 NetworkManager[49836]: <info>  [1772021806.8355] manager: (tap5fae567b-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:46 compute-0 ovn_controller[147040]: 2026-02-25T12:16:46Z|00036|binding|INFO|Claiming lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 for this chassis.
Feb 25 12:16:46 compute-0 ovn_controller[147040]: 2026-02-25T12:16:46Z|00037|binding|INFO|5fae567b-e26e-4045-8efe-6d5fc03a1658: Claiming fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 12:16:46 compute-0 ovn_controller[147040]: 2026-02-25T12:16:46Z|00038|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 ovn-installed in OVS
Feb 25 12:16:46 compute-0 ovn_controller[147040]: 2026-02-25T12:16:46Z|00039|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 up in Southbound
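ovn-controller has now claimed logical port 5fae567b-e26e-4045-8efe-6d5fc03a1658 for this chassis, wired it to the tap device created above, and marked the port up in the Southbound database; that Southbound update is what wakes the metadata agent in the lines that follow. As a sketch (output columns vary by OVN version), the binding can be inspected from the chassis with the generic database commands:

    ovn-sbctl find Port_Binding logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658
    ovs-vsctl --columns=external_ids list Interface tap5fae567b-e2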
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.852 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:e1:5a 10.100.0.8'], port_security=['fa:16:3e:3b:e1:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '20c8b8a1-1561-49f5-9fce-af2840195a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.853 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5fae567b-e26e-4045-8efe-6d5fc03a1658 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 bound to our chassis
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.854 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:46 compute-0 nova_compute[244014]: 2026-02-25 12:16:46.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.865 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7aae872d-5f9b-42a1-abd1-cd5dc6367cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.866 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fb36f1-01 in ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.868 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fb36f1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[611a17e4-7c6c-4309-8f64-12ecc000e4b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1321a4ad-a8a9-46b3-958e-014b1ed25793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 systemd-udevd[254939]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.890 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f944bc-ded2-4666-ba2c-c92fe56b6eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 NetworkManager[49836]: <info>  [1772021806.9052] device (tap5fae567b-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:16:46 compute-0 NetworkManager[49836]: <info>  [1772021806.9061] device (tap5fae567b-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:16:46 compute-0 systemd-machined[210048]: New machine qemu-8-instance-00000008.
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea88c88f-ad3e-402d-9dc0-42d73bec8226]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 25 12:16:46 compute-0 systemd-machined[210048]: New machine qemu-9-instance-00000009.
Feb 25 12:16:46 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.962 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5a2b51-e907-45ab-93e7-c614c712256e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 systemd-udevd[254944]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:16:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46c98b4e-dab3-44e5-8090-8a1959bf9f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:46 compute-0 NetworkManager[49836]: <info>  [1772021806.9709] manager: (tapd3fb36f1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.005 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3913f3ed-c51f-4d5e-82ae-309cc60a4f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.009 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7412fdb-2f01-403f-848b-f3b5a9cbd6b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 NetworkManager[49836]: <info>  [1772021807.0367] device (tapd3fb36f1-00): carrier: link connected
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.042 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9995fcb-7ec6-4091-bc2e-20a8bb10c381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.059 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34e34fd8-aece-47ce-acad-ced72b0e4f28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377644, 'reachable_time': 41940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254978, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.076 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97ddf54d-9d9c-4265-8007-56b193eb74f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:f004'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377644, 'tstamp': 377644}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254979, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.095 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1b506f-c56f-4bf1-b684-f151fab8f87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377644, 'reachable_time': 41940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254995, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4381bfac-010e-4516-bad5-eb353de2746b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8573b12-48cf-4e1e-94e1-c241192a1c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.195 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.196 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fb36f1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:47 compute-0 kernel: tapd3fb36f1-00: entered promiscuous mode
Feb 25 12:16:47 compute-0 NetworkManager[49836]: <info>  [1772021807.1996] manager: (tapd3fb36f1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fb36f1-00, col_values=(('external_ids', {'iface-id': '642d12d0-ef5c-4bc5-ba96-4b85e033986b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:16:47 compute-0 ovn_controller[147040]: 2026-02-25T12:16:47Z|00040|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.206 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.207 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf0d727-fdb8-45a1-9519-ae00af8d2173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.207 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:16:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.208 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'env', 'PROCESS_TAG=haproxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
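The configuration rendered above has haproxy bind 169.254.169.254:80 inside the ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace, add the X-OVN-Network-ID header, and forward requests to the agent over the /var/lib/neutron/metadata_proxy UNIX socket. Once the proxy container below is running, a quick smoke test from the host (assuming ss and curl are installed) is:

    ip netns exec ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 ss -tlnp
    ip netns exec ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 curl -s http://169.254.169.254/openstack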
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231002901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.252 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
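The 0.641 s round trip matches the mon dispatches above: with RBD-backed instances, Nova sizes its DISK_GB inventory from cluster-wide ceph df output rather than from the local filesystem. The same query can be replayed by hand:

    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf | python3 -m json.tool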
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.259 244018 DEBUG nova.compute.provider_tree [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.264 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.2599277, 29947b36-0ef7-49c6-974b-34891af8b99a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.264 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Resumed (Lifecycle Event)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.269 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.269 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.275 244018 INFO nova.virt.libvirt.driver [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance spawned successfully.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.275 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.304 244018 DEBUG nova.scheduler.client.report [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
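Placement applies allocation_ratio on top of (total - reserved), so with the inventory reported here the node can be scheduled up to roughly (7679 - 512) * 1.0 = 7167 MB of RAM, (8 - 0) * 4.0 = 32 vCPUs, and (59 - 1) * 0.9 = 52 GB of disk, where the 59 GB disk total comes from the Ceph pool capacity fetched just above.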
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.314 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.319 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.319 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.320 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.321 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.322 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.323 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.328 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.392 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.393 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.414 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.414 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.262309, 29947b36-0ef7-49c6-974b-34891af8b99a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.415 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Started (Lifecycle Event)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.458 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.463 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:16:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:16:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:16:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:16:47 compute-0 ceph-mon[76335]: pgmap v938: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Feb 25 12:16:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1231002901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.475 244018 INFO nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 3.33 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.476 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.488 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.489 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.4041114, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.490 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Started (Lifecycle Event)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.496 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:16:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.537 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.543 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.4041996, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.543 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Paused (Lifecycle Event)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.547 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.585 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.591 244018 INFO nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 4.83 seconds to build instance.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.593 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.630 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.638 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:47 compute-0 podman[255115]: 2026-02-25 12:16:47.670912798 +0000 UTC m=+0.102480161 container create 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:16:47 compute-0 podman[255115]: 2026-02-25 12:16:47.607545842 +0000 UTC m=+0.039113245 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.699 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.702 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.703 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)
Feb 25 12:16:47 compute-0 systemd[1]: Started libpod-conmon-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.738 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d0ec3801d09f720d201cc99cc3906ecd5ce5937d1d449d2a23665084e2304/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:47 compute-0 podman[255115]: 2026-02-25 12:16:47.781028945 +0000 UTC m=+0.212596388 container init 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:16:47 compute-0 podman[255115]: 2026-02-25 12:16:47.785163828 +0000 UTC m=+0.216731231 container start 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.790 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.820 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.823 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:47 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:16:47 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : New worker (255192) forked
Feb 25 12:16:47 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : Loading success.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.887 244018 DEBUG nova.compute.manager [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.889 244018 DEBUG nova.compute.manager [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Processing event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.889 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.890 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
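The qemu-img probe is deliberately wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image in the cache cannot wedge the compute service, while --force-share allows inspecting an image that another process holds open. The command can be rerun verbatim when debugging the image cache:

    /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- \
        env LC_ALL=C LANG=C qemu-img info \
        /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 \
        --force-share --output=json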
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.890 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.891 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.892 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.917 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.921 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.934 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.893856, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.935 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Resumed (Lifecycle Event)
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.941 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.944 244018 INFO nova.virt.libvirt.driver [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance spawned successfully.
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.944 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.968 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.972 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.973 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.973 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:47 compute-0 nova_compute[244014]: 2026-02-25 12:16:47.979 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.019 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.093 244018 INFO nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 10.64 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.093 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.175 244018 INFO nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 11.80 seconds to build instance.
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.198 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.226 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.266 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.359 244018 DEBUG nova.objects.instance [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.409 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.409 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.412 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.415 244018 WARNING nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.420 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.421 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.423 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.423 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.424 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.424 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.427 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.427 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:16:48 compute-0 nova_compute[244014]: 2026-02-25 12:16:48.430 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:16:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:16:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995159402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:49 compute-0 nova_compute[244014]: 2026-02-25 12:16:49.020 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:49 compute-0 nova_compute[244014]: 2026-02-25 12:16:49.056 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:49 compute-0 nova_compute[244014]: 2026-02-25 12:16:49.064 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:49 compute-0 ceph-mon[76335]: pgmap v939: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Feb 25 12:16:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2995159402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 25 12:16:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:16:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987021487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:49 compute-0 nova_compute[244014]: 2026-02-25 12:16:49.585 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:49 compute-0 nova_compute[244014]: 2026-02-25 12:16:49.588 244018 DEBUG nova.objects.instance [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.209 244018 DEBUG nova.compute.manager [None req-91302496-e037-46c5-baad-b498794749b5 e7c0a976ae734458914db7c57527b6df f051d58fecda492cbff0432942a1a355 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.213 244018 INFO nova.compute.manager [None req-91302496-e037-46c5-baad-b498794749b5 e7c0a976ae734458914db7c57527b6df f051d58fecda492cbff0432942a1a355 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Retrieving diagnostics
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.224 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <name>instance-0000000a</name>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:16:48</nova:creationTime>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <system>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </system>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <os>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </os>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <features>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </features>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:16:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <video>
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </video>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:16:50 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:16:50 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:16:50 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:16:50 compute-0 nova_compute[244014]: </domain>
Feb 25 12:16:50 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.302 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.303 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.303 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.329 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.423 244018 DEBUG nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.424 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.424 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 DEBUG nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 WARNING nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received unexpected event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with vm_state active and task_state None.
Feb 25 12:16:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/987021487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.595 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.601 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpddk8psre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.662 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.664 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.664 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.665 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.666 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.673 244018 INFO nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Terminating instance
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.675 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.676 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquired lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.676 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.737 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpddk8psre" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.770 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.778 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.931 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:50 compute-0 nova_compute[244014]: 2026-02-25 12:16:50.932 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.
Feb 25 12:16:50 compute-0 systemd-machined[210048]: New machine qemu-10-instance-0000000a.
Feb 25 12:16:51 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.246 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.513 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:51 compute-0 ceph-mon[76335]: pgmap v940: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 25 12:16:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 257 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.532 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Releasing lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.533 244018 DEBUG nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:16:51 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 25 12:16:51 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 4.551s CPU time.
Feb 25 12:16:51 compute-0 systemd-machined[210048]: Machine qemu-9-instance-00000009 terminated.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.651 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.653 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.653 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021811.6524277, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.654 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.658 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.659 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.694 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.704 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.711 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.712 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.712 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.713 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.714 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.714 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:16:51 compute-0 sudo[255491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021811.6526158, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)
Feb 25 12:16:51 compute-0 sudo[255491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:51 compute-0 sudo[255491]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.760 244018 INFO nova.virt.libvirt.driver [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance destroyed successfully.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.761 244018 DEBUG nova.objects.instance [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'resources' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.795 244018 INFO nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 4.10 seconds to spawn the instance on the hypervisor.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.795 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.799 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:16:51 compute-0 sudo[255518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:16:51 compute-0 sudo[255518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.847 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.869 244018 INFO nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 5.52 seconds to build instance.
Feb 25 12:16:51 compute-0 nova_compute[244014]: 2026-02-25 12:16:51.885 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.222 244018 INFO nova.virt.libvirt.driver [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deleting instance files /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a_del
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.225 244018 INFO nova.virt.libvirt.driver [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deletion of /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a_del complete
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.286 244018 INFO nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.286 244018 DEBUG oslo.service.loopingcall [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.287 244018 DEBUG nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.287 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:16:52 compute-0 sudo[255518]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:16:52 compute-0 sudo[255593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:16:52 compute-0 sudo[255593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:52 compute-0 sudo[255593]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:52 compute-0 ceph-mon[76335]: pgmap v941: 305 pgs: 305 active+clean; 257 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:16:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:16:52 compute-0 sudo[255618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:16:52 compute-0 sudo[255618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.835 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.866 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.876 244018 INFO nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 0.59 seconds to deallocate network for instance.
Feb 25 12:16:52 compute-0 podman[255656]: 2026-02-25 12:16:52.895756145 +0000 UTC m=+0.053939625 container create 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:16:52 compute-0 systemd[1]: Started libpod-conmon-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope.
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.936 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:52 compute-0 nova_compute[244014]: 2026-02-25 12:16:52.936 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:52 compute-0 podman[255656]: 2026-02-25 12:16:52.867434312 +0000 UTC m=+0.025617792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:52 compute-0 podman[255656]: 2026-02-25 12:16:52.983643731 +0000 UTC m=+0.141827241 container init 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:16:52 compute-0 podman[255656]: 2026-02-25 12:16:52.988562537 +0000 UTC m=+0.146746047 container start 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:16:52 compute-0 exciting_lewin[255672]: 167 167
Feb 25 12:16:52 compute-0 systemd[1]: libpod-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope: Deactivated successfully.
Feb 25 12:16:53 compute-0 podman[255656]: 2026-02-25 12:16:52.99504324 +0000 UTC m=+0.153226740 container attach 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:16:53 compute-0 podman[255656]: 2026-02-25 12:16:53.00141479 +0000 UTC m=+0.159598300 container died 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ade9d28cdec68a1d5cd504f19ebadf88996996ee8aa60a5d0c703247b4a331-merged.mount: Deactivated successfully.
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.047 244018 DEBUG oslo_concurrency.processutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:16:53 compute-0 podman[255656]: 2026-02-25 12:16:53.068509507 +0000 UTC m=+0.226692977 container remove 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:16:53 compute-0 systemd[1]: libpod-conmon-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope: Deactivated successfully.
Feb 25 12:16:53 compute-0 podman[255717]: 2026-02-25 12:16:53.256390618 +0000 UTC m=+0.046859895 container create 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:16:53 compute-0 systemd[1]: Started libpod-conmon-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope.
Feb 25 12:16:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:53 compute-0 podman[255717]: 2026-02-25 12:16:53.239777884 +0000 UTC m=+0.030247141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:53 compute-0 podman[255717]: 2026-02-25 12:16:53.349043076 +0000 UTC m=+0.139512383 container init 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 25 12:16:53 compute-0 podman[255717]: 2026-02-25 12:16:53.367816774 +0000 UTC m=+0.158286041 container start 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:53 compute-0 podman[255717]: 2026-02-25 12:16:53.373406001 +0000 UTC m=+0.163875318 container attach 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:16:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 259 op/s
Feb 25 12:16:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:16:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314277043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.620 244018 DEBUG oslo_concurrency.processutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.628 244018 DEBUG nova.compute.provider_tree [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.654 244018 DEBUG nova.scheduler.client.report [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.698 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:53 compute-0 ecstatic_shannon[255734]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:16:53 compute-0 ecstatic_shannon[255734]: --> All data devices are unavailable
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.841 244018 INFO nova.scheduler.client.report [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Deleted allocations for instance 29947b36-0ef7-49c6-974b-34891af8b99a
Feb 25 12:16:53 compute-0 systemd[1]: libpod-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope: Deactivated successfully.
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:16:53 compute-0 nova_compute[244014]: 2026-02-25 12:16:53.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:16:53 compute-0 podman[255756]: 2026-02-25 12:16:53.905817826 +0000 UTC m=+0.031611772 container died 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f-merged.mount: Deactivated successfully.
Feb 25 12:16:53 compute-0 podman[255756]: 2026-02-25 12:16:53.94795478 +0000 UTC m=+0.073748726 container remove 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Feb 25 12:16:53 compute-0 systemd[1]: libpod-conmon-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope: Deactivated successfully.
Feb 25 12:16:53 compute-0 sudo[255618]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:54 compute-0 sudo[255771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:16:54 compute-0 sudo[255771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:54 compute-0 sudo[255771]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:54 compute-0 sudo[255796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:16:54 compute-0 sudo[255796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.374025971 +0000 UTC m=+0.050780673 container create 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:16:54 compute-0 systemd[1]: Started libpod-conmon-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope.
Feb 25 12:16:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.35147739 +0000 UTC m=+0.028232112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.461211776 +0000 UTC m=+0.137966548 container init 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.46843189 +0000 UTC m=+0.145186632 container start 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 12:16:54 compute-0 admiring_meninsky[255847]: 167 167
Feb 25 12:16:54 compute-0 systemd[1]: libpod-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope: Deactivated successfully.
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.475186862 +0000 UTC m=+0.151941634 container attach 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.475548832 +0000 UTC m=+0.152303564 container died 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9405cb8c14c2c2416e661bb3c11eaf3fa6332d0282eb8cbb578211a4ca4b48d0-merged.mount: Deactivated successfully.
Feb 25 12:16:54 compute-0 ceph-mon[76335]: pgmap v942: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 259 op/s
Feb 25 12:16:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/314277043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:16:54 compute-0 podman[255832]: 2026-02-25 12:16:54.580391103 +0000 UTC m=+0.257145805 container remove 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 25 12:16:54 compute-0 systemd[1]: libpod-conmon-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope: Deactivated successfully.
Feb 25 12:16:54 compute-0 nova_compute[244014]: 2026-02-25 12:16:54.670 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:54 compute-0 podman[255871]: 2026-02-25 12:16:54.739398825 +0000 UTC m=+0.044898157 container create 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:16:54 compute-0 systemd[1]: Started libpod-conmon-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope.
Feb 25 12:16:54 compute-0 podman[255871]: 2026-02-25 12:16:54.715296238 +0000 UTC m=+0.020795570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.002 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.027 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.028 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.028 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.029 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.030 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.030 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.033 244018 DEBUG nova.compute.manager [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:16:55 compute-0 podman[255871]: 2026-02-25 12:16:55.034332973 +0000 UTC m=+0.339832325 container init 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.034 244018 DEBUG nova.compute.manager [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing instance network info cache due to event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.035 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.036 244018 INFO nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Rebuilding instance
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:55 compute-0 podman[255871]: 2026-02-25 12:16:55.045176745 +0000 UTC m=+0.350676077 container start 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:16:55 compute-0 podman[255871]: 2026-02-25 12:16:55.053854904 +0000 UTC m=+0.359354286 container attach 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]: {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     "0": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "devices": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "/dev/loop3"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             ],
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_name": "ceph_lv0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_size": "21470642176",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "name": "ceph_lv0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "tags": {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_name": "ceph",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.crush_device_class": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.encrypted": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.objectstore": "bluestore",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_id": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.vdo": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.with_tpm": "0"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             },
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "vg_name": "ceph_vg0"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         }
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     ],
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     "1": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "devices": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "/dev/loop4"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             ],
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_name": "ceph_lv1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_size": "21470642176",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "name": "ceph_lv1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "tags": {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_name": "ceph",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.crush_device_class": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.encrypted": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.objectstore": "bluestore",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_id": "1",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.vdo": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.with_tpm": "0"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             },
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "vg_name": "ceph_vg1"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         }
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     ],
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     "2": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "devices": [
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "/dev/loop5"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             ],
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_name": "ceph_lv2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_size": "21470642176",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "name": "ceph_lv2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "tags": {
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.cluster_name": "ceph",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.crush_device_class": "",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.encrypted": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.objectstore": "bluestore",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osd_id": "2",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.vdo": "0",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:                 "ceph.with_tpm": "0"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             },
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "type": "block",
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:             "vg_name": "ceph_vg2"
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:         }
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]:     ]
Feb 25 12:16:55 compute-0 wizardly_wiles[255887]: }
Feb 25 12:16:55 compute-0 systemd[1]: libpod-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope: Deactivated successfully.
Feb 25 12:16:55 compute-0 podman[255871]: 2026-02-25 12:16:55.350499922 +0000 UTC m=+0.655999264 container died 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.486 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.512 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:16:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 232 op/s
Feb 25 12:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86-merged.mount: Deactivated successfully.
Feb 25 12:16:55 compute-0 podman[255871]: 2026-02-25 12:16:55.589869866 +0000 UTC m=+0.895369188 container remove 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.604 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_requests' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 systemd[1]: libpod-conmon-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope: Deactivated successfully.
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.629 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 sudo[255796]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.649 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.665 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:16:55 compute-0 sudo[255908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:16:55 compute-0 sudo[255908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:55 compute-0 sudo[255908]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.722 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.726 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:16:55 compute-0 sudo[255933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:16:55 compute-0 sudo[255933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:55 compute-0 nova_compute[244014]: 2026-02-25 12:16:55.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.009163265 +0000 UTC m=+0.040604469 container create dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:16:56 compute-0 systemd[1]: Started libpod-conmon-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope.
Feb 25 12:16:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:55.986283574 +0000 UTC m=+0.017724778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.091183286 +0000 UTC m=+0.122624550 container init dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.096521185 +0000 UTC m=+0.127962399 container start dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:16:56 compute-0 relaxed_chaum[255984]: 167 167
Feb 25 12:16:56 compute-0 systemd[1]: libpod-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope: Deactivated successfully.
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.102636877 +0000 UTC m=+0.134078141 container attach dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.10339632 +0000 UTC m=+0.134837554 container died dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-593d5eea1d4d5bbcd3258de0c51d54ba61e130d5675ec80af952a4cb724a1fb2-merged.mount: Deactivated successfully.
Feb 25 12:16:56 compute-0 podman[255967]: 2026-02-25 12:16:56.172723893 +0000 UTC m=+0.204165097 container remove dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:16:56 compute-0 systemd[1]: libpod-conmon-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope: Deactivated successfully.
Feb 25 12:16:56 compute-0 podman[256007]: 2026-02-25 12:16:56.363432519 +0000 UTC m=+0.057626686 container create 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:16:56 compute-0 systemd[1]: Started libpod-conmon-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope.
Feb 25 12:16:56 compute-0 podman[256007]: 2026-02-25 12:16:56.327477479 +0000 UTC m=+0.021671646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:16:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:16:56 compute-0 podman[256007]: 2026-02-25 12:16:56.495299672 +0000 UTC m=+0.189493879 container init 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:16:56 compute-0 podman[256007]: 2026-02-25 12:16:56.5049674 +0000 UTC m=+0.199161557 container start 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:16:56 compute-0 podman[256007]: 2026-02-25 12:16:56.511858255 +0000 UTC m=+0.206052422 container attach 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:16:56 compute-0 ceph-mon[76335]: pgmap v943: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 232 op/s
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.770 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.797 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.797 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.798 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.798 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:16:56 compute-0 nova_compute[244014]: 2026-02-25 12:16:56.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:16:57 compute-0 lvm[256104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:16:57 compute-0 lvm[256104]: VG ceph_vg0 finished
Feb 25 12:16:57 compute-0 lvm[256106]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:16:57 compute-0 lvm[256106]: VG ceph_vg1 finished
Feb 25 12:16:57 compute-0 lvm[256107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:16:57 compute-0 lvm[256107]: VG ceph_vg2 finished
Feb 25 12:16:57 compute-0 admiring_ritchie[256024]: {}
Feb 25 12:16:57 compute-0 systemd[1]: libpod-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Deactivated successfully.
Feb 25 12:16:57 compute-0 systemd[1]: libpod-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Consumed 1.056s CPU time.
Feb 25 12:16:57 compute-0 podman[256110]: 2026-02-25 12:16:57.308984359 +0000 UTC m=+0.036821137 container died 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518-merged.mount: Deactivated successfully.
Feb 25 12:16:57 compute-0 podman[256110]: 2026-02-25 12:16:57.406582163 +0000 UTC m=+0.134418941 container remove 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:16:57 compute-0 systemd[1]: libpod-conmon-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Deactivated successfully.
Feb 25 12:16:57 compute-0 sudo[255933]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:16:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:16:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Feb 25 12:16:57 compute-0 sudo[256125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:16:57 compute-0 sudo[256125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:16:57 compute-0 sudo[256125]: pam_unix(sudo:session): session closed for user root
Feb 25 12:16:57 compute-0 sshd-session[256087]: Invalid user mapr from 80.94.92.186 port 44046
Feb 25 12:16:57 compute-0 nova_compute[244014]: 2026-02-25 12:16:57.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:57 compute-0 sshd-session[256087]: Connection closed by invalid user mapr 80.94.92.186 port 44046 [preauth]
Feb 25 12:16:58 compute-0 nova_compute[244014]: 2026-02-25 12:16:58.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:16:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:16:58 compute-0 nova_compute[244014]: 2026-02-25 12:16:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:58 compute-0 nova_compute[244014]: 2026-02-25 12:16:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.252 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated VIF entry in instance network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.253 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.322 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:16:59 compute-0 ceph-mon[76335]: pgmap v944: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Feb 25 12:16:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Feb 25 12:16:59 compute-0 ovn_controller[147040]: 2026-02-25T12:16:59Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 12:16:59 compute-0 ovn_controller[147040]: 2026-02-25T12:16:59Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:16:59 compute-0 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:00 compute-0 nova_compute[244014]: 2026-02-25 12:17:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:00 compute-0 nova_compute[244014]: 2026-02-25 12:17:00.985 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:00 compute-0 nova_compute[244014]: 2026-02-25 12:17:00.985 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:17:00 compute-0 nova_compute[244014]: 2026-02-25 12:17:00.986 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.031 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.032 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.032 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.033 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:01 compute-0 ceph-mon[76335]: pgmap v945: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 262 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 292 op/s
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217483231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.640 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.737 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.738 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.919 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.920 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4401MB free_disk=59.946292605251074GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:01 compute-0 nova_compute[244014]: 2026-02-25 12:17:01.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 20c8b8a1-1561-49f5-9fce-af2840195a57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance db0fc9fa-1fc0-4334-96f9-2205fa53e308 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.168 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.336 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3217483231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427421751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.899 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.905 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:02 compute-0 nova_compute[244014]: 2026-02-25 12:17:02.942 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:03 compute-0 nova_compute[244014]: 2026-02-25 12:17:03.159 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:17:03 compute-0 nova_compute[244014]: 2026-02-25 12:17:03.160 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:03 compute-0 nova_compute[244014]: 2026-02-25 12:17:03.161 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:03 compute-0 nova_compute[244014]: 2026-02-25 12:17:03.161 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:17:03 compute-0 nova_compute[244014]: 2026-02-25 12:17:03.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:03 compute-0 ceph-mon[76335]: pgmap v946: 305 pgs: 305 active+clean; 262 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 292 op/s
Feb 25 12:17:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3427421751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 295 op/s
Feb 25 12:17:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.347 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.348 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.421 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.498 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.499 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.507 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.508 244018 INFO nova.compute.claims [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:17:04 compute-0 ceph-mon[76335]: pgmap v947: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 295 op/s
Feb 25 12:17:04 compute-0 nova_compute[244014]: 2026-02-25 12:17:04.649 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:05.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836423578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.204 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.212 244018 DEBUG nova.compute.provider_tree [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.255 244018 DEBUG nova.scheduler.client.report [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.297 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.299 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.421 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.421 244018 DEBUG nova.network.neutron [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.487 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:17:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.537 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:17:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1836423578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.659 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.661 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.662 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating image(s)
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.737 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.762 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.785 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.788 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.809 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.818 244018 DEBUG nova.network.neutron [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.818 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.869 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.896 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:05 compute-0 nova_compute[244014]: 2026-02-25 12:17:05.900 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.186 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.598 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.660 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] resizing rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.752 244018 DEBUG nova.objects.instance [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'migration_context' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.794 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.795 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Ensure instance console log exists: /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.795 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.796 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.796 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.797 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.803 244018 WARNING nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.808 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.809 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.811 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.812 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:17:06 compute-0 ceph-mon[76335]: pgmap v948: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.812 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.816 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.819 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.848 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021811.7544844, 29947b36-0ef7-49c6-974b-34891af8b99a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.849 244018 INFO nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Stopped (Lifecycle Event)
Feb 25 12:17:06 compute-0 nova_compute[244014]: 2026-02-25 12:17:06.868 244018 DEBUG nova.compute.manager [None req-b72a3521-8341-436f-ba66-7b4e8449bcc8 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640942291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.326 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.346 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.349 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 12:17:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/640942291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357481479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.947 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.948 244018 DEBUG nova.objects.instance [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:07 compute-0 nova_compute[244014]: 2026-02-25 12:17:07.964 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <uuid>77af7d73-f695-47b4-8ec1-98a3672ff8d8</uuid>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <name>instance-0000000b</name>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-344510811</nova:name>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:06</nova:creationTime>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:user uuid="f653b2ec5be5483092804fcd55beab49">tempest-ServersAdminNegativeTestJSON-49539203-project-member</nova:user>
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <nova:project uuid="2805044e788543068625873119e58bd0">tempest-ServersAdminNegativeTestJSON-49539203</nova:project>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="serial">77af7d73-f695-47b4-8ec1-98a3672ff8d8</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="uuid">77af7d73-f695-47b4-8ec1-98a3672ff8d8</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk">
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config">
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/console.log" append="off"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:17:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:17:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:17:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:17:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:17:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Using config drive
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.038 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:08 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 12:17:08 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 12.363s CPU time.
Feb 25 12:17:08 compute-0 systemd-machined[210048]: Machine qemu-10-instance-0000000a terminated.
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.841 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating config drive at /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config
Feb 25 12:17:08 compute-0 ceph-mon[76335]: pgmap v949: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 12:17:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/357481479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.848 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8pyo8xz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.886 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance shutdown successfully after 13 seconds.
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.894 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.903 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 12:17:08 compute-0 nova_compute[244014]: 2026-02-25 12:17:08.978 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8pyo8xz" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.008 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.012 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.156 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.159 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deleting local config drive /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config because it was imported into RBD.
Feb 25 12:17:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:09 compute-0 systemd-machined[210048]: New machine qemu-11-instance-0000000b.
Feb 25 12:17:09 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.340 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.341 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 12:17:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 591 KiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.555 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.556 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.592 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.623 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.650 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.653 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.654 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.657 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021829.5782044, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.658 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Resumed (Lifecycle Event)
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.662 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.662 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.667 244018 INFO nova.virt.libvirt.driver [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance spawned successfully.
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.667 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.679 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.684 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.689 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.690 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.690 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.691 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.691 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.692 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021829.5783021, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.717 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Started (Lifecycle Event)
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.784 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.787 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.792 244018 INFO nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 4.13 seconds to spawn the instance on the hypervisor.
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.792 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] During sync_power_state the instance has a pending task (spawning). Skip.
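
[note] Both the "Resumed" and "Started" lifecycle events hit the same guard: the DB still records power_state 0 while libvirt reports 1, but because task_state is still "spawning" the sync is skipped rather than racing the build. A rough sketch of that decision, assuming nova's power_state constants (0 == NOSTATE, 1 == RUNNING); this is an illustrative simplification, not the manager's exact code:

    def sync_power_state(task_state, db_power_state, vm_power_state):
        # a pending task (e.g. 'spawning') owns the instance; do not interfere
        if task_state is not None:
            return "skip"
        # otherwise reconcile the DB with what the hypervisor reports
        return "sync" if db_power_state != vm_power_state else "noop"
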
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.857 244018 INFO nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 5.38 seconds to build instance.
Feb 25 12:17:09 compute-0 nova_compute[244014]: 2026-02-25 12:17:09.874 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:10 compute-0 nova_compute[244014]: 2026-02-25 12:17:10.027 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/f0ef5a9a-23b8-4883-8e47-feb7403a11d8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/f0ef5a9a-23b8-4883-8e47-feb7403a11d8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
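
[note] Both locations point at the same RBD snapshot of glance image f0ef5a9a-23b8-4883-8e47-feb7403a11d8. Nova's Rbd backend can only copy-on-write clone a source that is already raw; the qcow2 format reported a few lines below forces the slower fetch/convert/import path instead. A sketch of that gate (an assumption mirroring the observed flow, not the driver's exact check):

    def can_clone_directly(disk_format: str) -> bool:
        # CoW clone is only valid for raw images already stored in RBD
        return disk_format == "raw"
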
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:11 compute-0 ceph-mon[76335]: pgmap v950: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 591 KiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.258 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.338 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.339 244018 DEBUG nova.virt.images [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] f0ef5a9a-23b8-4883-8e47-feb7403a11d8 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.343 244018 DEBUG nova.privsep.utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.343 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 342 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.1 MiB/s wr, 194 op/s
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.563 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.568 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.635 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.637 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
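
[note] The fetch path above is: qemu-img info on the downloaded .part file, convert qcow2 to raw with host caching disabled (-t none), then qemu-img info again to validate the result, all under the per-image lock named by the base-file checksum (released above after 1.983s). A self-contained sketch of the same sequence, with paths copied from the log and the prlimit wrapper omitted:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538"
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", base + ".part"]))
    if info["format"] == "qcow2":
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             base + ".part", base + ".converted"])
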
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.669 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.673 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.945 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.945 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.946 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.946 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.947 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
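
[note] The Acquiring/acquired/"released" triplets, with their waited/held timings, are oslo_concurrency's lock bookkeeping; the per-instance UUID lock serializes this terminate against any concurrent build on the same instance. A typical caller looks like this sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("20c8b8a1-1561-49f5-9fce-af2840195a57")
    def do_terminate_instance():
        # body runs with the per-instance lock held; the waited/held times
        # logged above are emitted by the lockutils wrapper itself
        pass
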
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.949 244018 INFO nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Terminating instance
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.955 244018 DEBUG nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:17:11 compute-0 nova_compute[244014]: 2026-02-25 12:17:11.956 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.045 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
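
[note] Because the destination image did not exist, the raw base file is imported into the vms pool and then grown to the flavor's 1 GiB root disk (1073741824 bytes). A CLI equivalent with names copied from the log; nova performs the resize through librbd rather than shelling out, and the "1G" suffix assumes a reasonably recent rbd release:

    import subprocess

    base = "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538"
    disk = "db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk"
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, disk,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    subprocess.check_call(
        ["rbd", "resize", "--size", "1G", "vms/" + disk,
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
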
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.128 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.128 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.144 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:17:12 compute-0 kernel: tap5fae567b-e2 (unregistering): left promiscuous mode
Feb 25 12:17:12 compute-0 NetworkManager[49836]: <info>  [1772021832.2249] device (tap5fae567b-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.224 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.224 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 ovn_controller[147040]: 2026-02-25T12:17:12Z|00041|binding|INFO|Releasing lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 from this chassis (sb_readonly=0)
Feb 25 12:17:12 compute-0 ovn_controller[147040]: 2026-02-25T12:17:12Z|00042|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 down in Southbound
Feb 25 12:17:12 compute-0 ovn_controller[147040]: 2026-02-25T12:17:12Z|00043|binding|INFO|Removing iface tap5fae567b-e2 ovn-installed in OVS
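
[note] ovn-controller reacts to the VM teardown in three steps: release the logical port from this chassis, mark it down in the southbound DB, and strip the ovn-installed flag from the OVS interface. The resulting binding state can be inspected directly; a sketch assuming ovn-sbctl is available on the host:

    import subprocess

    out = subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658"])
    print(out.decode())  # expect chassis=[] and up=[false] once released
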
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.242 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.243 244018 INFO nova.compute.claims [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.242 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:e1:5a 10.100.0.8'], port_security=['fa:16:3e:3b:e1:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '20c8b8a1-1561-49f5-9fce-af2840195a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.245 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5fae567b-e26e-4045-8efe-6d5fc03a1658 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 unbound from our chassis
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.248 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[459194ea-d2c4-4c26-a111-f49e7a0b23d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.252 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace which is not needed anymore
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 25 12:17:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.629s CPU time.
Feb 25 12:17:12 compute-0 systemd-machined[210048]: Machine qemu-8-instance-00000008 terminated.
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.340 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.342 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
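
[note] This single record carries the entire guest-definition input. Its disk mapping condenses to: root disk on virtio as vda (boot index 1) and the config drive as a SATA cdrom on sda, consistent with the hw_disk_bus/hw_cdrom_bus defaults registered earlier in this log. In sketch form:

    disk_mapping = {
        "root":        {"bus": "virtio", "dev": "vda", "type": "disk", "boot_index": "1"},
        "disk.config": {"bus": "sata",   "dev": "sda", "type": "cdrom"},
    }
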
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.345 244018 WARNING nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.353 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.353 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.355 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
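
[note] The cgroups v1 probe fails and the v2 probe succeeds, which is expected on an el9 host running the unified hierarchy. On cgroups v2 the check reduces to reading one file; a minimal sketch:

    def has_cgroupsv2_cpu_controller() -> bool:
        # the unified hierarchy lists enabled controllers in a single file
        with open("/sys/fs/cgroup/cgroup.controllers") as f:
            return "cpu" in f.read().split()
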
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
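
[note] With a 1-vCPU flavor and no topology constraints from flavor or image, only one (sockets, cores, threads) factorization exists, hence the single 1:1:1 result above. A toy enumeration showing why (illustrative, not nova's implementation):

    def possible_topologies(vcpus: int) -> list[tuple[int, int, int]]:
        # every factorization s*c*t == vcpus within the (huge) default limits
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    assert possible_topologies(1) == [(1, 1, 1)]
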
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.372 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:12 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : haproxy version is 2.8.14-c23fe91
Feb 25 12:17:12 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : path to executable is /usr/sbin/haproxy
Feb 25 12:17:12 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [WARNING]  (255169) : Exiting Master process...
Feb 25 12:17:12 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [ALERT]    (255169) : Current worker (255192) exited with code 143 (Terminated)
Feb 25 12:17:12 compute-0 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [WARNING]  (255169) : All workers exited. Exiting... (0)
Feb 25 12:17:12 compute-0 systemd[1]: libpod-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope: Deactivated successfully.
Feb 25 12:17:12 compute-0 conmon[255138]: conmon 2f771423fea679a9ed4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope/container/memory.events
Feb 25 12:17:12 compute-0 podman[256782]: 2026-02-25 12:17:12.394188623 +0000 UTC m=+0.049094082 container died 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.424 244018 INFO nova.virt.libvirt.driver [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance destroyed successfully.
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.428 244018 DEBUG nova.objects.instance [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'resources' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e-userdata-shm.mount: Deactivated successfully.
Feb 25 12:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-707d0ec3801d09f720d201cc99cc3906ecd5ce5937d1d449d2a23665084e2304-merged.mount: Deactivated successfully.
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.440 244018 DEBUG nova.virt.libvirt.vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:16:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.441 244018 DEBUG nova.network.os_vif_util [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.441 244018 DEBUG nova.network.os_vif_util [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.442 244018 DEBUG os_vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.445 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fae567b-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.450 244018 INFO os_vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2')
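
[note] os-vif unplugs the tap by deleting the OVS port over OVSDB (the DelPortCommand above, with if_exists=True). A CLI equivalent, shown for illustration only since os-vif speaks OVSDB directly:

    import subprocess

    subprocess.check_call(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap5fae567b-e2"])
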
Feb 25 12:17:12 compute-0 podman[256782]: 2026-02-25 12:17:12.457817437 +0000 UTC m=+0.112722906 container cleanup 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:17:12 compute-0 systemd[1]: libpod-conmon-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope: Deactivated successfully.
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.474 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:12 compute-0 podman[256836]: 2026-02-25 12:17:12.543389153 +0000 UTC m=+0.064963084 container remove 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.548 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[073c6b70-f17c-4be0-9513-1bd4500d384a]: (4, ('Wed Feb 25 12:17:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e)\n2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e\nWed Feb 25 12:17:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e)\n2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cba0ba49-ddeb-43a9-a7a6-6e6e324b652c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
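
[note] The privsep reply above embeds the stop/delete transcript for the haproxy sidecar that served metadata on this network. Reproducing the same teardown by hand, with the container name taken from the log:

    import subprocess

    name = "neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8"
    subprocess.check_call(["podman", "stop", name])
    subprocess.check_call(["podman", "rm", name])
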
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.551 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:12 compute-0 kernel: tapd3fb36f1-00: left promiscuous mode
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4266573-3765-4eb3-a454-60da139b0b75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.574 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[237c970b-2855-43b4-9485-b07f46aa8c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bea060ff-ba97-4ffc-817b-78a8910d5a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bb7eed-f3fa-4a5a-a9e0-a298f9c56b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377636, 'reachable_time': 43116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256876, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dd3fb36f1\x2d0e88\x2d43b4\x2da8a4\x2d3844d55f1de8.mount: Deactivated successfully.
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.597 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:17:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.597 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[aa34f02b-66ab-45c2-a81c-65a2246da3b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
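
[note] With the last VIF gone, the agent removes the ovnmeta namespace itself (remove_netns above goes through pyroute2 under privsep). The iproute2 equivalent, for illustration:

    import subprocess

    subprocess.check_call(
        ["ip", "netns", "delete", "ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8"])
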
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.861 244018 INFO nova.virt.libvirt.driver [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deleting instance files /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57_del
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.861 244018 INFO nova.virt.libvirt.driver [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deletion of /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57_del complete
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.868 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:17:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272380381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.960 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.977 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.980 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.998 244018 INFO nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 1.04 seconds to destroy the instance on the hypervisor.
Feb 25 12:17:12 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.999 244018 DEBUG oslo.service.loopingcall [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:12.999 244018 DEBUG nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.000 244018 DEBUG nova.network.neutron [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:17:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115378256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:13 compute-0 ceph-mon[76335]: pgmap v951: 305 pgs: 305 active+clean; 342 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.1 MiB/s wr, 194 op/s
Feb 25 12:17:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4272380381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/115378256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.111 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.115 244018 DEBUG nova.compute.provider_tree [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.140 244018 DEBUG nova.scheduler.client.report [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.182 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.182 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.296 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.296 244018 DEBUG nova.network.neutron [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.353 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.399 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:17:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062890002' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.499 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.502 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <name>instance-0000000a</name>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:12</nova:creationTime>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <system>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </system>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <os>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </os>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <features>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </features>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:13 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <video>
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </video>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:17:13 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:17:13 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:17:13 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:17:13 compute-0 nova_compute[244014]: </domain>
Feb 25 12:17:13 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:17:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.581 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.583 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.584 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating image(s)
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.611 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.654 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.684 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.688 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.704 244018 DEBUG nova.network.neutron [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.705 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.710 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.710 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.711 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.732 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.743 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.743 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.744 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.744 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.764 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.768 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.812 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.878 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'keypairs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.942 244018 DEBUG nova.network.neutron [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:13 compute-0 nova_compute[244014]: 2026-02-25 12:17:13.973 244018 INFO nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 0.97 seconds to deallocate network for instance.
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.018 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.044 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.044 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.085 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.091 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpznk6fh7w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1062890002' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.125 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] resizing rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.217 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpznk6fh7w" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.254 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.258 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.331 244018 DEBUG nova.objects.instance [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.336 244018 DEBUG oslo_concurrency.processutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.369 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.370 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Ensure instance console log exists: /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.371 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.371 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.372 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.375 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.380 244018 WARNING nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.386 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.387 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.390 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.390 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.391 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.392 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.394 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.394 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.395 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.395 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.403 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.570 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.571 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.
Feb 25 12:17:14 compute-0 systemd-machined[210048]: New machine qemu-12-instance-0000000a.
Feb 25 12:17:14 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000a.
Feb 25 12:17:14 compute-0 podman[257213]: 2026-02-25 12:17:14.713531419 +0000 UTC m=+0.072170259 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:17:14 compute-0 podman[257212]: 2026-02-25 12:17:14.720832777 +0000 UTC m=+0.080797406 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105879104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.894 244018 DEBUG oslo_concurrency.processutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.899 244018 DEBUG nova.compute.provider_tree [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278180881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.934 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.953 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.956 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:14 compute-0 nova_compute[244014]: 2026-02-25 12:17:14.973 244018 DEBUG nova.scheduler.client.report [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.009 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.015 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 WARNING nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received unexpected event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with vm_state deleted and task_state None.
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.017 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-deleted-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.045 244018 INFO nova.scheduler.client.report [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Deleted allocations for instance 20c8b8a1-1561-49f5-9fce-af2840195a57
Feb 25 12:17:15 compute-0 ceph-mon[76335]: pgmap v952: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Feb 25 12:17:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3105879104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1278180881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.293 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561197649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.508 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.509 244018 DEBUG nova.objects.instance [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for db0fc9fa-1fc0-4334-96f9-2205fa53e308 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021835.5128403, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.515 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.516 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.519 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.519 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.526 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <uuid>9f78f954-3472-4631-a0ae-b945b9c26f5a</uuid>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <name>instance-0000000c</name>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1226014116</nova:name>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:14</nova:creationTime>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:user uuid="f653b2ec5be5483092804fcd55beab49">tempest-ServersAdminNegativeTestJSON-49539203-project-member</nova:user>
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <nova:project uuid="2805044e788543068625873119e58bd0">tempest-ServersAdminNegativeTestJSON-49539203</nova:project>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="serial">9f78f954-3472-4631-a0ae-b945b9c26f5a</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="uuid">9f78f954-3472-4631-a0ae-b945b9c26f5a</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9f78f954-3472-4631-a0ae-b945b9c26f5a_disk">
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config">
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/console.log" append="off"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:17:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:17:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:17:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:17:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:17:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.535 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 204 op/s
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.540 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.542 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.542 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.545 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.569 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.570 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021835.5151365, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.570 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.595 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.598 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.599 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.599 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Using config drive
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.618 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.625 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.626 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.650 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.810 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.819 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating config drive at /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.824 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprrv7_hrf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.951 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprrv7_hrf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:15 compute-0 nova_compute[244014]: 2026-02-25 12:17:15.996 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:16 compute-0 nova_compute[244014]: 2026-02-25 12:17:16.000 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3561197649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:16 compute-0 nova_compute[244014]: 2026-02-25 12:17:16.216 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:16 compute-0 nova_compute[244014]: 2026-02-25 12:17:16.217 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deleting local config drive /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config because it was imported into RBD.
Feb 25 12:17:16 compute-0 systemd-machined[210048]: New machine qemu-13-instance-0000000c.
Feb 25 12:17:16 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.096 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021837.0951371, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.097 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Resumed (Lifecycle Event)
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.103 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.104 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.108 244018 INFO nova.virt.libvirt.driver [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance spawned successfully.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.109 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.132 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.142 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.143 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.145 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.146 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.153 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.153 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021837.0961735, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.154 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Started (Lifecycle Event)
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.201 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.214 244018 INFO nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 3.63 seconds to spawn the instance on the hypervisor.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.215 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:17 compute-0 ceph-mon[76335]: pgmap v953: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 204 op/s
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.275 244018 INFO nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 5.09 seconds to build instance.
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.299 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:17 compute-0 nova_compute[244014]: 2026-02-25 12:17:17.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 7.1 MiB/s wr, 359 op/s
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.307 244018 INFO nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Rebuilding instance
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.767 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.793 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.857 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'pci_requests' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.869 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.887 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.913 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.932 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:17:18 compute-0 nova_compute[244014]: 2026-02-25 12:17:18.937 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.087 244018 DEBUG nova.objects.instance [None req-0d544355-0e80-4e87-a307-6c0c91a9cd3e 6a24496b0a9947e48d0da17ecf258476 40504c45ce5547dc88d4082880bfc79f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.107 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021839.1075826, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.108 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Paused (Lifecycle Event)
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.134 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.152 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:17:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:19 compute-0 ceph-mon[76335]: pgmap v954: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 7.1 MiB/s wr, 359 op/s
Feb 25 12:17:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.0 MiB/s wr, 287 op/s
Feb 25 12:17:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 25 12:17:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 2.952s CPU time.
Feb 25 12:17:19 compute-0 systemd-machined[210048]: Machine qemu-13-instance-0000000c terminated.
Feb 25 12:17:19 compute-0 nova_compute[244014]: 2026-02-25 12:17:19.730 244018 DEBUG nova.compute.manager [None req-0d544355-0e80-4e87-a307-6c0c91a9cd3e 6a24496b0a9947e48d0da17ecf258476 40504c45ce5547dc88d4082880bfc79f - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:20 compute-0 ceph-mon[76335]: pgmap v955: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.0 MiB/s wr, 287 op/s
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 300 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.2 MiB/s wr, 354 op/s
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.876 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.877 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.877 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.878 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.878 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.881 244018 INFO nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Terminating instance
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.882 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.883 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquired lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:17:21 compute-0 nova_compute[244014]: 2026-02-25 12:17:21.883 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.075 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:22 compute-0 ceph-mon[76335]: pgmap v956: 305 pgs: 305 active+clean; 300 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.2 MiB/s wr, 354 op/s
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.745 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.768 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Releasing lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.769 244018 DEBUG nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.777 244018 INFO nova.virt.libvirt.driver [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance destroyed successfully.
Feb 25 12:17:22 compute-0 nova_compute[244014]: 2026-02-25 12:17:22.778 244018 DEBUG nova.objects.instance [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'resources' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.184 244018 INFO nova.virt.libvirt.driver [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deleting instance files /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a_del
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.185 244018 INFO nova.virt.libvirt.driver [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deletion of /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a_del complete
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.262 244018 INFO nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 0.49 seconds to destroy the instance on the hypervisor.
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.263 244018 DEBUG oslo.service.loopingcall [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.264 244018 DEBUG nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.264 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.579 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.612 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.626 244018 INFO nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 0.36 seconds to deallocate network for instance.
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.663 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.663 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:23 compute-0 nova_compute[244014]: 2026-02-25 12:17:23.742 244018 DEBUG oslo_concurrency.processutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303118052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.296 244018 DEBUG oslo_concurrency.processutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.304 244018 DEBUG nova.compute.provider_tree [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.320 244018 DEBUG nova.scheduler.client.report [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
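The inventory dict logged above is enough to recompute what placement actually schedules against: per resource class, capacity is (total - reserved) * allocation_ratio. A quick check of the figures reported for provider cb4dae98-2ac3-4218-9445-2320139e12ad (values copied from the log; the formula is placement's standard capacity calculation):

    # Recompute schedulable capacity from the logged inventory:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv['total'] - inv['reserved']) * inv['allocation_ratio']))
    # MEMORY_MB 7167, VCPU 32, DISK_GB 52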
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.345 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.366 244018 INFO nova.scheduler.client.report [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Deleted allocations for instance 9f78f954-3472-4631-a0ae-b945b9c26f5a
Feb 25 12:17:24 compute-0 nova_compute[244014]: 2026-02-25 12:17:24.426 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:24 compute-0 ceph-mon[76335]: pgmap v957: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Feb 25 12:17:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/303118052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
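Every Acquiring/acquired/released triple in the terminate flow above is emitted by oslo.concurrency's lockutils wrappers. A minimal sketch of the two forms visible in the log, with the lock names copied from it and the function bodies hypothetical:

    from oslo_concurrency import lockutils

    # Decorator form: nova serializes do_terminate_instance per instance
    # UUID; the "acquired"/"released" DEBUG lines come from this wrapper.
    @lockutils.synchronized('9f78f954-3472-4631-a0ae-b945b9c26f5a')
    def do_terminate_instance():
        pass  # hypothetical body

    # Context-manager form, as used for the network-info cache refresh:
    with lockutils.lock('refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a'):
        pass  # rebuild instance_info_cache here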
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.152 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.152 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.155 244018 INFO nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Terminating instance
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.157 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.157 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquired lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.158 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.456 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.0 MiB/s wr, 292 op/s
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.914 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.934 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Releasing lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:17:25 compute-0 nova_compute[244014]: 2026-02-25 12:17:25.935 244018 DEBUG nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:17:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 25 12:17:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 11.130s CPU time.
Feb 25 12:17:26 compute-0 systemd-machined[210048]: Machine qemu-11-instance-0000000b terminated.
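The scope names in the three systemd lines above are unit-name escaped: every '-' inside the machine name is encoded as \x2d (systemd-escape --unescape reverses this on the command line). A small decoder for reading such names back:

    import re

    def unescape_unit(name):
        # Undo systemd unit-name escaping (\xNN -> the byte with hex value NN).
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r'machine-qemu\x2d11\x2dinstance\x2d0000000b.scope'))
    # -> machine-qemu-11-instance-0000000b.scope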
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.156 244018 INFO nova.virt.libvirt.driver [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance destroyed successfully.
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.156 244018 DEBUG nova.objects.instance [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'resources' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.575 244018 INFO nova.virt.libvirt.driver [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deleting instance files /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8_del
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.576 244018 INFO nova.virt.libvirt.driver [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deletion of /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8_del complete
Feb 25 12:17:26 compute-0 ceph-mon[76335]: pgmap v958: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.0 MiB/s wr, 292 op/s
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.636 244018 INFO nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 0.70 seconds to destroy the instance on the hypervisor.
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.637 244018 DEBUG oslo.service.loopingcall [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.638 244018 DEBUG nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:17:26 compute-0 nova_compute[244014]: 2026-02-25 12:17:26.638 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.012 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.035 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.051 244018 INFO nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 0.41 seconds to deallocate network for instance.
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.108 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.109 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.171 244018 DEBUG oslo_concurrency.processutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.423 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021832.4227164, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.424 244018 INFO nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Stopped (Lifecycle Event)
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.447 244018 DEBUG nova.compute.manager [None req-9a9878a5-dc30-44f9-9de6-0cf65d38ef9e - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.1 MiB/s wr, 374 op/s
Feb 25 12:17:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540527969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.793 244018 DEBUG oslo_concurrency.processutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.799 244018 DEBUG nova.compute.provider_tree [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.820 244018 DEBUG nova.scheduler.client.report [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.852 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.876 244018 INFO nova.scheduler.client.report [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Deleted allocations for instance 77af7d73-f695-47b4-8ec1-98a3672ff8d8
Feb 25 12:17:27 compute-0 nova_compute[244014]: 2026-02-25 12:17:27.957 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:28 compute-0 nova_compute[244014]: 2026-02-25 12:17:28.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:28 compute-0 ceph-mon[76335]: pgmap v959: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.1 MiB/s wr, 374 op/s
Feb 25 12:17:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1540527969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:28 compute-0 nova_compute[244014]: 2026-02-25 12:17:28.982 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
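The "resending shutdown" line comes from the libvirt driver's clean-shutdown loop: send an ACPI shutdown request, poll the domain state once a second, resend the request at each retry interval, and give up after a timeout (state 1 is libvirt's VIR_DOMAIN_RUNNING; this instance succeeds at 13 seconds further below). A hedged sketch of that loop, where `dom` is a stand-in for a libvirt domain object and the intervals are illustrative rather than nova's exact defaults:

    import time

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        # dom.state() here returns a bare int; the real libvirt binding
        # returns [state, reason].
        dom.shutdown()                      # initial ACPI shutdown request
        for elapsed in range(1, timeout + 1):
            time.sleep(1)
            if dom.state() != 1:            # 1 == VIR_DOMAIN_RUNNING
                return True                 # "Instance shutdown successfully"
            if elapsed % retry_interval == 0:
                dom.shutdown()              # "Instance in state 1 ... resending"
        return False                        # caller falls back to hard destroy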
Feb 25 12:17:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Feb 25 12:17:30 compute-0 ceph-mon[76335]: pgmap v960: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Feb 25 12:17:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:17:30
Feb 25 12:17:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:17:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:17:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Feb 25 12:17:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:17:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 12:17:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Consumed 12.655s CPU time.
Feb 25 12:17:31 compute-0 systemd-machined[210048]: Machine qemu-12-instance-0000000a terminated.
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 227 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:17:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:31.999 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance shutdown successfully after 13 seconds.
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.007 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.014 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.605 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.606 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 12:17:32 compute-0 ceph-mon[76335]: pgmap v961: 305 pgs: 305 active+clean; 227 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.749 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.750 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.781 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.819 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.849 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.853 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.927 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
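The qemu-img probe above is deliberately sandboxed: oslo.concurrency re-execs the command under `python3 -m oslo_concurrency.prlimit`, capping address space at 1073741824 bytes (1 GiB) and CPU time at 30 s so a malformed image cannot wedge the compute service. The same call issued from Python looks like this (the limits match the logged flags; this mirrors nova's qemu-img wrapper):

    from oslo_concurrency import processutils

    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1 * 1024 ** 3,   # --as=1073741824
        cpu_time=30)                   # --cpu=30

    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=QEMU_IMG_LIMITS)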
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.929 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.930 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.930 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.961 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:32 compute-0 nova_compute[244014]: 2026-02-25 12:17:32.966 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.410 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.486 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:17:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.630 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
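The import/resize pair above stages the flat base image into the vms pool and then grows it to the flavor's root disk. The resize target is exactly the m1.nano flavor's root_gb=1 (see the flavor dump further below) expressed in bytes:

    # Flavor root disk in bytes -- the exact figure passed to rbd resize:
    root_gb = 1
    size = root_gb * 1024 ** 3
    assert size == 1073741824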
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.631 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.632 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.633 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.633 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.636 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.640 244018 WARNING nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.646 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.646 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.650 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.651 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.651 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.652 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.655 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.655 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.656 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.656 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
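The topology lines above are nova.virt.hardware searching for sockets x cores x threads factorizations of the vCPU count under the (here unconstrained) 65536 limits; with 1 vCPU the only candidate is 1:1:1, which is what lands in the domain XML's <topology> element below. A compact sketch of that enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate sockets*cores*threads combinations that multiply
        # out to the requested vCPU count, within the given limits.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)]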
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.657 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:33 compute-0 nova_compute[244014]: 2026-02-25 12:17:33.687 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715058366' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.295 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.331 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.337 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:34 compute-0 ceph-mon[76335]: pgmap v962: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Feb 25 12:17:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/715058366' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.733 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021839.7315087, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.734 244018 INFO nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Stopped (Lifecycle Event)
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.773 244018 DEBUG nova.compute.manager [None req-e7173a5e-71e9-412e-b5e2-21467220b878 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423179979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.916 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
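The two audited "mon dump" commands are nova's rbd utilities resolving monitor addresses; the result feeds the <host name=... port=6789> elements of the rbd disks in the XML that follows. A hypothetical minimal version of that lookup (field names per ceph's mon dump JSON; the parse is illustrative, not nova's code):

    import json, subprocess

    dump = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    # public_addr looks like "192.168.122.100:6789/0"; drop the nonce.
    mons = [m['public_addr'].split('/')[0] for m in dump['mons']]
    print(mons)   # e.g. ['192.168.122.100:6789']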
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.920 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <name>instance-0000000a</name>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:33</nova:creationTime>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <system>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </system>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <os>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </os>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <features>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </features>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <video>
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </video>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:17:34 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:17:34 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:17:34 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:17:34 compute-0 nova_compute[244014]: </domain>
Feb 25 12:17:34 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
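
The DEBUG dump that ends above is the complete libvirt guest definition Nova hands to the hypervisor for instance db0fc9fa-1fc0-4334-96f9-2205fa53e308. A minimal sketch, assuming the logged <domain> document is available as a string, of reading the RBD-backed disks back out with the stdlib parser; the embedded XML here is a trimmed copy of the dump, kept to one disk:

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>
    """

    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            mons = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
            print(tgt.get("dev"), "->", src.get("name"), "via", mons)

Run against the full dump it would list both vda and the SATA config-drive cdrom, each reached through the Ceph monitor at 192.168.122.100:6789.
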
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.973 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.974 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:34 compute-0 nova_compute[244014]: 2026-02-25 12:17:34.977 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.001 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.025 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.063 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'keypairs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.470 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.477 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq954wmnq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.605 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq954wmnq" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
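
The two processutils lines above show the config drive being built as an ISO9660 image in about 0.13 s. A hedged sketch of the same invocation; iso_path and staging_dir are hypothetical parameters, the flags are copied from the logged command, and the publisher string's version suffix is elided:

    import subprocess

    def make_config_drive(iso_path: str, staging_dir: str) -> None:
        # Joliet (-J) + Rock Ridge (-r); cloud-init locates the drive by
        # the volume label "config-2".
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2",
             staging_dir],
            check=True)
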
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.639 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.643 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1423179979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.813 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:35 compute-0 nova_compute[244014]: 2026-02-25 12:17:35.815 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.
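
Because this host uses the RBD image backend, the freshly built ISO is imported straight into the vms pool and the local copy deleted, exactly as the INFO line says. A sketch with the pool, image name, and paths as parameters; the rbd flags mirror the logged command:

    import os
    import subprocess

    def import_config_drive(local_path: str, pool: str, image: str,
                            client: str = "openstack",
                            conf: str = "/etc/ceph/ceph.conf") -> None:
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image,
             "--image-format=2", "--id", client, "--conf", conf],
            check=True)
        # mirrors "Deleting local config drive ... imported into RBD"
        os.unlink(local_path)
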
Feb 25 12:17:35 compute-0 systemd-machined[210048]: New machine qemu-14-instance-0000000a.
Feb 25 12:17:35 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000a.
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.327 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for db0fc9fa-1fc0-4334-96f9-2205fa53e308 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.328 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021856.327064, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.329 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.334 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.334 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.339 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.340 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.358 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.362 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.463 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.463 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021856.3272145, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.464 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.469 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.470 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.470 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.471 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.472 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.472 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
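
The six "Found default for ..." lines persist the buses and models the driver just chose, so a later rebuild or cross-host move reproduces the same virtual hardware. A toy sketch of the merge; the dict mirrors the logged values and the helper name is illustrative, not Nova's:

    DRIVER_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props: dict) -> dict:
        # Fill in only the properties the image left unset.
        merged = dict(DRIVER_DEFAULTS)
        merged.update(image_props)
        return merged

    print(register_undefined_details({"hw_disk_bus": "scsi"}))
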
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.551 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.555 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:36 compute-0 ceph-mon[76335]: pgmap v963: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.755 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
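
Both "Skip" lines come from the same guard: the Resumed/Started lifecycle events race with the rebuild that is still in flight, so the sync routine refuses to touch the database while task_state is set. The decision reduces to roughly the following; names are illustrative, not Nova's signatures:

    RUNNING = 1  # matches the logged power_state value of 1

    def sync_power_state(db_state: int, vm_state: int, task_state):
        if task_state is not None:
            return f"skip: pending task ({task_state})"
        if db_state != vm_state:
            return f"correct DB power_state to {vm_state}"
        return "in sync"

    print(sync_power_state(RUNNING, RUNNING, "rebuild_spawning"))  # skip
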
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.764 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.843 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.844 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.844 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:17:36 compute-0 nova_compute[244014]: 2026-02-25 12:17:36.903 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:37 compute-0 nova_compute[244014]: 2026-02-25 12:17:37.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 771 KiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.114 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.115 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.115 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.116 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.116 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
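
The acquire/release pairs above are oslo.concurrency's standard lock instrumentation; application code normally gets them through the synchronized decorator rather than by driving the lock manager directly. A sketch using the per-instance events lock name from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("db0fc9fa-1fc0-4334-96f9-2205fa53e308-events")
    def _clear_events():
        # Runs with the lock held; the DEBUG "acquired"/"released" lines
        # above bracket exactly this kind of call.
        pass

    _clear_events()
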
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.118 244018 INFO nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Terminating instance
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.120 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.120 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquired lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.121 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.292 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:38 compute-0 nova_compute[244014]: 2026-02-25 12:17:38.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:38 compute-0 ceph-mon[76335]: pgmap v964: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 771 KiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb 25 12:17:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 620 KiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 12:17:39 compute-0 nova_compute[244014]: 2026-02-25 12:17:39.846 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:39 compute-0 nova_compute[244014]: 2026-02-25 12:17:39.869 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Releasing lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:17:39 compute-0 nova_compute[244014]: 2026-02-25 12:17:39.869 244018 DEBUG nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:17:39 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 12:17:39 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000a.scope: Consumed 4.118s CPU time.
Feb 25 12:17:39 compute-0 systemd-machined[210048]: Machine qemu-14-instance-0000000a terminated.
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.090 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.091 244018 DEBUG nova.objects.instance [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.494 244018 INFO nova.virt.libvirt.driver [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.495 244018 INFO nova.virt.libvirt.driver [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.548 244018 INFO nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.549 244018 DEBUG oslo.service.loopingcall [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.550 244018 DEBUG nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.550 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.729 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.783 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.801 244018 INFO nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 0.25 seconds to deallocate network for instance.
Feb 25 12:17:40 compute-0 ceph-mon[76335]: pgmap v965: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 620 KiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.851 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.851 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:40 compute-0 nova_compute[244014]: 2026-02-25 12:17:40.912 244018 DEBUG oslo_concurrency.processutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.155 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021846.1541154, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.156 244018 INFO nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Stopped (Lifecycle Event)
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.182 244018 DEBUG nova.compute.manager [None req-56a48e00-c165-4210-be24-14fec9ae075c - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692699550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.431 244018 DEBUG oslo_concurrency.processutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
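
That half-second ceph df round trip is how the resource tracker sizes disk inventory on an RBD-backed node. A sketch of issuing the same query and reading the cluster totals; the JSON keys follow the ceph df --format=json schema, which is an assumption about the installed Ceph release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    avail_gb = stats["total_avail_bytes"] / 1024 ** 3
    total_gb = stats["total_bytes"] / 1024 ** 3
    print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")  # ~60/60, per the pgmap lines
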
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.436 244018 DEBUG nova.compute.provider_tree [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.456 244018 DEBUG nova.scheduler.client.report [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
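
The inventory dict in that line is what placement admits allocations against; usable capacity per resource class is (total - reserved) * allocation_ratio. Reproducing the logged numbers:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2

So this 8-vCPU host can be scheduled up to 32 vCPUs of guests thanks to the 4.0 CPU overcommit, while memory is not overcommitted at all.
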
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.483 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.505 244018 INFO nova.scheduler.client.report [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Deleted allocations for instance db0fc9fa-1fc0-4334-96f9-2205fa53e308
Feb 25 12:17:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 178 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.565 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:41 compute-0 nova_compute[244014]: 2026-02-25 12:17:41.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3692699550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00015022157419548375 of space, bias 1.0, pg target 0.04506647225864512 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00248968974604864 of space, bias 1.0, pg target 0.746906923814592 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.436791589578832e-07 of space, bias 4.0, pg target 0.0010124149907494598 quantized to 16 (current 16)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:17:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
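
Each _maybe_adjust pass computes, per pool, usage_ratio x bias x the cluster PG budget, then rounds to a power of two no lower than the pool's floor. With 3 OSDs and the default mon_target_pg_per_osd of 100 the budget is about 300, which reproduces the 'images' line exactly; the quantize helper below is an illustration, not Ceph's precise rounding code:

    def quantize(target: float, floor: int) -> int:
        # Round up to the next power of two, never below the pool's floor.
        n = floor
        while n < target:
            n *= 2
        return n

    usage_ratio, bias, pg_budget = 0.00248968974604864, 1.0, 300
    pg_target = usage_ratio * bias * pg_budget
    print(pg_target)                # 0.746906923814592, as logged for 'images'
    print(quantize(pg_target, 32))  # 32 -> "quantized to 32 (current 32)"

Every pool is already at its quantized target, so the autoscaler changes nothing this pass.
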
Feb 25 12:17:42 compute-0 nova_compute[244014]: 2026-02-25 12:17:42.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:42 compute-0 ceph-mon[76335]: pgmap v966: 305 pgs: 305 active+clean; 178 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Feb 25 12:17:43 compute-0 nova_compute[244014]: 2026-02-25 12:17:43.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 12:17:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.802 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.803 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.826 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:17:44 compute-0 ceph-mon[76335]: pgmap v967: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.887 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.895 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:17:44 compute-0 nova_compute[244014]: 2026-02-25 12:17:44.895 244018 INFO nova.compute.claims [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.003 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555895963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.560 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.566 244018 DEBUG nova.compute.provider_tree [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:45 compute-0 podman[257993]: 2026-02-25 12:17:45.73174287 +0000 UTC m=+0.071848228 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.764 244018 DEBUG nova.scheduler.client.report [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:17:45 compute-0 podman[257994]: 2026-02-25 12:17:45.773566504 +0000 UTC m=+0.113978780 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.842 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.843 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.903 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.904 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.928 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:17:45 compute-0 nova_compute[244014]: 2026-02-25 12:17:45.947 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:17:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/555895963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.048 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.050 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.051 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating image(s)
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.082 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.115 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.142 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.146 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.208 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.209 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.210 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.211 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.239 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.242 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4c768bc2-a711-4179-b12f-604509e47856_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.432 244018 DEBUG nova.policy [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38c6d2e6875a408687bd85066c826987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7e518f2e6c84550b07b36ea68922707', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.516 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4c768bc2-a711-4179-b12f-604509e47856_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.592 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] resizing rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.713 244018 DEBUG nova.objects.instance [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.729 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.730 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Ensure instance console log exists: /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.730 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.731 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:46 compute-0 nova_compute[244014]: 2026-02-25 12:17:46.731 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:47 compute-0 ceph-mon[76335]: pgmap v968: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 25 12:17:47 compute-0 nova_compute[244014]: 2026-02-25 12:17:47.317 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Successfully created port: e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:17:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:17:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:17:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:17:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:17:47 compute-0 nova_compute[244014]: 2026-02-25 12:17:47.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 155 op/s
Feb 25 12:17:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:17:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.073284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868073311, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1603, "num_deletes": 251, "total_data_size": 2397811, "memory_usage": 2437872, "flush_reason": "Manual Compaction"}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868097979, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2361794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19411, "largest_seqno": 21013, "table_properties": {"data_size": 2354643, "index_size": 4094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15851, "raw_average_key_size": 20, "raw_value_size": 2339922, "raw_average_value_size": 2969, "num_data_blocks": 185, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021714, "oldest_key_time": 1772021714, "file_creation_time": 1772021868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 24766 microseconds, and 3583 cpu microseconds.
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.190 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Successfully updated port: e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.098041) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2361794 bytes OK
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.098066) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195757) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195807) EVENT_LOG_v1 {"time_micros": 1772021868195797, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2390779, prev total WAL file size 2390779, number of live WAL files 2.
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.196539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2306KB)], [47(6577KB)]
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868196580, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9097296, "oldest_snapshot_seqno": -1}
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4389 keys, 7345579 bytes, temperature: kUnknown
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868264074, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7345579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7315727, "index_size": 17774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 108511, "raw_average_key_size": 24, "raw_value_size": 7235915, "raw_average_value_size": 1648, "num_data_blocks": 743, "num_entries": 4389, "num_filter_entries": 4389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.264345) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7345579 bytes
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.268129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4903, records dropped: 514 output_compression: NoCompression
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.268169) EVENT_LOG_v1 {"time_micros": 1772021868268150, "job": 24, "event": "compaction_finished", "compaction_time_micros": 67575, "compaction_time_cpu_micros": 14495, "output_level": 6, "num_output_files": 1, "total_output_size": 7345579, "num_input_records": 4903, "num_output_records": 4389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868268858, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868270114, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.196473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.466 244018 DEBUG nova.compute.manager [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.467 244018 DEBUG nova.compute.manager [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.467 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:17:48 compute-0 nova_compute[244014]: 2026-02-25 12:17:48.595 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:17:49 compute-0 ceph-mon[76335]: pgmap v969: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 155 op/s
Feb 25 12:17:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 862 KiB/s wr, 76 op/s
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.077 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.107 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.108 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance network_info: |[{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.108 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.109 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.115 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start _get_guest_xml network_info=[{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.120 244018 WARNING nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.129 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.130 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.134 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.134 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.135 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.135 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.136 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.137 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.137 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.138 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.138 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.139 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.139 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.140 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.140 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.141 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.145 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/834894133' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.700 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.730 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:50 compute-0 nova_compute[244014]: 2026-02-25 12:17:50.735 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:51 compute-0 ceph-mon[76335]: pgmap v970: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 862 KiB/s wr, 76 op/s
Feb 25 12:17:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/834894133' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022918749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.339 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.343 244018 DEBUG nova.virt.libvirt.vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:17:45Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.344 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.346 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.348 244018 DEBUG nova.objects.instance [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.435 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <uuid>4c768bc2-a711-4179-b12f-604509e47856</uuid>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <name>instance-0000000d</name>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444</nova:name>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:50</nova:creationTime>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:user uuid="38c6d2e6875a408687bd85066c826987">tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member</nova:user>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:project uuid="c7e518f2e6c84550b07b36ea68922707">tempest-FloatingIPsAssociationNegativeTestJSON-701873226</nova:project>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <nova:port uuid="e4e15a18-1007-4bd1-80a7-b8d2cb64f15c">
Feb 25 12:17:51 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <system>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="serial">4c768bc2-a711-4179-b12f-604509e47856</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="uuid">4c768bc2-a711-4179-b12f-604509e47856</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </system>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <os>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4c768bc2-a711-4179-b12f-604509e47856_disk">
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4c768bc2-a711-4179-b12f-604509e47856_disk.config">
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </source>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:17:51 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:26:a8:6b"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <target dev="tape4e15a18-10"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/console.log" append="off"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <video>
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:17:51 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:17:51 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:17:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:17:51 compute-0 nova_compute[244014]: </domain>
Feb 25 12:17:51 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
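The block ending above is the full libvirt domain XML that nova-compute renders in _get_guest_xml before defining the guest: RBD-backed disks (vda on virtio plus the config-drive cdrom on sata, both authenticating with the ceph secret), an OVS-backed virtio NIC with MTU 1442, a pty serial console logged to console.log, VNC graphics, a virtio RNG fed from /dev/urandom, and a q35-style pcie-root with a bank of pcie-root-port controllers for hotplug headroom. A minimal sketch of pulling a device summary out of such a dump with the standard library; the input filename is hypothetical, since in the log the XML goes straight to libvirt and never touches disk:

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML dumped above (hypothetical file).
    root = ET.parse("instance-0000000d.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"),                    # disk / cdrom
              tgt.get("dev"), tgt.get("bus"),        # vda+virtio, sda+sata
              src.get("protocol"), src.get("name"))  # rbd, vms/<uuid>_disk...

    for iface in root.findall("./devices/interface"):
        print(iface.find("mac").get("address"),      # fa:16:3e:26:a8:6b
              iface.find("target").get("dev"))       # tape4e15a18-10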
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.437 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Preparing to wait for external event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.438 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.439 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.439 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
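The acquire/release pair around prepare_for_instance_event is Nova registering its interest in network-vif-plugged-e4e15a18-... before it plugs the VIF, so Neutron's notification cannot race past the waiter. A minimal sketch of that prepare-then-deliver pattern, with illustrative names rather than Nova's real classes:

    # Register the expected event *before* triggering the action that causes
    # it, so the callback can never be missed. Illustrative sketch only.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}        # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:         # matches the "-events" lock in the log
                return self._events.setdefault((instance_uuid, event_name),
                                               threading.Event())

        def deliver(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev:
                ev.set()             # unblocks the waiter

    events = InstanceEvents()
    waiter = events.prepare("4c768bc2-a711-4179-b12f-604509e47856",
                            "network-vif-plugged")
    # ... plug the VIF, then block until the external event arrives:
    # waiter.wait(timeout=300)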
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.441 244018 DEBUG nova.virt.libvirt.vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:17:45Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.442 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.443 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.444 244018 DEBUG os_vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
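nova_to_osvif_vif converts Nova's VIF dict into an os-vif versioned object, and os_vif.plug() then dispatches to the 'ovs' plugin named in the converted object. A rough sketch of driving the same public entry points, with field values copied from the log lines above; treat the exact constructor arguments as an approximation of the os_vif.objects API rather than a verified recipe:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the plugins, including 'ovs'

    # Field values copied from the VIFOpenVSwitch repr in the log.
    net = network.Network(id="1b6027b5-2ee6-49fb-b9e5-c79e5230d07e",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="e4e15a18-1007-4bd1-80a7-b8d2cb64f15c",
        address="fa:16:3e:26:a8:6b",
        vif_name="tape4e15a18-10",
        bridge_name="br-int",
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="e4e15a18-1007-4bd1-80a7-b8d2cb64f15c"))
    inst = instance_info.InstanceInfo(
        uuid="4c768bc2-a711-4179-b12f-604509e47856",
        name="instance-0000000d")

    os_vif.plug(port, inst)  # ends in the "Successfully plugged vif" line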
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.446 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.447 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.452 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4e15a18-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.453 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4e15a18-10, col_values=(('external_ids', {'iface-id': 'e4e15a18-1007-4bd1-80a7-b8d2cb64f15c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a8:6b', 'vm-uuid': '4c768bc2-a711-4179-b12f-604509e47856'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
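The AddPortCommand/DbSetCommand transaction pair wires the tap device into br-int and stamps the Interface row with the Neutron port ID and instance MAC/UUID, which is what lets ovn-controller claim the port a moment later. For reference, the same result from the CLI would be a single ovs-vsctl call; a sketch only, since Nova stays on the native ovsdbapp IDL connection rather than shelling out:

    import subprocess

    # Values copied from the two transaction commands above.
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tape4e15a18-10",
        "--", "set", "Interface", "tape4e15a18-10",
        "external_ids:iface-id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:26:a8:6b",
        "external_ids:vm-uuid=4c768bc2-a711-4179-b12f-604509e47856",
    ], check=True)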
Feb 25 12:17:51 compute-0 NetworkManager[49836]: <info>  [1772021871.4568] manager: (tape4e15a18-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.464 244018 INFO os_vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10')
Feb 25 12:17:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.671 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.671 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.672 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No VIF found with MAC fa:16:3e:26:a8:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.673 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Using config drive
Feb 25 12:17:51 compute-0 nova_compute[244014]: 2026-02-25 12:17:51.703 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1022918749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.503 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating config drive at /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.509 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9srjv0y0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.637 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9srjv0y0" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.674 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.678 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config 4c768bc2-a711-4179-b12f-604509e47856_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.829 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config 4c768bc2-a711-4179-b12f-604509e47856_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.830 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deleting local config drive /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config because it was imported into RBD.
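The config-drive sequence is: build an ISO9660 image from a temporary staging directory with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config (the cdrom source in the domain XML above), then delete the local copy. A sketch of the same three steps, with arguments taken from the two CMD lines; the paths and the /tmp staging directory are specific to this run:

    import os
    import subprocess

    base = "/var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856"
    iso = f"{base}/disk.config"

    # Step 1: build the ISO from the staging dir (argument list from the log).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp9srjv0y0"], check=True)

    # Step 2: import it into the Ceph 'vms' pool.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "4c768bc2-a711-4179-b12f-604509e47856_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)

    # Step 3: "Deleting local config drive ... imported into RBD."
    os.unlink(iso)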
Feb 25 12:17:52 compute-0 kernel: tape4e15a18-10: entered promiscuous mode
Feb 25 12:17:52 compute-0 NetworkManager[49836]: <info>  [1772021872.8824] manager: (tape4e15a18-10): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 25 12:17:52 compute-0 ovn_controller[147040]: 2026-02-25T12:17:52Z|00044|binding|INFO|Claiming lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for this chassis.
Feb 25 12:17:52 compute-0 ovn_controller[147040]: 2026-02-25T12:17:52Z|00045|binding|INFO|e4e15a18-1007-4bd1-80a7-b8d2cb64f15c: Claiming fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:52 compute-0 systemd-udevd[258340]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:17:52 compute-0 systemd-machined[210048]: New machine qemu-15-instance-0000000d.
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a8:6b 10.100.0.3'], port_security=['fa:16:3e:26:a8:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c768bc2-a711-4179-b12f-604509e47856', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7e518f2e6c84550b07b36ea68922707', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd3f2670-feae-47a4-bde0-e33bcae0ae19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e09bb1cc-ff05-4116-89c1-7f1944c54b5c, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.920 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c in datapath 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e bound to our chassis
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e
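The metadata agent noticed the binding because it subscribes to OVN southbound Port_Binding row events through ovsdbapp; the "Matched UPDATE" line above is ovsdbapp's own debug output from event.py. A hedged sketch of such a subscription, with illustrative class and handler names (the real event handling lives in neutron.agent.ovn.metadata.agent):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port was just bound to *our* chassis,
            # mirroring old=Port_Binding(chassis=[]) in the log line above.
            return (not getattr(old, 'chassis', None) and row.chassis and
                    row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print("provision metadata for", row.logical_port)

    # Registered via the IDL's notify handler once a connection to the OVN
    # southbound DB exists (connection setup not shown):
    # idl.notify_handler.watch_event(
    #     PortBindingUpdated("compute-0.ctlplane.example.com"))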
Feb 25 12:17:52 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Feb 25 12:17:52 compute-0 NetworkManager[49836]: <info>  [1772021872.9295] device (tape4e15a18-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:17:52 compute-0 NetworkManager[49836]: <info>  [1772021872.9303] device (tape4e15a18-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78ac9fe2-8e87-4b6e-837f-aa7f724e566b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.932 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b6027b5-21 in ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:52 compute-0 ovn_controller[147040]: 2026-02-25T12:17:52Z|00046|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c ovn-installed in OVS
Feb 25 12:17:52 compute-0 ovn_controller[147040]: 2026-02-25T12:17:52Z|00047|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c up in Southbound
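At this point ovn-controller has claimed the logical port for this chassis, marked the OVS interface ovn-installed, and flipped the port up in the southbound database. The binding can be confirmed from the CLI; a sketch (ovn-sbctl ships with OVN, and the column names match the Port_Binding row logged above):

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c"],
        capture_output=True, text=True, check=True)
    print(out.stdout)   # expect up : [true] once the claim completes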
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.933 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b6027b5-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.933 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[813c4ac4-5ff1-4f99-880f-0f4dc7fa8188]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3808052f-84ec-4253-9fbc-277f53a0bbef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 nova_compute[244014]: 2026-02-25 12:17:52.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.947 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d05bdb-86a8-495f-a87c-af5f3b6f8271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee63867-402b-4ab7-a2c7-f817437df4cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[823883ac-35a6-49d1-be73-a71ca93195d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:52 compute-0 systemd-udevd[258343]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:17:52 compute-0 NetworkManager[49836]: <info>  [1772021872.9893] manager: (tap1b6027b5-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 25 12:17:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3086ef21-2660-4ebb-956f-e9bc5e773c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.018 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f46c9-8f47-405a-a9b2-1ded8b9df28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.023 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b4a904-ce8e-4fe2-a516-24425f3708ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 NetworkManager[49836]: <info>  [1772021873.0437] device (tap1b6027b5-20): carrier: link connected
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.046 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[35284fca-6392-4221-b284-ecbe98f5a965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c48dc0-ec20-46e0-9154-4053d9fad3a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b6027b5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ad:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384245, 'reachable_time': 29268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258373, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.080 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3bb7e9-7efa-4b4b-922f-3ac49369345a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:ad5e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384245, 'tstamp': 384245}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258374, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.108 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2084cb3-6508-42cc-930f-dbd2738bbc32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b6027b5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ad:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384245, 'reachable_time': 29268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258375, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.138 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acadf091-db52-49ff-a6ff-bb3f380c9b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82059ad9-59cf-4d5d-bf58-7e5b1d2a0101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.186 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6027b5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.187 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.187 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b6027b5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:53 compute-0 kernel: tap1b6027b5-20: entered promiscuous mode
Feb 25 12:17:53 compute-0 NetworkManager[49836]: <info>  [1772021873.1897] manager: (tap1b6027b5-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.192 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b6027b5-20, col_values=(('external_ids', {'iface-id': '7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
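Provisioning the datapath amounts to: create the ovnmeta-<network> namespace, create a veth pair, move the inner end (tap1b6027b5-21, the interface in the RTM_NEWLINK dumps above) into the namespace, and plug the outer end (tap1b6027b5-20) into br-int with the metadata port's iface-id. A rough ip(8)/ovs-vsctl equivalent of what the agent does via pyroute2 under privsep; a sketch only, with the in-namespace IP assignment omitted:

    import subprocess

    ns = "ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e"
    cmds = [
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", "tap1b6027b5-20", "type", "veth",
         "peer", "name", "tap1b6027b5-21"],
        ["ip", "link", "set", "tap1b6027b5-21", "netns", ns],
        ["ip", "netns", "exec", ns,
         "ip", "link", "set", "tap1b6027b5-21", "up"],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap1b6027b5-20",
         "--", "set", "Interface", "tap1b6027b5-20",
         "external_ids:iface-id=7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200"],
    ]
    for cmd in cmds:
        subprocess.run(cmd, check=True)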
Feb 25 12:17:53 compute-0 ovn_controller[147040]: 2026-02-25T12:17:53Z|00048|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 12:17:53 compute-0 ceph-mon[76335]: pgmap v971: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.202 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.208 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.209 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.211 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7407bd-4840-452f-a053-4b171e70a67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.211 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:17:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.212 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'env', 'PROCESS_TAG=haproxy-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
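The rendered config binds the link-local metadata address inside the namespace and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header that identifies the caller's network; the rootwrap command above then launches haproxy in the namespace, and on this deployment the podman lines that follow show the proxy actually coming up inside a neutron-haproxy-ovnmeta container. One way to probe it once it is up, as a sketch (the returned HTTP status depends on the live service):

    import subprocess

    # The bind address matches the "bind 169.254.169.254:80" line above.
    subprocess.run(["ip", "netns", "exec",
                    "ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e",
                    "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
                    "http://169.254.169.254/openstack"], check=False)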
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.294 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:53 compute-0 podman[258442]: 2026-02-25 12:17:53.553238755 +0000 UTC m=+0.066939990 container create 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:17:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.572 244018 DEBUG nova.compute.manager [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.574 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.575 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.576 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.577 244018 DEBUG nova.compute.manager [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Processing event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.578 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.577011, 4c768bc2-a711-4179-b12f-604509e47856 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.579 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Started (Lifecycle Event)
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.581 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.587 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.592 244018 INFO nova.virt.libvirt.driver [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance spawned successfully.
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.592 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:17:53 compute-0 systemd[1]: Started libpod-conmon-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope.
Feb 25 12:17:53 compute-0 podman[258442]: 2026-02-25 12:17:53.512144791 +0000 UTC m=+0.025846056 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:17:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c0265e3a393cacb87e5ee458bb9d5c78aa9ff00fd616b394131707d01c2e19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:17:53 compute-0 podman[258442]: 2026-02-25 12:17:53.658232642 +0000 UTC m=+0.171933907 container init 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:17:53 compute-0 podman[258442]: 2026-02-25 12:17:53.665852785 +0000 UTC m=+0.179554020 container start 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:17:53 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : New worker (258469) forked
Feb 25 12:17:53 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : Loading success.
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.699 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.711 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
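The power-state comparison above uses Nova's numeric states: DB power_state 0 is NOSTATE (the instance has never been synced) while VM power_state 1 is RUNNING, as reported by libvirt. Roughly, the libvirt-to-Nova mapping looks like this; a simplification, with values matching nova.compute.power_state (NOSTATE=0, RUNNING=1, PAUSED=3, SHUTDOWN=4, CRASHED=6, SUSPENDED=7):

    import libvirt

    LIBVIRT_TO_NOVA_POWER_STATE = {
        libvirt.VIR_DOMAIN_NOSTATE:     0,  # NOSTATE
        libvirt.VIR_DOMAIN_RUNNING:     1,  # RUNNING <- this instance
        libvirt.VIR_DOMAIN_BLOCKED:     1,  # running, blocked on a resource
        libvirt.VIR_DOMAIN_PAUSED:      3,  # PAUSED
        libvirt.VIR_DOMAIN_SHUTDOWN:    4,  # SHUTDOWN (in progress)
        libvirt.VIR_DOMAIN_SHUTOFF:     4,  # SHUTDOWN
        libvirt.VIR_DOMAIN_CRASHED:     6,  # CRASHED
        libvirt.VIR_DOMAIN_PMSUSPENDED: 7,  # SUSPENDED
    }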
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.717 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.717 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.718 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.719 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.720 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.720 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
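
The six "Found default for hw_*" lines above record the libvirt driver registering fallback values for image properties the image did not set. A small parsing sketch to pull them out of a journal excerpt (the regex and the helper name grep_hw_defaults are mine, not nova's):

    import re

    # Extract the per-instance hardware defaults recorded by
    # _register_undefined_instance_details from journal lines like the above.
    HW_DEFAULT_RE = re.compile(r'Found default for (hw_\w+) of (\S+)')

    def grep_hw_defaults(lines):
        return {m.group(1): m.group(2)
                for line in lines
                if (m := HW_DEFAULT_RE.search(line))}

    # e.g. {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio', ...}
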
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.837 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.5771196, 4c768bc2-a711-4179-b12f-604509e47856 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.838 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Paused (Lifecycle Event)
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:53 compute-0 nova_compute[244014]: 2026-02-25 12:17:53.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.015 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.021 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.5857973, 4c768bc2-a711-4179-b12f-604509e47856 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.021 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Resumed (Lifecycle Event)
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.062 244018 INFO nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 8.01 seconds to spawn the instance on the hypervisor.
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.063 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.149 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.154 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:17:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.250 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.254 244018 INFO nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 9.38 seconds to build instance.
Feb 25 12:17:54 compute-0 nova_compute[244014]: 2026-02-25 12:17:54.438 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.002 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.088 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021860.0872273, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.088 244018 INFO nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Stopped (Lifecycle Event)
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.117 244018 DEBUG nova.compute.manager [None req-3adc958f-69f7-4b06-a731-86ed952e4f11 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:17:55 compute-0 ceph-mon[76335]: pgmap v972: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 12:17:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.668 244018 DEBUG nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.669 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.670 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.670 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.671 244018 DEBUG nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:17:55 compute-0 nova_compute[244014]: 2026-02-25 12:17:55.671 244018 WARNING nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received unexpected event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with vm_state active and task_state None.
Feb 25 12:17:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:56.152 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:17:56 compute-0 nova_compute[244014]: 2026-02-25 12:17:56.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:56.153 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:17:56 compute-0 nova_compute[244014]: 2026-02-25 12:17:56.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:56 compute-0 nova_compute[244014]: 2026-02-25 12:17:56.814 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:56 compute-0 nova_compute[244014]: 2026-02-25 12:17:56.815 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:56 compute-0 nova_compute[244014]: 2026-02-25 12:17:56.920 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.049 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.050 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.059 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.059 244018 INFO nova.compute.claims [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:17:57 compute-0 ceph-mon[76335]: pgmap v973: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.234 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:17:57 compute-0 sudo[258498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:17:57 compute-0 sudo[258498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:17:57 compute-0 sudo[258498]: pam_unix(sudo:session): session closed for user root
Feb 25 12:17:57 compute-0 sudo[258523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:17:57 compute-0 sudo[258523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:17:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:17:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367595341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.795 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
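
The `ceph df` probe above (0.561s) can be replayed with the exact flags from the log. Nova shells out through oslo.concurrency's processutils (a real API); actually running this requires the client.openstack keyring referenced by the conf file, and the 'stats' key layout is current ceph JSON output assumed stable here:

    import json
    from oslo_concurrency import processutils

    # Replay of the pool-capacity probe logged above; stdout is the JSON the
    # RBD image backend parses for free/used space.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_bytes'])  # cluster capacity, bytes
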
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.803 244018 DEBUG nova.compute.provider_tree [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.822 244018 DEBUG nova.scheduler.client.report [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
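
The inventory payload above fixes the node's schedulable capacity; Placement computes it as (total - reserved) * allocation_ratio. A quick arithmetic check on the logged numbers:

    # Capacity check on the inventory logged above, using Placement's
    # (total - reserved) * allocation_ratio formula.
    inv = {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        print(rc, round((v['total'] - v['reserved']) * v['allocation_ratio'], 1))
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
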
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.868 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.870 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.923 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.923 244018 DEBUG nova.network.neutron [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.951 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:17:57 compute-0 nova_compute[244014]: 2026-02-25 12:17:57.977 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.125 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.127 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.127 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating image(s)
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.145 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.162 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.181 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.186 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.229 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
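
The base-image probe above runs qemu-img under oslo.concurrency's prlimit shim, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image cannot wedge the agent. A sketch replaying the logged command verbatim with subprocess (only meaningful on a host that has this base file):

    import json
    import subprocess

    # Replay of the resource-limited probe from the log;
    # oslo_concurrency.prlimit is the real wrapper module nova shells out to.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
           '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
           '--force-share', '--output=json']
    info = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    print(info['format'], info['virtual-size'])  # image format, size in bytes
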
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.230 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.231 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.231 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2367595341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:17:58 compute-0 sudo[258523]: pam_unix(sudo:session): session closed for user root
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.269 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.273 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.316 244018 DEBUG nova.network.neutron [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.316 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:17:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:17:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:17:58 compute-0 sudo[258671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:17:58 compute-0 sudo[258671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:17:58 compute-0 sudo[258671]: pam_unix(sudo:session): session closed for user root
Feb 25 12:17:58 compute-0 sudo[258699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:17:58 compute-0 sudo[258699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.545 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.626 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] resizing rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
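
The resize target above is just the flavor's root disk converted to bytes (m1.nano with root_gb=1, per the flavor dump a few lines below):

    # Arithmetic behind the resize target logged above:
    root_gb = 1                      # m1.nano root_gb, per the flavor dump below
    assert root_gb * 1024 ** 3 == 1073741824
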
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.746 244018 DEBUG nova.objects.instance [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'migration_context' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.763 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.764 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Ensure instance console log exists: /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.764 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.768 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.777 244018 WARNING nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.785 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.785 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.789 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.789 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.790 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.790 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.794 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
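
The topology lines above enumerate (sockets, cores, threads) combinations whose product equals the flavor's vCPU count, bounded by the 65536 limits in the log; with 1 vCPU the only candidate is 1:1:1. A simplified stand-in for that search (my own enumeration; nova's actual code in nova/virt/hardware.py differs):

    # Hypothetical re-implementation of the topology enumeration described in
    # the log lines above, for illustration only.
    def possible_topologies(vcpus, limit=65536):
        tops = []
        for sockets in range(1, min(vcpus, limit) + 1):
            for cores in range(1, min(vcpus // sockets, limit) + 1):
                if vcpus % (sockets * cores) == 0:
                    threads = vcpus // (sockets * cores)
                    if threads <= limit:
                        tops.append((sockets, cores, threads))
        return tops

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
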
Feb 25 12:17:58 compute-0 nova_compute[244014]: 2026-02-25 12:17:58.798 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.808904001 +0000 UTC m=+0.066621181 container create 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:17:58 compute-0 systemd[1]: Started libpod-conmon-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope.
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.777626063 +0000 UTC m=+0.035343253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:17:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.903106505 +0000 UTC m=+0.160823675 container init 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.911595563 +0000 UTC m=+0.169312753 container start 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:17:58 compute-0 sleepy_babbage[258825]: 167 167
Feb 25 12:17:58 compute-0 systemd[1]: libpod-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope: Deactivated successfully.
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.919348681 +0000 UTC m=+0.177065871 container attach 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.919711321 +0000 UTC m=+0.177428481 container died 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1150796ec1108e939246a3387ab9a4e098f60b631d9a3209ec2c88f6be0fdd64-merged.mount: Deactivated successfully.
Feb 25 12:17:58 compute-0 podman[258805]: 2026-02-25 12:17:58.989210212 +0000 UTC m=+0.246927402 container remove 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:17:59 compute-0 systemd[1]: libpod-conmon-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope: Deactivated successfully.
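
The sleepy_babbage entries above trace a complete short-lived cephadm helper container: image pull, then create, init, start, attach, died, remove, with the conmon scope torn down last. A parsing sketch that reconstructs such lifecycles from journal text (regex and the helper name lifecycles are mine; it captures the "container <event>" lines, not the image pull):

    import re
    from collections import defaultdict

    # Group podman "container <event> <id>" journal lines by container ID to
    # recover each helper container's lifecycle.
    EVENT_RE = re.compile(r'container (\w+) ([0-9a-f]{64})')

    def lifecycles(lines):
        seq = defaultdict(list)
        for line in lines:
            if (m := EVENT_RE.search(line)):
                seq[m.group(2)].append(m.group(1))
        return dict(seq)

    # e.g. {'3293bf7d...': ['create', 'init', 'start', 'attach', 'died', 'remove']}
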
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.145460707 +0000 UTC m=+0.053004348 container create 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:17:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:17:59.156 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:17:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:17:59 compute-0 systemd[1]: Started libpod-conmon-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope.
Feb 25 12:17:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
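
The xfs warnings above are the kernel flagging the y2038 limit on 32-bit inode timestamps: 0x7fffffff seconds after the epoch. A one-line check of that date:

    from datetime import datetime, timezone

    # 0x7fffffff is the signed 32-bit time_t ceiling the kernel warns about.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # 2038-01-19T03:14:07+00:00
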
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.125544518 +0000 UTC m=+0.033088169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.235497295 +0000 UTC m=+0.143040956 container init 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.24210553 +0000 UTC m=+0.149649171 container start 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:17:59 compute-0 ceph-mon[76335]: pgmap v974: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:17:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.255589799 +0000 UTC m=+0.163133440 container attach 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:17:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:17:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561680764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:17:59 compute-0 nova_compute[244014]: 2026-02-25 12:17:59.441 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:17:59 compute-0 nova_compute[244014]: 2026-02-25 12:17:59.458 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:17:59 compute-0 nova_compute[244014]: 2026-02-25 12:17:59.462 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:17:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 966 KiB/s wr, 96 op/s
Feb 25 12:17:59 compute-0 gifted_chatterjee[258881]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:17:59 compute-0 gifted_chatterjee[258881]: --> All data devices are unavailable
Feb 25 12:17:59 compute-0 systemd[1]: libpod-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope: Deactivated successfully.
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.695527477 +0000 UTC m=+0.603071188 container died 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4-merged.mount: Deactivated successfully.
Feb 25 12:17:59 compute-0 podman[258866]: 2026-02-25 12:17:59.769937936 +0000 UTC m=+0.677481607 container remove 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:17:59 compute-0 systemd[1]: libpod-conmon-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope: Deactivated successfully.
Feb 25 12:17:59 compute-0 sudo[258699]: pam_unix(sudo:session): session closed for user root
Feb 25 12:17:59 compute-0 sudo[258953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:17:59 compute-0 sudo[258953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:17:59 compute-0 sudo[258953]: pam_unix(sudo:session): session closed for user root
Feb 25 12:17:59 compute-0 nova_compute[244014]: 2026-02-25 12:17:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:59 compute-0 nova_compute[244014]: 2026-02-25 12:17:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:17:59 compute-0 sudo[258978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:17:59 compute-0 sudo[258978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:18:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868051482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.021 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.023 244018 DEBUG nova.objects.instance [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'pci_devices' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.038 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <uuid>f53717b5-3196-44b5-bc3a-aa8f53ce397d</uuid>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <name>instance-0000000e</name>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-954327769</nova:name>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:17:58</nova:creationTime>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:user uuid="d8d80b1211534b27a4143b5919f28d50">tempest-ServerDiagnosticsNegativeTest-2137851213-project-member</nova:user>
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <nova:project uuid="2a1df86b8c4e48d7aaa799f3804282bc">tempest-ServerDiagnosticsNegativeTest-2137851213</nova:project>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="serial">f53717b5-3196-44b5-bc3a-aa8f53ce397d</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="uuid">f53717b5-3196-44b5-bc3a-aa8f53ce397d</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk">
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config">
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/console.log" append="off"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.110 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.111 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.111 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Using config drive
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.140 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.227022605 +0000 UTC m=+0.061581629 container create 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:18:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3561680764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/868051482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:00 compute-0 systemd[1]: Started libpod-conmon-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope.
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.200263954 +0000 UTC m=+0.034823028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:18:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.329581824 +0000 UTC m=+0.164140838 container init 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.336990092 +0000 UTC m=+0.171549106 container start 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:18:00 compute-0 exciting_herschel[259052]: 167 167
Feb 25 12:18:00 compute-0 systemd[1]: libpod-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope: Deactivated successfully.
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.345065078 +0000 UTC m=+0.179624082 container attach 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.345850871 +0000 UTC m=+0.180409895 container died 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-807689e24177a5b89e496204e5e74597949f6a8f76e55e0cf3fa3cff4eefccb2-merged.mount: Deactivated successfully.
Feb 25 12:18:00 compute-0 podman[259037]: 2026-02-25 12:18:00.400917246 +0000 UTC m=+0.235476240 container remove 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.415 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating config drive at /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.420 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6vxtgnr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:00 compute-0 systemd[1]: libpod-conmon-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope: Deactivated successfully.
Feb 25 12:18:00 compute-0 ovn_controller[147040]: 2026-02-25T12:18:00Z|00049|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 12:18:00 compute-0 NetworkManager[49836]: <info>  [1772021880.4491] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:00 compute-0 NetworkManager[49836]: <info>  [1772021880.4501] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:00 compute-0 ovn_controller[147040]: 2026-02-25T12:18:00Z|00050|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.541 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6vxtgnr" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:00 compute-0 podman[259079]: 2026-02-25 12:18:00.566065752 +0000 UTC m=+0.061137887 container create 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.574 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.580 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:00 compute-0 systemd[1]: Started libpod-conmon-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope.
Feb 25 12:18:00 compute-0 podman[259079]: 2026-02-25 12:18:00.533722114 +0000 UTC m=+0.028794149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:18:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:00 compute-0 podman[259079]: 2026-02-25 12:18:00.656184961 +0000 UTC m=+0.151257026 container init 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:18:00 compute-0 podman[259079]: 2026-02-25 12:18:00.662885169 +0000 UTC m=+0.157957184 container start 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:18:00 compute-0 podman[259079]: 2026-02-25 12:18:00.670808232 +0000 UTC m=+0.165880287 container attach 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.728 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.729 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deleting local config drive /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config because it was imported into RBD.
Feb 25 12:18:00 compute-0 systemd-machined[210048]: New machine qemu-16-instance-0000000e.
Feb 25 12:18:00 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:00 compute-0 nova_compute[244014]: 2026-02-25 12:18:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]: {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     "0": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "devices": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "/dev/loop3"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             ],
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_name": "ceph_lv0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_size": "21470642176",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "name": "ceph_lv0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "tags": {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_name": "ceph",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.crush_device_class": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.encrypted": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.objectstore": "bluestore",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_id": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.vdo": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.with_tpm": "0"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             },
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "vg_name": "ceph_vg0"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         }
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     ],
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     "1": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "devices": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "/dev/loop4"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             ],
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_name": "ceph_lv1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_size": "21470642176",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "name": "ceph_lv1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "tags": {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_name": "ceph",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.crush_device_class": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.encrypted": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.objectstore": "bluestore",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_id": "1",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.vdo": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.with_tpm": "0"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             },
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "vg_name": "ceph_vg1"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         }
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     ],
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     "2": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "devices": [
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "/dev/loop5"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             ],
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_name": "ceph_lv2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_size": "21470642176",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "name": "ceph_lv2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "tags": {
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.cluster_name": "ceph",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.crush_device_class": "",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.encrypted": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.objectstore": "bluestore",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osd_id": "2",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.vdo": "0",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:                 "ceph.with_tpm": "0"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             },
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "type": "block",
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:             "vg_name": "ceph_vg2"
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:         }
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]:     ]
Feb 25 12:18:00 compute-0 sleepy_clarke[259114]: }
Feb 25 12:18:01 compute-0 systemd[1]: libpod-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope: Deactivated successfully.
Feb 25 12:18:01 compute-0 podman[259079]: 2026-02-25 12:18:01.007009847 +0000 UTC m=+0.502081892 container died 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.030 244018 DEBUG nova.compute.manager [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.031 244018 DEBUG nova.compute.manager [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.032 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.032 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.033 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69-merged.mount: Deactivated successfully.
Feb 25 12:18:01 compute-0 podman[259079]: 2026-02-25 12:18:01.073942076 +0000 UTC m=+0.569014101 container remove 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:18:01 compute-0 systemd[1]: libpod-conmon-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope: Deactivated successfully.
Feb 25 12:18:01 compute-0 sudo[258978]: pam_unix(sudo:session): session closed for user root
Feb 25 12:18:01 compute-0 sudo[259203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:18:01 compute-0 sudo[259203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:18:01 compute-0 sudo[259203]: pam_unix(sudo:session): session closed for user root
Feb 25 12:18:01 compute-0 sudo[259234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:18:01 compute-0 sudo[259234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.256 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021881.2562892, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.257 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Resumed (Lifecycle Event)
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.259 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.259 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.262 244018 INFO nova.virt.libvirt.driver [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance spawned successfully.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.263 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:01 compute-0 ceph-mon[76335]: pgmap v975: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 966 KiB/s wr, 96 op/s
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.285 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.292 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.296 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.296 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.297 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021881.257664, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Started (Lifecycle Event)
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.365 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.376 244018 INFO nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 3.25 seconds to spawn the instance on the hypervisor.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.376 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.399 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.446 244018 INFO nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 4.46 seconds to build instance.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.464 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.507467844 +0000 UTC m=+0.039995893 container create 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:18:01 compute-0 systemd[1]: Started libpod-conmon-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope.
Feb 25 12:18:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 215 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Feb 25 12:18:01 compute-0 anacron[164977]: Job `cron.daily' started
Feb 25 12:18:01 compute-0 anacron[164977]: Job `cron.daily' terminated
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.572215082 +0000 UTC m=+0.104743161 container init 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.576709698 +0000 UTC m=+0.109237757 container start 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:01 compute-0 silly_proskuriakova[259287]: 167 167
Feb 25 12:18:01 compute-0 systemd[1]: libpod-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope: Deactivated successfully.
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.484601693 +0000 UTC m=+0.017129732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.586514303 +0000 UTC m=+0.119042362 container attach 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.586913134 +0000 UTC m=+0.119441163 container died 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d70e07e02d6ceaaea1cb9292422951d70b3a900db669e6e41bb16c0e5beac004-merged.mount: Deactivated successfully.
Feb 25 12:18:01 compute-0 podman[259271]: 2026-02-25 12:18:01.648562745 +0000 UTC m=+0.181090764 container remove 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:18:01 compute-0 systemd[1]: libpod-conmon-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope: Deactivated successfully.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:01 compute-0 podman[259312]: 2026-02-25 12:18:01.878375845 +0000 UTC m=+0.070227952 container create 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:18:01 compute-0 systemd[1]: Started libpod-conmon-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope.
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:18:01 compute-0 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:01 compute-0 podman[259312]: 2026-02-25 12:18:01.84183769 +0000 UTC m=+0.033689817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:18:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:01 compute-0 podman[259312]: 2026-02-25 12:18:01.977257231 +0000 UTC m=+0.169109328 container init 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:18:01 compute-0 podman[259312]: 2026-02-25 12:18:01.983511706 +0000 UTC m=+0.175363803 container start 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:18:01 compute-0 podman[259312]: 2026-02-25 12:18:01.991303965 +0000 UTC m=+0.183156052 container attach 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:18:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701414873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.502 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.563 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.566 244018 INFO nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Terminating instance
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquired lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.590 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:18:02 compute-0 lvm[259426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:18:02 compute-0 lvm[259426]: VG ceph_vg0 finished
Feb 25 12:18:02 compute-0 lvm[259429]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:18:02 compute-0 lvm[259429]: VG ceph_vg1 finished
Feb 25 12:18:02 compute-0 lvm[259430]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:18:02 compute-0 lvm[259430]: VG ceph_vg2 finished
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.692 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.693 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:02 compute-0 determined_engelbart[259328]: {}
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.750 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:02 compute-0 systemd[1]: libpod-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Deactivated successfully.
Feb 25 12:18:02 compute-0 podman[259312]: 2026-02-25 12:18:02.755256848 +0000 UTC m=+0.947108965 container died 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:18:02 compute-0 systemd[1]: libpod-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Consumed 1.000s CPU time.
Feb 25 12:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7-merged.mount: Deactivated successfully.
Feb 25 12:18:02 compute-0 podman[259312]: 2026-02-25 12:18:02.814954754 +0000 UTC m=+1.006806821 container remove 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 25 12:18:02 compute-0 systemd[1]: libpod-conmon-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Deactivated successfully.
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.841 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.843 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4320MB free_disk=59.959298719652GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.843 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:02 compute-0 sudo[259234]: pam_unix(sudo:session): session closed for user root
Feb 25 12:18:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:18:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:18:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:18:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:18:02 compute-0 sudo[259447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.947 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4c768bc2-a711-4179-b12f-604509e47856 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.948 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance f53717b5-3196-44b5-bc3a-aa8f53ce397d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.948 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:18:02 compute-0 sudo[259447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.949 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:18:02 compute-0 sudo[259447]: pam_unix(sudo:session): session closed for user root
Feb 25 12:18:02 compute-0 nova_compute[244014]: 2026-02-25 12:18:02.959 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.021 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.290 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.327 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Releasing lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.328 244018 DEBUG nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:03 compute-0 ceph-mon[76335]: pgmap v976: 305 pgs: 305 active+clean; 215 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Feb 25 12:18:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2701414873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:18:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:18:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948967032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:03 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 25 12:18:03 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 2.410s CPU time.
Feb 25 12:18:03 compute-0 systemd-machined[210048]: Machine qemu-16-instance-0000000e terminated.
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.547 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.555 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.582 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.627 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.752 244018 INFO nova.virt.libvirt.driver [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance destroyed successfully.
Feb 25 12:18:03 compute-0 nova_compute[244014]: 2026-02-25 12:18:03.752 244018 DEBUG nova.objects.instance [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'resources' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:04 compute-0 ovn_controller[147040]: 2026-02-25T12:18:04Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 12:18:04 compute-0 ovn_controller[147040]: 2026-02-25T12:18:04Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.142 244018 INFO nova.virt.libvirt.driver [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deleting instance files /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d_del
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.143 244018 INFO nova.virt.libvirt.driver [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deletion of /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d_del complete
Feb 25 12:18:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.186 244018 INFO nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 0.86 seconds to destroy the instance on the hypervisor.
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG oslo.service.loopingcall [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.286 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.299 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.322 244018 INFO nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 0.13 seconds to deallocate network for instance.
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.367 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.367 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3948967032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:04 compute-0 nova_compute[244014]: 2026-02-25 12:18:04.432 244018 DEBUG oslo_concurrency.processutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957699284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.168 244018 DEBUG oslo_concurrency.processutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.175 244018 DEBUG nova.compute.provider_tree [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.230 244018 DEBUG nova.scheduler.client.report [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.374 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.403 244018 INFO nova.scheduler.client.report [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Deleted allocations for instance f53717b5-3196-44b5-bc3a-aa8f53ce397d
Feb 25 12:18:05 compute-0 ceph-mon[76335]: pgmap v977: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:18:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/957699284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:05 compute-0 nova_compute[244014]: 2026-02-25 12:18:05.484 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:18:06 compute-0 nova_compute[244014]: 2026-02-25 12:18:06.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:07 compute-0 ceph-mon[76335]: pgmap v978: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:18:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 266 op/s
Feb 25 12:18:08 compute-0 nova_compute[244014]: 2026-02-25 12:18:08.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:08 compute-0 nova_compute[244014]: 2026-02-25 12:18:08.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:09 compute-0 ceph-mon[76335]: pgmap v979: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 266 op/s
Feb 25 12:18:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.704 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.704 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG nova.compute.manager [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG nova.compute.manager [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.728 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.728 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:18:10 compute-0 nova_compute[244014]: 2026-02-25 12:18:10.946 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.033 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.034 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.043 244018 INFO nova.compute.claims [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.250 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:11 compute-0 ceph-mon[76335]: pgmap v980: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 12:18:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 12:18:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867391249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.794 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
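
The disk-usage refresh above shells out to the ceph CLI (the monitor audit lines show the dispatch) rather than querying librados in-process. A stand-alone sketch of the equivalent call through oslo.concurrency's processutils, using the client id and conf path from the log; the JSON keys are standard "ceph df" output but worth confirming against your Ceph release:

    import json
    from oslo_concurrency import processutils

    # Same command the resource tracker ran; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
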
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.800 244018 DEBUG nova.compute.provider_tree [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:11 compute-0 nova_compute[244014]: 2026-02-25 12:18:11.826 244018 DEBUG nova.scheduler.client.report [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.091 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.093 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.150 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.151 244018 DEBUG nova.network.neutron [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.172 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.191 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.307 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.309 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.310 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating image(s)
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.340 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.370 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.400 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.405 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.477 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
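
The base image is probed with qemu-img info under oslo_concurrency.prlimit, which re-execs the command with an address-space cap (--as=1073741824, i.e. 1 GiB) and a CPU-time cap (--cpu=30 seconds) so a malformed image cannot wedge the compute service. A hedged stand-alone equivalent, with the argv reconstructed from the log line above:

    import json
    import subprocess

    cmd = [
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])  # e.g. 'qcow2', size in bytes
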
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.478 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.480 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.480 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3867391249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.510 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.516 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.628 244018 DEBUG nova.network.neutron [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.629 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.821 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:12 compute-0 nova_compute[244014]: 2026-02-25 12:18:12.902 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] resizing rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
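
After the CLI rbd import succeeds, the resize to the flavor's 1 GiB root disk happens in-process: nova.storage.rbd_utils talks to the cluster through the python rados/rbd bindings. A minimal sketch of that resize under the same pool/client assumptions as the log (error handling elided):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    image = rbd.Image(ioctx, '0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk')
    image.resize(1 * 1024 ** 3)   # 1073741824 bytes, matching the log line above
    image.close()
    ioctx.close()
    cluster.shutdown()
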
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.003 244018 DEBUG nova.objects.instance [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'migration_context' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.023 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.024 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Ensure instance console log exists: /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.024 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.025 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.025 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.027 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.031 244018 WARNING nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.035 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.035 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.038 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.039 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.039 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.040 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.041 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.041 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.043 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.043 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.044 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.044 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.045 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
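
With flavor and image limits all 0:0:0 (unconstrained) and the 65536 maxima, the driver enumerates every sockets/cores/threads factorization of the vCPU count; for 1 vCPU that collapses to the single 1:1:1 topology chosen above. A toy sketch of the enumeration under those assumptions, not Nova's actual implementation:

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus,
        # within the caps -- the search whose result is logged above.
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
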
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.049 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:13 compute-0 ceph-mon[76335]: pgmap v981: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 12:18:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 179 op/s
Feb 25 12:18:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162791070' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.582 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.609 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.613 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.776 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.778 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.799 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:13 compute-0 nova_compute[244014]: 2026-02-25 12:18:13.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952549009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.236 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.238 244018 DEBUG nova.objects.instance [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.266 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <uuid>0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</uuid>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <name>instance-0000000f</name>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerExternalEventsTest-server-415826439</nova:name>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:18:13</nova:creationTime>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:user uuid="0ef1a239db3f426694c02473a6b39653">tempest-ServerExternalEventsTest-569547795-project-member</nova:user>
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <nova:project uuid="e7d55de27cf3451a8ed2d95d03721e1b">tempest-ServerExternalEventsTest-569547795</nova:project>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="serial">0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="uuid">0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk">
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config">
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/console.log" append="off"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:14 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:14 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:14 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:14 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:14 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
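
The domain XML above is what the libvirt driver submits; the guest is then defined and launched, which systemd-machined registers as qemu-17-instance-0000000f a moment later. A hedged sketch of the equivalent raw libvirt-python calls (Nova goes through its own Host/Guest wrappers rather than calling this directly):

    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = open('instance-0000000f.xml').read()  # the <domain> document logged above
    dom = conn.defineXML(xml)        # persist the domain definition
    dom.createWithFlags(0)           # boot it; emits the Resumed/Started lifecycle events below
    print(dom.name(), dom.ID())
    conn.close()
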
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.349 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.350 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.350 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Using config drive
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.379 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4162791070' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3952549009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.774 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating config drive at /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.780 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3666zc2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.909 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3666zc2g" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
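
The config drive is an ordinary ISO 9660 image labelled config-2, built with Joliet (-J) and Rock Ridge (-r) extensions from a temp directory holding the metadata tree, then imported into RBD beside the root disk and removed locally. A stand-alone replay of the logged command (argv copied from the log; the staging-directory layout, e.g. openstack/latest/meta_data.json, is stated here as an assumption):

    import subprocess

    subprocess.check_call([
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp3666zc2g',  # staging dir with the metadata files
    ])
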
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.931 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:14 compute-0 nova_compute[244014]: 2026-02-25 12:18:14.934 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.060 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.061 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deleting local config drive /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config because it was imported into RBD.
Feb 25 12:18:15 compute-0 systemd-machined[210048]: New machine qemu-17-instance-0000000f.
Feb 25 12:18:15 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.454 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021895.4540918, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.455 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Resumed (Lifecycle Event)
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.460 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.461 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.466 244018 INFO nova.virt.libvirt.driver [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance spawned successfully.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.466 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.486 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.494 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.498 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.499 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.500 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.500 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.501 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.502 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.531 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021895.4601972, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.531 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Started (Lifecycle Event)
Feb 25 12:18:15 compute-0 ceph-mon[76335]: pgmap v982: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 179 op/s
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.557 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.567 244018 INFO nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 3.26 seconds to spawn the instance on the hypervisor.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.567 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.640 244018 INFO nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 4.63 seconds to build instance.
Feb 25 12:18:15 compute-0 nova_compute[244014]: 2026-02-25 12:18:15.657 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.562 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.563 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:16 compute-0 ceph-mon[76335]: pgmap v983: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.564 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.565 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.565 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.567 244018 INFO nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Terminating instance
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.569 244018 DEBUG nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:18:16 compute-0 kernel: tape4e15a18-10 (unregistering): left promiscuous mode
Feb 25 12:18:16 compute-0 NetworkManager[49836]: <info>  [1772021896.6224] device (tape4e15a18-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:18:16 compute-0 ovn_controller[147040]: 2026-02-25T12:18:16Z|00051|binding|INFO|Releasing lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c from this chassis (sb_readonly=0)
Feb 25 12:18:16 compute-0 ovn_controller[147040]: 2026-02-25T12:18:16Z|00052|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c down in Southbound
Feb 25 12:18:16 compute-0 ovn_controller[147040]: 2026-02-25T12:18:16Z|00053|binding|INFO|Removing iface tape4e15a18-10 ovn-installed in OVS
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.646 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a8:6b 10.100.0.3'], port_security=['fa:16:3e:26:a8:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c768bc2-a711-4179-b12f-604509e47856', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7e518f2e6c84550b07b36ea68922707', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd3f2670-feae-47a4-bde0-e33bcae0ae19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e09bb1cc-ff05-4116-89c1-7f1944c54b5c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.649 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c in datapath 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e unbound from our chassis
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.652 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[de15619b-ab64-4748-ae96-c15f3c867771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.655 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e namespace which is not needed anymore
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 25 12:18:16 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 11.741s CPU time.
Feb 25 12:18:16 compute-0 systemd-machined[210048]: Machine qemu-15-instance-0000000d terminated.
Feb 25 12:18:16 compute-0 podman[259904]: 2026-02-25 12:18:16.721485564 +0000 UTC m=+0.077100505 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:18:16 compute-0 podman[259910]: 2026-02-25 12:18:16.741946438 +0000 UTC m=+0.097354634 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 12:18:16 compute-0 NetworkManager[49836]: <info>  [1772021896.7855] manager: (tape4e15a18-10): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.802 244018 INFO nova.virt.libvirt.driver [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance destroyed successfully.
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.802 244018 DEBUG nova.objects.instance [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'resources' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:16 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : haproxy version is 2.8.14-c23fe91
Feb 25 12:18:16 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : path to executable is /usr/sbin/haproxy
Feb 25 12:18:16 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [WARNING]  (258467) : Exiting Master process...
Feb 25 12:18:16 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [ALERT]    (258467) : Current worker (258469) exited with code 143 (Terminated)
Feb 25 12:18:16 compute-0 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [WARNING]  (258467) : All workers exited. Exiting... (0)
Feb 25 12:18:16 compute-0 systemd[1]: libpod-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope: Deactivated successfully.
Feb 25 12:18:16 compute-0 podman[259967]: 2026-02-25 12:18:16.813827665 +0000 UTC m=+0.054008416 container died 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.816 244018 DEBUG nova.virt.libvirt.vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:17:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:17:54Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.816 244018 DEBUG nova.network.os_vif_util [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.817 244018 DEBUG nova.network.os_vif_util [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.817 244018 DEBUG os_vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4e15a18-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.824 244018 INFO os_vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10')
Feb 25 12:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c0265e3a393cacb87e5ee458bb9d5c78aa9ff00fd616b394131707d01c2e19-merged.mount: Deactivated successfully.
Feb 25 12:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148-userdata-shm.mount: Deactivated successfully.
Feb 25 12:18:16 compute-0 podman[259967]: 2026-02-25 12:18:16.848807167 +0000 UTC m=+0.088987918 container cleanup 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:18:16 compute-0 systemd[1]: libpod-conmon-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope: Deactivated successfully.
Feb 25 12:18:16 compute-0 podman[260019]: 2026-02-25 12:18:16.914266465 +0000 UTC m=+0.045939811 container remove 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7360b6-60cf-4977-aa06-51c0328ee530]: (4, ('Wed Feb 25 12:18:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e (4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148)\n4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148\nWed Feb 25 12:18:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e (4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148)\n4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a67206f-c909-4dbf-9fda-23ae21bb9614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.921 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6027b5-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:16 compute-0 kernel: tap1b6027b5-20: left promiscuous mode
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 nova_compute[244014]: 2026-02-25 12:18:16.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.933 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3d3c91-672f-44f4-8f13-229d42ea548e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd567ae9-18c0-4543-ad10-e6e0d2c00f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.948 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecd893a-ab8f-45a5-8dce-f52c34215d0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.962 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbac4b4-3516-4203-8b32-6bc295c47506]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384238, 'reachable_time': 21166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260036, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.964 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:18:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.964 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b32ad794-a420-4c33-80fb-397d6b88c3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d1b6027b5\x2d2ee6\x2d49fb\x2db9e5\x2dc79e5230d07e.mount: Deactivated successfully.
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.097 244018 INFO nova.virt.libvirt.driver [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deleting instance files /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856_del
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.106 244018 INFO nova.virt.libvirt.driver [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deletion of /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856_del complete
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.153 244018 INFO nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.154 244018 DEBUG oslo.service.loopingcall [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.156 244018 DEBUG nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.156 244018 DEBUG nova.network.neutron [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.161 244018 DEBUG nova.compute.manager [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.161 244018 DEBUG nova.compute.manager [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Acquiring lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Acquired lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.311 244018 INFO nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Terminating instance
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.312 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 12:18:17 compute-0 nova_compute[244014]: 2026-02-25 12:18:17.872 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:18 compute-0 ceph-mon[76335]: pgmap v984: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.750 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021883.7498527, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.752 244018 INFO nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Stopped (Lifecycle Event)
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.806 244018 DEBUG nova.compute.manager [None req-515c8926-dfe1-4bfc-85df-6b5acff7fb35 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.872 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.936 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Releasing lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.937 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquired lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:18 compute-0 nova_compute[244014]: 2026-02-25 12:18:18.938 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.159 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.541 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.542 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.543 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.543 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.544 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.544 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:18:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.886 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.906 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Releasing lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.907 244018 DEBUG nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:18:19 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 25 12:18:19 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 4.950s CPU time.
Feb 25 12:18:19 compute-0 systemd-machined[210048]: Machine qemu-17-instance-0000000f terminated.
Feb 25 12:18:19 compute-0 nova_compute[244014]: 2026-02-25 12:18:19.981 244018 DEBUG nova.network.neutron [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.003 244018 INFO nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 2.85 seconds to deallocate network for instance.
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.074 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.075 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.128 244018 INFO nova.virt.libvirt.driver [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance destroyed successfully.
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.132 244018 DEBUG nova.objects.instance [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'resources' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.149 244018 DEBUG oslo_concurrency.processutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.462 244018 INFO nova.virt.libvirt.driver [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deleting instance files /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_del
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.463 244018 INFO nova.virt.libvirt.driver [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deletion of /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_del complete
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.532 244018 INFO nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.532 244018 DEBUG oslo.service.loopingcall [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.533 244018 DEBUG nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.534 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:18:20 compute-0 ceph-mon[76335]: pgmap v985: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.732 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609934762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.755 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.757 244018 DEBUG oslo_concurrency.processutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.764 244018 DEBUG nova.compute.provider_tree [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.772 244018 INFO nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 0.24 seconds to deallocate network for instance.
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.782 244018 DEBUG nova.scheduler.client.report [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.839 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.912 244018 DEBUG oslo_concurrency.processutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:20 compute-0 nova_compute[244014]: 2026-02-25 12:18:20.978 244018 INFO nova.scheduler.client.report [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Deleted allocations for instance 4c768bc2-a711-4179-b12f-604509e47856
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.113 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031068564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.443 244018 DEBUG oslo_concurrency.processutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.450 244018 DEBUG nova.compute.provider_tree [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.547 244018 DEBUG nova.scheduler.client.report [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 202 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.614 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.648 244018 INFO nova.scheduler.client.report [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Deleted allocations for instance 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795
Feb 25 12:18:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1609934762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2031068564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.724 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-deleted-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.725 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.725 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.726 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.727 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.727 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.728 244018 WARNING nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received unexpected event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with vm_state deleted and task_state None.
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.759 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:21 compute-0 nova_compute[244014]: 2026-02-25 12:18:21.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:22 compute-0 ceph-mon[76335]: pgmap v986: 305 pgs: 305 active+clean; 202 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:18:23 compute-0 nova_compute[244014]: 2026-02-25 12:18:23.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 12:18:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:24 compute-0 nova_compute[244014]: 2026-02-25 12:18:24.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:24 compute-0 nova_compute[244014]: 2026-02-25 12:18:24.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:24 compute-0 ceph-mon[76335]: pgmap v987: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 12:18:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 12:18:26 compute-0 ceph-mon[76335]: pgmap v988: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 12:18:26 compute-0 nova_compute[244014]: 2026-02-25 12:18:26.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 12:18:28 compute-0 nova_compute[244014]: 2026-02-25 12:18:28.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:28 compute-0 ceph-mon[76335]: pgmap v989: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 12:18:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:18:30 compute-0 ceph-mon[76335]: pgmap v990: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:18:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:18:30
Feb 25 12:18:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:18:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:18:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta']
Feb 25 12:18:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:18:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:18:31 compute-0 nova_compute[244014]: 2026-02-25 12:18:31.799 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021896.7975128, 4c768bc2-a711-4179-b12f-604509e47856 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:31 compute-0 nova_compute[244014]: 2026-02-25 12:18:31.799 244018 INFO nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Stopped (Lifecycle Event)
Feb 25 12:18:31 compute-0 nova_compute[244014]: 2026-02-25 12:18:31.826 244018 DEBUG nova.compute.manager [None req-b408afb7-f80f-4f47-a1a8-7691ab3a541e - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:31 compute-0 nova_compute[244014]: 2026-02-25 12:18:31.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:32 compute-0 ceph-mon[76335]: pgmap v991: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.044 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.045 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.067 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.168 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.169 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.179 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.179 244018 INFO nova.compute.claims [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.327 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Feb 25 12:18:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502609560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.943 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.949 244018 DEBUG nova.compute.provider_tree [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:33 compute-0 nova_compute[244014]: 2026-02-25 12:18:33.980 244018 DEBUG nova.scheduler.client.report [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.008 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.010 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.062 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.062 244018 DEBUG nova.network.neutron [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.081 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.106 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:18:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.187 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.189 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating image(s)
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.212 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.244 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.270 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.273 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.333 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.334 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.334 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.335 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.357 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.360 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d974e887-fd2f-479e-951e-fad497dd7b0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.550 244018 DEBUG nova.network.neutron [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.551 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.660 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d974e887-fd2f-479e-951e-fad497dd7b0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.742 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] resizing rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:18:34 compute-0 ceph-mon[76335]: pgmap v992: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Feb 25 12:18:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3502609560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.854 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.855 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.864 244018 DEBUG nova.objects.instance [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'migration_context' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.878 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.878 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Ensure instance console log exists: /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.879 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.880 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.880 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.883 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.888 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.893 244018 WARNING nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.901 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.902 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.906 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.907 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.908 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.908 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.909 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.909 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.910 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.910 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.911 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.911 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.913 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:18:34 compute-0 nova_compute[244014]: 2026-02-25 12:18:34.917 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.005 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.006 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.015 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.016 244018 INFO nova.compute.claims [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.126 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021900.1253996, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.127 244018 INFO nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Stopped (Lifecycle Event)
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.145 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.169 244018 DEBUG nova.compute.manager [None req-8e766892-b5a3-49bf-ae34-b29f068632e9 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310083825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.460 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.491 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.496 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:18:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774732830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.716 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.723 244018 DEBUG nova.compute.provider_tree [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.752 244018 DEBUG nova.scheduler.client.report [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2310083825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/774732830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.780 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.781 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.838 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.839 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.860 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.882 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:18:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998105227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.961 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.964 244018 DEBUG nova.objects.instance [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.985 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <uuid>d974e887-fd2f-479e-951e-fad497dd7b0f</uuid>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <name>instance-00000010</name>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:name>tempest-TenantUsagesTestJSON-server-522638906</nova:name>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:18:34</nova:creationTime>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:user uuid="1768606fbf83414784f34904db6329b8">tempest-TenantUsagesTestJSON-721733181-project-member</nova:user>
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <nova:project uuid="d1b2f7969c344b348a5d9bd271c293f9">tempest-TenantUsagesTestJSON-721733181</nova:project>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="serial">d974e887-fd2f-479e-951e-fad497dd7b0f</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="uuid">d974e887-fd2f-479e-951e-fad497dd7b0f</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk">
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config">
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/console.log" append="off"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:35 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:35 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:35 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:35 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:35 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
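The domain XML above defines two RBD-backed devices: the root disk (vms/..._disk on virtio as vda) and the config drive (vms/..._disk.config as a SATA cdrom), both authenticating as the "openstack" Ceph user through a libvirt secret. A minimal sketch of pulling those disk sources out of such XML with the standard library; the snippet embeds a trimmed copy of the first <disk> element logged above:

    import xml.etree.ElementTree as ET

    xml = """<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk">
          <host name="192.168.122.100" port="6789"/>
        </source>
        <target dev="vda" bus="virtio"/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(xml).iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), "->", src.get("protocol"), src.get("name"))
    # vda -> rbd vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk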
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.992 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.994 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:18:35 compute-0 nova_compute[244014]: 2026-02-25 12:18:35.994 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.020 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.044 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.064 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.067 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.081 244018 DEBUG nova.policy [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.112 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.131 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.133 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.150 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.151 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.151 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Using config drive
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.170 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.368 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.406 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating config drive at /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.412 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsimli3_d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.470 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
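The lines above show the Rbd image backend's create path: inspect the cached base image with qemu-img info (wrapped in oslo_concurrency.prlimit to cap address space and CPU time), import the base file into the "vms" pool as a format-2 image, then grow it to the flavor's 1 GiB root disk (1073741824 bytes). Nova performs the resize in-process through the RBD Python bindings (see the rbd_utils.py path above); a rough CLI-equivalent sketch of the same two steps, using the names from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "52f927ad-a417-489f-9f92-87bc3433649d_disk"
    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base image as an RBD format-2 image in the vms pool.
    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *ceph_args], check=True)
    # Grow it to the flavor's root disk size; 1G == 1073741824 bytes.
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1G", *ceph_args], check=True)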
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.559 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsimli3_d" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.583 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.588 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.611 244018 DEBUG nova.objects.instance [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.631 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.632 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.632 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.633 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.633 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.724 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.724 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deleting local config drive /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config because it was imported into RBD.
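The config-drive flow just logged builds a local ISO9660 image labeled config-2 with mkisofs, imports it into RBD alongside the root disk, then deletes the local file. The publisher string appears unquoted in the logged command line only because oslo's processutils prints argv joined with spaces; it is a single argument. A hedged sketch of the same sequence, where "/tmp/cd_src" stands in for the temporary metadata directory Nova populates:

    import os
    import subprocess

    iso = "/var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config"
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l",
                    "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
                    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/cd_src"], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)  # "Deleting local config drive ... because it was imported into RBD."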
Feb 25 12:18:36 compute-0 ceph-mon[76335]: pgmap v993: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Feb 25 12:18:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3998105227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:36 compute-0 systemd-machined[210048]: New machine qemu-18-instance-00000010.
Feb 25 12:18:36 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.821 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.822 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:36 compute-0 nova_compute[244014]: 2026-02-25 12:18:36.929 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.138 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.139 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.148 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.149 244018 INFO nova.compute.claims [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.327 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021917.3754072, d974e887-fd2f-479e-951e-fad497dd7b0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.377 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Resumed (Lifecycle Event)
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.381 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.382 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.386 244018 INFO nova.virt.libvirt.driver [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance spawned successfully.
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.386 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.420 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.438 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.443 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.444 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.445 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.446 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.446 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.447 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.481 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.482 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021917.376808, d974e887-fd2f-479e-951e-fad497dd7b0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Started (Lifecycle Event)
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.536 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.538 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Successfully created port: 2e503dd2-735e-4bfc-87c7-dffab319d935 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.547 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.555 244018 INFO nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 3.37 seconds to spawn the instance on the hypervisor.
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.556 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.570 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.673 244018 INFO nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 4.55 seconds to build instance.
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.715 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320252035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.905 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
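Each instance claim above triggers a fresh "ceph df --format=json" call so the resource tracker can report pool-backed DISK_GB. A minimal sketch of issuing and parsing that call, assuming the same client id and conf path as the logged command (top-level field names follow recent Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    # e.g. 60 GiB total / 60 GiB avail, matching the pgmap lines above.
    print(stats["total_bytes"], stats["total_avail_bytes"])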
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.910 244018 DEBUG nova.compute.provider_tree [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.925 244018 DEBUG nova.scheduler.client.report [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.947 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:37 compute-0 nova_compute[244014]: 2026-02-25 12:18:37.948 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.037 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.038 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.072 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.104 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.216 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.218 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.219 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating image(s)
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.257 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.294 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.324 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.328 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.346 244018 DEBUG nova.policy [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.388 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.389 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.389 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.390 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.406 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.411 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7a6ab503-d433-40a7-9395-3d5660e852c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.622 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7a6ab503-d433-40a7-9395-3d5660e852c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.686 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.762 244018 DEBUG nova.objects.instance [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:38 compute-0 ceph-mon[76335]: pgmap v994: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 12:18:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/320252035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.802 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.802 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Ensure instance console log exists: /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:38 compute-0 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.291 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.291 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.292 244018 INFO nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Terminating instance
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.292 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.293 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquired lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.293 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.299 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Successfully updated port: 2e503dd2-735e-4bfc-87c7-dffab319d935 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.321 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.321 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.322 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.425 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Successfully created port: 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.511 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.611 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.773 244018 DEBUG nova.compute.manager [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-changed-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.774 244018 DEBUG nova.compute.manager [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Refreshing instance network info cache due to event network-changed-2e503dd2-735e-4bfc-87c7-dffab319d935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.774 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.894 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.912 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Releasing lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:39 compute-0 nova_compute[244014]: 2026-02-25 12:18:39.912 244018 DEBUG nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:18:39 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb 25 12:18:39 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 3.180s CPU time.
Feb 25 12:18:39 compute-0 systemd-machined[210048]: Machine qemu-18-instance-00000010 terminated.
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.138 244018 INFO nova.virt.libvirt.driver [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance destroyed successfully.
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.139 244018 DEBUG nova.objects.instance [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'resources' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.504 244018 INFO nova.virt.libvirt.driver [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deleting instance files /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f_del
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.506 244018 INFO nova.virt.libvirt.driver [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deletion of /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f_del complete
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.572 244018 INFO nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG oslo.service.loopingcall [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.779 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:40 compute-0 ceph-mon[76335]: pgmap v995: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.805 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.806 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance network_info: |[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.808 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.808 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Refreshing network info cache for port 2e503dd2-735e-4bfc-87c7-dffab319d935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.814 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.821 244018 WARNING nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.826 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.827 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.836 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.837 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.838 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.838 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.839 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.840 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.841 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.841 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.842 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.842 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.843 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.843 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.844 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.845 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.849 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.873 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.893 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.917 244018 INFO nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 0.34 seconds to deallocate network for instance.
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.969 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:40 compute-0 nova_compute[244014]: 2026-02-25 12:18:40.970 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.072 244018 DEBUG oslo_concurrency.processutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.173 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Successfully updated port: 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.192 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.193 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.194 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.309 244018 DEBUG nova.compute.manager [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-changed-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.310 244018 DEBUG nova.compute.manager [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Refreshing instance network info cache due to event network-changed-179cbc6a-d79f-4f46-88e8-f362ea0e4a26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.310 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444724717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.400 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.430 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.435 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 240 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Feb 25 12:18:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134556862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.597 244018 DEBUG oslo_concurrency.processutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.604 244018 DEBUG nova.compute.provider_tree [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.621 244018 DEBUG nova.scheduler.client.report [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.651 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.687 244018 INFO nova.scheduler.client.report [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Deleted allocations for instance d974e887-fd2f-479e-951e-fad497dd7b0f
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.736 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1444724717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3134556862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.822 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612929653' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.947 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.949 244018 DEBUG nova.virt.libvirt.vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:35Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.950 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.951 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.953 244018 DEBUG nova.objects.instance [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.982 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <name>instance-00000011</name>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:18:40</nova:creationTime>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 12:18:41 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:87:71:62"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <target dev="tap2e503dd2-73"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:41 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:41 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:41 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:41 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:41 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.983 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Preparing to wait for external event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.985 244018 DEBUG nova.virt.libvirt.vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:35Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.986 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.987 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.987 244018 DEBUG os_vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.989 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.989 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.994 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.994 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.995 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:41 compute-0 nova_compute[244014]: 2026-02-25 12:18:41.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:41 compute-0 NetworkManager[49836]: <info>  [1772021921.9987] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.008 244018 INFO os_vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006154421725418758 of space, bias 1.0, pg target 0.18463265176256272 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002489711543254153 of space, bias 1.0, pg target 0.7469134629762458 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.271449539215372e-07 of space, bias 4.0, pg target 0.0009925739447058447 quantized to 16 (current 16)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:18:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.123 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.124 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.124 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.125 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive
Feb 25 12:18:42 compute-0 nova_compute[244014]: 2026-02-25 12:18:42.157 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:42 compute-0 ceph-mon[76335]: pgmap v996: 305 pgs: 305 active+clean; 240 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Feb 25 12:18:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3612929653' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:43 compute-0 nova_compute[244014]: 2026-02-25 12:18:43.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 12:18:43 compute-0 nova_compute[244014]: 2026-02-25 12:18:43.967 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config
Feb 25 12:18:43 compute-0 nova_compute[244014]: 2026-02-25 12:18:43.974 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6do9jecn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.099 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6do9jecn" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.136 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.141 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.186 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.261 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.261 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance network_info: |[{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.262 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.262 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Refreshing network info cache for port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.265 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start _get_guest_xml network_info=[{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.272 244018 WARNING nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.278 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.279 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.291 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.292 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.292 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.293 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.297 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.302 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.323 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.326 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.
Feb 25 12:18:44 compute-0 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.3784] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Feb 25 12:18:44 compute-0 ovn_controller[147040]: 2026-02-25T12:18:44Z|00054|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 12:18:44 compute-0 ovn_controller[147040]: 2026-02-25T12:18:44Z|00055|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.408 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.413 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.417 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.431 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57896b35-523c-4cf4-8e58-043a548b143e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.433 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f4cbf9a-41 in ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.435 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f4cbf9a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3cfb3f-8ab9-4d40-bc16-6e84fba2bcd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.436 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bec435-41ca-424a-8a19-efa8e3686f45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 systemd-machined[210048]: New machine qemu-19-instance-00000011.
Feb 25 12:18:44 compute-0 systemd-udevd[261029]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.450 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad07655-ee12-4183-b994-3e5bc332b5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_controller[147040]: 2026-02-25T12:18:44Z|00056|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 12:18:44 compute-0 ovn_controller[147040]: 2026-02-25T12:18:44Z|00057|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.4640] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.4650] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc649c27-199a-4655-a5c5-2941b6cb6559]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.495 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7186ba1-0906-4823-a994-a20b4722bd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 systemd-udevd[261049]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.502 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05c55829-7390-4cfa-aebb-40d64cc944bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.5043] manager: (tap1f4cbf9a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.533 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[96527515-7a5c-42da-86be-b8a2a0a7ba71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.537 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed8e519-909e-4d91-8268-01f749d4b3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.5589] device (tap1f4cbf9a-40): carrier: link connected
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.564 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c00a627f-a443-449d-b24a-9418cd422e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.582 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75588d2d-7c8d-44b0-953e-277904d4beb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261080, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dce1d045-9240-4703-8b27-68d4fcdca2ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:f8a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389396, 'tstamp': 389396}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261081, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66f369e5-ecb2-4465-93b3-8517f44789d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261082, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.643 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57e22e1e-ba50-4079-8e2d-b5e3b876459e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d02bf53-1eae-4a83-9f55-3ee6b9673bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.697 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.698 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.699 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 kernel: tap1f4cbf9a-40: entered promiscuous mode
Feb 25 12:18:44 compute-0 NetworkManager[49836]: <info>  [1772021924.7020] manager: (tap1f4cbf9a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.709 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 ovn_controller[147040]: 2026-02-25T12:18:44Z|00058|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.715 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d055957-b673-41aa-9356-f8d125b7deea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.717 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:18:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.718 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'env', 'PROCESS_TAG=haproxy-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
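
The generated haproxy config above binds the metadata IP 169.254.169.254:80 inside the ovnmeta- namespace and relays each request to the neutron metadata UNIX socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve which network the request came from. Below is a minimal sketch of that relay leg, assuming the socket path and header name from the config above are reachable from where it runs; the X-Forwarded-For value is an illustrative guest address from this log.

    # Sketch: mimic the request haproxy forwards to the neutron metadata
    # socket, per the "listen listener" section of the config above.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/var/lib/neutron/metadata_proxy")
    conn.request("GET", "/openstack/latest/meta_data.json", headers={
        # added by "http-request add-header X-OVN-Network-ID ..." above
        "X-OVN-Network-ID": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6",
        # added by "option forwardfor"; example guest address from this log
        "X-Forwarded-For": "10.100.0.5",
    })
    print(conn.getresponse().status)
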
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.811 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021924.8108473, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)
Feb 25 12:18:44 compute-0 ceph-mon[76335]: pgmap v997: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.851 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.855 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021924.8116775, 52f927ad-a417-489f-9f92-87bc3433649d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.855 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Paused (Lifecycle Event)
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.886 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.891 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.896 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updated VIF entry in instance network info cache for port 2e503dd2-735e-4bfc-87c7-dffab319d935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.897 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.921 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (spawning). Skip.
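
The sync message above compares numeric power-state codes: "DB power_state: 0, VM power_state: 3" means the database still records NOSTATE while libvirt reports PAUSED, which is expected mid-spawn, hence the "pending task (spawning). Skip." A small sketch of the code-to-name mapping, which to my knowledge mirrors nova.compute.power_state:

    # Nova power-state codes as seen in the sync_power_state message above.
    STATE_MAP = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                 4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    # DB says NOSTATE, libvirt says PAUSED:
    print(STATE_MAP[0], "vs", STATE_MAP[3])
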
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.924 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461457061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.951 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.971 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:44 compute-0 nova_compute[244014]: 2026-02-25 12:18:44.975 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:45 compute-0 podman[261177]: 2026-02-25 12:18:45.099081404 +0000 UTC m=+0.057636979 container create 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:18:45 compute-0 systemd[1]: Started libpod-conmon-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope.
Feb 25 12:18:45 compute-0 podman[261177]: 2026-02-25 12:18:45.068083904 +0000 UTC m=+0.026639539 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:18:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc838cfd7fef58997060e892c18c0d6fc490da055ecee8b3fae6fbd9365cbb97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:45 compute-0 podman[261177]: 2026-02-25 12:18:45.191020064 +0000 UTC m=+0.149575639 container init 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:18:45 compute-0 podman[261177]: 2026-02-25 12:18:45.196093797 +0000 UTC m=+0.154649352 container start 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:18:45 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : New worker (261218) forked
Feb 25 12:18:45 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : Loading success.
Feb 25 12:18:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635832215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.576 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
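
The "ceph mon dump --format=json" calls above are how nova discovers the monitor addresses it later writes into the <host> elements of the RBD disk XML. A sketch reproducing that call with the same --id and --conf as the log; the "mons"/"addr" keys are the usual mon dump JSON layout:

    # Sketch: run the same mon-dump command nova logs above and print
    # the monitor addresses it would feed into the disk definition.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("addr"))
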
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.578 244018 DEBUG nova.virt.libvirt.vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.579 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.580 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.582 244018 DEBUG nova.objects.instance [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.609 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <uuid>7a6ab503-d433-40a7-9395-3d5660e852c9</uuid>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <name>instance-00000012</name>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-671436254</nova:name>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:18:44</nova:creationTime>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <nova:port uuid="179cbc6a-d79f-4f46-88e8-f362ea0e4a26">
Feb 25 12:18:45 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="serial">7a6ab503-d433-40a7-9395-3d5660e852c9</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="uuid">7a6ab503-d433-40a7-9395-3d5660e852c9</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7a6ab503-d433-40a7-9395-3d5660e852c9_disk">
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config">
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:d2:1d:06"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <target dev="tap179cbc6a-d7"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/console.log" append="off"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
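
The domain XML dumped above can be inspected with the stdlib ElementTree to pull out what nova configured: the RBD-backed disks and the tap interface. A sketch under the assumption the XML has been saved to a file (the filename here is hypothetical; "virsh dumpxml instance-00000012" would produce equivalent input):

    # Sketch: list RBD disk sources and tap devices from a dumped
    # domain XML like the one above.
    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-00000012.xml").getroot()  # hypothetical file
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            print("rbd image:", src.get("name"))   # e.g. vms/..._disk
    for tgt in dom.findall("./devices/interface/target"):
        print("tap device:", tgt.get("dev"))       # e.g. tap179cbc6a-d7
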
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.611 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Preparing to wait for external event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.613 244018 DEBUG nova.virt.libvirt.vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.614 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.615 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.615 244018 DEBUG os_vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.617 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.617 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap179cbc6a-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.623 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap179cbc6a-d7, col_values=(('external_ids', {'iface-id': '179cbc6a-d79f-4f46-88e8-f362ea0e4a26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:1d:06', 'vm-uuid': '7a6ab503-d433-40a7-9395-3d5660e852c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
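
The two-command transaction above (AddPortCommand plus DbSetCommand on the Interface row) is, as far as the OVSDB operations go, equivalent to a single ovs-vsctl invocation; the "--" separator batches both steps into one transaction. A sketch using the UUIDs and MAC from this log:

    # Sketch: CLI equivalent of the os-vif plug transaction logged above.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap179cbc6a-d7", "--",
        "set", "Interface", "tap179cbc6a-d7",
        "external_ids:iface-id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d2:1d:06",
        "external_ids:vm-uuid=7a6ab503-d433-40a7-9395-3d5660e852c9",
    ])
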
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:45 compute-0 NetworkManager[49836]: <info>  [1772021925.6270] manager: (tap179cbc6a-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.632 244018 INFO os_vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7')
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.711 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.712 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.713 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:d2:1d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.713 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Using config drive
Feb 25 12:18:45 compute-0 nova_compute[244014]: 2026-02-25 12:18:45.745 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2461457061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/635832215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.236 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating config drive at /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.242 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0snxfpqu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.369 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0snxfpqu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.400 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.405 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.473 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updated VIF entry in instance network info cache for port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.474 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.492 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.543 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.543 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deleting local config drive /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config because it was imported into RBD.
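
The config-drive episode above is three steps: build the ISO9660 image with mkisofs, import it into the "vms" RBD pool, then delete the local copy. A sketch reproducing them with the flags and names copied from the log lines; /tmp/metadata is a hypothetical staging directory standing in for nova's temporary directory (/tmp/tmp0snxfpqu in the log):

    # Sketch of the config-drive flow logged above.
    import os
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config")
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/metadata"])
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso,
        "7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf"])
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"
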
Feb 25 12:18:46 compute-0 kernel: tap179cbc6a-d7: entered promiscuous mode
Feb 25 12:18:46 compute-0 NetworkManager[49836]: <info>  [1772021926.6028] manager: (tap179cbc6a-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Feb 25 12:18:46 compute-0 ovn_controller[147040]: 2026-02-25T12:18:46Z|00059|binding|INFO|Claiming lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for this chassis.
Feb 25 12:18:46 compute-0 ovn_controller[147040]: 2026-02-25T12:18:46Z|00060|binding|INFO|179cbc6a-d79f-4f46-88e8-f362ea0e4a26: Claiming fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:46 compute-0 systemd-udevd[261067]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.614 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:1d:06 10.100.0.7'], port_security=['fa:16:3e:d2:1d:06 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7a6ab503-d433-40a7-9395-3d5660e852c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.617 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.620 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:18:46 compute-0 ovn_controller[147040]: 2026-02-25T12:18:46Z|00061|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 ovn-installed in OVS
Feb 25 12:18:46 compute-0 ovn_controller[147040]: 2026-02-25T12:18:46Z|00062|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 up in Southbound
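
The claim/up sequence from ovn_controller above ends with the Port_Binding row gaining a chassis and up=true in the Southbound DB. If one wanted to verify that state after the fact, something like the following should work, assuming ovn-sbctl can reach the Southbound database from this host:

    # Sketch: show the Southbound Port_Binding row that ovn-controller
    # reports claiming and setting up above.
    import subprocess

    print(subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26"]).decode())
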
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:46 compute-0 NetworkManager[49836]: <info>  [1772021926.6271] device (tap179cbc6a-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:18:46 compute-0 NetworkManager[49836]: <info>  [1772021926.6303] device (tap179cbc6a-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d73d5263-9873-4b78-bf66-dbc24ab5c4f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 systemd-machined[210048]: New machine qemu-20-instance-00000012.
Feb 25 12:18:46 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.669 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cc58e7c3-f952-4f69-855d-00be62da81d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.674 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[163af9b0-fe04-4950-bfe5-0e6c04a40d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.706 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0065c47f-fd6a-4af9-8539-8100059ad7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.728 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[403838d2-5f19-4743-898f-d7fe2d8f862c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261312, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.745 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[91fcb9e9-2565-493a-9446-49d8d8e31642]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261314, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261314, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.748 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:46 compute-0 nova_compute[244014]: 2026-02-25 12:18:46.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.756 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:46 compute-0 ceph-mon[76335]: pgmap v998: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.291 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021927.28961, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.292 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Started (Lifecycle Event)
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.345 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.349 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021927.290977, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.350 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Paused (Lifecycle Event)
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.367 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.371 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:47 compute-0 nova_compute[244014]: 2026-02-25 12:18:47.415 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:18:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:18:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:18:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:18:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 191 op/s
Feb 25 12:18:47 compute-0 podman[261357]: 2026-02-25 12:18:47.749715392 +0000 UTC m=+0.082335512 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:18:47 compute-0 podman[261358]: 2026-02-25 12:18:47.814669685 +0000 UTC m=+0.147493961 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:18:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:18:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:18:48 compute-0 nova_compute[244014]: 2026-02-25 12:18:48.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:48 compute-0 ceph-mon[76335]: pgmap v999: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 191 op/s
Feb 25 12:18:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Feb 25 12:18:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 25 12:18:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 25 12:18:49 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.414 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.415 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.447 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.574 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.575 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.584 244018 INFO nova.compute.claims [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:50 compute-0 nova_compute[244014]: 2026-02-25 12:18:50.741 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:50 compute-0 ceph-mon[76335]: pgmap v1000: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Feb 25 12:18:50 compute-0 ceph-mon[76335]: osdmap e131: 3 total, 3 up, 3 in
Feb 25 12:18:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079271304' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.264 244018 DEBUG nova.compute.manager [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.265 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.266 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.267 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.267 244018 DEBUG nova.compute.manager [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Processing event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.268 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.269 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.274 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021931.274385, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.275 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Resumed (Lifecycle Event)
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.296 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.302 244018 DEBUG nova.compute.provider_tree [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.309 244018 INFO nova.virt.libvirt.driver [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance spawned successfully.
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.309 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.321 244018 DEBUG nova.scheduler.client.report [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.329 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.337 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.342 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.343 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.343 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.344 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.345 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.346 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.394 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.395 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.467 244018 INFO nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 13.25 seconds to spawn the instance on the hypervisor.
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.468 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.483 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.484 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.524 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.590 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.595 244018 INFO nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 14.48 seconds to build instance.
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.645 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.737 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.739 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.740 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating image(s)
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.766 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.794 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.826 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.832 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.866 244018 DEBUG nova.policy [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6cf653b76cca4209ae56b9938dc0be3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bc1369afb1243ce828d930dea442277', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.872 244018 DEBUG nova.compute.manager [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.872 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG nova.compute.manager [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Processing event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.875 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.895 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021931.8804073, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.895 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.899 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 25 12:18:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1079271304' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.904 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.905 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:51 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.918 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.927 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.927 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.928 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.929 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.963 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.969 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 606b209d-b916-4ac7-87c7-887c274f747f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:51 compute-0 nova_compute[244014]: 2026-02-25 12:18:51.990 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.002 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.002 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.004 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.004 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.006 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.007 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.039 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.160 244018 INFO nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 16.17 seconds to spawn the instance on the hypervisor.
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.160 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.257 244018 INFO nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 17.32 seconds to build instance.
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.262 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 606b209d-b916-4ac7-87c7-887c274f747f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.303 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.353 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] resizing rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.473 244018 DEBUG nova.objects.instance [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'migration_context' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.523 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.524 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Ensure instance console log exists: /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.525 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.525 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.526 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:52 compute-0 nova_compute[244014]: 2026-02-25 12:18:52.794 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Successfully created port: 259aef1d-40c8-413d-93ff-e8e3a0eede8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:18:52 compute-0 ceph-mon[76335]: pgmap v1002: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Feb 25 12:18:52 compute-0 ceph-mon[76335]: osdmap e132: 3 total, 3 up, 3 in
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.459 244018 DEBUG nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.459 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.460 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.461 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.462 244018 DEBUG nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:18:53 compute-0 nova_compute[244014]: 2026-02-25 12:18:53.462 244018 WARNING nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received unexpected event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with vm_state active and task_state None.
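This warning block is Nova's external-event machinery rather than an error: Neutron posted network-vif-plugged for port 179cbc6a to the Nova API, which routed it to this host, but pop_instance_event found no registered waiter because instance 7a6ab503 is already active with task_state None, so the event is logged as unexpected and dropped. The same exchange repeats below for 52f927ad. A simplified sketch of the waiter registry (a toy class, not Nova's actual implementation):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Toy model of the registry behind prepare/pop_instance_event."""

        def __init__(self):
            self._lock = threading.Lock()        # the "<uuid>-events" lock
            self._waiters = defaultdict(dict)    # instance -> {event: Event}

        def prepare(self, instance, name):
            # Called before an operation that expects the event.
            with self._lock:
                return self._waiters[instance].setdefault(
                    name, threading.Event())

        def pop(self, instance, name):
            with self._lock:
                ev = self._waiters[instance].pop(name, None)
            if ev is None:
                # Matches "No waiting events found dispatching ..." plus
                # the "Received unexpected event" warning.
                print(f'unexpected event {name} for {instance}')
            else:
                ev.set()    # wakes the thread blocked in wait()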
Feb 25 12:18:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 703 KiB/s rd, 873 KiB/s wr, 82 op/s
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.131 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Successfully updated port: 259aef1d-40c8-413d-93ff-e8e3a0eede8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.176 244018 DEBUG nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.176 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.177 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.178 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.178 244018 DEBUG nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.179 244018 WARNING nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 12:18:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquired lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:18:54 compute-0 nova_compute[244014]: 2026-02-25 12:18:54.630 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:18:54 compute-0 ceph-mon[76335]: pgmap v1004: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 703 KiB/s rd, 873 KiB/s wr, 82 op/s
Feb 25 12:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:55 compute-0 nova_compute[244014]: 2026-02-25 12:18:55.134 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021920.1328886, d974e887-fd2f-479e-951e-fad497dd7b0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:55 compute-0 nova_compute[244014]: 2026-02-25 12:18:55.135 244018 INFO nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Stopped (Lifecycle Event)
Feb 25 12:18:55 compute-0 nova_compute[244014]: 2026-02-25 12:18:55.309 244018 DEBUG nova.compute.manager [None req-b06d835a-134f-43ba-b901-482d785ebecc - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
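The Stopped lifecycle event for d974e887 originates in libvirt's domain-event callback: the driver subscribes to lifecycle events, wraps them as Nova LifecycleEvent objects, and the manager then re-reads the power state (the "Checking state" line) to decide whether the stop was expected. A sketch of the subscription with the libvirt Python bindings (a loop thread servicing libvirt.virEventRunDefaultImpl() is assumed but not shown):

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # event is e.g. libvirt.VIR_DOMAIN_EVENT_STOPPED; Nova translates
        # this into the "<uuid> => Stopped" LifecycleEvent seen above.
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()    # must precede open()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)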
Feb 25 12:18:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 691 KiB/s rd, 854 KiB/s wr, 66 op/s
Feb 25 12:18:55 compute-0 nova_compute[244014]: 2026-02-25 12:18:55.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.201 244018 DEBUG nova.compute.manager [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-changed-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.202 244018 DEBUG nova.compute.manager [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Refreshing instance network info cache due to event network-changed-259aef1d-40c8-413d-93ff-e8e3a0eede8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.203 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.530 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.553 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Releasing lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.554 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance network_info: |[{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.555 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.555 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Refreshing network info cache for port 259aef1d-40c8-413d-93ff-e8e3a0eede8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
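The refresh above ends with update_instance_cache_with_nw_info persisting the port's full view: fixed IP 10.100.0.12/28, MAC fa:16:3e:4c:ea:d4, tap device tap259aef1d-40 on br-int, and MTU 1442 (consistent with Geneve tunnel overhead on a 1500-byte underlay). A short sketch of pulling out the fields the guest XML will need, assuming the logged JSON has been loaded into raw_network_info (a hypothetical variable):

    import json

    nw_info = json.loads(raw_network_info)   # the JSON logged at 12:18:56.530

    vif = nw_info[0]
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips'] if ip['type'] == 'fixed']
    mtu = vif['network']['meta']['mtu']
    print(vif['devname'], vif['address'], fixed_ips, mtu)
    # -> tap259aef1d-40 fa:16:3e:4c:ea:d4 ['10.100.0.12'] 1442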
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.560 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start _get_guest_xml network_info=[{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.565 244018 WARNING nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.570 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.571 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.579 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.580 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
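Before building CPU tuning, the driver probes which cgroup CPU controller the host exposes: the v1 probe comes up empty and the v2 probe succeeds, as expected on RHEL 9's unified hierarchy. The v2 check amounts to reading the root controllers file; a minimal equivalent (not Nova's exact code path):

    # On a cgroups-v2 host the available controllers are listed in one file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()

    # True here, matching "CPU controller found on host."
    print('cpu' in controllers)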
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.581 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.581 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.582 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.584 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.584 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.585 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.585 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.586 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.586 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.587 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
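With every flavor and image constraint unset (all 0:0:0, maxima of 65536), the only topology for one vCPU is sockets=1, cores=1, threads=1, which is what the search reports. The enumeration the log summarizes reduces to factorizations of the vCPU count within the per-dimension limits; a compact sketch (Nova's real version additionally orders results by the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product
        # equals vcpus and respects the maxima.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))
    # -> [(1, 1, 1)], the single topology found in the log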
Feb 25 12:18:56 compute-0 nova_compute[244014]: 2026-02-25 12:18:56.591 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:56 compute-0 ceph-mon[76335]: pgmap v1005: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 691 KiB/s rd, 854 KiB/s wr, 66 op/s
Feb 25 12:18:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887839894' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.236 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
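The ceph mon dump runs are rbd_utils discovering monitor endpoints: the JSON lists each mon's address, and those become the <host name="192.168.122.100" port="6789"/> elements inside the disk <source> of the guest XML further down. A sketch of that extraction, run as client.openstack as in the log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # Each mon reports an addr like "192.168.122.100:6789/0";
    # strip the trailing nonce and split host from port.
    hosts = []
    for mon in json.loads(out)['mons']:
        addr = mon['addr'].rsplit('/', 1)[0]
        host, port = addr.rsplit(':', 1)
        hosts.append((host, port))

    print(hosts)   # [('192.168.122.100', '6789')] on this one-mon cluster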
Feb 25 12:18:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:57.258 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:18:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:57.259 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.273 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.281 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 295 op/s
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.629 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.630 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.630 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.652 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
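_heal_instance_info_cache is one of the compute manager's oslo.service periodic tasks: each pass rebuilds the list of local instances and refreshes a stale network cache, skipping instances still building, as it does for 606b209d here. The decorator pattern, sketched (hypothetical body; in real code PeriodicTasks is instantiated with a config object and driven by a periodic runner):

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task
        def _heal_instance_info_cache(self, context):
            # Hypothetical body: choose one instance whose cache is
            # stale and refresh it; Building instances are skipped,
            # exactly as the log line above reports.
            pass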
Feb 25 12:18:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:18:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329172932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.857 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.859 244018 DEBUG nova.virt.libvirt.vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.860 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.861 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.863 244018 DEBUG nova.objects.instance [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'pci_devices' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.888 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <uuid>606b209d-b916-4ac7-87c7-887c274f747f</uuid>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <name>instance-00000013</name>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1771878194</nova:name>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:18:56</nova:creationTime>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:user uuid="6cf653b76cca4209ae56b9938dc0be3b">tempest-ImagesNegativeTestJSON-926895290-project-member</nova:user>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:project uuid="9bc1369afb1243ce828d930dea442277">tempest-ImagesNegativeTestJSON-926895290</nova:project>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <nova:port uuid="259aef1d-40c8-413d-93ff-e8e3a0eede8a">
Feb 25 12:18:57 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <system>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="serial">606b209d-b916-4ac7-87c7-887c274f747f</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="uuid">606b209d-b916-4ac7-87c7-887c274f747f</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </system>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <os>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </os>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <features>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </features>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/606b209d-b916-4ac7-87c7-887c274f747f_disk">
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/606b209d-b916-4ac7-87c7-887c274f747f_disk.config">
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </source>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:18:57 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:4c:ea:d4"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <target dev="tap259aef1d-40"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/console.log" append="off"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <video>
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </video>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:18:57 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:18:57 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:18:57 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:18:57 compute-0 nova_compute[244014]: </domain>
Feb 25 12:18:57 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
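With the XML rendered, the driver hands it to libvirt to define and boot the guest. Both disks point straight at RBD (vms/..._disk plus the _disk.config config drive) using the mon address discovered earlier, so no local qcow2 file is involved, and the tap interface matches the OVS port plugged next. A minimal sketch of the libvirt core of this step (Nova's own spawn path also coordinates event waits and power-state bookkeeping), with domain_xml standing in for the dump above:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(domain_xml)   # persist instance-00000013
        dom.createWithFlags(0)             # and boot it
    finally:
        conn.close()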
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.897 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Preparing to wait for external event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.897 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.898 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.898 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.899 244018 DEBUG nova.virt.libvirt.vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.900 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.901 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.902 244018 DEBUG os_vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.907 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.907 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
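The Acquiring/Acquired lines above come from oslo.concurrency's named-lock helper, which the compute manager wraps around the network-info cache refresh. A minimal sketch of the same pattern, assuming only the documented lockutils.lock() context manager (the refresh helper is hypothetical):

    from oslo_concurrency import lockutils

    def refresh_cache(instance_uuid):
        # lockutils.lock() serializes callers on a named semaphore and emits
        # the Acquiring/Acquired/Releasing DEBUG lines seen in this log
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            refresh_network_info(instance_uuid)  # hypothetical helper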
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.913 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap259aef1d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.914 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap259aef1d-40, col_values=(('external_ids', {'iface-id': '259aef1d-40c8-413d-93ff-e8e3a0eede8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:ea:d4', 'vm-uuid': '606b209d-b916-4ac7-87c7-887c274f747f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
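The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp's Open_vSwitch schema API at work. A minimal sketch of issuing the same idempotent transaction, assuming the standard impl_idl interface and a local OVSDB unix socket (external_ids trimmed for brevity):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed socket path; the agents get theirs from configuration
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True makes both commands no-ops when the row already
        # exists, which is why the bridge txn logged "caused no change"
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap259aef1d-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap259aef1d-40',
            ('external_ids',
             {'iface-id': '259aef1d-40c8-413d-93ff-e8e3a0eede8a',
              'attached-mac': 'fa:16:3e:4c:ea:d4'})))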
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:57 compute-0 NetworkManager[49836]: <info>  [1772021937.9174] manager: (tap259aef1d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.922 244018 INFO os_vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40')
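os_vif.plug() takes an os-vif VIF object plus an InstanceInfo, as serialized in the log line above. A rough sketch of driving the OVS plugin directly; field values are abridged from the log, and the exact set of required fields can vary by os-vif version:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins (ovs among them)

    inst = instance_info.InstanceInfo(
        uuid='606b209d-b916-4ac7-87c7-887c274f747f',
        name='instance-00000013')
    ovs_vif = vif.VIFOpenVSwitch(
        id='259aef1d-40c8-413d-93ff-e8e3a0eede8a',
        address='fa:16:3e:4c:ea:d4',
        vif_name='tap259aef1d-40',
        bridge_name='br-int',
        network=network.Network(id='66127cb2-231b-48c1-8e16-29c2386af2ee'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='259aef1d-40c8-413d-93ff-e8e3a0eede8a'))

    os_vif.plug(ovs_vif, inst)  # the call logged at os_vif/__init__.py:76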
Feb 25 12:18:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1887839894' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/329172932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.991 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.992 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.993 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No VIF found with MAC fa:16:3e:4c:ea:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:18:57 compute-0 nova_compute[244014]: 2026-02-25 12:18:57.994 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Using config drive
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.025 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.408 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updated VIF entry in instance network info cache for port 259aef1d-40c8-413d-93ff-e8e3a0eede8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.410 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.516 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.713 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating config drive at /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.721 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_lih5yk0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.854 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_lih5yk0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
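The config drive is a plain mkisofs run. A minimal Python sketch of the same invocation, with flags copied from the logged command:

    import subprocess

    def build_config_drive(iso_path, staging_dir, publisher):
        # -V config-2 sets the volume label cloud-init probes for;
        # -J/-r add Joliet and Rock Ridge extensions as in the log
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', iso_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', publisher, '-quiet', '-J', '-r',
             '-V', 'config-2', staging_dir],
            check=True)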
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.901 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.919 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config 606b209d-b916-4ac7-87c7-887c274f747f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:58 compute-0 ceph-mon[76335]: pgmap v1006: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 295 op/s
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.967 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.969 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:58 compute-0 nova_compute[244014]: 2026-02-25 12:18:58.995 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.063 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config 606b209d-b916-4ac7-87c7-887c274f747f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.064 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deleting local config drive /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config because it was imported into RBD.
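The local ISO is pushed into the vms pool and then deleted. A sketch of the equivalent rbd call; arguments mirror the logged command:

    import subprocess

    def import_into_rbd(local_path, image_name, pool='vms',
                        client='openstack', conf='/etc/ceph/ceph.conf'):
        # --image-format=2 creates a format-2 RBD image (needed for
        # layering/snapshots); --id/--conf select the cephx client
        subprocess.run(
            ['rbd', 'import', '--pool', pool, local_path, image_name,
             '--image-format=2', '--id', client, '--conf', conf],
            check=True)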
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.088 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.089 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.100 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.101 244018 INFO nova.compute.claims [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:18:59 compute-0 kernel: tap259aef1d-40: entered promiscuous mode
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ovn_controller[147040]: 2026-02-25T12:18:59Z|00063|binding|INFO|Claiming lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a for this chassis.
Feb 25 12:18:59 compute-0 ovn_controller[147040]: 2026-02-25T12:18:59Z|00064|binding|INFO|259aef1d-40c8-413d-93ff-e8e3a0eede8a: Claiming fa:16:3e:4c:ea:d4 10.100.0.12
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.1143] manager: (tap259aef1d-40): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.131 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:ea:d4 10.100.0.12'], port_security=['fa:16:3e:4c:ea:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '606b209d-b916-4ac7-87c7-887c274f747f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66127cb2-231b-48c1-8e16-29c2386af2ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc1369afb1243ce828d930dea442277', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f267df70-0e5b-45b2-8b12-ebc2ca8b02c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=252bdedd-c508-4af3-ad86-a8b65ae67147, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=259aef1d-40c8-413d-93ff-e8e3a0eede8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.132 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 259aef1d-40c8-413d-93ff-e8e3a0eede8a in datapath 66127cb2-231b-48c1-8e16-29c2386af2ee bound to our chassis
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.133 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66127cb2-231b-48c1-8e16-29c2386af2ee
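The Matched UPDATE line above is an ovsdbapp row-event watcher firing on the southbound Port_Binding table. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base; the match/run bodies are illustrative, not the agent's actual logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # fire only on updates to Port_Binding rows
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # illustrative: ports that just gained a chassis binding
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)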
Feb 25 12:18:59 compute-0 systemd-udevd[261720]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7bbff4-84f8-4303-bde8-5834b4c81a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.150 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66127cb2-21 in ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
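The agent creates that veth pair through privsep and pyroute2; a rough shell-level equivalent using iproute2, with names taken from the log (run as root):

    import subprocess

    def provision_metadata_veth(
            ns='ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee',
            host_if='tap66127cb2-20', ns_if='tap66127cb2-21'):
        # create the namespace, a veth pair with one end inside it,
        # and bring both ends up
        subprocess.run(['ip', 'netns', 'add', ns], check=True)
        subprocess.run(['ip', 'link', 'add', host_if, 'type', 'veth',
                        'peer', 'name', ns_if, 'netns', ns], check=True)
        subprocess.run(['ip', 'link', 'set', host_if, 'up'], check=True)
        subprocess.run(['ip', '-n', ns, 'link', 'set', ns_if, 'up'],
                       check=True)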
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.152 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66127cb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.1564] device (tap259aef1d-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1205603e-1e85-4462-aa7b-70bf83622a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.1572] device (tap259aef1d-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e32186e-1ad0-4515-ac74-af04efbba25e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_controller[147040]: 2026-02-25T12:18:59Z|00065|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a ovn-installed in OVS
Feb 25 12:18:59 compute-0 ovn_controller[147040]: 2026-02-25T12:18:59Z|00066|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a up in Southbound
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 systemd-machined[210048]: New machine qemu-21-instance-00000013.
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.169 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[37cd81ab-9b98-4905-8e4a-0d38af714817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Feb 25 12:18:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:18:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[180faba3-fb2e-4624-b764-64b3a2769874]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 25 12:18:59 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.210 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e48f93e-c0db-42fa-b2cd-655b0be0b7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a733557-ab14-4782-8dae-becac0bfd449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 systemd-udevd[261726]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.2169] manager: (tap66127cb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.250 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f87441a3-b9fd-4860-8a6e-153068a2fb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.255 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2167697-b1cc-4e66-9fb8-05e0fa87976f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.2713] device (tap66127cb2-20): carrier: link connected
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.275 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef823b3-950a-463a-ad64-c32797599edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.287 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d0f416-1e60-4894-9d33-5e1aeb369eec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66127cb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:2a:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390867, 'reachable_time': 35171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261756, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.308 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b80ede2-6a9e-44b3-af92-0018fb5fc1fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:2a5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 390867, 'tstamp': 390867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261758, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00298518-a645-4d06-a19f-938a00398c55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66127cb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:2a:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390867, 'reachable_time': 35171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261759, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21cdba1a-15b6-4d34-aa2c-8310ea5a6464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.407 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8c093b1d-f73f-4aab-8a39-a6bb6b4e42fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.408 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66127cb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.409 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.411 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66127cb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:59 compute-0 NetworkManager[49836]: <info>  [1772021939.4145] manager: (tap66127cb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 25 12:18:59 compute-0 kernel: tap66127cb2-20: entered promiscuous mode
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66127cb2-20, col_values=(('external_ids', {'iface-id': 'ca0c018e-5c35-4116-8dd5-552f68754c1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ovn_controller[147040]: 2026-02-25T12:18:59Z|00067|binding|INFO|Releasing lport ca0c018e-5c35-4116-8dd5-552f68754c1a from this chassis (sb_readonly=0)
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.421 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.422 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23162b11-ac62-4120-8d33-aaf03af8d133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.423 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-66127cb2-231b-48c1-8e16-29c2386af2ee
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 66127cb2-231b-48c1-8e16-29c2386af2ee
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:18:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.424 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'env', 'PROCESS_TAG=haproxy-66127cb2-231b-48c1-8e16-29c2386af2ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66127cb2-231b-48c1-8e16-29c2386af2ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
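Stripped of the rootwrap indirection, that command is haproxy launched inside the metadata namespace. A sketch of the same invocation (run as root):

    import subprocess

    def spawn_metadata_haproxy(ns, cfg_path, tag):
        # PROCESS_TAG is what lets the agent find and manage this
        # haproxy instance later
        subprocess.run(
            ['ip', 'netns', 'exec', ns,
             'env', 'PROCESS_TAG=%s' % tag,
             'haproxy', '-f', cfg_path],
            check=True)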
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:18:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 270 op/s
Feb 25 12:18:59 compute-0 podman[261828]: 2026-02-25 12:18:59.752739506 +0000 UTC m=+0.042685259 container create 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:18:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:18:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744705052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:18:59 compute-0 systemd[1]: Started libpod-conmon-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope.
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.816 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8165283, 606b209d-b916-4ac7-87c7-887c274f747f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.817 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Started (Lifecycle Event)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.820 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
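That ceph df call is how the RBD image backend reports pool capacity back to the resource tracker. A sketch of the same query with the JSON output parsed:

    import json
    import subprocess

    def ceph_df(client='openstack', conf='/etc/ceph/ceph.conf'):
        # identical command line to the one logged above
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf],
            check=True, capture_output=True, text=True)
        return json.loads(out.stdout)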
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.824 244018 DEBUG nova.compute.provider_tree [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:18:59 compute-0 podman[261828]: 2026-02-25 12:18:59.729947496 +0000 UTC m=+0.019893269 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:18:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d034a831b5d2ac46587817457b2ea4e03f7429753844bd67b29486a82f6de6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.843 244018 DEBUG nova.scheduler.client.report [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:18:59 compute-0 podman[261828]: 2026-02-25 12:18:59.848430622 +0000 UTC m=+0.138376395 container init 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8166451, 606b209d-b916-4ac7-87c7-887c274f747f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.849 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Paused (Lifecycle Event)
Feb 25 12:18:59 compute-0 podman[261828]: 2026-02-25 12:18:59.862877677 +0000 UTC m=+0.152823420 container start 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.870 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.872 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.873 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.876 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.888 244018 DEBUG nova.compute.manager [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.890 244018 DEBUG nova.compute.manager [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Processing event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.890 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:18:59 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : New worker (261873) forked
Feb 25 12:18:59 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : Loading success.
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.897 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.899 244018 INFO nova.virt.libvirt.driver [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance spawned successfully.
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.900 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8935006, 606b209d-b916-4ac7-87c7-887c274f747f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Resumed (Lifecycle Event)
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.930 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.931 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.931 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.932 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.932 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.934 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.938 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.941 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.941 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.946 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.964 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.967 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:18:59 compute-0 nova_compute[244014]: 2026-02-25 12:18:59.996 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.006 244018 INFO nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 8.27 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.007 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.078 244018 INFO nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 9.53 seconds to build instance.
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.108 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.116 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.118 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.118 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating image(s)
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.141 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.171 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:00 compute-0 ceph-mon[76335]: osdmap e133: 3 total, 3 up, 3 in
Feb 25 12:19:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2744705052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.217 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.221 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.293 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.295 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.295 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.296 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.319 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.323 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.399 244018 DEBUG nova.policy [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.595 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.702 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.808 244018 DEBUG nova.objects.instance [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.832 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.833 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Ensure instance console log exists: /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.833 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.834 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:00 compute-0 nova_compute[244014]: 2026-02-25 12:19:00.834 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:01 compute-0 ceph-mon[76335]: pgmap v1008: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 270 op/s
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.563 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 314 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.589 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.590 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.591 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.592 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.593 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.776 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.777 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.778 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.778 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.779 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.784 244018 INFO nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Terminating instance
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.789 244018 DEBUG nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.793 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Successfully created port: 24e2d6b3-f0d8-4603-8c1e-da65e164050d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:01 compute-0 kernel: tap259aef1d-40 (unregistering): left promiscuous mode
Feb 25 12:19:01 compute-0 NetworkManager[49836]: <info>  [1772021941.8655] device (tap259aef1d-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:19:01 compute-0 ovn_controller[147040]: 2026-02-25T12:19:01Z|00068|binding|INFO|Releasing lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a from this chassis (sb_readonly=0)
Feb 25 12:19:01 compute-0 ovn_controller[147040]: 2026-02-25T12:19:01Z|00069|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a down in Southbound
Feb 25 12:19:01 compute-0 ovn_controller[147040]: 2026-02-25T12:19:01Z|00070|binding|INFO|Removing iface tap259aef1d-40 ovn-installed in OVS
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:01 compute-0 nova_compute[244014]: 2026-02-25 12:19:01.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.880 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:ea:d4 10.100.0.12'], port_security=['fa:16:3e:4c:ea:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '606b209d-b916-4ac7-87c7-887c274f747f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66127cb2-231b-48c1-8e16-29c2386af2ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc1369afb1243ce828d930dea442277', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f267df70-0e5b-45b2-8b12-ebc2ca8b02c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=252bdedd-c508-4af3-ad86-a8b65ae67147, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=259aef1d-40c8-413d-93ff-e8e3a0eede8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.882 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 259aef1d-40c8-413d-93ff-e8e3a0eede8a in datapath 66127cb2-231b-48c1-8e16-29c2386af2ee unbound from our chassis
Feb 25 12:19:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.885 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66127cb2-231b-48c1-8e16-29c2386af2ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:19:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab4227f-161d-4acd-9b20-997f262d8b9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.890 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee namespace which is not needed anymore
Feb 25 12:19:01 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb 25 12:19:01 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 2.586s CPU time.
Feb 25 12:19:01 compute-0 systemd-machined[210048]: Machine qemu-21-instance-00000013 terminated.
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.034 244018 DEBUG nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.035 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.035 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.036 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.036 244018 DEBUG nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.037 244018 WARNING nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received unexpected event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with vm_state active and task_state deleting.
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.046 244018 INFO nova.virt.libvirt.driver [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance destroyed successfully.
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.047 244018 DEBUG nova.objects.instance [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'resources' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : haproxy version is 2.8.14-c23fe91
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : path to executable is /usr/sbin/haproxy
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : Exiting Master process...
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : Exiting Master process...
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [ALERT]    (261871) : Current worker (261873) exited with code 143 (Terminated)
Feb 25 12:19:02 compute-0 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : All workers exited. Exiting... (0)
Feb 25 12:19:02 compute-0 systemd[1]: libpod-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope: Deactivated successfully.
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.070 244018 DEBUG nova.virt.libvirt.vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.071 244018 DEBUG nova.network.os_vif_util [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.072 244018 DEBUG nova.network.os_vif_util [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.072 244018 DEBUG os_vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:19:02 compute-0 podman[262070]: 2026-02-25 12:19:02.075321326 +0000 UTC m=+0.057909006 container died 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.077 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap259aef1d-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.086 244018 INFO os_vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40')
Feb 25 12:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-95d034a831b5d2ac46587817457b2ea4e03f7429753844bd67b29486a82f6de6-merged.mount: Deactivated successfully.
Feb 25 12:19:02 compute-0 podman[262070]: 2026-02-25 12:19:02.149778756 +0000 UTC m=+0.132366436 container cleanup 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:19:02 compute-0 systemd[1]: libpod-conmon-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope: Deactivated successfully.
Feb 25 12:19:02 compute-0 podman[262126]: 2026-02-25 12:19:02.245440711 +0000 UTC m=+0.072520606 container remove 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.252 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaa444e-31c3-43d7-8f8a-7b1f0fd53ec5]: (4, ('Wed Feb 25 12:19:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee (56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c)\n56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c\nWed Feb 25 12:19:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee (56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c)\n56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4590dd38-1660-413a-806d-c4ac293f5a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66127cb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:02 compute-0 kernel: tap66127cb2-20: left promiscuous mode
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a73e3c5-c9a9-43b9-a3ce-d3133cb89911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.280 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49f1cc35-1608-4184-b5b4-552332887d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.281 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2296cb07-7603-4031-8721-90ca39969a99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9d9121-5a3b-4f1b-ba91-513d0f187d04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390861, 'reachable_time': 22852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262138, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d66127cb2\x2d231b\x2d48c1\x2d8e16\x2d29c2386af2ee.mount: Deactivated successfully.
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.307 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:19:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.307 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c3efe016-ae99-4a39-ae30-0635a31f53f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.462 244018 INFO nova.virt.libvirt.driver [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deleting instance files /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f_del
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.464 244018 INFO nova.virt.libvirt.driver [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deletion of /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f_del complete
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.544 244018 INFO nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG oslo.service.loopingcall [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG nova.network.neutron [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:02 compute-0 nova_compute[244014]: 2026-02-25 12:19:02.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:19:03 compute-0 sudo[262144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:19:03 compute-0 sudo[262144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:03 compute-0 sudo[262144]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:03 compute-0 sudo[262169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:19:03 compute-0 sudo[262169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: pgmap v1009: 305 pgs: 305 active+clean; 314 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 12:19:03 compute-0 sudo[262169]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:19:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:19:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:19:03 compute-0 ovn_controller[147040]: 2026-02-25T12:19:03Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:03 compute-0 ovn_controller[147040]: 2026-02-25T12:19:03Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:03 compute-0 sudo[262225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:19:03 compute-0 sudo[262225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:03 compute-0 sudo[262225]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:03 compute-0 sudo[262250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:19:03 compute-0 sudo[262250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:19:03 compute-0 nova_compute[244014]: 2026-02-25 12:19:03.925 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.125 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.126 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.126 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.127 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.127 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.130 244018 WARNING nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received unexpected event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with vm_state active and task_state deleting.
Feb 25 12:19:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.199455887 +0000 UTC m=+0.046298260 container create fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.203 244018 DEBUG nova.network.neutron [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.225 244018 INFO nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 1.68 seconds to deallocate network for instance.
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:19:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:19:04 compute-0 systemd[1]: Started libpod-conmon-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope.
Feb 25 12:19:04 compute-0 ovn_controller[147040]: 2026-02-25T12:19:04Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 12:19:04 compute-0 ovn_controller[147040]: 2026-02-25T12:19:04Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.177756048 +0000 UTC m=+0.024598441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.283 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.283 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.288 244018 DEBUG nova.compute.manager [req-7c157564-7632-4dfc-8726-2e371f7eaada req-eaeeb418-af7e-4d81-a30e-ee5e42fe438f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-deleted-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.324381724 +0000 UTC m=+0.171224107 container init fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.33458473 +0000 UTC m=+0.181427113 container start fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:19:04 compute-0 objective_yonath[262324]: 167 167
Feb 25 12:19:04 compute-0 systemd[1]: libpod-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope: Deactivated successfully.
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.346672079 +0000 UTC m=+0.193514502 container attach fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.347305537 +0000 UTC m=+0.194147920 container died fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.384 244018 DEBUG oslo_concurrency.processutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d8775d111f512136a542e65d31a4ba28f25d2bd05449f827e65888111911a62-merged.mount: Deactivated successfully.
Feb 25 12:19:04 compute-0 podman[262308]: 2026-02-25 12:19:04.444387582 +0000 UTC m=+0.291229965 container remove fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 12:19:04 compute-0 systemd[1]: libpod-conmon-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope: Deactivated successfully.
Feb 25 12:19:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181884690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.578 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.580 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.584 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:19:04 compute-0 podman[262370]: 2026-02-25 12:19:04.658337107 +0000 UTC m=+0.075811979 container create 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:04 compute-0 podman[262370]: 2026-02-25 12:19:04.610787023 +0000 UTC m=+0.028261945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:04 compute-0 systemd[1]: Started libpod-conmon-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope.
Feb 25 12:19:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:04 compute-0 podman[262370]: 2026-02-25 12:19:04.759056164 +0000 UTC m=+0.176531046 container init 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:19:04 compute-0 podman[262370]: 2026-02-25 12:19:04.766265616 +0000 UTC m=+0.183740448 container start 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:19:04 compute-0 podman[262370]: 2026-02-25 12:19:04.779362164 +0000 UTC m=+0.196836996 container attach 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.820 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=59.88206484075636GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.822 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.855 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Successfully updated port: 24e2d6b3-f0d8-4603-8c1e-da65e164050d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.869 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.870 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:04 compute-0 nova_compute[244014]: 2026-02-25 12:19:04.870 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240962882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.007 244018 DEBUG oslo_concurrency.processutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.011 244018 DEBUG nova.compute.provider_tree [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.027 244018 DEBUG nova.scheduler.client.report [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.048 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.049 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.076 244018 INFO nova.scheduler.client.report [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Deleted allocations for instance 606b209d-b916-4ac7-87c7-887c274f747f
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 52f927ad-a417-489f-9f92-87bc3433649d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7a6ab503-d433-40a7-9395-3d5660e852c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d9b67bce-8a7c-4f49-9cab-3e20377ca630 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.147 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.161 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:05 compute-0 gifted_ganguly[262386]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:19:05 compute-0 gifted_ganguly[262386]: --> All data devices are unavailable
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.204 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:05 compute-0 systemd[1]: libpod-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope: Deactivated successfully.
Feb 25 12:19:05 compute-0 podman[262370]: 2026-02-25 12:19:05.236050853 +0000 UTC m=+0.653525715 container died 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:19:05 compute-0 ceph-mon[76335]: pgmap v1010: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 12:19:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3181884690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/240962882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81-merged.mount: Deactivated successfully.
Feb 25 12:19:05 compute-0 podman[262370]: 2026-02-25 12:19:05.356616956 +0000 UTC m=+0.774091788 container remove 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:05 compute-0 systemd[1]: libpod-conmon-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope: Deactivated successfully.
Feb 25 12:19:05 compute-0 sudo[262250]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:05 compute-0 sudo[262440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:19:05 compute-0 sudo[262440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:05 compute-0 sudo[262440]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:05 compute-0 sudo[262465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:19:05 compute-0 sudo[262465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 12:19:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337113847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.770 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.778 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.794 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.820 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:19:05 compute-0 nova_compute[244014]: 2026-02-25 12:19:05.821 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:05 compute-0 podman[262506]: 2026-02-25 12:19:05.861877888 +0000 UTC m=+0.072593739 container create fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:05 compute-0 podman[262506]: 2026-02-25 12:19:05.814130907 +0000 UTC m=+0.024846818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:05 compute-0 systemd[1]: Started libpod-conmon-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope.
Feb 25 12:19:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:05 compute-0 podman[262506]: 2026-02-25 12:19:05.972216375 +0000 UTC m=+0.182932276 container init fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:19:05 compute-0 podman[262506]: 2026-02-25 12:19:05.980326592 +0000 UTC m=+0.191042453 container start fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:19:05 compute-0 peaceful_colden[262523]: 167 167
Feb 25 12:19:05 compute-0 systemd[1]: libpod-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope: Deactivated successfully.
Feb 25 12:19:06 compute-0 podman[262506]: 2026-02-25 12:19:06.002808303 +0000 UTC m=+0.213524214 container attach fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:19:06 compute-0 podman[262506]: 2026-02-25 12:19:06.004256224 +0000 UTC m=+0.214972085 container died fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 12:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6dcf0141afc45ac3079eb8817b4c30ff8221dd55fa98456b51f4fca9336fb0-merged.mount: Deactivated successfully.
Feb 25 12:19:06 compute-0 podman[262506]: 2026-02-25 12:19:06.133420239 +0000 UTC m=+0.344136090 container remove fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:06 compute-0 systemd[1]: libpod-conmon-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope: Deactivated successfully.
Feb 25 12:19:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:06.261 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/337113847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.357930821 +0000 UTC m=+0.079359628 container create 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.315253173 +0000 UTC m=+0.036682020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.411 244018 DEBUG nova.compute.manager [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-changed-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.411 244018 DEBUG nova.compute.manager [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Refreshing instance network info cache due to event network-changed-24e2d6b3-f0d8-4603-8c1e-da65e164050d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.412 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:06 compute-0 systemd[1]: Started libpod-conmon-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope.
Feb 25 12:19:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.502503519 +0000 UTC m=+0.223932336 container init 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.511847451 +0000 UTC m=+0.233276248 container start 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.526579065 +0000 UTC m=+0.248007912 container attach 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.568 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.584 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance network_info: |[{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Refreshing network info cache for port 24e2d6b3-f0d8-4603-8c1e-da65e164050d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.589 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start _get_guest_xml network_info=[{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.593 244018 WARNING nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.598 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.600 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.606 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.606 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:06 compute-0 nova_compute[244014]: 2026-02-25 12:19:06.612 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]: {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     "0": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "devices": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "/dev/loop3"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             ],
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_name": "ceph_lv0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_size": "21470642176",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "name": "ceph_lv0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "tags": {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_name": "ceph",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.crush_device_class": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.encrypted": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.objectstore": "bluestore",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_id": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.vdo": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.with_tpm": "0"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             },
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "vg_name": "ceph_vg0"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         }
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     ],
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     "1": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "devices": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "/dev/loop4"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             ],
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_name": "ceph_lv1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_size": "21470642176",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "name": "ceph_lv1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "tags": {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_name": "ceph",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.crush_device_class": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.encrypted": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.objectstore": "bluestore",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_id": "1",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.vdo": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.with_tpm": "0"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             },
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "vg_name": "ceph_vg1"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         }
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     ],
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     "2": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "devices": [
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "/dev/loop5"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             ],
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_name": "ceph_lv2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_size": "21470642176",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "name": "ceph_lv2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "tags": {
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.cluster_name": "ceph",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.crush_device_class": "",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.encrypted": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.objectstore": "bluestore",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osd_id": "2",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.vdo": "0",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:                 "ceph.with_tpm": "0"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             },
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "type": "block",
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:             "vg_name": "ceph_vg2"
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:         }
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]:     ]
Feb 25 12:19:06 compute-0 intelligent_proskuriakova[262564]: }
Feb 25 12:19:06 compute-0 systemd[1]: libpod-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope: Deactivated successfully.
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.869473739 +0000 UTC m=+0.590902526 container died 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de-merged.mount: Deactivated successfully.
Feb 25 12:19:06 compute-0 podman[262548]: 2026-02-25 12:19:06.961006918 +0000 UTC m=+0.682435695 container remove 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:06 compute-0 systemd[1]: libpod-conmon-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope: Deactivated successfully.
Feb 25 12:19:07 compute-0 sudo[262465]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:07 compute-0 sudo[262604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:19:07 compute-0 sudo[262604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:07 compute-0 sudo[262604]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:07 compute-0 sudo[262629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:19:07 compute-0 sudo[262629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182560998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.153 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.183 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.187 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:07 compute-0 ceph-mon[76335]: pgmap v1011: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 12:19:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3182560998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.416203215 +0000 UTC m=+0.058989497 container create 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:07 compute-0 systemd[1]: Started libpod-conmon-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope.
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.392060347 +0000 UTC m=+0.034846609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.513724952 +0000 UTC m=+0.156511214 container init 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.521591013 +0000 UTC m=+0.164377295 container start 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:07 compute-0 charming_einstein[262722]: 167 167
Feb 25 12:19:07 compute-0 systemd[1]: libpod-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope: Deactivated successfully.
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.527254642 +0000 UTC m=+0.170041074 container attach 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.528191298 +0000 UTC m=+0.170977580 container died 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-78b13219497a183ed859effe85a7038d56c81742c962618778f3ea843fa7e766-merged.mount: Deactivated successfully.
Feb 25 12:19:07 compute-0 podman[262706]: 2026-02-25 12:19:07.58417791 +0000 UTC m=+0.226964192 container remove 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:19:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.2 MiB/s wr, 300 op/s
Feb 25 12:19:07 compute-0 systemd[1]: libpod-conmon-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope: Deactivated successfully.
Feb 25 12:19:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84937775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.750 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.752 244018 DEBUG nova.virt.libvirt.vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.753 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.754 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.756 244018 DEBUG nova.objects.instance [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:07 compute-0 podman[262746]: 2026-02-25 12:19:07.777071274 +0000 UTC m=+0.061696343 container create 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.794 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <uuid>d9b67bce-8a7c-4f49-9cab-3e20377ca630</uuid>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <name>instance-00000014</name>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-654045690</nova:name>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:06</nova:creationTime>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <nova:port uuid="24e2d6b3-f0d8-4603-8c1e-da65e164050d">
Feb 25 12:19:07 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="serial">d9b67bce-8a7c-4f49-9cab-3e20377ca630</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="uuid">d9b67bce-8a7c-4f49-9cab-3e20377ca630</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk">
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config">
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:4b:b0:82"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <target dev="tap24e2d6b3-f0"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/console.log" append="off"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Preparing to wait for external event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.796 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.796 244018 DEBUG nova.virt.libvirt.vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.797 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.797 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG os_vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.799 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.802 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24e2d6b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.803 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24e2d6b3-f0, col_values=(('external_ids', {'iface-id': '24e2d6b3-f0d8-4603-8c1e-da65e164050d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:b0:82', 'vm-uuid': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:07 compute-0 NetworkManager[49836]: <info>  [1772021947.8061] manager: (tap24e2d6b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.814 244018 INFO os_vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0')
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.815 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:07 compute-0 systemd[1]: Started libpod-conmon-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope.
Feb 25 12:19:07 compute-0 podman[262746]: 2026-02-25 12:19:07.749913742 +0000 UTC m=+0.034538821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:19:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.861 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.861 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.862 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:4b:b0:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.862 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Using config drive
Feb 25 12:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:07 compute-0 nova_compute[244014]: 2026-02-25 12:19:07.892 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:07 compute-0 podman[262746]: 2026-02-25 12:19:07.897321869 +0000 UTC m=+0.181946978 container init 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:19:07 compute-0 podman[262746]: 2026-02-25 12:19:07.909632795 +0000 UTC m=+0.194257864 container start 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:19:07 compute-0 podman[262746]: 2026-02-25 12:19:07.914023068 +0000 UTC m=+0.198648137 container attach 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:19:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/84937775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:08 compute-0 lvm[262864]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:19:08 compute-0 lvm[262863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:19:08 compute-0 lvm[262863]: VG ceph_vg0 finished
Feb 25 12:19:08 compute-0 lvm[262864]: VG ceph_vg1 finished
Feb 25 12:19:08 compute-0 lvm[262866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:19:08 compute-0 lvm[262866]: VG ceph_vg2 finished
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.571 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating config drive at /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.578 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmply5zrg8v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:08 compute-0 focused_heisenberg[262767]: {}
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.626 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updated VIF entry in instance network info cache for port 24e2d6b3-f0d8-4603-8c1e-da65e164050d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.628 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.644 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:08 compute-0 systemd[1]: libpod-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Deactivated successfully.
Feb 25 12:19:08 compute-0 systemd[1]: libpod-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Consumed 1.101s CPU time.
Feb 25 12:19:08 compute-0 podman[262746]: 2026-02-25 12:19:08.649477881 +0000 UTC m=+0.934102950 container died 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7-merged.mount: Deactivated successfully.
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.704 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmply5zrg8v" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:08 compute-0 podman[262746]: 2026-02-25 12:19:08.709345471 +0000 UTC m=+0.993970510 container remove 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:19:08 compute-0 systemd[1]: libpod-conmon-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Deactivated successfully.
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.731 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.735 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:08 compute-0 sudo[262629]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:19:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:19:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:08 compute-0 sudo[262902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:19:08 compute-0 sudo[262902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:19:08 compute-0 sudo[262902]: pam_unix(sudo:session): session closed for user root
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.892 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.893 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deleting local config drive /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config because it was imported into RBD.
Feb 25 12:19:08 compute-0 kernel: tap24e2d6b3-f0: entered promiscuous mode
Feb 25 12:19:08 compute-0 NetworkManager[49836]: <info>  [1772021948.9491] manager: (tap24e2d6b3-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Feb 25 12:19:08 compute-0 systemd-udevd[262862]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:08 compute-0 ovn_controller[147040]: 2026-02-25T12:19:08Z|00071|binding|INFO|Claiming lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d for this chassis.
Feb 25 12:19:08 compute-0 ovn_controller[147040]: 2026-02-25T12:19:08Z|00072|binding|INFO|24e2d6b3-f0d8-4603-8c1e-da65e164050d: Claiming fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.958 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:b0:82 10.100.0.9'], port_security=['fa:16:3e:4b:b0:82 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=24e2d6b3-f0d8-4603-8c1e-da65e164050d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.961 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 24e2d6b3-f0d8-4603-8c1e-da65e164050d in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:19:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:08 compute-0 ovn_controller[147040]: 2026-02-25T12:19:08Z|00073|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d ovn-installed in OVS
Feb 25 12:19:08 compute-0 ovn_controller[147040]: 2026-02-25T12:19:08Z|00074|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d up in Southbound
Feb 25 12:19:08 compute-0 NetworkManager[49836]: <info>  [1772021948.9680] device (tap24e2d6b3-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:08 compute-0 NetworkManager[49836]: <info>  [1772021948.9693] device (tap24e2d6b3-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:08 compute-0 nova_compute[244014]: 2026-02-25 12:19:08.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3313897-0338-48de-b7e4-89b45c95a0a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:08 compute-0 systemd-machined[210048]: New machine qemu-22-instance-00000014.
Feb 25 12:19:09 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[be8f7056-403d-48a8-8ca6-dc11dfe30e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.016 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f21be837-7fea-43ab-b2cb-dc3048171929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.045 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5097fc1c-f7c3-4236-9fda-75bc561eb75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c966b610-36ad-4dc1-a242-958188bacab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262968, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee3282-32e6-46b3-ac96-c7bde7185abe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262969, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262969, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.086 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.090 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.090 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.091 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.091 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:09 compute-0 ceph-mon[76335]: pgmap v1012: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.2 MiB/s wr, 300 op/s
Feb 25 12:19:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.484 244018 DEBUG nova.compute.manager [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.485 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.485 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.486 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.486 244018 DEBUG nova.compute.manager [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Processing event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.845 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8452568, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.846 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Started (Lifecycle Event)
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.849 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.852 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.856 244018 INFO nova.virt.libvirt.driver [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance spawned successfully.
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.857 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.914 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.917 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.977 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.978 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.979 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.980 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.980 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:09 compute-0 nova_compute[244014]: 2026-02-25 12:19:09.981 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.025 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8453603, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.025 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Paused (Lifecycle Event)
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.052 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.057 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8511143, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.057 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Resumed (Lifecycle Event)
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.089 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.093 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.104 244018 INFO nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 9.99 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.104 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.114 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.176 244018 INFO nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 11.13 seconds to build instance.
Feb 25 12:19:10 compute-0 nova_compute[244014]: 2026-02-25 12:19:10.252 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:11 compute-0 ceph-mon[76335]: pgmap v1013: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.562 244018 DEBUG nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:11 compute-0 nova_compute[244014]: 2026-02-25 12:19:11.562 244018 WARNING nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state active and task_state None.
Feb 25 12:19:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 285 op/s
Feb 25 12:19:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:19:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4856 writes, 21K keys, 4856 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4856 writes, 4856 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1436 writes, 6465 keys, 1436 commit groups, 1.0 writes per commit group, ingest: 9.06 MB, 0.02 MB/s
                                           Interval WAL: 1436 writes, 1436 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.5      0.31              0.07        12    0.026       0      0       0.0       0.0
                                             L6      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    111.5     91.4      0.84              0.22        11    0.077     48K   5757       0.0       0.0
                                            Sum      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     81.6     87.6      1.15              0.28        23    0.050     48K   5757       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     75.2     76.1      0.59              0.12        10    0.059     23K   2565       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    111.5     91.4      0.84              0.22        11    0.077     48K   5757       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.5      0.31              0.07        11    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 9.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000133 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(561,8.91 MB,2.9319%) FilterBlock(24,141.55 KB,0.0454702%) IndexBlock(24,260.44 KB,0.0836623%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 12:19:12 compute-0 nova_compute[244014]: 2026-02-25 12:19:12.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:13 compute-0 ovn_controller[147040]: 2026-02-25T12:19:13Z|00075|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 12:19:13 compute-0 nova_compute[244014]: 2026-02-25 12:19:13.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:13 compute-0 ceph-mon[76335]: pgmap v1014: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 285 op/s
Feb 25 12:19:13 compute-0 nova_compute[244014]: 2026-02-25 12:19:13.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 310 op/s
Feb 25 12:19:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:15 compute-0 ceph-mon[76335]: pgmap v1015: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 310 op/s
Feb 25 12:19:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.065 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.066 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.165 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.271 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.272 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.281 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.281 244018 INFO nova.compute.claims [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:19:16 compute-0 nova_compute[244014]: 2026-02-25 12:19:16.485 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742440866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.035 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.040 244018 DEBUG nova.compute.provider_tree [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.042 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021942.0402136, 606b209d-b916-4ac7-87c7-887c274f747f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.042 244018 INFO nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Stopped (Lifecycle Event)
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.065 244018 DEBUG nova.scheduler.client.report [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.069 244018 DEBUG nova.compute.manager [None req-8f5fb875-9083-416d-b952-ba207c00be5a - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.093 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.094 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.165 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.166 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.195 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.213 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.213 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.230 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.236 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.337 244018 DEBUG nova.policy [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.341 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.341 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.350 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.351 244018 INFO nova.compute.claims [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.356 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.358 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.359 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating image(s)
Feb 25 12:19:17 compute-0 ceph-mon[76335]: pgmap v1016: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Feb 25 12:19:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1742440866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.397 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.429 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.463 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.468 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.545 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.546 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.547 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.547 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.568 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.571 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.658 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.888 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:17 compute-0 nova_compute[244014]: 2026-02-25 12:19:17.965 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.069 244018 DEBUG nova.objects.instance [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.098 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.099 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Ensure instance console log exists: /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.099 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.100 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.101 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532858643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.224 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.229 244018 DEBUG nova.compute.provider_tree [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.251 244018 DEBUG nova.scheduler.client.report [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.279 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.280 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.362 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.363 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:19:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3532858643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.462 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.510 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.692 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.694 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.694 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating image(s)
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.748 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:18 compute-0 podman[263224]: 2026-02-25 12:19:18.753982435 +0000 UTC m=+0.093681521 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.785 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:18 compute-0 podman[263225]: 2026-02-25 12:19:18.789863972 +0000 UTC m=+0.129605279 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.810 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.814 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.834 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Successfully created port: 55015950-cf1b-4183-802f-22f661123534 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.879 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.880 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.880 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.881 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.909 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.913 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:18 compute-0 nova_compute[244014]: 2026-02-25 12:19:18.935 244018 DEBUG nova.policy [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9aa84b2700234a5e9dcba1fc0bbc4cea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.189 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.254 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] resizing rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.343 244018 DEBUG nova.objects.instance [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'migration_context' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:19 compute-0 ceph-mon[76335]: pgmap v1017: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Feb 25 12:19:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 74 op/s
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.599 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.600 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Ensure instance console log exists: /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.600 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.786 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Successfully updated port: 55015950-cf1b-4183-802f-22f661123534 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:19 compute-0 nova_compute[244014]: 2026-02-25 12:19:19.954 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:20 compute-0 nova_compute[244014]: 2026-02-25 12:19:20.025 244018 DEBUG nova.compute.manager [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-changed-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:20 compute-0 nova_compute[244014]: 2026-02-25 12:19:20.026 244018 DEBUG nova.compute.manager [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Refreshing instance network info cache due to event network-changed-55015950-cf1b-4183-802f-22f661123534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:20 compute-0 nova_compute[244014]: 2026-02-25 12:19:20.026 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:20 compute-0 nova_compute[244014]: 2026-02-25 12:19:20.172 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Successfully created port: b2336583-1aaa-4789-8d4f-a3a14997891d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:21 compute-0 ceph-mon[76335]: pgmap v1018: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 74 op/s
Feb 25 12:19:21 compute-0 ovn_controller[147040]: 2026-02-25T12:19:21Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 12:19:21 compute-0 ovn_controller[147040]: 2026-02-25T12:19:21Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 12:19:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 404 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 108 op/s
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.780 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.803 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.804 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance network_info: |[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.805 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.806 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Refreshing network info cache for port 55015950-cf1b-4183-802f-22f661123534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.811 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start _get_guest_xml network_info=[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.817 244018 WARNING nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.824 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.825 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.828 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.828 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.829 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.830 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.830 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.831 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.831 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.833 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.833 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.834 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.834 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:21 compute-0 nova_compute[244014]: 2026-02-25 12:19:21.839 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.305 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.306 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.314 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Successfully updated port: b2336583-1aaa-4789-8d4f-a3a14997891d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381563903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.401 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1381563903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.432 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.437 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.457 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.463 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.464 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.464 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.467 244018 DEBUG nova.compute.manager [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.468 244018 DEBUG nova.compute.manager [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.468 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.637 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.638 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.646 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.646 244018 INFO nova.compute.claims [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.767 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.930 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589651980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.960 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.963 244018 DEBUG nova.virt.libvirt.vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:17Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.964 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.964 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.966 244018 DEBUG nova.objects.instance [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.984 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <uuid>8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</uuid>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <name>instance-00000015</name>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-649798416</nova:name>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:21</nova:creationTime>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <nova:port uuid="55015950-cf1b-4183-802f-22f661123534">
Feb 25 12:19:22 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="serial">8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="uuid">8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk">
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config">
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:22 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c6:da:42"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <target dev="tap55015950-cf"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/console.log" append="off"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:22 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:22 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:22 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:22 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:22 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.987 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Preparing to wait for external event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.988 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.988 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.989 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.991 244018 DEBUG nova.virt.libvirt.vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:17Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.991 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.993 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.994 244018 DEBUG os_vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:22 compute-0 nova_compute[244014]: 2026-02-25 12:19:22.997 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.002 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55015950-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.003 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55015950-cf, col_values=(('external_ids', {'iface-id': '55015950-cf1b-4183-802f-22f661123534', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:da:42', 'vm-uuid': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:23 compute-0 NetworkManager[49836]: <info>  [1772021963.0070] manager: (tap55015950-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.013 244018 INFO os_vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf')
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.064 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:c6:da:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Using config drive
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.081 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:23 compute-0 ceph-mon[76335]: pgmap v1019: 305 pgs: 305 active+clean; 404 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 108 op/s
Feb 25 12:19:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/589651980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2282474270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.478 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.490 244018 DEBUG nova.compute.provider_tree [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.520 244018 DEBUG nova.scheduler.client.report [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.561 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.562 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:19:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.673 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.674 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.735 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.771 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.822 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updated VIF entry in instance network info cache for port 55015950-cf1b-4183-802f-22f661123534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.822 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.848 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating config drive at /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.854 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6sxladem execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.876 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.924 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.927 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.927 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating image(s)
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.962 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:23 compute-0 nova_compute[244014]: 2026-02-25 12:19:23.994 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.021 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.026 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.054 244018 DEBUG nova.policy [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8349db6a5fcb4d8596a69d83481207b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61fb5315043b44e588f1a84d85f1547b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.057 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6sxladem" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.092 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.098 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.125 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.130 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.131 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.132 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.132 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.164 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.171 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d44c3dbc-e4bc-4235-bd88-b39616473248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.196 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.197 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance network_info: |[{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.199 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.200 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.208 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start _get_guest_xml network_info=[{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.217 244018 WARNING nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.239 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.241 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.246 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.246 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.247 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.248 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.250 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.250 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.251 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.251 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.262 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.285 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.286 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deleting local config drive /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config because it was imported into RBD.
Feb 25 12:19:24 compute-0 NetworkManager[49836]: <info>  [1772021964.3462] manager: (tap55015950-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Feb 25 12:19:24 compute-0 kernel: tap55015950-cf: entered promiscuous mode
Feb 25 12:19:24 compute-0 ovn_controller[147040]: 2026-02-25T12:19:24Z|00076|binding|INFO|Claiming lport 55015950-cf1b-4183-802f-22f661123534 for this chassis.
Feb 25 12:19:24 compute-0 ovn_controller[147040]: 2026-02-25T12:19:24Z|00077|binding|INFO|55015950-cf1b-4183-802f-22f661123534: Claiming fa:16:3e:c6:da:42 10.100.0.12
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:24 compute-0 ovn_controller[147040]: 2026-02-25T12:19:24Z|00078|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 ovn-installed in OVS
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:24 compute-0 ovn_controller[147040]: 2026-02-25T12:19:24Z|00079|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 up in Southbound
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.377 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:da:42 10.100.0.12'], port_security=['fa:16:3e:c6:da:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=55015950-cf1b-4183-802f-22f661123534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.378 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 55015950-cf1b-4183-802f-22f661123534 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.380 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:24 compute-0 systemd-machined[210048]: New machine qemu-23-instance-00000015.
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1dff0414-11ea-4d75-a684-61976e55572c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Feb 25 12:19:24 compute-0 systemd-udevd[263701]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:24 compute-0 NetworkManager[49836]: <info>  [1772021964.4127] device (tap55015950-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:24 compute-0 NetworkManager[49836]: <info>  [1772021964.4135] device (tap55015950-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.411 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a973e4-130b-4dad-9c2c-894980a5354f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[20911424-ea40-44c6-ac44-2a1eb0d3beb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.435 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c79c0daf-0851-4789-be35-9f82c843c03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4699a15b-0688-4a32-8fa4-3090f7669104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263720, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97837102-051b-4743-abb4-56e7555ccb43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263722, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263722, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2282474270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.470 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.471 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.471 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.489 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d44c3dbc-e4bc-4235-bd88-b39616473248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.547 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] resizing rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.624 244018 DEBUG nova.objects.instance [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'migration_context' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.638 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.638 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Ensure instance console log exists: /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.639 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.639 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.640 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760156500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.838 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.872 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.879 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.900 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.8879974, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.900 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Started (Lifecycle Event)
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.909 244018 DEBUG nova.compute.manager [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.910 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.910 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.911 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.911 244018 DEBUG nova.compute.manager [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Processing event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.912 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.919 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.923 244018 INFO nova.virt.libvirt.driver [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance spawned successfully.
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.924 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.930 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.934 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.952 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.952 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.953 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.954 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.954 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.955 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.8884034, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Paused (Lifecycle Event)
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.991 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.996 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.9176836, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:24 compute-0 nova_compute[244014]: 2026-02-25 12:19:24.996 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Resumed (Lifecycle Event)
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.020 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.027 244018 INFO nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 7.67 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.027 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.030 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.062 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.106 244018 INFO nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 8.89 seconds to build instance.
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.127 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
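The lock trace above shows oslo.concurrency serializing the whole build under a lock named after the instance UUID. A minimal sketch of that pattern, assuming only that oslo.concurrency is installed; the function name below is illustrative, not nova's:

    from oslo_concurrency import lockutils

    def locked_build(instance_uuid):
        # Same shape as _locked_do_build_and_run_instance in the log:
        # all work for one instance runs under a UUID-named lock, so a
        # concurrent delete or reboot request queues behind the build.
        with lockutils.lock(instance_uuid):
            ...  # spawn on the hypervisor, plug VIFs, etc.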
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.306 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Successfully created port: 47bb858f-172e-40b5-8ac0-02c531c303c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208460432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.437 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.439 244018 DEBUG nova.virt.libvirt.vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:18Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.439 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.440 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.442 244018 DEBUG nova.objects.instance [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.470 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <uuid>de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</uuid>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <name>instance-00000016</name>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1190496141</nova:name>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:24</nova:creationTime>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:user uuid="9aa84b2700234a5e9dcba1fc0bbc4cea">tempest-FloatingIPsAssociationTestJSON-1904923370-project-member</nova:user>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:project uuid="67d0ed57ac554e4390e928b3c8f9b5f6">tempest-FloatingIPsAssociationTestJSON-1904923370</nova:project>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <nova:port uuid="b2336583-1aaa-4789-8d4f-a3a14997891d">
Feb 25 12:19:25 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="serial">de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="uuid">de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk">
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config">
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:23:d7:29"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <target dev="tapb2336583-1a"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/console.log" append="off"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:25 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:25 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:25 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:25 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:25 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
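The domain XML above is what nova hands to libvirt for instance-00000016. Once the guest exists, the same definition can be read back from libvirtd; a minimal sketch using the libvirt-python binding, assuming it is available on the host:

    import libvirt

    # Connect to the local system libvirtd, the same daemon nova_compute uses.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75')
    # XMLDesc(0) returns the live definition, comparable to the
    # "End _get_guest_xml" dump logged above.
    print(dom.XMLDesc(0))
    conn.close()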
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.472 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Preparing to wait for external event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.472 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.473 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:25 compute-0 ceph-mon[76335]: pgmap v1020: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 12:19:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/760156500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4208460432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.473 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.474 244018 DEBUG nova.virt.libvirt.vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:18Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.475 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.476 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.476 244018 DEBUG os_vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.478 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.479 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2336583-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.483 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2336583-1a, col_values=(('external_ids', {'iface-id': 'b2336583-1aaa-4789-8d4f-a3a14997891d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:d7:29', 'vm-uuid': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:25 compute-0 NetworkManager[49836]: <info>  [1772021965.4864] manager: (tapb2336583-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.493 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.495 244018 INFO os_vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a')
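The AddPortCommand/DbSetCommand transaction above is what os-vif ran through ovsdbapp. An equivalent one-shot via ovs-vsctl, driven from Python as an illustration only (values copied from the log; this is a sketch, not what nova itself executes):

    import subprocess

    # Mirror the logged ovsdbapp transaction: add the tap port to br-int
    # and set the external_ids OVN matches to bind the logical port.
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapb2336583-1a',
        '--', 'set', 'Interface', 'tapb2336583-1a',
        'external_ids:iface-id=b2336583-1aaa-4789-8d4f-a3a14997891d',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:23:d7:29',
        'external_ids:vm-uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75',
    ], check=True)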
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.562 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.564 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.564 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No VIF found with MAC fa:16:3e:23:d7:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.566 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Using config drive
Feb 25 12:19:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 5.6 MiB/s wr, 106 op/s
Feb 25 12:19:25 compute-0 nova_compute[244014]: 2026-02-25 12:19:25.599 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.308 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating config drive at /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.317 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa796econ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.451 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa796econ" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.490 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.495 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.651 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.652 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deleting local config drive /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config because it was imported into RBD.
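The config-drive import above is a plain rbd CLI call driven through oslo.concurrency. A minimal sketch of the same call, assuming the client.openstack keyring and /etc/ceph/ceph.conf are in place (command and paths copied from the log):

    from oslo_concurrency import processutils

    # processutils.execute returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit; a success produces the
    # kind of '"rbd import ..." returned: 0' line logged above.
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config',
        'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')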
Feb 25 12:19:26 compute-0 kernel: tapb2336583-1a: entered promiscuous mode
Feb 25 12:19:26 compute-0 NetworkManager[49836]: <info>  [1772021966.7137] manager: (tapb2336583-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Feb 25 12:19:26 compute-0 ovn_controller[147040]: 2026-02-25T12:19:26Z|00080|binding|INFO|Claiming lport b2336583-1aaa-4789-8d4f-a3a14997891d for this chassis.
Feb 25 12:19:26 compute-0 ovn_controller[147040]: 2026-02-25T12:19:26Z|00081|binding|INFO|b2336583-1aaa-4789-8d4f-a3a14997891d: Claiming fa:16:3e:23:d7:29 10.100.0.3
Feb 25 12:19:26 compute-0 NetworkManager[49836]: <info>  [1772021966.7252] device (tapb2336583-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:26 compute-0 NetworkManager[49836]: <info>  [1772021966.7274] device (tapb2336583-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.736 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:d7:29 10.100.0.3'], port_security=['fa:16:3e:23:d7:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b2336583-1aaa-4789-8d4f-a3a14997891d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.738 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b2336583-1aaa-4789-8d4f-a3a14997891d in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 bound to our chassis
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.739 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 12:19:26 compute-0 systemd-machined[210048]: New machine qemu-24-instance-00000016.
Feb 25 12:19:26 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bef6c386-93e0-4dde-bba2-31774ef071ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.758 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41c706f5-61 in ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
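The metadata agent is creating a veth pair with one end inside the ovnmeta- namespace; the root-namespace peer is plugged into br-int a few records below, and the in-namespace end serves the 169.254.169.254 metadata traffic. A rough iproute2 equivalent of those privsep calls, as an illustration only (the agent itself goes through pyroute2 via neutron.privileged):

    import subprocess

    NS = 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571'

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    # tap41c706f5-60 stays in the root namespace (later added to br-int);
    # the peer tap41c706f5-61 moves into the metadata namespace.
    sh('ip', 'netns', 'add', NS)
    sh('ip', 'link', 'add', 'tap41c706f5-60', 'type', 'veth',
       'peer', 'name', 'tap41c706f5-61')
    sh('ip', 'link', 'set', 'tap41c706f5-61', 'netns', NS)
    sh('ip', '-n', NS, 'link', 'set', 'tap41c706f5-61', 'up')
    sh('ip', 'link', 'set', 'tap41c706f5-60', 'up')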
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.760 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41c706f5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[935e1c45-c9ef-4618-908c-f6c297977ff1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25e0f48c-1ceb-412a-b7fb-a8f70f7c90d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.778 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[bb071dba-db6b-4aeb-9409-933fdc2bda50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:26 compute-0 ovn_controller[147040]: 2026-02-25T12:19:26Z|00082|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d ovn-installed in OVS
Feb 25 12:19:26 compute-0 ovn_controller[147040]: 2026-02-25T12:19:26Z|00083|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d up in Southbound
Feb 25 12:19:26 compute-0 nova_compute[244014]: 2026-02-25 12:19:26.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[707c8857-24d5-472f-8b25-7af923becec4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.836 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffdd31-1217-4683-9b4e-048c9cde19c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdf23e5-797f-4a17-a888-fcb2a5163018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 NetworkManager[49836]: <info>  [1772021966.8433] manager: (tap41c706f5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.882 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3f471aeb-beeb-40f4-a581-5bbf13d5b874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.885 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e345c27d-8eac-439a-8615-92ac85d00377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 NetworkManager[49836]: <info>  [1772021966.9085] device (tap41c706f5-60): carrier: link connected
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.912 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8ba215-168b-4e58-b8da-770bc9123440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72d98bbf-20b6-4acf-868c-aa512a76f556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263984, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd71f81-206d-451a-a1b1-4c17379eae76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:94dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393631, 'tstamp': 393631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263985, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b345e0f-1ffa-4520-9628-557af7910e83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263986, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.986 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9d58d9-5287-4fee-8124-0b305d9b3e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.043 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78f27b3-8a90-4579-9119-fcdf3107e958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:27 compute-0 NetworkManager[49836]: <info>  [1772021967.0485] manager: (tap41c706f5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb 25 12:19:27 compute-0 kernel: tap41c706f5-60: entered promiscuous mode
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.052 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:27 compute-0 ovn_controller[147040]: 2026-02-25T12:19:27Z|00084|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.071 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e2bb72-37ba-440a-a991-dc16ea4553ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.074 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:19:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.077 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'env', 'PROCESS_TAG=haproxy-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41c706f5-6f0b-47a8-91a4-16f87e2a0571.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.084 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.085 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.158 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:27 compute-0 ceph-mon[76335]: pgmap v1021: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 5.6 MiB/s wr, 106 op/s
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.483 244018 DEBUG nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.484 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.485 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.486 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.489 244018 DEBUG nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.489 244018 WARNING nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received unexpected event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 for instance with vm_state active and task_state None.
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.490 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021967.4883902, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.490 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Started (Lifecycle Event)
Feb 25 12:19:27 compute-0 podman[264059]: 2026-02-25 12:19:27.585513459 +0000 UTC m=+0.089882004 container create 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:19:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 222 op/s
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.613 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.618 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021967.4885652, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Paused (Lifecycle Event)
Feb 25 12:19:27 compute-0 systemd[1]: Started libpod-conmon-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope.
Feb 25 12:19:27 compute-0 podman[264059]: 2026-02-25 12:19:27.547358488 +0000 UTC m=+0.051727083 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:19:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0335c2e7dd04e96857217313fc187a38e2ae19855220b0923b3e59e3330f4337/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:27 compute-0 podman[264059]: 2026-02-25 12:19:27.674815366 +0000 UTC m=+0.179183971 container init 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:19:27 compute-0 podman[264059]: 2026-02-25 12:19:27.678813558 +0000 UTC m=+0.183182103 container start 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:19:27 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : New worker (264080) forked
Feb 25 12:19:27 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : Loading success.
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG nova.network.neutron [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.724 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.727 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:27 compute-0 nova_compute[244014]: 2026-02-25 12:19:27.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.088 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Successfully updated port: 47bb858f-172e-40b5-8ac0-02c531c303c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquired lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.296 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:28 compute-0 nova_compute[244014]: 2026-02-25 12:19:28.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:29 compute-0 ceph-mon[76335]: pgmap v1022: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 222 op/s
Feb 25 12:19:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.722 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.723 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Processing event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.725 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.725 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.726 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.726 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.727 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.727 244018 WARNING nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state building and task_state spawning.
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.728 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-changed-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.728 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Refreshing instance network info cache due to event network-changed-47bb858f-172e-40b5-8ac0-02c531c303c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.729 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.730 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.735 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021969.7343109, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.736 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Resumed (Lifecycle Event)
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.740 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.745 244018 INFO nova.virt.libvirt.driver [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance spawned successfully.
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.746 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.967 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.982 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.986 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.986 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:29 compute-0 nova_compute[244014]: 2026-02-25 12:19:29.988 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.013 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.022 244018 DEBUG nova.network.neutron [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.030 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.138 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Releasing lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.138 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance network_info: |[{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.139 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.139 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Refreshing network info cache for port 47bb858f-172e-40b5-8ac0-02c531c303c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.142 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start _get_guest_xml network_info=[{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG nova.compute.manager [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG nova.compute.manager [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] network_info to inject: |[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.149 244018 INFO nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 11.46 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.150 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.158 244018 WARNING nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.162 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.163 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.165 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.171 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.252 244018 INFO nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 12.94 seconds to build instance.
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.297 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051982892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:19:30
Feb 25 12:19:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:19:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:19:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'vms']
Feb 25 12:19:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.893 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
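Annotation: the 0.722s round trip above is nova shelling out for the Ceph monitor map; the monitor addresses it returns are what later appear as the RBD <host> entries in the domain XML. The same probe as a plain subprocess call (argv copied from the logged command):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    mon_map = json.loads(result.stdout)
    # The "mons" list carries the monitor names/addresses nova feeds to libvirt.
    print([m.get("name") for m in mon_map.get("mons", [])])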
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.933 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
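Annotation: the "does not exist" line is rbd_utils probing for the instance's config-drive image before deciding to build one. A sketch of an equivalent probe with the Ceph python bindings (rados/rbd); pool, image, client name and conf path are copied from the log, the rest is assumed:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                with rbd.Image(ioctx, "d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config"):
                    exists = True
            except rbd.ImageNotFound:
                exists = False
    print(exists)  # False at this point in the log; the image is imported later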
Feb 25 12:19:30 compute-0 nova_compute[244014]: 2026-02-25 12:19:30.939 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.456 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updated VIF entry in instance network info cache for port 47bb858f-172e-40b5-8ac0-02c531c303c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.457 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
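Annotation: the instance_info_cache payload above is plain JSON; everything the later VIF plug needs (port id, MAC, fixed IP, bridge) is in it. A trimmed parse keeping only the fields used here, with the payload abbreviated:

    import json

    network_info = json.loads("""[{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3",
      "address": "fa:16:3e:d6:24:39",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.9", "type": "fixed"}]}]}}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)  # port, MAC, fixed IPs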
Feb 25 12:19:31 compute-0 ceph-mon[76335]: pgmap v1023: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Feb 25 12:19:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1051982892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308674807' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.534 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.539 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.540 244018 DEBUG nova.virt.libvirt.vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:23Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.541 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.542 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.543 244018 DEBUG nova.objects.instance [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'pci_devices' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.574 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <uuid>d44c3dbc-e4bc-4235-bd88-b39616473248</uuid>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <name>instance-00000017</name>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1820272093</nova:name>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:30</nova:creationTime>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:user uuid="8349db6a5fcb4d8596a69d83481207b3">tempest-ImagesOneServerTestJSON-921017522-project-member</nova:user>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:project uuid="61fb5315043b44e588f1a84d85f1547b">tempest-ImagesOneServerTestJSON-921017522</nova:project>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <nova:port uuid="47bb858f-172e-40b5-8ac0-02c531c303c3">
Feb 25 12:19:31 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="serial">d44c3dbc-e4bc-4235-bd88-b39616473248</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="uuid">d44c3dbc-e4bc-4235-bd88-b39616473248</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk">
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config">
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:31 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:d6:24:39"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <target dev="tap47bb858f-17"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/console.log" append="off"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:31 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:31 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:31 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:31 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:31 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
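Annotation: the block above is the complete domain XML the libvirt driver is about to hand to libvirtd. A hedged standalone replay with libvirt-python (a transient createXML here for brevity; nova's real spawn path defines the domain and then launches it, and the filename below is an assumed local copy of the dump):

    import libvirt

    with open("instance-00000017.xml") as f:   # hypothetical saved copy of the XML above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.createXML(xml, 0)           # boots the guest immediately
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()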
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Preparing to wait for external event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
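Annotation: the prepare/acquire/release lines above register a waiter for network-vif-plugged-47bb858f... before the VIF is plugged, so the Neutron callback cannot arrive ahead of the wait. The shape of that prepare-then-wait pattern with plain threading (Nova's implementation is eventlet-based; this is only the idea):

    import threading

    events = {}
    events_lock = threading.Lock()

    def prepare_for_event(tag):
        # Register interest *before* triggering the action that causes the event.
        with events_lock:
            events[tag] = threading.Event()
        return events[tag]

    def deliver_event(tag):
        # Called from the path that receives the external notification.
        with events_lock:
            ev = events.pop(tag, None)
        if ev:
            ev.set()

    waiter = prepare_for_event("network-vif-plugged-47bb858f")
    deliver_event("network-vif-plugged-47bb858f")  # e.g. the Neutron callback fires
    print(waiter.wait(timeout=300))                # True: event arrived in time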
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.576 244018 DEBUG nova.virt.libvirt.vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:23Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.577 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG os_vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.579 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.579 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.583 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47bb858f-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.584 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47bb858f-17, col_values=(('external_ids', {'iface-id': '47bb858f-172e-40b5-8ac0-02c531c303c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:24:39', 'vm-uuid': 'd44c3dbc-e4bc-4235-bd88-b39616473248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:31 compute-0 NetworkManager[49836]: <info>  [1772021971.5861] manager: (tap47bb858f-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.597 244018 INFO os_vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17')
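Annotation: the AddBridgeCommand/AddPortCommand/DbSetCommand lines are two small ovsdbapp transactions against the local OVSDB: ensure br-int exists (a no-op here), add tap47bb858f-17, and set the external_ids that let OVN map the interface to its logical port. A hedged sketch with ovsdbapp, collapsed into one transaction; the socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap47bb858f-17", may_exist=True))
        txn.add(api.db_set("Interface", "tap47bb858f-17",
                           ("external_ids",
                            {"iface-id": "47bb858f-172e-40b5-8ac0-02c531c303c3",
                             "attached-mac": "fa:16:3e:d6:24:39"})))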
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 MiB/s wr, 261 op/s
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.683 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.684 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.684 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No VIF found with MAC fa:16:3e:d6:24:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.685 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Using config drive
Feb 25 12:19:31 compute-0 nova_compute[244014]: 2026-02-25 12:19:31.711 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:19:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.351 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating config drive at /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.355 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpod7expt3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.483 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpod7expt3" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
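Annotation: note the logged argv is space-joined for display, so "-publisher OpenStack Compute 27.5.2-..." is really a single publisher argument. The same ISO build as an explicit argv; the empty staging directory stands in for the metadata tree nova writes (openstack/latest/...):

    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as staging:
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", "disk.config",
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
             "-quiet", "-J", "-r", "-V", "config-2", staging],
            check=True)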
Feb 25 12:19:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2308674807' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.529 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.534 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.687 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.688 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deleting local config drive /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config because it was imported into RBD.
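Annotation: the config drive is built locally, imported into the vms pool, then the local copy is deleted because the guest will read it over RBD (it is the cdrom <disk> in the domain XML above). The two steps as plain calls, mirroring the logged command:

    import os
    import subprocess

    path = "/var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config"
    subprocess.run(
        ["rbd", "import", "--pool", "vms", path,
         "d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(path)  # once imported into RBD the local copy is redundant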
Feb 25 12:19:32 compute-0 kernel: tap47bb858f-17: entered promiscuous mode
Feb 25 12:19:32 compute-0 NetworkManager[49836]: <info>  [1772021972.7520] manager: (tap47bb858f-17): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:32 compute-0 ovn_controller[147040]: 2026-02-25T12:19:32Z|00085|binding|INFO|Claiming lport 47bb858f-172e-40b5-8ac0-02c531c303c3 for this chassis.
Feb 25 12:19:32 compute-0 ovn_controller[147040]: 2026-02-25T12:19:32Z|00086|binding|INFO|47bb858f-172e-40b5-8ac0-02c531c303c3: Claiming fa:16:3e:d6:24:39 10.100.0.9
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:32 compute-0 systemd-udevd[264223]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.778 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.780 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 bound to our chassis
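Annotation: the "Matched UPDATE: PortBindingUpdatedEvent" line is the metadata agent's OVSDB row watcher firing on the Southbound Port_Binding table when the row gains a chassis. The general shape of such a watcher in ovsdbapp (a sketch, not Neutron's class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch update events on the Southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Fire only when the row gains a chassis, i.e. the port was just
            # bound; unchanged columns are absent from `old`, hence getattr.
            return not getattr(old, "chassis", None) and bool(row.chassis)

        def run(self, event, row, old):
            print("port bound to chassis:", row.logical_port)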
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.786 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1
Feb 25 12:19:32 compute-0 NetworkManager[49836]: <info>  [1772021972.7939] device (tap47bb858f-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:32 compute-0 NetworkManager[49836]: <info>  [1772021972.7949] device (tap47bb858f-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.803 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e77629c2-000c-4f22-9107-843c09a61dfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.805 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55e574c2-41 in ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
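Annotation: provisioning the datapath means a network namespace (ovnmeta-<network-uuid>) plus a veth pair with one end inside it; the privsep replies that follow are those netlink operations. A hedged sketch of the same steps with pyroute2 (Neutron drives pyroute2 through privsep rather than directly; names are taken from the log):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1"
    netns.create(ns)  # the per-network metadata namespace
    ipr = IPRoute()
    # veth pair: the outer end stays in the root namespace and is later plugged
    # into br-int; the inner end (tap55e574c2-41) moves into the namespace.
    ipr.link("add", ifname="tap55e574c2-40", kind="veth", peer="tap55e574c2-41")
    idx = ipr.link_lookup(ifname="tap55e574c2-41")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)
    ipr.close()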
Feb 25 12:19:32 compute-0 systemd-machined[210048]: New machine qemu-25-instance-00000017.
Feb 25 12:19:32 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.809 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55e574c2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.809 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dab282d1-dbed-4f12-a400-ab00dd2e4505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.811 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[543c2c23-016c-4af5-a6d2-e32beb0270ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_controller[147040]: 2026-02-25T12:19:32Z|00087|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 ovn-installed in OVS
Feb 25 12:19:32 compute-0 ovn_controller[147040]: 2026-02-25T12:19:32Z|00088|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 up in Southbound
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.826 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8f843e82-c563-45f2-a967-5a8d565f526c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 nova_compute[244014]: 2026-02-25 12:19:32.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3bb933-cded-4c7f-aa28-475216c44d5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.939 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb9f72c-a29a-439f-8c41-cc9b064789d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 NetworkManager[49836]: <info>  [1772021972.9505] manager: (tap55e574c2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Feb 25 12:19:32 compute-0 systemd-udevd[264228]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afa8f6f0-1a2e-4a88-aa45-0c2339c5f2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.975 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ad53c7b1-2dbe-49c0-bacc-d1f4a55a1811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.979 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c4113a-108e-47c4-b45e-16d1f9c29eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:32 compute-0 NetworkManager[49836]: <info>  [1772021972.9979] device (tap55e574c2-40): carrier: link connected
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.000 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd31b21a-3012-442d-babf-e2fc83239d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.014 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3aea074d-b5a8-4a50-b45c-9a059f4480b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55e574c2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:96:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394240, 'reachable_time': 17034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264258, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68046c81-9d9a-44d8-ad67-3db6e97e1657]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:966e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394240, 'tstamp': 394240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264259, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.042 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b75ba1-1bc6-4cbc-acaf-157f3bca3466]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55e574c2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:96:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394240, 'reachable_time': 17034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264260, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6affe936-09a7-4b9a-8bfc-b3342344c967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.116 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44989e6a-8d07-490f-981d-d5384fc0ae48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55e574c2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.118 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55e574c2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:33 compute-0 NetworkManager[49836]: <info>  [1772021973.1204] manager: (tap55e574c2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 kernel: tap55e574c2-40: entered promiscuous mode
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.123 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55e574c2-40, col_values=(('external_ids', {'iface-id': 'eaac86b8-7606-47e6-83eb-e27fec0ae5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.128 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4171ce6-c041-4308-a833-6b48ce7fe06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.129 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:19:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.130 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'env', 'PROCESS_TAG=haproxy-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:19:33 compute-0 ovn_controller[147040]: 2026-02-25T12:19:33Z|00089|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.447 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.4467921, d44c3dbc-e4bc-4235-bd88-b39616473248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.448 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Started (Lifecycle Event)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.495 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.450587, d44c3dbc-e4bc-4235-bd88-b39616473248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.496 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Paused (Lifecycle Event)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.520 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.523 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:33 compute-0 podman[264331]: 2026-02-25 12:19:33.529488626 +0000 UTC m=+0.078690229 container create 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:19:33 compute-0 ceph-mon[76335]: pgmap v1024: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 MiB/s wr, 261 op/s
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:33 compute-0 podman[264331]: 2026-02-25 12:19:33.479068461 +0000 UTC m=+0.028270104 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:19:33 compute-0 systemd[1]: Started libpod-conmon-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope.
Feb 25 12:19:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Feb 25 12:19:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a7581b6a9b7832e1212a34f08cb65dbe5142b80457bc50869d2ba4120d093b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:33 compute-0 podman[264331]: 2026-02-25 12:19:33.638245679 +0000 UTC m=+0.187447282 container init 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:19:33 compute-0 podman[264331]: 2026-02-25 12:19:33.643105765 +0000 UTC m=+0.192307368 container start 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.674 244018 DEBUG nova.compute.manager [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.674 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG nova.compute.manager [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Processing event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:33 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : New worker (264352) forked
Feb 25 12:19:33 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : Loading success.
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.676 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.680 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.6794744, d44c3dbc-e4bc-4235-bd88-b39616473248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.681 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Resumed (Lifecycle Event)
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.692 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.700 244018 INFO nova.virt.libvirt.driver [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance spawned successfully.
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.701 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.723 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.730 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.735 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.736 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.736 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.737 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.738 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.738 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.777 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.823 244018 INFO nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 9.90 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.824 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.915 244018 INFO nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 11.30 seconds to build instance.
Feb 25 12:19:33 compute-0 nova_compute[244014]: 2026-02-25 12:19:33.939 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.118 244018 INFO nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Rebuilding instance
Feb 25 12:19:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.415 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.437 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.486 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_requests' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.502 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.523 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.567 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.588 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:19:34 compute-0 nova_compute[244014]: 2026-02-25 12:19:34.593 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.118 244018 DEBUG nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.159 244018 INFO nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] instance snapshotting
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.463 244018 INFO nova.virt.libvirt.driver [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Beginning live snapshot process
Feb 25 12:19:35 compute-0 ceph-mon[76335]: pgmap v1025: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Feb 25 12:19:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 188 op/s
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.639 244018 DEBUG nova.virt.libvirt.imagebackend [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.913 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(a278e77012d7446c92a8db2a544fcdfe) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.967 244018 DEBUG nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:35 compute-0 nova_compute[244014]: 2026-02-25 12:19:35.969 244018 WARNING nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received unexpected event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with vm_state active and task_state image_uploading.
Feb 25 12:19:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 25 12:19:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 25 12:19:36 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 25 12:19:36 compute-0 nova_compute[244014]: 2026-02-25 12:19:36.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:36 compute-0 ceph-mon[76335]: pgmap v1026: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 188 op/s
Feb 25 12:19:36 compute-0 ceph-mon[76335]: osdmap e134: 3 total, 3 up, 3 in
Feb 25 12:19:36 compute-0 nova_compute[244014]: 2026-02-25 12:19:36.639 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] cloning vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk@a278e77012d7446c92a8db2a544fcdfe to images/7dba2e0d-092c-46fd-a8a8-9e81515a6945 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:19:36 compute-0 nova_compute[244014]: 2026-02-25 12:19:36.821 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] flattening images/7dba2e0d-092c-46fd-a8a8-9e81515a6945 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:19:36 compute-0 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 12:19:36 compute-0 NetworkManager[49836]: <info>  [1772021976.9623] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:19:36 compute-0 ovn_controller[147040]: 2026-02-25T12:19:36Z|00090|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 12:19:36 compute-0 ovn_controller[147040]: 2026-02-25T12:19:36Z|00091|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 12:19:36 compute-0 ovn_controller[147040]: 2026-02-25T12:19:36Z|00092|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 12:19:36 compute-0 nova_compute[244014]: 2026-02-25 12:19:36.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:36 compute-0 nova_compute[244014]: 2026-02-25 12:19:36.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.982 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.983 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:19:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.984 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96aee681-cd98-4334-9f06-93852c396d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 12:19:37 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 13.023s CPU time.
Feb 25 12:19:37 compute-0 systemd-machined[210048]: Machine qemu-19-instance-00000011 terminated.
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.025 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ca8dcf-3ebb-4aa6-a126-1eb15e55da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b144f285-786a-4dfc-b661-8e67990d51f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.049 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[564da317-bd44-4b1d-9e97-6a0fdc70f6fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.061 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e59be8da-c8a7-47e6-8f2b-0816ac33e458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264478, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad0bef3-9a53-44e5-8805-ee39e3d121e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264479, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264479, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.074 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.161 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.162 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:37 compute-0 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 12:19:37 compute-0 systemd-udevd[264468]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00093|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00094|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:37 compute-0 NetworkManager[49836]: <info>  [1772021977.1850] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Feb 25 12:19:37 compute-0 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00095|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00096|if_status|INFO|Not setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down as sb is readonly
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00097|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.214 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.215 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49703212-0157-487f-8042-e3039fa446a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00098|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00099|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00100|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.244 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:37 compute-0 ovn_controller[147040]: 2026-02-25T12:19:37Z|00101|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.261 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.266 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7080701-d546-4609-bf50-a7b382628dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.270 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1b923b14-cff1-4df0-bb77-8689dffdaeeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.286 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] removing snapshot(a278e77012d7446c92a8db2a544fcdfe) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.291 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[395032de-2c7b-4339-917c-269b52875caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.304 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2685cb4f-05bc-40f0-8d7c-c1efb30e1c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264508, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.321 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0dd05a-892b-4c6a-9d14-93999fad038a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264509, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264509, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.322 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.326 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.326 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.327 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.327 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.328 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.329 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[934bdf32-2314-4cf3-95f0-2bfe7e31ef77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.359 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.360 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.366 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.367 244018 INFO nova.compute.claims [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.383 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b086929a-ea3f-486a-a0b6-94d8afda0727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.386 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[73cc53f7-14d7-4932-b2c8-b75c670f21a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.412 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[476e6f62-199d-4a8b-a31a-b85f01361a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85dec762-de0f-495e-b5ed-aae9e39c248e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264516, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb13667-afcb-4ee2-bb26-b41e08d34ea9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.558 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Feb 25 12:19:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 25 12:19:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 25 12:19:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.665 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(snap) on rbd image(7dba2e0d-092c-46fd-a8a8-9e81515a6945) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.702 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance shutdown successfully after 3 seconds.
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.713 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.721 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.722 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:33Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.723 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.724 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.725 244018 DEBUG os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.728 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:37 compute-0 nova_compute[244014]: 2026-02-25 12:19:37.733 244018 INFO os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.059 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.059 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.068 244018 DEBUG nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.068 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.069 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.070 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.071 244018 DEBUG nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.071 244018 WARNING nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state error and task_state rebuilding.
Feb 25 12:19:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945168416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.124 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.132 244018 DEBUG nova.compute.provider_tree [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.172 244018 DEBUG nova.scheduler.client.report [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.226 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.227 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.234 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.234 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.257 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.290 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.320 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.323 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.354 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.355 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.382 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.406 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.407 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.408 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.408 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.430 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.437 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.458 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.576 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.577 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.578 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating image(s)
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.618 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 25 12:19:38 compute-0 ceph-mon[76335]: pgmap v1028: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Feb 25 12:19:38 compute-0 ceph-mon[76335]: osdmap e135: 3 total, 3 up, 3 in
Feb 25 12:19:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2945168416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.656 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 25 12:19:38 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.719 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.723 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.749 244018 DEBUG nova.policy [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9aa84b2700234a5e9dcba1fc0bbc4cea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.753 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.814 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.815 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.816 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.816 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.847 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.851 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.880 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.997 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.998 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:38 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.999 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:38.999 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.000 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.003 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.009 244018 WARNING nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.018 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.019 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.023 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.056 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.182 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:39 compute-0 ovn_controller[147040]: 2026-02-25T12:19:39Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:da:42 10.100.0.12
Feb 25 12:19:39 compute-0 ovn_controller[147040]: 2026-02-25T12:19:39Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:da:42 10.100.0.12
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.283 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] resizing rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.432 244018 DEBUG nova.objects.instance [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.454 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.454 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Ensure instance console log exists: /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.7 MiB/s wr, 236 op/s
Feb 25 12:19:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334233231' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.645 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.670 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.673 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:39 compute-0 ceph-mon[76335]: osdmap e136: 3 total, 3 up, 3 in
Feb 25 12:19:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/334233231' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:39 compute-0 nova_compute[244014]: 2026-02-25 12:19:39.766 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Successfully created port: ea269819-4b09-472f-b5f6-ad74852b3850 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.165 244018 DEBUG nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.165 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.167 244018 WARNING nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state error and task_state rebuild_spawning.
Feb 25 12:19:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086827651' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.208 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.210 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.210 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.211 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.213 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <name>instance-00000011</name>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:39</nova:creationTime>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 12:19:40 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:87:71:62"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <target dev="tap2e503dd2-73"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:40 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:40 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:40 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:40 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:40 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.214 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.215 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.215 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.216 244018 DEBUG os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.217 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.217 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.220 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.221 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:40 compute-0 NetworkManager[49836]: <info>  [1772021980.2239] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.228 244018 INFO os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.328 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.328 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.329 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.329 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.355 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.363 244018 INFO nova.virt.libvirt.driver [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Snapshot image upload complete
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.364 244018 INFO nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 5.20 seconds to snapshot the instance on the hypervisor.
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.407 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'ec2_ids' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.475 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'keypairs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.580 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Successfully updated port: ea269819-4b09-472f-b5f6-ad74852b3850 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.617 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.618 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.618 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:40 compute-0 ceph-mon[76335]: pgmap v1031: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.7 MiB/s wr, 236 op/s
Feb 25 12:19:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4086827651' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.850 244018 DEBUG nova.compute.manager [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.851 244018 DEBUG nova.compute.manager [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.851 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.867 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.887 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config
Feb 25 12:19:40 compute-0 nova_compute[244014]: 2026-02-25 12:19:40.892 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqrx5g7sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.025 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqrx5g7sb" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.063 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.068 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.276 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.277 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.
Feb 25 12:19:41 compute-0 NetworkManager[49836]: <info>  [1772021981.3078] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb 25 12:19:41 compute-0 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 12:19:41 compute-0 ovn_controller[147040]: 2026-02-25T12:19:41Z|00102|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 12:19:41 compute-0 ovn_controller[147040]: 2026-02-25T12:19:41Z|00103|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:41 compute-0 ovn_controller[147040]: 2026-02-25T12:19:41Z|00104|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 12:19:41 compute-0 systemd-udevd[265045]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:41 compute-0 systemd-machined[210048]: New machine qemu-26-instance-00000011.
Feb 25 12:19:41 compute-0 NetworkManager[49836]: <info>  [1772021981.3447] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:41 compute-0 NetworkManager[49836]: <info>  [1772021981.3451] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:41 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000011.
Feb 25 12:19:41 compute-0 ovn_controller[147040]: 2026-02-25T12:19:41Z|00105|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.360 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.362 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b71daa6-b3b2-4574-b43e-3278b027b426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.406 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[09343034-1f91-46b7-a460-0abe4b24b908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.421 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e373a2-6a45-494f-8025-ff3a44166459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8f1fd6-262d-4413-b4da-abe6c1175dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.467 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06f0b3ce-7803-46c7-bc64-194d381395c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265060, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e8930f-e5eb-4e71-bbc4-e69d7a61cff3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265061, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265061, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.484 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 625 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 361 op/s
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.993 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 52f927ad-a417-489f-9f92-87bc3433649d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.994 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021981.993051, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.994 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)
Feb 25 12:19:41 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.999 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:41.999 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.004 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.004 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.018 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.021 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.044 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.056 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.057 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003999499221437105 of space, bias 1.0, pg target 1.1998497664311314 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0027272717409505894 of space, bias 1.0, pg target 0.8154542505442263 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.369006830150746e-07 of space, bias 4.0, pg target 0.0007617332168860292 quantized to 16 (current 16)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:19:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021981.9972565, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.122 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.159 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.164 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.241 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.242 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.242 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.330 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.342 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.400 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.400 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance network_info: |[{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.401 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.401 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.410 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start _get_guest_xml network_info=[{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.419 244018 DEBUG nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.421 244018 DEBUG nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.421 244018 WARNING nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.425 244018 WARNING nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.430 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.431 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.434 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.434 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:42 compute-0 nova_compute[244014]: 2026-02-25 12:19:42.443 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:42 compute-0 ceph-mon[76335]: pgmap v1032: 305 pgs: 305 active+clean; 625 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 361 op/s
Feb 25 12:19:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1899825016' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.112 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.156 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.163 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:43 compute-0 ovn_controller[147040]: 2026-02-25T12:19:43Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:d7:29 10.100.0.3
Feb 25 12:19:43 compute-0 ovn_controller[147040]: 2026-02-25T12:19:43Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:d7:29 10.100.0.3
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 16 MiB/s wr, 534 op/s
Feb 25 12:19:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718956847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.658 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.660 244018 DEBUG nova.virt.libvirt.vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.661 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.662 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.663 244018 DEBUG nova.objects.instance [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.707 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <uuid>267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</uuid>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <name>instance-00000018</name>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1490579841</nova:name>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:42</nova:creationTime>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:user uuid="9aa84b2700234a5e9dcba1fc0bbc4cea">tempest-FloatingIPsAssociationTestJSON-1904923370-project-member</nova:user>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:project uuid="67d0ed57ac554e4390e928b3c8f9b5f6">tempest-FloatingIPsAssociationTestJSON-1904923370</nova:project>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <nova:port uuid="ea269819-4b09-472f-b5f6-ad74852b3850">
Feb 25 12:19:43 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="serial">267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="uuid">267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk">
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config">
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:43 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:14:80:6d"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <target dev="tapea269819-4b"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/console.log" append="off"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:43 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:43 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:43 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:43 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:43 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
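The block above is the complete libvirt domain XML that Nova's _get_guest_xml emitted for instance 267cd6f8: a q35 machine with a host-model CPU, two RBD-backed disks (root on vda/virtio, config drive on sda/sata), an OVS-backed virtio NIC, and 24 pcie-root-port controllers, presumably for device hotplug headroom. A minimal sketch, assuming the XML above is saved locally as domain.xml (a hypothetical path, not from the log), of pulling the RBD disk sources out with Python's standard library:

    import xml.etree.ElementTree as ET

    # Parse the domain XML logged by _get_guest_xml above
    # (assumed saved to domain.xml; the path is illustrative).
    root = ET.parse("domain.xml").getroot()

    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            hosts = [f"{h.get('name')}:{h.get('port')}"
                     for h in source.findall("host")]
            print(target.get("dev"), source.get("name"), hosts)

    # For the domain above this prints the vda root disk and the sda
    # config drive, both served by the monitor at 192.168.122.100:6789.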
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.707 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Preparing to wait for external event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.708 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.708 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.709 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.710 244018 DEBUG nova.virt.libvirt.vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.710 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.711 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.711 244018 DEBUG os_vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.713 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.713 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.723 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea269819-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea269819-4b, col_values=(('external_ids', {'iface-id': 'ea269819-4b09-472f-b5f6-ad74852b3850', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:80:6d', 'vm-uuid': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:43 compute-0 NetworkManager[49836]: <info>  [1772021983.7304] manager: (tapea269819-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.738 244018 INFO os_vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b')
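The plug sequence above is os-vif driving ovsdbapp: AddBridgeCommand is a no-op because br-int already exists ("Transaction caused no change"), then AddPortCommand and DbSetCommand wire the tap device into the integration bridge with the iface-id key that OVN binds on a few lines later. A sketch of the same three commands in one ovsdbapp transaction, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock (the socket path is an assumption, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same commands os-vif logged above, batched in one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True,
                           datapath_type="system"))
        txn.add(api.add_port("br-int", "tapea269819-4b", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapea269819-4b",
            ("external_ids", {
                "iface-id": "ea269819-4b09-472f-b5f6-ad74852b3850",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:14:80:6d",
                "vm-uuid": "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69",
            })))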
Feb 25 12:19:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 25 12:19:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1899825016' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2718956847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 25 12:19:43 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No VIF found with MAC fa:16:3e:14:80:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.811 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Using config drive
Feb 25 12:19:43 compute-0 nova_compute[244014]: 2026-02-25 12:19:43.832 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 25 12:19:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 25 12:19:44 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 25 12:19:44 compute-0 ceph-mon[76335]: pgmap v1033: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 16 MiB/s wr, 534 op/s
Feb 25 12:19:44 compute-0 ceph-mon[76335]: osdmap e137: 3 total, 3 up, 3 in
Feb 25 12:19:44 compute-0 ceph-mon[76335]: osdmap e138: 3 total, 3 up, 3 in
Feb 25 12:19:44 compute-0 nova_compute[244014]: 2026-02-25 12:19:44.821 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating config drive at /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config
Feb 25 12:19:44 compute-0 nova_compute[244014]: 2026-02-25 12:19:44.828 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyp6pn1v4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:44 compute-0 nova_compute[244014]: 2026-02-25 12:19:44.960 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyp6pn1v4" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.000 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.005 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.099 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.100 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.115 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.189 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.189 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deleting local config drive /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config because it was imported into RBD.
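The commands above show the full config-drive round trip: Nova builds an ISO9660 image locally with mkisofs, imports it into the vms pool as <uuid>_disk.config (the RBD image the earlier "does not exist" probes were checking for), then deletes the local copy. An abridged sketch of the two logged commands via subprocess; this is illustrative, not Nova's own code, and the /tmp staging directory is the throwaway path from the log (normally a fresh tempdir):

    import subprocess

    inst = "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the config drive ISO with the flags mkisofs was logged with.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r",
         "-V", "config-2", "/tmp/tmpyp6pn1v4"],
        check=True)

    # Import it into Ceph exactly as logged; Nova then unlinks the
    # local file because RBD now holds the authoritative copy.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)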
Feb 25 12:19:45 compute-0 kernel: tapea269819-4b: entered promiscuous mode
Feb 25 12:19:45 compute-0 NetworkManager[49836]: <info>  [1772021985.2214] manager: (tapea269819-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00106|binding|INFO|Claiming lport ea269819-4b09-472f-b5f6-ad74852b3850 for this chassis.
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00107|binding|INFO|ea269819-4b09-472f-b5f6-ad74852b3850: Claiming fa:16:3e:14:80:6d 10.100.0.12
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.227 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.234 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:80:6d 10.100.0.12'], port_security=['fa:16:3e:14:80:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ea269819-4b09-472f-b5f6-ad74852b3850) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.235 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ea269819-4b09-472f-b5f6-ad74852b3850 in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 bound to our chassis
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.237 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00108|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 ovn-installed in OVS
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00109|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 up in Southbound
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:45 compute-0 systemd-udevd[265241]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:45 compute-0 systemd-machined[210048]: New machine qemu-27-instance-00000018.
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8489a9cc-6cc4-4211-8713-8f551f5981f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:45 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000018.
Feb 25 12:19:45 compute-0 NetworkManager[49836]: <info>  [1772021985.2679] device (tapea269819-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:45 compute-0 NetworkManager[49836]: <info>  [1772021985.2687] device (tapea269819-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.286 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf2c644-813b-4128-a6b3-f4ed399faf00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.289 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[28823ba8-0421-405f-a2cd-c8adf59b4abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.312 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[33a4f15c-9755-4e8a-80d6-7e9c1f9d7cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d492804-48d7-4c9e-9454-858f57e0b83a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265253, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cea039a1-08af-41e9-8e5f-eb90bd728540]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393641, 'tstamp': 393641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265255, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393644, 'tstamp': 393644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265255, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
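The two privsep replies above are netlink dumps in pyroute2 format taken inside the ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 namespace: the veth leg tap41c706f5-61 is up and carries both the subnet address 10.100.0.2/28 and the well-known metadata address 169.254.169.254/32, which is what "Provisioning metadata for network" amounts to. A sketch of the same check with pyroute2, assuming it is installed and run as root on the compute host:

    from pyroute2 import NetNS

    ns_name = "ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571"

    # List addresses inside the metadata namespace, as the agent's
    # privsep helper did via netlink above.
    with NetNS(ns_name) as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  addr["prefixlen"])

    # Expected here: tap41c706f5-61 with 10.100.0.2/28 and
    # 169.254.169.254/32, matching the RTM_NEWADDR events above.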
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.345 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.348 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:45 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.566 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021985.5656543, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.567 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Started (Lifecycle Event)
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.585 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.588 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021985.5667517, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.588 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Paused (Lifecycle Event)
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.606 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 13 MiB/s wr, 390 op/s
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.609 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:45 compute-0 nova_compute[244014]: 2026-02-25 12:19:45.631 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:24:39 10.100.0.9
Feb 25 12:19:45 compute-0 ovn_controller[147040]: 2026-02-25T12:19:45Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:24:39 10.100.0.9
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.132 244018 DEBUG nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.132 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.134 244018 WARNING nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.207 244018 DEBUG nova.compute.manager [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG nova.compute.manager [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Processing event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.209 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.213 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021986.212929, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.214 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Resumed (Lifecycle Event)
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.216 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.220 244018 INFO nova.virt.libvirt.driver [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance spawned successfully.
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.220 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.244 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.255 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.259 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.259 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.260 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.261 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.261 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.297 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] During sync_power_state the instance has a pending task (spawning). Skip.
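The two "Synchronizing instance power state" messages above compare the database power_state (still 0) against what libvirt reports: 3 while the freshly created guest sits paused, then 1 once it resumes; both syncs are skipped because the spawning task is still pending. Those integers are the constants defined in nova/compute/power_state.py; a small reference sketch, with values and labels reproduced from memory of that module, so treat it as indicative:

    # Constants as defined in nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    STATE_MAP = {
        NOSTATE: "pending",    # DB power_state before the first sync (0 above)
        RUNNING: "running",    # reported after the Resumed event (1 above)
        PAUSED: "paused",      # reported between Started and Resumed (3 above)
        SHUTDOWN: "shutdown",
        CRASHED: "crashed",
        SUSPENDED: "suspended",
    }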
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.336 244018 INFO nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 7.76 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.337 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.451 244018 INFO nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 9.11 seconds to build instance.
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.479 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:46 compute-0 ceph-mon[76335]: pgmap v1036: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 13 MiB/s wr, 390 op/s
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.886 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.886 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:46 compute-0 nova_compute[244014]: 2026-02-25 12:19:46.931 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.075 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.076 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.084 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.085 244018 INFO nova.compute.claims [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.171 244018 DEBUG nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.239 244018 INFO nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] instance snapshotting
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.460 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:19:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:19:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:19:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.527 244018 INFO nova.virt.libvirt.driver [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Beginning live snapshot process
Feb 25 12:19:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 15 MiB/s wr, 618 op/s
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.736 244018 INFO nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Rebuilding instance
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.754 244018 DEBUG nova.virt.libvirt.imagebackend [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:19:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:19:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.943 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:47 compute-0 nova_compute[244014]: 2026-02-25 12:19:47.966 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:19:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467274907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.032 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(b994f1c75d9e4bb6a2b7a460df5a4a50) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.100 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
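The `ceph df --format=json` run above is nova's RBD driver checking pool capacity before creating a disk; the ceph-mon audit lines show the same request arriving as a mon_command. A sketch of the round trip and the cluster-wide fields involved, assuming the standard `ceph df` JSON schema:

    import json
    import subprocess

    # Same command the log shows processutils running (0.639s round trip).
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    avail_gib = stats['total_avail_bytes'] / (1 << 30)   # ~59 GiB on this cluster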
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.103 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_requests' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.117 244018 DEBUG nova.compute.provider_tree [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.136 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.139 244018 DEBUG nova.scheduler.client.report [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
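The inventory dict above is what placement turns into schedulable capacity: for each resource class, capacity = (total - reserved) * allocation_ratio. Worked out for the values logged:

    inv = {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2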
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.173 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.214 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.215 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.220 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.257 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.262 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.300 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.300 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.328 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.346 244018 DEBUG nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.348 244018 DEBUG nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.348 244018 WARNING nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received unexpected event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with vm_state active and task_state None.
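The "No waiting events found" / "Received unexpected event" pair above comes from nova's external-event plumbing: callers register a waiter for network-vif-plugged before triggering Neutron, and the event handler pops it; with this instance already active there was no registered waiter, hence the WARNING. An illustrative reduction of that pattern (the real code is nova.compute.manager.InstanceEvents; the names below are invented for the sketch):

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def wait_for(uuid, name, timeout=300):
        # registered by the code path that expects the event
        _waiters.setdefault((uuid, name), threading.Event()).wait(timeout)

    def pop_event(uuid, name):
        ev = _waiters.pop((uuid, name), None)
        if ev is None:
            print('Received unexpected event %s' % name)  # the WARNING above
        else:
            ev.set()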
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.350 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.481 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.482 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.482 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating image(s)
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.502 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.525 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.543 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.546 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.565 244018 DEBUG nova.policy [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.608 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
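The prlimit wrapper above is oslo.concurrency capping the child qemu-img at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed image cannot wedge the compute host. The same invocation through the public API, a sketch assuming the base-image path from the log:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        # expands to the `python -m oslo_concurrency.prlimit --as=... --cpu=...`
        # wrapper seen in the command line logged above
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30))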
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.609 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.610 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.610 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.629 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.633 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89488b9f-7c53-4e00-ad62-837e33a76dae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 25 12:19:48 compute-0 ceph-mon[76335]: pgmap v1037: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 15 MiB/s wr, 618 op/s
Feb 25 12:19:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2467274907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:19:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 25 12:19:48 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.844 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89488b9f-7c53-4e00-ad62-837e33a76dae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.932 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
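The import/resize pair above is the image-to-RBD path: the cached base file is imported into the vms pool, then grown to the flavor's root disk. The resize target is simply the flavor's root_gb in bytes (m1.nano, root_gb=1, per the flavor logged further down):

    # 1073741824 in the resize line is the flavor root disk in bytes
    root_gb = 1                      # m1.nano has root_gb=1
    assert root_gb * 1024 ** 3 == 1073741824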
Feb 25 12:19:48 compute-0 nova_compute[244014]: 2026-02-25 12:19:48.984 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] cloning vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk@b994f1c75d9e4bb6a2b7a460df5a4a50 to images/1824ee54-4df4-4dd9-a540-99e517825b0b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.101 244018 DEBUG nova.objects.instance [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.116 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] flattening images/1824ee54-4df4-4dd9-a540-99e517825b0b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.168 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.169 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Ensure instance console log exists: /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.169 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.170 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.170 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 25 12:19:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 25 12:19:49 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.423 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] removing snapshot(b994f1c75d9e4bb6a2b7a460df5a4a50) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
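Taken together, the rbd_utils lines trace nova's live-snapshot fast path on Ceph: snapshot the running disk (12:19:48.032), clone the snapshot into the images pool (12:19:48.984), flatten the clone so it no longer depends on its parent (12:19:49.116), then drop the source snapshot (this line). A sketch of the same sequence with the python rbd bindings, assuming the pool and image names from the log; cloning requires a protected parent snapshot, which nova handles internally, and ioctx/cluster cleanup is elided:

    import rados
    import rbd

    SRC = 'd44c3dbc-e4bc-4235-bd88-b39616473248_disk'
    SNAP = 'b994f1c75d9e4bb6a2b7a460df5a4a50'
    DST = '1824ee54-4df4-4dd9-a540-99e517825b0b'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms, images = cluster.open_ioctx('vms'), cluster.open_ioctx('images')

    with rbd.Image(vms, SRC) as src:
        src.create_snap(SNAP)
        src.protect_snap(SNAP)          # clones need a protected parent
    rbd.RBD().clone(vms, SRC, SNAP, images, DST)
    with rbd.Image(images, DST) as dst:
        dst.flatten()                   # detach the clone from its parent
    with rbd.Image(vms, SRC) as src:
        src.unprotect_snap(SNAP)
        src.remove_snap(SNAP)           # matches the remove_snap line above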
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:49 compute-0 NetworkManager[49836]: <info>  [1772021989.4885] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 25 12:19:49 compute-0 NetworkManager[49836]: <info>  [1772021989.4897] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:49 compute-0 ovn_controller[147040]: 2026-02-25T12:19:49Z|00110|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 12:19:49 compute-0 ovn_controller[147040]: 2026-02-25T12:19:49Z|00111|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 12:19:49 compute-0 ovn_controller[147040]: 2026-02-25T12:19:49Z|00112|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 384 op/s
Feb 25 12:19:49 compute-0 nova_compute[244014]: 2026-02-25 12:19:49.721 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully created port: 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:19:49 compute-0 podman[265610]: 2026-02-25 12:19:49.777491923 +0000 UTC m=+0.108258372 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 12:19:49 compute-0 podman[265611]: 2026-02-25 12:19:49.803108091 +0000 UTC m=+0.134536369 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:19:49 compute-0 ceph-mon[76335]: osdmap e139: 3 total, 3 up, 3 in
Feb 25 12:19:49 compute-0 ceph-mon[76335]: osdmap e140: 3 total, 3 up, 3 in
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG nova.compute.manager [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG nova.compute.manager [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.155 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 25 12:19:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 25 12:19:50 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 25 12:19:50 compute-0 nova_compute[244014]: 2026-02-25 12:19:50.234 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(snap) on rbd image(1824ee54-4df4-4dd9-a540-99e517825b0b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.017 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully updated port: 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.141 244018 DEBUG nova.compute.manager [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.141 244018 DEBUG nova.compute.manager [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.142 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:51 compute-0 ceph-mon[76335]: pgmap v1040: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 384 op/s
Feb 25 12:19:51 compute-0 ceph-mon[76335]: osdmap e141: 3 total, 3 up, 3 in
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.215 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:19:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 25 12:19:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 25 12:19:51 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.437 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.437 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
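The network_info blob above is the canonical shape nova caches per VIF: a list of port dicts, each nesting a network, its subnets, and per-IP floating associations. Pulling the addresses back out of the logged JSON (assuming it has been pasted into a string variable):

    import json

    # network_info_json: the JSON list logged above, as a string
    vif = json.loads(network_info_json)[0]
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            floats = [f['address'] for f in ip.get('floating_ips', [])]
            print(ip['address'], floats)   # 10.100.0.3 ['192.168.122.198']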
Feb 25 12:19:51 compute-0 nova_compute[244014]: 2026-02-25 12:19:51.457 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 705 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 125 op/s
Feb 25 12:19:52 compute-0 ceph-mon[76335]: osdmap e142: 3 total, 3 up, 3 in
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.283 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.323 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.324 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance network_info: |[{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.325 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.325 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.328 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start _get_guest_xml network_info=[{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.333 244018 WARNING nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.343 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.343 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.347 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.347 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.351 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.354 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.640 244018 INFO nova.virt.libvirt.driver [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Snapshot image upload complete
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.641 244018 INFO nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 5.40 seconds to snapshot the instance on the hypervisor.
Feb 25 12:19:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103251486' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.898 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.928 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:52 compute-0 nova_compute[244014]: 2026-02-25 12:19:52.935 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:53 compute-0 ovn_controller[147040]: 2026-02-25T12:19:53Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:53 compute-0 ovn_controller[147040]: 2026-02-25T12:19:53Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:19:53 compute-0 ceph-mon[76335]: pgmap v1043: 305 pgs: 305 active+clean; 705 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 125 op/s
Feb 25 12:19:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2103251486' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:19:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987129474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.469 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.472 244018 DEBUG nova.virt.libvirt.vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.472 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.473 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.475 244018 DEBUG nova.objects.instance [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.509 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <name>instance-00000019</name>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:52</nova:creationTime>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:19:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <system>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="serial">89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="uuid">89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </system>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <os>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </os>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <features>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </features>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk">
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config">
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:19:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:0c:10:e8"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <target dev="tap5e8b3807-0e"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log" append="off"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <video>
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </video>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:19:53 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:19:53 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:19:53 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:19:53 compute-0 nova_compute[244014]: </domain>
Feb 25 12:19:53 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.509 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Preparing to wait for external event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.511 244018 DEBUG nova.virt.libvirt.vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.511 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.512 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.513 244018 DEBUG os_vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.516 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8b3807-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e8b3807-0e, col_values=(('external_ids', {'iface-id': '5e8b3807-0ee8-4f97-aa2d-3db7d1283888', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:10:e8', 'vm-uuid': '89488b9f-7c53-4e00-ad62-837e33a76dae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:53 compute-0 NetworkManager[49836]: <info>  [1772021993.5191] manager: (tap5e8b3807-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.525 244018 INFO os_vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e')
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:0c:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.584 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Using config drive
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.599 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 15 MiB/s wr, 431 op/s
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.compute.manager [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.compute.manager [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:53 compute-0 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.037 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.038 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.058 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.211 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating config drive at /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.220 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprtkosx4q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1987129474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.367 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprtkosx4q" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.405 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.411 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.530 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.531 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deleting local config drive /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config because it was imported into RBD.
Feb 25 12:19:54 compute-0 kernel: tap5e8b3807-0e: entered promiscuous mode
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.5991] manager: (tap5e8b3807-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Feb 25 12:19:54 compute-0 ovn_controller[147040]: 2026-02-25T12:19:54Z|00113|binding|INFO|Claiming lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for this chassis.
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 ovn_controller[147040]: 2026-02-25T12:19:54Z|00114|binding|INFO|5e8b3807-0ee8-4f97-aa2d-3db7d1283888: Claiming fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.614 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:10:e8 10.100.0.13'], port_security=['fa:16:3e:0c:10:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e3ab090-12ee-4eae-8ad1-0ddee1251e75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5e8b3807-0ee8-4f97-aa2d-3db7d1283888) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.615 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.617 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:19:54 compute-0 ovn_controller[147040]: 2026-02-25T12:19:54Z|00115|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 ovn-installed in OVS
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 ovn_controller[147040]: 2026-02-25T12:19:54Z|00116|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 up in Southbound
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.626 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12a15592-029b-4e17-9cff-53508bfab6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.626 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.628 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4083d532-b4bc-45b2-86e1-c2a8ef790e1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23e8ce88-fb17-4bc7-890b-38c3de3c1262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.639 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28eb36-c4e1-46ea-ad70-edf3b3e0166b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 systemd-machined[210048]: New machine qemu-28-instance-00000019.
Feb 25 12:19:54 compute-0 systemd-udevd[265810]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.654 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67c64832-7a17-48f5-a106-c35625417de6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.6652] device (tap5e8b3807-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.6669] device (tap5e8b3807-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b06a2fc7-da47-4258-81db-a66516c062cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a8c353-285c-4721-9afe-44965dea711a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.6915] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.732 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f81e58-9972-4536-9e49-5007df66e699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.735 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1c57d330-4667-48e0-b055-c2447257194a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.7599] device (tap08121372-a0): carrier: link connected
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.765 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b781f67-703a-4e5a-84e8-e24ee7d7a6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b360605-9a74-435c-aabe-78b1187fcf06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265840, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.793 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[095723a8-e300-4e99-b3b8-3572b6e22ade]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396416, 'tstamp': 396416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265841, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.813 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[17fb05f4-74e6-40c7-aff5-3a37a7ed4ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265842, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
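[editor's note] The two large (4, [...]) replies above are RTM_NEWLINK netlink dumps that the privsep helper produced inside the ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace while checking tap08121372-a1. A minimal stand-alone reproduction of that query, assuming the namespace still exists and pyroute2 (the library behind these IFLA_* attribute dumps) is installed:

```python
# Hedged sketch: list links inside the OVN metadata namespace roughly the
# way the agent's privsep helper does; pyroute2's NetNS enters the netns
# and get_links() returns the same IFLA_* attribute structures seen above.
from pyroute2 import NetNS

with NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2') as ns:
    for link in ns.get_links():
        # get_attr() extracts a single attribute from the dump.
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_OPERSTATE'),
              link.get_attr('IFLA_ADDRESS'))
# Expected for this namespace: "tap08121372-a1 UP fa:16:3e:2b:73:2b".
```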
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.839 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcae03ca-f8f7-4934-8f5d-97ddcde916a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e1e16-93d9-43c5-a365-7bc1d2b5de14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.890 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:54 compute-0 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 12:19:54 compute-0 NetworkManager[49836]: <info>  [1772021994.8947] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.900 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
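[editor's note] The three transactions logged at 12:19:54.890-.900 defensively remove tap08121372-a0 from br-ex (a no-op here, per "Transaction caused no change"), plug it into br-int, and stamp it with the Neutron port ID so ovn-controller can bind it. A sketch of the same sequence through ovsdbapp's Open_vSwitch API; the method names mirror the DelPortCommand/AddPortCommand/DbSetCommand classes in the log, while the socket path and connection setup are assumptions:

```python
# Hedged sketch of the logged OVSDB transaction sequence via ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tap08121372-a0', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tap08121372-a0', may_exist=True))
    # external_ids:iface-id is what ovn-controller matches against the
    # Southbound Port_Binding (see the "Claiming lport" lines further down).
    txn.add(ovs.db_set('Interface', 'tap08121372-a0',
                       ('external_ids',
                        {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'})))
```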
Feb 25 12:19:54 compute-0 ovn_controller[147040]: 2026-02-25T12:19:54Z|00117|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.904 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
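[editor's note] The ENOENT above is the normal first-run path: the agent reads the haproxy pidfile to decide whether a metadata proxy already serves this network, and a missing file simply means "not running yet". A stdlib sketch of that check (get_pid is a hypothetical name, not neutron's helper):

```python
# Hedged sketch of the pidfile probe behind utils.get_value_from_file.
def get_pid(pidfile):
    try:
        with open(pidfile) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None  # no proxy yet for this network; the agent spawns one

pid = get_pid('/var/lib/neutron/external/pids/'
              '08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy')
print(pid)  # None on this first pass, matching the DEBUG line above
```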
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.906 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee8154-35d8-46d4-84c3-51d50abb2d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.909 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:19:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.911 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
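[editor's note] The config rendered above has haproxy bind 169.254.169.254:80 inside the namespace and relay every request to the agent's Unix socket at /var/lib/neutron/metadata_proxy, adding X-OVN-Network-ID (and, via "option forwardfor", X-Forwarded-For) so the agent can resolve which instance is asking. A raw-socket probe of that backend; the header set follows the config above, and the client IP is a hypothetical fixed address:

```python
# Hedged sketch: talk to the metadata agent's Unix socket the way the
# per-network haproxy does, per the configuration logged above.
import socket

SOCK = '/var/lib/neutron/metadata_proxy'
NET_ID = '08121372-a435-401a-b405-778e10d8c2e2'

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCK)
s.sendall((
    'GET /latest/meta-data/ HTTP/1.1\r\n'
    'Host: 169.254.169.254\r\n'
    f'X-OVN-Network-ID: {NET_ID}\r\n'
    'X-Forwarded-For: 10.100.0.3\r\n'   # hypothetical VM fixed IP
    'Connection: close\r\n\r\n').encode())
print(s.recv(65536).decode(errors='replace'))
s.close()
```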
Feb 25 12:19:54 compute-0 nova_compute[244014]: 2026-02-25 12:19:54.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.005 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
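[editor's note] The acquire/release triplet above is oslo.concurrency's named-lock pattern serializing ProcessMonitor's periodic child check, which is what will notice if the haproxy just launched ever dies. Minimal usage sketch; the body is elided, only the decorator is the point:

```python
# Hedged sketch of the lock pattern behind the three log lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    pass  # inspect registered children, respawn any that exited
```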
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.157 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021995.1561058, 89488b9f-7c53-4e00-ad62-837e33a76dae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.157 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Started (Lifecycle Event)
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.181 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.185 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021995.1562688, 89488b9f-7c53-4e00-ad62-837e33a76dae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.186 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Paused (Lifecycle Event)
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.193 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.193 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.224 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.230 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.236 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.256 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] During sync_power_state the instance has a pending task (spawning). Skip.
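[editor's note] The numbers in the sync_power_state line above decode via nova.compute.power_state's constants (values as that module defines them, to the best of my knowledge):

```python
# "current DB power_state: 0, VM power_state: 3" from the log above.
POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
               4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

db_state, vm_state = 0, 3
print(POWER_STATE[db_state], '->', POWER_STATE[vm_state])
# NOSTATE -> PAUSED: the DB has not caught up yet, and because task_state
# is still "spawning" the manager skips the sync, exactly as logged.
# The later "Resumed" event reports VM power_state 1 (RUNNING).
```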
Feb 25 12:19:55 compute-0 ceph-mon[76335]: pgmap v1044: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 15 MiB/s wr, 431 op/s
Feb 25 12:19:55 compute-0 podman[265914]: 2026-02-25 12:19:55.451834259 +0000 UTC m=+0.078324209 container create 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 12:19:55 compute-0 podman[265914]: 2026-02-25 12:19:55.412661444 +0000 UTC m=+0.039151474 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:19:55 compute-0 systemd[1]: Started libpod-conmon-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope.
Feb 25 12:19:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7f7a1a509256948c44cf81664959a0d95a0720770dc3872c473bf267ae05b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:19:55 compute-0 podman[265914]: 2026-02-25 12:19:55.564243677 +0000 UTC m=+0.190733657 container init 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:19:55 compute-0 podman[265914]: 2026-02-25 12:19:55.572721908 +0000 UTC m=+0.199211868 container start 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:19:55 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : New worker (265936) forked
Feb 25 12:19:55 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : Loading success.
Feb 25 12:19:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 322 op/s
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:19:55 compute-0 nova_compute[244014]: 2026-02-25 12:19:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:19:56 compute-0 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:56 compute-0 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:56 compute-0 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:19:56 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.213 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.213 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.240 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 25 12:19:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 25 12:19:57 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 25 12:19:57 compute-0 ceph-mon[76335]: pgmap v1045: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 322 op/s
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.347 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:57 compute-0 nova_compute[244014]: 2026-02-25 12:19:57.347 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:19:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 14 MiB/s wr, 429 op/s
Feb 25 12:19:58 compute-0 ovn_controller[147040]: 2026-02-25T12:19:58Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:80:6d 10.100.0.12
Feb 25 12:19:58 compute-0 ovn_controller[147040]: 2026-02-25T12:19:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:80:6d 10.100.0.12
Feb 25 12:19:58 compute-0 ceph-mon[76335]: osdmap e143: 3 total, 3 up, 3 in
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.316 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.898 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.899 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.900 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.900 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.901 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.903 244018 INFO nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Terminating instance
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.905 244018 DEBUG nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.930 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.931 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:19:58 compute-0 kernel: tap47bb858f-17 (unregistering): left promiscuous mode
Feb 25 12:19:58 compute-0 NetworkManager[49836]: <info>  [1772021998.9557] device (tap47bb858f-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:19:58 compute-0 ovn_controller[147040]: 2026-02-25T12:19:58Z|00118|binding|INFO|Releasing lport 47bb858f-172e-40b5-8ac0-02c531c303c3 from this chassis (sb_readonly=0)
Feb 25 12:19:58 compute-0 ovn_controller[147040]: 2026-02-25T12:19:58Z|00119|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 down in Southbound
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:58 compute-0 ovn_controller[147040]: 2026-02-25T12:19:58Z|00120|binding|INFO|Removing iface tap47bb858f-17 ovn-installed in OVS
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.978 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.979 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.979 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Processing event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.980 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 WARNING nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with vm_state building and task_state spawning.
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.983 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.988 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021998.988034, 89488b9f-7c53-4e00-ad62-837e33a76dae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.989 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Resumed (Lifecycle Event)
Feb 25 12:19:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.987 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis
Feb 25 12:19:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.993 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.993 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:19:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.996 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c59a03c8-3685-42f9-bce5-976156af17e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.997 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 namespace which is not needed anymore
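[editor's note] With the last VIF gone from datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, the agent tears the namespace down; the haproxy exit notices and container death below are that cleanup. The namespace removal itself reduces to something like this pyroute2 sketch, assuming root in the host netns:

```python
# Hedged sketch of the namespace teardown the agent performs.
from pyroute2 import netns

ns_name = 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1'
if ns_name in netns.listnetns():
    netns.remove(ns_name)   # unlinks /var/run/netns/<name>
```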
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.997 244018 INFO nova.virt.libvirt.driver [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance spawned successfully.
Feb 25 12:19:58 compute-0 nova_compute[244014]: 2026-02-25 12:19:58.998 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:19:59 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb 25 12:19:59 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 12.542s CPU time.
Feb 25 12:19:59 compute-0 systemd-machined[210048]: Machine qemu-25-instance-00000017 terminated.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.038 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.043 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.061 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.061 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.062 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.062 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.063 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.063 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.091 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:19:59 compute-0 kernel: tap47bb858f-17: entered promiscuous mode
Feb 25 12:19:59 compute-0 NetworkManager[49836]: <info>  [1772021999.1272] manager: (tap47bb858f-17): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Feb 25 12:19:59 compute-0 systemd-udevd[265948]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00121|binding|INFO|Claiming lport 47bb858f-172e-40b5-8ac0-02c531c303c3 for this chassis.
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00122|binding|INFO|47bb858f-172e-40b5-8ac0-02c531c303c3: Claiming fa:16:3e:d6:24:39 10.100.0.9
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.129 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 kernel: tap47bb858f-17 (unregistering): left promiscuous mode
Feb 25 12:19:59 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : haproxy version is 2.8.14-c23fe91
Feb 25 12:19:59 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : path to executable is /usr/sbin/haproxy
Feb 25 12:19:59 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [WARNING]  (264350) : Exiting Master process...
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00123|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 ovn-installed in OVS
Feb 25 12:19:59 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [ALERT]    (264350) : Current worker (264352) exited with code 143 (Terminated)
Feb 25 12:19:59 compute-0 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [WARNING]  (264350) : All workers exited. Exiting... (0)
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 systemd[1]: libpod-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope: Deactivated successfully.
Feb 25 12:19:59 compute-0 podman[265970]: 2026-02-25 12:19:59.151295747 +0000 UTC m=+0.063830188 container died 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00124|if_status|INFO|Dropped 3 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00125|if_status|INFO|Not setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 down as sb is readonly
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.153 244018 INFO nova.virt.libvirt.driver [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance destroyed successfully.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.153 244018 DEBUG nova.objects.instance [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'resources' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:19:59 compute-0 ovn_controller[147040]: 2026-02-25T12:19:59Z|00126|binding|INFO|Releasing lport 47bb858f-172e-40b5-8ac0-02c531c303c3 from this chassis (sb_readonly=0)
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.173 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.183 244018 INFO nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 10.70 seconds to spawn the instance on the hypervisor.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.183 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.185 244018 DEBUG nova.virt.libvirt.vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:52Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.185 244018 DEBUG nova.network.os_vif_util [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.186 244018 DEBUG nova.network.os_vif_util [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.187 244018 DEBUG os_vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2-userdata-shm.mount: Deactivated successfully.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.190 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.191 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47bb858f-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-85a7581b6a9b7832e1212a34f08cb65dbe5142b80457bc50869d2ba4120d093b-merged.mount: Deactivated successfully.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:19:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:19:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.200 244018 INFO os_vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17')
Feb 25 12:19:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 25 12:19:59 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 25 12:19:59 compute-0 podman[265970]: 2026-02-25 12:19:59.212992122 +0000 UTC m=+0.125526563 container cleanup 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:19:59 compute-0 systemd[1]: libpod-conmon-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope: Deactivated successfully.
Feb 25 12:19:59 compute-0 podman[266016]: 2026-02-25 12:19:59.282135179 +0000 UTC m=+0.046952117 container remove 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19c4f15a-71c6-462a-b17a-4f90f2c225da]: (4, ('Wed Feb 25 12:19:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 (9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2)\n9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2\nWed Feb 25 12:19:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 (9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2)\n9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7cd5bf-2a35-4523-afc5-ffa3507a0121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.290 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55e574c2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:19:59 compute-0 kernel: tap55e574c2-40: left promiscuous mode
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:19:59 compute-0 ceph-mon[76335]: pgmap v1047: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 14 MiB/s wr, 429 op/s
Feb 25 12:19:59 compute-0 ceph-mon[76335]: osdmap e144: 3 total, 3 up, 3 in
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce9efce-8f55-4889-829b-2665d6ccee6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79e7f594-54aa-4033-8fbc-049806ab5b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.317 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcbf209-a621-4723-80af-bba6ece82a66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69dba085-84f7-4b44-84a1-85a6678bb069]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394234, 'reachable_time': 26287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266034, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d55e574c2\x2d43dc\x2d4bbd\x2dbf4c\x2d2028df9ad3c1.mount: Deactivated successfully.
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.340 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.340 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a648b8e-3097-4a30-8cad-334e1bd82f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.341 244018 INFO nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 12.31 seconds to build instance.
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.344 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.345 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8e232b-19f4-4dd9-8726-8af8bca252ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.346 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.348 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:19:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19f3065a-93c6-4e3f-a89d-e2848d06410f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.370 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.491 244018 INFO nova.virt.libvirt.driver [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deleting instance files /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248_del
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.492 244018 INFO nova.virt.libvirt.driver [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deletion of /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248_del complete
Feb 25 12:19:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 9.3 MiB/s wr, 334 op/s
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.618 244018 INFO nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.619 244018 DEBUG oslo.service.loopingcall [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.619 244018 DEBUG nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.620 244018 DEBUG nova.network.neutron [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:19:59 compute-0 nova_compute[244014]: 2026-02-25 12:19:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:00 compute-0 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 12:20:00 compute-0 NetworkManager[49836]: <info>  [1772022000.5829] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:00 compute-0 ovn_controller[147040]: 2026-02-25T12:20:00Z|00127|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 12:20:00 compute-0 ovn_controller[147040]: 2026-02-25T12:20:00Z|00128|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 ovn_controller[147040]: 2026-02-25T12:20:00Z|00129|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.598 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.605 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.608 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.621 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce6ed4c-33d0-42d2-80d3-a2ec3054e24e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 12:20:00 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000011.scope: Consumed 11.816s CPU time.
Feb 25 12:20:00 compute-0 systemd-machined[210048]: Machine qemu-26-instance-00000011 terminated.
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.650 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[167e92d8-e1d5-4ccd-9348-3f752fed1293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.653 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6a0e9d-7b41-4eb3-85a0-25a9b86b8a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.676 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[15ff20eb-0bcd-44dd-a151-069af6de4199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6186c89-ce1c-4db4-801f-3bcc315864b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266045, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeb9b02-d5aa-4481-ad1e-290ef1397da9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266046, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266046, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.710 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.878 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.879 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.902 244018 DEBUG nova.network.neutron [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:00 compute-0 nova_compute[244014]: 2026-02-25 12:20:00.935 244018 INFO nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 1.32 seconds to deallocate network for instance.
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.010 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.010 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.227 244018 DEBUG oslo_concurrency.processutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:01 compute-0 ceph-mon[76335]: pgmap v1049: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 9.3 MiB/s wr, 334 op/s
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.331 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance shutdown successfully after 13 seconds.
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.338 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.346 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.348 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:47Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.349 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.350 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.351 244018 DEBUG os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.354 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.362 244018 INFO os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 763 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.0 MiB/s wr, 230 op/s
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.655 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.655 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete
Feb 25 12:20:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828269986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.815 244018 DEBUG oslo_concurrency.processutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.822 244018 DEBUG nova.compute.provider_tree [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.850 244018 DEBUG nova.scheduler.client.report [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.859 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.860 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.890 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.922 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.952 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.957 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.977 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:01 compute-0 nova_compute[244014]: 2026-02-25 12:20:01.982 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.011 244018 DEBUG nova.compute.manager [req-f7a8c85b-636d-4dcd-b9eb-9f913a175553 req-dc082251-b21d-46bd-83ee-cbad412d9fa7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-deleted-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.013 244018 INFO nova.scheduler.client.report [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Deleted allocations for instance d44c3dbc-e4bc-4235-bd88-b39616473248
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.022 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.024 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.025 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.025 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.051 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.055 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.109 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received unexpected event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with vm_state deleted and task_state None.
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state rebuild_spawning.
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.114 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.114 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.143 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.258 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2828269986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.330 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.410 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.412 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.413 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.417 244018 WARNING nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.426 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.426 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.430 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.470 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:02 compute-0 nova_compute[244014]: 2026-02-25 12:20:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446686615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.011 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.041 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.046 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:03 compute-0 ceph-mon[76335]: pgmap v1050: 305 pgs: 305 active+clean; 763 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.0 MiB/s wr, 230 op/s
Feb 25 12:20:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/446686615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.558 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.561 244018 INFO nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Terminating instance
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.562 244018 DEBUG nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343313881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:03 compute-0 kernel: tapea269819-4b (unregistering): left promiscuous mode
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.603 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.604 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:01Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.605 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:03 compute-0 NetworkManager[49836]: <info>  [1772022003.6051] device (tapea269819-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.607 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.609 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <name>instance-00000011</name>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:20:02</nova:creationTime>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 12:20:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:87:71:62"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <target dev="tap2e503dd2-73"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:20:03 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:20:03 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:03 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:03 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:03 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.610 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:01Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.611 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.612 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.612 244018 DEBUG os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
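
The three DEBUG lines above are the os-vif handoff: nova's legacy VIF dict is converted to a VIFOpenVSwitch object (os_vif_util) and passed to the module-level os_vif.plug(). A minimal sketch of that call path using the public os-vif API, with IDs and names copied from the log (the InstanceInfo wiring is illustrative, not nova's exact code):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via entry points

    net = network.Network(id='1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='2e503dd2-735e-4bfc-87c7-dffab319d935',
        address='fa:16:3e:87:71:62',
        bridge_name='br-int',
        vif_name='tap2e503dd2-73',
        plugin='ovs',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='52f927ad-a417-489f-9f92-87bc3433649d',
        name='tempest-ServersAdminTestJSON-server-1736622759')

    os_vif.plug(ovs_vif, info)    # produces the "Plugging vif ..." line
    # os_vif.unplug(ovs_vif, info) is the mirror call seen further down.
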
Feb 25 12:20:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 428 op/s
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.616 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.616 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:03 compute-0 ovn_controller[147040]: 2026-02-25T12:20:03Z|00130|binding|INFO|Releasing lport ea269819-4b09-472f-b5f6-ad74852b3850 from this chassis (sb_readonly=0)
Feb 25 12:20:03 compute-0 ovn_controller[147040]: 2026-02-25T12:20:03Z|00131|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 down in Southbound
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 ovn_controller[147040]: 2026-02-25T12:20:03Z|00132|binding|INFO|Removing iface tapea269819-4b ovn-installed in OVS
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
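
AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp IDL commands queued into a single OVSDB transaction; may_exist=True is why the br-int transaction above commits as "Transaction caused no change" when the row already exists. A sketch of the equivalent direct ovsdbapp usage, assuming a tcp:127.0.0.1:6640 manager endpoint (os-vif normally takes the endpoint from its config; ports and external_ids below are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True,
                           datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap2e503dd2-73', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2e503dd2-73',
            ('external_ids', {
                'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:87:71:62',
                'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'})))
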
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 NetworkManager[49836]: <info>  [1772022003.6241] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.627 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:80:6d 10.100.0.12'], port_security=['fa:16:3e:14:80:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ea269819-4b09-472f-b5f6-ad74852b3850) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.628 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ea269819-4b09-472f-b5f6-ad74852b3850 in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 unbound from our chassis
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.630 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571
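
The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry a few lines up is ovsdbapp's row-event framework: the metadata agent watches the OVN Southbound Port_Binding table and reacts when a port's chassis column changes, which is what drives the "unbound from our chassis" and "Provisioning metadata" messages. A minimal sketch of that pattern (class name and handler body are illustrative, not neutron's exact code; match_fn is the per-row filter hook in recent ovsdbapp releases):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        """React to Port_Binding bind/unbind on this chassis."""

        def __init__(self):
            # (events, table, conditions), as echoed in the log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only rows whose 'chassis' column actually changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            state = 'bound to' if row.chassis else 'unbound from'
            print('Port %s %s our chassis' % (row.logical_port, state))
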
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[491d82dc-ce99-4a29-98f3-200d32d1e66d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.646 244018 INFO os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:20:03 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb 25 12:20:03 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Consumed 12.125s CPU time.
Feb 25 12:20:03 compute-0 systemd-machined[210048]: Machine qemu-27-instance-00000018 terminated.
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.671 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c506256-88c4-4833-adbf-31b7160c9e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9794965c-81ee-4178-9e51-70009560d6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.705 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.706 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.707 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[677531a9-3647-4e41-ad11-b147abbdbd90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.715 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.715 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.716 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.716 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f729d9-d422-4ba8-b7f2-cd87f52feed8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266347, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.751 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.755 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03cc95f3-f78d-4009-9912-4444d3f71dd0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393641, 'tstamp': 393641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266360, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393644, 'tstamp': 393644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266360, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
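
The recurring "privsep: reply[...]" entries are oslo.privsep round-trips: the unprivileged agent sends a request over a unix socket and a root helper answers. Here the payloads are pyroute2 netlink dumps taken inside the ovnmeta-41c706f5-... namespace, including the 169.254.169.254/32 metadata address on the tap device. A generic sketch of how such a privileged entrypoint is declared (context name, capability set and function are illustrative; neutron ships its own contexts):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    ns_admin = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.ns_admin',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN])

    @ns_admin.entrypoint
    def device_addresses(ifname, namespace):
        # Runs in the root helper; the return value is what the
        # reply[...] lines above carry back over the socket.
        from pyroute2 import NetNS
        with NetNS(namespace) as ns:
            idx = ns.link_lookup(ifname=ifname)[0]
            return ns.get_addr(index=idx)
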
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.757 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.764 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
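
The Acquiring/acquired/"released" triplets come from oslo.concurrency's lockutils, which nova uses to serialize per-instance event handling ("<uuid>-events") and network-cache refreshes ("refresh_cache-<uuid>"). The pattern, sketched with lock names copied from the log (pop_event is a hypothetical stand-in for nova's _pop_event):

    from oslo_concurrency import lockutils

    # In-process lock, as in the "...-events" lines above.
    with lockutils.lock('52f927ad-a417-489f-9f92-87bc3433649d-events'):
        pop_event()

    # Decorator form, the style behind the "compute_resources" lock below.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...
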
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state rebuild_spawning.
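
This warning is benign during a rebuild: neutron reported the freshly plugged port active, but nova had no waiter registered for it (the "No waiting events found dispatching" line above), so the event is logged as unexpected. The key nova matches waiters against is just the event name suffixed with the port UUID, roughly:

    # Rough sketch of how these event keys are formed (the helper name
    # is illustrative, not nova's exact code).
    def event_key(name, tag):
        return '%s-%s' % (name, tag)

    event_key('network-vif-plugged', '2e503dd2-735e-4bfc-87c7-dffab319d935')
    # -> 'network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935'
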
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.777 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'ec2_ids' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.798 244018 INFO nova.virt.libvirt.driver [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance destroyed successfully.
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.799 244018 DEBUG nova.objects.instance [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'resources' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.806 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'keypairs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.810 244018 DEBUG nova.virt.libvirt.vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:46Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.810 244018 DEBUG nova.network.os_vif_util [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.811 244018 DEBUG nova.network.os_vif_util [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.811 244018 DEBUG os_vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.813 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea269819-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.819 244018 INFO os_vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b')
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
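
These "Running periodic task ..." lines come from oslo.service's periodic-task machinery; a task disabled by configuration simply returns early, which is the "CONF.reclaim_instance_interval <= 0, skipping" line above. A minimal sketch, assuming the option is registered elsewhere (the manager class and spacing are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # logged as "skipping..."
            # ... reclaim soft-deleted instances ...
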
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:20:03 compute-0 nova_compute[244014]: 2026-02-25 12:20:03.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
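
The resource audit shells out through oslo.concurrency's processutils, which emits this "Running cmd (subprocess)" line and the matching "CMD ... returned: 0 in 0.526s" line further down. The equivalent direct call, with arguments copied from the log:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # 'out' holds the JSON the resource tracker parses for pool usage.
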
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.094 244018 INFO nova.virt.libvirt.driver [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deleting instance files /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_del
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.096 244018 INFO nova.virt.libvirt.driver [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deletion of /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_del complete
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.107 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.108 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.108 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.109 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.109 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.110 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.150 244018 INFO nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.151 244018 DEBUG oslo.service.loopingcall [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.152 244018 DEBUG nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.152 244018 DEBUG nova.network.neutron [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.188 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.195 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjtgfe8qj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
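
Note that processutils logs the command with its argv joined by spaces, so the multi-word -publisher value above only looks unquoted; in the actual call each argument is a separate argv element. Sketched, with iso_path and tmpdir as stand-ins for the paths in the log:

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', tmpdir)
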
Feb 25 12:20:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.234 244018 DEBUG nova.compute.manager [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.235 244018 DEBUG nova.compute.manager [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.236 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.236 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.237 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.327 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjtgfe8qj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1343313881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.369 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
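
The "rbd image ... does not exist" probe comes from nova's rbd_utils, which opens the image through the python rbd/rados bindings and treats ImageNotFound as "absent" before falling back to the rbd import subprocess on the next line. A sketch of that existence check (pool, client id and conf path copied from the log; the control flow is illustrative, not nova's exact code):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx,
                  '52f927ad-a417-489f-9f92-87bc3433649d_disk.config').close()
        exists = True
    except rbd.ImageNotFound:
        exists = False
    finally:
        ioctx.close()
        cluster.shutdown()
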
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.373 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/792541830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.435 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.528 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.530 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.
Feb 25 12:20:04 compute-0 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 12:20:04 compute-0 NetworkManager[49836]: <info>  [1772022004.5892] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Feb 25 12:20:04 compute-0 ovn_controller[147040]: 2026-02-25T12:20:04Z|00133|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 12:20:04 compute-0 ovn_controller[147040]: 2026-02-25T12:20:04Z|00134|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:20:04 compute-0 systemd-udevd[266336]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.606 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.608 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 12:20:04 compute-0 NetworkManager[49836]: <info>  [1772022004.6103] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:20:04 compute-0 NetworkManager[49836]: <info>  [1772022004.6108] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.611 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:20:04 compute-0 ovn_controller[147040]: 2026-02-25T12:20:04Z|00135|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 12:20:04 compute-0 ovn_controller[147040]: 2026-02-25T12:20:04Z|00136|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.624 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:04 compute-0 systemd-machined[210048]: New machine qemu-29-instance-00000011.
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.632 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.632 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.634 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcece5b0-93dd-40a3-9bf8-9c59d6e41ee2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000011.
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.645 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.645 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.654 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.654 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.670 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcdd8e2-6d85-4760-9b05-098891cb5abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.677 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f3375240-3b17-4a9c-8f77-f7432d495cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.699 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.701 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.712 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.712 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3abf94e9-be8c-4df0-8661-55145f769179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.749 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[869986c7-a27f-432d-b14a-bb7f2417a7f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266484, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.770 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d38a1f36-5089-4eb8-a475-4fde47253ff1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266485, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266485, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:04 compute-0 nova_compute[244014]: 2026-02-25 12:20:04.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.780 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.781 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.782 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.075 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.076 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.70900002960116GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.077 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.077 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.163 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 52f927ad-a417-489f-9f92-87bc3433649d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7a6ab503-d433-40a7-9395-3d5660e852c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d9b67bce-8a7c-4f49-9cab-3e20377ca630 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 89488b9f-7c53-4e00-ad62-837e33a76dae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.166 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.166 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.185 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.209 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.209 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.223 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 52f927ad-a417-489f-9f92-87bc3433649d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.224 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022005.2228303, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.229 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.230 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.234 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.235 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.253 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.283 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.283 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.284 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.285 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.285 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.286 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.300 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.308 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.309 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022005.229014, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.309 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.333 244018 DEBUG nova.network.neutron [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:05 compute-0 ceph-mon[76335]: pgmap v1051: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 428 op/s
Feb 25 12:20:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/792541830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.345 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.348 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.364 244018 INFO nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 1.21 seconds to deallocate network for instance.
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.387 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.408 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.462 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.480 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:05 compute-0 nova_compute[244014]: 2026-02-25 12:20:05.516 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 277 op/s
Feb 25 12:20:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518581231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.086 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.093 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.138 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.234 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.234 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.236 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.255 244018 DEBUG nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.255 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.256 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 DEBUG nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 WARNING nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received unexpected event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with vm_state deleted and task_state None.
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.307 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.308 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1518581231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.385 244018 DEBUG oslo_concurrency.processutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.451 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.453 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.454 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.455 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.455 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.456 244018 WARNING nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.456 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.458 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.532 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618421651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.896 244018 DEBUG oslo_concurrency.processutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.901 244018 DEBUG nova.compute.provider_tree [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.920 244018 DEBUG nova.scheduler.client.report [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.968 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.972 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:06 compute-0 nova_compute[244014]: 2026-02-25 12:20:06.972 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.024 244018 INFO nova.scheduler.client.report [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Deleted allocations for instance 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.075 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.122 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:07 compute-0 ovn_controller[147040]: 2026-02-25T12:20:07Z|00137|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:20:07 compute-0 ovn_controller[147040]: 2026-02-25T12:20:07Z|00138|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 12:20:07 compute-0 ovn_controller[147040]: 2026-02-25T12:20:07Z|00139|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.237 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:07 compute-0 ceph-mon[76335]: pgmap v1052: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 277 op/s
Feb 25 12:20:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2618421651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.947 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.948 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.981 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.982 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-deleted-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.983 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.984 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.984 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.985 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.985 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:07 compute-0 nova_compute[244014]: 2026-02-25 12:20:07.986 244018 WARNING nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 12:20:08 compute-0 nova_compute[244014]: 2026-02-25 12:20:08.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:08 compute-0 nova_compute[244014]: 2026-02-25 12:20:08.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:08 compute-0 sudo[266572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:08 compute-0 sudo[266572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:08 compute-0 sudo[266572]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:09 compute-0 sudo[266597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 12:20:09 compute-0 sudo[266597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 25 12:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 25 12:20:09 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 25 12:20:09 compute-0 ceph-mon[76335]: pgmap v1053: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 12:20:09 compute-0 ceph-mon[76335]: osdmap e145: 3 total, 3 up, 3 in
Feb 25 12:20:09 compute-0 sudo[266597]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:20:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:20:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:09 compute-0 ovn_controller[147040]: 2026-02-25T12:20:09Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 12:20:09 compute-0 ovn_controller[147040]: 2026-02-25T12:20:09Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 12:20:09 compute-0 sudo[266642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:09 compute-0 sudo[266642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:09 compute-0 sudo[266642]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:09 compute-0 sudo[266667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:20:09 compute-0 sudo[266667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 12:20:09 compute-0 sudo[266667]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:10 compute-0 sudo[266723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:10 compute-0 sudo[266723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:10 compute-0 sudo[266723]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:10 compute-0 sudo[266748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 25 12:20:10 compute-0 sudo[266748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.309 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.311 244018 INFO nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Terminating instance
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.312 244018 DEBUG nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:10 compute-0 kernel: tap55015950-cf (unregistering): left promiscuous mode
Feb 25 12:20:10 compute-0 NetworkManager[49836]: <info>  [1772022010.3546] device (tap55015950-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 ovn_controller[147040]: 2026-02-25T12:20:10Z|00140|binding|INFO|Releasing lport 55015950-cf1b-4183-802f-22f661123534 from this chassis (sb_readonly=0)
Feb 25 12:20:10 compute-0 ovn_controller[147040]: 2026-02-25T12:20:10Z|00141|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 down in Southbound
Feb 25 12:20:10 compute-0 ovn_controller[147040]: 2026-02-25T12:20:10Z|00142|binding|INFO|Removing iface tap55015950-cf ovn-installed in OVS
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.368 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:da:42 10.100.0.12'], port_security=['fa:16:3e:c6:da:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=55015950-cf1b-4183-802f-22f661123534) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.370 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 55015950-cf1b-4183-802f-22f661123534 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.386 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7380ce0-751e-42cd-88e2-fa372e3d359c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb 25 12:20:10 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 14.578s CPU time.
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.412 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5da462da-f7de-4d4d-8ebc-c434db4f5ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 systemd-machined[210048]: Machine qemu-23-instance-00000015 terminated.
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd33ee58-df8e-411f-a677-fdedc033ec37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.435 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cc1b73-267b-424c-9e7b-1ee82483bd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.446 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20e4b836-07b4-48d9-905c-e239da204ac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266801, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.456 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5803a83a-6038-4bd7-8dc4-d1a9ebf63497]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266802, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266802, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.457 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 sudo[266748]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.464 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:20:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:20:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.545 244018 INFO nova.virt.libvirt.driver [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance destroyed successfully.
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.547 244018 DEBUG nova.objects.instance [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:10 compute-0 sudo[266804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:10 compute-0 sudo[266804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:10 compute-0 sudo[266804]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:10 compute-0 sudo[266839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:20:10 compute-0 sudo[266839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.725 244018 DEBUG nova.virt.libvirt.vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:25Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.726 244018 DEBUG nova.network.os_vif_util [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.727 244018 DEBUG nova.network.os_vif_util [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.728 244018 DEBUG os_vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.731 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55015950-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:10 compute-0 nova_compute[244014]: 2026-02-25 12:20:10.743 244018 INFO os_vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf')
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.869139504 +0000 UTC m=+0.049212331 container create e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 12:20:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.881 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:10 compute-0 systemd[1]: Started libpod-conmon-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope.
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.847583811 +0000 UTC m=+0.027656708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.972191286 +0000 UTC m=+0.152264173 container init e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.980336728 +0000 UTC m=+0.160409585 container start e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.983878788 +0000 UTC m=+0.163951685 container attach e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:20:10 compute-0 dazzling_heyrovsky[266910]: 167 167
Feb 25 12:20:10 compute-0 systemd[1]: libpod-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope: Deactivated successfully.
Feb 25 12:20:10 compute-0 conmon[266910]: conmon e81a562e303c57a8feb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope/container/memory.events
Feb 25 12:20:10 compute-0 podman[266893]: 2026-02-25 12:20:10.988318845 +0000 UTC m=+0.168391692 container died e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 12:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe0ceb13a9c74b88e37e7e6865491b4bb4857ee76c17e5d003e0cd3232fdfa2-merged.mount: Deactivated successfully.
Feb 25 12:20:11 compute-0 podman[266893]: 2026-02-25 12:20:11.037193035 +0000 UTC m=+0.217265882 container remove e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:20:11 compute-0 systemd[1]: libpod-conmon-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope: Deactivated successfully.
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.083 244018 INFO nova.virt.libvirt.driver [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deleting instance files /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_del
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.084 244018 INFO nova.virt.libvirt.driver [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deletion of /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_del complete
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.189 244018 INFO nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 0.88 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.190 244018 DEBUG oslo.service.loopingcall [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.190 244018 DEBUG nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.191 244018 DEBUG nova.network.neutron [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.262828435 +0000 UTC m=+0.072438252 container create dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.221397866 +0000 UTC m=+0.031007703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:11 compute-0 systemd[1]: Started libpod-conmon-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope.
Feb 25 12:20:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.387673287 +0000 UTC m=+0.197283164 container init dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.39620344 +0000 UTC m=+0.205813247 container start dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.400237565 +0000 UTC m=+0.209847362 container attach dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:11 compute-0 ceph-mon[76335]: pgmap v1055: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:20:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.567 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:11 compute-0 nova_compute[244014]: 2026-02-25 12:20:11.570 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:20:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 555 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 364 op/s
Feb 25 12:20:11 compute-0 practical_payne[266949]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:20:11 compute-0 practical_payne[266949]: --> All data devices are unavailable
Feb 25 12:20:11 compute-0 systemd[1]: libpod-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope: Deactivated successfully.
Feb 25 12:20:11 compute-0 podman[266933]: 2026-02-25 12:20:11.903245517 +0000 UTC m=+0.712855334 container died dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.025 244018 DEBUG nova.network.neutron [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.040 244018 INFO nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 0.85 seconds to deallocate network for instance.
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.094 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.095 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.125 244018 DEBUG nova.compute.manager [req-32bb59e6-db67-4b08-8483-b0c8bacc911b req-03142469-1835-4b31-811f-83d71e6a600a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-deleted-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e-merged.mount: Deactivated successfully.
Feb 25 12:20:12 compute-0 podman[266933]: 2026-02-25 12:20:12.2544745 +0000 UTC m=+1.064084317 container remove dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:20:12 compute-0 systemd[1]: libpod-conmon-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope: Deactivated successfully.
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.281 244018 DEBUG oslo_concurrency.processutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:12 compute-0 sudo[266839]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:12 compute-0 sudo[266982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:12 compute-0 sudo[266982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:12 compute-0 sudo[266982]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:12 compute-0 sudo[267008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:20:12 compute-0 sudo[267008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:12 compute-0 sshd-session[266978]: Received disconnect from 45.148.10.157 port 41870:11:  [preauth]
Feb 25 12:20:12 compute-0 sshd-session[266978]: Disconnected from authenticating user root 45.148.10.157 port 41870 [preauth]
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.759285573 +0000 UTC m=+0.062974123 container create 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.717206756 +0000 UTC m=+0.020895296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:12 compute-0 systemd[1]: Started libpod-conmon-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope.
Feb 25 12:20:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129356601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.857130087 +0000 UTC m=+0.160818647 container init 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.855 244018 DEBUG oslo_concurrency.processutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.863279322 +0000 UTC m=+0.166967852 container start 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.866141203 +0000 UTC m=+0.169829723 container attach 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:12 compute-0 systemd[1]: libpod-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope: Deactivated successfully.
Feb 25 12:20:12 compute-0 sad_goldstine[267080]: 167 167
Feb 25 12:20:12 compute-0 conmon[267080]: conmon 933c276668caefb25389 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope/container/memory.events
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.87234075 +0000 UTC m=+0.176029270 container died 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.870 244018 DEBUG nova.compute.provider_tree [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd329842d05144bc3c81cc132f5fb82920dfff7494b56222cfbcfd265fbd6c53-merged.mount: Deactivated successfully.
Feb 25 12:20:12 compute-0 podman[267063]: 2026-02-25 12:20:12.905661568 +0000 UTC m=+0.209350088 container remove 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.905 244018 DEBUG nova.scheduler.client.report [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:12 compute-0 systemd[1]: libpod-conmon-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope: Deactivated successfully.
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.947 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:12 compute-0 nova_compute[244014]: 2026-02-25 12:20:12.975 244018 INFO nova.scheduler.client.report [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.038 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:13 compute-0 podman[267106]: 2026-02-25 12:20:13.080227875 +0000 UTC m=+0.050463107 container create 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:13 compute-0 systemd[1]: Started libpod-conmon-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope.
Feb 25 12:20:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:13 compute-0 podman[267106]: 2026-02-25 12:20:13.062405667 +0000 UTC m=+0.032640909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:13 compute-0 podman[267106]: 2026-02-25 12:20:13.179974693 +0000 UTC m=+0.150209955 container init 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 12:20:13 compute-0 podman[267106]: 2026-02-25 12:20:13.194419303 +0000 UTC m=+0.164654555 container start 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:20:13 compute-0 podman[267106]: 2026-02-25 12:20:13.198049197 +0000 UTC m=+0.168284509 container attach 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]: {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     "0": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "devices": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "/dev/loop3"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             ],
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_name": "ceph_lv0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_size": "21470642176",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "name": "ceph_lv0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "tags": {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_name": "ceph",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.crush_device_class": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.encrypted": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.objectstore": "bluestore",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_id": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.vdo": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.with_tpm": "0"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             },
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "vg_name": "ceph_vg0"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         }
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     ],
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     "1": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "devices": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "/dev/loop4"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             ],
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_name": "ceph_lv1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_size": "21470642176",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "name": "ceph_lv1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "tags": {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_name": "ceph",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.crush_device_class": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.encrypted": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.objectstore": "bluestore",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_id": "1",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.vdo": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.with_tpm": "0"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             },
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "vg_name": "ceph_vg1"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         }
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     ],
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     "2": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "devices": [
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "/dev/loop5"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             ],
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_name": "ceph_lv2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_size": "21470642176",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "name": "ceph_lv2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "tags": {
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.cluster_name": "ceph",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.crush_device_class": "",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.encrypted": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.objectstore": "bluestore",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osd_id": "2",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.vdo": "0",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:                 "ceph.with_tpm": "0"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             },
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "type": "block",
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:             "vg_name": "ceph_vg2"
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:         }
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]:     ]
Feb 25 12:20:13 compute-0 beautiful_thompson[267123]: }
Feb 25 12:20:13 compute-0 ceph-mon[76335]: pgmap v1056: 305 pgs: 305 active+clean; 555 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 364 op/s
Feb 25 12:20:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4129356601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:13 compute-0 systemd[1]: libpod-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope: Deactivated successfully.
Feb 25 12:20:13 compute-0 podman[267132]: 2026-02-25 12:20:13.573627962 +0000 UTC m=+0.026689551 container died 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb-merged.mount: Deactivated successfully.
Feb 25 12:20:13 compute-0 podman[267132]: 2026-02-25 12:20:13.61150119 +0000 UTC m=+0.064562739 container remove 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:20:13 compute-0 systemd[1]: libpod-conmon-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope: Deactivated successfully.
Feb 25 12:20:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:20:13 compute-0 sudo[267008]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:13 compute-0 sudo[267148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:20:13 compute-0 sudo[267148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:13 compute-0 sudo[267148]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.788 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:13 compute-0 nova_compute[244014]: 2026-02-25 12:20:13.788 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:13 compute-0 sudo[267173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:20:13 compute-0 sudo[267173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.111898257 +0000 UTC m=+0.062918791 container create 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.150 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021999.149009, d44c3dbc-e4bc-4235-bd88-b39616473248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.150 244018 INFO nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Stopped (Lifecycle Event)
Feb 25 12:20:14 compute-0 systemd[1]: Started libpod-conmon-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope.
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.175 244018 DEBUG nova.compute.manager [None req-d3b9eff6-8cdf-426a-82f8-24eba9b515a6 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.083805258 +0000 UTC m=+0.034825852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.181 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.181 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.184 244018 INFO nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Terminating instance
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.185 244018 DEBUG nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.207364033 +0000 UTC m=+0.158384567 container init 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:20:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.217143301 +0000 UTC m=+0.168163835 container start 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.223472931 +0000 UTC m=+0.174493515 container attach 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:20:14 compute-0 awesome_nobel[267228]: 167 167
Feb 25 12:20:14 compute-0 systemd[1]: libpod-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope: Deactivated successfully.
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.230236934 +0000 UTC m=+0.181257458 container died 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:20:14 compute-0 kernel: tap24e2d6b3-f0 (unregistering): left promiscuous mode
Feb 25 12:20:14 compute-0 NetworkManager[49836]: <info>  [1772022014.2441] device (tap24e2d6b3-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:14 compute-0 ovn_controller[147040]: 2026-02-25T12:20:14Z|00143|binding|INFO|Releasing lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d from this chassis (sb_readonly=0)
Feb 25 12:20:14 compute-0 ovn_controller[147040]: 2026-02-25T12:20:14Z|00144|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d down in Southbound
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 ovn_controller[147040]: 2026-02-25T12:20:14Z|00145|binding|INFO|Removing iface tap24e2d6b3-f0 ovn-installed in OVS
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb43b2d1d0e376b0c0fee06b12df7609bc4102e0bd66d2d005b5397d9e9e0d4-merged.mount: Deactivated successfully.
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.264 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:b0:82 10.100.0.9'], port_security=['fa:16:3e:4b:b0:82 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=24e2d6b3-f0d8-4603-8c1e-da65e164050d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.266 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 24e2d6b3-f0d8-4603-8c1e-da65e164050d in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.275 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:20:14 compute-0 podman[267212]: 2026-02-25 12:20:14.280077492 +0000 UTC m=+0.231098016 container remove 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:20:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 25 12:20:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 14.846s CPU time.
Feb 25 12:20:14 compute-0 systemd-machined[210048]: Machine qemu-22-instance-00000014 terminated.
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.292 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a48b2c1-828a-4e45-9ca8-1611040fdf76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 systemd[1]: libpod-conmon-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope: Deactivated successfully.
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.327 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7216aa64-9153-41c4-9df0-0c544975a03b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.331 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3fe495-5b43-4eff-b4df-70f0439c5fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.357 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[27029dbc-5c12-4627-b4de-5788df8d425a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d386dc5-1ead-4aab-84e8-199c18137a2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267259, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40e5200c-6121-4a0d-abd8-df2ef8f24206]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267265, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267265, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.417 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.425 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.426 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.426 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.428 244018 INFO nova.virt.libvirt.driver [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance destroyed successfully.
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.428 244018 DEBUG nova.objects.instance [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.440 244018 DEBUG nova.virt.libvirt.vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:10Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.440 244018 DEBUG nova.network.os_vif_util [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.441 244018 DEBUG nova.network.os_vif_util [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.441 244018 DEBUG os_vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.444 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24e2d6b3-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.449 244018 INFO os_vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0')
Feb 25 12:20:14 compute-0 podman[267272]: 2026-02-25 12:20:14.484686314 +0000 UTC m=+0.058881147 container create 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:20:14 compute-0 systemd[1]: Started libpod-conmon-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope.
Feb 25 12:20:14 compute-0 podman[267272]: 2026-02-25 12:20:14.466773834 +0000 UTC m=+0.040968747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:20:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:14 compute-0 podman[267272]: 2026-02-25 12:20:14.58719487 +0000 UTC m=+0.161389773 container init 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 12:20:14 compute-0 podman[267272]: 2026-02-25 12:20:14.600654403 +0000 UTC m=+0.174849276 container start 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 12:20:14 compute-0 podman[267272]: 2026-02-25 12:20:14.608208468 +0000 UTC m=+0.182403401 container attach 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.751 244018 INFO nova.virt.libvirt.driver [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deleting instance files /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630_del
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.752 244018 INFO nova.virt.libvirt.driver [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deletion of /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630_del complete
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.766 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.767 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.805 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.806 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.806 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.808 244018 WARNING nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received unexpected event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 for instance with vm_state deleted and task_state None.
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.834 244018 INFO nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.834 244018 DEBUG oslo.service.loopingcall [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.835 244018 DEBUG nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:14 compute-0 nova_compute[244014]: 2026-02-25 12:20:14.835 244018 DEBUG nova.network.neutron [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:15 compute-0 lvm[267389]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:20:15 compute-0 lvm[267389]: VG ceph_vg0 finished
Feb 25 12:20:15 compute-0 lvm[267391]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:20:15 compute-0 lvm[267391]: VG ceph_vg1 finished
Feb 25 12:20:15 compute-0 lvm[267392]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:20:15 compute-0 lvm[267392]: VG ceph_vg2 finished
Feb 25 12:20:15 compute-0 hungry_chaum[267313]: {}
Feb 25 12:20:15 compute-0 systemd[1]: libpod-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Deactivated successfully.
Feb 25 12:20:15 compute-0 podman[267272]: 2026-02-25 12:20:15.405466102 +0000 UTC m=+0.979660975 container died 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:20:15 compute-0 systemd[1]: libpod-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Consumed 1.102s CPU time.
Feb 25 12:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd-merged.mount: Deactivated successfully.
Feb 25 12:20:15 compute-0 podman[267272]: 2026-02-25 12:20:15.452203732 +0000 UTC m=+1.026398595 container remove 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:20:15 compute-0 systemd[1]: libpod-conmon-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Deactivated successfully.
Feb 25 12:20:15 compute-0 nova_compute[244014]: 2026-02-25 12:20:15.500 244018 DEBUG nova.network.neutron [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:15 compute-0 sudo[267173]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:15 compute-0 ceph-mon[76335]: pgmap v1057: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:20:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:20:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 49K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 12K writes, 3441 syncs, 3.51 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6253 writes, 24K keys, 6253 commit groups, 1.0 writes per commit group, ingest: 26.67 MB, 0.04 MB/s
                                           Interval WAL: 6253 writes, 2445 syncs, 2.56 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:20:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:20:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:20:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:15 compute-0 nova_compute[244014]: 2026-02-25 12:20:15.552 244018 INFO nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 0.72 seconds to deallocate network for instance.
Feb 25 12:20:15 compute-0 sudo[267407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:20:15 compute-0 sudo[267407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:20:15 compute-0 sudo[267407]: pam_unix(sudo:session): session closed for user root
Feb 25 12:20:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:20:15 compute-0 nova_compute[244014]: 2026-02-25 12:20:15.657 244018 DEBUG nova.compute.manager [req-3b3cafdb-2733-4910-8469-7e6152f9dac6 req-eff15c67-ab79-404b-b050-9346568e9fdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-deleted-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:15 compute-0 nova_compute[244014]: 2026-02-25 12:20:15.721 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:15 compute-0 nova_compute[244014]: 2026-02-25 12:20:15.722 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:16 compute-0 ovn_controller[147040]: 2026-02-25T12:20:16Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:20:16 compute-0 ovn_controller[147040]: 2026-02-25T12:20:16Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.265 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.265 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.266 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.266 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 WARNING nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state deleted and task_state None.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.268 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.268 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 WARNING nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state deleted and task_state None.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.280 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.281 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.281 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.282 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.282 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.283 244018 INFO nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Terminating instance
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.285 244018 DEBUG nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:16 compute-0 kernel: tapb2336583-1a (unregistering): left promiscuous mode
Feb 25 12:20:16 compute-0 NetworkManager[49836]: <info>  [1772022016.3267] device (tapb2336583-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:16 compute-0 ovn_controller[147040]: 2026-02-25T12:20:16Z|00146|binding|INFO|Releasing lport b2336583-1aaa-4789-8d4f-a3a14997891d from this chassis (sb_readonly=0)
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 ovn_controller[147040]: 2026-02-25T12:20:16Z|00147|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d down in Southbound
Feb 25 12:20:16 compute-0 ovn_controller[147040]: 2026-02-25T12:20:16Z|00148|binding|INFO|Removing iface tapb2336583-1a ovn-installed in OVS
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.340 244018 DEBUG oslo_concurrency.processutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.360 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:d7:29 10.100.0.3'], port_security=['fa:16:3e:23:d7:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b2336583-1aaa-4789-8d4f-a3a14997891d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b2336583-1aaa-4789-8d4f-a3a14997891d in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 unbound from our chassis
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.363 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dc6c2e-c010-4798-8572-c6561d11c5c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 namespace which is not needed anymore
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb 25 12:20:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 14.130s CPU time.
Feb 25 12:20:16 compute-0 systemd-machined[210048]: Machine qemu-24-instance-00000016 terminated.
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : haproxy version is 2.8.14-c23fe91
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : path to executable is /usr/sbin/haproxy
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : Exiting Master process...
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : Exiting Master process...
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [ALERT]    (264078) : Current worker (264080) exited with code 143 (Terminated)
Feb 25 12:20:16 compute-0 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : All workers exited. Exiting... (0)
Feb 25 12:20:16 compute-0 systemd[1]: libpod-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope: Deactivated successfully.
Feb 25 12:20:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:20:16 compute-0 conmon[264074]: conmon 478b53e8a0b2cb418603 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope/container/memory.events
Feb 25 12:20:16 compute-0 podman[267456]: 2026-02-25 12:20:16.52444851 +0000 UTC m=+0.059529745 container died 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.528 244018 INFO nova.virt.libvirt.driver [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance destroyed successfully.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.529 244018 DEBUG nova.objects.instance [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'resources' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35-userdata-shm.mount: Deactivated successfully.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.557 244018 DEBUG nova.virt.libvirt.vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:30Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0335c2e7dd04e96857217313fc187a38e2ae19855220b0923b3e59e3330f4337-merged.mount: Deactivated successfully.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.558 244018 DEBUG nova.network.os_vif_util [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.559 244018 DEBUG nova.network.os_vif_util [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.560 244018 DEBUG os_vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.563 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2336583-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 podman[267456]: 2026-02-25 12:20:16.571273532 +0000 UTC m=+0.106354747 container cleanup 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.574 244018 INFO os_vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a')
Feb 25 12:20:16 compute-0 systemd[1]: libpod-conmon-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope: Deactivated successfully.
Feb 25 12:20:16 compute-0 podman[267515]: 2026-02-25 12:20:16.634934093 +0000 UTC m=+0.042614803 container remove 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[560f46f3-1fef-4895-aa6b-91412b545521]: (4, ('Wed Feb 25 12:20:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 (478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35)\n478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35\nWed Feb 25 12:20:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 (478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35)\n478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6414961a-5f8d-44e4-8890-d3a81a432c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.647 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 kernel: tap41c706f5-60: left promiscuous mode
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e3092a-307e-4979-b3dc-9263cf6bd554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f02e103-b620-4bfa-b6af-e73a51a70243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.687 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86262362-83f9-4c4c-b0ec-79614b7d1ff9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[62d6cb0a-625b-4b76-a50a-978fe7ce6c7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393624, 'reachable_time': 35159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267548, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d41c706f5\x2d6f0b\x2d47a8\x2d91a4\x2d16f87e2a0571.mount: Deactivated successfully.
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.704 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:20:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.704 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[897a466a-2281-48a5-93e7-89eba66cb404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.850 244018 INFO nova.virt.libvirt.driver [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deleting instance files /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_del
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.852 244018 INFO nova.virt.libvirt.driver [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deletion of /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_del complete
Feb 25 12:20:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211447400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.877 244018 DEBUG oslo_concurrency.processutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.884 244018 DEBUG nova.compute.provider_tree [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.908 244018 DEBUG nova.scheduler.client.report [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.938 244018 INFO nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.939 244018 DEBUG oslo.service.loopingcall [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.940 244018 DEBUG nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.940 244018 DEBUG nova.network.neutron [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:16 compute-0 nova_compute[244014]: 2026-02-25 12:20:16.956 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.328 244018 INFO nova.scheduler.client.report [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance d9b67bce-8a7c-4f49-9cab-3e20377ca630
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.408 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:17 compute-0 ceph-mon[76335]: pgmap v1058: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:20:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1211447400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.614 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.615 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.616 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.616 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.617 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.618 244018 INFO nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Terminating instance
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.620 244018 DEBUG nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 5.1 MiB/s wr, 240 op/s
Feb 25 12:20:17 compute-0 kernel: tap179cbc6a-d7 (unregistering): left promiscuous mode
Feb 25 12:20:17 compute-0 NetworkManager[49836]: <info>  [1772022017.6769] device (tap179cbc6a-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 ovn_controller[147040]: 2026-02-25T12:20:17Z|00149|binding|INFO|Releasing lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 from this chassis (sb_readonly=0)
Feb 25 12:20:17 compute-0 ovn_controller[147040]: 2026-02-25T12:20:17Z|00150|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 down in Southbound
Feb 25 12:20:17 compute-0 ovn_controller[147040]: 2026-02-25T12:20:17Z|00151|binding|INFO|Removing iface tap179cbc6a-d7 ovn-installed in OVS
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.706 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:1d:06 10.100.0.7'], port_security=['fa:16:3e:d2:1d:06 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7a6ab503-d433-40a7-9395-3d5660e852c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.708 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.710 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35b2136d-36f0-49ec-bd70-8234fe71d84e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb 25 12:20:17 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.060s CPU time.
Feb 25 12:20:17 compute-0 systemd-machined[210048]: Machine qemu-20-instance-00000012 terminated.
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67befab2-3dcf-4b1c-b737-74bbd0585276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.755 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca27315-7e57-41db-8b7a-1da3fe43bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.776 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2938dd83-025f-4961-beac-03e50c690ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d4578e-0bf2-4fc0-a43a-4386083094f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267564, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77dfc9c0-b333-49c4-828a-1f2d2ce51017]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267565, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267565, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.807 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.817 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.866 244018 INFO nova.virt.libvirt.driver [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance destroyed successfully.
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.867 244018 DEBUG nova.objects.instance [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.888 244018 DEBUG nova.virt.libvirt.vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.889 244018 DEBUG nova.network.os_vif_util [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.890 244018 DEBUG nova.network.os_vif_util [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.890 244018 DEBUG os_vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.893 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap179cbc6a-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:17 compute-0 nova_compute[244014]: 2026-02-25 12:20:17.900 244018 INFO os_vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7')
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.047 244018 DEBUG nova.network.neutron [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.110 244018 INFO nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 1.17 seconds to deallocate network for instance.
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.191 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.191 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.213 244018 INFO nova.virt.libvirt.driver [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deleting instance files /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9_del
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.213 244018 INFO nova.virt.libvirt.driver [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deletion of /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9_del complete
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.225 244018 DEBUG nova.compute.manager [req-ab4c2ba9-29bb-44ca-a9bf-4cf9bc6ac21b req-9a668c4b-efbe-4ef6-8ba9-888db177a32d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-deleted-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.300 244018 INFO nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.301 244018 DEBUG oslo.service.loopingcall [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.301 244018 DEBUG nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.302 244018 DEBUG nova.network.neutron [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.315 244018 DEBUG oslo_concurrency.processutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.359 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.361 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.361 244018 WARNING nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state deleted and task_state None.
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 WARNING nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state deleted and task_state None.
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022003.795866, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.797 244018 INFO nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Stopped (Lifecycle Event)
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.851 244018 DEBUG nova.compute.manager [None req-467312a5-9207-42af-983b-b54605920be0 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811217120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.911 244018 DEBUG oslo_concurrency.processutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.918 244018 DEBUG nova.compute.provider_tree [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:18 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.945 244018 DEBUG nova.scheduler.client.report [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:19 compute-0 nova_compute[244014]: 2026-02-25 12:20:18.999 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:19 compute-0 nova_compute[244014]: 2026-02-25 12:20:19.038 244018 INFO nova.scheduler.client.report [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Deleted allocations for instance de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75
Feb 25 12:20:19 compute-0 nova_compute[244014]: 2026-02-25 12:20:19.154 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:20:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 4026 syncs, 3.41 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6594 writes, 25K keys, 6594 commit groups, 1.0 writes per commit group, ingest: 27.32 MB, 0.05 MB/s
                                           Interval WAL: 6594 writes, 2613 syncs, 2.52 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:20:19 compute-0 ceph-mon[76335]: pgmap v1059: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 5.1 MiB/s wr, 240 op/s
Feb 25 12:20:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3811217120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 4.9 MiB/s wr, 231 op/s
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.298 244018 DEBUG nova.network.neutron [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.332 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.333 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.337 244018 WARNING nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received unexpected event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with vm_state active and task_state deleting.
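
The acquire/release pairs above show nova serializing access to its per-instance event queue by lock name ("<uuid>-events"). A minimal sketch of that pattern using oslo.concurrency's lockutils follows; the event table and function names are illustrative, not nova's actual internals:

    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> pending event names (illustrative)

    def pop_instance_event(instance_uuid, event_name):
        # lockutils registers locks by name, so every caller using the
        # same "<uuid>-events" string is serialized, as in the log above.
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            pending = _events.get(instance_uuid, [])
            if event_name in pending:
                pending.remove(event_name)
                return event_name
            return None  # -> "No waiting events found dispatching ..."
        return _pop_event()
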
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.345 244018 INFO nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 2.04 seconds to deallocate network for instance.
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.406 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.407 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.493 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.494 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.495 244018 DEBUG nova.objects.instance [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:20 compute-0 nova_compute[244014]: 2026-02-25 12:20:20.502 244018 DEBUG oslo_concurrency.processutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:20 compute-0 podman[267624]: 2026-02-25 12:20:20.732765264 +0000 UTC m=+0.076500938 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:20:20 compute-0 podman[267638]: 2026-02-25 12:20:20.772029411 +0000 UTC m=+0.114785727 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
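
The two health_status=healthy events above are podman periodically running the containers' configured test ('test': '/openstack/healthcheck' in the config_data). The same probe can be driven by hand; a small sketch, with the container names taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the configured test command and
    # exits 0 when the container is healthy.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print('%s: %s' % (name, 'healthy' if rc == 0 else 'unhealthy'))
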
Feb 25 12:20:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617138349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.035 244018 DEBUG oslo_concurrency.processutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
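
The processutils lines show the resource tracker shelling out to `ceph df` to size its RBD-backed disk inventory. A sketch of the same probe, with the command line copied verbatim from the log; the JSON key names assume the layout of recent Ceph releases:

    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    stats = json.loads(out.stdout)
    # 'stats' totals are cluster-wide; per-pool figures live under 'pools'.
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print('cluster %.1f%% full' % (100.0 * (total - avail) / total))
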
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.043 244018 DEBUG nova.compute.provider_tree [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.067 244018 DEBUG nova.objects.instance [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.073 244018 DEBUG nova.scheduler.client.report [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
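
The inventory record above is what placement uses to size this node: for each resource class the allocatable amount is (total - reserved) * allocation_ratio. A worked check of the logged figures:

    # Figures copied from the inventory record above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print('%s: %.1f allocatable' % (rc, cap))
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 52.2
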
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.084 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.105 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.143 244018 INFO nova.scheduler.client.report [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 7a6ab503-d433-40a7-9395-3d5660e852c9
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.226 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.388 244018 DEBUG nova.policy [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
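
The policy DEBUG line above is a non-fatal authorization probe: nova asks oslo.policy whether the requester may attach an external network and logs the denial instead of raising. A minimal sketch of such a soft check; the Enforcer wiring and the rule default are illustrative, with the credentials taken from the log:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))
    creds = {'user_id': 'ea407839a07d46608b6348caf676d12d',
             'project_id': '6a771ad0ce454d809d66825f69248fa7',
             'roles': ['reader', 'member']}
    # do_raise=False turns a denial into a False return instead of an
    # exception, which is why the log shows only a DEBUG line.
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds, do_raise=False)
    print(allowed)  # False for a reader/member token
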
Feb 25 12:20:21 compute-0 ceph-mon[76335]: pgmap v1060: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 4.9 MiB/s wr, 231 op/s
Feb 25 12:20:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/617138349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 367 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 4.3 MiB/s wr, 235 op/s
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.718 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.718 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.721 244018 INFO nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Terminating instance
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.723 244018 DEBUG nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:21 compute-0 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 12:20:21 compute-0 NetworkManager[49836]: <info>  [1772022021.7903] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 ovn_controller[147040]: 2026-02-25T12:20:21Z|00152|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 12:20:21 compute-0 ovn_controller[147040]: 2026-02-25T12:20:21Z|00153|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 12:20:21 compute-0 ovn_controller[147040]: 2026-02-25T12:20:21Z|00154|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.810 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
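
The "Matched UPDATE" line above is ovsdbapp's row-event machinery: the agent registers an event keyed on (events, table, conditions) and the IDL invokes it when a matching Port_Binding row changes. A minimal sketch of such an event class; the handler body is illustrative, while the constructor arguments mirror the matched event in the log:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', no conditions,
            # exactly as reported in the matched event above.
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns;
            # the agent reacts to the up/chassis transition by unbinding
            # the port from this chassis. Here we only print.
            if getattr(old, 'chassis', None):
                print('port %s went down' % row.logical_port)
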
Feb 25 12:20:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.812 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 12:20:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.815 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:20:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.817 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27fe5341-d672-4b08-9da6-1e7454b24711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.817 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 namespace which is not needed anymore
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 12:20:21 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Consumed 11.708s CPU time.
Feb 25 12:20:21 compute-0 systemd-machined[210048]: Machine qemu-29-instance-00000011 terminated.
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.961 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.962 244018 DEBUG nova.objects.instance [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:21 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : haproxy version is 2.8.14-c23fe91
Feb 25 12:20:21 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : path to executable is /usr/sbin/haproxy
Feb 25 12:20:21 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [WARNING]  (261216) : Exiting Master process...
Feb 25 12:20:21 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [ALERT]    (261216) : Current worker (261218) exited with code 143 (Terminated)
Feb 25 12:20:21 compute-0 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [WARNING]  (261216) : All workers exited. Exiting... (0)
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.980 244018 DEBUG nova.virt.libvirt.vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:07Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.981 244018 DEBUG nova.network.os_vif_util [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.981 244018 DEBUG nova.network.os_vif_util [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.982 244018 DEBUG os_vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.983 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
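
The DelPortCommand transaction above is os-vif detaching the tap from br-int over OVSDB. The equivalent one-shot CLI call, with the port name from the log and --if-exists mirroring the if_exists=True flag:

    import subprocess

    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                    'br-int', 'tap2e503dd2-73'], check=True)
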
Feb 25 12:20:21 compute-0 systemd[1]: libpod-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope: Deactivated successfully.
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:21 compute-0 podman[267711]: 2026-02-25 12:20:21.989280574 +0000 UTC m=+0.063822687 container died 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:20:21 compute-0 nova_compute[244014]: 2026-02-25 12:20:21.992 244018 INFO os_vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 12:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a-userdata-shm.mount: Deactivated successfully.
Feb 25 12:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc838cfd7fef58997060e892c18c0d6fc490da055ecee8b3fae6fbd9365cbb97-merged.mount: Deactivated successfully.
Feb 25 12:20:22 compute-0 podman[267711]: 2026-02-25 12:20:22.035579342 +0000 UTC m=+0.110121455 container cleanup 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:20:22 compute-0 systemd[1]: libpod-conmon-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope: Deactivated successfully.
Feb 25 12:20:22 compute-0 podman[267766]: 2026-02-25 12:20:22.123295467 +0000 UTC m=+0.059343619 container remove 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1e4ed7-de7f-4680-99dd-972c183a5641]: (4, ('Wed Feb 25 12:20:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 (5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a)\n5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a\nWed Feb 25 12:20:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 (5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a)\n5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
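
The privsep reply above captures the kill-script output as it stops and then removes the per-network haproxy container. The same two steps via the podman CLI, with the container name taken from the log:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)
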
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.134 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c12a10-6436-43d0-8f25-2641922f13cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:22 compute-0 kernel: tap1f4cbf9a-40: left promiscuous mode
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42b3925e-e8f2-4e4a-a433-baf80750334c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.172 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc96f1a-d25e-4cb0-a48e-dffaec71e9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.173 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fd8771-6fdd-4f58-963d-90ce339ca2fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.191 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06724f9b-4b02-4d19-8f21-5fae47c491f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389389, 'reachable_time': 39675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267782, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
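
The large reply above is a netlink RTM_NEWLINK dump of 'lo' taken inside the metadata namespace before teardown; neutron's ip_lib drives this through pyroute2. A minimal sketch of the same query (IPRoute() inspects the caller's own namespace; pyroute2's NetNS class offers the same API for a named one):

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            # Each message mirrors the dumped structure: an 'attrs' list
            # of (IFLA_*, value) pairs plus flags, state and MTU.
            print(link.get_attr('IFLA_IFNAME'),
                  link['state'],
                  link.get_attr('IFLA_MTU'))
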
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.194 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:20:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.195 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[531a3d20-d6e4-43a2-af60-f2d3b6f49cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d1f4cbf9a\x2d48ed\x2d4d53\x2db670\x2dcf25c9e6c5f6.mount: Deactivated successfully.
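
remove_netns above deletes the now-empty ovnmeta namespace, after which systemd reaps its bind mount. A sketch of the same call through pyroute2's netns helper (assumption: pyroute2 is the backend, as in neutron's privileged ip_lib); the namespace name is taken from the log:

    from pyroute2 import netns

    netns.remove('ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6')
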
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.273 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully created port: 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.305 244018 INFO nova.virt.libvirt.driver [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.306 244018 INFO nova.virt.libvirt.driver [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.768 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.769 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.770 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.822 244018 INFO nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 1.10 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG oslo.service.loopingcall [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:22 compute-0 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG nova.network.neutron [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.337 244018 DEBUG nova.compute.manager [req-6c5c725d-9a7a-4b4b-9f93-77e8bbe7b58e req-bd2a7472-e697-4260-9601-ee5a097d8bd7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-deleted-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:23 compute-0 ceph-mon[76335]: pgmap v1061: 305 pgs: 305 active+clean; 367 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 4.3 MiB/s wr, 235 op/s
Feb 25 12:20:23 compute-0 ovn_controller[147040]: 2026-02-25T12:20:23Z|00155|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:20:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.833 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully updated port: 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.893 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.893 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:23 compute-0 nova_compute[244014]: 2026-02-25 12:20:23.894 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.139 244018 WARNING nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:20:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.510 244018 DEBUG nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.510 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.511 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.512 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.512 244018 DEBUG nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.513 244018 WARNING nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state deleting.
Feb 25 12:20:24 compute-0 ceph-mon[76335]: pgmap v1062: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.602 244018 DEBUG nova.compute.manager [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.602 244018 DEBUG nova.compute.manager [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.603 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.610 244018 DEBUG nova.network.neutron [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.655 244018 INFO nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 1.83 seconds to deallocate network for instance.
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:24 compute-0 nova_compute[244014]: 2026-02-25 12:20:24.828 244018 DEBUG oslo_concurrency.processutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:20:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.6 total, 600.0 interval
                                           Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2984 syncs, 3.63 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5159 writes, 20K keys, 5159 commit groups, 1.0 writes per commit group, ingest: 21.71 MB, 0.04 MB/s
                                           Interval WAL: 5159 writes, 2085 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
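The derived columns in the RocksDB dump are plain ratios over the 600.0 s reporting interval; a quick check of the interval WAL figures from the block above:

    # Sanity-check of the "writes per sync" and MB/s numbers printed above.
    writes, syncs = 5159, 2085
    print(round(writes / syncs, 2))          # 2.47 writes per sync
    ingest_mb, interval_s = 21.71, 600.0
    print(round(ingest_mb / interval_s, 2))  # ~0.04 MB/s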
Feb 25 12:20:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120177367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.403 244018 DEBUG oslo_concurrency.processutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
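The resource tracker shells out to ceph df through oslo.concurrency's processutils, which is what produces the "Running cmd (subprocess)" / "returned: 0 in 0.575s" pair above. Roughly, as a sketch (nova's real call site lives in its RBD storage driver and may pass extra kwargs):

    import json
    from oslo_concurrency import processutils

    # execute() raises ProcessExecutionError on a non-zero exit status, so
    # reaching json.loads() corresponds to the "returned: 0" log line.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    cluster = json.loads(out)
    # "ceph df --format=json" reports cluster-wide totals under "stats".
    print(cluster['stats']['total_bytes'], cluster['stats']['total_avail_bytes'])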
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.410 244018 DEBUG nova.compute.provider_tree [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.429 244018 DEBUG nova.scheduler.client.report [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
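Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, which is why this 8-core, ~7.5 GiB, 59 GiB host can accept far more vCPUs than physical cores. Worked out from the exact values in the log line above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2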
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.479 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.527 244018 INFO nova.scheduler.client.report [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 52f927ad-a417-489f-9f92-87bc3433649d
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.544 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022010.5433438, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.545 244018 INFO nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Stopped (Lifecycle Event)
Feb 25 12:20:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/120177367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.582 244018 DEBUG nova.compute.manager [None req-35a8a6f4-8bfd-4195-9dfa-3519229e0d54 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:20:25 compute-0 nova_compute[244014]: 2026-02-25 12:20:25.645 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.223 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.268 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.269 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.269 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.275 244018 DEBUG nova.virt.libvirt.vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.275 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.276 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.277 244018 DEBUG os_vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.279 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.282 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07c8ce0b-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.283 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07c8ce0b-0c, col_values=(('external_ids', {'iface-id': '07c8ce0b-0c89-4abc-87c6-99c1024f7dd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:5e:dc', 'vm-uuid': '89488b9f-7c53-4e00-ad62-837e33a76dae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 NetworkManager[49836]: <info>  [1772022026.2866] manager: (tap07c8ce0b-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.293 244018 INFO os_vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')
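The plug that just completed consists of three idempotent OVSDB operations, visible above as AddBridgeCommand, AddPortCommand, and DbSetCommand: ensure br-int exists, add the tap port, and stamp the Interface row's external_ids with iface-id/attached-mac/vm-uuid. The iface-id is what ovn-controller matches against the logical port when it "claims" it a few lines below. An equivalent ovs-vsctl sequence, expressed as a Python sketch (os-vif actually speaks OVSDB directly via ovsdbapp rather than shelling out):

    import subprocess

    def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
        # AddBridgeCommand(may_exist=True) ~ ovs-vsctl --may-exist add-br
        subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', bridge],
                       check=True)
        # AddPortCommand(may_exist=True) + DbSetCommand on the Interface row
        subprocess.run(
            ['ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
             '--', 'set', 'Interface', dev,
             'external_ids:iface-id=%s' % iface_id,
             'external_ids:iface-status=active',
             'external_ids:attached-mac=%s' % mac,
             'external_ids:vm-uuid=%s' % vm_uuid],
            check=True)

    plug_ovs_vif('br-int', 'tap07c8ce0b-0c',
                 '07c8ce0b-0c89-4abc-87c6-99c1024f7dd8',
                 'fa:16:3e:3d:5e:dc',
                 '89488b9f-7c53-4e00-ad62-837e33a76dae')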
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.294 244018 DEBUG nova.virt.libvirt.vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.294 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.295 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.299 244018 DEBUG nova.virt.libvirt.guest [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <target dev="tap07c8ce0b-0c"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]: </interface>
Feb 25 12:20:26 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
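guest.attach_device hands that <interface> XML to libvirt; at the libvirt-python level the call is essentially the following sketch (nova wraps it with retries and the live-versus-persistent flag handling that the detach lines later in this log refer to):

    import libvirt

    INTERFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:3d:5e:dc"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap07c8ce0b-0c"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')
    # AFFECT_LIVE hot-plugs into the running guest; AFFECT_CONFIG also
    # writes the device into the persistent domain definition.
    dom.attachDeviceFlags(
        INTERFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)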
Feb 25 12:20:26 compute-0 kernel: tap07c8ce0b-0c: entered promiscuous mode
Feb 25 12:20:26 compute-0 NetworkManager[49836]: <info>  [1772022026.3165] manager: (tap07c8ce0b-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb 25 12:20:26 compute-0 ovn_controller[147040]: 2026-02-25T12:20:26Z|00156|binding|INFO|Claiming lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for this chassis.
Feb 25 12:20:26 compute-0 ovn_controller[147040]: 2026-02-25T12:20:26Z|00157|binding|INFO|07c8ce0b-0c89-4abc-87c6-99c1024f7dd8: Claiming fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 ovn_controller[147040]: 2026-02-25T12:20:26Z|00158|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 ovn-installed in OVS
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 ovn_controller[147040]: 2026-02-25T12:20:26Z|00159|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 up in Southbound
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.338 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5e:dc 10.100.0.14'], port_security=['fa:16:3e:3d:5e:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.345 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
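The "Matched UPDATE: PortBindingUpdatedEvent" line above comes from ovsdbapp's IDL event machinery: the metadata agent registers row events against the southbound Port_Binding table and reacts when a port's chassis column flips to this host (note old=Port_Binding(chassis=[]) in the match). A watcher of that kind follows roughly this shape; the class below is illustrative, not the agent's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundHereEvent(row_event.RowEvent):
        def __init__(self):
            # Same constructor fields the log repr shows:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries only the columns that changed; chassis=[] -> set
            # is exactly the transition logged above.
            print('port %s bound, provisioning metadata' % row.logical_port)

Once the event fires, the agent provisions the ovnmeta-<network-uuid> namespace; the privsep replies below show its veth (tap08121372-a1) coming up with the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32.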
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a69a26eb-aa68-4648-904f-2a8e16e280d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 systemd-udevd[267812]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:20:26 compute-0 NetworkManager[49836]: <info>  [1772022026.3811] device (tap07c8ce0b-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:20:26 compute-0 NetworkManager[49836]: <info>  [1772022026.3819] device (tap07c8ce0b-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b5432c46-89b2-40a4-8673-9df6a2501657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.399 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f820f82-7a3c-47e4-adb4-d48b36b2c60a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.411 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.412 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.412 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:0c:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.413 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:3d:5e:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b32315-7d94-4e3c-812e-eaafbbe59c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee79c77e-e31f-450e-8e54-fffdbd3daa64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267819, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.450 244018 DEBUG nova.virt.libvirt.guest [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:26 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 12:20:26 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:20:26 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:26 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:20:26 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:20:26 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
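That <nova:instance> document is stored alongside the domain definition through libvirt's metadata API; guest.set_metadata boils down to a call like the following sketch (the XML is abbreviated to just the namespace here, and the flags shown are the usual live-plus-config pair rather than nova's exact arguments):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = '<nova:instance xmlns:nova="%s"/>' % NOVA_NS  # abbreviated

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')
    # Stores the document under the "nova" key/namespace inside <metadata>.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)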
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97987a71-4038-4902-bbce-c7f55a7aa59f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396426, 'tstamp': 396426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396429, 'tstamp': 396429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.465 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.467 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.487 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:26 compute-0 ceph-mon[76335]: pgmap v1063: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.711 244018 DEBUG nova.compute.manager [req-69ee54a4-01ae-41b4-9728-83e6173f563c req-e5f2761e-6874-4473-a1e0-b8f59b9d86a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-deleted-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.785 244018 DEBUG nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.785 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.786 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 DEBUG nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:26 compute-0 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 WARNING nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for instance with vm_state active and task_state None.
Feb 25 12:20:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 12:20:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 412 KiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 25 12:20:28 compute-0 ovn_controller[147040]: 2026-02-25T12:20:28Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 12:20:28 compute-0 ovn_controller[147040]: 2026-02-25T12:20:28Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.492 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.493 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.521 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:28 compute-0 ceph-mon[76335]: pgmap v1064: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 412 KiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.690 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.691 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.707 244018 DEBUG nova.objects.instance [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.735 244018 DEBUG nova.virt.libvirt.vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.735 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.736 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.740 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.743 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.746 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.746 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <target dev="tap07c8ce0b-0c"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </interface>
Feb 25 12:20:28 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.754 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.758 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface>not found in domain: <domain type='kvm' id='28'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <name>instance-00000019</name>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk' index='2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config' index='1'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='tap5e8b3807-0e'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:3d:5e:dc'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='tap07c8ce0b-0c'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source path='/dev/pts/7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </target>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/7'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source path='/dev/pts/7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </console>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5907' autoport='yes' listen='::0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c642,c973</label>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c642,c973</imagelabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:28 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.759 244018 INFO nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the persistent domain config.
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.760 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tap07c8ce0b-0c with device alias net1 from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.761 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <target dev="tap07c8ce0b-0c"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </interface>
Feb 25 12:20:28 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:20:28 compute-0 kernel: tap07c8ce0b-0c (unregistering): left promiscuous mode
Feb 25 12:20:28 compute-0 NetworkManager[49836]: <info>  [1772022028.8269] device (tap07c8ce0b-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:28 compute-0 ovn_controller[147040]: 2026-02-25T12:20:28Z|00160|binding|INFO|Releasing lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 from this chassis (sb_readonly=0)
Feb 25 12:20:28 compute-0 ovn_controller[147040]: 2026-02-25T12:20:28Z|00161|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 down in Southbound
Feb 25 12:20:28 compute-0 ovn_controller[147040]: 2026-02-25T12:20:28Z|00162|binding|INFO|Removing iface tap07c8ce0b-0c ovn-installed in OVS
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.837 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022028.83713, 89488b9f-7c53-4e00-ad62-837e33a76dae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.838 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tap07c8ce0b-0c with device alias net1 for instance 89488b9f-7c53-4e00-ad62-837e33a76dae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.839 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface>not found in domain: <domain type='kvm' id='28'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <name>instance-00000019</name>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk' index='2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config' index='1'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target dev='tap5e8b3807-0e'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source path='/dev/pts/7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       </target>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/7'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <source path='/dev/pts/7'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </console>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5907' autoport='yes' listen='::0'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c642,c973</label>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c642,c973</imagelabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:28 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 INFO nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the live domain config.
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 DEBUG nova.virt.libvirt.vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.846 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.846 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.847 244018 DEBUG os_vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.849 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.853 244018 INFO os_vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.853 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:20:28</nova:creationTime>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:28 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:28 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:28 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:20:28 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:20:28 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.897 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5e:dc 10.100.0.14'], port_security=['fa:16:3e:3d:5e:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.900 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.903 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.922 244018 DEBUG nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.922 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:28 compute-0 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 WARNING nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for instance with vm_state active and task_state None.
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d253936d-533e-42da-8be9-e65836d160f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.951 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6e7778-e245-4bc7-90b0-a515b6ea1749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.958 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[16b3dcdd-2d1a-48fc-b507-63c588333cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.989 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34b33671-daa5-486b-8546-ac0ab8e47720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d2a35d-4871-48cd-90ed-3060e5df0a78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267830, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d03650be-035a-410f-9208-df3835a9a086]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396426, 'tstamp': 396426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267831, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396429, 'tstamp': 396429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267831, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.025 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:29 compute-0 nova_compute[244014]: 2026-02-25 12:20:29.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.029 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.032 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:29 compute-0 nova_compute[244014]: 2026-02-25 12:20:29.425 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022014.4231637, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:29 compute-0 nova_compute[244014]: 2026-02-25 12:20:29.426 244018 INFO nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Stopped (Lifecycle Event)
Feb 25 12:20:29 compute-0 nova_compute[244014]: 2026-02-25 12:20:29.451 244018 DEBUG nova.compute.manager [None req-7afd26ff-cf89-44cb-8a7f-cd84ff60f5b7 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.179 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.180 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.180 244018 DEBUG nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:30 compute-0 ceph-mon[76335]: pgmap v1065: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.785 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.787 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.788 244018 INFO nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Terminating instance
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.790 244018 DEBUG nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:20:30 compute-0 kernel: tap5e8b3807-0e (unregistering): left promiscuous mode
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:30 compute-0 NetworkManager[49836]: <info>  [1772022030.8308] device (tap5e8b3807-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:20:30 compute-0 ovn_controller[147040]: 2026-02-25T12:20:30Z|00163|binding|INFO|Releasing lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 from this chassis (sb_readonly=0)
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:30 compute-0 ovn_controller[147040]: 2026-02-25T12:20:30Z|00164|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 down in Southbound
Feb 25 12:20:30 compute-0 ovn_controller[147040]: 2026-02-25T12:20:30Z|00165|binding|INFO|Removing iface tap5e8b3807-0e ovn-installed in OVS
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:30 compute-0 nova_compute[244014]: 2026-02-25 12:20:30.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.870 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:10:e8 10.100.0.13'], port_security=['fa:16:3e:0c:10:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e3ab090-12ee-4eae-8ad1-0ddee1251e75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5e8b3807-0ee8-4f97-aa2d-3db7d1283888) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.872 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:20:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.874 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08121372-a435-401a-b405-778e10d8c2e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:20:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ef8804-0262-4771-aee3-1e1de0eba51a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.876 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace which is not needed anymore
Feb 25 12:20:30 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb 25 12:20:30 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 12.228s CPU time.
Feb 25 12:20:30 compute-0 systemd-machined[210048]: Machine qemu-28-instance-00000019 terminated.
Feb 25 12:20:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:20:30
Feb 25 12:20:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:20:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:20:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Feb 25 12:20:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.027 244018 DEBUG nova.compute.manager [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-deleted-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.028 244018 INFO nova.compute.manager [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Neutron deleted interface 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8; detaching it from the instance and deleting it from the info cache
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.028 244018 DEBUG nova.network.neutron [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.035 244018 INFO nova.virt.libvirt.driver [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance destroyed successfully.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.036 244018 DEBUG nova.objects.instance [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : haproxy version is 2.8.14-c23fe91
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : path to executable is /usr/sbin/haproxy
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : Exiting Master process...
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : Exiting Master process...
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [ALERT]    (265934) : Current worker (265936) exited with code 143 (Terminated)
Feb 25 12:20:31 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : All workers exited. Exiting... (0)
Feb 25 12:20:31 compute-0 systemd[1]: libpod-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope: Deactivated successfully.
Feb 25 12:20:31 compute-0 conmon[265930]: conmon 75b9e4254fe9035cdfd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope/container/memory.events
Feb 25 12:20:31 compute-0 podman[267856]: 2026-02-25 12:20:31.060756623 +0000 UTC m=+0.070586579 container died 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.061 244018 DEBUG nova.virt.libvirt.vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.062 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.063 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.063 244018 DEBUG os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8b3807-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.079 244018 DEBUG nova.objects.instance [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.083 244018 INFO os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e')
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.084 244018 DEBUG nova.virt.libvirt.vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.084 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.085 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.086 244018 DEBUG os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.088 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.088 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.091 244018 INFO os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')
Feb 25 12:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7f7a1a509256948c44cf81664959a0d95a0720770dc3872c473bf267ae05b3-merged.mount: Deactivated successfully.
Feb 25 12:20:31 compute-0 podman[267856]: 2026-02-25 12:20:31.097350545 +0000 UTC m=+0.107180511 container cleanup 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:20:31 compute-0 systemd[1]: libpod-conmon-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope: Deactivated successfully.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.136 244018 DEBUG nova.objects.instance [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:31 compute-0 podman[267910]: 2026-02-25 12:20:31.173174272 +0000 UTC m=+0.054380788 container remove 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.179 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9dded9-2f26-42d5-aab3-fbf5bd2098a4]: (4, ('Wed Feb 25 12:20:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d)\n75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d\nWed Feb 25 12:20:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d)\n75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31e5f275-f25e-4234-b19e-b7be8275618a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.182 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 kernel: tap08121372-a0: left promiscuous mode
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3ef68c-ddff-49d0-9d75-96c39ec4a40f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.199 244018 DEBUG nova.virt.libvirt.vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.199 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.200 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.207 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[87de8be2-6596-42aa-ab75-2fb1c6021a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.209 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3281c7-062e-43c5-b74c-3933593897fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.215 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.221 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> not found in domain: <domain type='kvm'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <name>instance-00000019</name>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:19:52</nova:creationTime>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:31 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <cpu mode='host-model' check='partial'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target dev='tap5e8b3807-0e'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       </target>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <console type='pty'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </console>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:31 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:31 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.223 244018 WARNING nova.virt.libvirt.driver [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Detaching interface fa:16:3e:3d:5e:dc failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap07c8ce0b-0c' not found.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.224 244018 DEBUG nova.virt.libvirt.vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.225 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.226 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.226 244018 DEBUG os_vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.229 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.229 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4010c4c-e114-41ab-87c2-fce8e05d7a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396408, 'reachable_time': 41147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267929, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d08121372\x2da435\x2d401a\x2db405\x2d778e10d8c2e2.mount: Deactivated successfully.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.234 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:20:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.234 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9610019e-d487-41da-afe3-98430e1cb2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.237 244018 INFO os_vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.239 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:20:31</nova:creationTime>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 12:20:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:20:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:20:31 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:20:31 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:20:31 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.389 244018 INFO nova.virt.libvirt.driver [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deleting instance files /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae_del
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.390 244018 INFO nova.virt.libvirt.driver [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deletion of /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae_del complete
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.520 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022016.5193965, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.520 244018 INFO nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Stopped (Lifecycle Event)
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.694 244018 INFO nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.695 244018 DEBUG nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.724 244018 DEBUG nova.compute.manager [None req-a48cfd5b-3b23-4fe2-8503-9875a44996dc - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.724 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.731 244018 INFO nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 0.94 seconds to destroy the instance on the hypervisor.
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.731 244018 DEBUG oslo.service.loopingcall [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.732 244018 DEBUG nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.733 244018 DEBUG nova.network.neutron [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:20:31 compute-0 nova_compute[244014]: 2026-02-25 12:20:31.748 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:20:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:20:32 compute-0 nova_compute[244014]: 2026-02-25 12:20:32.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:32 compute-0 nova_compute[244014]: 2026-02-25 12:20:32.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:32 compute-0 ceph-mon[76335]: pgmap v1066: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 12:20:32 compute-0 nova_compute[244014]: 2026-02-25 12:20:32.864 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022017.863511, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:32 compute-0 nova_compute[244014]: 2026-02-25 12:20:32.865 244018 INFO nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Stopped (Lifecycle Event)
Feb 25 12:20:32 compute-0 nova_compute[244014]: 2026-02-25 12:20:32.937 244018 DEBUG nova.compute.manager [None req-67b1a63c-4490-43e9-8c5f-95d1139d3913 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.319 244018 DEBUG nova.network.neutron [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.436 244018 INFO nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 1.70 seconds to deallocate network for instance.
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.450 244018 DEBUG nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-deleted-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.450 244018 INFO nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Neutron deleted interface 5e8b3807-0ee8-4f97-aa2d-3db7d1283888; detaching it from the instance and deleting it from the info cache
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.451 244018 DEBUG nova.network.neutron [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.534 244018 DEBUG nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Detach interface failed, port_id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888, reason: Instance 89488b9f-7c53-4e00-ad62-837e33a76dae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.610 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.610 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.665 244018 DEBUG oslo_concurrency.processutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.783 244018 DEBUG nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.784 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.784 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.785 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.785 244018 DEBUG nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:33 compute-0 nova_compute[244014]: 2026-02-25 12:20:33.786 244018 WARNING nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with vm_state deleted and task_state None.
Feb 25 12:20:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368640659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.233 244018 DEBUG oslo_concurrency.processutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.238 244018 DEBUG nova.compute.provider_tree [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.314 244018 DEBUG nova.scheduler.client.report [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.382 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.442 244018 INFO nova.scheduler.client.report [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 89488b9f-7c53-4e00-ad62-837e33a76dae
Feb 25 12:20:34 compute-0 nova_compute[244014]: 2026-02-25 12:20:34.635 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:34 compute-0 ceph-mon[76335]: pgmap v1067: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Feb 25 12:20:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/368640659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Feb 25 12:20:36 compute-0 nova_compute[244014]: 2026-02-25 12:20:36.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:36 compute-0 sshd-session[267954]: Invalid user mapr from 80.94.92.186 port 47226
Feb 25 12:20:36 compute-0 sshd-session[267954]: Connection closed by invalid user mapr 80.94.92.186 port 47226 [preauth]
Feb 25 12:20:36 compute-0 ceph-mon[76335]: pgmap v1068: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Feb 25 12:20:36 compute-0 nova_compute[244014]: 2026-02-25 12:20:36.958 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022021.9568572, 52f927ad-a417-489f-9f92-87bc3433649d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:36 compute-0 nova_compute[244014]: 2026-02-25 12:20:36.959 244018 INFO nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Stopped (Lifecycle Event)
Feb 25 12:20:36 compute-0 nova_compute[244014]: 2026-02-25 12:20:36.992 244018 DEBUG nova.compute.manager [None req-9a64dcda-494a-4638-893b-958295ee1183 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 48 op/s
Feb 25 12:20:38 compute-0 nova_compute[244014]: 2026-02-25 12:20:38.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:38 compute-0 ceph-mon[76335]: pgmap v1069: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 48 op/s
Feb 25 12:20:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:40 compute-0 ceph-mon[76335]: pgmap v1070: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:41 compute-0 nova_compute[244014]: 2026-02-25 12:20:41.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.9747607904369495e-06 of space, bias 1.0, pg target 0.0011924282371310849 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024899687316491128 of space, bias 1.0, pg target 0.7469906194947339 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0716338374073357e-06 of space, bias 4.0, pg target 0.0012859606048888027 quantized to 16 (current 16)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:20:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:20:42 compute-0 ceph-mon[76335]: pgmap v1071: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.813 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.814 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.857 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.950 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.950 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.957 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:20:42 compute-0 nova_compute[244014]: 2026-02-25 12:20:42.957 244018 INFO nova.compute.claims [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:20:43 compute-0 nova_compute[244014]: 2026-02-25 12:20:43.184 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:43 compute-0 nova_compute[244014]: 2026-02-25 12:20:43.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670045618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:43 compute-0 nova_compute[244014]: 2026-02-25 12:20:43.718 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:43 compute-0 nova_compute[244014]: 2026-02-25 12:20:43.724 244018 DEBUG nova.compute.provider_tree [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3670045618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.109 244018 DEBUG nova.scheduler.client.report [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.757 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.758 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:20:44 compute-0 ceph-mon[76335]: pgmap v1072: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.826 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.826 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.856 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:20:44 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.883 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:44.999 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.001 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.002 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating image(s)
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.035 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.071 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.107 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.112 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.143 244018 DEBUG nova.policy [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10c5d76df8d046e8858030b12b7affa1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28df138f3ff744a09c7e79a179881f21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.189 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.190 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.191 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.192 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.226 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.231 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 437f3047-f865-44f7-b16e-cddab230e873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.550 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 437f3047-f865-44f7-b16e-cddab230e873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 852 B/s wr, 5 op/s
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.634 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] resizing rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.722 244018 DEBUG nova.objects.instance [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'migration_context' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.826 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.826 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Ensure instance console log exists: /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:45 compute-0 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.023 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022031.0221703, 89488b9f-7c53-4e00-ad62-837e33a76dae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.024 244018 INFO nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Stopped (Lifecycle Event)
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.062 244018 DEBUG nova.compute.manager [None req-0c6131d7-981f-4b98-856d-1ce5d5939626 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.304 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Successfully created port: 6c0083e1-a97b-4de8-aa7b-14217c45376f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:20:46 compute-0 ceph-mon[76335]: pgmap v1073: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 852 B/s wr, 5 op/s
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.845 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.846 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.872 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.966 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.967 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.978 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:20:46 compute-0 nova_compute[244014]: 2026-02-25 12:20:46.978 244018 INFO nova.compute.claims [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.081 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Successfully updated port: 6c0083e1-a97b-4de8-aa7b-14217c45376f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.131 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.132 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.132 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.244 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.312 244018 DEBUG nova.compute.manager [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.313 244018 DEBUG nova.compute.manager [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.313 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.357 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:20:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:20:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:20:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:20:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:20:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 25 12:20:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:20:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080038316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.770 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:20:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:20:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2080038316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.777 244018 DEBUG nova.compute.provider_tree [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.817 244018 DEBUG nova.scheduler.client.report [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.852 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.853 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.929 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.929 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:20:47 compute-0 nova_compute[244014]: 2026-02-25 12:20:47.984 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.005 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.135 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.137 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.138 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating image(s)
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.172 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.208 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.245 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.251 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.326 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.327 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.328 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.329 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.362 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.367 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 51d1d661-89db-4958-a2f4-c299ee997cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.391 244018 DEBUG nova.policy [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.601 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 51d1d661-89db-4958-a2f4-c299ee997cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.698 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:20:48 compute-0 ceph-mon[76335]: pgmap v1074: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.812 244018 DEBUG nova.objects.instance [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.833 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.833 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Ensure instance console log exists: /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.834 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.835 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.835 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.880 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.932 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.933 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance network_info: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.934 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.935 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.940 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start _get_guest_xml network_info=[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.947 244018 WARNING nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.953 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.953 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.958 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.958 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.959 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.959 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.960 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.960 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.961 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.961 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.963 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.963 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.964 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:20:48 compute-0 nova_compute[244014]: 2026-02-25 12:20:48.968 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:49 compute-0 nova_compute[244014]: 2026-02-25 12:20:49.317 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:20:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985391114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:49 compute-0 nova_compute[244014]: 2026-02-25 12:20:49.609 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:20:49 compute-0 nova_compute[244014]: 2026-02-25 12:20:49.641 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:49 compute-0 nova_compute[244014]: 2026-02-25 12:20:49.646 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/985391114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671786509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.187 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.189 244018 DEBUG nova.virt.libvirt.vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.190 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.192 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.193 244018 DEBUG nova.objects.instance [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'pci_devices' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.215 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <uuid>437f3047-f865-44f7-b16e-cddab230e873</uuid>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <name>instance-0000001a</name>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1111859970</nova:name>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:20:48</nova:creationTime>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:user uuid="10c5d76df8d046e8858030b12b7affa1">tempest-AttachInterfacesUnderV243Test-1754830019-project-member</nova:user>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:project uuid="28df138f3ff744a09c7e79a179881f21">tempest-AttachInterfacesUnderV243Test-1754830019</nova:project>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <nova:port uuid="6c0083e1-a97b-4de8-aa7b-14217c45376f">
Feb 25 12:20:50 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="serial">437f3047-f865-44f7-b16e-cddab230e873</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="uuid">437f3047-f865-44f7-b16e-cddab230e873</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/437f3047-f865-44f7-b16e-cddab230e873_disk">
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/437f3047-f865-44f7-b16e-cddab230e873_disk.config">
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b5:e1:9a"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <target dev="tap6c0083e1-a9"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/console.log" append="off"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:20:50 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:20:50 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:50 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:50 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:50 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
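
The "End _get_guest_xml" record above is the complete libvirt domain definition Nova generated for instance-0000001a: 128 MiB of RAM (the <memory> element is in KiB), one vCPU, an RBD-backed virtio root disk plus an RBD config-drive CD-ROM, and a single virtio interface targeting tap6c0083e1-a9. When triaging a spawn from a capture like this it is handy to pull those fields out of the dumped XML programmatically; the following is a minimal sketch using Python's standard xml.etree.ElementTree, where "domain.xml" is a hypothetical local copy of the <domain> element extracted from the log:

    # Sketch: extract key devices from a Nova-generated libvirt domain XML
    # like the one logged above. "domain.xml" is an assumed local copy.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()             # <domain type="kvm">
    print(root.findtext("name"))                        # instance-0000001a
    print(int(root.findtext("memory")) // 1024, "MiB")  # 131072 KiB -> 128 MiB

    # RBD-backed disks: device kind, image name, and monitor endpoints.
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            mons = [f'{h.get("name")}:{h.get("port")}'
                    for h in src.findall("host")]
            print(disk.get("device"), src.get("name"), mons)

    # Tap device and MAC libvirt will wire up for each interface.
    for iface in root.findall("./devices/interface"):
        print(iface.find("target").get("dev"),
              iface.find("mac").get("address"))

Note that the <nova:instance> metadata block is namespaced, so queries into it need the http://openstack.org/xmlns/libvirt/nova/1.1 namespace URI.
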
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.216 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Preparing to wait for external event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.216 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
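
The three lockutils records above show the compute manager serializing on the per-instance "-events" lock while it registers the network-vif-plugged event it will later wait on; the "waited 0.000s" / "held 0.000s" timings come from oslo.concurrency's fair internal locks. A minimal sketch of that pattern with the same library (the function body here is illustrative, not Nova's code):

    # Sketch of the oslo.concurrency pattern visible in the log: callers
    # serialize on a named lock before touching the shared event registry.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('437f3047-f865-44f7-b16e-cddab230e873-events')
    def _create_or_get_event(events, name):
        # Runs with the named lock held; concurrent registrations for the
        # same instance queue behind it, hence the waited/held timings.
        return events.setdefault(name, object())
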
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG nova.virt.libvirt.vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG os_vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c0083e1-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c0083e1-a9, col_values=(('external_ids', {'iface-id': '6c0083e1-a97b-4de8-aa7b-14217c45376f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:e1:9a', 'vm-uuid': '437f3047-f865-44f7-b16e-cddab230e873'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:50 compute-0 NetworkManager[49836]: <info>  [1772022050.2253] manager: (tap6c0083e1-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.232 244018 INFO os_vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9')
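
"Successfully plugged vif" closes out the ovsdbapp transactions logged at 12:20:50.219-.222: make sure br-int exists, add the tap port, and stamp the Interface row's external_ids with the Neutron port UUID so OVN can find it. A rough standalone equivalent with ovsdbapp follows; the OVSDB socket path and timeout are assumptions, while the bridge, port, and ID values are copied from the log:

    # Hedged sketch of the AddBridgeCommand/AddPortCommand/DbSetCommand
    # sequence in the log. Socket path and timeout are assumed values.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap6c0083e1-a9', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6c0083e1-a9',
            ('external_ids',
             {'iface-id': '6c0083e1-a97b-4de8-aa7b-14217c45376f',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:b5:e1:9a',
              'vm-uuid': '437f3047-f865-44f7-b16e-cddab230e873'})))

Once the iface-id lands in OVS, ovn-controller matches it to the logical port and claims it, which is exactly what the "Claiming lport" binding lines at 12:20:51 below report.
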
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.315 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.316 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.316 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No VIF found with MAC fa:16:3e:b5:e1:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.317 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Using config drive
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.346 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:50 compute-0 ceph-mon[76335]: pgmap v1075: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:20:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/671786509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
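
The ceph-mon audit line shows client.openstack issuing a "mon dump" while nova's rbd_utils opens its RADOS connection for the config-drive existence check above. The same monitor command can be reproduced with the librados Python binding; a sketch, assuming the python3-rados bindings are installed and using the same conffile and client name as the log:

    # Sketch of the "mon dump" the audit log attributes to client.openstack.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    if ret == 0:
        print([m['name'] for m in json.loads(outbuf)['mons']])
    cluster.shutdown()
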
Feb 25 12:20:50 compute-0 nova_compute[244014]: 2026-02-25 12:20:50.994 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.014 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.014 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.220 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating config drive at /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.228 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu1gbz944 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.359 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu1gbz944" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.396 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.400 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config 437f3047-f865-44f7-b16e-cddab230e873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.446 244018 DEBUG nova.compute.manager [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.447 244018 DEBUG nova.compute.manager [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.447 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.472 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.561 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config 437f3047-f865-44f7-b16e-cddab230e873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.562 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deleting local config drive /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config because it was imported into RBD.
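
The config-drive sequence that just completed is: build an ISO9660 image under the instance directory with mkisofs, "rbd import" it into the vms pool under the instance UUID with a _disk.config suffix (the CD-ROM source in the domain XML above), then delete the local file. A condensed sketch of the same two commands through oslo.concurrency's processutils, the helper the log itself cites; the /tmp staging path is the throwaway one from the log, and the -publisher/-quiet flags are omitted for brevity:

    # Condensed sketch of the config-drive build + RBD import logged above.
    from oslo_concurrency import processutils

    inst = '437f3047-f865-44f7-b16e-cddab230e873'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the ISO from the staged metadata directory.
    processutils.execute('/usr/bin/mkisofs', '-o', iso, '-ldots',
                         '-allow-lowercase', '-allow-multidot', '-l',
                         '-J', '-r', '-V', 'config-2', '/tmp/tmpu1gbz944')

    # 2. Import into Ceph; afterwards the local copy is redundant.
    processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                         f'{inst}_disk.config', '--image-format=2',
                         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
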
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.588 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.588 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
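
Stripped of the surrounding log text, the network_info blob cached above is plain JSON, so the interesting fields (fixed IPs, MTU, bound driver) are easy to lift out with the standard json module; a small sketch, where "network_info.json" is an assumed local copy of the list:

    # Sketch: summarize a cached network_info blob like the one above.
    import json

    with open('network_info.json') as f:
        vifs = json.load(f)

    for vif in vifs:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips,
              'mtu', vif['network']['meta']['mtu'],
              'drivers', vif['details']['bound_drivers'])

For the port above this prints the fixed address 10.100.0.6 with MTU 1442 and the ovn bound driver, matching the <mtu size="1442"/> in the domain XML.
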
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.603 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:51 compute-0 kernel: tap6c0083e1-a9: entered promiscuous mode
Feb 25 12:20:51 compute-0 ovn_controller[147040]: 2026-02-25T12:20:51Z|00166|binding|INFO|Claiming lport 6c0083e1-a97b-4de8-aa7b-14217c45376f for this chassis.
Feb 25 12:20:51 compute-0 ovn_controller[147040]: 2026-02-25T12:20:51Z|00167|binding|INFO|6c0083e1-a97b-4de8-aa7b-14217c45376f: Claiming fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.6384] manager: (tap6c0083e1-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Feb 25 12:20:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 212 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 27 op/s
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.657 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e1:9a 10.100.0.6'], port_security=['fa:16:3e:b5:e1:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '437f3047-f865-44f7-b16e-cddab230e873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28df138f3ff744a09c7e79a179881f21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '142094f4-7b01-4079-a840-811d10ccd4e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff7b783-3fdd-403e-b061-0aa11a082612, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6c0083e1-a97b-4de8-aa7b-14217c45376f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.659 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0083e1-a97b-4de8-aa7b-14217c45376f in datapath 08d2df9f-b68a-40d4-a888-4f15fc0b5403 bound to our chassis
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.661 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08d2df9f-b68a-40d4-a888-4f15fc0b5403
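
The "Matched UPDATE" record above is the metadata agent's Port_Binding watcher firing: ovsdbapp delivers row events from the OVN southbound table, and the agent reacts when a binding's chassis column flips from empty to this chassis, which triggers the provisioning that follows. A schematic version of such a watcher; the class and match logic here are illustrative, and the production event applies more conditions than this:

    # Schematic ovsdbapp row-event watcher for Port_Binding updates,
    # modeled on the PortBindingUpdatedEvent match in the log.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port just gained a chassis (old had none).
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print(f'port {row.logical_port} bound, provision metadata')
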
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa45755-9a80-42e4-9a69-dc1a59df5196]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.678 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08d2df9f-b1 in ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
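
"Creating VETH tap08d2df9f-b1 in ovnmeta-08d2df9f-..." means the agent is building a veth pair whose inner end lives in the per-network metadata namespace; the privsep reply lines around it are those netlink calls being proxied through the privileged daemon. Neutron drives this through its privileged ip_lib wrappers over pyroute2; a bare-bones sketch of the same step (root privileges required, interface and namespace names copied from the log):

    # Bare-bones sketch of the veth-into-namespace step performed via
    # privsep above; pyroute2 is the library neutron's ip_lib wraps.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403'
    try:
        netns.create(ns)
    except OSError:
        pass  # namespace already provisioned by the agent

    ipr = IPRoute()
    # The outer end stays in the root namespace (it is plugged into br-int
    # just below); the peer is pushed straight into the metadata namespace.
    ipr.link('add', ifname='tap08d2df9f-b0', kind='veth',
             peer={'ifname': 'tap08d2df9f-b1', 'net_ns_fd': ns})
    idx = ipr.link_lookup(ifname='tap08d2df9f-b0')[0]
    ipr.link('set', index=idx, state='up')
    ipr.close()
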
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.681 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08d2df9f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57bacf76-813c-436e-9372-907e9885236a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75d2137d-c872-4bcd-b56e-b632d948874c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 systemd-machined[210048]: New machine qemu-30-instance-0000001a.
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:51 compute-0 ovn_controller[147040]: 2026-02-25T12:20:51Z|00168|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f ovn-installed in OVS
Feb 25 12:20:51 compute-0 ovn_controller[147040]: 2026-02-25T12:20:51Z|00169|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f up in Southbound
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.699 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[24809956-4faa-4a3a-ab5a-da128b221c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:51 compute-0 systemd-udevd[268505]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.7256] device (tap6c0083e1-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.7263] device (tap6c0083e1-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:20:51 compute-0 podman[268455]: 2026-02-25 12:20:51.73231433 +0000 UTC m=+0.123363721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.726 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[735f8149-4195-48e2-835d-89dbfaf63e7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd190554-fba8-41f9-bfd7-f363be241874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 systemd-udevd[268510]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.7590] manager: (tap08d2df9f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8b9f36-9256-4b1b-9385-79c0ca44d696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 podman[268458]: 2026-02-25 12:20:51.760325737 +0000 UTC m=+0.148685412 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.782 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7faa196-e04d-409e-b0df-31125fe86816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.785 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd84d73c-e583-44ba-9536-ab64fb790239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.8036] device (tap08d2df9f-b0): carrier: link connected
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.806 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[de723bb2-3343-4f85-b8f8-5e464d1d8a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.822 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0382705-e3be-4f16-b3c1-bb6ea4313399]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08d2df9f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:1a:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402121, 'reachable_time': 23852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268544, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96224060-29ed-4d2d-b9e7-2608867bd563]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:1ade'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402121, 'tstamp': 402121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268545, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d84cfe1-046a-4bda-8274-97cd45676079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08d2df9f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:1a:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402121, 'reachable_time': 23852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268546, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a283753-e238-4608-bf45-74164985f868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8dab47a9-93b6-4e91-934d-24a85d0b0d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08d2df9f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.932 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08d2df9f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:51 compute-0 kernel: tap08d2df9f-b0: entered promiscuous mode
Feb 25 12:20:51 compute-0 NetworkManager[49836]: <info>  [1772022051.9399] manager: (tap08d2df9f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.940 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08d2df9f-b0, col_values=(('external_ids', {'iface-id': '6d2f2d5b-e692-4371-bdfb-0788a13e6e8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:51 compute-0 ovn_controller[147040]: 2026-02-25T12:20:51Z|00170|binding|INFO|Releasing lport 6d2f2d5b-e692-4371-bdfb-0788a13e6e8e from this chassis (sb_readonly=0)
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.935 244018 DEBUG nova.compute.manager [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.942 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG nova.compute.manager [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Processing event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
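[Editor's note] The Acquiring/acquired/released triple above is oslo.concurrency's standard logging around a named lock: nova serializes event delivery per instance by locking on "<uuid>-events". A condensed sketch of the pattern, with an illustrative in-memory event store:

# Condensed sketch of the per-instance event lock seen above.
from oslo_concurrency import lockutils

_pending = {}  # instance_uuid -> {event_name: payload}; illustrative

def pop_instance_event(instance_uuid, event_name):
    # lockutils.lock() emits the acquire/release DEBUG lines above
    with lockutils.lock(f'{instance_uuid}-events'):
        return _pending.get(instance_uuid, {}).pop(event_name, None)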
Feb 25 12:20:51 compute-0 nova_compute[244014]: 2026-02-25 12:20:51.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.956 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
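[Editor's note] The ENOENT above is expected: no haproxy pidfile means no proxy has been spawned for this network yet, so the agent falls through to creating one. A sketch of that tolerant read (helper name is hypothetical):

# Sketch of a get_value_from_file-style read where a missing
# pidfile is a normal state, not an error.
def get_pid_from_file(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None  # [Errno 2]: proxy not started yet
    except ValueError:
        return None  # file present but no parseable PID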
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.958 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[499d9f46-e191-44b2-9fbf-dd332f9c8122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.959 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-08d2df9f-b68a-40d4-a888-4f15fc0b5403
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 08d2df9f-b68a-40d4-a888-4f15fc0b5403
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:20:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.960 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'env', 'PROCESS_TAG=haproxy-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08d2df9f-b68a-40d4-a888-4f15fc0b5403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
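[Editor's note] The command list above shows how the agent launches haproxy: rootwrap for privilege escalation, ip netns exec to enter the ovnmeta namespace, an env PROCESS_TAG for later process identification, then haproxy against the rendered config. A sketch assembling the same argv; build_haproxy_cmd is a hypothetical helper, the arguments are taken from the log:

# Hypothetical helper mirroring the logged create_process argv:
# rootwrap -> ip netns exec <ovnmeta-ns> -> env tag -> haproxy -f cfg.
def build_haproxy_cmd(network_id, cfg_path):
    return ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
            'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
            'env', f'PROCESS_TAG=haproxy-{network_id}',
            'haproxy', '-f', cfg_path]

cmd = build_haproxy_cmd(
    '08d2df9f-b68a-40d4-a888-4f15fc0b5403',
    '/var/lib/neutron/ovn-metadata-proxy/'
    '08d2df9f-b68a-40d4-a888-4f15fc0b5403.conf')
# subprocess.Popen(cmd) would detach, since the rendered cfg sets 'daemon'.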
Feb 25 12:20:52 compute-0 podman[268578]: 2026-02-25 12:20:52.364538968 +0000 UTC m=+0.071420173 container create 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 12:20:52 compute-0 systemd[1]: Started libpod-conmon-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope.
Feb 25 12:20:52 compute-0 podman[268578]: 2026-02-25 12:20:52.327665279 +0000 UTC m=+0.034546534 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:20:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e3b4792fbae073d4d0de8c0eb2f97b209d111264ecc91aaecd534959142ed9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.514 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:20:52 compute-0 podman[268578]: 2026-02-25 12:20:52.515605496 +0000 UTC m=+0.222486751 container init 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.515 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.5135899, 437f3047-f865-44f7-b16e-cddab230e873 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.516 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Started (Lifecycle Event)
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.520 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:20:52 compute-0 podman[268578]: 2026-02-25 12:20:52.524941782 +0000 UTC m=+0.231822977 container start 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
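[Editor's note] The podman create/init/start events above are the container-runtime side of the same haproxy launch. A sketch verifying that the container reached the running state via the podman CLI; the container name is taken from the log:

# Sketch: check the state of the container whose create/init/start
# events appear above, using the podman CLI from Python.
import subprocess

name = 'neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403'
state = subprocess.run(
    ['podman', 'inspect', '--format', '{{.State.Status}}', name],
    capture_output=True, text=True, check=True).stdout.strip()
print(state)  # expected: 'running' once the start event has fired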
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.525 244018 INFO nova.virt.libvirt.driver [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance spawned successfully.
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.525 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.548 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.554 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.555 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.556 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : New worker (268641) forked
Feb 25 12:20:52 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : Loading success.
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.556 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.557 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.558 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.565 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.605 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.606 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.5150912, 437f3047-f865-44f7-b16e-cddab230e873 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.606 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Paused (Lifecycle Event)
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.644 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.51974, 437f3047-f865-44f7-b16e-cddab230e873 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.644 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Resumed (Lifecycle Event)
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.659 244018 INFO nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 7.66 seconds to spawn the instance on the hypervisor.
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.659 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.669 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.673 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.738 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] During sync_power_state the instance has a pending task (spawning). Skip.
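[Editor's note] The Started/Paused/Resumed events and the two "pending task (spawning). Skip." lines above show the guard in nova's power-state sync: while a task is in flight, libvirt lifecycle events must not rewrite the instance's recorded power state. A condensed sketch of that decision; the constant is illustrative:

# Condensed sketch of the sync decision logged above.
RUNNING = 1  # illustrative power-state constant; the DB shows 0 (NOSTATE)

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        # "During sync_power_state the instance has a pending task ... Skip."
        return db_power_state
    return vm_power_state

assert sync_power_state(0, RUNNING, 'spawning') == 0   # skipped
assert sync_power_state(0, RUNNING, None) == RUNNING   # synced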
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.812 244018 INFO nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 9.89 seconds to build instance.
Feb 25 12:20:52 compute-0 nova_compute[244014]: 2026-02-25 12:20:52.855 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:52 compute-0 ceph-mon[76335]: pgmap v1076: 305 pgs: 305 active+clean; 212 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 27 op/s
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.490 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.520 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.521 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance network_info: |[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.522 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.523 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
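[Editor's note] The network_info blob being cached above is plain JSON; everything needed for plugging (device name, MTU, fixed IPs) is recoverable from it. A sketch of pulling those fields out, with the structure trimmed to the parts shown in the log:

# Sketch: extract device name, MTU and fixed IPs from a
# network_info entry shaped like the blob cached above (trimmed).
import json

raw = '''[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8",
           "devname": "tap433e6f28-31",
           "network": {"meta": {"mtu": 1442},
                       "subnets": [{"ips": [{"address": "10.100.0.11",
                                             "type": "fixed"}]}]}}]'''

for vif in json.loads(raw):
    fixed = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips'] if ip['type'] == 'fixed']
    print(vif['devname'], vif['network']['meta']['mtu'], fixed)
    # -> tap433e6f28-31 1442 ['10.100.0.11']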
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.528 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start _get_guest_xml network_info=[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.533 244018 WARNING nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.539 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.540 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.546 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.547 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
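[Editor's note] The two probes above first look for a cgroup v1 cpu controller (absent on this host) and then find one on the unified v2 hierarchy. On cgroup v2 the check reduces to reading one file; a sketch, assuming the standard mount point:

# Sketch of the cgroup v2 probe that succeeded above: on a unified
# hierarchy the enabled controllers are listed in a single file.
from pathlib import Path

def has_cgroupsv2_cpu_controller():
    try:
        text = Path('/sys/fs/cgroup/cgroup.controllers').read_text()
    except FileNotFoundError:
        return False  # not a cgroup v2 (unified) host
    return 'cpu' in text.split()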
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.547 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.548 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.549 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.549 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.550 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.550 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.551 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.551 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.552 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.552 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.553 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.553 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
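[Editor's note] With no flavor or image constraints (all the 0:0:0 limits and prefs above), topology selection reduces to enumerating sockets*cores*threads factorizations of the vCPU count within the limits, which for one vCPU yields only 1:1:1. A simplified sketch that ignores NUMA and preference ordering (both unset here):

# Simplified sketch of the enumeration logged above; real nova also
# orders candidates by flavor/image preference.
def possible_topologies(vcpus, limits=(65536, 65536, 65536)):
    max_s, max_c, max_t = limits
    return [(s, c, t)
            for s in range(1, min(vcpus, max_s) + 1)
            for c in range(1, min(vcpus, max_c) + 1)
            for t in range(1, min(vcpus, max_t) + 1)
            if s * c * t == vcpus]

assert possible_topologies(1) == [(1, 1, 1)]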
Feb 25 12:20:53 compute-0 nova_compute[244014]: 2026-02-25 12:20:53.559 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.039 244018 DEBUG nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.040 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.041 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.041 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.042 244018 DEBUG nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.043 244018 WARNING nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received unexpected event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with vm_state active and task_state None.
Feb 25 12:20:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078310787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.122 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.155 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
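[Editor's note] The "does not exist" debug above is nova's rbd_utils probing for the config-drive image before deciding to create it. A sketch of the same probe with the ceph python bindings; the pool ('vms') and conf path come from this log, and the client name mirrors the '--id openstack' used in the mon dump commands:

# Sketch of an rbd existence probe like the one logged above.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      name='client.openstack')
cluster.connect()
ioctx = cluster.open_ioctx('vms')
try:
    with rbd.Image(ioctx, '51d1d661-89db-4958-a2f4-c299ee997cde_disk.config'):
        exists = True
except rbd.ImageNotFound:
    exists = False  # matches the logged "does not exist"
finally:
    ioctx.close()
    cluster.shutdown()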
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.161 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:20:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040702362' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.713 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
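[Editor's note] Both mon-dump round trips above go through oslo.concurrency, which logs the command and its runtime (~0.55s each here). A sketch of the same call and of reading the monitor addresses out of the JSON it returns:

# Sketch of the "ceph mon dump" round trip; processutils.execute()
# returns (stdout, stderr) and raises on a non-zero exit.
import json
from oslo_concurrency import processutils

out, _ = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mons = [m['addr'] for m in json.loads(out)['mons']]
# e.g. one entry for 192.168.122.100:6789 on this single-mon cluster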
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.716 244018 DEBUG nova.virt.libvirt.vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.717 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.718 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.720 244018 DEBUG nova.objects.instance [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:54 compute-0 ceph-mon[76335]: pgmap v1077: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 12:20:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4078310787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4040702362' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.918680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054918739, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2156, "num_deletes": 255, "total_data_size": 3235397, "memory_usage": 3285552, "flush_reason": "Manual Compaction"}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054932077, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3154273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21014, "largest_seqno": 23169, "table_properties": {"data_size": 3144753, "index_size": 5886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20752, "raw_average_key_size": 20, "raw_value_size": 3125183, "raw_average_value_size": 3097, "num_data_blocks": 263, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021869, "oldest_key_time": 1772021869, "file_creation_time": 1772022054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13491 microseconds, and 7069 cpu microseconds.
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.932191) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3154273 bytes OK
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.932245) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934193) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934215) EVENT_LOG_v1 {"time_micros": 1772022054934209, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3226254, prev total WAL file size 3226254, number of live WAL files 2.
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.935259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3080KB)], [50(7173KB)]
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054935297, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10499852, "oldest_snapshot_seqno": -1}
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.962 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <name>instance-0000001b</name>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:20:53</nova:creationTime>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:20:54 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <system>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="serial">51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="uuid">51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </system>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <os>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </os>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <features>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </features>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk">
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config">
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:20:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:4a:d5:f4"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <target dev="tap433e6f28-31"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log" append="off"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <video>
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </video>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:20:54 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:20:54 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:20:54 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:20:54 compute-0 nova_compute[244014]: </domain>
Feb 25 12:20:54 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.972 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Preparing to wait for external event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.973 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.974 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.974 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.975 244018 DEBUG nova.virt.libvirt.vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.976 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.977 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.978 244018 DEBUG os_vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.980 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4875 keys, 8732516 bytes, temperature: kUnknown
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054981241, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8732516, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8698300, "index_size": 20922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 119493, "raw_average_key_size": 24, "raw_value_size": 8608925, "raw_average_value_size": 1765, "num_data_blocks": 876, "num_entries": 4875, "num_filter_entries": 4875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.981 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.981519) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8732516 bytes
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.983095) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.0 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5398, records dropped: 523 output_compression: NoCompression
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.983127) EVENT_LOG_v1 {"time_micros": 1772022054983113, "job": 26, "event": "compaction_finished", "compaction_time_micros": 46059, "compaction_time_cpu_micros": 23633, "output_level": 6, "num_output_files": 1, "total_output_size": 8732516, "num_input_records": 5398, "num_output_records": 4875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054983809, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054984793, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.935177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.987 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap433e6f28-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.988 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap433e6f28-31, col_values=(('external_ids', {'iface-id': '433e6f28-313e-4fe8-b8da-eacc8a0332c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:d5:f4', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:54 compute-0 NetworkManager[49836]: <info>  [1772022054.9920] manager: (tap433e6f28-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:54 compute-0 nova_compute[244014]: 2026-02-25 12:20:54.998 244018 INFO os_vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31')
Feb 25 12:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.117 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.117 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.118 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.118 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Using config drive
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.141 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.685 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating config drive at /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.693 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4wyfcbet execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.714 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.715 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.769 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:55 compute-0 NetworkManager[49836]: <info>  [1772022055.8089] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb 25 12:20:55 compute-0 NetworkManager[49836]: <info>  [1772022055.8098] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.817 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4wyfcbet" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:55 compute-0 ovn_controller[147040]: 2026-02-25T12:20:55Z|00171|binding|INFO|Releasing lport 6d2f2d5b-e692-4371-bdfb-0788a13e6e8e from this chassis (sb_readonly=0)
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.853 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.857 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.916 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:20:55 compute-0 nova_compute[244014]: 2026-02-25 12:20:55.999 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.000 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deleting local config drive /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config because it was imported into RBD.
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.0512] manager: (tap433e6f28-31): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Feb 25 12:20:56 compute-0 kernel: tap433e6f28-31: entered promiscuous mode
Feb 25 12:20:56 compute-0 ovn_controller[147040]: 2026-02-25T12:20:56Z|00172|binding|INFO|Claiming lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 for this chassis.
Feb 25 12:20:56 compute-0 ovn_controller[147040]: 2026-02-25T12:20:56Z|00173|binding|INFO|433e6f28-313e-4fe8-b8da-eacc8a0332c8: Claiming fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 systemd-udevd[268783]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:20:56 compute-0 ovn_controller[147040]: 2026-02-25T12:20:56Z|00174|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 ovn-installed in OVS
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 ovn_controller[147040]: 2026-02-25T12:20:56Z|00175|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 up in Southbound
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.088 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d5:f4 10.100.0.11'], port_security=['fa:16:3e:4a:d5:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43bfaf0d-cb4f-4024-9f62-b8a095234270', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=433e6f28-313e-4fe8-b8da-eacc8a0332c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.090 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.092 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.0938] device (tap433e6f28-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.0946] device (tap433e6f28-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:20:56 compute-0 systemd-machined[210048]: New machine qemu-31-instance-0000001b.
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b96aec56-1905-44ff-a90e-74a8c8d7466c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.103 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.105 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.106 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2ee969-200e-4f15-b8e1-80801ea83e24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[971120bd-215d-47ac-b247-7a51b9c95c9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.116 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[12188dc0-dce0-4554-a571-33309d2f93d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e74d8a2d-f9b7-4f5e-a29f-a019648d2515]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.168 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92eaa505-9711-4a7b-ad52-ef2a565b717f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01389b53-8875-4e77-b639-cc0303e7cf2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.1814] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.213 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8da5875e-7aa3-4788-a9b3-e00e84afc802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.221 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b49d3a72-8b5d-4e72-b356-041c1e315973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.227 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG nova.compute.manager [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG nova.compute.manager [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.2465] device (tap08121372-a0): carrier: link connected
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.250 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44faa8b2-7a07-4be2-806b-bb1c416f2fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.265 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88cbc67e-372f-4b0d-b1f6-632ee30a7021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268819, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[721bff18-d7c3-4e48-90af-83867e77a935]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402565, 'tstamp': 402565}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.293 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f66b32-e2cc-4991-8146-17225d59b6b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268821, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66eb1f82-6a0e-4ce2-865b-92cf34e019d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.366 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2104f7-8ba8-4837-b72e-77075c3c0a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.368 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.369 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.369 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 NetworkManager[49836]: <info>  [1772022056.3722] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb 25 12:20:56 compute-0 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 ovn_controller[147040]: 2026-02-25T12:20:56Z|00176|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.388 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe60fe6-bbdb-4a8c-942b-8fec892d8e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.390 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:20:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.391 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:20:56 compute-0 podman[268853]: 2026-02-25 12:20:56.804982417 +0000 UTC m=+0.068951023 container create b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:20:56 compute-0 systemd[1]: Started libpod-conmon-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope.
Feb 25 12:20:56 compute-0 podman[268853]: 2026-02-25 12:20:56.771875465 +0000 UTC m=+0.035844121 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:20:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b250d49de0d9de34b918b625012f9e944f53e47c7cb2c90afd9a5ea1fff5ce10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:20:56 compute-0 podman[268853]: 2026-02-25 12:20:56.901975067 +0000 UTC m=+0.165943693 container init b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:20:56 compute-0 podman[268853]: 2026-02-25 12:20:56.912247799 +0000 UTC m=+0.176216415 container start b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:20:56 compute-0 nova_compute[244014]: 2026-02-25 12:20:56.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:56 compute-0 ceph-mon[76335]: pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 12:20:56 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : New worker (268909) forked
Feb 25 12:20:56 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : Loading success.
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.050 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022057.0500329, 51d1d661-89db-4958-a2f4-c299ee997cde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Started (Lifecycle Event)
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.106 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022057.0512195, 51d1d661-89db-4958-a2f4-c299ee997cde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.106 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Paused (Lifecycle Event)
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.154 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.157 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.202 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:20:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.912 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.940 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.941 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.941 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:20:57 compute-0 nova_compute[244014]: 2026-02-25 12:20:57.942 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Processing event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 WARNING nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with vm_state building and task_state spawning.
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.435 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022058.4353757, 51d1d661-89db-4958-a2f4-c299ee997cde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.435 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Resumed (Lifecycle Event)
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.458 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.461 244018 INFO nova.virt.libvirt.driver [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance spawned successfully.
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.461 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.496 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.498 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.523 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.524 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.525 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.525 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.526 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.527 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.586 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.647 244018 INFO nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 10.51 seconds to spawn the instance on the hypervisor.
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.648 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.737 244018 INFO nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 11.79 seconds to build instance.
Feb 25 12:20:58 compute-0 nova_compute[244014]: 2026-02-25 12:20:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:20:58 compute-0 ceph-mon[76335]: pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Feb 25 12:20:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:20:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 25 12:20:59 compute-0 nova_compute[244014]: 2026-02-25 12:20:59.687 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:20:59 compute-0 nova_compute[244014]: 2026-02-25 12:20:59.688 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:20:59 compute-0 nova_compute[244014]: 2026-02-25 12:20:59.705 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:20:59 compute-0 nova_compute[244014]: 2026-02-25 12:20:59.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:00 compute-0 ceph-mon[76335]: pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 25 12:21:01 compute-0 nova_compute[244014]: 2026-02-25 12:21:01.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 12:21:01 compute-0 nova_compute[244014]: 2026-02-25 12:21:01.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.647 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.648 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:21:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:01 compute-0 nova_compute[244014]: 2026-02-25 12:21:01.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:01 compute-0 nova_compute[244014]: 2026-02-25 12:21:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:02 compute-0 nova_compute[244014]: 2026-02-25 12:21:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:02 compute-0 nova_compute[244014]: 2026-02-25 12:21:02.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:02 compute-0 ceph-mon[76335]: pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.332 244018 DEBUG nova.compute.manager [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.333 244018 DEBUG nova.compute.manager [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.333 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.334 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.334 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 173 op/s
Feb 25 12:21:03 compute-0 nova_compute[244014]: 2026-02-25 12:21:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:04 compute-0 ovn_controller[147040]: 2026-02-25T12:21:04Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 12:21:04 compute-0 ovn_controller[147040]: 2026-02-25T12:21:04Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.907 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.908 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:04 compute-0 ceph-mon[76335]: pgmap v1082: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 173 op/s
Feb 25 12:21:04 compute-0 nova_compute[244014]: 2026-02-25 12:21:04.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.368 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.369 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.464 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:21:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680699916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.493 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.615 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.616 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.622 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.622 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:21:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.829 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.833 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=59.94620304182172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.833 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.834 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.945 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 437f3047-f865-44f7-b16e-cddab230e873 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.945 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 51d1d661-89db-4958-a2f4-c299ee997cde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.946 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:21:05 compute-0 nova_compute[244014]: 2026-02-25 12:21:05.946 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:21:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3680699916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.011 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:21:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313543570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.579 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.585 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.605 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.644 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:21:06 compute-0 nova_compute[244014]: 2026-02-25 12:21:06.644 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:07 compute-0 ceph-mon[76335]: pgmap v1083: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 12:21:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3313543570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Feb 25 12:21:07 compute-0 nova_compute[244014]: 2026-02-25 12:21:07.646 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:07 compute-0 nova_compute[244014]: 2026-02-25 12:21:07.646 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:07 compute-0 nova_compute[244014]: 2026-02-25 12:21:07.647 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:21:08 compute-0 nova_compute[244014]: 2026-02-25 12:21:08.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:09 compute-0 ceph-mon[76335]: pgmap v1084: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Feb 25 12:21:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:09 compute-0 ovn_controller[147040]: 2026-02-25T12:21:09Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 12:21:09 compute-0 ovn_controller[147040]: 2026-02-25T12:21:09Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 12:21:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:21:09 compute-0 nova_compute[244014]: 2026-02-25 12:21:09.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:09 compute-0 nova_compute[244014]: 2026-02-25 12:21:09.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:11 compute-0 ceph-mon[76335]: pgmap v1085: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:21:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 290 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Feb 25 12:21:13 compute-0 ceph-mon[76335]: pgmap v1086: 305 pgs: 305 active+clean; 290 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Feb 25 12:21:13 compute-0 nova_compute[244014]: 2026-02-25 12:21:13.183 244018 DEBUG nova.objects.instance [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'flavor' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:13 compute-0 nova_compute[244014]: 2026-02-25 12:21:13.261 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:13 compute-0 nova_compute[244014]: 2026-02-25 12:21:13.261 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:13 compute-0 nova_compute[244014]: 2026-02-25 12:21:13.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Feb 25 12:21:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 25 12:21:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 25 12:21:14 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 25 12:21:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:14 compute-0 nova_compute[244014]: 2026-02-25 12:21:14.672 244018 DEBUG nova.network.neutron [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:15 compute-0 nova_compute[244014]: 2026-02-25 12:21:15.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:15 compute-0 ceph-mon[76335]: pgmap v1087: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Feb 25 12:21:15 compute-0 ceph-mon[76335]: osdmap e146: 3 total, 3 up, 3 in
Feb 25 12:21:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 786 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Feb 25 12:21:15 compute-0 sudo[268971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:21:15 compute-0 sudo[268971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:15 compute-0 sudo[268971]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:15 compute-0 sudo[268996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:21:15 compute-0 sudo[268996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:15 compute-0 nova_compute[244014]: 2026-02-25 12:21:15.781 244018 DEBUG nova.compute.manager [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:15 compute-0 nova_compute[244014]: 2026-02-25 12:21:15.784 244018 DEBUG nova.compute.manager [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:15 compute-0 nova_compute[244014]: 2026-02-25 12:21:15.784 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:16 compute-0 sudo[268996]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:16 compute-0 sudo[269053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:21:16 compute-0 sudo[269053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:16 compute-0 sudo[269053]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:16 compute-0 sudo[269078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- inventory --format=json-pretty --filter-for-batch
Feb 25 12:21:16 compute-0 sudo[269078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:16 compute-0 podman[269115]: 2026-02-25 12:21:16.996854117 +0000 UTC m=+0.054586424 container create 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.014 244018 DEBUG nova.network.neutron [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.038 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:17 compute-0 systemd[1]: Started libpod-conmon-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope.
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.038 244018 DEBUG nova.compute.manager [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.040 244018 DEBUG nova.compute.manager [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] network_info to inject: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.044 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:17 compute-0 nova_compute[244014]: 2026-02-25 12:21:17.045 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:16.969067136 +0000 UTC m=+0.026799573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:17 compute-0 ceph-mon[76335]: pgmap v1089: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 786 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:17.101765082 +0000 UTC m=+0.159497419 container init 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:17.110200372 +0000 UTC m=+0.167932709 container start 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:17.113839285 +0000 UTC m=+0.171571612 container attach 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:21:17 compute-0 youthful_hawking[269132]: 167 167
Feb 25 12:21:17 compute-0 systemd[1]: libpod-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope: Deactivated successfully.
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:17.117921522 +0000 UTC m=+0.175653859 container died 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2082e52ec00ff7d8d3b2e3b59b9741ec10d53b319e6975068bde68f73ea4190e-merged.mount: Deactivated successfully.
Feb 25 12:21:17 compute-0 podman[269115]: 2026-02-25 12:21:17.160227655 +0000 UTC m=+0.217959942 container remove 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:21:17 compute-0 systemd[1]: libpod-conmon-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope: Deactivated successfully.
Feb 25 12:21:17 compute-0 podman[269155]: 2026-02-25 12:21:17.355873692 +0000 UTC m=+0.069501019 container create 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:21:17 compute-0 systemd[1]: Started libpod-conmon-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope.
Feb 25 12:21:17 compute-0 podman[269155]: 2026-02-25 12:21:17.322314807 +0000 UTC m=+0.035942194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:17 compute-0 podman[269155]: 2026-02-25 12:21:17.474760764 +0000 UTC m=+0.188388111 container init 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:21:17 compute-0 podman[269155]: 2026-02-25 12:21:17.483322238 +0000 UTC m=+0.196949575 container start 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:21:17 compute-0 podman[269155]: 2026-02-25 12:21:17.488973259 +0000 UTC m=+0.202600586 container attach 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:21:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]: [
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:     {
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "available": false,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "being_replaced": false,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "ceph_device_lvm": false,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "lsm_data": {},
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "lvs": [],
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "path": "/dev/sr0",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "rejected_reasons": [
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "Has a FileSystem",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "Insufficient space (<5GB)"
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         ],
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         "sys_api": {
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "actuators": null,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "device_nodes": [
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:                 "sr0"
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             ],
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "devname": "sr0",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "human_readable_size": "482.00 KB",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "id_bus": "ata",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "model": "QEMU DVD-ROM",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "nr_requests": "2",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "parent": "/dev/sr0",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "partitions": {},
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "path": "/dev/sr0",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "removable": "1",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "rev": "2.5+",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "ro": "0",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "rotational": "1",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "sas_address": "",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "sas_device_handle": "",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "scheduler_mode": "mq-deadline",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "sectors": 0,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "sectorsize": "2048",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "size": 493568.0,
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "support_discard": "2048",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "type": "disk",
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:             "vendor": "QEMU"
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:         }
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]:     }
Feb 25 12:21:18 compute-0 intelligent_mirzakhani[269171]: ]
Feb 25 12:21:18 compute-0 systemd[1]: libpod-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope: Deactivated successfully.
Feb 25 12:21:18 compute-0 podman[269155]: 2026-02-25 12:21:18.121818324 +0000 UTC m=+0.835445631 container died 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 25 12:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40-merged.mount: Deactivated successfully.
Feb 25 12:21:18 compute-0 podman[269155]: 2026-02-25 12:21:18.168148602 +0000 UTC m=+0.881775909 container remove 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:21:18 compute-0 systemd[1]: libpod-conmon-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope: Deactivated successfully.
Feb 25 12:21:18 compute-0 sudo[269078]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:21:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:21:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:21:18 compute-0 sudo[270100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:21:18 compute-0 sudo[270100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:18 compute-0 sudo[270100]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:18 compute-0 sudo[270125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:21:18 compute-0 sudo[270125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:18 compute-0 nova_compute[244014]: 2026-02-25 12:21:18.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.780048422 +0000 UTC m=+0.102626951 container create e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.713017765 +0000 UTC m=+0.035596364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:18 compute-0 systemd[1]: Started libpod-conmon-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope.
Feb 25 12:21:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.897058431 +0000 UTC m=+0.219637030 container init e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.907375475 +0000 UTC m=+0.229954024 container start e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.911531423 +0000 UTC m=+0.234109972 container attach e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:21:18 compute-0 cool_yonath[270179]: 167 167
Feb 25 12:21:18 compute-0 systemd[1]: libpod-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope: Deactivated successfully.
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.913439807 +0000 UTC m=+0.236018356 container died e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2970ed82240e8d6ccf82802bcff666441e1de7726864821bbab8412d852ef6-merged.mount: Deactivated successfully.
Feb 25 12:21:18 compute-0 podman[270162]: 2026-02-25 12:21:18.961596178 +0000 UTC m=+0.284174737 container remove e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:21:18 compute-0 systemd[1]: libpod-conmon-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope: Deactivated successfully.
Feb 25 12:21:19 compute-0 podman[270203]: 2026-02-25 12:21:19.171731516 +0000 UTC m=+0.059905785 container create 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:21:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:19 compute-0 systemd[1]: Started libpod-conmon-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope.
Feb 25 12:21:19 compute-0 ceph-mon[76335]: pgmap v1090: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:21:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:21:19 compute-0 podman[270203]: 2026-02-25 12:21:19.147446045 +0000 UTC m=+0.035620364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:19 compute-0 podman[270203]: 2026-02-25 12:21:19.274362666 +0000 UTC m=+0.162536985 container init 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:19 compute-0 podman[270203]: 2026-02-25 12:21:19.289217999 +0000 UTC m=+0.177392278 container start 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:19 compute-0 podman[270203]: 2026-02-25 12:21:19.294383236 +0000 UTC m=+0.182557575 container attach 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:21:19 compute-0 nova_compute[244014]: 2026-02-25 12:21:19.513 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:19 compute-0 nova_compute[244014]: 2026-02-25 12:21:19.516 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:19 compute-0 nova_compute[244014]: 2026-02-25 12:21:19.550 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 12:21:19 compute-0 charming_taussig[270220]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:21:19 compute-0 charming_taussig[270220]: --> All data devices are unavailable
Feb 25 12:21:19 compute-0 systemd[1]: libpod-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope: Deactivated successfully.
Feb 25 12:21:19 compute-0 podman[270240]: 2026-02-25 12:21:19.86144074 +0000 UTC m=+0.026945157 container died 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857-merged.mount: Deactivated successfully.
Feb 25 12:21:19 compute-0 podman[270240]: 2026-02-25 12:21:19.914874771 +0000 UTC m=+0.080379208 container remove 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 25 12:21:19 compute-0 systemd[1]: libpod-conmon-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope: Deactivated successfully.
Feb 25 12:21:19 compute-0 sudo[270125]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:20 compute-0 nova_compute[244014]: 2026-02-25 12:21:20.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:20 compute-0 sudo[270255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:21:20 compute-0 sudo[270255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:20 compute-0 sudo[270255]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:20 compute-0 sudo[270280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:21:20 compute-0 sudo[270280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.367999493 +0000 UTC m=+0.041604795 container create c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:21:20 compute-0 systemd[1]: Started libpod-conmon-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope.
Feb 25 12:21:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.350655109 +0000 UTC m=+0.024260391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.466033312 +0000 UTC m=+0.139638624 container init c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.473787823 +0000 UTC m=+0.147393125 container start c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:21:20 compute-0 cool_mcclintock[270333]: 167 167
Feb 25 12:21:20 compute-0 systemd[1]: libpod-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope: Deactivated successfully.
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.4845912 +0000 UTC m=+0.158196562 container attach c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.485606019 +0000 UTC m=+0.159211321 container died c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf599176a1f13dedd81ee7480976519539bba4df1d99f13a2c7706028f8a2cd-merged.mount: Deactivated successfully.
Feb 25 12:21:20 compute-0 podman[270317]: 2026-02-25 12:21:20.571566105 +0000 UTC m=+0.245171397 container remove c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:21:20 compute-0 systemd[1]: libpod-conmon-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope: Deactivated successfully.
Feb 25 12:21:20 compute-0 nova_compute[244014]: 2026-02-25 12:21:20.602 244018 DEBUG nova.objects.instance [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'flavor' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:20 compute-0 nova_compute[244014]: 2026-02-25 12:21:20.630 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:20 compute-0 nova_compute[244014]: 2026-02-25 12:21:20.630 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:20 compute-0 podman[270358]: 2026-02-25 12:21:20.752870643 +0000 UTC m=+0.053549164 container create 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:21:20 compute-0 systemd[1]: Started libpod-conmon-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope.
Feb 25 12:21:20 compute-0 podman[270358]: 2026-02-25 12:21:20.723733094 +0000 UTC m=+0.024411675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:20 compute-0 podman[270358]: 2026-02-25 12:21:20.880998899 +0000 UTC m=+0.181677420 container init 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:21:20 compute-0 podman[270358]: 2026-02-25 12:21:20.890740516 +0000 UTC m=+0.191419027 container start 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 12:21:20 compute-0 podman[270358]: 2026-02-25 12:21:20.936084836 +0000 UTC m=+0.236763367 container attach 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:21:21 compute-0 sweet_feynman[270374]: {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     "0": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "devices": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "/dev/loop3"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             ],
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_name": "ceph_lv0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_size": "21470642176",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "name": "ceph_lv0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "tags": {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_name": "ceph",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.crush_device_class": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.encrypted": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.objectstore": "bluestore",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_id": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.vdo": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.with_tpm": "0"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             },
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "vg_name": "ceph_vg0"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         }
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     ],
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     "1": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "devices": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "/dev/loop4"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             ],
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_name": "ceph_lv1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_size": "21470642176",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "name": "ceph_lv1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "tags": {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_name": "ceph",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.crush_device_class": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.encrypted": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.objectstore": "bluestore",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_id": "1",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.vdo": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.with_tpm": "0"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             },
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "vg_name": "ceph_vg1"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         }
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     ],
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     "2": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "devices": [
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "/dev/loop5"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             ],
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_name": "ceph_lv2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_size": "21470642176",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "name": "ceph_lv2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "tags": {
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.cluster_name": "ceph",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.crush_device_class": "",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.encrypted": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.objectstore": "bluestore",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osd_id": "2",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.vdo": "0",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:                 "ceph.with_tpm": "0"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             },
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "type": "block",
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:             "vg_name": "ceph_vg2"
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:         }
Feb 25 12:21:21 compute-0 sweet_feynman[270374]:     ]
Feb 25 12:21:21 compute-0 sweet_feynman[270374]: }
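The JSON block above is the stdout of "ceph-volume lvm list --format json", relayed through the short-lived sweet_feynman container: one key per OSD id ("0", "1", "2"), each holding the backing LV record and its ceph.* tags. A minimal Python sketch of summarizing that output; the file name lvm_list.json is an assumption for illustration, not something present in the log:

import json

def summarize_lvm_list(path="lvm_list.json"):
    # Load the captured `ceph-volume lvm list --format json` output:
    # a mapping of OSD id -> list of logical-volume records.
    with open(path) as fh:
        osds = json.load(fh)
    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            tags = lv.get("tags", {})
            print("osd.%s lv=%s devices=%s osd_fsid=%s encrypted=%s" % (
                osd_id,
                lv["lv_path"],
                ",".join(lv["devices"]),
                tags.get("ceph.osd_fsid", "?"),
                tags.get("ceph.encrypted", "?")))

if __name__ == "__main__":
    summarize_lvm_list()

Against the records above this prints three lines, one per OSD, each backed by a single loop device (/dev/loop3 through /dev/loop5).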
Feb 25 12:21:21 compute-0 ceph-mon[76335]: pgmap v1091: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 12:21:21 compute-0 systemd[1]: libpod-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope: Deactivated successfully.
Feb 25 12:21:21 compute-0 podman[270358]: 2026-02-25 12:21:21.244869062 +0000 UTC m=+0.545547583 container died 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:21:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11-merged.mount: Deactivated successfully.
Feb 25 12:21:21 compute-0 podman[270358]: 2026-02-25 12:21:21.355111097 +0000 UTC m=+0.655789608 container remove 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:21:21 compute-0 systemd[1]: libpod-conmon-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope: Deactivated successfully.
Feb 25 12:21:21 compute-0 sudo[270280]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:21 compute-0 sudo[270397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:21:21 compute-0 sudo[270397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:21 compute-0 sudo[270397]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:21 compute-0 sudo[270422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:21:21 compute-0 sudo[270422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.89891925 +0000 UTC m=+0.055959323 container create 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:21:21 compute-0 systemd[1]: Started libpod-conmon-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope.
Feb 25 12:21:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.875313268 +0000 UTC m=+0.032353321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.983729743 +0000 UTC m=+0.140769786 container init 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.990891467 +0000 UTC m=+0.147931500 container start 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.994185861 +0000 UTC m=+0.151225894 container attach 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:21:21 compute-0 busy_greider[270477]: 167 167
Feb 25 12:21:21 compute-0 systemd[1]: libpod-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope: Deactivated successfully.
Feb 25 12:21:21 compute-0 podman[270459]: 2026-02-25 12:21:21.995762865 +0000 UTC m=+0.152802918 container died 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff07ea1af8a0570ffdc424bb66ec63f83a59b888fa552fe7522b0790e1d618ad-merged.mount: Deactivated successfully.
Feb 25 12:21:22 compute-0 podman[270459]: 2026-02-25 12:21:22.036458913 +0000 UTC m=+0.193498946 container remove 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:21:22 compute-0 podman[270473]: 2026-02-25 12:21:22.041549338 +0000 UTC m=+0.091819743 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 12:21:22 compute-0 systemd[1]: libpod-conmon-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope: Deactivated successfully.
Feb 25 12:21:22 compute-0 podman[270476]: 2026-02-25 12:21:22.069799682 +0000 UTC m=+0.120054657 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
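The two health_status events above come from podman's periodic healthcheck timers for ovn_metadata_agent and ovn_controller; per their config_data, the configured test is the /openstack/healthcheck script mounted into each container. A small sketch of running the same check on demand, assuming only standard podman CLI behavior ("podman healthcheck run NAME" exits 0 when the test passes):

import subprocess

def is_healthy(name):
    # `podman healthcheck run NAME` executes the container's configured
    # healthcheck once and returns exit code 0 on success.
    return subprocess.run(
        ["podman", "healthcheck", "run", name],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

for name in ("ovn_metadata_agent", "ovn_controller"):
    print(name, "healthy" if is_healthy(name) else "unhealthy")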
Feb 25 12:21:22 compute-0 podman[270543]: 2026-02-25 12:21:22.198549385 +0000 UTC m=+0.050824017 container create 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:21:22 compute-0 systemd[1]: Started libpod-conmon-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope.
Feb 25 12:21:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:22 compute-0 podman[270543]: 2026-02-25 12:21:22.174493351 +0000 UTC m=+0.026768063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:22 compute-0 podman[270543]: 2026-02-25 12:21:22.29045793 +0000 UTC m=+0.142732602 container init 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:21:22 compute-0 podman[270543]: 2026-02-25 12:21:22.304122029 +0000 UTC m=+0.156396651 container start 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:21:22 compute-0 podman[270543]: 2026-02-25 12:21:22.307739682 +0000 UTC m=+0.160014334 container attach 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.433 244018 DEBUG nova.network.neutron [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.585 244018 DEBUG nova.compute.manager [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.585 244018 DEBUG nova.compute.manager [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.586 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:22 compute-0 lvm[270640]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:21:22 compute-0 lvm[270638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:21:22 compute-0 lvm[270640]: VG ceph_vg1 finished
Feb 25 12:21:22 compute-0 lvm[270638]: VG ceph_vg0 finished
Feb 25 12:21:22 compute-0 lvm[270642]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:21:22 compute-0 lvm[270642]: VG ceph_vg2 finished
Feb 25 12:21:22 compute-0 lvm[270643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:21:22 compute-0 lvm[270643]: VG ceph_vg0 finished
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:22 compute-0 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG nova.objects.instance [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:23 compute-0 nova_compute[244014]: 2026-02-25 12:21:23.025 244018 DEBUG nova.objects.instance [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:23 compute-0 nova_compute[244014]: 2026-02-25 12:21:23.068 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:21:23 compute-0 happy_poitras[270561]: {}
Feb 25 12:21:23 compute-0 systemd[1]: libpod-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Deactivated successfully.
Feb 25 12:21:23 compute-0 podman[270543]: 2026-02-25 12:21:23.094370583 +0000 UTC m=+0.946645205 container died 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 12:21:23 compute-0 systemd[1]: libpod-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Consumed 1.046s CPU time.
Feb 25 12:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930-merged.mount: Deactivated successfully.
Feb 25 12:21:23 compute-0 podman[270543]: 2026-02-25 12:21:23.211427154 +0000 UTC m=+1.063701806 container remove 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:21:23 compute-0 systemd[1]: libpod-conmon-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Deactivated successfully.
Feb 25 12:21:23 compute-0 ceph-mon[76335]: pgmap v1092: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Feb 25 12:21:23 compute-0 sudo[270422]: pam_unix(sudo:session): session closed for user root
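This second cephadm run (the sudo line for PID 270422 above) asked for "raw list --format json"; happy_poitras printed {} because the host's OSDs are LVM-backed, so only the lvm variant returns records. A sketch of driving the same wrapper from Python, assuming the cephadm binary relays the container's JSON unchanged on stdout; the paths and image digest are copied from the sudo lines, everything else is illustrative:

import json
import subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
CEPHADM = ("/var/lib/ceph/%s/cephadm."
           "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b"
           % FSID)
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def ceph_volume(*args):
    # Mirror the command line logged by sudo: cephadm pins the image and
    # fsid, and everything after `--` is handed to ceph-volume inside the
    # container.
    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--", *args]
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True).stdout
    return json.loads(out)

# raw list -> {} on this host; lvm list -> the per-OSD records shown earlier.
print(ceph_volume("raw", "list", "--format", "json"))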
Feb 25 12:21:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:21:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:21:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:23 compute-0 sudo[270659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:21:23 compute-0 sudo[270659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:21:23 compute-0 sudo[270659]: pam_unix(sudo:session): session closed for user root
Feb 25 12:21:23 compute-0 nova_compute[244014]: 2026-02-25 12:21:23.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 106 op/s
Feb 25 12:21:23 compute-0 nova_compute[244014]: 2026-02-25 12:21:23.978 244018 DEBUG nova.policy [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:21:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.747 244018 DEBUG nova.network.neutron [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.771 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.772 244018 DEBUG nova.compute.manager [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.772 244018 DEBUG nova.compute.manager [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] network_info to inject: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.776 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:24 compute-0 nova_compute[244014]: 2026-02-25 12:21:24.776 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.265 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: 31f40ed6-505b-4061-b861-ea2720b0ff62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:21:25 compute-0 ceph-mon[76335]: pgmap v1093: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 106 op/s
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.558 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.560 244018 INFO nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Terminating instance
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.561 244018 DEBUG nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:21:25 compute-0 kernel: tap6c0083e1-a9 (unregistering): left promiscuous mode
Feb 25 12:21:25 compute-0 NetworkManager[49836]: <info>  [1772022085.6115] device (tap6c0083e1-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:21:25 compute-0 ovn_controller[147040]: 2026-02-25T12:21:25Z|00177|binding|INFO|Releasing lport 6c0083e1-a97b-4de8-aa7b-14217c45376f from this chassis (sb_readonly=0)
Feb 25 12:21:25 compute-0 ovn_controller[147040]: 2026-02-25T12:21:25Z|00178|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f down in Southbound
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 ovn_controller[147040]: 2026-02-25T12:21:25Z|00179|binding|INFO|Removing iface tap6c0083e1-a9 ovn-installed in OVS
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.634 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e1:9a 10.100.0.6'], port_security=['fa:16:3e:b5:e1:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '437f3047-f865-44f7-b16e-cddab230e873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28df138f3ff744a09c7e79a179881f21', 'neutron:revision_number': '6', 'neutron:security_group_ids': '142094f4-7b01-4079-a840-811d10ccd4e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff7b783-3fdd-403e-b061-0aa11a082612, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6c0083e1-a97b-4de8-aa7b-14217c45376f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.636 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0083e1-a97b-4de8-aa7b-14217c45376f in datapath 08d2df9f-b68a-40d4-a888-4f15fc0b5403 unbound from our chassis
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.638 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08d2df9f-b68a-40d4-a888-4f15fc0b5403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5532ce03-0cd7-4d05-96c6-3fa93f783f39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.640 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 namespace which is not needed anymore
Feb 25 12:21:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 92 op/s
Feb 25 12:21:25 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb 25 12:21:25 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 13.923s CPU time.
Feb 25 12:21:25 compute-0 systemd-machined[210048]: Machine qemu-30-instance-0000001a terminated.
Feb 25 12:21:25 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : haproxy version is 2.8.14-c23fe91
Feb 25 12:21:25 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : path to executable is /usr/sbin/haproxy
Feb 25 12:21:25 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [WARNING]  (268639) : Exiting Master process...
Feb 25 12:21:25 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [ALERT]    (268639) : Current worker (268641) exited with code 143 (Terminated)
Feb 25 12:21:25 compute-0 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [WARNING]  (268639) : All workers exited. Exiting... (0)
Feb 25 12:21:25 compute-0 systemd[1]: libpod-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope: Deactivated successfully.
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.803 244018 INFO nova.virt.libvirt.driver [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance destroyed successfully.
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.804 244018 DEBUG nova.objects.instance [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'resources' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:25 compute-0 podman[270706]: 2026-02-25 12:21:25.810266316 +0000 UTC m=+0.063071396 container died 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.828 244018 DEBUG nova.virt.libvirt.vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:21:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.829 244018 DEBUG nova.network.os_vif_util [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.830 244018 DEBUG nova.network.os_vif_util [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.831 244018 DEBUG os_vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.835 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c0083e1-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.844 244018 INFO os_vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9')
Feb 25 12:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7-userdata-shm.mount: Deactivated successfully.
Feb 25 12:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e3b4792fbae073d4d0de8c0eb2f97b209d111264ecc91aaecd534959142ed9-merged.mount: Deactivated successfully.
Feb 25 12:21:25 compute-0 podman[270706]: 2026-02-25 12:21:25.863518241 +0000 UTC m=+0.116323321 container cleanup 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:21:25 compute-0 systemd[1]: libpod-conmon-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope: Deactivated successfully.
Feb 25 12:21:25 compute-0 podman[270763]: 2026-02-25 12:21:25.940572563 +0000 UTC m=+0.052612458 container remove 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee192504-604b-4a0c-a252-ac46c87ef2ee]: (4, ('Wed Feb 25 12:21:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 (61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7)\n61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7\nWed Feb 25 12:21:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 (61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7)\n61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.950 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd66c4e-3c69-4421-af0a-1ecf0ca1b9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.951 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08d2df9f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 kernel: tap08d2df9f-b0: left promiscuous mode
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.972 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d06010-83b4-4d5e-bd6b-992be2acaa39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:25 compute-0 nova_compute[244014]: 2026-02-25 12:21:25.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5368e769-e188-4b89-aa6e-b1e07474e6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[842ef611-dd6c-460e-a0d5-44bdae08d2c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.007 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[781c26ca-f2a3-435f-9923-fa3e8f85d998]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402115, 'reachable_time': 27684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270779, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d08d2df9f\x2db68a\x2d40d4\x2da888\x2d4f15fc0b5403.mount: Deactivated successfully.
Feb 25 12:21:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.013 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:21:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.013 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[138ed9aa-6e90-41ee-94e8-14ece411663e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:26 compute-0 ovn_controller[147040]: 2026-02-25T12:21:26Z|00180|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.137 244018 INFO nova.virt.libvirt.driver [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deleting instance files /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873_del
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.138 244018 INFO nova.virt.libvirt.driver [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deletion of /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873_del complete
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.277 244018 INFO nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG oslo.service.loopingcall [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG nova.network.neutron [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.536 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.537 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.563 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.765 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: 31f40ed6-505b-4061-b861-ea2720b0ff62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.786 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.786 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.787 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:26 compute-0 nova_compute[244014]: 2026-02-25 12:21:26.943 244018 WARNING nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:27 compute-0 ceph-mon[76335]: pgmap v1094: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 92 op/s
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.405 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.406 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.407 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.407 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.408 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.408 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.477 244018 DEBUG nova.network.neutron [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.502 244018 INFO nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 1.22 seconds to deallocate network for instance.
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.564 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.565 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:27 compute-0 nova_compute[244014]: 2026-02-25 12:21:27.649 244018 DEBUG oslo_concurrency.processutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 20 KiB/s wr, 117 op/s
Feb 25 12:21:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:21:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603885954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.256 244018 DEBUG oslo_concurrency.processutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.265 244018 DEBUG nova.compute.provider_tree [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.296 244018 DEBUG nova.scheduler.client.report [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.360 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1603885954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.555 244018 INFO nova.scheduler.client.report [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Deleted allocations for instance 437f3047-f865-44f7-b16e-cddab230e873
Feb 25 12:21:28 compute-0 nova_compute[244014]: 2026-02-25 12:21:28.643 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.226476) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089226547, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 256, "total_data_size": 623513, "memory_usage": 635352, "flush_reason": "Manual Compaction"}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089232774, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 607103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23170, "largest_seqno": 23739, "table_properties": {"data_size": 603944, "index_size": 1067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6997, "raw_average_key_size": 18, "raw_value_size": 597569, "raw_average_value_size": 1540, "num_data_blocks": 48, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022055, "oldest_key_time": 1772022055, "file_creation_time": 1772022089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 6349 microseconds, and 2933 cpu microseconds.
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.232831) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 607103 bytes OK
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.232852) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234887) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234910) EVENT_LOG_v1 {"time_micros": 1772022089234903, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 620311, prev total WAL file size 620311, number of live WAL files 2.
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.235544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(592KB)], [53(8527KB)]
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089235586, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9339619, "oldest_snapshot_seqno": -1}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4736 keys, 9240141 bytes, temperature: kUnknown
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089279881, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9240141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9205626, "index_size": 21605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 117774, "raw_average_key_size": 24, "raw_value_size": 9117440, "raw_average_value_size": 1925, "num_data_blocks": 903, "num_entries": 4736, "num_filter_entries": 4736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.280230) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9240141 bytes
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.281484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.2 rd, 208.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(30.6) write-amplify(15.2) OK, records in: 5263, records dropped: 527 output_compression: NoCompression
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.281523) EVENT_LOG_v1 {"time_micros": 1772022089281506, "job": 28, "event": "compaction_finished", "compaction_time_micros": 44426, "compaction_time_cpu_micros": 27256, "output_level": 6, "num_output_files": 1, "total_output_size": 9240141, "num_input_records": 5263, "num_output_records": 4736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089281931, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089283172, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.235374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:21:29 compute-0 ceph-mon[76335]: pgmap v1095: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 20 KiB/s wr, 117 op/s
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.609 244018 DEBUG nova.compute.manager [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.610 244018 DEBUG nova.compute.manager [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-31f40ed6-505b-4061-b861-ea2720b0ff62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.610 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.812 244018 DEBUG nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.813 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.814 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.814 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.815 244018 DEBUG nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:29 compute-0 nova_compute[244014]: 2026-02-25 12:21:29.815 244018 WARNING nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received unexpected event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with vm_state deleted and task_state None.
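
[editor's note] The Acquiring/acquired/released triplets above come from oslo.concurrency's named locks; Nova serializes per-instance event handling on an "<uuid>-events" lock. A minimal sketch of the same pattern, assuming only the oslo_concurrency library (the critical-section function here is hypothetical, not Nova's):

    from oslo_concurrency import lockutils

    def _pop_event():
        """Hypothetical critical section; Nova pops a waiting event here."""

    # Context-manager form: entering/leaving produces the same
    # "Acquiring"/"acquired"/"released" debug lines seen in the log.
    with lockutils.lock('437f3047-f865-44f7-b16e-cddab230e873-events'):
        _pop_event()

    # Decorator form, equivalent to the inner() wrapper in the log:
    @lockutils.synchronized('437f3047-f865-44f7-b16e-cddab230e873-events')
    def pop_event_locked():
        _pop_event()
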
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.288 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.345 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.346 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 31f40ed6-505b-4061-b861-ea2720b0ff62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.350 244018 DEBUG nova.virt.libvirt.vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.351 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.352 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.353 244018 DEBUG os_vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
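
[editor's note] Before plugging, Nova converts its VIF dict into an os-vif VIFOpenVSwitch object and hands it to the 'ovs' plugin. A hedged sketch of the same sequence against the os_vif library directly, with values lifted from the log (an illustration, not Nova's actual code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, including 'ovs'

    inst = instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='tempest-AttachInterfacesTestJSON-server-1386050966')

    v = vif.VIFOpenVSwitch(
        id='31f40ed6-505b-4061-b861-ea2720b0ff62',
        address='fa:16:3e:c1:c1:8f',
        bridge_name='br-int',
        vif_name='tap31f40ed6-50',
        has_traffic_filtering=True,
        network=network.Network(id='08121372-a435-401a-b405-778e10d8c2e2'))

    # Emits the AddBridgeCommand/AddPortCommand/DbSetCommand transactions
    # that appear in the ovsdbapp debug lines below.
    os_vif.plug(v, inst)
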
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.354 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.355 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.358 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31f40ed6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.359 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31f40ed6-50, col_values=(('external_ids', {'iface-id': '31f40ed6-505b-4061-b861-ea2720b0ff62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c1:8f', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
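
[editor's note] The AddPortCommand/DbSetCommand pair above is ovsdbapp talking to the local Open vSwitch database. Roughly the same two-command transaction, issued directly with ovsdbapp (the socket path is the usual default and is an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: the standard local ovsdb-server socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap31f40ed6-50', may_exist=True))
        txn.add(api.db_set('Interface', 'tap31f40ed6-50', ('external_ids', {
            'iface-id': '31f40ed6-505b-4061-b861-ea2720b0ff62',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:c1:c1:8f',
            'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'})))
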
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 NetworkManager[49836]: <info>  [1772022090.3625] manager: (tap31f40ed6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.371 244018 INFO os_vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.373 244018 DEBUG nova.virt.libvirt.vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.373 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.374 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.379 244018 DEBUG nova.virt.libvirt.guest [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <target dev="tap31f40ed6-50"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]: </interface>
Feb 25 12:21:30 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
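
[editor's note] guest.attach_device() bottoms out in the libvirt Python bindings. A minimal sketch of the equivalent call, assuming the libvirt-python package and reusing the XML exactly as logged (Nova passes both AFFECT flags for a live, persistent domain):

    import libvirt

    ATTACH_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:c1:c1:8f"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap31f40ed6-50"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.attachDeviceFlags(
        ATTACH_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
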
Feb 25 12:21:30 compute-0 kernel: tap31f40ed6-50: entered promiscuous mode
Feb 25 12:21:30 compute-0 NetworkManager[49836]: <info>  [1772022090.3952] manager: (tap31f40ed6-50): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 ovn_controller[147040]: 2026-02-25T12:21:30Z|00181|binding|INFO|Claiming lport 31f40ed6-505b-4061-b861-ea2720b0ff62 for this chassis.
Feb 25 12:21:30 compute-0 ovn_controller[147040]: 2026-02-25T12:21:30Z|00182|binding|INFO|31f40ed6-505b-4061-b861-ea2720b0ff62: Claiming fa:16:3e:c1:c1:8f 10.100.0.10
Feb 25 12:21:30 compute-0 ovn_controller[147040]: 2026-02-25T12:21:30Z|00183|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 ovn-installed in OVS
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 ovn_controller[147040]: 2026-02-25T12:21:30Z|00184|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 up in Southbound
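
[editor's note] Once ovn-controller claims the lport, the Southbound Port_Binding row records this chassis and the port comes up. A quick way to confirm the binding from this host, assuming the ovn-sbctl CLI is installed and can reach the local SB database:

    import subprocess

    # Generic OVSDB "find" against the Southbound Port_Binding table.
    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=31f40ed6-505b-4061-b861-ea2720b0ff62'],
        capture_output=True, text=True, check=True).stdout
    print(out)
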
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.415 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c1:8f 10.100.0.10'], port_security=['fa:16:3e:c1:c1:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=31f40ed6-505b-4061-b861-ea2720b0ff62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.418 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.420 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:21:30 compute-0 systemd-udevd[270814]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7257c36-a14c-4f61-a25a-2060cbe1d022]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:30 compute-0 NetworkManager[49836]: <info>  [1772022090.4489] device (tap31f40ed6-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:21:30 compute-0 NetworkManager[49836]: <info>  [1772022090.4501] device (tap31f40ed6-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.468 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[225b32c8-f340-47d4-a727-7783091a70b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0219bdb2-a405-414a-910e-47abb8218e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.493 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.494 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.495 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.495 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.503 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce53b0dc-fd70-4881-a960-307461fbd4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8891728-3c5c-434e-ab42-20251bf47817]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270821, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.526 244018 DEBUG nova.virt.libvirt.guest [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:30</nova:creationTime>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:30 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 12:21:30 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:21:30 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:30 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:30 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:30 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
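
[editor's note] set_metadata writes that <nova:instance> block into the domain's metadata via libvirt. A sketch of the underlying call, assuming libvirt-python (the XML is abbreviated here, the full block is logged above; the namespace key is an assumption matching the xmlns:nova prefix):

    import libvirt

    METADATA_XML = """<instance xmlns="http://openstack.org/xmlns/libvirt/nova/1.1">
      <name>tempest-AttachInterfacesTestJSON-server-1386050966</name>
    </instance>"""  # abbreviated from the block logged above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.setMetadata(
        libvirt.VIR_DOMAIN_METADATA_ELEMENT,
        METADATA_XML,
        'nova',  # namespace prefix (assumption)
        'http://openstack.org/xmlns/libvirt/nova/1.1',
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
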
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.541 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14dff7e7-ad29-4159-92c6-9468d7332c84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270822, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270822, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
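
[editor's note] The two RTM_NEWADDR replies show the ovnmeta-<network> namespace interface holding 10.100.0.2/28 plus the well-known 169.254.169.254/32 that serves the metadata API. From inside a guest on this network the service is reachable directly; a minimal check:

    import urllib.request

    # Standard Nova metadata endpoint, served through the namespace above.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    print(urllib.request.urlopen(url, timeout=5).read().decode())
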
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.548 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:30 compute-0 nova_compute[244014]: 2026-02-25 12:21:30.565 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:21:30
Feb 25 12:21:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:21:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:21:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log']
Feb 25 12:21:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
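
[editor's note] The balancer ran in upmap mode and prepared 0 of a possible 10 upmap changes, i.e. the PGs are already evenly placed. A rough way to verify from the CLI, assuming the ceph client and an admin keyring on this node (the JSON field names are an assumption):

    import json
    import subprocess

    raw = subprocess.run(['ceph', 'balancer', 'status', '-f', 'json'],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(raw)
    print(status.get('mode'), status.get('active'))  # expect: upmap True
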
Feb 25 12:21:31 compute-0 ceph-mon[76335]: pgmap v1096: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Feb 25 12:21:31 compute-0 nova_compute[244014]: 2026-02-25 12:21:31.560 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:31 compute-0 nova_compute[244014]: 2026-02-25 12:21:31.561 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:31 compute-0 nova_compute[244014]: 2026-02-25 12:21:31.562 244018 DEBUG nova.objects.instance [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:21:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:21:32 compute-0 ovn_controller[147040]: 2026-02-25T12:21:32Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:c1:8f 10.100.0.10
Feb 25 12:21:32 compute-0 ovn_controller[147040]: 2026-02-25T12:21:32Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:c1:8f 10.100.0.10
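
[editor's note] There is no dnsmasq here: ovn-controller answers the guest's DHCP itself (the DHCPOFFER/DHCPACK above), using options stored in the Northbound DHCP_Options table. A rough way to list them, assuming ovn-nbctl can reach the NB database from this host:

    import subprocess

    print(subprocess.run(['ovn-nbctl', 'dhcp-options-list'],
                         capture_output=True, text=True, check=True).stdout)
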
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.069 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 31f40ed6-505b-4061-b861-ea2720b0ff62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.069 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.077 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-deleted-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.078 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.079 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.079 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.080 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.080 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.081 244018 WARNING nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 for instance with vm_state active and task_state None.
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.081 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.083 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.083 244018 WARNING nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 for instance with vm_state active and task_state None.
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.122 244018 DEBUG nova.objects.instance [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.125 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.154 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.387 244018 DEBUG nova.policy [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
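
[editor's note] The failed policy check is expected: attach_external_network is admin-only by default and this user holds only reader/member, so Nova proceeds under normal tenant-network rules. A sketch of the same decision with oslo.policy (the default rule string is an assumption):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed default

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'user_id': 'ea407839a07d46608b6348caf676d12d',
             'project_id': '6a771ad0ce454d809d66825f69248fa7'}
    # do_raise=False returns False instead of raising, as in the log.
    print(enforcer.authorize('network:attach_external_network',
                             {}, creds, do_raise=False))
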
Feb 25 12:21:32 compute-0 nova_compute[244014]: 2026-02-25 12:21:32.922 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: f1abe770-5205-4bae-888a-f2489c2af7a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
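
[editor's note] All of this is driven by the tempest AttachInterfacesTestJSON test creating a port and attaching it over the API. The client-side equivalent with openstacksdk, under assumed credentials (the clouds.yaml entry name is hypothetical):

    import openstack

    conn = openstack.connect(cloud='overcloud')  # hypothetical cloud name
    net = conn.network.find_network(
        'tempest-AttachInterfacesTestJSON-1571729129-network')
    port = conn.network.create_port(network_id=net.id)
    # POST /servers/{id}/os-interface, which triggers the flow logged here.
    conn.compute.create_server_interface(
        '51d1d661-89db-4958-a2f4-c299ee997cde', port_id=port.id)
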
Feb 25 12:21:33 compute-0 ceph-mon[76335]: pgmap v1097: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Feb 25 12:21:33 compute-0 nova_compute[244014]: 2026-02-25 12:21:33.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.2 KiB/s wr, 79 op/s
Feb 25 12:21:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 25 12:21:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 25 12:21:34 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 25 12:21:34 compute-0 nova_compute[244014]: 2026-02-25 12:21:34.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.048 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: f1abe770-5205-4bae-888a-f2489c2af7a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.065 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.066 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.066 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG nova.compute.manager [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG nova.compute.manager [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-f1abe770-5205-4bae-888a-f2489c2af7a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.361 244018 WARNING nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.362 244018 WARNING nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:35 compute-0 nova_compute[244014]: 2026-02-25 12:21:35.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:35 compute-0 ceph-mon[76335]: pgmap v1098: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.2 KiB/s wr, 79 op/s
Feb 25 12:21:35 compute-0 ceph-mon[76335]: osdmap e147: 3 total, 3 up, 3 in
Feb 25 12:21:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 33 op/s
Feb 25 12:21:37 compute-0 ovn_controller[147040]: 2026-02-25T12:21:37Z|00185|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:21:37 compute-0 nova_compute[244014]: 2026-02-25 12:21:37.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:37 compute-0 ceph-mon[76335]: pgmap v1100: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 33 op/s
Feb 25 12:21:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 KiB/s wr, 25 op/s
Feb 25 12:21:38 compute-0 nova_compute[244014]: 2026-02-25 12:21:38.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:38 compute-0 nova_compute[244014]: 2026-02-25 12:21:38.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 25 12:21:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 25 12:21:39 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 25 12:21:39 compute-0 ceph-mon[76335]: pgmap v1101: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 KiB/s wr, 25 op/s
Feb 25 12:21:39 compute-0 ceph-mon[76335]: osdmap e148: 3 total, 3 up, 3 in
Feb 25 12:21:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.662 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.686 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.688 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.689 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port f1abe770-5205-4bae-888a-f2489c2af7a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.694 244018 DEBUG nova.virt.libvirt.vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.695 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.696 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.697 244018 DEBUG os_vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.698 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.699 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.704 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1abe770-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.705 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1abe770-52, col_values=(('external_ids', {'iface-id': 'f1abe770-5205-4bae-888a-f2489c2af7a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:c5:fe', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
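The AddPortCommand/DbSetCommand transaction above is os-vif wiring the tap device into br-int; the external_ids it sets (iface-id in particular) are what ovn-controller matches a moment later when it claims the logical port. The same transaction expressed through the ovs-vsctl CLI, for illustration only:

    import subprocess

    # One atomic ovs-vsctl transaction ('--' chains the two operations),
    # mirroring the ovsdbapp commands logged above. Requires root.
    subprocess.run(
        ['ovs-vsctl',
         '--may-exist', 'add-port', 'br-int', 'tapf1abe770-52',
         '--', 'set', 'Interface', 'tapf1abe770-52',
         'external_ids:iface-id=f1abe770-5205-4bae-888a-f2489c2af7a7',
         'external_ids:iface-status=active',
         'external_ids:attached-mac="fa:16:3e:85:c5:fe"',
         'external_ids:vm-uuid=51d1d661-89db-4958-a2f4-c299ee997cde'],
        check=True)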
Feb 25 12:21:39 compute-0 NetworkManager[49836]: <info>  [1772022099.7091] manager: (tapf1abe770-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.720 244018 INFO os_vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52')
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.722 244018 DEBUG nova.virt.libvirt.vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.722 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.723 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.727 244018 DEBUG nova.virt.libvirt.guest [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:85:c5:fe"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <target dev="tapf1abe770-52"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]: </interface>
Feb 25 12:21:39 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
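With the OVS port in place, nova hands the <interface> XML above to libvirt. A minimal sketch of that step with the libvirt Python binding, assuming a local qemu:///system connection and live+persistent flags (nova's own guest wrapper chooses the flags based on the domain's persistence):

    import libvirt

    # The same <interface> XML that nova logged above.
    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:85:c5:fe"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapf1abe770-52"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    # Attach on the running domain and in its persistent definition.
    dom.attachDeviceFlags(
        IFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)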
Feb 25 12:21:39 compute-0 NetworkManager[49836]: <info>  [1772022099.7409] manager: (tapf1abe770-52): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Feb 25 12:21:39 compute-0 kernel: tapf1abe770-52: entered promiscuous mode
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 ovn_controller[147040]: 2026-02-25T12:21:39Z|00186|binding|INFO|Claiming lport f1abe770-5205-4bae-888a-f2489c2af7a7 for this chassis.
Feb 25 12:21:39 compute-0 ovn_controller[147040]: 2026-02-25T12:21:39Z|00187|binding|INFO|f1abe770-5205-4bae-888a-f2489c2af7a7: Claiming fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 ovn_controller[147040]: 2026-02-25T12:21:39Z|00188|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 ovn-installed in OVS
Feb 25 12:21:39 compute-0 ovn_controller[147040]: 2026-02-25T12:21:39Z|00189|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 up in Southbound
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.754 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:fe 10.100.0.7'], port_security=['fa:16:3e:85:c5:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1abe770-5205-4bae-888a-f2489c2af7a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.757 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1abe770-5205-4bae-888a-f2489c2af7a7 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.761 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:21:39 compute-0 systemd-udevd[270830]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c145408-da42-41a0-ac82-b8016ab9951a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:39 compute-0 NetworkManager[49836]: <info>  [1772022099.7855] device (tapf1abe770-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:21:39 compute-0 NetworkManager[49836]: <info>  [1772022099.7864] device (tapf1abe770-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.801 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:85:c5:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.808 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[219cd775-faca-40df-a395-fecebf0f0ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.812 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a38c4c29-8ec5-4a4f-b4e0-d5489d729453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.828 244018 DEBUG nova.virt.libvirt.guest [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:39</nova:creationTime>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:39 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 12:21:39 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:21:39 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:21:39 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:39 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:39 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:39 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
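The <nova:instance> document above is stored as libvirt domain metadata so the guest can be mapped back to its instance name, flavor, owner, and ports from the hypervisor side. Roughly, via the Python binding (the namespace URI is taken from the logged XML; the file name is my own placeholder):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    # Assumes the <nova:instance> document above was saved to this file.
    with open('nova_instance_metadata.xml') as f:
        metadata_xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)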
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.840 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34ba225e-b1d6-4484-bc7c-8cdf30f1e84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.858 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.858 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e69bdb8c-bcfc-4865-af9a-ce7047883c91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270837, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6a0b9-df86-4a59-93a4-a168774bc831]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270838, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270838, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
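The two privsep replies above are netlink dumps taken inside the ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace: the tap08121372-a1 veth end carries both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32, which is how the agent serves metadata to guests on this network. The equivalent manual check, for illustration:

    import subprocess

    # Inspect the metadata namespace by hand; requires root on compute-0.
    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2',
         'ip', 'addr', 'show', 'dev', 'tap08121372-a1'],
        check=True)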
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.878 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:39 compute-0 nova_compute[244014]: 2026-02-25 12:21:39.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.881 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.882 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:40 compute-0 nova_compute[244014]: 2026-02-25 12:21:40.795 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022085.7939126, 437f3047-f865-44f7-b16e-cddab230e873 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:21:40 compute-0 nova_compute[244014]: 2026-02-25 12:21:40.795 244018 INFO nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Stopped (Lifecycle Event)
Feb 25 12:21:40 compute-0 nova_compute[244014]: 2026-02-25 12:21:40.982 244018 DEBUG nova.compute.manager [None req-caa137db-6bb9-43eb-ba56-41f3208c4847 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:41 compute-0 ceph-mon[76335]: pgmap v1103: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 12:21:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.7 KiB/s wr, 31 op/s
Feb 25 12:21:41 compute-0 nova_compute[244014]: 2026-02-25 12:21:41.854 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port f1abe770-5205-4bae-888a-f2489c2af7a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:41 compute-0 nova_compute[244014]: 2026-02-25 12:21:41.855 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:41 compute-0 nova_compute[244014]: 2026-02-25 12:21:41.875 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007643093857563683 of space, bias 1.0, pg target 0.2292928157269105 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024899999836253926 of space, bias 1.0, pg target 0.7469999950876178 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.048749876634027e-06 of space, bias 4.0, pg target 0.0012584998519608323 quantized to 16 (current 16)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:21:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
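Each pg_autoscaler pair above applies the same arithmetic: the pool's share of raw capacity, times its bias, times the cluster's PG budget (mon_target_pg_per_osd x OSD count), then quantized to a power of two and only acted on if it diverges from the current pg_num by a large factor. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs shown in osdmap e148, the budget is 300, which reproduces the logged targets:

    # Sketch of the visible pg_autoscaler arithmetic (assumptions noted above).
    def pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(pg_target(0.0024899999836253926, 1.0))   # ~0.747, pool 'images'
    print(pg_target(1.048749876634027e-06, 4.0))   # ~0.00126, 'cephfs.cephfs.meta'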
Feb 25 12:21:42 compute-0 ovn_controller[147040]: 2026-02-25T12:21:42Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 12:21:42 compute-0 ovn_controller[147040]: 2026-02-25T12:21:42Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.413 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.414 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
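
The "Acquiring lock / Lock ... acquired" pair here (with the matching "released" line later) is oslo.concurrency's standard trace output: the whole build is serialized on a per-instance-UUID lock. A minimal sketch of the pattern; the decorator is the real oslo_concurrency API, the function body is illustrative.

    from oslo_concurrency import lockutils

    # Same lock name as in the log: the instance UUID. Everything inside
    # runs while the lock is held, emitting the acquire/release DEBUG lines.
    @lockutils.synchronized('e9eb76fe-9616-40a4-aa53-0054cc5c3a57')
    def _locked_do_build_and_run_instance():
        pass  # claim resources, build networks, spawn the guest
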
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.445 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.532 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.534 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.546 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.548 244018 INFO nova.compute.claims [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:21:42 compute-0 nova_compute[244014]: 2026-02-25 12:21:42.704 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:21:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645564130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.279 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
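
This ceph df round trip (dispatched by the mon above, answered in 0.575s) is how the RBD image backend sizes the cluster before claiming disk. A sketch of the same probe; the command line is copied from the log, but the JSON key names ('stats', 'total_bytes', 'pools') are assumed from the usual `ceph df --format=json` layout.

    import json
    import subprocess

    # Exactly the command logged above; needs the client.openstack keyring.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)
    total_bytes = stats['stats']['total_bytes']  # cluster-wide capacity
    pools = stats['pools']                       # per-pool usage, as in pgmap
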
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.286 244018 DEBUG nova.compute.provider_tree [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.308 244018 DEBUG nova.scheduler.client.report [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
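
The inventory dump above fixes what this host can hand out: placement capacity per resource class is (total - reserved) * allocation_ratio. A quick check of the logged numbers:

    # Effective capacity implied by the inventory in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
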
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.345 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.346 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.405 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.406 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:21:43 compute-0 ceph-mon[76335]: pgmap v1104: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.7 KiB/s wr, 31 op/s
Feb 25 12:21:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2645564130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.437 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.479 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.4 KiB/s wr, 27 op/s
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.800 244018 DEBUG nova.policy [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.825 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.830 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.830 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating image(s)
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.863 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.896 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.930 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:43 compute-0 nova_compute[244014]: 2026-02-25 12:21:43.935 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.014 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
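
The base-image probe above runs qemu-img inside oslo_concurrency.prlimit, so a corrupt or hostile qcow2 header cannot burn unbounded memory or CPU: --as caps the child's address space at 1 GiB, --cpu at 30 seconds. Replaying the logged command through processutils:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',   # rlimits applied to the child
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json')      # JSON is parsed by the caller
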
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.015 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.016 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.017 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.047 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.050 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.265 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.343 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
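
With the RBD image backend, "Creating image(s)" boils down to the two Ceph operations above: import the cached base file into the vms pool, then grow it to the flavor's root disk (1073741824 bytes, i.e. m1.nano's root_gb=1). A sketch; the import line is verbatim from the log, while the resize uses the rbd CLI as a stand-in for nova's python-rbd call.

    from oslo_concurrency import processutils

    disk = 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk'
    base = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')

    # Import the flat base image as a format-2 RBD image (as logged) ...
    processutils.execute('rbd', 'import', '--pool', 'vms', base, disk,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # ... then resize it to the flavor's 1 GiB root disk.
    processutils.execute('rbd', 'resize', '--size', '1G', 'vms/' + disk,
                         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
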
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.419 244018 DEBUG nova.objects.instance [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.439 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.439 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Ensure instance console log exists: /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.440 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.440 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.441 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.909 244018 DEBUG nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:44 compute-0 nova_compute[244014]: 2026-02-25 12:21:44.911 244018 WARNING nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state active and task_state None.
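
The WARNING above is benign: a vif-plugged event for instance 51d1d661 arrived while nothing was waiting on it (vm_state active, task_state None), so the dispatcher had no registered waiter to pop. A toy sketch (not nova's code) of that pop-or-warn semantic:

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        # A caller registers interest *before* triggering the action.
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_instance_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # Nobody was waiting -> the "unexpected event" warning above.
            print('unexpected event %s for %s' % (event_name, instance_uuid))
            return
        ev.set()  # wake whoever called prepare_for_event and is blocked
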
Feb 25 12:21:45 compute-0 nova_compute[244014]: 2026-02-25 12:21:45.314 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Successfully created port: 6e87f383-dbcb-4dad-b195-bc813617ad12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:21:45 compute-0 ceph-mon[76335]: pgmap v1105: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.4 KiB/s wr, 27 op/s
Feb 25 12:21:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.773 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Successfully updated port: 6e87f383-dbcb-4dad-b195-bc813617ad12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.787 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.787 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.788 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.823 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.823 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.824 244018 DEBUG nova.objects.instance [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:46 compute-0 nova_compute[244014]: 2026-02-25 12:21:46.939 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.099 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.099 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 WARNING nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state active and task_state None.
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-changed-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Refreshing instance network info cache due to event network-changed-6e87f383-dbcb-4dad-b195-bc813617ad12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:47 compute-0 ceph-mon[76335]: pgmap v1106: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Feb 25 12:21:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:21:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:21:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:21:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.620 244018 DEBUG nova.objects.instance [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.642 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:21:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.866 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.908 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.909 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance network_info: |[{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.909 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.910 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Refreshing network info cache for port 6e87f383-dbcb-4dad-b195-bc813617ad12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.915 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start _get_guest_xml network_info=[{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.924 244018 WARNING nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.930 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.931 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.937 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
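
The pair of probes above first looks for a cgroup-v1 cpu controller (absent on this host) and then finds the v2 one, which is needed for guest CPU tuning. On a unified-hierarchy host the v2 check reduces to one file read; a sketch, assuming the standard cgroup2 mount point:

    # cgroup v2 lists every available controller in a single file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        has_cpu_controller = 'cpu' in f.read().split()
    print(has_cpu_controller)  # True on this host, per the log
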
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.939 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.939 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
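
With no flavor or image topology constraints (all the limits and preferences above are 0:0:0), the walk is trivial: the only layout for one vCPU is sockets=1, cores=1, threads=1. A toy enumeration (not nova's code) of why exactly one topology comes back:

    # Every (sockets, cores, threads) triple whose product is the vCPU count.
    def possible_topologies(vcpus):
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # -> [(1, 1, 1)], the single logged result
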
Feb 25 12:21:47 compute-0 nova_compute[244014]: 2026-02-25 12:21:47.945 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:48 compute-0 nova_compute[244014]: 2026-02-25 12:21:48.087 244018 DEBUG nova.policy [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:21:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:21:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:21:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:21:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45216140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:21:48 compute-0 nova_compute[244014]: 2026-02-25 12:21:48.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:48 compute-0 nova_compute[244014]: 2026-02-25 12:21:48.508 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
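
The mon dump above is fetched so the guest definition can point its RBD disks at the monitors: each address in the mon map becomes a host entry in the disk XML that follows. A sketch, assuming the mon map layout of recent Ceph releases ('mons' -> 'public_addrs' -> 'addrvec'); the key path is an assumption, the command is from the log.

    import json
    import subprocess

    monmap = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    # One entry per monitor; each address ends up as a libvirt <host> element.
    mon_hosts = [m['public_addrs']['addrvec'][0]['addr']
                 for m in monmap['mons']]
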
Feb 25 12:21:48 compute-0 nova_compute[244014]: 2026-02-25 12:21:48.540 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:48 compute-0 nova_compute[244014]: 2026-02-25 12:21:48.546 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:21:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974428450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.071 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.074 244018 DEBUG nova.virt.libvirt.vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:43Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.075 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.076 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.078 244018 DEBUG nova.objects.instance [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.094 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <uuid>e9eb76fe-9616-40a4-aa53-0054cc5c3a57</uuid>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <name>instance-0000001c</name>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-1303628417</nova:name>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:21:47</nova:creationTime>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <nova:port uuid="6e87f383-dbcb-4dad-b195-bc813617ad12">
Feb 25 12:21:49 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <system>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="serial">e9eb76fe-9616-40a4-aa53-0054cc5c3a57</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="uuid">e9eb76fe-9616-40a4-aa53-0054cc5c3a57</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </system>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <os>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </os>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <features>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </features>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk">
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config">
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:21:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:9e:80:09"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <target dev="tap6e87f383-db"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/console.log" append="off"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <video>
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </video>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:21:49 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:21:49 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:21:49 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:21:49 compute-0 nova_compute[244014]: </domain>
Feb 25 12:21:49 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.096 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Preparing to wait for external event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.096 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.097 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.097 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.099 244018 DEBUG nova.virt.libvirt.vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:43Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.099 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.100 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.101 244018 DEBUG os_vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.102 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.103 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.107 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e87f383-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e87f383-db, col_values=(('external_ids', {'iface-id': '6e87f383-dbcb-4dad-b195-bc813617ad12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:80:09', 'vm-uuid': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:49 compute-0 NetworkManager[49836]: <info>  [1772022109.1110] manager: (tap6e87f383-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.115 244018 INFO os_vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db')
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.160 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.161 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.161 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:9e:80:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.162 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Using config drive
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.182 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:49 compute-0 ceph-mon[76335]: pgmap v1107: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Feb 25 12:21:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/45216140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:21:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/974428450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:21:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.732 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating config drive at /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.739 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5m3_1uc7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.849 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: be69a588-f795-413e-b981-a20088eea5ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.862 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5m3_1uc7" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.894 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.899 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.923 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.924 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.925 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.996 244018 DEBUG nova.compute.manager [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.996 244018 DEBUG nova.compute.manager [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-be69a588-f795-413e-b981-a20088eea5ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:21:49 compute-0 nova_compute[244014]: 2026-02-25 12:21:49.997 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.057 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.058 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deleting local config drive /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config because it was imported into RBD.
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.074 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updated VIF entry in instance network info cache for port 6e87f383-dbcb-4dad-b195-bc813617ad12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.075 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.092 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.1140] manager: (tap6e87f383-db): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Feb 25 12:21:50 compute-0 kernel: tap6e87f383-db: entered promiscuous mode
Feb 25 12:21:50 compute-0 ovn_controller[147040]: 2026-02-25T12:21:50Z|00190|binding|INFO|Claiming lport 6e87f383-dbcb-4dad-b195-bc813617ad12 for this chassis.
Feb 25 12:21:50 compute-0 ovn_controller[147040]: 2026-02-25T12:21:50Z|00191|binding|INFO|6e87f383-dbcb-4dad-b195-bc813617ad12: Claiming fa:16:3e:9e:80:09 10.100.0.13
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:80:09 10.100.0.13'], port_security=['fa:16:3e:9e:80:09 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6e87f383-dbcb-4dad-b195-bc813617ad12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.126 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6e87f383-dbcb-4dad-b195-bc813617ad12 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
Feb 25 12:21:50 compute-0 ovn_controller[147040]: 2026-02-25T12:21:50Z|00192|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 ovn-installed in OVS
Feb 25 12:21:50 compute-0 ovn_controller[147040]: 2026-02-25T12:21:50Z|00193|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 up in Southbound
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.128 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad43f7c7-e8cd-401b-8708-39c3aba29d7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.142 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.144 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.145 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[562c0ac2-9433-4eab-bbae-2a62b9ed5c83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.146 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20e700a9-60ea-483c-ab6e-9f80d85c1f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 systemd-machined[210048]: New machine qemu-32-instance-0000001c.
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.160 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ed627be5-6706-4779-9505-6a0e775ac247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Feb 25 12:21:50 compute-0 systemd-udevd[271166]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7161c572-1837-4c82-9c0b-f90240a82d6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.1875] device (tap6e87f383-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.1886] device (tap6e87f383-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.212 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8547d8e4-1d23-4faa-bc7d-a1f315c4629d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.217 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc074b2-921c-46ac-a660-71310491d4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 systemd-udevd[271169]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.2194] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.259 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2c1bd9-d995-468d-86ed-33ea633aca14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d5a885-ebf1-492e-9369-ba679f213ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.2947] device (tap6a1663dd-20): carrier: link connected
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.303 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[edb97d22-de19-4dde-9bbc-70a0b1fdd73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e4adc5-023e-49cf-8c7c-34aac90e99a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407970, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271196, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6714865-91e0-40ba-8dd6-fe4392ba016e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407970, 'tstamp': 407970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271197, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.352 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6d07ce-c102-4f55-85e3-1f82b246af73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407970, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271198, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.382 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[427ff152-2ecd-4382-bc03-22b9cc7ec6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7075e7e-cb67-46b1-b4e6-8cb7b40fd1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.443 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.443 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.444 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 NetworkManager[49836]: <info>  [1772022110.4467] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb 25 12:21:50 compute-0 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.454 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:21:50 compute-0 ovn_controller[147040]: 2026-02-25T12:21:50Z|00194|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1ed559-d081-4679-b48f-c8eefabf28af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.456 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
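The rendered configuration above is what create_config_file writes to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf before the proxy is launched. As a quick sanity check, haproxy's own config-check mode can validate such a file; a minimal Python sketch (the path is the one named in the log, the helper name is ours):

    import subprocess

    def check_haproxy_config(path):
        """Run haproxy in config-check mode against a rendered file.

        -c (check) and -f (config file) are standard haproxy flags;
        this simply asks haproxy itself whether the generated
        configuration parses.
        """
        result = subprocess.run(["haproxy", "-c", "-f", path],
                                capture_output=True, text=True)
        return result.returncode == 0

    # Hypothetical usage against the file the agent writes:
    # check_haproxy_config("/var/lib/neutron/ovn-metadata-proxy/"
    #                      "6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf")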
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.457 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.457 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.458 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:21:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.457 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
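The rootwrap invocation above boils down to `ip netns exec <namespace> env PROCESS_TAG=... haproxy -f <conf>`. A stripped-down sketch of the same launch without rootwrap, assuming root privileges and an existing namespace (the helper name is hypothetical):

    import subprocess

    def spawn_in_netns(netns, cmd, extra_env=None):
        """Launch a command inside a named network namespace via
        `ip netns exec`, mirroring the rootwrap command above."""
        argv = ["ip", "netns", "exec", netns]
        if extra_env:
            argv += ["env"] + ["%s=%s" % kv for kv in extra_env.items()]
        argv += list(cmd)
        return subprocess.Popen(argv)

    # spawn_in_netns(
    #     "ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21",
    #     ["haproxy", "-f", "/var/lib/neutron/ovn-metadata-proxy/"
    #                       "6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf"],
    #     {"PROCESS_TAG": "haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21"})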
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022110.799059, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Started (Lifecycle Event)
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.828 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.833 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022110.799418, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.834 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Paused (Lifecycle Event)
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.856 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.866 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:21:50 compute-0 podman[271272]: 2026-02-25 12:21:50.886112282 +0000 UTC m=+0.098010179 container create 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 12:21:50 compute-0 nova_compute[244014]: 2026-02-25 12:21:50.890 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (spawning). Skip.
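The numbers in the sync message are nova.compute.power_state values (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED). A simplified sketch of the skip rule applied above; nova's real handler in nova.compute.manager also branches on vm_state and is considerably richer:

    NOSTATE, RUNNING, PAUSED = 0, 1, 3  # nova.compute.power_state values

    def sync_power_state(vm_state, task_state, db_power, vm_power):
        """Decide whether to reconcile the DB power state with the
        hypervisor after a lifecycle event (vm_state shown for
        context; this sketch only checks the task)."""
        if task_state is not None:
            # task_state == 'spawning' above: a task owns the
            # instance, so the lifecycle event is skipped for now.
            return "skip: pending task (%s)" % task_state
        if db_power != vm_power:
            return "update DB power_state %s -> %s" % (db_power, vm_power)
        return "in sync"

    print(sync_power_state("building", "spawning", NOSTATE, PAUSED))
    # -> skip: pending task (spawning)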
Feb 25 12:21:50 compute-0 podman[271272]: 2026-02-25 12:21:50.816735528 +0000 UTC m=+0.028633245 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:21:50 compute-0 systemd[1]: Started libpod-conmon-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope.
Feb 25 12:21:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda58046a993fafc00305b047e988f5ee511b02fbba1a417a37a87ca1579e71e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:21:51 compute-0 podman[271272]: 2026-02-25 12:21:51.003654807 +0000 UTC m=+0.215552574 container init 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:21:51 compute-0 podman[271272]: 2026-02-25 12:21:51.01149735 +0000 UTC m=+0.223395077 container start 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
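podman logs the create → init → start sequence for the per-network haproxy container. Roughly the same lifecycle, collapsed into one `podman run` (standard flags; image and name are taken from the log, the wrapper's bind mounts and command are omitted):

    import subprocess

    IMAGE = ("quay.io/podified-antelope-centos9/"
             "openstack-neutron-metadata-agent-ovn:current-podified")
    NAME = "neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21"

    # create + start collapsed into one `podman run`; the real wrapper
    # also bind-mounts /var/lib/neutron and invokes the haproxy
    # wrapper script, both omitted here.
    subprocess.run(["podman", "run", "--detach", "--name", NAME,
                    "--net", "host", IMAGE], check=True)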
Feb 25 12:21:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : New worker (271293) forked
Feb 25 12:21:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : Loading success.
Feb 25 12:21:51 compute-0 nova_compute[244014]: 2026-02-25 12:21:51.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:51 compute-0 ceph-mon[76335]: pgmap v1108: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Feb 25 12:21:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:21:52 compute-0 podman[271302]: 2026-02-25 12:21:52.736131888 +0000 UTC m=+0.073874893 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:21:52 compute-0 podman[271303]: 2026-02-25 12:21:52.798961466 +0000 UTC m=+0.133039876 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223)
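These health_status lines embed each container's config_data as a Python dict literal, so it parses with ast.literal_eval rather than json. A sketch, assuming the caller has already isolated the `config_data={...}` fragment from the surrounding comma-separated text:

    import ast

    def parse_config_data(fragment):
        """Parse the config_data dict embedded in a podman
        health_status line. `fragment` is the text between
        'config_data=' and the matching closing brace; isolating it
        is left to the caller, since the rest of the line is
        comma-separated free text, not JSON."""
        return ast.literal_eval(fragment)

    cfg = parse_config_data(
        "{'net': 'host', 'privileged': True, 'restart': 'always'}")
    print(cfg["restart"])  # -> always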
Feb 25 12:21:53 compute-0 ceph-mon[76335]: pgmap v1109: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:21:53 compute-0 nova_compute[244014]: 2026-02-25 12:21:53.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.970 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.971 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.972 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.972 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.973 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Processing event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.973 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.975 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.975 244018 WARNING nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state building and task_state spawning.
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.976 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
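The sequence above is nova's external-event handshake: the spawning thread registers a waiter for network-vif-plugged, neutron posts the event through the API (req-1497a9fa...), and the wait completes about four seconds later. A toy version of the register/dispatch pattern with threading primitives (not nova's actual InstanceEvents implementation):

    import threading

    class EventWaiter:
        """Toy version of the registration/dispatch pattern above."""

        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()  # plays the role of the
                                           # "<uuid>-events" lock

        def prepare(self, name):
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.pop(name, None)
            if ev is None:
                return "unexpected event"  # cf. the WARNING above
            ev.set()
            return "dispatched"

    waiter = EventWaiter()
    ev = waiter.prepare("network-vif-plugged-6e87f383")
    threading.Timer(0.1, waiter.deliver,
                    ["network-vif-plugged-6e87f383"]).start()
    ev.wait(timeout=5)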
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.981 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022114.981375, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.982 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Resumed (Lifecycle Event)
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.986 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.991 244018 INFO nova.virt.libvirt.driver [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance spawned successfully.
Feb 25 12:21:54 compute-0 nova_compute[244014]: 2026-02-25 12:21:54.991 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.005 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.023 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.038 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.039 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.040 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.041 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.042 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.042 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
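The six "Found default" lines amount to persisting this mapping into the instance's image metadata so later rebuilds keep the same virtual hardware. Collected as plain data (values straight from the log):

    # Defaults registered for instance e9eb76fe-..., per the lines above.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }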
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.059 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.107 244018 INFO nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 11.28 seconds to spawn the instance on the hypervisor.
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.108 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.174 244018 INFO nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 12.68 seconds to build instance.
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.191 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:55 compute-0 ceph-mon[76335]: pgmap v1110: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:21:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.902 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.903 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:55 compute-0 nova_compute[244014]: 2026-02-25 12:21:55.922 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.001 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.002 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.011 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.012 244018 INFO nova.compute.claims [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.340 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
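Before claiming disk for the new instance, nova shells out to ceph. The same query can be reproduced and parsed directly; the stats/total_* keys are standard `ceph df` JSON fields, though they vary slightly across Ceph releases:

    import json
    import subprocess

    def ceph_free_bytes(conf="/etc/ceph/ceph.conf", user="openstack"):
        """Run the same `ceph df` query as the log line above and
        return the cluster-wide available bytes."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)["stats"]["total_avail_bytes"]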
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.575 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.603 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.604 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.605 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port be69a588-f795-413e-b981-a20088eea5ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.609 244018 DEBUG nova.virt.libvirt.vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.609 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.611 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.611 244018 DEBUG os_vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.613 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.614 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
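os-vif drives these AddBridgeCommand/AddPortCommand pairs through ovsdbapp, and may_exist=True is what turns the bridge add into the no-op reported above. A sketch of the same idempotent transaction against a local OVS database; the connection boilerplate follows ovsdbapp's documented pattern, and the socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            "unix:/run/openvswitch/db.sock", "Open_vSwitch"),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    # may_exist=True keeps both adds idempotent, which is why the
    # bridge add above reported "Transaction caused no change".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", datapath_type="system",
                           may_exist=True))
        txn.add(api.add_port("br-int", "tapbe69a588-f7", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapbe69a588-f7",
            ("external_ids",
             {"iface-id": "be69a588-f795-413e-b981-a20088eea5ed"})))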
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe69a588-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe69a588-f7, col_values=(('external_ids', {'iface-id': 'be69a588-f795-413e-b981-a20088eea5ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:32:d9', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 NetworkManager[49836]: <info>  [1772022116.6219] manager: (tapbe69a588-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.632 244018 INFO os_vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.633 244018 DEBUG nova.virt.libvirt.vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.633 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.634 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.637 244018 DEBUG nova.virt.libvirt.guest [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:76:32:d9"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <target dev="tapbe69a588-f7"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]: </interface>
Feb 25 12:21:56 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
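The XML above is handed to libvirt's device-attach call on the running guest. The equivalent with the libvirt Python bindings (instance UUID and XML from the log; the live/config flag pair is the standard combination for a persistent hot-attach):

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:76:32:d9"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapbe69a588-f7"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("51d1d661-89db-4958-a2f4-c299ee997cde")
    # Attach to the live domain and persist in its config, which is
    # what nova's attach_device amounts to for a running instance.
    dom.attachDeviceFlags(
        IFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)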
Feb 25 12:21:56 compute-0 kernel: tapbe69a588-f7: entered promiscuous mode
Feb 25 12:21:56 compute-0 NetworkManager[49836]: <info>  [1772022116.6482] manager: (tapbe69a588-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Feb 25 12:21:56 compute-0 ovn_controller[147040]: 2026-02-25T12:21:56Z|00195|binding|INFO|Claiming lport be69a588-f795-413e-b981-a20088eea5ed for this chassis.
Feb 25 12:21:56 compute-0 ovn_controller[147040]: 2026-02-25T12:21:56Z|00196|binding|INFO|be69a588-f795-413e-b981-a20088eea5ed: Claiming fa:16:3e:76:32:d9 10.100.0.3
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.659 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:32:d9 10.100.0.3'], port_security=['fa:16:3e:76:32:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be69a588-f795-413e-b981-a20088eea5ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.660 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be69a588-f795-413e-b981-a20088eea5ed in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.662 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:21:56 compute-0 ovn_controller[147040]: 2026-02-25T12:21:56Z|00197|binding|INFO|Setting lport be69a588-f795-413e-b981-a20088eea5ed up in Southbound
Feb 25 12:21:56 compute-0 ovn_controller[147040]: 2026-02-25T12:21:56Z|00198|binding|INFO|Setting lport be69a588-f795-413e-b981-a20088eea5ed ovn-installed in OVS
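[editor's note] ovn-controller has just claimed the logical port for this chassis and marked it up in the Southbound database. A hedged way to verify the binding from the compute node is to shell out to ovn-sbctl, sketched below (assumes the default local ovn-sbctl connection works on this host):

    import json
    import subprocess

    # db-ctl 'find' selects Southbound rows by column value; --format=json
    # keeps the table output machine-readable.
    out = subprocess.run(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=be69a588-f795-413e-b981-a20088eea5ed'],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out))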
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 systemd-udevd[271374]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a5d9bf-40aa-4b2c-82c5-a67612b1b4f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:56 compute-0 NetworkManager[49836]: <info>  [1772022116.6867] device (tapbe69a588-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:21:56 compute-0 NetworkManager[49836]: <info>  [1772022116.6884] device (tapbe69a588-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.726 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0701b5fc-a41d-4902-8e8c-95b78e29e879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.729 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[038eb543-dd3c-4a69-9d79-5dd6a311f5ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4d222830-c8b8-40ba-b0e5-1d474f8f2971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5cee3435-0595-48fb-bfbd-c4b8268b57bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271381, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
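[editor's note] The privsep reply above is a raw pyroute2 RTM_NEWLINK message for tap08121372-a1, fetched inside the ovnmeta- network namespace. The same attributes can be read directly with pyroute2, sketched here on the assumption that the namespace still exists and the caller has the privileges the privsep daemon normally supplies:

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace named in the log.
    ns = NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2')
    try:
        idx = ns.link_lookup(ifname='tap08121372-a1')[0]
        link = ns.get_links(idx)[0]
        # get_attr() walks the same IFLA_* attribute list dumped above.
        print(link.get_attr('IFLA_ADDRESS'), link.get_attr('IFLA_OPERSTATE'))
    finally:
        ns.close()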
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.817 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.816 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79432e3c-ddcb-4bd0-b46d-ceae47b5c6e5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271382, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271382, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.817 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.818 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.818 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.819 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:85:c5:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.819 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.819 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:76:32:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.824 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.824 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
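[editor's note] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) idempotently move the metadata tap onto br-int and stamp its iface-id; both transactions report "no change" because the port was already in the desired state. The agent drives this through the OVSDB IDL, but the same end state can be expressed with one ovs-vsctl invocation, sketched here:

    import subprocess

    # Mirrors the agent's transaction chain: drop the port from br-ex if it
    # is there, ensure it exists on br-int, and set its iface-id.
    subprocess.run(
        ['ovs-vsctl',
         '--', '--if-exists', 'del-port', 'br-ex', 'tap08121372-a0',
         '--', '--may-exist', 'add-port', 'br-int', 'tap08121372-a0',
         '--', 'set', 'Interface', 'tap08121372-a0',
         'external_ids:iface-id=ef44c128-3fa4-4475-b63c-4818a50ede40'],
        check=True)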
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:21:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765039026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
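[editor's note] The monitor is dispatching a "df" issued by client.openstack; the matching client-side invocation is logged by nova a few records below. A sketch of running the same query and reading the per-pool fields that feed the DISK_GB totals reported later in this log (the field names are the usual ceph df JSON layout, stated here as an assumption):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    for pool in json.loads(out)['pools']:
        stats = pool['stats']
        # Per-pool usage and headroom as ceph df reports them.
        print(pool['name'], stats.get('bytes_used'), stats.get('max_avail'))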
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.906 244018 DEBUG nova.virt.libvirt.guest [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:56 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 12:21:56 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:21:56 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:21:56 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:21:56 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:56 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:56 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:56 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
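[editor's note] set_metadata rewrites the <nova:instance> block stored under the domain's <metadata> element. The corresponding libvirt call, as a minimal sketch with the payload abbreviated (the full document is the record above):

    import libvirt

    # Abbreviated stand-in for the <nova:instance> document logged above.
    meta_xml = ('<instance xmlns="http://openstack.org/xmlns/libvirt/nova/1.1">'
                '<name>tempest-AttachInterfacesTestJSON-server-1386050966</name>'
                '</instance>')

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
        # Replaces whatever element is registered under this namespace URI,
        # for both the live and persistent configs.
        dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, meta_xml,
                        'instance',
                        'http://openstack.org/xmlns/libvirt/nova/1.1',
                        libvirt.VIR_DOMAIN_AFFECT_LIVE |
                        libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    finally:
        conn.close()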
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.914 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:56 compute-0 nova_compute[244014]: 2026-02-25 12:21:56.921 244018 DEBUG nova.compute.provider_tree [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.077 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.096 244018 DEBUG nova.scheduler.client.report [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.118 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.119 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.167 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.168 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.191 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.209 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.302 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.304 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.305 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating image(s)
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.337 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.370 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.404 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.409 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.474 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
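[editor's note] Nova wraps the probe in oslo_concurrency.prlimit to cap the child's address space (1 GiB) and CPU time (30 s) while it inspects the cached base image. Stripped of that wrapper, the probe is just qemu-img info; a sketch of the same call and the fields of interest:

    import json
    import subprocess

    # --force-share permits probing an image that may be open elsewhere;
    # --output=json returns the dict nova's image backend parses.
    out = subprocess.run(
        ['qemu-img', 'info',
         '/var/lib/nova/instances/_base/'
         'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
         '--force-share', '--output=json'],
        check=True, capture_output=True, text=True).stdout

    info = json.loads(out)
    print(info['format'], info['virtual-size'])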
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.475 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.475 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.476 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.499 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.502 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:21:57 compute-0 ceph-mon[76335]: pgmap v1111: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:21:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3765039026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:21:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.731 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.809 244018 DEBUG nova.policy [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d27f8a357c2443a9140598fd9ec73e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04734aae68d34fac8fb592fc015632fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.818 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] resizing rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
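[editor's note] With no pre-existing RBD image, nova imports the flat base file into the vms pool and then grows it to the flavor's 1 GiB root disk (1073741824 bytes). Nova does the resize through the rbd Python bindings; the CLI equivalent of both steps as a sketch (rbd sizes default to megabytes, hence --size 1024):

    import subprocess

    BASE = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    IMAGE = '5a7dc142-2b11-4214-87b7-636f27ccacbf_disk'

    # Step 1: import the cached base image, exactly as logged above.
    subprocess.run(['rbd', 'import', '--pool', 'vms', BASE, IMAGE,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    # Step 2: grow to the flavor root disk (1024 MiB == 1073741824 bytes).
    subprocess.run(['rbd', 'resize', 'vms/' + IMAGE, '--size', '1024',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)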
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.858 244018 DEBUG nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.859 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.860 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.860 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
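[editor's note] The Acquiring/acquired/released triplets that recur throughout this log are emitted by oslo.concurrency's lock wrapper. The pattern reduced to a runnable sketch; 'demo-events' is a hypothetical lock name for illustration, whereas nova builds names like "<instance-uuid>-events" at runtime:

    from oslo_concurrency import lockutils

    # The decorator serializes callers on a named lock and logs the same
    # acquire/release DEBUG lines seen above.
    @lockutils.synchronized('demo-events')  # hypothetical name
    def pop_event():
        # Body runs with the named lock held.
        return None

    pop_event()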
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.861 244018 DEBUG nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.861 244018 WARNING nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed for instance with vm_state active and task_state None.
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.917 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.918 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.918 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.928 244018 DEBUG nova.objects.instance [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'migration_context' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.950 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.958 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.959 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Ensure instance console log exists: /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.960 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.961 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:57 compute-0 nova_compute[244014]: 2026-02-25 12:21:57.962 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.020 244018 INFO nova.compute.manager [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Pausing
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.021 244018 DEBUG nova.objects.instance [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'flavor' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.056 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022118.0560896, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.057 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Paused (Lifecycle Event)
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.060 244018 DEBUG nova.compute.manager [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.089 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.094 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.133 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.477 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:58 compute-0 ovn_controller[147040]: 2026-02-25T12:21:58Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:32:d9 10.100.0.3
Feb 25 12:21:58 compute-0 ovn_controller[147040]: 2026-02-25T12:21:58Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:32:d9 10.100.0.3
Feb 25 12:21:58 compute-0 ceph-mon[76335]: pgmap v1112: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.896 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port be69a588-f795-413e-b981-a20088eea5ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.898 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.946 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.947 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.948 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:21:58 compute-0 nova_compute[244014]: 2026-02-25 12:21:58.948 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.345 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.346 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.366 244018 DEBUG nova.objects.instance [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.398 244018 DEBUG nova.virt.libvirt.vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.399 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.401 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.404 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully created port: 0fef626d-412c-4101-95eb-eadb3354247f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.413 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.417 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.421 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.421 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <target dev="tap31f40ed6-50"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.430 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.435 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> not found in domain: <domain type='kvm' id='31'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <name>instance-0000001b</name>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <system>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </system>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <os>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </os>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <features>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </features>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk' index='2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config' index='1'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tap433e6f28-31'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:c1:c1:8f'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tap31f40ed6-50'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tapf1abe770-52'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:76:32:d9'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tapbe69a588-f7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </target>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </console>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <video>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </video>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c806,c930</label>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c806,c930</imagelabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </domain>
Feb 25 12:21:59 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.436 244018 INFO nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config.
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.437 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tap31f40ed6-50 with device alias net1 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.438 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <target dev="tap31f40ed6-50"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:21:59 compute-0 kernel: tap31f40ed6-50 (unregistering): left promiscuous mode
Feb 25 12:21:59 compute-0 NetworkManager[49836]: <info>  [1772022119.5656] device (tap31f40ed6-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:21:59 compute-0 ovn_controller[147040]: 2026-02-25T12:21:59Z|00199|binding|INFO|Releasing lport 31f40ed6-505b-4061-b861-ea2720b0ff62 from this chassis (sb_readonly=0)
Feb 25 12:21:59 compute-0 ovn_controller[147040]: 2026-02-25T12:21:59Z|00200|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 down in Southbound
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 ovn_controller[147040]: 2026-02-25T12:21:59Z|00201|binding|INFO|Removing iface tap31f40ed6-50 ovn-installed in OVS
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.577 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022119.575966, 51d1d661-89db-4958-a2f4-c299ee997cde => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.579 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tap31f40ed6-50 with device alias net1 for instance 51d1d661-89db-4958-a2f4-c299ee997cde _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
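The live detach is asynchronous: the driver issues it, then blocks waiting for libvirt's DEVICE_REMOVED event for the device alias (net1 here), retrying the detach up to 8 times if the guest does not respond, as the "(1/8)" entry above indicates. A simplified sketch of that wait, assuming the libvirt event loop runs in the calling thread (nova actually services it from a dedicated native thread):

    import libvirt

    IFACE_XML = ('<interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/>'
                 '<model type="virtio"/><driver name="vhost" rx_queue_size="512"/>'
                 '<mtu size="1442"/><target dev="tap31f40ed6-50"/></interface>')

    # The default event implementation must be registered before opening
    # the connection for events to be delivered.
    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')

    done = {'removed': False}

    def on_removed(conn, dom, dev, opaque):
        # libvirt reports the device alias, e.g. 'net1'.
        if dev == 'net1':
            done['removed'] = True

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Pump the event loop until the removal event arrives.
    while not done['removed']:
        libvirt.virEventRunDefaultImpl()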
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.579 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.585 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> not found in domain: <domain type='kvm' id='31'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <name>instance-0000001b</name>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <system>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </system>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <os>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </os>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <features>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </features>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk' index='2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config' index='1'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </source>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tap433e6f28-31'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tapf1abe770-52'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:76:32:d9'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target dev='tapbe69a588-f7'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='net3'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       </target>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </console>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </input>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <video>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </video>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c806,c930</label>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c806,c930</imagelabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </domain>
Feb 25 12:21:59 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.585 244018 INFO nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the live domain config.
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.586 244018 DEBUG nova.virt.libvirt.vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.587 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.588 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.587 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c1:8f 10.100.0.10'], port_security=['fa:16:3e:c1:c1:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=31f40ed6-505b-4061-b861-ea2720b0ff62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.588 244018 DEBUG os_vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.589 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.591 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.592 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.597 244018 INFO os_vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.598 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:21:59</nova:creationTime>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:21:59 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:21:59 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:21:59 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:21:59 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:21:59 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9320917a-df56-405f-94e7-04aec17303da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[28cd2d84-9a73-4e37-aea6-b734b760aa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.647 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e943b312-2458-494f-a7bd-318495fb129a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[25f3a6ce-b3ac-4d06-9673-4f6ead3741b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e25326-a4b7-4061-b566-9d46b66b962c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271560, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.723 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2015285-3394-43f0-82f9-ecad9c8bdecc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271561, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271561, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.725 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 nova_compute[244014]: 2026-02-25 12:21:59.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.729 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.729 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.730 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:21:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.730 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.288 244018 DEBUG nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.288 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.290 244018 WARNING nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed for instance with vm_state active and task_state None.
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.363 244018 DEBUG nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.420 244018 INFO nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] instance snapshotting
Feb 25 12:22:00 compute-0 nova_compute[244014]: 2026-02-25 12:22:00.420 244018 WARNING nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] trying to snapshot a non-running instance: (state: 3 expected: 1)
Feb 25 12:22:00 compute-0 ceph-mon[76335]: pgmap v1113: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.432 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.587 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully updated port: 0fef626d-412c-4101-95eb-eadb3354247f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.603 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.604 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.604 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.611 244018 INFO nova.virt.libvirt.driver [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Beginning live snapshot process
Feb 25 12:22:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 292 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 75 op/s
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.793 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.796 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.803 244018 DEBUG nova.compute.manager [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-changed-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.803 244018 DEBUG nova.compute.manager [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing instance network info cache due to event network-changed-0fef626d-412c-4101-95eb-eadb3354247f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.804 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.813 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.819 157129 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 261c7612-7fb7-43f1-9ba4-19c54ae198d6 with type ""
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.821 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:32:d9 10.100.0.3'], port_security=['fa:16:3e:76:32:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be69a588-f795-413e-b981-a20088eea5ed) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.823 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be69a588-f795-413e-b981-a20088eea5ed in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.826 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:22:01 compute-0 ovn_controller[147040]: 2026-02-25T12:22:01Z|00202|binding|INFO|Removing iface tapbe69a588-f7 ovn-installed in OVS
Feb 25 12:22:01 compute-0 ovn_controller[147040]: 2026-02-25T12:22:01Z|00203|binding|INFO|Removing lport be69a588-f795-413e-b981-a20088eea5ed ovn-installed in OVS
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8f9bae-0c15-468d-81ea-60bf91e7b50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.872 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[335e0f55-a0d7-49f7-8715-163d138956a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.876 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[59f1ba9a-5150-4003-b441-16ea9770fb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.880 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dac11507-59c4-45a8-bbf9-4fa9a5aa94ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfc7ab9-46b4-46a8-b243-b0b74675aa94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271600, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.953 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4902a1-0e2b-4ee5-9ca1-c9f61cd2fc0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271601, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271601, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.955 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:01 compute-0 nova_compute[244014]: 2026-02-25 12:22:01.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.959 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.960 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.961 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.962 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.080 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(56170dc03de240289a8d156f9d514494) on rbd image(e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.341 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.342 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.343 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.343 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.344 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.346 244018 INFO nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Terminating instance
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.347 244018 DEBUG nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:22:02 compute-0 kernel: tap433e6f28-31 (unregistering): left promiscuous mode
Feb 25 12:22:02 compute-0 NetworkManager[49836]: <info>  [1772022122.4003] device (tap433e6f28-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00204|binding|INFO|Releasing lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 from this chassis (sb_readonly=0)
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00205|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 down in Southbound
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00206|binding|INFO|Removing iface tap433e6f28-31 ovn-installed in OVS
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.415 244018 DEBUG nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.416 244018 INFO nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Neutron deleted interface 31f40ed6-505b-4061-b861-ea2720b0ff62; detaching it from the instance and deleting it from the info cache
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.417 244018 DEBUG nova.network.neutron [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.419 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d5:f4 10.100.0.11'], port_security=['fa:16:3e:4a:d5:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43bfaf0d-cb4f-4024-9f62-b8a095234270', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=433e6f28-313e-4fe8-b8da-eacc8a0332c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:02 compute-0 kernel: tapf1abe770-52 (unregistering): left promiscuous mode
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.423 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:22:02 compute-0 NetworkManager[49836]: <info>  [1772022122.4251] device (tapf1abe770-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.426 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00207|binding|INFO|Releasing lport f1abe770-5205-4bae-888a-f2489c2af7a7 from this chassis (sb_readonly=0)
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00208|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 down in Southbound
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_controller[147040]: 2026-02-25T12:22:02Z|00209|binding|INFO|Removing iface tapf1abe770-52 ovn-installed in OVS
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc79da2-859c-4ca9-9b32-7f9a1f0135da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 kernel: tapbe69a588-f7 (unregistering): left promiscuous mode
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.454 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:fe 10.100.0.7'], port_security=['fa:16:3e:85:c5:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1abe770-5205-4bae-888a-f2489c2af7a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:02 compute-0 NetworkManager[49836]: <info>  [1772022122.4573] device (tapbe69a588-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.482 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[019779ce-699b-469f-826c-4f44e2ac9bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e13e294c-a825-4876-ab0a-6f2bcde34df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb 25 12:22:02 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 14.923s CPU time.
Feb 25 12:22:02 compute-0 systemd-machined[210048]: Machine qemu-31-instance-0000001b terminated.
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8495d245-26c9-481b-9909-5765e377ee97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44b10f7e-b69e-4c4e-b8c8-e946d0fb2db8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271642, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.555 244018 DEBUG nova.objects.instance [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.560 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2624b7c-7332-43be-871e-a87f9cae44c2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271643, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271643, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.563 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.578 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.579 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:02 compute-0 NetworkManager[49836]: <info>  [1772022122.5807] manager: (tapf1abe770-52): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1abe770-5205-4bae-888a-f2489c2af7a7 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.584 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08121372-a435-401a-b405-778e10d8c2e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.585 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47246297-ffe4-4057-851d-07d4fb7383ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.587 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace which is not needed anymore
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 NetworkManager[49836]: <info>  [1772022122.5946] manager: (tapbe69a588-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.615 244018 INFO nova.virt.libvirt.driver [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance destroyed successfully.
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.616 244018 DEBUG nova.objects.instance [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.730 244018 DEBUG nova.objects.instance [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 25 12:22:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 25 12:22:02 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : haproxy version is 2.8.14-c23fe91
Feb 25 12:22:02 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : path to executable is /usr/sbin/haproxy
Feb 25 12:22:02 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [WARNING]  (268891) : Exiting Master process...
Feb 25 12:22:02 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [ALERT]    (268891) : Current worker (268909) exited with code 143 (Terminated)
Feb 25 12:22:02 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [WARNING]  (268891) : All workers exited. Exiting... (0)
Feb 25 12:22:02 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 25 12:22:02 compute-0 ceph-mon[76335]: pgmap v1114: 305 pgs: 305 active+clean; 292 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 75 op/s
Feb 25 12:22:02 compute-0 systemd[1]: libpod-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope: Deactivated successfully.
Feb 25 12:22:02 compute-0 podman[271693]: 2026-02-25 12:22:02.752752601 +0000 UTC m=+0.058912477 container died b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.752 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.753 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.755 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.756 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.759 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap433e6f28-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.777 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31')
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.778 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": 
"system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.779 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.780 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.781 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.785 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.785 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.790 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": 
"br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.790 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.792 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b250d49de0d9de34b918b625012f9e944f53e47c7cb2c90afd9a5ea1fff5ce10-merged.mount: Deactivated successfully.
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.802 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk@56170dc03de240289a8d156f9d514494 to images/7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:22:02 compute-0 podman[271693]: 2026-02-25 12:22:02.807681714 +0000 UTC m=+0.113841520 container cleanup b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:22:02 compute-0 systemd[1]: libpod-conmon-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope: Deactivated successfully.
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.837 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.838 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.838 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.839 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.839 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.840 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1abe770-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.846 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.853 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52')
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.855 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:02 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.869 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe69a588-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.876 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface>not found in domain: <domain type='kvm'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <name>instance-0000001b</name>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:20:53</nova:creationTime>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:22:02 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <cpu mode='host-model' check='partial'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target dev='tap433e6f28-31'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target dev='tapf1abe770-52'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:76:32:d9'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target dev='tapbe69a588-f7'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       </target>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <console type='pty'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </console>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </input>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:02 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:02 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.878 244018 WARNING nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Detaching interface fa:16:3e:c1:c1:8f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap31f40ed6-50' not found.
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.880 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.881 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.882 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.883 244018 DEBUG os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:02 compute-0 podman[271731]: 2026-02-25 12:22:02.88484559 +0000 UTC m=+0.054042269 container remove b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.886 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.887 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.889 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.895 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d60eedac-5b3f-428e-9f40-420095564672]: (4, ('Wed Feb 25 12:22:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d)\nb50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d\nWed Feb 25 12:22:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d)\nb50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e935a566-1be5-41fc-9c73-c588d066c3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.897 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:02 compute-0 kernel: tap08121372-a0: left promiscuous mode
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.904 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4730c8-35b8-4170-ac55-c7db7ab0b057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.918 244018 INFO os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.919 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:22:02</nova:creationTime>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:22:02 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:22:02 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:02 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:22:02 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:22:02 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
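
set_metadata stores the <nova:instance> document above in the libvirt domain through the metadata API. A minimal sketch of the underlying libvirt call; the trimmed XML content and the connection URI are illustrative:

    import libvirt

    metadata_xml = (
        '<instance xmlns="http://openstack.org/xmlns/libvirt/nova/1.1">'
        '<name>tempest-AttachInterfacesTestJSON-server-1386050966</name>'
        '</instance>')

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    # Attach the element under the 'nova' prefix/namespace, updating both
    # the running guest and the persistent definition.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'nova', 'http://openstack.org/xmlns/libvirt/nova/1.1',
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
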
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed2cdfc-da11-4e81-9ce1-b57ad2f8e5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[144e32bf-c329-488a-a8b1-31f4ce98b874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 DEBUG nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 INFO nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Neutron deleted interface be69a588-f795-413e-b981-a20088eea5ed; detaching it from the instance and deleting it from the info cache
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 DEBUG nova.network.neutron [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.937 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
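
rbd flatten copies the parent snapshot's data into the clone so the clone stops referencing its parent. A minimal sketch with the rados/rbd Python bindings; the pool and image names come from the log line, while the conffile path is an assumption:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('images')
        try:
            with rbd.Image(ioctx, '7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b') as image:
                image.flatten()  # decouple the clone from its parent
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
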
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba144d37-078e-4aec-8961-ef8dc9d3b57b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402555, 'reachable_time': 23431, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271792, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.941 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
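
remove_netns is one of neutron's privileged helpers built on pyroute2; removing the namespace also unmounts /run/netns/<name>, which systemd acknowledges just below. A minimal sketch (requires sufficient privileges to run):

    from pyroute2 import netns

    # Namespace name taken from the log line above.
    netns.remove('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2')
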
Feb 25 12:22:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d08121372\x2da435\x2d401a\x2db405\x2d778e10d8c2e2.mount: Deactivated successfully.
Feb 25 12:22:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.943 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[29b9448d-3a59-440f-946e-8a3724107b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.990 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.992 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:02 compute-0 nova_compute[244014]: 2026-02-25 12:22:02.996 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.006 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.015 244018 DEBUG nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Attempting to detach device tapbe69a588-f7 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.016 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:76:32:d9"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <target dev="tapbe69a588-f7"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]: </interface>
Feb 25 12:22:03 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
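
Detaching from the persistent domain config corresponds to libvirt's detachDeviceFlags with the CONFIG flag; nova detaches from the live domain separately when needed. A minimal sketch reusing the interface XML above:

    import libvirt

    interface_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:76:32:d9"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapbe69a588-f7"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    # Remove the interface from the persistent definition only.
    dom.detachDeviceFlags(interface_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
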
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.042 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.046 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> not found in domain: <domain type='kvm'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <name>instance-0000001b</name>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:22:02</nova:creationTime>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:22:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:22:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 12:22:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <cpu mode='host-model' check='partial'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target dev='tap433e6f28-31'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target dev='tapf1abe770-52'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       </target>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <console type='pty'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </console>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </input>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:03 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:03 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.048 244018 INFO nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully detached device tapbe69a588-f7 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config.
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.049 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.049 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.050 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.051 244018 DEBUG os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.052 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe69a588-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.053 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.054 244018 INFO os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')
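
os_vif.unplug dispatches on the VIF object's plugin field ('ovs' here), and the ovs plugin removes the tap port from br-int, which is the DelPortCommand seen just above. A minimal sketch that rebuilds the logged VIF; the InstanceInfo values and the exact set of fields needed are illustrative:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the plugins (including 'ovs')
    net = network.Network(id='08121372-a435-401a-b405-778e10d8c2e2',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='be69a588-f795-413e-b981-a20088eea5ed',
        address='fa:16:3e:76:32:d9',
        network=net,
        bridge_name='br-int',
        vif_name='tapbe69a588-f7',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='instance-0000001b')
    os_vif.unplug(v, inst)  # for 'ovs' this deletes the port from br-int
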
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.055 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:22:03</nova:creationTime>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 12:22:03 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:22:03 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:03 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:22:03 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:22:03 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
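Annotation: the <nova:instance> document logged above is attached to the domain through libvirt's metadata API, scoped by the nova namespace URI shown in the XML. A sketch of the underlying libvirt-python call; the domain name, the metadata key string, and the flag combination are assumptions for illustration (nova goes through its Guest wrapper rather than calling libvirt directly like this):

    # Sketch: attaching a namespaced metadata element to a libvirt domain.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = '<instance xmlns="%s">...</instance>' % NOVA_NS  # truncated

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001d')  # assumed domain name
    # Apply to both the live domain and its persisted definition.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)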
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.187 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(56170dc03de240289a8d156f9d514494) on rbd image(e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.297 244018 INFO nova.virt.libvirt.driver [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deleting instance files /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde_del
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.298 244018 INFO nova.virt.libvirt.driver [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deletion of /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde_del complete
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.365 244018 INFO nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 1.02 seconds to destroy the instance on the hypervisor.
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG oslo.service.loopingcall [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG nova.network.neutron [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.585 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.607 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.608 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance network_info: |[{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.609 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.609 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing network info cache for port 0fef626d-412c-4101-95eb-eadb3354247f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.616 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start _get_guest_xml network_info=[{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.625 244018 WARNING nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.637 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.638 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.642 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.642 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
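Annotation: the two probes above show the driver settling the cpu-controller question in order: the cgroup v1 hierarchy has no cpu controller on this host, but the cgroup v2 unified hierarchy advertises one. A sketch of an equivalent check, not nova's exact implementation, using the standard kernel mount points (assumed to be mounted at their defaults):

    # Sketch: detect a usable "cpu" controller, mirroring the v1-then-v2
    # probe logged above.
    import os

    def has_cpu_controller():
        # cgroup v1: a dedicated /sys/fs/cgroup/cpu hierarchy would exist.
        if os.path.isdir('/sys/fs/cgroup/cpu'):
            return True
        # cgroup v2: the unified hierarchy lists controllers in one file.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False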
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.643 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.643 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.644 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.644 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.647 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
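Annotation: with no flavor or image constraints (preferred 0:0:0, per-dimension limits of 65536), a 1-vCPU guest admits exactly one topology, 1 socket x 1 core x 1 thread, which is why the log reports a single possible topology. A worked sketch of the enumeration, in spirit only (the real hardware module applies more ordering and preference rules than this):

    # Sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, within per-dimension caps.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for threads in range(1, min(vcpus, max_threads) + 1):
            if vcpus % threads:
                continue
            for cores in range(1, min(vcpus // threads, max_cores) + 1):
                if (vcpus // threads) % cores:
                    continue
                sockets = vcpus // (threads * cores)
                if sockets <= max_sockets:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)]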
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.651 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Feb 25 12:22:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 25 12:22:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 25 12:22:03 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 25 12:22:03 compute-0 ceph-mon[76335]: osdmap e149: 3 total, 3 up, 3 in
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.812 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
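Annotation: the rbd_utils calls at 12:22:03.187 (remove_snap) and 12:22:03.812 (create_snap) are thin wrappers over the ceph python bindings. A minimal sketch of both operations; the client id matches the client.openstack identity seen in the ceph audit log, while the pool name is an assumption (the image uuid is taken from the create_snap line above):

    # Sketch: snapshot create/remove with the python rados/rbd bindings,
    # roughly what nova.storage.rbd_utils does under the hood.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')  # client.openstack
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # pool name assumed
    try:
        with rbd.Image(ioctx, '7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b') as img:
            img.create_snap('snap')
            img.remove_snap('snap')
    finally:
        ioctx.close()
        cluster.shutdown()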
Feb 25 12:22:03 compute-0 nova_compute[244014]: 2026-02-25 12:22:03.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695468625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.210 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
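Annotation: both "ceph mon dump" round-trips in this window are nova shelling out to discover monitor addresses, which end up as the <host name="..." port="6789"/> elements in the domain XML further down. A sketch of that discovery step with the same flags as logged; nova uses oslo_concurrency.processutils for the subprocess, and the exact JSON field names vary by ceph release:

    # Sketch: resolve ceph monitor addresses the way the logged command does.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # Each mon entry carries its address; field names differ across releases.
    print([m.get('addr') for m in mons])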
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.230 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.233 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.732 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 WARNING nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with vm_state active and task_state deleting.
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.737 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.737 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
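Annotation: every event above is popped under a short-lived named lock ("<uuid>-events"), which is why the log shows Acquiring/acquired/"released" triplets held for 0.000s: concurrent neutron notifications for one instance serialize, while events for different instances proceed in parallel. A sketch of the same pattern with oslo.concurrency; the events mapping and function shape here are illustrative, not nova's actual structures:

    # Sketch: per-instance event serialization with a named lock.
    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        lock_name = '%s-events' % instance_uuid
        with lockutils.lock(lock_name):
            # Return and remove a registered waiter if one exists, else None
            # ("No waiting events found" in the log).
            return events.get(instance_uuid, {}).pop(event_name, None)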
Feb 25 12:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398283395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 25 12:22:04 compute-0 ceph-mon[76335]: pgmap v1116: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Feb 25 12:22:04 compute-0 ceph-mon[76335]: osdmap e150: 3 total, 3 up, 3 in
Feb 25 12:22:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1695468625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/398283395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.776 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.777 244018 DEBUG nova.virt.libvirt.vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:57Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.777 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.778 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.779 244018 DEBUG nova.objects.instance [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 25 12:22:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <uuid>5a7dc142-2b11-4214-87b7-636f27ccacbf</uuid>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <name>instance-0000001d</name>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:name>tempest-AttachInterfacesV270Test-server-389676934</nova:name>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:22:03</nova:creationTime>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:user uuid="7d27f8a357c2443a9140598fd9ec73e1">tempest-AttachInterfacesV270Test-1981016367-project-member</nova:user>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:project uuid="04734aae68d34fac8fb592fc015632fd">tempest-AttachInterfacesV270Test-1981016367</nova:project>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <nova:port uuid="0fef626d-412c-4101-95eb-eadb3354247f">
Feb 25 12:22:04 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="serial">5a7dc142-2b11-4214-87b7-636f27ccacbf</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="uuid">5a7dc142-2b11-4214-87b7-636f27ccacbf</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5a7dc142-2b11-4214-87b7-636f27ccacbf_disk">
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config">
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:67:08:21"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <target dev="tap0fef626d-41"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/console.log" append="off"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:22:04 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:22:04 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:04 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:04 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:04 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
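Annotation: _get_guest_xml ends here; the next step of the spawn path hands this document to libvirt to define and boot the domain. A minimal sketch of that hand-off (connection URI assumed; nova goes through its Host/Guest wrappers and additional flags rather than calling libvirt this directly):

    # Sketch: define and start a domain from XML like the block above.
    import libvirt

    with open('domain.xml') as f:  # stand-in for the generated XML string
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # boot the guest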
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Preparing to wait for external event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG nova.virt.libvirt.vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:57Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG os_vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
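(Annotation.) The three DEBUG entries above show the hand-off from Nova's legacy VIF dict to the os-vif library: nova_to_osvif_vif() converts the dict into a VIFOpenVSwitch object and os_vif.plug() dispatches it to the 'ovs' plugin. A minimal sketch of that public entry point, with field values copied from the log; everything else (the network object, port profile, error handling) is elided and should be treated as an assumption, not what Nova passes verbatim:

    # Sketch only: exercises the same os_vif.plug() path logged above.
    # Field values are taken from the log; omitted fields are assumptions.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads VIF plugins ('ovs', ...) via stevedore

    inst = instance_info.InstanceInfo(
        uuid='5a7dc142-2b11-4214-87b7-636f27ccacbf',
        name='tempest-AttachInterfacesV270Test-server-389676934')

    v = vif.VIFOpenVSwitch(
        id='0fef626d-412c-4101-95eb-eadb3354247f',
        address='fa:16:3e:67:08:21',
        bridge_name='br-int',
        vif_name='tap0fef626d-41',
        plugin='ovs',
        has_traffic_filtering=True)

    os_vif.plug(v, inst)  # -> the ovsdbapp transactions in the entries below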
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.974 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.974 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.976 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fef626d-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.977 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0fef626d-41, col_values=(('external_ids', {'iface-id': '0fef626d-412c-4101-95eb-eadb3354247f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:08:21', 'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
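(Annotation.) The AddPortCommand/DbSetCommand transaction above is the os-vif ovs plugin attaching the tap device to br-int and stamping the Neutron port identity onto the OVSDB Interface record; the iface-id is what ovn-controller later matches when it claims the lport. nova_compute speaks OVSDB directly through ovsdbapp, but the same wiring can be reproduced from a shell; a hedged equivalent with values copied from the log:

    # Rough ovs-vsctl equivalent of the two IDL commands above (a sketch
    # for inspection/repro only -- nova_compute does not shell out for this).
    import subprocess

    subprocess.run([
        'ovs-vsctl',
        '--may-exist', 'add-port', 'br-int', 'tap0fef626d-41',
        '--', 'set', 'Interface', 'tap0fef626d-41',
        'external_ids:iface-id=0fef626d-412c-4101-95eb-eadb3354247f',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:67:08:21',
        'external_ids:vm-uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf',
    ], check=True)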
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:04 compute-0 NetworkManager[49836]: <info>  [1772022124.9789] manager: (tap0fef626d-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:04 compute-0 nova_compute[244014]: 2026-02-25 12:22:04.984 244018 INFO os_vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41')
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:67:08:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Using config drive
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.056 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.731 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating config drive at /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.737 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5givxko3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.764 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:05 compute-0 ceph-mon[76335]: osdmap e151: 3 total, 3 up, 3 in
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.799 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.799 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.800 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.801 244018 DEBUG nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.803 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.805 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.806 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.806 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.870 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5givxko3" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.904 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.908 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.933 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.935 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.966 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.967 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.967 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:22:05 compute-0 nova_compute[244014]: 2026-02-25 12:22:05.968 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
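(Annotation.) update_available_resource sizes RBD-backed storage by shelling out to `ceph df --format=json`; the result comes back at 12:22:06.616 below (0.648s later). A minimal sketch of pulling the cluster totals out of that output; the key names follow the standard `ceph df` JSON schema and should be treated as an assumption if your Ceph release differs:

    # Sketch: read cluster capacity the way the resource tracker does for
    # RBD-backed disk. 'stats' keys are the usual `ceph df` JSON fields.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    avail_gib = stats['total_avail_bytes'] / 1024**3
    total_gib = stats['total_bytes'] / 1024**3
    print(f'{avail_gib:.0f} GiB / {total_gib:.0f} GiB avail')  # cf. pgmap "59 GiB / 60 GiB avail"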
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.066 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.068 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deleting local config drive /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config because it was imported into RBD.
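(Annotation.) The entries from 12:22:05.737 through 12:22:06.068 show the whole config-drive path for an RBD-backed instance: render the metadata into an ISO9660 image with mkisofs, `rbd import` it into the vms pool as <uuid>_disk.config, then drop the local copy. A condensed sketch of that pipeline using the exact commands from the log (the /tmp staging directory is whatever Nova's config-drive builder produced):

    # Sketch of the config-drive flow logged above:
    #   mkisofs -> rbd import -> remove local file.
    import os
    import subprocess

    iso = '/var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config'
    subprocess.run([
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp5givxko3'], check=True)          # staging dir from the log
    subprocess.run([
        'rbd', 'import', '--pool', 'vms', iso,
        '5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"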
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.089 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updated VIF entry in instance network info cache for port 0fef626d-412c-4101-95eb-eadb3354247f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.090 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:06 compute-0 kernel: tap0fef626d-41: entered promiscuous mode
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.1148] manager: (tap0fef626d-41): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Feb 25 12:22:06 compute-0 ovn_controller[147040]: 2026-02-25T12:22:06Z|00210|binding|INFO|Claiming lport 0fef626d-412c-4101-95eb-eadb3354247f for this chassis.
Feb 25 12:22:06 compute-0 ovn_controller[147040]: 2026-02-25T12:22:06Z|00211|binding|INFO|0fef626d-412c-4101-95eb-eadb3354247f: Claiming fa:16:3e:67:08:21 10.100.0.12
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.122 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.129 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:08:21 10.100.0.12'], port_security=['fa:16:3e:67:08:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0fef626d-412c-4101-95eb-eadb3354247f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.131 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0fef626d-412c-4101-95eb-eadb3354247f in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 bound to our chassis
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.134 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 12:22:06 compute-0 ovn_controller[147040]: 2026-02-25T12:22:06Z|00212|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f ovn-installed in OVS
Feb 25 12:22:06 compute-0 ovn_controller[147040]: 2026-02-25T12:22:06Z|00213|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f up in Southbound
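(Annotation.) ovn-controller matching the iface-id set earlier, claiming the lport for this chassis, and flipping it up in the Southbound DB is the step that lets Neutron fire network-vif-plugged back to Nova (received at 12:22:06.823 below). A hedged way to inspect the same binding from the chassis, assuming ovn-sbctl can reach the Southbound DB with its default connection options:

    # Sketch: show the Port_Binding row ovn-controller just claimed.
    import subprocess

    print(subprocess.check_output([
        'ovn-sbctl', 'find', 'Port_Binding',
        'logical_port=0fef626d-412c-4101-95eb-eadb3354247f']).decode())
    # Expect chassis set to this host and up=[true] once
    # "Setting lport ... up in Southbound" has been committed.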
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.147 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a076dde0-dd77-42b0-a0d4-aff23efcedf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.149 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6599ab7f-81 in ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.150 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6599ab7f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b669cf1-3951-44f4-a952-d3fcbd0c11d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[049bae00-f53d-4743-9499-8803d9d23118]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 systemd-machined[210048]: New machine qemu-33-instance-0000001d.
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.163 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc1c34d-58d3-4ae9-875c-b25312de28d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Feb 25 12:22:06 compute-0 systemd-udevd[272010]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.187 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[596ee765-26e9-4359-9638-9f564acea1ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.2010] device (tap0fef626d-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.2026] device (tap0fef626d-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.226 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2267cc2-45af-4ba4-be4b-b0893a4a285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.233 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b213083e-8ab0-40a9-b9a4-bca05bf80c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.2348] manager: (tap6599ab7f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.261 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3fe8c4-0cb1-4d5c-9912-7a627a22e222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.268 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ed0f6e-431e-4d79-a6db-48621dad16a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.2960] device (tap6599ab7f-80): carrier: link connected
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.301 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac5f02f-d6f5-4fad-88c0-35706038eace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d62bb6-c787-4400-8be2-dbe3918907cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272040, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1133d88a-f2f4-4c75-8c17-2db581f9d622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:4a2d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409570, 'tstamp': 409570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272041, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.336 244018 DEBUG nova.network.neutron [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.357 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e795fdda-03a4-4d69-a0a2-e91d4ce4170b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272042, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.365 244018 INFO nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 3.00 seconds to deallocate network for instance.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.373 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.374 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.393 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0653097-7c44-419c-81fb-296b814be0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.420 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.421 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[057af4d7-e58d-46de-8038-85b783dbee66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.467 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.468 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.469 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:06 compute-0 kernel: tap6599ab7f-80: entered promiscuous mode
Feb 25 12:22:06 compute-0 NetworkManager[49836]: <info>  [1772022126.4728] manager: (tap6599ab7f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.480 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:06 compute-0 ovn_controller[147040]: 2026-02-25T12:22:06Z|00214|binding|INFO|Releasing lport d8168125-f59d-4dc2-b239-e9ce5117e64b from this chassis (sb_readonly=0)
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.486 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e35cff5-3e29-4da5-9eca-7d0bf65bdda8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.490 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:22:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.493 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'env', 'PROCESS_TAG=haproxy-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6599ab7f-86c0-4af9-b532-eeb7a134fca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
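(Annotation.) The rendered haproxy.cfg above binds 169.254.169.254:80 inside the ovnmeta-<network-uuid> namespace, forwards requests to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header so the metadata service can tell which network a 169.254 request came from. A hedged smoke test from the hypervisor (run as root; the response is only meaningful for requests whose source IP maps to a known Neutron port, so a 404 from the host side is normal):

    # Sketch: poke the just-provisioned metadata proxy from inside its
    # namespace. Namespace name copied from the log; expect an HTTP answer
    # from haproxy even if instance lookup by source IP fails from here.
    import subprocess

    ns = 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8'
    subprocess.run([
        'ip', 'netns', 'exec', ns,
        'curl', '-s', '-i', 'http://169.254.169.254/openstack/latest/meta_data.json'],
        check=False)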
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.575 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.5745094, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Started (Lifecycle Event)
Feb 25 12:22:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066128238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.615 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.616 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.622 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.5746312, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.622 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Paused (Lifecycle Event)
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.654 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.698 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] During sync_power_state the instance has a pending task (spawning). Skip.
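(Annotation.) The sync message above compares raw power-state codes: DB power_state 0 against VM power_state 3. Decoded with Nova's power-state constants (values per nova.compute.power_state; treat them as an assumption for other releases), the libvirt domain is PAUSED while the DB still says NOSTATE, which is expected mid-spawn, so the handler skips the sync while task_state is 'spawning':

    # Decoder for the codes in the sync message above
    # (constants from nova.compute.power_state).
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 3  # from the log line above
    print(POWER_STATE[db_state], '->', POWER_STATE[vm_state])  # NOSTATE -> PAUSED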
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.754 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.755 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:22:06 compute-0 ceph-mon[76335]: pgmap v1119: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Feb 25 12:22:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1066128238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.820 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.821 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.821 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 WARNING nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state deleted and task_state None.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Processing event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 WARNING nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with vm_state building and task_state spawning.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.827 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.832 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.833 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.8330507, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.834 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Resumed (Lifecycle Event)
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.841 244018 INFO nova.virt.libvirt.driver [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance spawned successfully.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.842 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.868 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.879 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
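[The power-state comparison above uses nova's numeric power states: the database still records the pre-spawn value while libvirt already reports the guest running. The two constants involved, per nova's power_state module:]

    # nova.compute.power_state values seen in the line above.
    NOSTATE = 0  # "current DB power_state: 0" -- not yet recorded
    RUNNING = 1  # "VM power_state: 1" -- the hypervisor says the guest is up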
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.884 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 podman[272119]: 2026-02-25 12:22:06.886035432 +0000 UTC m=+0.077679471 container create 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.885 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.886 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.887 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.888 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.888 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.918 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:22:06 compute-0 nova_compute[244014]: 2026-02-25 12:22:06.928 244018 DEBUG oslo_concurrency.processutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
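[The "Running cmd (subprocess)" line is oslo.concurrency's processutils wrapper shelling out to the ceph CLI for pool usage. A hedged reproduction of that call, assuming the documented processutils.execute interface; the command is copied verbatim from the log.]

    # Reproduces the logged subprocess call; execute() returns (stdout, stderr).
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # cluster/pool usage, the data ceph-mon dispatches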
Feb 25 12:22:06 compute-0 podman[272119]: 2026-02-25 12:22:06.849662077 +0000 UTC m=+0.041306186 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:22:06 compute-0 systemd[1]: Started libpod-conmon-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope.
Feb 25 12:22:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7a359f29f9a30a87daa097bb03214ce4fe36c45b982aeae264aafa91f3402f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:07 compute-0 podman[272119]: 2026-02-25 12:22:07.002132815 +0000 UTC m=+0.193776884 container init 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:07 compute-0 podman[272119]: 2026-02-25 12:22:07.010493853 +0000 UTC m=+0.202137922 container start 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.029 244018 INFO nova.virt.libvirt.driver [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Snapshot image upload complete
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.030 244018 INFO nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 6.61 seconds to snapshot the instance on the hypervisor.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.043 244018 INFO nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 9.74 seconds to spawn the instance on the hypervisor.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.043 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:07 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : New worker (272141) forked
Feb 25 12:22:07 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : Loading success.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.160 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.161 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4159MB free_disk=59.900690112262964GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.162 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.174 244018 INFO nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 11.21 seconds to build instance.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.210 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port be69a588-f795-413e-b981-a20088eea5ed from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.211 244018 DEBUG nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.262 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.329 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.459 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122485027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.502 244018 DEBUG oslo_concurrency.processutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.509 244018 DEBUG nova.compute.provider_tree [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.544 244018 DEBUG nova.scheduler.client.report [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
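[The inventory dict above is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. Checking the logged numbers with a small illustrative computation:]

    # Effective schedulable capacity implied by the logged inventory.
    vcpus   = (8    - 0)   * 4.0  # 32.0 vCPUs with 4x CPU overcommit
    ram_mb  = (7679 - 512) * 1.0  # 7167 MB, no RAM overcommit
    disk_gb = (59   - 1)   * 0.9  # 52.2 GB, 10% disk headroom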
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.571 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.575 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.605 244018 INFO nova.scheduler.client.report [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 51d1d661-89db-4958-a2f4-c299ee997cde
Feb 25 12:22:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.2 MiB/s wr, 242 op/s
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e9eb76fe-9616-40a4-aa53-0054cc5c3a57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5a7dc142-2b11-4214-87b7-636f27ccacbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.702 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:07 compute-0 nova_compute[244014]: 2026-02-25 12:22:07.751 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:07.800 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4122485027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.038 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.039 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.039 244018 DEBUG nova.objects.instance [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'flavor' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.068 244018 DEBUG nova.objects.instance [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.083 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:22:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767490391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.332 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.354 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.406 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.413 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.423 244018 DEBUG nova.policy [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d27f8a357c2443a9140598fd9ec73e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04734aae68d34fac8fb592fc015632fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
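[The "Policy check ... failed" line is informational here: oslo.policy evaluated network:attach_external_network against the request credentials and returned False, so the attach simply proceeds without external-network privileges. A minimal sketch of such a check, assuming the standard Enforcer API; the rule name, roles, and project_id come from the log, everything else is illustrative.]

    # Hypothetical oslo.policy check mirroring the logged result.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    allowed = enforcer.enforce(
        'network:attach_external_network',
        target={},
        creds={'roles': ['reader', 'member'],
               'project_id': '04734aae68d34fac8fb592fc015632fd'},
        do_raise=False)  # False for non-admin credentials like these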
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.436 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.437 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:22:08 compute-0 nova_compute[244014]: 2026-02-25 12:22:08.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:08 compute-0 ceph-mon[76335]: pgmap v1120: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.2 MiB/s wr, 242 op/s
Feb 25 12:22:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3767490391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.133 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully created port: 448b460d-ecda-4125-9d74-8560b946b896 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
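[Port 448b460d-ecda-4125-9d74-8560b946b896 was created through neutron's REST API on behalf of the attach_interface call. The same create expressed with openstacksdk rather than nova's internal client; a hypothetical equivalent, where the cloud name is an assumption and only the network UUID is taken from the surrounding log.]

    # Illustrative openstacksdk equivalent of the logged port create.
    import openstack

    conn = openstack.connect(cloud='example')  # hypothetical clouds.yaml entry
    port = conn.network.create_port(
        network_id='6599ab7f-86c0-4af9-b532-eeb7a134fca8')
    print(port.id)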
Feb 25 12:22:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 25 12:22:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 25 12:22:09 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 25 12:22:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.904 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.906 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.931 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.934 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully updated port: 448b460d-ecda-4125-9d74-8560b946b896 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:09 compute-0 nova_compute[244014]: 2026-02-25 12:22:09.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.165 244018 WARNING nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] 6599ab7f-86c0-4af9-b532-eeb7a134fca8 already exists in list: networks containing: ['6599ab7f-86c0-4af9-b532-eeb7a134fca8']. ignoring it
Feb 25 12:22:10 compute-0 ceph-mon[76335]: osdmap e152: 3 total, 3 up, 3 in
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.827 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.827 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.830 244018 INFO nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Terminating instance
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.831 244018 DEBUG nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:22:10 compute-0 kernel: tap6e87f383-db (unregistering): left promiscuous mode
Feb 25 12:22:10 compute-0 NetworkManager[49836]: <info>  [1772022130.8692] device (tap6e87f383-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:10 compute-0 ovn_controller[147040]: 2026-02-25T12:22:10Z|00215|binding|INFO|Releasing lport 6e87f383-dbcb-4dad-b195-bc813617ad12 from this chassis (sb_readonly=0)
Feb 25 12:22:10 compute-0 ovn_controller[147040]: 2026-02-25T12:22:10Z|00216|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 down in Southbound
Feb 25 12:22:10 compute-0 ovn_controller[147040]: 2026-02-25T12:22:10Z|00217|binding|INFO|Removing iface tap6e87f383-db ovn-installed in OVS
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.880 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.910 244018 DEBUG nova.compute.manager [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-changed-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.911 244018 DEBUG nova.compute.manager [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing instance network info cache due to event network-changed-448b460d-ecda-4125-9d74-8560b946b896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:10 compute-0 nova_compute[244014]: 2026-02-25 12:22:10.911 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:10 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb 25 12:22:10 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 3.789s CPU time.
Feb 25 12:22:10 compute-0 systemd-machined[210048]: Machine qemu-32-instance-0000001c terminated.
Feb 25 12:22:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.935 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:80:09 10.100.0.13'], port_security=['fa:16:3e:9e:80:09 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6e87f383-dbcb-4dad-b195-bc813617ad12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
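[The matched event above is an ovsdbapp row event: events=('update',), table='Port_Binding', no extra conditions. A sketch of how such an event class is typically declared against ovsdbapp's public event API; the class name mirrors the log, the body is illustrative.]

    # Illustrative ovsdbapp event matching Port_Binding row updates.
    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_type, row, old):
            # invoked when a Port_Binding row changes, e.g. up/chassis flips
            pass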
Feb 25 12:22:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.937 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6e87f383-dbcb-4dad-b195-bc813617ad12 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:22:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.939 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:22:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.940 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef65d113-ec4b-4397-8f6f-881279998113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.079 244018 INFO nova.virt.libvirt.driver [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance destroyed successfully.
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.080 244018 DEBUG nova.objects.instance [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.097 244018 DEBUG nova.virt.libvirt.vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:21:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.098 244018 DEBUG nova.network.os_vif_util [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.100 244018 DEBUG nova.network.os_vif_util [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.101 244018 DEBUG os_vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.104 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e87f383-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.106 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.113 244018 INFO os_vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db')
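The ten lines above are nova's standard os-vif unplug path for an OVS port: the Neutron VIF dict is converted into an os_vif VIFOpenVSwitch object, and the os-vif ovs plugin removes the tap device from br-int with an ovsdbapp DelPortCommand. A minimal standalone sketch of that last step, assuming the default local ovsdb socket (the connection details are illustrative, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the ovsdb IDL to the local Open_vSwitch database.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DelPortCommand: drop the port if it exists.
    api.del_port('tap6e87f383-db', bridge='br-int', if_exists=True).execute(check_error=True)

if_exists=True makes the delete idempotent, which is why a repeated unplug of an already-removed port is harmless.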
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : haproxy version is 2.8.14-c23fe91
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : path to executable is /usr/sbin/haproxy
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [WARNING]  (271291) : Exiting Master process...
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [WARNING]  (271291) : Exiting Master process...
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [ALERT]    (271291) : Current worker (271293) exited with code 143 (Terminated)
Feb 25 12:22:11 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [WARNING]  (271291) : All workers exited. Exiting... (0)
Feb 25 12:22:11 compute-0 systemd[1]: libpod-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope: Deactivated successfully.
Feb 25 12:22:11 compute-0 podman[272217]: 2026-02-25 12:22:11.128055125 +0000 UTC m=+0.068885031 container died 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1-userdata-shm.mount: Deactivated successfully.
Feb 25 12:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dda58046a993fafc00305b047e988f5ee511b02fbba1a417a37a87ca1579e71e-merged.mount: Deactivated successfully.
Feb 25 12:22:11 compute-0 podman[272217]: 2026-02-25 12:22:11.20167106 +0000 UTC m=+0.142500926 container cleanup 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:22:11 compute-0 systemd[1]: libpod-conmon-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope: Deactivated successfully.
Feb 25 12:22:11 compute-0 ceph-mon[76335]: pgmap v1122: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Feb 25 12:22:11 compute-0 podman[272273]: 2026-02-25 12:22:11.27867217 +0000 UTC m=+0.050231460 container remove 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a26800a9-df0b-4b1b-be30-f6525fcdde0d]: (4, ('Wed Feb 25 12:22:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1)\n32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1\nWed Feb 25 12:22:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1)\n32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
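The privsep reply above captures the agent's container wrapper stopping and then deleting the per-network haproxy metadata proxy; functionally it amounts to a podman stop followed by a podman rm, which matches the worker's exit code 143 (SIGTERM) a few lines earlier. A rough Python equivalent (a hypothetical wrapper, not the agent's actual code):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21'
    # podman stop sends SIGTERM; haproxy workers exit with 128 + 15 = 143.
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)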
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.292 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3ebbf-95ca-4c9e-9b62-4402e4a187dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.293 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:11 compute-0 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.307 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05940354-3f3f-4456-bb82-048be2684fa3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.323 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0620579-bc59-4f1a-a736-1d424fa14c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.324 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[317e12bf-56ab-4cb8-a383-292592259d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.354 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08e84599-54fe-47ec-ba14-1644916ddd0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407961, 'reachable_time': 15149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272289, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.365 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:22:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.365 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a5273ac6-4a8b-4c28-9051-03d0f4f35d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
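remove_netns above is neutron's privileged ip_lib helper dropping the now-empty ovnmeta- namespace once its tap port and haproxy container are gone. A sketch of the same operation using pyroute2, the library that helper builds on; treating an already-absent namespace as success keeps the teardown idempotent:

    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21')
    except OSError:
        pass  # namespace already gone; nothing left to clean up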
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.481 244018 INFO nova.virt.libvirt.driver [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deleting instance files /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_del
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.482 244018 INFO nova.virt.libvirt.driver [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deletion of /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_del complete
Feb 25 12:22:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 277 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.685 244018 INFO nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 0.85 seconds to destroy the instance on the hypervisor.
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.686 244018 DEBUG oslo.service.loopingcall [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.687 244018 DEBUG nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:22:11 compute-0 nova_compute[244014]: 2026-02-25 12:22:11.687 244018 DEBUG nova.network.neutron [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.519 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.587 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.588 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.589 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing network info cache for port 448b460d-ecda-4125-9d74-8560b946b896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 ceph-mon[76335]: pgmap v1123: 305 pgs: 305 active+clean; 277 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.714 244018 DEBUG nova.virt.libvirt.vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.714 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.715 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.716 244018 DEBUG os_vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.721 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap448b460d-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.722 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap448b460d-ec, col_values=(('external_ids', {'iface-id': '448b460d-ecda-4125-9d74-8560b946b896', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:fa:d0', 'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 NetworkManager[49836]: <info>  [1772022133.7254] manager: (tap448b460d-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.730 244018 INFO os_vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec')
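The plug sequence mirrors the unplug: AddBridgeCommand is a no-op because br-int already exists, then one transaction adds the tap port and sets the Interface external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches against the logical port, which is what triggers the "Claiming lport" messages below. A sketch of that two-command transaction with ovsdbapp, using the values from the log (connection details illustrative):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One atomic ovsdb transaction: create the port, then tag it for OVN.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap448b460d-ec', may_exist=True))
        txn.add(api.db_set('Interface', 'tap448b460d-ec',
                           ('external_ids', {'iface-id': '448b460d-ecda-4125-9d74-8560b946b896',
                                             'iface-status': 'active',
                                             'attached-mac': 'fa:16:3e:17:fa:d0',
                                             'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'})))

Batching both commands in one transaction ensures ovn-controller never observes the port without its iface-id.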
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.731 244018 DEBUG nova.virt.libvirt.vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.732 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.733 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.735 244018 DEBUG nova.virt.libvirt.guest [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:17:fa:d0"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <target dev="tap448b460d-ec"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]: </interface>
Feb 25 12:22:13 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
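attach_device hands the <interface> XML above to libvirt to hot-plug the NIC into the running domain. A minimal libvirt-python sketch of the call; the flag combination (live plus persistent config) is an assumption, since the log does not show which flags nova passed:

    import libvirt

    iface_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:17:fa:d0"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap448b460d-ec"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('5a7dc142-2b11-4214-87b7-636f27ccacbf')
    # Attach to the running guest and to its persistent definition.
    dom.attachDeviceFlags(iface_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)

The kernel's "entered promiscuous mode" line that follows is the tap device being wired into the guest by this call.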
Feb 25 12:22:13 compute-0 kernel: tap448b460d-ec: entered promiscuous mode
Feb 25 12:22:13 compute-0 ovn_controller[147040]: 2026-02-25T12:22:13Z|00218|binding|INFO|Claiming lport 448b460d-ecda-4125-9d74-8560b946b896 for this chassis.
Feb 25 12:22:13 compute-0 ovn_controller[147040]: 2026-02-25T12:22:13Z|00219|binding|INFO|448b460d-ecda-4125-9d74-8560b946b896: Claiming fa:16:3e:17:fa:d0 10.100.0.8
Feb 25 12:22:13 compute-0 NetworkManager[49836]: <info>  [1772022133.7548] manager: (tap448b460d-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Feb 25 12:22:13 compute-0 systemd-udevd[272195]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 ovn_controller[147040]: 2026-02-25T12:22:13Z|00220|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 ovn-installed in OVS
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 NetworkManager[49836]: <info>  [1772022133.7700] device (tap448b460d-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:22:13 compute-0 NetworkManager[49836]: <info>  [1772022133.7712] device (tap448b460d-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.782 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:fa:d0 10.100.0.8'], port_security=['fa:16:3e:17:fa:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=448b460d-ecda-4125-9d74-8560b946b896) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:13 compute-0 ovn_controller[147040]: 2026-02-25T12:22:13Z|00221|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 up in Southbound
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.783 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 448b460d-ecda-4125-9d74-8560b946b896 in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 bound to our chassis
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8
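"Provisioning metadata" means the agent ensures a per-network namespace with a veth leg (tap6599ab7f-81 in the replies below) carrying an address in the tenant subnet plus 169.254.169.254/32, and a haproxy container proxying metadata requests. A pyroute2 fragment for the addressing step only, using the namespace, interface, and addresses visible in the RTM_NEWADDR reply below; veth creation and the container start are omitted:

    from pyroute2 import NetNS

    ns = 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8'
    with NetNS(ns) as ip:
        idx = ip.link_lookup(ifname='tap6599ab7f-81')[0]
        ip.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
        ip.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
        ip.link('set', index=idx, state='up')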
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.796 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45905596-a4aa-411a-8fd8-ea1fe531b074]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.823 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aa3c95-957e-4bf6-96f1-972d1f065244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.825 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[654e9ace-deb1-4fdc-ae12-f4df49d412f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.852 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a28312-e551-44e2-8aed-9320c1ae6b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c8b935-5302-4c59-8ecb-2b09a62e852b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272302, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:67:08:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:17:fa:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.883 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[210d8713-c223-4617-ac09-3d485a4b2912]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409583, 'tstamp': 409583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272303, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409586, 'tstamp': 409586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272303, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.885 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.890 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.892 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.894 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.983 244018 DEBUG nova.virt.libvirt.guest [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:name>tempest-AttachInterfacesV270Test-server-389676934</nova:name>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:22:13</nova:creationTime>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:user uuid="7d27f8a357c2443a9140598fd9ec73e1">tempest-AttachInterfacesV270Test-1981016367-project-member</nova:user>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:project uuid="04734aae68d34fac8fb592fc015632fd">tempest-AttachInterfacesV270Test-1981016367</nova:project>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:port uuid="0fef626d-412c-4101-95eb-eadb3354247f">
Feb 25 12:22:13 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     <nova:port uuid="448b460d-ecda-4125-9d74-8560b946b896">
Feb 25 12:22:13 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:22:13 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:22:13 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:22:13 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:22:13 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
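The <nova:instance> document above is never seen by the guest; nova stores it in the domain definition through libvirt's metadata API so a domain can be traced back to its instance, flavor, owner, and ports. A sketch of the underlying call (flags again assumed, as the log does not show them):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('5a7dc142-2b11-4214-87b7-636f27ccacbf')
    metadata_xml = '...'  # the <nova:instance> document logged above
    # Store the nova-namespaced element in the domain XML, live and persistent.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml, 'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)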
Feb 25 12:22:13 compute-0 nova_compute[244014]: 2026-02-25 12:22:13.985 244018 DEBUG nova.network.neutron [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.032 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.073 244018 INFO nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 2.39 seconds to deallocate network for instance.
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.214 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.214 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 25 12:22:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 25 12:22:14 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.291 244018 DEBUG oslo_concurrency.processutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230240323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.836 244018 DEBUG oslo_concurrency.processutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.842 244018 DEBUG nova.compute.provider_tree [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.862 244018 DEBUG nova.scheduler.client.report [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.892 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:14 compute-0 nova_compute[244014]: 2026-02-25 12:22:14.928 244018 INFO nova.scheduler.client.report [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance e9eb76fe-9616-40a4-aa53-0054cc5c3a57
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.006 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.194 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.194 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.195 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.195 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 WARNING nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state deleted and task_state None.
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.198 244018 WARNING nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state deleted and task_state None.
Feb 25 12:22:15 compute-0 ceph-mon[76335]: pgmap v1124: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Feb 25 12:22:15 compute-0 ceph-mon[76335]: osdmap e153: 3 total, 3 up, 3 in
Feb 25 12:22:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2230240323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.290 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updated VIF entry in instance network info cache for port 448b460d-ecda-4125-9d74-8560b946b896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.293 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.332 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 172 op/s
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.882 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.883 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:15 compute-0 nova_compute[244014]: 2026-02-25 12:22:15.926 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.097 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.097 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.105 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.106 244018 INFO nova.compute.claims [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.268 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112032542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.795 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.804 244018 DEBUG nova.compute.provider_tree [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.825 244018 DEBUG nova.scheduler.client.report [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.856 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.857 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.914 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.915 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.965 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:22:16 compute-0 nova_compute[244014]: 2026-02-25 12:22:16.991 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.142 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.144 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating image(s)
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.176 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.213 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.244 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.249 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:17 compute-0 ceph-mon[76335]: pgmap v1126: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 172 op/s
Feb 25 12:22:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2112032542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.336 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.354 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.357 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ffd5cedf-474c-4977-807e-22a276ceb002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:17 compute-0 ovn_controller[147040]: 2026-02-25T12:22:17Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:08:21 10.100.0.12
Feb 25 12:22:17 compute-0 ovn_controller[147040]: 2026-02-25T12:22:17Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:08:21 10.100.0.12
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.521 244018 DEBUG nova.policy [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.543 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ffd5cedf-474c-4977-807e-22a276ceb002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.617 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022122.6103446, 51d1d661-89db-4958-a2f4-c299ee997cde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.618 244018 INFO nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Stopped (Lifecycle Event)
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.626 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.668 244018 DEBUG nova.compute.manager [None req-57fda520-de7c-4c73-b2f7-94b535be15f1 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.734 244018 DEBUG nova.objects.instance [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.768 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.768 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Ensure instance console log exists: /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.824 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.825 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:17 compute-0 nova_compute[244014]: 2026-02-25 12:22:17.966 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.109 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.110 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.119 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.119 244018 INFO nova.compute.claims [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.198 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-deleted-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.199 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.200 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.200 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.201 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.202 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.202 244018 WARNING nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state active and task_state None.
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.203 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.204 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.205 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.205 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.206 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.206 244018 WARNING nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state active and task_state None.
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.258 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Successfully created port: fd320060-eaf8-4fd7-9325-e3793617bc7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.304 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.305 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.306 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.306 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.307 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.309 244018 INFO nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Terminating instance
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.311 244018 DEBUG nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:22:18 compute-0 kernel: tap0fef626d-41 (unregistering): left promiscuous mode
Feb 25 12:22:18 compute-0 NetworkManager[49836]: <info>  [1772022138.3619] device (tap0fef626d-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00222|binding|INFO|Releasing lport 0fef626d-412c-4101-95eb-eadb3354247f from this chassis (sb_readonly=0)
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00223|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f down in Southbound
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00224|binding|INFO|Removing iface tap0fef626d-41 ovn-installed in OVS
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.376 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:08:21 10.100.0.12'], port_security=['fa:16:3e:67:08:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0fef626d-412c-4101-95eb-eadb3354247f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.375 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.377 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0fef626d-412c-4101-95eb-eadb3354247f in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 unbound from our chassis
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.379 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 12:22:18 compute-0 kernel: tap448b460d-ec (unregistering): left promiscuous mode
Feb 25 12:22:18 compute-0 NetworkManager[49836]: <info>  [1772022138.3831] device (tap448b460d-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00225|binding|INFO|Releasing lport 448b460d-ecda-4125-9d74-8560b946b896 from this chassis (sb_readonly=0)
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00226|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 down in Southbound
Feb 25 12:22:18 compute-0 ovn_controller[147040]: 2026-02-25T12:22:18Z|00227|binding|INFO|Removing iface tap448b460d-ec ovn-installed in OVS
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acc1a7de-49ff-4ec4-b5cf-98dfcf59bf2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.413 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:fa:d0 10.100.0.8'], port_security=['fa:16:3e:17:fa:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=448b460d-ecda-4125-9d74-8560b946b896) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:18 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb 25 12:22:18 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 11.303s CPU time.
Feb 25 12:22:18 compute-0 systemd-machined[210048]: Machine qemu-33-instance-0000001d terminated.
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.444 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4662b7-7c28-49d1-8a53-745f65ead3d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.448 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2fd6e6-d127-4ece-b8b3-423471eb310b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.484 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1965b643-d195-41ab-856d-dfa506fb0021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[923f37d4-a075-485e-9dd5-2795f914b452]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272535, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4a216c-12f5-4067-bd79-f06eef8f0566]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409583, 'tstamp': 409583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272554, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409586, 'tstamp': 409586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272554, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 NetworkManager[49836]: <info>  [1772022138.5443] manager: (tap448b460d-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.556 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.557 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.558 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.560 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 448b460d-ecda-4125-9d74-8560b946b896 in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 unbound from our chassis
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.563 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a23c3ec2-f04b-4bea-9d74-3550d1ff0e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.565 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 namespace which is not needed anymore
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.571 244018 INFO nova.virt.libvirt.driver [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance destroyed successfully.
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.571 244018 DEBUG nova.objects.instance [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'resources' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.587 244018 DEBUG nova.virt.libvirt.vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.587 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.588 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.588 244018 DEBUG os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.590 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fef626d-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.600 244018 INFO os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41')
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.600 244018 DEBUG nova.virt.libvirt.vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.601 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.601 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.602 244018 DEBUG os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.603 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap448b460d-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.609 244018 INFO os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec')
Feb 25 12:22:18 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : haproxy version is 2.8.14-c23fe91
Feb 25 12:22:18 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : path to executable is /usr/sbin/haproxy
Feb 25 12:22:18 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [WARNING]  (272139) : Exiting Master process...
Feb 25 12:22:18 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [ALERT]    (272139) : Current worker (272141) exited with code 143 (Terminated)
Feb 25 12:22:18 compute-0 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [WARNING]  (272139) : All workers exited. Exiting... (0)
Feb 25 12:22:18 compute-0 systemd[1]: libpod-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope: Deactivated successfully.
Feb 25 12:22:18 compute-0 podman[272613]: 2026-02-25 12:22:18.710237622 +0000 UTC m=+0.051612589 container died 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b-userdata-shm.mount: Deactivated successfully.
Feb 25 12:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb7a359f29f9a30a87daa097bb03214ce4fe36c45b982aeae264aafa91f3402f-merged.mount: Deactivated successfully.
Feb 25 12:22:18 compute-0 podman[272613]: 2026-02-25 12:22:18.749459578 +0000 UTC m=+0.090834595 container cleanup 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:22:18 compute-0 systemd[1]: libpod-conmon-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope: Deactivated successfully.
Feb 25 12:22:18 compute-0 podman[272645]: 2026-02-25 12:22:18.828503847 +0000 UTC m=+0.049606612 container remove 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.835 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f54c809-d7bb-457c-bb46-2d4ec32ff0bd]: (4, ('Wed Feb 25 12:22:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 (677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b)\n677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b\nWed Feb 25 12:22:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 (677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b)\n677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.837 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77f6d966-2a0b-4dec-8c47-b4121178f369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.838 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 kernel: tap6599ab7f-80: left promiscuous mode
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1f66ba-537b-4056-80dc-c587e081e3e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16c969d2-a281-4a94-97d1-b8d54b41a539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6efc135-3a10-49e1-b9a4-5e3664e96543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.881 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f41c142-358e-4e14-809e-4dca6f8f6654]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409562, 'reachable_time': 16997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272662, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.883 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:22:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.883 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e98adacf-e425-41ad-8e7e-0bd11df9bb9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d6599ab7f\x2d86c0\x2d4af9\x2db532\x2deeb7a134fca8.mount: Deactivated successfully.
Feb 25 12:22:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580611927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.929 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.937 244018 INFO nova.virt.libvirt.driver [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deleting instance files /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf_del
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.938 244018 INFO nova.virt.libvirt.driver [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deletion of /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf_del complete
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.943 244018 DEBUG nova.compute.provider_tree [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:18 compute-0 nova_compute[244014]: 2026-02-25 12:22:18.972 244018 DEBUG nova.scheduler.client.report [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.076 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.078 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.090 244018 INFO nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.090 244018 DEBUG oslo.service.loopingcall [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.091 244018 DEBUG nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.091 244018 DEBUG nova.network.neutron [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:22:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:19 compute-0 ceph-mon[76335]: pgmap v1127: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Feb 25 12:22:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2580611927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.271 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.272 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.308 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.356 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.520 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.522 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.523 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating image(s)
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.556 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.592 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.623 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.627 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.660 244018 DEBUG nova.policy [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:22:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 933 KiB/s wr, 170 op/s
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.709 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.710 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.711 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.711 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.744 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:19 compute-0 nova_compute[244014]: 2026-02-25 12:22:19.747 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.017 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.096 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.193 244018 DEBUG nova.objects.instance [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.209 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.210 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Ensure instance console log exists: /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.211 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.212 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.212 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.419 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Successfully updated port: fd320060-eaf8-4fd7-9325-e3793617bc7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.448 244018 DEBUG nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-deleted-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.449 244018 INFO nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Neutron deleted interface 0fef626d-412c-4101-95eb-eadb3354247f; detaching it from the instance and deleting it from the info cache
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.449 244018 DEBUG nova.network.neutron [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.645 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.645 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.646 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.655 244018 DEBUG nova.network.neutron [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.728 244018 DEBUG nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Detach interface failed, port_id=0fef626d-412c-4101-95eb-eadb3354247f, reason: Instance 5a7dc142-2b11-4214-87b7-636f27ccacbf could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.867 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.867 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 WARNING nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with vm_state active and task_state deleting.
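[annotation] The pop_instance_event triplets above show both halves of the external-event handshake: dispatch takes the per-instance "<uuid>-events" lock, looks for a registered waiter, and, finding none ("No waiting events found"), flags the event as unexpected because the instance is mid-delete. A stdlib sketch of such a registry, a simplification rather than Nova's implementation:

```python
# Minimal instance-event registry: prepare() registers a waiter under a
# lock, pop() hands an arriving event to it or reports it unregistered.
# Mirrors the lock-then-pop pattern in the log, not Nova's real code.
import threading


class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()      # plays the "<uuid>-events" lock
        self._waiters = {}                 # (uuid, event_name) -> Event

    def prepare_for_instance_event(self, uuid, event_name):
        with self._lock:
            ev = threading.Event()
            self._waiters[(uuid, event_name)] = ev
            return ev

    def pop_instance_event(self, uuid, event_name):
        with self._lock:
            return self._waiters.pop((uuid, event_name), None)


registry = InstanceEvents()
waiter = registry.pop_instance_event(
    "5a7dc142-2b11-4214-87b7-636f27ccacbf",
    "network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f")
if waiter is None:
    # Matches "No waiting events found dispatching ..." followed by the
    # WARNING about an unexpected event while task_state is deleting.
    print("No waiting events found; event is unexpected")
else:
    waiter.set()                           # wake the registered waiter
```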
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.890 244018 INFO nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 1.80 seconds to deallocate network for instance.
Feb 25 12:22:20 compute-0 nova_compute[244014]: 2026-02-25 12:22:20.975 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.043 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.044 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.116 244018 DEBUG oslo_concurrency.processutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:21 compute-0 ceph-mon[76335]: pgmap v1128: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 933 KiB/s wr, 170 op/s
Feb 25 12:22:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264283392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.654 244018 DEBUG oslo_concurrency.processutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
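[annotation] The resource tracker sizes its RBD-backed disk inventory by shelling out to `ceph df --format=json` with the openstack cephx id (0.537s for this run, with the matching dispatch visible in the ceph-mon audit log). A hedged equivalent of that probe; the `stats.total_bytes`/`stats.total_avail_bytes` field names are assumed from the usual `ceph df` JSON layout and should be verified against your Ceph release:

```python
# Sketch: reproduce the capacity probe the log shows nova running via
# oslo.concurrency processutils. Field names under "stats" are assumed
# from the common `ceph df` JSON layout; verify on your Ceph version.
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, check=True, text=True)
stats = json.loads(out.stdout)["stats"]

gib = 1024 ** 3
print(f"total: {stats['total_bytes'] / gib:.0f} GiB, "
      f"avail: {stats['total_avail_bytes'] / gib:.0f} GiB")
```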
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.660 244018 DEBUG nova.compute.provider_tree [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.668 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Successfully created port: 67db480d-6a66-4c54-be9c-5375a0d664cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.683 244018 DEBUG nova.scheduler.client.report [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
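[annotation] The inventory payload makes the placement arithmetic easy to verify: usable capacity per resource class is (total − reserved) × allocation_ratio, so this host advertises (8 − 0) × 4.0 = 32 VCPU, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 = 52.2 GB of disk. Checking that against the logged dict:

```python
# Effective capacity per resource class as placement computes it:
# (total - reserved) * allocation_ratio, using the inventory from the log.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```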
Feb 25 12:22:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 235 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.732 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:21 compute-0 nova_compute[244014]: 2026-02-25 12:22:21.841 244018 INFO nova.scheduler.client.report [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Deleted allocations for instance 5a7dc142-2b11-4214-87b7-636f27ccacbf
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.000 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
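[annotation] Every lockutils critical section in this log is bracketed by "Acquiring", "acquired :: waited Ns", and "released :: held Ns"; the per-instance terminate lock above was held for 3.695s. A pure-stdlib context manager reproducing that accounting (a sketch, not oslo.concurrency itself):

```python
# Context manager reproducing lockutils-style wait/hold timing output.
import contextlib
import threading
import time


@contextlib.contextmanager
def timed_lock(lock, name):
    print(f'Acquiring lock "{name}"')
    t0 = time.monotonic()
    with lock:
        waited = time.monotonic() - t0
        print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            held = time.monotonic() - t1
            print(f'Lock "{name}" "released" :: held {held:.3f}s')


lock = threading.Lock()
with timed_lock(lock, "5a7dc142-2b11-4214-87b7-636f27ccacbf"):
    time.sleep(0.01)    # stand-in for do_terminate_instance work
```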
Feb 25 12:22:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/264283392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.316 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updating instance_info_cache with network_info: [{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.429 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.430 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance network_info: |[{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
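[annotation] The network_info blob cached for instance ffd5cedf is plain JSON, so the details that matter for the guest (MAC, fixed IP, MTU, OVS interface id) fall out mechanically. Over a trimmed copy of the logged structure:

```python
# Extract addressing details from a (trimmed) copy of the network_info
# JSON the log shows being written to instance_info_cache.
import json

network_info = json.loads("""[{
  "id": "fd320060-eaf8-4fd7-9325-e3793617bc7b",
  "address": "fa:16:3e:76:73:07",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.12", "type": "fixed"}]}],
      "meta": {"mtu": 1442}},
  "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b"
}]""")

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["address"], ips, vif["network"]["meta"]["mtu"])
# fa:16:3e:76:73:07 ['10.100.0.12'] 1442
```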
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.434 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start _get_guest_xml network_info=[{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.440 244018 WARNING nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.447 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.448 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.451 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.452 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.452 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.453 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.453 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.454 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.455 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.458 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
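[annotation] The nova.virt.hardware lines trace topology selection end to end: with flavor and image limits/preferences all 0:0:0 and caps of 65536, the builder enumerates the sockets × cores × threads factorisations of the vCPU count, which for 1 vCPU is just 1:1:1. A simplified enumeration in the same spirit (not nova.virt.hardware itself):

```python
# Simplified candidate-topology enumeration: every sockets*cores*threads
# factorisation of the vCPU count within the per-dimension caps.
def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    topologies = []
    for s in range(1, min(max_sockets, vcpus) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(max_cores, vcpus // s) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                topologies.append((s, c, t))
    return topologies


print(possible_topologies(1, 65536, 65536, 65536))   # [(1, 1, 1)]
print(possible_topologies(4, 65536, 65536, 65536))
# [(1, 1, 4), (1, 2, 2), (1, 4, 1), (2, 1, 2), (2, 2, 1), (4, 1, 1)]
```

For larger flavors the same routine produces the candidate list that a preference sort (as logged by _get_desirable_cpu_topologies) would then order.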
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.463 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.587 244018 DEBUG nova.compute.manager [req-1f8d1afc-9155-4b05-9553-c95b16442038 req-6c30d5d9-c0ab-4879-8e1e-6031511e5a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-deleted-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.682 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Successfully updated port: 67db480d-6a66-4c54-be9c-5375a0d664cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.716 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.716 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.717 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:22 compute-0 nova_compute[244014]: 2026-02-25 12:22:22.927 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:22:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364510408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.051 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.081 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
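[annotation] nova's rbd_utils reaches the "rbd image ... does not exist" branch through the librbd Python bindings; a rough CLI equivalent of the same existence probe is a non-zero exit from `rbd info`, with flags mirroring the ceph invocations elsewhere in this log. A hedged sketch:

```python
# Hedged sketch of an RBD existence probe via the rbd CLI; nova itself
# uses the librbd bindings, but a non-zero exit from `rbd info` is a
# reasonable stand-in for "rbd image ... does not exist".
import subprocess


def rbd_image_exists(pool, image):
    res = subprocess.run(
        ["rbd", "info", f"{pool}/{image}",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True)
    return res.returncode == 0


print(rbd_image_exists(
    "vms", "ffd5cedf-474c-4977-807e-22a276ceb002_disk.config"))
```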
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.085 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:23 compute-0 ceph-mon[76335]: pgmap v1129: 305 pgs: 305 active+clean; 235 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Feb 25 12:22:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/364510408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:23 compute-0 sudo[272913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.514 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.515 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 WARNING nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state deleted and task_state None.
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-changed-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Refreshing instance network info cache due to event network-changed-fd320060-eaf8-4fd7-9325-e3793617bc7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:23 compute-0 sudo[272913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.518 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.518 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Refreshing network info cache for port fd320060-eaf8-4fd7-9325-e3793617bc7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:23 compute-0 sudo[272913]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:23 compute-0 sudo[272951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:22:23 compute-0 sudo[272951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:23 compute-0 podman[272937]: 2026-02-25 12:22:23.614981452 +0000 UTC m=+0.095554810 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:22:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598363268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:23 compute-0 podman[272938]: 2026-02-25 12:22:23.638739078 +0000 UTC m=+0.119741918 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.652 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.654 244018 DEBUG nova.virt.libvirt.vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-337821408',display_name='tempest-ImagesTestJSON-server-337821408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-337821408',id=30,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-a00v4oxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:17Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=ffd5cedf-474c-4977-807e-22a276ceb002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.654 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.656 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
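[annotation] The two os_vif_util lines show the dict-shaped Neutron VIF being converted into a typed VIFOpenVSwitch carrying only the fields the plug code needs (address, bridge_name, vif_name, traffic filtering derived from "port_filter"). A dataclass sketch of that mapping; the class below stands in for os-vif's real object model:

```python
# Sketch of the dict -> typed-object conversion logged by os_vif_util;
# the dataclass stands in for os-vif's VIFOpenVSwitch.
from dataclasses import dataclass


@dataclass
class VIFOpenVSwitch:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool


def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    details = vif.get("details", {})
    return VIFOpenVSwitch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", ""),
        vif_name=vif["devname"],
        has_traffic_filtering=details.get("port_filter", False),
        active=vif["active"],
    )


print(nova_to_osvif_vif({
    "id": "fd320060-eaf8-4fd7-9325-e3793617bc7b",
    "address": "fa:16:3e:76:73:07",
    "devname": "tapfd320060-ea",
    "active": False,
    "details": {"port_filter": True, "bridge_name": "br-int"},
}))
```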
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.657 244018 DEBUG nova.objects.instance [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.680 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <uuid>ffd5cedf-474c-4977-807e-22a276ceb002</uuid>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <name>instance-0000001e</name>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-337821408</nova:name>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:22:22</nova:creationTime>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <nova:port uuid="fd320060-eaf8-4fd7-9325-e3793617bc7b">
Feb 25 12:22:23 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="serial">ffd5cedf-474c-4977-807e-22a276ceb002</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="uuid">ffd5cedf-474c-4977-807e-22a276ceb002</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ffd5cedf-474c-4977-807e-22a276ceb002_disk">
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ffd5cedf-474c-4977-807e-22a276ceb002_disk.config">
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:76:73:07"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <target dev="tapfd320060-ea"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/console.log" append="off"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:22:23 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:22:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:23 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.680 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Preparing to wait for external event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.682 244018 DEBUG nova.virt.libvirt.vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-337821408',display_name='tempest-ImagesTestJSON-server-337821408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-337821408',id=30,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-a00v4oxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:17Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=ffd5cedf-474c-4977-807e-22a276ceb002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.683 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 6.8 MiB/s wr, 182 op/s
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.684 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.686 244018 DEBUG os_vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.692 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd320060-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.693 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd320060-ea, col_values=(('external_ids', {'iface-id': 'fd320060-eaf8-4fd7-9325-e3793617bc7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:73:07', 'vm-uuid': 'ffd5cedf-474c-4977-807e-22a276ceb002'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:23 compute-0 NetworkManager[49836]: <info>  [1772022143.6963] manager: (tapfd320060-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.704 244018 INFO os_vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea')
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.762 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:76:73:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Using config drive
Feb 25 12:22:23 compute-0 nova_compute[244014]: 2026-02-25 12:22:23.786 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:24 compute-0 sudo[272951]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:22:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:22:24 compute-0 sudo[273055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:22:24 compute-0 sudo[273055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:24 compute-0 sudo[273055]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:24 compute-0 sudo[273080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/598363268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:22:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:22:24 compute-0 sudo[273080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.614353166 +0000 UTC m=+0.059129573 container create 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 12:22:24 compute-0 systemd[1]: Started libpod-conmon-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope.
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.590079826 +0000 UTC m=+0.034856263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:24 compute-0 nova_compute[244014]: 2026-02-25 12:22:24.690 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating config drive at /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config
Feb 25 12:22:24 compute-0 nova_compute[244014]: 2026-02-25 12:22:24.697 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp179iq57l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.708265198 +0000 UTC m=+0.153041645 container init 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.716800641 +0000 UTC m=+0.161577048 container start 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:22:24 compute-0 ecstatic_heisenberg[273134]: 167 167
Feb 25 12:22:24 compute-0 systemd[1]: libpod-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope: Deactivated successfully.
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.725725185 +0000 UTC m=+0.170501592 container attach 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.726557329 +0000 UTC m=+0.171333736 container died 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb780e7d05b6c9cc2f4d9d4992afe712fdeb9a48d35127c8a0b1c3c237db18bb-merged.mount: Deactivated successfully.
Feb 25 12:22:24 compute-0 podman[273117]: 2026-02-25 12:22:24.773841294 +0000 UTC m=+0.218617691 container remove 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:22:24 compute-0 systemd[1]: libpod-conmon-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope: Deactivated successfully.
Feb 25 12:22:24 compute-0 nova_compute[244014]: 2026-02-25 12:22:24.828 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp179iq57l" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:24 compute-0 nova_compute[244014]: 2026-02-25 12:22:24.864 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:24 compute-0 nova_compute[244014]: 2026-02-25 12:22:24.869 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config ffd5cedf-474c-4977-807e-22a276ceb002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:24 compute-0 podman[273180]: 2026-02-25 12:22:24.946386464 +0000 UTC m=+0.062739597 container create 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:22:24 compute-0 systemd[1]: Started libpod-conmon-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope.
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:24.917634235 +0000 UTC m=+0.033987428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.022 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.032 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config ffd5cedf-474c-4977-807e-22a276ceb002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.033 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Deleting local config drive /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config because it was imported into RBD.
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:25.052624346 +0000 UTC m=+0.168977469 container init 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:25.063319181 +0000 UTC m=+0.179672294 container start 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.065 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.065 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance network_info: |[{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:25.067476819 +0000 UTC m=+0.183829962 container attach 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.068 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start _get_guest_xml network_info=[{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.077 244018 WARNING nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.085 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.086 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.089 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.090 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.090 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.091 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.091 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:22:25 compute-0 kernel: tapfd320060-ea: entered promiscuous mode
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.0995] manager: (tapfd320060-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.099 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:25 compute-0 ovn_controller[147040]: 2026-02-25T12:22:25Z|00228|binding|INFO|Claiming lport fd320060-eaf8-4fd7-9325-e3793617bc7b for this chassis.
Feb 25 12:22:25 compute-0 ovn_controller[147040]: 2026-02-25T12:22:25Z|00229|binding|INFO|fd320060-eaf8-4fd7-9325-e3793617bc7b: Claiming fa:16:3e:76:73:07 10.100.0.12
Feb 25 12:22:25 compute-0 ovn_controller[147040]: 2026-02-25T12:22:25Z|00230|binding|INFO|Setting lport fd320060-eaf8-4fd7-9325-e3793617bc7b ovn-installed in OVS
Feb 25 12:22:25 compute-0 ovn_controller[147040]: 2026-02-25T12:22:25Z|00231|binding|INFO|Setting lport fd320060-eaf8-4fd7-9325-e3793617bc7b up in Southbound
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.109 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:73:07 10.100.0.12'], port_security=['fa:16:3e:76:73:07 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ffd5cedf-474c-4977-807e-22a276ceb002', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fd320060-eaf8-4fd7-9325-e3793617bc7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.111 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fd320060-eaf8-4fd7-9325-e3793617bc7b in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.113 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.124 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b674aa5b-fb3f-4972-a041-2ddc32e11b1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.124 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.126 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac149cc2-7549-445e-83ab-c3de59db99ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34574215-7bd4-4c50-b348-cd5fea70b166]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 systemd-udevd[273237]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.137 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5181c5-165d-4a06-9997-c6aba41522bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 systemd-machined[210048]: New machine qemu-34-instance-0000001e.
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.1522] device (tapfd320060-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.1531] device (tapfd320060-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:22:25 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7dffecb1-bb7b-4022-8528-f646d787e4ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.193 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd4e376-293c-4729-ba46-8ab8a7f6273e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.1998] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.199 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[284b9438-3f3a-4dae-867a-79ba0918980c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.229 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba272279-7c4e-47ac-b0e5-163367631c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.232 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdf2401-e6d1-42e6-81a1-bf97d6a32924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.2477] device (tap6a1663dd-20): carrier: link connected
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.249 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d24c77fb-4c7a-4b69-b1ac-a63b11716e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.263 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81394fbb-858a-4048-bc15-e7bf6e428746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411465, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273287, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.276 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8852c2ab-f587-49d3-9740-458165f18f0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411465, 'tstamp': 411465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273290, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.291 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4caf6a0b-4170-48c2-882f-1b220f33c95e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411465, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273291, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
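The two privsep replies above are pyroute2-style netlink messages (RTM_NEWLINK) describing the veth peer tap6a1663dd-21 inside the ovnmeta- namespace. A minimal sketch of reading the same attributes directly with pyroute2, assuming that namespace from this log still exists on the host:

    # Fetch RTM_NEWLINK attributes for links in the OVN metadata namespace,
    # mirroring the dumps the agent logs above (requires pyroute2 and root).
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21')
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),    # e.g. 'tap6a1663dd-21'
                  link.get_attr('IFLA_ADDRESS'),   # e.g. 'fa:16:3e:47:0c:b3'
                  link.get_attr('IFLA_MTU'),       # 1500 in the dump above
                  link['state'])                   # 'up'
    finally:
        ns.close()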
Feb 25 12:22:25 compute-0 ceph-mon[76335]: pgmap v1130: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 6.8 MiB/s wr, 182 op/s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76bfe854-1961-4509-acbb-98aee18e600e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b69f7a06-deae-4d3c-a14d-22c73d799442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.374 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:25 compute-0 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:25 compute-0 NetworkManager[49836]: <info>  [1772022145.3778] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.381 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
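The three ovsdbapp transactions above move tap6a1663dd-20 off br-ex, add it to br-int, and tag its Interface row with the Neutron port id that ovn-controller then binds. A sketch of issuing the same commands with ovsdbapp; the socket path and connection setup here are assumptions, not taken from this log:

    # Replay the DelPort/AddPort/DbSet commands from the transactions above
    # in one ovsdbapp transaction (db.sock path is an assumption).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap6a1663dd-20', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap6a1663dd-20', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6a1663dd-20',
            ('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'})))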
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:25 compute-0 ovn_controller[147040]: 2026-02-25T12:22:25Z|00232|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.386 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.386 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a807a260-bc60-458b-a59f-675b10702446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.387 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:22:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.388 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
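The earlier "Unable to access ... .pid.haproxy" DEBUG entry is the agent checking for an already-running proxy; since no pidfile exists, it renders the config shown above and launches haproxy inside the ovnmeta- namespace via rootwrap. A sketch of the equivalent spawn with plain subprocess (paths taken from the log; running this by hand would race with the real agent):

    # Launch haproxy with the rendered config inside the OVN metadata
    # namespace, mirroring the rootwrap command logged above.
    import subprocess

    netns = 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf')
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)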
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:25 compute-0 wizardly_heyrovsky[273216]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:22:25 compute-0 wizardly_heyrovsky[273216]: --> All data devices are unavailable
Feb 25 12:22:25 compute-0 systemd[1]: libpod-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope: Deactivated successfully.
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:25.530773901 +0000 UTC m=+0.647127014 container died 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f-merged.mount: Deactivated successfully.
Feb 25 12:22:25 compute-0 podman[273180]: 2026-02-25 12:22:25.587139714 +0000 UTC m=+0.703492847 container remove 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:22:25 compute-0 systemd[1]: libpod-conmon-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope: Deactivated successfully.
Feb 25 12:22:25 compute-0 sudo[273080]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617231523' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.653 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.652667, ffd5cedf-474c-4977-807e-22a276ceb002 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.654 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Started (Lifecycle Event)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.660 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.685 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
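nova's rbd driver shells out to "ceph mon dump --format=json" (0.561s above) to discover monitor addresses before it touches RBD images like the _disk.config probe here. A sketch of the same call and parse, reusing the client name and conf path from the log; exact address fields can vary by Ceph release:

    # Run the mon dump nova logs above and list monitor names/addresses.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out)['mons']:
        # 'public_addr' may be 'public_addrs' on newer releases; .get()
        # keeps the sketch tolerant of that.
        print(mon['name'], mon.get('public_addr'))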
Feb 25 12:22:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 6.0 MiB/s wr, 159 op/s
Feb 25 12:22:25 compute-0 sudo[273373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.689 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:25 compute-0 sudo[273373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:25 compute-0 sudo[273373]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.709 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.652835, ffd5cedf-474c-4977-807e-22a276ceb002 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Paused (Lifecycle Event)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.738 244018 DEBUG nova.compute.manager [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.740 244018 DEBUG nova.compute.manager [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Processing event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.741 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.744 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:25 compute-0 sudo[273435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
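The sudo line above is cephadm running "ceph-volume lvm list --format json" inside a short-lived ceph container to inventory LVM-backed OSDs (the wizardly_heyrovsky "3 LVM" devices came from the same kind of probe). A sketch of consuming that JSON; the per-LV field names are assumptions based on typical ceph-volume output:

    # Parse `ceph-volume lvm list --format json`: a dict mapping OSD ids to
    # lists of LV records (key names below are assumptions).
    import json
    import subprocess

    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get('type'), lv.get('lv_path'),
                  lv.get('tags', {}).get('ceph.osd_fsid'))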
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.745 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:22:25 compute-0 sudo[273435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.749 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.7451842, ffd5cedf-474c-4977-807e-22a276ceb002 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.749 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Resumed (Lifecycle Event)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.752 244018 INFO nova.virt.libvirt.driver [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance spawned successfully.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.752 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:22:25 compute-0 podman[273432]: 2026-02-25 12:22:25.755402721 +0000 UTC m=+0.051554167 container create f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.778 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.781 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.781 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.782 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.782 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.783 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.784 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.788 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:25 compute-0 systemd[1]: Started libpod-conmon-f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899.scope.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.816 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] During sync_power_state the instance has a pending task (spawning). Skip.
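The "Skip" above is the power-state sync declining to act while a task is in flight: the DB still records power_state 0 (NOSTATE) from before the boot, the hypervisor already reports 1 (RUNNING), but task_state 'spawning' means the build path will write the final state anyway. A toy sketch of that decision, illustrative only and not nova's actual code:

    # Illustrative power-state sync decision (values from the log entries
    # above: DB power_state 0, VM power_state 1, task_state 'spawning').
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            return 'skip: pending task %r' % task_state
        if db_state != vm_state:
            return 'reconcile: DB %d -> hypervisor %d' % (db_state, vm_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # skip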
Feb 25 12:22:25 compute-0 podman[273432]: 2026-02-25 12:22:25.729903895 +0000 UTC m=+0.026055391 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:22:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9334fe215ef094e8186d73e604d53efb736aee56ac5daec6868b6733a23f6a9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:25 compute-0 podman[273432]: 2026-02-25 12:22:25.849191569 +0000 UTC m=+0.145343055 container init f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.853 244018 INFO nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 8.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.853 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:25 compute-0 podman[273432]: 2026-02-25 12:22:25.854738167 +0000 UTC m=+0.150889613 container start f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:22:25 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : New worker (273498) forked
Feb 25 12:22:25 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : Loading success.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.921 244018 INFO nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 9.85 seconds to build instance.
Feb 25 12:22:25 compute-0 nova_compute[244014]: 2026-02-25 12:22:25.942 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
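The timings above nest as expected: 8.71 s to spawn on the hypervisor, inside 9.85 s to build the instance, inside the 10.059 s build lock. A quick sketch for pulling these "Took N seconds" figures out of a saved copy of this journal; the filename and regex are assumptions:

    # Extract nova's "Took N.NN seconds to ..." timings from a capture of
    # this journal (file name is hypothetical).
    import re

    pattern = re.compile(
        r'\[instance: ([0-9a-f-]{36})\] Took ([\d.]+) seconds to (\w+)')
    with open('compute-0-journal.log') as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                uuid, seconds, action = m.groups()
                print(uuid, action, seconds + 's')   # e.g. '... spawn 8.71s'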
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.018089665 +0000 UTC m=+0.031640871 container create 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:22:26 compute-0 systemd[1]: Started libpod-conmon-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope.
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.076 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022131.0753112, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.077 244018 INFO nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Stopped (Lifecycle Event)
Feb 25 12:22:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.096941998 +0000 UTC m=+0.110493264 container init 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.004528819 +0000 UTC m=+0.018080055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.104681409 +0000 UTC m=+0.118232665 container start 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:22:26 compute-0 zen_aryabhata[273535]: 167 167
Feb 25 12:22:26 compute-0 systemd[1]: libpod-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope: Deactivated successfully.
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.109480895 +0000 UTC m=+0.123032151 container attach 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.110182095 +0000 UTC m=+0.123733311 container died 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.115 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updated VIF entry in instance network info cache for port fd320060-eaf8-4fd7-9325-e3793617bc7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.116 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updating instance_info_cache with network_info: [{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
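The network_info blob above is nova's per-instance cache; the runtime facts that matter (port id, fixed IPs) sit a few levels down in it. A small sketch of walking a parsed copy of that structure:

    # Walk a parsed network_info list (the JSON payload from the cache
    # update above) and yield (port id, fixed IP) pairs.
    def fixed_ips(network_info):
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address']

    # For the entry above this yields:
    #   ('fd320060-eaf8-4fd7-9325-e3793617bc7b', '10.100.0.12')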
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.119 244018 DEBUG nova.compute.manager [None req-7515afb6-cf31-44d6-a463-352e3f416521 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6235d5e1ec722d2de6bb742e2e7263a8c6864d14277088de20fbf04ed8742f75-merged.mount: Deactivated successfully.
Feb 25 12:22:26 compute-0 podman[273519]: 2026-02-25 12:22:26.142731071 +0000 UTC m=+0.156282277 container remove 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:22:26 compute-0 systemd[1]: libpod-conmon-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope: Deactivated successfully.
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.179 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.181 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.181 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:26 compute-0 ovn_controller[147040]: 2026-02-25T12:22:26Z|00233|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:22:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201932827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.221 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.222 244018 DEBUG nova.virt.libvirt.vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": 
"67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.223 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.223 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
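os_vif_util converts the Neutron VIF dict into the os-vif VIFOpenVSwitch object shown above before handing it to the ovs plugin. A sketch of constructing the same object by hand, copying field values from that log line; treat the exact constructor field set as an assumption about this os-vif version:

    # Build the os-vif object nova reports converting to above (values
    # copied from the log line; field set is an assumption).
    from os_vif.objects import vif as vif_objs

    v = vif_objs.VIFOpenVSwitch(
        id='67db480d-6a66-4c54-be9c-5375a0d664cd',
        address='fa:16:3e:4a:ac:51',
        bridge_name='br-int',
        vif_name='tap67db480d-6a',
        has_traffic_filtering=True,
        active=False,
        preserve_on_delete=False)
    print(v.vif_name)   # tap67db480d-6a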
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.224 244018 DEBUG nova.objects.instance [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.272673809 +0000 UTC m=+0.038598480 container create 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:22:26 compute-0 ovn_controller[147040]: 2026-02-25T12:22:26Z|00234|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 systemd[1]: Started libpod-conmon-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope.
Feb 25 12:22:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1617231523' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/201932827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.338 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <uuid>33db1662-e67d-41de-b8d6-ea93b40cf7cd</uuid>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <name>instance-0000001f</name>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:22:25</nova:creationTime>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 12:22:26 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="serial">33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="uuid">33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk">
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config">
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:26 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:4a:ac:51"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <target dev="tap67db480d-6a"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log" append="off"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:22:26 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:22:26 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:26 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:26 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:26 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.343 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Preparing to wait for external event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:22:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.343 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.345 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.345 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.346 244018 DEBUG nova.virt.libvirt.vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.347 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.348 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.348 244018 DEBUG os_vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.254315676 +0000 UTC m=+0.020240407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.349 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67db480d-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67db480d-6a, col_values=(('external_ids', {'iface-id': '67db480d-6a66-4c54-be9c-5375a0d664cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:ac:51', 'vm-uuid': '33db1662-e67d-41de-b8d6-ea93b40cf7cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:26 compute-0 NetworkManager[49836]: <info>  [1772022146.3556] manager: (tap67db480d-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.369091662 +0000 UTC m=+0.135016393 container init 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.371 244018 INFO os_vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a')
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.380331222 +0000 UTC m=+0.146255923 container start 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.38448503 +0000 UTC m=+0.150409741 container attach 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.495 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.497 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.497 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:ac:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.498 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Using config drive
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.529 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]: {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     "0": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "devices": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "/dev/loop3"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             ],
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_name": "ceph_lv0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_size": "21470642176",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "name": "ceph_lv0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "tags": {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_name": "ceph",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.crush_device_class": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.encrypted": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.objectstore": "bluestore",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_id": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.vdo": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.with_tpm": "0"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             },
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "vg_name": "ceph_vg0"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         }
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     ],
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     "1": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "devices": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "/dev/loop4"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             ],
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_name": "ceph_lv1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_size": "21470642176",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "name": "ceph_lv1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "tags": {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_name": "ceph",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.crush_device_class": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.encrypted": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.objectstore": "bluestore",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_id": "1",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.vdo": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.with_tpm": "0"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             },
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "vg_name": "ceph_vg1"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         }
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     ],
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     "2": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "devices": [
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "/dev/loop5"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             ],
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_name": "ceph_lv2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_size": "21470642176",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "name": "ceph_lv2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "tags": {
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.cluster_name": "ceph",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.crush_device_class": "",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.encrypted": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.objectstore": "bluestore",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osd_id": "2",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.vdo": "0",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:                 "ceph.with_tpm": "0"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             },
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "type": "block",
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:             "vg_name": "ceph_vg2"
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:         }
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]:     ]
Feb 25 12:22:26 compute-0 awesome_goldstine[273577]: }
Feb 25 12:22:26 compute-0 systemd[1]: libpod-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope: Deactivated successfully.
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.683153288 +0000 UTC m=+0.449077999 container died 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 12:22:26 compute-0 podman[273561]: 2026-02-25 12:22:26.727609542 +0000 UTC m=+0.493534243 container remove 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9-merged.mount: Deactivated successfully.
Feb 25 12:22:26 compute-0 systemd[1]: libpod-conmon-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope: Deactivated successfully.
Feb 25 12:22:26 compute-0 sudo[273435]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:26 compute-0 sudo[273618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:22:26 compute-0 sudo[273618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:26 compute-0 sudo[273618]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.897 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating config drive at /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config
Feb 25 12:22:26 compute-0 nova_compute[244014]: 2026-02-25 12:22:26.903 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe5jtm2do execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:26 compute-0 sudo[273643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:22:26 compute-0 sudo[273643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.036 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe5jtm2do" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.066 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.077 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.188862016 +0000 UTC m=+0.052816264 container create ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:22:27 compute-0 systemd[1]: Started libpod-conmon-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope.
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.240 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.241 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Deleting local config drive /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config because it was imported into RBD.
Feb 25 12:22:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.163254208 +0000 UTC m=+0.027208476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.269811549 +0000 UTC m=+0.133765867 container init ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.278439975 +0000 UTC m=+0.142394203 container start ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.2808] manager: (tap67db480d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.282100259 +0000 UTC m=+0.146054517 container attach ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:22:27 compute-0 kernel: tap67db480d-6a: entered promiscuous mode
Feb 25 12:22:27 compute-0 systemd-udevd[273253]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:22:27 compute-0 happy_brown[273736]: 167 167
Feb 25 12:22:27 compute-0 systemd[1]: libpod-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope: Deactivated successfully.
Feb 25 12:22:27 compute-0 conmon[273736]: conmon ca0a471a1e400be034cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope/container/memory.events
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.288024268 +0000 UTC m=+0.151978526 container died ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:22:27 compute-0 ovn_controller[147040]: 2026-02-25T12:22:27Z|00235|binding|INFO|Claiming lport 67db480d-6a66-4c54-be9c-5375a0d664cd for this chassis.
Feb 25 12:22:27 compute-0 ovn_controller[147040]: 2026-02-25T12:22:27Z|00236|binding|INFO|67db480d-6a66-4c54-be9c-5375a0d664cd: Claiming fa:16:3e:4a:ac:51 10.100.0.11
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.2983] device (tap67db480d-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.2992] device (tap67db480d-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.305 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ac:51 10.100.0.11'], port_security=['fa:16:3e:4a:ac:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '33db1662-e67d-41de-b8d6-ea93b40cf7cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc690998-27c4-44c1-9496-60c99dea61b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=67db480d-6a66-4c54-be9c-5375a0d664cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.308 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 67db480d-6a66-4c54-be9c-5375a0d664cd in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 ovn_controller[147040]: 2026-02-25T12:22:27Z|00237|binding|INFO|Setting lport 67db480d-6a66-4c54-be9c-5375a0d664cd ovn-installed in OVS
Feb 25 12:22:27 compute-0 ovn_controller[147040]: 2026-02-25T12:22:27Z|00238|binding|INFO|Setting lport 67db480d-6a66-4c54-be9c-5375a0d664cd up in Southbound
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.317 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 ceph-mon[76335]: pgmap v1131: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 6.0 MiB/s wr, 159 op/s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.327 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c03cc11-471d-4aed-94cd-c39827f6266e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.328 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:22:27 compute-0 systemd-machined[210048]: New machine qemu-35-instance-0000001f.
Feb 25 12:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4dc5cbccd02c79d4bf120842d6b24638ebb8f4160b4b0607d3d6400ff10a598-merged.mount: Deactivated successfully.
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.332 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.333 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e6d2b5-e606-4dab-b5f9-38b3f5c960c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19c9456e-15e4-4bb5-8afd-33b310cc4839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Feb 25 12:22:27 compute-0 podman[273702]: 2026-02-25 12:22:27.345976427 +0000 UTC m=+0.209930665 container remove ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.350 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5a28507c-4e39-408e-9294-c6796346a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 systemd[1]: libpod-conmon-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope: Deactivated successfully.
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25c69af4-3b90-4264-8916-dafeadc7b309]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[70b0b920-03a9-49b0-af2a-21f7ddc02cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.4124] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38a80d62-e506-4f81-97dd-2231a2b92df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.440 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ad82af-45a7-481b-bcc9-b4fde177542c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.444 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[369b3b96-fca9-446b-bce2-c1d9acf52aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.4653] device (tap08121372-a0): carrier: link connected
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.471 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f3952259-ee53-435a-92bd-ec09fad485a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b01ae92-fb0d-46a8-960c-bc8f1bab0dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273789, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90dad9e7-25e6-448c-ac6a-3dac7611d924]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411687, 'tstamp': 411687}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273797, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7f8803-031a-453b-9dcf-bf7c24483210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273798, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc442bb-81ea-4c31-b0ab-7999abad7ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 podman[273790]: 2026-02-25 12:22:27.556240939 +0000 UTC m=+0.059996228 container create 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:22:27 compute-0 systemd[1]: Started libpod-conmon-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope.
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.609 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33a15e24-012a-46e9-8487-d36c3dffccfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:27 compute-0 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 12:22:27 compute-0 NetworkManager[49836]: <info>  [1772022147.6163] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 podman[273790]: 2026-02-25 12:22:27.528610763 +0000 UTC m=+0.032366132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.627 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 ovn_controller[147040]: 2026-02-25T12:22:27Z|00239|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:22:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.639 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bafcfce1-722b-4e15-89bd-bf1929f7745a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.640 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:22:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.641 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 12:22:27 compute-0 podman[273790]: 2026-02-25 12:22:27.696793278 +0000 UTC m=+0.200548647 container init 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:22:27 compute-0 podman[273790]: 2026-02-25 12:22:27.70422977 +0000 UTC m=+0.207985079 container start 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:22:27 compute-0 podman[273790]: 2026-02-25 12:22:27.751457673 +0000 UTC m=+0.255212962 container attach 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.882 244018 DEBUG nova.compute.manager [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.883 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.883 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.884 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.884 244018 DEBUG nova.compute.manager [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Processing event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.886 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.887 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.908 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.947 244018 DEBUG nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.948 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.948 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 DEBUG nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] No waiting events found dispatching network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 WARNING nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received unexpected event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b for instance with vm_state active and task_state None.
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.959 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.9587524, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.959 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Started (Lifecycle Event)
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.961 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.964 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.967 244018 INFO nova.virt.libvirt.driver [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance spawned successfully.
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.967 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.983 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.989 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.992 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.992 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.993 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.993 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.994 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:27 compute-0 nova_compute[244014]: 2026-02-25 12:22:27.994 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.960511, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Paused (Lifecycle Event)
Feb 25 12:22:28 compute-0 podman[273894]: 2026-02-25 12:22:28.039515319 +0000 UTC m=+0.053218565 container create 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.049 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.052 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.9633794, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.052 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Resumed (Lifecycle Event)
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.059 244018 INFO nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 8.54 seconds to spawn the instance on the hypervisor.
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.060 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:28 compute-0 systemd[1]: Started libpod-conmon-015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0.scope.
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.084 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG oslo_concurrency.lockutils [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG oslo_concurrency.lockutils [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG nova.compute.manager [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.093 244018 DEBUG nova.compute.manager [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.093 244018 DEBUG nova.objects.instance [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'flavor' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:22:28 compute-0 podman[273894]: 2026-02-25 12:22:28.009441004 +0000 UTC m=+0.023144280 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef7d83654588edb9f69d59a0176fd12c6608dd39d362a35f0ac0ad74b24bf41d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.134 244018 DEBUG nova.virt.libvirt.driver [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:22:28 compute-0 podman[273894]: 2026-02-25 12:22:28.149012495 +0000 UTC m=+0.162715741 container init 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.150 244018 INFO nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 10.07 seconds to build instance.
Feb 25 12:22:28 compute-0 podman[273894]: 2026-02-25 12:22:28.156359814 +0000 UTC m=+0.170063060 container start 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:22:28 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : New worker (273959) forked
Feb 25 12:22:28 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : Loading success.
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.183 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:28 compute-0 lvm[273987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:22:28 compute-0 lvm[273987]: VG ceph_vg1 finished
Feb 25 12:22:28 compute-0 lvm[273986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:22:28 compute-0 lvm[273986]: VG ceph_vg0 finished
Feb 25 12:22:28 compute-0 lvm[273989]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:22:28 compute-0 lvm[273989]: VG ceph_vg2 finished
Feb 25 12:22:28 compute-0 unruffled_jepsen[273815]: {}
Feb 25 12:22:28 compute-0 systemd[1]: libpod-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope: Deactivated successfully.
Feb 25 12:22:28 compute-0 podman[273992]: 2026-02-25 12:22:28.484321215 +0000 UTC m=+0.023818539 container died 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3-merged.mount: Deactivated successfully.
Feb 25 12:22:28 compute-0 podman[273992]: 2026-02-25 12:22:28.522559283 +0000 UTC m=+0.062056537 container remove 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:22:28 compute-0 systemd[1]: libpod-conmon-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope: Deactivated successfully.
Feb 25 12:22:28 compute-0 sudo[273643]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:22:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:22:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:28 compute-0 sudo[274006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:22:28 compute-0 sudo[274006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:22:28 compute-0 sudo[274006]: pam_unix(sudo:session): session closed for user root
Feb 25 12:22:28 compute-0 nova_compute[244014]: 2026-02-25 12:22:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:29 compute-0 ceph-mon[76335]: pgmap v1132: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 12:22:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:22:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 758 KiB/s rd, 4.9 MiB/s wr, 151 op/s
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.207 244018 DEBUG nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.209 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.210 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.210 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.211 244018 DEBUG nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:30 compute-0 nova_compute[244014]: 2026-02-25 12:22:30.211 244018 WARNING nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd for instance with vm_state active and task_state None.
Feb 25 12:22:30 compute-0 ceph-mon[76335]: pgmap v1133: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 758 KiB/s rd, 4.9 MiB/s wr, 151 op/s
Feb 25 12:22:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:22:30
Feb 25 12:22:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:22:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:22:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Feb 25 12:22:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:22:31 compute-0 nova_compute[244014]: 2026-02-25 12:22:31.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 216 op/s
Feb 25 12:22:31 compute-0 nova_compute[244014]: 2026-02-25 12:22:31.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:31 compute-0 NetworkManager[49836]: <info>  [1772022151.7045] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb 25 12:22:31 compute-0 NetworkManager[49836]: <info>  [1772022151.7062] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb 25 12:22:31 compute-0 ovn_controller[147040]: 2026-02-25T12:22:31Z|00240|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:22:31 compute-0 nova_compute[244014]: 2026-02-25 12:22:31.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:31 compute-0 ovn_controller[147040]: 2026-02-25T12:22:31Z|00241|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:22:31 compute-0 nova_compute[244014]: 2026-02-25 12:22:31.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:22:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:22:32 compute-0 nova_compute[244014]: 2026-02-25 12:22:32.397 244018 DEBUG nova.compute.manager [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:32 compute-0 nova_compute[244014]: 2026-02-25 12:22:32.398 244018 DEBUG nova.compute.manager [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:32 compute-0 nova_compute[244014]: 2026-02-25 12:22:32.399 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:32 compute-0 nova_compute[244014]: 2026-02-25 12:22:32.399 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:32 compute-0 nova_compute[244014]: 2026-02-25 12:22:32.400 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:33 compute-0 ceph-mon[76335]: pgmap v1134: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 216 op/s
Feb 25 12:22:33 compute-0 nova_compute[244014]: 2026-02-25 12:22:33.563 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022138.5626101, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:33 compute-0 nova_compute[244014]: 2026-02-25 12:22:33.564 244018 INFO nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Stopped (Lifecycle Event)
Feb 25 12:22:33 compute-0 nova_compute[244014]: 2026-02-25 12:22:33.589 244018 DEBUG nova.compute.manager [None req-f1e12a04-6e59-454b-a6c8-d66ac19bc22b - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Feb 25 12:22:33 compute-0 nova_compute[244014]: 2026-02-25 12:22:33.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:34 compute-0 nova_compute[244014]: 2026-02-25 12:22:34.195 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:34 compute-0 nova_compute[244014]: 2026-02-25 12:22:34.196 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:34 compute-0 nova_compute[244014]: 2026-02-25 12:22:34.227 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:34 compute-0 ovn_controller[147040]: 2026-02-25T12:22:34Z|00242|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:22:34 compute-0 ovn_controller[147040]: 2026-02-25T12:22:34Z|00243|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:22:34 compute-0 nova_compute[244014]: 2026-02-25 12:22:34.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.668 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.668 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.704 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:22:35 compute-0 ceph-mon[76335]: pgmap v1135: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.803 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.804 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.821 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:22:35 compute-0 nova_compute[244014]: 2026-02-25 12:22:35.822 244018 INFO nova.compute.claims [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:22:36 compute-0 nova_compute[244014]: 2026-02-25 12:22:36.195 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:36 compute-0 nova_compute[244014]: 2026-02-25 12:22:36.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:22:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773599575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.049 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.056 244018 DEBUG nova.compute.provider_tree [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.083 244018 DEBUG nova.scheduler.client.report [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.121 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.122 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:22:37 compute-0 ceph-mon[76335]: pgmap v1136: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.174 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.174 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.211 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.239 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.386 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.387 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.388 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Creating image(s)
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.415 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.443 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.464 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.467 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.516 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.518 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.519 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.520 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.549 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.552 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:37 compute-0 nova_compute[244014]: 2026-02-25 12:22:37.612 244018 DEBUG nova.policy [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:22:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Feb 25 12:22:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3773599575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.197 244018 DEBUG nova.virt.libvirt.driver [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.396 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.479 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:22:38 compute-0 rsyslogd[1020]: imjournal from <np0005629333:ceph-mon>: begin to drop messages due to rate-limiting
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.569 244018 DEBUG nova.objects.instance [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.597 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.598 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Ensure instance console log exists: /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.598 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.599 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.599 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:38 compute-0 nova_compute[244014]: 2026-02-25 12:22:38.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:39 compute-0 ceph-mon[76335]: pgmap v1137: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Feb 25 12:22:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.1 MiB/s wr, 139 op/s
Feb 25 12:22:39 compute-0 ovn_controller[147040]: 2026-02-25T12:22:39Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:ac:51 10.100.0.11
Feb 25 12:22:39 compute-0 ovn_controller[147040]: 2026-02-25T12:22:39Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:73:07 10.100.0.12
Feb 25 12:22:39 compute-0 ovn_controller[147040]: 2026-02-25T12:22:39Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:ac:51 10.100.0.11
Feb 25 12:22:39 compute-0 ovn_controller[147040]: 2026-02-25T12:22:39Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:73:07 10.100.0.12
Feb 25 12:22:41 compute-0 ceph-mon[76335]: pgmap v1138: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.1 MiB/s wr, 139 op/s
Feb 25 12:22:41 compute-0 nova_compute[244014]: 2026-02-25 12:22:41.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 296 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Feb 25 12:22:42 compute-0 nova_compute[244014]: 2026-02-25 12:22:42.036 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Successfully created port: 7298ef49-4582-410a-821d-bd43bc865fba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013226274168801088 of space, bias 1.0, pg target 0.39678822506403266 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024901549083503295 of space, bias 1.0, pg target 0.7470464725050988 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.817711411521034e-07 of space, bias 4.0, pg target 0.0006981253693825241 quantized to 16 (current 16)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:22:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:22:43 compute-0 ceph-mon[76335]: pgmap v1139: 305 pgs: 305 active+clean; 296 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Feb 25 12:22:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 208 op/s
Feb 25 12:22:43 compute-0 nova_compute[244014]: 2026-02-25 12:22:43.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:43 compute-0 nova_compute[244014]: 2026-02-25 12:22:43.784 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Successfully updated port: 7298ef49-4582-410a-821d-bd43bc865fba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:22:43 compute-0 nova_compute[244014]: 2026-02-25 12:22:43.801 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:43 compute-0 nova_compute[244014]: 2026-02-25 12:22:43.802 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:43 compute-0 nova_compute[244014]: 2026-02-25 12:22:43.802 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:22:44 compute-0 nova_compute[244014]: 2026-02-25 12:22:44.072 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:22:44 compute-0 nova_compute[244014]: 2026-02-25 12:22:44.144 244018 DEBUG nova.compute.manager [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-changed-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:44 compute-0 nova_compute[244014]: 2026-02-25 12:22:44.144 244018 DEBUG nova.compute.manager [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing instance network info cache due to event network-changed-7298ef49-4582-410a-821d-bd43bc865fba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:44 compute-0 nova_compute[244014]: 2026-02-25 12:22:44.145 244018 DEBUG oslo_concurrency.lockutils [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.006 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.072 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.072 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Instance network_info: |[{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.073 244018 DEBUG oslo_concurrency.lockutils [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.073 244018 DEBUG nova.network.neutron [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.079 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start _get_guest_xml network_info=[{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.085 244018 WARNING nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.093 244018 DEBUG nova.virt.libvirt.host [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.094 244018 DEBUG nova.virt.libvirt.host [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.098 244018 DEBUG nova.virt.libvirt.host [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.099 244018 DEBUG nova.virt.libvirt.host [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.099 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.100 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.101 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.101 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.101 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.102 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.102 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.103 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.103 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.104 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.104 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.105 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
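The nova.virt.hardware lines above walk through guest CPU topology selection: with no flavor or image constraints (limits and preferences all 0:0:0, defaults of 65536 each), the only layout whose product equals one vCPU is sockets=1, cores=1, threads=1. A minimal stand-in for that enumeration in Python (illustrative only, not Nova's actual implementation):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the limits logged above (65536 each by default).
    from collections import namedtuple

    Topology = namedtuple("Topology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append(Topology(s, c, t))
        return found

    # For the m1.nano flavor (vcpus=1) this yields exactly one candidate,
    # matching the "Got 1 possible topologies" line:
    print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)]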
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.111 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:45 compute-0 ceph-mon[76335]: pgmap v1140: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 208 op/s
Feb 25 12:22:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454635598' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.640 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.663 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:45 compute-0 nova_compute[244014]: 2026-02-25 12:22:45.667 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 6.0 MiB/s wr, 153 op/s
Feb 25 12:22:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:22:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929779363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.204 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
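Each "Running cmd"/"returned: 0" pair above brackets a shell-out: before touching the vms pool, the RBD image backend runs ceph mon dump --format=json to discover monitor addresses. A rough standard-library equivalent; only the command line is taken from the log, while the helper name and the assumed JSON shape ("mons"/"addr") are illustrative:

    import json
    import subprocess

    def monitor_addresses(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        # The dump carries one entry per monitor; here a single mon on
        # 192.168.122.100:6789, the host the domain XML points at below.
        return [mon.get("addr") for mon in json.loads(out).get("mons", [])]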
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.206 244018 DEBUG nova.virt.libvirt.vif [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": 
"7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.206 244018 DEBUG nova.network.os_vif_util [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.207 244018 DEBUG nova.network.os_vif_util [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
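Before plugging, the driver converts its own network-info dict into an os-vif object; the "Converted object" line shows the result. Rebuilding that object by hand with the os_vif library would look roughly like this; field names mirror the log, but the exact constructor signature is an assumption about os_vif's versioned-object API:

    # Hand-building the VIFOpenVSwitch object from the log's fields;
    # a sketch against os_vif's object API, not Nova's converter.
    from os_vif.objects import vif as vif_obj

    v = vif_obj.VIFOpenVSwitch(
        id="7298ef49-4582-410a-821d-bd43bc865fba",
        address="fa:16:3e:ba:e5:c6",
        bridge_name="br-int",
        vif_name="tap7298ef49-45",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="7298ef49-4582-410a-821d-bd43bc865fba"),
    )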
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.208 244018 DEBUG nova.objects.instance [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.226 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <uuid>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</uuid>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <name>instance-00000020</name>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-1363616292</nova:name>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:22:45</nova:creationTime>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <nova:port uuid="7298ef49-4582-410a-821d-bd43bc865fba">
Feb 25 12:22:46 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <system>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="serial">8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="uuid">8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </system>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <os>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </os>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <features>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </features>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk">
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config">
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:22:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:e5:c6"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <target dev="tap7298ef49-45"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log" append="off"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <video>
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </video>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:22:46 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:22:46 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:22:46 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:22:46 compute-0 nova_compute[244014]: </domain>
Feb 25 12:22:46 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
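The XML above is what Nova hands to libvirt. A bare-bones sketch of that hand-off with the libvirt-python bindings (assuming the stock qemu:///system URI; Nova's driver wraps the same calls with far more error handling):

    import libvirt

    with open("instance-00000020.xml") as f:   # the domain XML logged above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)                 # persist the definition
        dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
        # Starting the guest paused and resuming it after network plugging
        # is consistent with the "VM Started" followed by "VM Paused"
        # lifecycle events that appear further down.
    finally:
        conn.close()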
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.227 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Preparing to wait for external event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.228 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.228 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.228 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
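Note the ordering here: the waiter for network-vif-plugged is registered (under the per-instance "-events" lock) before the VIF is plugged, so the Neutron notification cannot slip through unobserved. The core of that pattern, reduced to threading primitives (illustrative; Nova's InstanceEvents is more elaborate):

    import threading

    class InstanceEvents:
        """Register-first, wait-later event bookkeeping."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            with self._lock:            # "Acquiring lock ...-events"
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.pop(name, None)
            if ev is not None:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged-7298ef49")
    # ... plug the VIF and define/start the domain here ...
    events.deliver("network-vif-plugged-7298ef49")  # the Neutron event arrives
    assert waiter.wait(timeout=300)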
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.229 244018 DEBUG nova.virt.libvirt.vif [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": 
"7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.229 244018 DEBUG nova.network.os_vif_util [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.229 244018 DEBUG nova.network.os_vif_util [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.230 244018 DEBUG os_vif [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.231 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.231 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.234 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7298ef49-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.235 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7298ef49-45, col_values=(('external_ids', {'iface-id': '7298ef49-4582-410a-821d-bd43bc865fba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e5:c6', 'vm-uuid': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:46 compute-0 NetworkManager[49836]: <info>  [1772022166.2381] manager: (tap7298ef49-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:22:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1454635598' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2929779363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.246 244018 INFO os_vif [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45')
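The three ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what "Successfully plugged vif" summarizes. Driving the same commands straight through ovsdbapp would look roughly like this; the socket endpoint and timeout are assumptions, while command names and column values match the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/var/run/openvswitch/db.sock"   # hypothetical endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap7298ef49-45", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap7298ef49-45",
            ("external_ids", {
                "iface-id": "7298ef49-4582-410a-821d-bd43bc865fba",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:ba:e5:c6",
                "vm-uuid": "8715ba65-46d1-406d-8a4d-b5ff5067e2c1"})))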
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.316 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.317 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.317 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:ba:e5:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.318 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Using config drive
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.336 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.725 244018 DEBUG nova.network.neutron [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updated VIF entry in instance network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.726 244018 DEBUG nova.network.neutron [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:46 compute-0 nova_compute[244014]: 2026-02-25 12:22:46.746 244018 DEBUG oslo_concurrency.lockutils [req-54a890fe-74cf-4b39-b966-1786942167b4 req-08d6ad22-1332-4e4b-a96f-fc694065444c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:47 compute-0 ceph-mon[76335]: pgmap v1141: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 6.0 MiB/s wr, 153 op/s
Feb 25 12:22:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:22:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/853690327' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:22:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:22:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/853690327' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.518 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Creating config drive at /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.525 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5yxmha_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.663 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5yxmha_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.701 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.707 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.855 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.856 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Deleting local config drive /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/disk.config because it was imported into RBD.
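The config-drive sequence above is three steps: pack the staged metadata into an ISO, import it into the Ceph vms pool, then delete the local copy. Stripped of the oslo.concurrency wrappers it is essentially the following; the staging directory name is illustrative, the flags are copied from the log:

    import os
    import subprocess

    uuid = "8715ba65-46d1-406d-8a4d-b5ff5067e2c1"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Pack the staged metadata into an ISO9660 volume labelled config-2.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/configdrive-staging"],          # hypothetical staging dir
        check=True)

    # 2. Import it into RBD so the SATA cdrom in the domain XML can read it.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. "Deleting local config drive ... because it was imported into RBD."
    os.unlink(iso)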
Feb 25 12:22:47 compute-0 kernel: tap7298ef49-45: entered promiscuous mode
Feb 25 12:22:47 compute-0 NetworkManager[49836]: <info>  [1772022167.9073] manager: (tap7298ef49-45): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Feb 25 12:22:47 compute-0 ovn_controller[147040]: 2026-02-25T12:22:47Z|00244|binding|INFO|Claiming lport 7298ef49-4582-410a-821d-bd43bc865fba for this chassis.
Feb 25 12:22:47 compute-0 ovn_controller[147040]: 2026-02-25T12:22:47Z|00245|binding|INFO|7298ef49-4582-410a-821d-bd43bc865fba: Claiming fa:16:3e:ba:e5:c6 10.100.0.3
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.920 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e5:c6 10.100.0.3'], port_security=['fa:16:3e:ba:e5:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc690998-27c4-44c1-9496-60c99dea61b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7298ef49-4582-410a-821d-bd43bc865fba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7298ef49-4582-410a-821d-bd43bc865fba in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:22:47 compute-0 ovn_controller[147040]: 2026-02-25T12:22:47Z|00246|binding|INFO|Setting lport 7298ef49-4582-410a-821d-bd43bc865fba ovn-installed in OVS
Feb 25 12:22:47 compute-0 ovn_controller[147040]: 2026-02-25T12:22:47Z|00247|binding|INFO|Setting lport 7298ef49-4582-410a-821d-bd43bc865fba up in Southbound
Feb 25 12:22:47 compute-0 nova_compute[244014]: 2026-02-25 12:22:47.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.925 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
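The metadata agent is reacting to the Port_Binding row update shown above: once the chassis column flips from empty (old=Port_Binding(chassis=[])) to this host, it provisions the metadata namespace for the network. A matching event class sketched against ovsdbapp's RowEvent; the match_fn hook is an assumption about the ovsdbapp version in use:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row becomes bound to our chassis."""
        def __init__(self, chassis_uuid):
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.chassis_uuid = chassis_uuid

        def match_fn(self, event, row, old):
            # old carried chassis=[] in the log, i.e. the binding is new.
            return (bool(row.chassis)
                    and not getattr(old, "chassis", None)
                    and row.chassis[0].uuid == self.chassis_uuid)

        def run(self, event, row, old):
            print(f"Port {row.logical_port} bound to our chassis")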
Feb 25 12:22:47 compute-0 systemd-machined[210048]: New machine qemu-36-instance-00000020.
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9560a9-a7b5-40d6-9819-2b2da5b57e2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:47 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Feb 25 12:22:47 compute-0 systemd-udevd[274359]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.974 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7659d7eb-51aa-496a-82ab-7f30481a0267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:47.979 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd7bf66-819a-48b4-87ec-2e0fe29c0608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:47 compute-0 NetworkManager[49836]: <info>  [1772022167.9883] device (tap7298ef49-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:22:47 compute-0 NetworkManager[49836]: <info>  [1772022167.9892] device (tap7298ef49-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.013 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[24e4e346-7540-4cb7-af6a-44ef1cc72235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ada6c820-c8d6-4aee-8b9e-6512834b1112]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274369, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.057 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50eded3b-8558-4a94-881d-15bb69306b43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274371, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274371, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
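The two privsep replies above are pyroute2-style netlink dumps taken inside the ovnmeta namespace: the tap interface carries both the subnet address 10.100.0.2/28 and the metadata VIP 169.254.169.254/32. Reading the same addresses directly with pyroute2 (namespace and interface names from the log; the snippet assumes pyroute2's NetNS API):

    from pyroute2 import NetNS

    ns = "ovnmeta-08121372-a435-401a-b405-778e10d8c2e2"
    with NetNS(ns) as ip:
        idx = ip.link_lookup(ifname="tap08121372-a1")[0]
        for addr in ip.get_addr(index=idx):
            attrs = dict(addr["attrs"])
            print(attrs["IFA_ADDRESS"], addr["prefixlen"])
    # -> 10.100.0.2 28  and  169.254.169.254 32 (the metadata service VIP)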
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.059 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.063 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.063 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.064 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:48.064 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:22:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/853690327' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:22:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/853690327' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.737 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022168.7368827, 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] VM Started (Lifecycle Event)
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.769 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.773 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022168.7370422, 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.773 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] VM Paused (Lifecycle Event)
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.793 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.797 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.824 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.830 244018 DEBUG nova.compute.manager [req-c3a75bfe-e5ca-4485-886b-21b4565e34c0 req-353f7dae-21a7-4326-8a18-132e760144af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.830 244018 DEBUG oslo_concurrency.lockutils [req-c3a75bfe-e5ca-4485-886b-21b4565e34c0 req-353f7dae-21a7-4326-8a18-132e760144af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.831 244018 DEBUG oslo_concurrency.lockutils [req-c3a75bfe-e5ca-4485-886b-21b4565e34c0 req-353f7dae-21a7-4326-8a18-132e760144af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.831 244018 DEBUG oslo_concurrency.lockutils [req-c3a75bfe-e5ca-4485-886b-21b4565e34c0 req-353f7dae-21a7-4326-8a18-132e760144af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.831 244018 DEBUG nova.compute.manager [req-c3a75bfe-e5ca-4485-886b-21b4565e34c0 req-353f7dae-21a7-4326-8a18-132e760144af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Processing event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.832 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.836 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022168.836185, 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.836 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] VM Resumed (Lifecycle Event)
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.839 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.842 244018 INFO nova.virt.libvirt.driver [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Instance spawned successfully.
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.843 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.856 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.860 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.863 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.863 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.863 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.864 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.864 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.864 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:22:48 compute-0 nova_compute[244014]: 2026-02-25 12:22:48.887 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:22:49 compute-0 nova_compute[244014]: 2026-02-25 12:22:49.253 244018 INFO nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Took 11.87 seconds to spawn the instance on the hypervisor.
Feb 25 12:22:49 compute-0 nova_compute[244014]: 2026-02-25 12:22:49.254 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:49 compute-0 ceph-mon[76335]: pgmap v1142: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Feb 25 12:22:49 compute-0 nova_compute[244014]: 2026-02-25 12:22:49.266 244018 DEBUG nova.virt.libvirt.driver [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:22:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:49 compute-0 nova_compute[244014]: 2026-02-25 12:22:49.383 244018 INFO nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Took 13.61 seconds to build instance.
Feb 25 12:22:49 compute-0 nova_compute[244014]: 2026-02-25 12:22:49.408 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 4.9 MiB/s wr, 135 op/s
Feb 25 12:22:50 compute-0 nova_compute[244014]: 2026-02-25 12:22:50.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 ceph-mon[76335]: pgmap v1143: 305 pgs: 305 active+clean; 358 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 4.9 MiB/s wr, 135 op/s
Feb 25 12:22:51 compute-0 kernel: tapfd320060-ea (unregistering): left promiscuous mode
Feb 25 12:22:51 compute-0 NetworkManager[49836]: <info>  [1772022171.4624] device (tapfd320060-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 ovn_controller[147040]: 2026-02-25T12:22:51Z|00248|binding|INFO|Releasing lport fd320060-eaf8-4fd7-9325-e3793617bc7b from this chassis (sb_readonly=0)
Feb 25 12:22:51 compute-0 ovn_controller[147040]: 2026-02-25T12:22:51Z|00249|binding|INFO|Setting lport fd320060-eaf8-4fd7-9325-e3793617bc7b down in Southbound
Feb 25 12:22:51 compute-0 ovn_controller[147040]: 2026-02-25T12:22:51Z|00250|binding|INFO|Removing iface tapfd320060-ea ovn-installed in OVS
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.513 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:73:07 10.100.0.12'], port_security=['fa:16:3e:76:73:07 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ffd5cedf-474c-4977-807e-22a276ceb002', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fd320060-eaf8-4fd7-9325-e3793617bc7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.515 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fd320060-eaf8-4fd7-9325-e3793617bc7b in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:22:51 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Feb 25 12:22:51 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 11.756s CPU time.
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.519 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:22:51 compute-0 systemd-machined[210048]: Machine qemu-34-instance-0000001e terminated.
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a236ff-a80c-4e94-b953-1fa4c7b72713]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.522 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.668 244018 DEBUG nova.compute.manager [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.669 244018 DEBUG oslo_concurrency.lockutils [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.670 244018 DEBUG oslo_concurrency.lockutils [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.671 244018 DEBUG oslo_concurrency.lockutils [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.671 244018 DEBUG nova.compute.manager [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.671 244018 WARNING nova.compute.manager [req-4492491c-e70b-48ae-8059-f710d0ee9c45 req-bc1436fd-6dde-4247-89de-3db4f42261de 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba for instance with vm_state active and task_state None.
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : haproxy version is 2.8.14-c23fe91
Feb 25 12:22:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : path to executable is /usr/sbin/haproxy
Feb 25 12:22:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [WARNING]  (273496) : Exiting Master process...
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [ALERT]    (273496) : Current worker (273498) exited with code 143 (Terminated)
Feb 25 12:22:51 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [WARNING]  (273496) : All workers exited. Exiting... (0)
Feb 25 12:22:51 compute-0 systemd[1]: libpod-f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899.scope: Deactivated successfully.
Feb 25 12:22:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Feb 25 12:22:51 compute-0 podman[274437]: 2026-02-25 12:22:51.706645507 +0000 UTC m=+0.067960934 container died f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899-userdata-shm.mount: Deactivated successfully.
Feb 25 12:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9334fe215ef094e8186d73e604d53efb736aee56ac5daec6868b6733a23f6a9f-merged.mount: Deactivated successfully.
Feb 25 12:22:51 compute-0 podman[274437]: 2026-02-25 12:22:51.741621542 +0000 UTC m=+0.102936989 container cleanup f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:22:51 compute-0 systemd[1]: libpod-conmon-f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899.scope: Deactivated successfully.
Feb 25 12:22:51 compute-0 podman[274473]: 2026-02-25 12:22:51.794992631 +0000 UTC m=+0.037737465 container remove f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.800 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[871d7a8a-2208-4be6-b551-61064cafb790]: (4, ('Wed Feb 25 12:22:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899)\nf7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899\nWed Feb 25 12:22:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899)\nf7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2340ccac-635f-4e4b-9d40-74afcd89a780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.803 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 12:22:51 compute-0 nova_compute[244014]: 2026-02-25 12:22:51.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5f6798-e17b-4562-a794-d38cdd161b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.837 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5030e25-9ef5-48ad-a8c2-0e184fa19098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.838 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05c66b70-2f91-4b56-9da5-e1b9e5dae845]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.852 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a208166b-9f9b-4ac9-b64d-831758554f9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411459, 'reachable_time': 33192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274492, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.854 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:22:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 12:22:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:51.854 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae5c138-94bd-41ec-94da-ca189cc2cdc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.283 244018 INFO nova.virt.libvirt.driver [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance shutdown successfully after 24 seconds.
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.289 244018 INFO nova.virt.libvirt.driver [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance destroyed successfully.
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.290 244018 DEBUG nova.objects.instance [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'numa_topology' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.311 244018 DEBUG nova.compute.manager [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.391 244018 DEBUG oslo_concurrency.lockutils [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:52 compute-0 nova_compute[244014]: 2026-02-25 12:22:52.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:53 compute-0 ceph-mon[76335]: pgmap v1144: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Feb 25 12:22:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Feb 25 12:22:53 compute-0 podman[274494]: 2026-02-25 12:22:53.727467184 +0000 UTC m=+0.064702421 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:53 compute-0 podman[274495]: 2026-02-25 12:22:53.760174035 +0000 UTC m=+0.094047567 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.820 244018 DEBUG nova.compute.manager [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.831 244018 DEBUG nova.compute.manager [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.832 244018 DEBUG nova.compute.manager [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.832 244018 DEBUG oslo_concurrency.lockutils [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.833 244018 DEBUG oslo_concurrency.lockutils [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.833 244018 DEBUG nova.network.neutron [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.893 244018 INFO nova.compute.manager [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] instance snapshotting
Feb 25 12:22:53 compute-0 nova_compute[244014]: 2026-02-25 12:22:53.894 244018 WARNING nova.compute.manager [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] trying to snapshot a non-running instance: (state: 4 expected: 1)
Feb 25 12:22:54 compute-0 nova_compute[244014]: 2026-02-25 12:22:54.098 244018 INFO nova.virt.libvirt.driver [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Beginning cold snapshot process
Feb 25 12:22:54 compute-0 nova_compute[244014]: 2026-02-25 12:22:54.247 244018 DEBUG nova.virt.libvirt.imagebackend [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:22:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:54 compute-0 nova_compute[244014]: 2026-02-25 12:22:54.399 244018 DEBUG nova.storage.rbd_utils [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(0474b0770f0f426098ca54b73d7bdbcf) on rbd image(ffd5cedf-474c-4977-807e-22a276ceb002_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:22:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb 25 12:22:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb 25 12:22:55 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb 25 12:22:55 compute-0 ceph-mon[76335]: pgmap v1145: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.353 244018 DEBUG nova.storage.rbd_utils [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/ffd5cedf-474c-4977-807e-22a276ceb002_disk@0474b0770f0f426098ca54b73d7bdbcf to images/22d95515-b8e4-4f1a-823a-591b5455e39f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.447 244018 DEBUG nova.storage.rbd_utils [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/22d95515-b8e4-4f1a-823a-591b5455e39f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:22:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 69 KiB/s wr, 93 op/s
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.721 244018 DEBUG nova.compute.manager [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-unplugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.721 244018 DEBUG oslo_concurrency.lockutils [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.722 244018 DEBUG oslo_concurrency.lockutils [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.722 244018 DEBUG oslo_concurrency.lockutils [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.723 244018 DEBUG nova.compute.manager [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] No waiting events found dispatching network-vif-unplugged-fd320060-eaf8-4fd7-9325-e3793617bc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.723 244018 WARNING nova.compute.manager [req-f89c5358-61d0-47f0-8076-e5e2e8762784 req-8f39debc-d910-419b-ac5a-390c323ac4f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received unexpected event network-vif-unplugged-fd320060-eaf8-4fd7-9325-e3793617bc7b for instance with vm_state stopped and task_state image_uploading.
Feb 25 12:22:55 compute-0 nova_compute[244014]: 2026-02-25 12:22:55.815 244018 DEBUG nova.storage.rbd_utils [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(0474b0770f0f426098ca54b73d7bdbcf) on rbd image(ffd5cedf-474c-4977-807e-22a276ceb002_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb 25 12:22:56 compute-0 ceph-mon[76335]: osdmap e154: 3 total, 3 up, 3 in
Feb 25 12:22:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb 25 12:22:56 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.355 244018 DEBUG nova.storage.rbd_utils [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(22d95515-b8e4-4f1a-823a-591b5455e39f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.395 244018 DEBUG nova.compute.manager [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-changed-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.396 244018 DEBUG nova.compute.manager [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing instance network info cache due to event network-changed-7298ef49-4582-410a-821d-bd43bc865fba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.396 244018 DEBUG oslo_concurrency.lockutils [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.397 244018 DEBUG oslo_concurrency.lockutils [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:56 compute-0 nova_compute[244014]: 2026-02-25 12:22:56.397 244018 DEBUG nova.network.neutron [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:57 compute-0 nova_compute[244014]: 2026-02-25 12:22:57.044 244018 DEBUG nova.network.neutron [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:57 compute-0 nova_compute[244014]: 2026-02-25 12:22:57.045 244018 DEBUG nova.network.neutron [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:57 compute-0 nova_compute[244014]: 2026-02-25 12:22:57.201 244018 DEBUG oslo_concurrency.lockutils [req-12c69947-bace-42ae-b6f4-fb1d0edcb5c2 req-38a2f993-5013-4493-83dc-25ec45357365 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb 25 12:22:57 compute-0 ceph-mon[76335]: pgmap v1147: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 69 KiB/s wr, 93 op/s
Feb 25 12:22:57 compute-0 ceph-mon[76335]: osdmap e155: 3 total, 3 up, 3 in
Feb 25 12:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb 25 12:22:57 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb 25 12:22:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 438 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 245 op/s
Feb 25 12:22:58 compute-0 ceph-mon[76335]: osdmap e156: 3 total, 3 up, 3 in
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.413 244018 DEBUG nova.network.neutron [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updated VIF entry in instance network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.414 244018 DEBUG nova.network.neutron [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.456 244018 DEBUG oslo_concurrency.lockutils [req-f7ca33f7-fa0f-4eb9-9288-33777a50fe79 req-616c58bd-6155-4086-9224-7ba69caa1d40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.591 244018 DEBUG nova.compute.manager [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.591 244018 DEBUG oslo_concurrency.lockutils [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.592 244018 DEBUG oslo_concurrency.lockutils [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.593 244018 DEBUG oslo_concurrency.lockutils [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.593 244018 DEBUG nova.compute.manager [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] No waiting events found dispatching network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.594 244018 WARNING nova.compute.manager [req-284ccf8f-9af3-4ea0-84d5-0d51b400db4e req-4a05596e-9214-4ce1-8382-8423d6a24639 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received unexpected event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b for instance with vm_state stopped and task_state image_uploading.
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.940 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:22:58 compute-0 nova_compute[244014]: 2026-02-25 12:22:58.941 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.006 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.119 244018 DEBUG nova.compute.manager [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-changed-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.120 244018 DEBUG nova.compute.manager [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing instance network info cache due to event network-changed-7298ef49-4582-410a-821d-bd43bc865fba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.120 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.120 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.121 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.124 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.124 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.125 244018 DEBUG nova.objects.instance [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:22:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:22:59 compute-0 ceph-mon[76335]: pgmap v1150: 305 pgs: 305 active+clean; 438 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 245 op/s
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.370 244018 INFO nova.virt.libvirt.driver [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Snapshot image upload complete
Feb 25 12:22:59 compute-0 nova_compute[244014]: 2026-02-25 12:22:59.371 244018 INFO nova.compute.manager [None req-a604c5c6-c5ed-483e-a12f-3076014a5237 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 5.47 seconds to snapshot the instance on the hypervisor.
Feb 25 12:22:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 438 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 139 op/s
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.101 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.102 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.135 244018 DEBUG nova.objects.instance [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.155 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.181 244018 DEBUG nova.network.neutron [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.256 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.257 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.268 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.269 244018 INFO nova.compute.claims [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.438 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.702 244018 DEBUG nova.policy [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.873 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updated VIF entry in instance network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.874 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.893 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.894 244018 DEBUG nova.compute.manager [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.895 244018 DEBUG nova.compute.manager [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.895 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.896 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:00 compute-0 nova_compute[244014]: 2026-02-25 12:23:00.897 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186707410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.045 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.052 244018 DEBUG nova.compute.provider_tree [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.071 244018 DEBUG nova.scheduler.client.report [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.112 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.114 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.191 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.192 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.221 244018 INFO nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.265 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:23:01 compute-0 ceph-mon[76335]: pgmap v1151: 305 pgs: 305 active+clean; 438 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 139 op/s
Feb 25 12:23:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3186707410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.395 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.399 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.400 244018 INFO nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Creating image(s)
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.435 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.473 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.507 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.512 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.547 244018 DEBUG nova.policy [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6663c48526ff47d0a45fdb4e651bcb0a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '892810ff108142de9ed0a316aeee727f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.592 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.593 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.594 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.594 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:01 compute-0 ovn_controller[147040]: 2026-02-25T12:23:01Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:e5:c6 10.100.0.3
Feb 25 12:23:01 compute-0 ovn_controller[147040]: 2026-02-25T12:23:01Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:e5:c6 10.100.0.3
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.625 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.629 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.655 244018 DEBUG nova.network.neutron [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Successfully updated port: dc98a932-5f79-4db0-8662-1a5f5de8adff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.676 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 449 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 8.5 MiB/s wr, 191 op/s
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.892 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:01 compute-0 nova_compute[244014]: 2026-02-25 12:23:01.977 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] resizing rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.080 244018 DEBUG nova.objects.instance [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lazy-loading 'migration_context' on Instance uuid a0fa8542-8f2c-4a70-be2d-02c4139e33aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.109 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.110 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Ensure instance console log exists: /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.110 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.111 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.111 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb 25 12:23:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb 25 12:23:02 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.568 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.569 244018 DEBUG nova.network.neutron [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.595 244018 DEBUG oslo_concurrency.lockutils [req-a05e3834-b2c0-4168-b130-e289e4c67209 req-3ea94663-ca27-43ac-9dcb-43cc9fac7bd2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.595 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.596 244018 DEBUG nova.network.neutron [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.649 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Successfully created port: e66ad0e5-f667-4662-b49b-13be01718e41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:23:02 compute-0 nova_compute[244014]: 2026-02-25 12:23:02.806 244018 WARNING nova.network.neutron [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:23:03 compute-0 ceph-mon[76335]: pgmap v1152: 305 pgs: 305 active+clean; 449 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 8.5 MiB/s wr, 191 op/s
Feb 25 12:23:03 compute-0 ceph-mon[76335]: osdmap e157: 3 total, 3 up, 3 in
Feb 25 12:23:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 490 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 8.2 MiB/s wr, 216 op/s
Feb 25 12:23:03 compute-0 nova_compute[244014]: 2026-02-25 12:23:03.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:03 compute-0 nova_compute[244014]: 2026-02-25 12:23:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:03 compute-0 nova_compute[244014]: 2026-02-25 12:23:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:03 compute-0 nova_compute[244014]: 2026-02-25 12:23:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb 25 12:23:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb 25 12:23:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb 25 12:23:04 compute-0 nova_compute[244014]: 2026-02-25 12:23:04.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.075 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Successfully updated port: e66ad0e5-f667-4662-b49b-13be01718e41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.103 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.103 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquired lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.103 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.123 244018 DEBUG nova.compute.manager [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.123 244018 DEBUG nova.compute.manager [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-dc98a932-5f79-4db0-8662-1a5f5de8adff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.124 244018 DEBUG oslo_concurrency.lockutils [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.241 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.286 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.286 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.286 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.287 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.287 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.288 244018 INFO nova.compute.manager [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Terminating instance
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.290 244018 DEBUG nova.compute.manager [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.298 244018 INFO nova.virt.libvirt.driver [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance destroyed successfully.
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.298 244018 DEBUG nova.objects.instance [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:05 compute-0 ceph-mon[76335]: pgmap v1154: 305 pgs: 305 active+clean; 490 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 8.2 MiB/s wr, 216 op/s
Feb 25 12:23:05 compute-0 ceph-mon[76335]: osdmap e158: 3 total, 3 up, 3 in
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.314 244018 DEBUG nova.virt.libvirt.vif [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-337821408',display_name='tempest-ImagesTestJSON-server-337821408',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-337821408',id=30,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-a00v4oxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:59Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=ffd5cedf-474c-4977-807e-22a276ceb002,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.315 244018 DEBUG nova.network.os_vif_util [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.316 244018 DEBUG nova.network.os_vif_util [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.316 244018 DEBUG os_vif [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.319 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd320060-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.321 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.326 244018 INFO os_vif [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea')
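
The unplug sequence logged above (convert the Nova VIF dict to an os-vif VIFOpenVSwitch, then hand it to the 'ovs' plugin) can be reproduced standalone with the public os-vif API. A minimal sketch, assuming os-vif and its OVS plugin are installed; the field values are copied from the log lines above, and the wiring is simplified relative to what nova_to_osvif_vif produces:

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the ovs/linux_bridge/noop plugins via stevedore

# Values copied from the log entries above.
port = vif.VIFOpenVSwitch(
    id='fd320060-eaf8-4fd7-9325-e3793617bc7b',
    address='fa:16:3e:76:73:07',
    vif_name='tapfd320060-ea',
    bridge_name='br-int',
    plugin='ovs',
    has_traffic_filtering=True,
    preserve_on_delete=False,
    network=network.Network(id='6a1663dd-25aa-4b0e-a1c0-42434d371a21'),
    port_profile=vif.VIFPortProfileOpenVSwitch(
        interface_id='fd320060-eaf8-4fd7-9325-e3793617bc7b'),
)
instance = instance_info.InstanceInfo(
    uuid='ffd5cedf-474c-4977-807e-22a276ceb002',
    name='tempest-ImagesTestJSON-server-337821408')

os_vif.unplug(port, instance)  # removes tapfd320060-ea from br-int
```

os_vif.plug() is the mirror-image call that appears on the plug side further down.
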
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.655 244018 INFO nova.virt.libvirt.driver [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Deleting instance files /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002_del
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.656 244018 INFO nova.virt.libvirt.driver [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Deletion of /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002_del complete
Feb 25 12:23:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 490 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 4.5 MiB/s wr, 143 op/s
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.713 244018 INFO nova.compute.manager [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 0.42 seconds to destroy the instance on the hypervisor.
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.714 244018 DEBUG oslo.service.loopingcall [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.714 244018 DEBUG nova.compute.manager [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.715 244018 DEBUG nova.network.neutron [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
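
The "Waiting for function ... _deallocate_network_with_retries" line above is oslo.service's looping-call machinery. The general pattern, sketched with a stand-in function and an invented three-attempt policy (Nova's real looping-call variant and retry budget differ):

```python
from oslo_service import loopingcall

attempts = {'count': 0}


def _deallocate_with_retries():
    # Stand-in for the real Neutron deallocation call; raising
    # LoopingCallDone stops the loop and hands a value back to wait().
    attempts['count'] += 1
    if attempts['count'] < 3:  # pretend the first two tries fail
        return                 # run again after `interval` seconds
    raise loopingcall.LoopingCallDone(retvalue=True)


timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
result = timer.start(interval=1).wait()  # blocks; True on the third try
```
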
Feb 25 12:23:05 compute-0 nova_compute[244014]: 2026-02-25 12:23:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
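
Interleaved with the teardown, the "Running periodic task" lines show oslo.service's periodic-task runner driving ComputeManager housekeeping. Registration is declarative; a sketch with a made-up class name and task body:

```python
from oslo_config import cfg
from oslo_service import periodic_task


class ManagerSketch(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=30)
    def _poll_rescued_instances(self, context):
        # Made-up body; the decorator alone registers the task.
        print('polling rescued instances')


mgr = ManagerSketch()
mgr.run_periodic_tasks(context=None)  # in Nova a timer calls this repeatedly
```
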
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.119 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.123 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.177 244018 DEBUG nova.compute.manager [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-changed-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.177 244018 DEBUG nova.compute.manager [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Refreshing instance network info cache due to event network-changed-e66ad0e5-f667-4662-b49b-13be01718e41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.178 244018 DEBUG oslo_concurrency.lockutils [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.414 244018 DEBUG nova.network.neutron [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.449 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.450 244018 DEBUG oslo_concurrency.lockutils [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.451 244018 DEBUG nova.network.neutron [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port dc98a932-5f79-4db0-8662-1a5f5de8adff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.455 244018 DEBUG nova.network.neutron [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.458 244018 DEBUG nova.virt.libvirt.vif [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.458 244018 DEBUG nova.network.os_vif_util [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.459 244018 DEBUG nova.network.os_vif_util [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.460 244018 DEBUG os_vif [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.461 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.462 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.467 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc98a932-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.468 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc98a932-5f, col_values=(('external_ids', {'iface-id': 'dc98a932-5f79-4db0-8662-1a5f5de8adff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:c3:28', 'vm-uuid': '33db1662-e67d-41de-b8d6-ea93b40cf7cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
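
The AddPortCommand/DbSetCommand pair above is a single ovsdbapp transaction against the local OVS database; the external_ids written on the Interface row are what let ovn-controller claim the port a few lines further down. Roughly the same transaction can be issued directly; a sketch in which the unix socket path is an assumption to adjust for the host:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# One atomic transaction: idempotently add the port, then tag the
# Interface row so OVN can map it back to the Neutron port.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapdc98a932-5f', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapdc98a932-5f',
        ('external_ids', {
            'iface-id': 'dc98a932-5f79-4db0-8662-1a5f5de8adff',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:a3:c3:28',
            'vm-uuid': '33db1662-e67d-41de-b8d6-ea93b40cf7cd'})))
```
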
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 NetworkManager[49836]: <info>  [1772022186.4724] manager: (tapdc98a932-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.480 244018 INFO os_vif [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f')
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.481 244018 DEBUG nova.virt.libvirt.vif [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.482 244018 DEBUG nova.network.os_vif_util [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.483 244018 DEBUG nova.network.os_vif_util [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.489 244018 INFO nova.compute.manager [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 0.77 seconds to deallocate network for instance.
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.497 244018 DEBUG nova.virt.libvirt.guest [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:06 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
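
The interface XML above is handed to libvirt as-is. Replaying the same hot-plug through the libvirt Python binding, with the XML and instance UUID copied from this request (needs access to the local libvirtd socket):

```python
import libvirt

IFACE_XML = """<interface type="ethernet">
  <mac address="fa:16:3e:a3:c3:28"/>
  <model type="virtio"/>
  <driver name="vhost" rx_queue_size="512"/>
  <mtu size="1442"/>
  <target dev="tapdc98a932-5f"/>
</interface>"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('33db1662-e67d-41de-b8d6-ea93b40cf7cd')
# AFFECT_LIVE hot-plugs into the running guest only; OR in
# VIR_DOMAIN_AFFECT_CONFIG to also persist across a domain restart.
dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```
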
Feb 25 12:23:06 compute-0 kernel: tapdc98a932-5f: entered promiscuous mode
Feb 25 12:23:06 compute-0 NetworkManager[49836]: <info>  [1772022186.5129] manager: (tapdc98a932-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 ovn_controller[147040]: 2026-02-25T12:23:06Z|00251|binding|INFO|Claiming lport dc98a932-5f79-4db0-8662-1a5f5de8adff for this chassis.
Feb 25 12:23:06 compute-0 ovn_controller[147040]: 2026-02-25T12:23:06Z|00252|binding|INFO|dc98a932-5f79-4db0-8662-1a5f5de8adff: Claiming fa:16:3e:a3:c3:28 10.100.0.13
Feb 25 12:23:06 compute-0 ovn_controller[147040]: 2026-02-25T12:23:06Z|00253|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff ovn-installed in OVS
Feb 25 12:23:06 compute-0 ovn_controller[147040]: 2026-02-25T12:23:06Z|00254|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff up in Southbound
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.531 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:c3:28 10.100.0.13'], port_security=['fa:16:3e:a3:c3:28 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33db1662-e67d-41de-b8d6-ea93b40cf7cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dc98a932-5f79-4db0-8662-1a5f5de8adff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
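
The metadata agent consumes these Southbound updates through ovsdbapp row events: a subclass declares the table and event types it watches, matches() (event.py:43 above) filters candidate rows, and run() handles the hit. A stripped-down sketch; Neutron's real event class adds chassis and port-type filtering that is omitted here:

```python
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdated(row_event.RowEvent):
    def __init__(self):
        # Watch UPDATE events on Port_Binding, with no column conditions.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # Invoked only for rows that pass matches(); a real handler would
        # check row.chassis and provision metadata for the datapath.
        print('Port_Binding %s updated' % row.logical_port)


# Registered on the Southbound IDL's notify handler, e.g.:
#   sb_idl.notify_handler.watch_event(PortBindingUpdated())
```
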
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.535 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dc98a932-5f79-4db0-8662-1a5f5de8adff in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.539 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.540 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.541 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
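
The compute_resources lock is oslo.concurrency's named-lock pattern; every resource-tracker mutation runs under it. The decorator form, with a placeholder body (Nova reaches lockutils through its own synchronized wrapper):

```python
from oslo_concurrency import lockutils


@lockutils.synchronized('compute_resources')
def update_usage_sketch(instance_uuid):
    # Runs under the process-local "compute_resources" semaphore, so
    # concurrent callers serialize exactly as the acquire/release
    # lines in the log show.
    print('updating usage for %s' % instance_uuid)


update_usage_sketch('ffd5cedf-474c-4977-807e-22a276ceb002')
```
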
Feb 25 12:23:06 compute-0 systemd-udevd[274896]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.558 244018 DEBUG nova.network.neutron [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updating instance_info_cache with network_info: [{"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.563 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb229d8-6991-4f29-8eae-5211fc533675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:06 compute-0 NetworkManager[49836]: <info>  [1772022186.5757] device (tapdc98a932-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:23:06 compute-0 NetworkManager[49836]: <info>  [1772022186.5770] device (tapdc98a932-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.585 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Releasing lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.585 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Instance network_info: |[{"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.587 244018 DEBUG oslo_concurrency.lockutils [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.587 244018 DEBUG nova.network.neutron [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Refreshing network info cache for port e66ad0e5-f667-4662-b49b-13be01718e41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.592 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Start _get_guest_xml network_info=[{"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.596 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a38d8ece-e164-428a-8893-3d8bf6a61dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.600 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3857a285-d267-44f6-a30a-ba94e235387c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.608 244018 WARNING nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.617 244018 DEBUG nova.virt.libvirt.host [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.618 244018 DEBUG nova.virt.libvirt.host [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.632 244018 DEBUG nova.virt.libvirt.host [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.633 244018 DEBUG nova.virt.libvirt.host [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.633 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.633 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.634 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.634 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.635 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.635 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.635 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.636 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.636 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.636 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.637 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.637 244018 DEBUG nova.virt.hardware [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
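
With every flavor and image preference at 0:0:0, nova.virt.hardware simply enumerates the sockets*cores*threads factorizations of the vCPU count under the logged 65536 limits; for 1 vCPU the only candidate is 1:1:1. The selection reduces to roughly this (a sketch, not the exact upstream walk):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Yield every (sockets, cores, threads) triple whose product equals
    # the vCPU count, within the per-dimension limits from the log.
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)


print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above
```
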
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.641 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7e31cea6-b534-4ba0-9d30-25bf7a441474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.642 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
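
That "Running cmd (subprocess)" line is oslo.concurrency's processutils wrapper shelling out for the Ceph monitor map. Called directly it looks like this (needs a reachable cluster and the openstack keyring on the host):

```python
import json

from oslo_concurrency import processutils

# execute() returns (stdout, stderr) and raises ProcessExecutionError
# on a non-zero exit code.
out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
print([mon['name'] for mon in mon_map['mons']])
```
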
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfb439c-8773-4855-932c-253a4f45a432]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274903, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.680 244018 DEBUG nova.virt.libvirt.driver [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.680 244018 DEBUG nova.virt.libvirt.driver [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.681 244018 DEBUG nova.virt.libvirt.driver [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:ac:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.681 244018 DEBUG nova.virt.libvirt.driver [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:a3:c3:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8a6c8-3ac7-403b-b988-237693351ec6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274905, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274905, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
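
Those privsep replies are netlink dumps taken inside the ovnmeta-<network> namespace the agent just provisioned: tap08121372-a1 carries the subnet's metadata address 10.100.0.2/28 plus the well-known 169.254.169.254/32. The agent goes through oslo.privsep for this; the same view can be taken directly with pyroute2 as root, sketched here assuming the namespace still exists:

```python
from pyroute2 import NetNS

# Namespace name copied from the privsep reply above.
with NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2') as ns:
    for msg in ns.get_addr():
        print(msg.get_attr('IFA_LABEL'),
              '%s/%d' % (msg.get_attr('IFA_ADDRESS'), msg['prefixlen']))
# Expected: tap08121372-a1 10.100.0.2/28 and 169.254.169.254/32.
```
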
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.689 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.690 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.690 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:06.691 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.702 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022171.701431, ffd5cedf-474c-4977-807e-22a276ceb002 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.702 244018 INFO nova.compute.manager [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Stopped (Lifecycle Event)
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.721 244018 DEBUG nova.virt.libvirt.guest [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:06</nova:creationTime>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 12:23:06 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:06 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:06 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:06 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:06 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:06 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.731 244018 DEBUG nova.compute.manager [None req-d0fca414-877c-4ffa-a420-382f4b528eb8 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.742 244018 DEBUG oslo_concurrency.processutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.768 244018 DEBUG oslo_concurrency.lockutils [None req-f08cc13e-6a19-485f-9d2d-81af3681a39b ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:06 compute-0 nova_compute[244014]: 2026-02-25 12:23:06.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859704308' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.180 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.223 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.229 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31496705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.258 244018 DEBUG oslo_concurrency.processutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.266 244018 DEBUG nova.compute.provider_tree [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.284 244018 DEBUG nova.scheduler.client.report [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:07 compute-0 ceph-mon[76335]: pgmap v1156: 305 pgs: 305 active+clean; 490 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 4.5 MiB/s wr, 143 op/s
Feb 25 12:23:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/859704308' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/31496705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.312 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.320 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.320 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.320 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.321 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.384 244018 INFO nova.scheduler.client.report [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance ffd5cedf-474c-4977-807e-22a276ceb002
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.483 244018 DEBUG oslo_concurrency.lockutils [None req-3cc603a7-479e-4ef7-ab60-ab96b58536b2 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 5.9 MiB/s wr, 240 op/s
Feb 25 12:23:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697086429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.739 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.740 244018 DEBUG nova.virt.libvirt.vif [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2085604757',display_name='tempest-ServersTestJSON-server-2085604757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2085604757',id=33,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOzqgxgCIo7l7lqm0S8dDx/L4vtjhysLouB0IheKsPVXTqKSUAZWo1Tzmr0uetVGx7aDsqQALOyBqtcoDD9ZYLfXt3j1TD852rnw7RJo5Dmtus4guS6p74HOa1FoQEOTg==',key_name='tempest-keypair-1324311740',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='892810ff108142de9ed0a316aeee727f',ramdisk_id='',reservation_id='r-hj81v0hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-409113905',owner_user_name='tempest-ServersTestJSON-409113905-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6663c48526ff47d0a45fdb4e651bcb0a',uuid=a0fa8542-8f2c-4a70-be2d-02c4139e33aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.741 244018 DEBUG nova.network.os_vif_util [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converting VIF {"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.742 244018 DEBUG nova.network.os_vif_util [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.744 244018 DEBUG nova.objects.instance [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lazy-loading 'pci_devices' on Instance uuid a0fa8542-8f2c-4a70-be2d-02c4139e33aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.759 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <uuid>a0fa8542-8f2c-4a70-be2d-02c4139e33aa</uuid>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <name>instance-00000021</name>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-2085604757</nova:name>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:23:06</nova:creationTime>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:user uuid="6663c48526ff47d0a45fdb4e651bcb0a">tempest-ServersTestJSON-409113905-project-member</nova:user>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:project uuid="892810ff108142de9ed0a316aeee727f">tempest-ServersTestJSON-409113905</nova:project>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <nova:port uuid="e66ad0e5-f667-4662-b49b-13be01718e41">
Feb 25 12:23:07 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="serial">a0fa8542-8f2c-4a70-be2d-02c4139e33aa</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="uuid">a0fa8542-8f2c-4a70-be2d-02c4139e33aa</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk">
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config">
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:2d:d5:7d"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <target dev="tape66ad0e5-f6"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/console.log" append="off"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:23:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:23:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.759 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Preparing to wait for external event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.759 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.760 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.760 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.761 244018 DEBUG nova.virt.libvirt.vif [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2085604757',display_name='tempest-ServersTestJSON-server-2085604757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2085604757',id=33,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOzqgxgCIo7l7lqm0S8dDx/L4vtjhysLouB0IheKsPVXTqKSUAZWo1Tzmr0uetVGx7aDsqQALOyBqtcoDD9ZYLfXt3j1TD852rnw7RJo5Dmtus4guS6p74HOa1FoQEOTg==',key_name='tempest-keypair-1324311740',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='892810ff108142de9ed0a316aeee727f',ramdisk_id='',reservation_id='r-hj81v0hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-409113905',owner_user_name='tempest-ServersTestJSON-409113905-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6663c48526ff47d0a45fdb4e651bcb0a',uuid=a0fa8542-8f2c-4a70-be2d-02c4139e33aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.761 244018 DEBUG nova.network.os_vif_util [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converting VIF {"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.762 244018 DEBUG nova.network.os_vif_util [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.762 244018 DEBUG os_vif [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.764 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.764 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.767 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape66ad0e5-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.768 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape66ad0e5-f6, col_values=(('external_ids', {'iface-id': 'e66ad0e5-f667-4662-b49b-13be01718e41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:d5:7d', 'vm-uuid': 'a0fa8542-8f2c-4a70-be2d-02c4139e33aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:07 compute-0 NetworkManager[49836]: <info>  [1772022187.7717] manager: (tape66ad0e5-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.775 244018 INFO os_vif [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6')
Feb 25 12:23:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755443277' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.844 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.845 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.845 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] No VIF found with MAC fa:16:3e:2d:d5:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.845 244018 INFO nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Using config drive
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.873 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.879 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.954 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.955 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.958 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.958 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.962 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:07 compute-0 nova_compute[244014]: 2026-02-25 12:23:07.962 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.186 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.188 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3839MB free_disk=59.841263801790774GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.188 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.189 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.266 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.266 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.267 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance a0fa8542-8f2c-4a70-be2d-02c4139e33aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.267 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.267 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:23:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1697086429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2755443277' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.346 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181502434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.909 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:08 compute-0 nova_compute[244014]: 2026-02-25 12:23:08.991 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.023 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.023 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:c3:28 10.100.0.13
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:c3:28 10.100.0.13
Feb 25 12:23:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb 25 12:23:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb 25 12:23:09 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb 25 12:23:09 compute-0 ceph-mon[76335]: pgmap v1157: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 5.9 MiB/s wr, 240 op/s
Feb 25 12:23:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1181502434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:09 compute-0 ceph-mon[76335]: osdmap e159: 3 total, 3 up, 3 in
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.520 244018 INFO nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Creating config drive at /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.528 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc2zcgjdt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.673 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc2zcgjdt" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.5 MiB/s wr, 104 op/s
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.751 244018 DEBUG nova.storage.rbd_utils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] rbd image a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.756 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.917 244018 DEBUG oslo_concurrency.processutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config a0fa8542-8f2c-4a70-be2d-02c4139e33aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.918 244018 INFO nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Deleting local config drive /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa/disk.config because it was imported into RBD.
Feb 25 12:23:09 compute-0 kernel: tape66ad0e5-f6: entered promiscuous mode
Feb 25 12:23:09 compute-0 NetworkManager[49836]: <info>  [1772022189.9795] manager: (tape66ad0e5-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00255|binding|INFO|Claiming lport e66ad0e5-f667-4662-b49b-13be01718e41 for this chassis.
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00256|binding|INFO|e66ad0e5-f667-4662-b49b-13be01718e41: Claiming fa:16:3e:2d:d5:7d 10.100.0.8
Feb 25 12:23:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:09.993 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:d5:7d 10.100.0.8'], port_security=['fa:16:3e:2d:d5:7d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a0fa8542-8f2c-4a70-be2d-02c4139e33aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '892810ff108142de9ed0a316aeee727f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46687f36-faa2-4f92-82ce-c79469270051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a003c918-adf1-4a59-b001-17237d07e18b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e66ad0e5-f667-4662-b49b-13be01718e41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:09.995 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e66ad0e5-f667-4662-b49b-13be01718e41 in datapath a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a bound to our chassis
Feb 25 12:23:09 compute-0 nova_compute[244014]: 2026-02-25 12:23:09.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:09.998 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00257|binding|INFO|Setting lport e66ad0e5-f667-4662-b49b-13be01718e41 ovn-installed in OVS
Feb 25 12:23:09 compute-0 ovn_controller[147040]: 2026-02-25T12:23:09Z|00258|binding|INFO|Setting lport e66ad0e5-f667-4662-b49b-13be01718e41 up in Southbound
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.014 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec08bde4-3236-487c-9489-12895046c185]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.015 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7340f0b-d1 in ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.017 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7340f0b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[af546aa2-c869-4456-bf55-2752b8332953]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.019 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c448fc3-9662-4ab0-97ed-84f5707f6ab0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 systemd-machined[210048]: New machine qemu-37-instance-00000021.
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.025 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.026 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.026 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.033 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[cdce06e8-dba2-4923-b5e9-b2c6c340789f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.054 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0561c6b-da41-4c79-b54d-3f5befbee134]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 systemd-udevd[275110]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:10 compute-0 NetworkManager[49836]: <info>  [1772022190.0740] device (tape66ad0e5-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:23:10 compute-0 NetworkManager[49836]: <info>  [1772022190.0757] device (tape66ad0e5-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.091 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c68876e2-c1d9-4e44-a9cd-0e3e0fddc53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 systemd-udevd[275114]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.098 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1dae0c-0c0f-4f99-9f66-ad8e193a0415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 NetworkManager[49836]: <info>  [1772022190.0998] manager: (tapa7340f0b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.132 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[516fac41-2197-4952-9081-bcb528896dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.136 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[082ad13b-c42f-4ad6-a3ef-4e661ff343e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 NetworkManager[49836]: <info>  [1772022190.1632] device (tapa7340f0b-d0): carrier: link connected
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.171 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[11045cd5-753a-4595-919e-7bd4ea15ce64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.193 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7abf70c4-bc91-4197-b96e-4ab2f66dfef3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7340f0b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:c9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415957, 'reachable_time': 36313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275140, 'error': None, 'target': 'ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.216 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca551f1-13c1-43b6-b5f6-7d0c46f9c4f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:c910'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 415957, 'tstamp': 415957}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275141, 'error': None, 'target': 'ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.234 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fdf59b-4c41-4629-9fdb-3d64209ed9e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7340f0b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:c9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415957, 'reachable_time': 36313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275142, 'error': None, 'target': 'ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.271 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb83f07d-d6b3-40ea-9e5a-1ef74dae01de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.330 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8e9b48-1b60-45a4-844d-ac93de139bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.332 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7340f0b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.332 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.333 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7340f0b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:10 compute-0 kernel: tapa7340f0b-d0: entered promiscuous mode
Feb 25 12:23:10 compute-0 NetworkManager[49836]: <info>  [1772022190.3365] manager: (tapa7340f0b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.339 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7340f0b-d0, col_values=(('external_ids', {'iface-id': '4d9e80e0-04af-4da1-a83a-8178307c4d12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:10 compute-0 ovn_controller[147040]: 2026-02-25T12:23:10Z|00259|binding|INFO|Releasing lport 4d9e80e0-04af-4da1-a83a-8178307c4d12 from this chassis (sb_readonly=0)
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.343 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb29f299-6df2-49f7-89a7-d968a3037169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.350 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a.pid.haproxy
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:23:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:10.351 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'env', 'PROCESS_TAG=haproxy-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:23:10 compute-0 podman[275214]: 2026-02-25 12:23:10.699274206 +0000 UTC m=+0.053763861 container create d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.714 244018 DEBUG nova.compute.manager [req-662892c4-910c-4322-a900-9a1f743e0a31 req-f13d1484-9c94-4489-891a-79a5e1d8c382 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-deleted-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.715 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022190.7151673, a0fa8542-8f2c-4a70-be2d-02c4139e33aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] VM Started (Lifecycle Event)
Feb 25 12:23:10 compute-0 systemd[1]: Started libpod-conmon-d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d.scope.
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.753 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.759 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022190.7153451, a0fa8542-8f2c-4a70-be2d-02c4139e33aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.760 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] VM Paused (Lifecycle Event)
Feb 25 12:23:10 compute-0 podman[275214]: 2026-02-25 12:23:10.673251255 +0000 UTC m=+0.027740930 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:23:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1984c0fe71000529616af06823881f6d1e83c47268daad7720b31014f79041c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.782 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.788 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:10 compute-0 podman[275214]: 2026-02-25 12:23:10.794345941 +0000 UTC m=+0.148835686 container init d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:23:10 compute-0 podman[275214]: 2026-02-25 12:23:10.801761982 +0000 UTC m=+0.156251667 container start d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:23:10 compute-0 nova_compute[244014]: 2026-02-25 12:23:10.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:23:10 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [NOTICE]   (275235) : New worker (275237) forked
Feb 25 12:23:10 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [NOTICE]   (275235) : Loading success.
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.330 244018 DEBUG nova.network.neutron [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port dc98a932-5f79-4db0-8662-1a5f5de8adff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.331 244018 DEBUG nova.network.neutron [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:11 compute-0 ceph-mon[76335]: pgmap v1159: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.5 MiB/s wr, 104 op/s
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.368 244018 DEBUG oslo_concurrency.lockutils [req-3e37ef4b-2f85-4afa-a5d9-a9c2f1664d84 req-49771774-e5f8-4142-90c3-590be5435a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.423 244018 DEBUG nova.network.neutron [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updated VIF entry in instance network info cache for port e66ad0e5-f667-4662-b49b-13be01718e41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.423 244018 DEBUG nova.network.neutron [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updating instance_info_cache with network_info: [{"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:11 compute-0 nova_compute[244014]: 2026-02-25 12:23:11.442 244018 DEBUG oslo_concurrency.lockutils [req-0dfe3a46-de4c-4202-b3b0-a020ddd97858 req-99bf3054-f389-43d0-80e5-746953e31e3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.4 MiB/s wr, 100 op/s
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.062 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.063 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.078 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.079 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.084 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.107 244018 DEBUG nova.objects.instance [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.130 244018 DEBUG nova.virt.libvirt.vif [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.131 244018 DEBUG nova.network.os_vif_util [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.131 244018 DEBUG nova.network.os_vif_util [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.135 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.138 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.141 244018 DEBUG nova.virt.libvirt.driver [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tapdc98a932-5f from instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.141 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:12 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.152 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.156 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> not found in domain: <domain type='kvm' id='35'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <name>instance-0000001f</name>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <uuid>33db1662-e67d-41de-b8d6-ea93b40cf7cd</uuid>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:06</nova:creationTime>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='serial'>33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='uuid'>33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk' index='2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config' index='1'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:ac:51'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='tap67db480d-6a'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:a3:c3:28'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='tapdc98a932-5f'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log' append='off'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </target>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log' append='off'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </console>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c108,c498</label>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c108,c498</imagelabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:12 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
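[The lookup that produced the "not found in domain" result ending above compares the requested interface config against each <interface> element of the dumped domain XML; not finding a match here is what lets the driver declare the persistent detach successful below. A simplified stand-in for that comparison, matching only on MAC address and target dev (an assumption; Nova compares the full config object), could be:]

# Simplified stand-in for get_interface_by_cfg (assumption: matching on
# MAC address and target dev only; Nova's real comparison is richer).
import xml.etree.ElementTree as ET

def find_interface(domain_xml, mac, tap):
    root = ET.fromstring(domain_xml)
    for iface in root.findall('./devices/interface'):
        iface_mac = iface.find('mac')
        iface_tap = iface.find('target')
        if (iface_mac is not None and iface_mac.get('address') == mac and
                iface_tap is not None and iface_tap.get('dev') == tap):
            return iface
    return None  # mirrors the "not found in domain" debug line above

# e.g. find_interface(dom.XMLDesc(0), 'fa:16:3e:a3:c3:28', 'tapdc98a932-5f')
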
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.158 244018 INFO nova.virt.libvirt.driver [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tapdc98a932-5f from instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd from the persistent domain config.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.158 244018 DEBUG nova.virt.libvirt.driver [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tapdc98a932-5f with device alias net1 from instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.159 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:12 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
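[The "(1/8)" counter in the live-detach message above reflects a bounded retry loop: a live detach is asynchronous, so each attempt re-sends the detach request and then waits for libvirt's device-removed notification before retrying or giving up. A hedged sketch of that pattern follows; the 8-attempt bound comes from the log, while the 20-second per-attempt timeout and the threading.Event handoff are assumptions:]

# Sketch of a detach-with-retry loop (assumptions: 8 attempts per the
# "(1/8)" counter above, a 20s per-attempt wait, and a threading.Event
# that a libvirt device-removed callback would set).
import threading
import libvirt

def detach_live_with_retry(dom, device_xml, removed_evt, attempts=8, timeout=20):
    for attempt in range(1, attempts + 1):
        try:
            dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        except libvirt.libvirtError:
            # The device may already be gone if an earlier attempt completed.
            if removed_evt.is_set():
                return
            raise
        # The live detach completes asynchronously; wait for the
        # device-removed notification before declaring success.
        if removed_evt.wait(timeout):
            return
    raise TimeoutError('device was not removed after %d attempts' % attempts)
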
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.172 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.173 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
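[The Acquiring/acquired pair of lockutils lines above is oslo.concurrency's named internal lock serializing resource claims on this host. The same pattern in application code looks like the sketch below; the function body is a placeholder:]

# The "compute_resources" lock seen above, reproduced with
# oslo.concurrency's public API (the claim body is a placeholder).
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def instance_claim():
    # Only one thread at a time can audit and claim host resources.
    pass

# Equivalent context-manager form:
# with lockutils.lock('compute_resources'):
#     ...
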
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.180 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.181 244018 INFO nova.compute.claims [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:23:12 compute-0 kernel: tapdc98a932-5f (unregistering): left promiscuous mode
Feb 25 12:23:12 compute-0 NetworkManager[49836]: <info>  [1772022192.2713] device (tapdc98a932-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:12 compute-0 ovn_controller[147040]: 2026-02-25T12:23:12Z|00260|binding|INFO|Releasing lport dc98a932-5f79-4db0-8662-1a5f5de8adff from this chassis (sb_readonly=0)
Feb 25 12:23:12 compute-0 ovn_controller[147040]: 2026-02-25T12:23:12Z|00261|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff down in Southbound
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 ovn_controller[147040]: 2026-02-25T12:23:12Z|00262|binding|INFO|Removing iface tapdc98a932-5f ovn-installed in OVS
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.285 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022192.2855153, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.289 244018 DEBUG nova.virt.libvirt.driver [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tapdc98a932-5f with device alias net1 for instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
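[The DeviceRemovedEvent dispatched at 12:23:12.285 and the wait started above are the two halves of libvirt's asynchronous device-removal protocol. A minimal sketch of receiving that event with libvirt-python follows; the callback body and connection URI are assumptions:]

# Sketch only: subscribe to libvirt's device-removed events, like the
# DeviceRemovedEvent for alias net1 dispatched in this log.
import libvirt

def on_device_removed(conn, dom, dev_alias, opaque):
    # dev_alias would be 'net1' for the detach traced here.
    print('device removed: %s from %s' % (dev_alias, dom.UUIDString()))

libvirt.virEventRegisterDefaultImpl()  # must precede open()
conn = libvirt.open('qemu:///system')  # URI is an assumption
conn.domainEventRegisterAny(
    None,  # None subscribes to events from every domain
    libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
    on_device_removed,
    None)
# A dedicated thread must pump libvirt.virEventRunDefaultImpl() in a loop
# for the callback to fire.
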
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.290 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.290 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:c3:28 10.100.0.13'], port_security=['fa:16:3e:a3:c3:28 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33db1662-e67d-41de-b8d6-ea93b40cf7cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dc98a932-5f79-4db0-8662-1a5f5de8adff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.292 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dc98a932-5f79-4db0-8662-1a5f5de8adff in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.294 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
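[The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry at 12:23:12.290 shows ovsdbapp's row-event machinery at work; the event's constructor arguments (events=('update',), table='Port_Binding', conditions=None) are printed verbatim in that log line. A minimal subclass in the same shape is sketched below; the chassis-change test in match_fn is an assumption about what the agent keys on:]

# Minimal RowEvent subclass in the shape the log reports:
# events=('update',), table='Port_Binding', conditions=None.
# The chassis-change check below is an assumption, not the agent's code.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the binding's chassis column actually changed,
        # as in the bind/unbind transitions traced in this log.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        print('port binding updated: %s' % row.logical_port)
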
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.296 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> not found in domain: <domain type='kvm' id='35'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <name>instance-0000001f</name>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <uuid>33db1662-e67d-41de-b8d6-ea93b40cf7cd</uuid>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:06</nova:creationTime>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='serial'>33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='uuid'>33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk' index='2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config' index='1'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:4a:ac:51'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target dev='tap67db480d-6a'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log' append='off'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       </target>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log' append='off'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </console>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c108,c498</label>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c108,c498</imagelabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:12 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.297 244018 INFO nova.virt.libvirt.driver [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tapdc98a932-5f from instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd from the live domain config.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.299 244018 DEBUG nova.virt.libvirt.vif [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.300 244018 DEBUG nova.network.os_vif_util [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.301 244018 DEBUG nova.network.os_vif_util [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.301 244018 DEBUG os_vif [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.305 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc98a932-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.311 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c520dfb0-1065-4c51-955f-149c95546c06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.319 244018 INFO os_vif [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f')
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.320 244018 DEBUG nova.virt.libvirt.guest [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:12</nova:creationTime>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 12:23:12 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:23:12 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:12 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:12 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:12 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.343 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46e5f5cc-404a-4229-9051-0b935f79e4bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.347 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c57eae5d-70bc-4171-bb47-c3774daec5f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.375 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[033e5c2e-7b6f-4f92-9fdd-902577c2ab17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[125f6298-1e34-47b5-aa9d-3430cb7a9c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275255, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.400 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc283bd-57ea-4c0b-9fa9-419da6892d79]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275256, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275256, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.415 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:12.419 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.822 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.822 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.822 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.823 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.823 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.823 244018 WARNING nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.824 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.824 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.824 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.824 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.825 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.825 244018 WARNING nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.825 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.825 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.826 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.826 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.826 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Processing event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.827 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.827 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.827 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.827 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.828 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] No waiting events found dispatching network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.828 244018 WARNING nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received unexpected event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 for instance with vm_state building and task_state spawning.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.828 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.828 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.828 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.829 244018 DEBUG oslo_concurrency.lockutils [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.829 244018 DEBUG nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.829 244018 WARNING nova.compute.manager [req-54b98f01-5017-437f-be69-2e345432ea0e req-159b5ac4-d98e-485c-80a7-110988464b81 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:12 compute-0 nova_compute[244014]: 2026-02-25 12:23:12.830 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:23:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572423871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 359 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.2 MiB/s wr, 95 op/s
Feb 25 12:23:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:14.126 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:14 compute-0 ceph-mon[76335]: pgmap v1160: 305 pgs: 305 active+clean; 358 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.4 MiB/s wr, 100 op/s
Feb 25 12:23:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2572423871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.557 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.558 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022194.5524824, a0fa8542-8f2c-4a70-be2d-02c4139e33aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.558 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] VM Resumed (Lifecycle Event)
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.566 244018 INFO nova.virt.libvirt.driver [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Instance spawned successfully.
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.566 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.578 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.583 244018 DEBUG nova.compute.provider_tree [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.589 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.596 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.601 244018 DEBUG nova.scheduler.client.report [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.608 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.609 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.610 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.611 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.611 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.612 244018 DEBUG nova.virt.libvirt.driver [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.620 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.664 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.665 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.696 244018 INFO nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Took 13.30 seconds to spawn the instance on the hypervisor.
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.696 244018 DEBUG nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.761 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.761 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.812 244018 INFO nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.818 244018 INFO nova.compute.manager [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Took 14.59 seconds to build instance.
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.841 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.846 244018 DEBUG oslo_concurrency.lockutils [None req-706ca700-8aa8-4d5e-a6e5-03a4b648fe4a 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.975 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.977 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:23:14 compute-0 nova_compute[244014]: 2026-02-25 12:23:14.978 244018 INFO nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Creating image(s)
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.011 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.048 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.081 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.087 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.116 244018 DEBUG nova.policy [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
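
The failed check above is a normal outcome for a non-admin user; a hedged sketch of how such a rule is evaluated with oslo.policy (nova wraps this in nova.policy.authorize; the rule name and credential fields below are taken from the log line, everything else is simplified):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # rules load from the policy files
    creds = {'user_id': 'f1cfc3e643014e1c984d71182534fd24',
             'project_id': '851e1817495944c1ad7ac421ab226d13',
             'roles': ['reader', 'member']}
    # do_raise=False returns a boolean instead of raising PolicyNotAuthorized;
    # for these reader/member credentials it evaluates False, as logged.
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds, do_raise=False)
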
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.129 244018 DEBUG nova.compute.manager [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.129 244018 DEBUG oslo_concurrency.lockutils [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.130 244018 DEBUG oslo_concurrency.lockutils [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.131 244018 DEBUG oslo_concurrency.lockutils [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.131 244018 DEBUG nova.compute.manager [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.131 244018 WARNING nova.compute.manager [req-8fc954b9-c9c1-446d-935c-fb4915691b47 req-50ef0861-dd9a-423d-993e-b37a04f9d447 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.174 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
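
The prlimit wrapper in that command caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds) before qemu-img runs; a stripped-down sketch of the same image inspection without the wrapper (path copied from the log):

    import json
    import subprocess

    out = subprocess.check_output(
        ['qemu-img', 'info',
         '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
         '--force-share', '--output=json'])
    info = json.loads(out)
    # e.g. info['format'] == 'qcow2', info['virtual-size'] in bytes
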
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.175 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.176 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.177 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
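
The acquire/release pairs logged above come from oslo.concurrency's named locks; a minimal sketch of the pattern (the lock name is the image-cache key from the log; fetch_image_if_missing is a hypothetical stand-in for the work being serialized):

    from oslo_concurrency import lockutils

    # The '... acquired by ... :: waited' / '... "released" by ... :: held'
    # debug lines above are emitted by this decorator's wrapper;
    # lockutils.lock(name) is the equivalent context-manager form.
    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_func_sync():
        fetch_image_if_missing()  # hypothetical: fetch only if not cached
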
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.203 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.207 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cc067b57-66e5-499d-ac85-a81e91c1c978_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.437 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cc067b57-66e5-499d-ac85-a81e91c1c978_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.503 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
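
A sketch of that resize through the python-rados/python-rbd bindings, which nova.storage.rbd_utils drives internally (pool, client id, image name and target size are all taken from the log lines; simplified, with no error handling):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # the pool used by "rbd import" above
        try:
            with rbd.Image(ioctx, 'cc067b57-66e5-499d-ac85-a81e91c1c978_disk') as img:
                img.resize(1073741824)  # grow to 1 GiB, matching the logged target
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
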
Feb 25 12:23:15 compute-0 ceph-mon[76335]: pgmap v1161: 305 pgs: 305 active+clean; 359 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.2 MiB/s wr, 95 op/s
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.575 244018 DEBUG nova.objects.instance [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid cc067b57-66e5-499d-ac85-a81e91c1c978 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.594 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.594 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Ensure instance console log exists: /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.594 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.595 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:15 compute-0 nova_compute[244014]: 2026-02-25 12:23:15.595 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 359 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Feb 25 12:23:16 compute-0 nova_compute[244014]: 2026-02-25 12:23:16.089 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:16 compute-0 nova_compute[244014]: 2026-02-25 12:23:16.090 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:16 compute-0 nova_compute[244014]: 2026-02-25 12:23:16.090 244018 DEBUG nova.network.neutron [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:16 compute-0 nova_compute[244014]: 2026-02-25 12:23:16.401 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Successfully created port: 57720a37-b288-4d33-9b7f-4493075cd82e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:23:17 compute-0 nova_compute[244014]: 2026-02-25 12:23:17.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:17 compute-0 nova_compute[244014]: 2026-02-25 12:23:17.548 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Successfully updated port: 57720a37-b288-4d33-9b7f-4493075cd82e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:23:17 compute-0 ceph-mon[76335]: pgmap v1162: 305 pgs: 305 active+clean; 359 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Feb 25 12:23:17 compute-0 nova_compute[244014]: 2026-02-25 12:23:17.577 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:17 compute-0 nova_compute[244014]: 2026-02-25 12:23:17.578 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:17 compute-0 nova_compute[244014]: 2026-02-25 12:23:17.578 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.009 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.092 244018 INFO nova.network.neutron [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Port dc98a932-5f79-4db0-8662-1a5f5de8adff from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.093 244018 DEBUG nova.network.neutron [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.125 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.149 244018 DEBUG oslo_concurrency.lockutils [None req-8d5958a8-90d8-4310-878f-85b202804f03 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-33db1662-e67d-41de-b8d6-ea93b40cf7cd-dc98a932-5f79-4db0-8662-1a5f5de8adff" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.161 244018 DEBUG nova.compute.manager [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-changed-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.162 244018 DEBUG nova.compute.manager [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Refreshing instance network info cache due to event network-changed-57720a37-b288-4d33-9b7f-4493075cd82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.162 244018 DEBUG oslo_concurrency.lockutils [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.664 244018 DEBUG nova.compute.manager [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-changed-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.665 244018 DEBUG nova.compute.manager [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Refreshing instance network info cache due to event network-changed-e66ad0e5-f667-4662-b49b-13be01718e41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.665 244018 DEBUG oslo_concurrency.lockutils [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.666 244018 DEBUG oslo_concurrency.lockutils [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:18 compute-0 nova_compute[244014]: 2026-02-25 12:23:18.666 244018 DEBUG nova.network.neutron [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Refreshing network info cache for port e66ad0e5-f667-4662-b49b-13be01718e41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:19 compute-0 ceph-mon[76335]: pgmap v1163: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb 25 12:23:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.847 244018 DEBUG nova.network.neutron [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Updating instance_info_cache with network_info: [{"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.881 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.881 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Instance network_info: |[{"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.882 244018 DEBUG oslo_concurrency.lockutils [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.883 244018 DEBUG nova.network.neutron [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Refreshing network info cache for port 57720a37-b288-4d33-9b7f-4493075cd82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.888 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Start _get_guest_xml network_info=[{"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.893 244018 WARNING nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.903 244018 DEBUG nova.virt.libvirt.host [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.904 244018 DEBUG nova.virt.libvirt.host [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.913 244018 DEBUG nova.virt.libvirt.host [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.914 244018 DEBUG nova.virt.libvirt.host [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.915 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.915 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.916 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.917 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.917 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.918 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.918 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.919 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.920 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.920 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.921 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.921 244018 DEBUG nova.virt.hardware [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
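
A simplified illustration of the topology search traced above: with no flavor or image constraints, every (sockets, cores, threads) factorisation of the vCPU count is a candidate, so a 1-vCPU guest has exactly one, (1, 1, 1). This is not nova's actual implementation, which also applies the maxima and preference ordering logged above:

    def possible_topologies(vcpus):
        # Enumerate factorisations with sockets * cores * threads == vcpus.
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
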
Feb 25 12:23:19 compute-0 nova_compute[244014]: 2026-02-25 12:23:19.926 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882900161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.503 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.532 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.537 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1882900161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.743 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.745 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:20 compute-0 nova_compute[244014]: 2026-02-25 12:23:20.745 244018 DEBUG nova.objects.instance [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741568469' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.121 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.123 244018 DEBUG nova.virt.libvirt.vif [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1006495350',display_name='tempest-ImagesTestJSON-server-1006495350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1006495350',id=34,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-9a0rzzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:14Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=cc067b57-66e5-499d-ac85-a81e91c1c978,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.124 244018 DEBUG nova.network.os_vif_util [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.125 244018 DEBUG nova.network.os_vif_util [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.127 244018 DEBUG nova.objects.instance [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc067b57-66e5-499d-ac85-a81e91c1c978 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
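
The driver now emits the finished guest XML (the <domain> dump that follows); a minimal sketch of how such a document could be defined and booted with the libvirt python bindings, assuming `xml` holds the dumped text (nova actually goes through its own Host/Guest wrappers rather than these raw calls):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # register the persistent domain definition
        dom.create()               # boot the defined, inactive domain
    finally:
        conn.close()
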
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.162 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <uuid>cc067b57-66e5-499d-ac85-a81e91c1c978</uuid>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <name>instance-00000022</name>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-1006495350</nova:name>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:23:19</nova:creationTime>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <nova:port uuid="57720a37-b288-4d33-9b7f-4493075cd82e">
Feb 25 12:23:21 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="serial">cc067b57-66e5-499d-ac85-a81e91c1c978</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="uuid">cc067b57-66e5-499d-ac85-a81e91c1c978</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/cc067b57-66e5-499d-ac85-a81e91c1c978_disk">
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config">
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:33:0b:0f"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <target dev="tap57720a37-b2"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/console.log" append="off"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:23:21 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:23:21 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:21 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:21 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:21 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
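[Note] The block above is the complete libvirt domain definition nova rendered for instance-00000022: 128 MiB of RAM and one vCPU (flavor m1.nano), a raw RBD root disk plus an RBD-backed config-drive CD-ROM served from the Ceph monitor at 192.168.122.100:6789, and a virtio NIC bound to tap57720a37-b2 with MTU 1442. A minimal sketch for pulling the same XML back out of libvirt, assuming libvirt-python on the compute host:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000022')  # <name> from the XML above
        print(dom.UUIDString())  # cc067b57-66e5-499d-ac85-a81e91c1c978
        print(dom.XMLDesc(0))    # the live definition, as rendered by nova
    finally:
        conn.close()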
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.171 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Preparing to wait for external event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.172 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.172 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.173 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
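[Note] The three lines above register a waiter for network-vif-plugged-57720a37-... before the VIF is actually plugged; the per-instance "-events" lock guards the event table so a Neutron notification arriving mid-plug cannot race the registration. The pattern, reduced to plain Python as an illustration (this is not nova's implementation):

    import threading

    events, lock = {}, threading.Lock()

    def prepare_for_event(name):
        # Register first, under the lock, so a completion signal that
        # arrives while we are still plugging is never lost.
        with lock:
            return events.setdefault(name, threading.Event())

    ev = prepare_for_event(
        'network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e')
    # ... plug the VIF and start the guest, then block on
    # ev.wait(timeout) until the callback handling the Neutron
    # event calls ev.set().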
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.174 244018 DEBUG nova.virt.libvirt.vif [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1006495350',display_name='tempest-ImagesTestJSON-server-1006495350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1006495350',id=34,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-9a0rzzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:14Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=cc067b57-66e5-499d-ac85-a81e91c1c978,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.175 244018 DEBUG nova.network.os_vif_util [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.176 244018 DEBUG nova.network.os_vif_util [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.177 244018 DEBUG os_vif [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.178 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.179 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.183 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57720a37-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.184 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57720a37-b2, col_values=(('external_ids', {'iface-id': '57720a37-b288-4d33-9b7f-4493075cd82e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:0b:0f', 'vm-uuid': 'cc067b57-66e5-499d-ac85-a81e91c1c978'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:21 compute-0 NetworkManager[49836]: <info>  [1772022201.1865] manager: (tap57720a37-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.192 244018 INFO os_vif [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2')
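[Note] Plugging the VIF amounts to the two OVSDB transactions logged above: an idempotent add-port on br-int, then setting external_ids on the Interface row so ovn-controller can match iface-id to the logical port. A minimal ovsdbapp sketch of the same transaction; the socket path and exact helper layout are assumptions, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirrors AddPortCommand(may_exist=True) and DbSetCommand above.
        txn.add(api.add_port('br-int', 'tap57720a37-b2', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap57720a37-b2',
            ('external_ids', {
                'iface-id': '57720a37-b288-4d33-9b7f-4493075cd82e',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:33:0b:0f',
                'vm-uuid': 'cc067b57-66e5-499d-ac85-a81e91c1c978'})))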
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.268 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.269 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.269 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:33:0b:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.270 244018 INFO nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Using config drive
Feb 25 12:23:21 compute-0 nova_compute[244014]: 2026-02-25 12:23:21.298 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:21 compute-0 ceph-mon[76335]: pgmap v1164: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Feb 25 12:23:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2741568469' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.050 244018 INFO nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Creating config drive at /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.057 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptdte5jls execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.192 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptdte5jls" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.225 244018 DEBUG nova.storage.rbd_utils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.229 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.368 244018 DEBUG oslo_concurrency.processutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.369 244018 INFO nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Deleting local config drive /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config because it was imported into RBD.
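[Note] Because this deployment stores instance disks in Ceph, the config drive follows the same path: build the ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A sketch of the two commands logged above using oslo.concurrency; the arguments are copied verbatim from the log, including the transient /tmp staging directory mkisofs consumed:

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           'cc067b57-66e5-499d-ac85-a81e91c1c978/disk.config')
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmptdte5jls')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        'cc067b57-66e5-499d-ac85-a81e91c1c978_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')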
Feb 25 12:23:22 compute-0 kernel: tap57720a37-b2: entered promiscuous mode
Feb 25 12:23:22 compute-0 ovn_controller[147040]: 2026-02-25T12:23:22Z|00263|binding|INFO|Claiming lport 57720a37-b288-4d33-9b7f-4493075cd82e for this chassis.
Feb 25 12:23:22 compute-0 ovn_controller[147040]: 2026-02-25T12:23:22Z|00264|binding|INFO|57720a37-b288-4d33-9b7f-4493075cd82e: Claiming fa:16:3e:33:0b:0f 10.100.0.13
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.4219] manager: (tap57720a37-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:22 compute-0 ovn_controller[147040]: 2026-02-25T12:23:22Z|00265|binding|INFO|Setting lport 57720a37-b288-4d33-9b7f-4493075cd82e ovn-installed in OVS
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.426 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.437 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:0f 10.100.0.13'], port_security=['fa:16:3e:33:0b:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc067b57-66e5-499d-ac85-a81e91c1c978', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=57720a37-b288-4d33-9b7f-4493075cd82e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:22 compute-0 ovn_controller[147040]: 2026-02-25T12:23:22Z|00266|binding|INFO|Setting lport 57720a37-b288-4d33-9b7f-4493075cd82e up in Southbound
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.438 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 57720a37-b288-4d33-9b7f-4493075cd82e in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.440 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
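[Note] At this point ovn-controller has claimed the logical port for this chassis and the metadata agent begins provisioning the per-network namespace. One way to confirm the binding from the OVN southbound database, sketched via subprocess under the assumption that ovn-sbctl on this host can reach the SB DB:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=57720a37-b288-4d33-9b7f-4493075cd82e'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis set to compute-0 and up=[true] once bound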
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.449 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32e9b55a-4d54-4538-9536-8838017eb928]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 systemd-machined[210048]: New machine qemu-38-instance-00000022.
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.450 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.452 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c613256-0901-4f89-9aea-e1dde2a060bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.453 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d42718b-e7c0-4665-b3be-bd584f0fc3d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.465 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[70308d00-7208-4c50-8643-23d41373ec82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.492 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7640dee9-8a30-49c1-ae15-85636a650ccf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 systemd-udevd[275586]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.5156] device (tap57720a37-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.5165] device (tap57720a37-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.543 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd79e441-977c-44b0-a20a-7f6753b491c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.549 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[200fe6e4-a916-43cd-807c-4ea3ef6456d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.5498] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.595 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[88411d86-2535-4423-933b-d843366ea79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.600 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[29a842f3-1253-402b-be5a-d2f705673714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ceph-mon[76335]: pgmap v1165: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.6235] device (tap6a1663dd-20): carrier: link connected
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.632 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1fda5ded-e393-497b-a35e-e32ae715a697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2171c3c9-3d4a-4de2-898a-55c2104a69fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417203, 'reachable_time': 35633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275615, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.652 244018 DEBUG nova.objects.instance [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.657 244018 DEBUG nova.network.neutron [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updated VIF entry in instance network info cache for port e66ad0e5-f667-4662-b49b-13be01718e41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.658 244018 DEBUG nova.network.neutron [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updating instance_info_cache with network_info: [{"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f74ecfc4-e584-44e8-86ae-e40f8a19128c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417203, 'tstamp': 417203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275616, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.672 244018 DEBUG nova.network.neutron [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88aa3eeb-8ed2-44c2-ac79-82646a7085e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417203, 'reachable_time': 35633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275617, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.680 244018 DEBUG oslo_concurrency.lockutils [req-d37d69d8-c2a5-41f2-888f-23d100968fc0 req-612c42e8-05af-4a20-82d6-85812ac28f87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-a0fa8542-8f2c-4a70-be2d-02c4139e33aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.706 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac81b139-9ba8-4dfc-922a-6f040289062d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.752 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c190a548-cc3b-420c-8c85-f5b4b4a4604f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.753 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.754 244018 DEBUG nova.compute.manager [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.755 244018 DEBUG nova.compute.manager [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:22 compute-0 NetworkManager[49836]: <info>  [1772022202.7570] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Feb 25 12:23:22 compute-0 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.759 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:22 compute-0 ovn_controller[147040]: 2026-02-25T12:23:22Z|00267|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.768 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.769 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.769 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:22 compute-0 nova_compute[244014]: 2026-02-25 12:23:22.771 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.774 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.776 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eff29c76-0587-463d-be3a-bd9bd32540f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.777 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:23:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:22.778 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.073 244018 DEBUG nova.network.neutron [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Updated VIF entry in instance network info cache for port 57720a37-b288-4d33-9b7f-4493075cd82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.073 244018 DEBUG nova.network.neutron [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Updating instance_info_cache with network_info: [{"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.112 244018 DEBUG oslo_concurrency.lockutils [req-631e8426-7c80-4783-abc1-ee75554d2e23 req-02019be4-7fa0-4d2c-adb8-147d5076682b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cc067b57-66e5-499d-ac85-a81e91c1c978" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.132 244018 DEBUG nova.policy [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:23:23 compute-0 podman[275649]: 2026-02-25 12:23:23.207630723 +0000 UTC m=+0.087494640 container create 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:23:23 compute-0 podman[275649]: 2026-02-25 12:23:23.167465871 +0000 UTC m=+0.047329848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:23:23 compute-0 systemd[1]: Started libpod-conmon-4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9.scope.
Feb 25 12:23:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5578fd17469e145d58daaa01bac4cbc9d4deab7d41fcf06e0b1b07504c0ef56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:23 compute-0 podman[275649]: 2026-02-25 12:23:23.309525342 +0000 UTC m=+0.189389289 container init 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:23:23 compute-0 podman[275649]: 2026-02-25 12:23:23.318713564 +0000 UTC m=+0.198577461 container start 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:23:23 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [NOTICE]   (275668) : New worker (275670) forked
Feb 25 12:23:23 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [NOTICE]   (275668) : Loading success.
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.548 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022203.5476723, cc067b57-66e5-499d-ac85-a81e91c1c978 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.548 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] VM Started (Lifecycle Event)
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.580 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.584 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022203.5490804, cc067b57-66e5-499d-ac85-a81e91c1c978 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.585 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] VM Paused (Lifecycle Event)
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.605 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.608 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:23 compute-0 nova_compute[244014]: 2026-02-25 12:23:23.635 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:23:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:23:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:24 compute-0 podman[275721]: 2026-02-25 12:23:24.728427213 +0000 UTC m=+0.064239629 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:23:24 compute-0 podman[275722]: 2026-02-25 12:23:24.741507295 +0000 UTC m=+0.078214066 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.861 244018 DEBUG nova.compute.manager [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.861 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.862 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.862 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.862 244018 DEBUG nova.compute.manager [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Processing event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.863 244018 DEBUG nova.compute.manager [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.863 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.863 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.864 244018 DEBUG oslo_concurrency.lockutils [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.864 244018 DEBUG nova.compute.manager [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] No waiting events found dispatching network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.864 244018 WARNING nova.compute.manager [req-3286434f-af50-4251-bd69-ee1a3e47981f req-6c34934f-c6e4-4056-bf5d-706460897415 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received unexpected event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e for instance with vm_state building and task_state spawning.
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.865 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.869 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022204.8685002, cc067b57-66e5-499d-ac85-a81e91c1c978 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] VM Resumed (Lifecycle Event)
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.872 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.876 244018 INFO nova.virt.libvirt.driver [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Instance spawned successfully.
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.876 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.897 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.907 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.912 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.912 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.913 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.914 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.914 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.915 244018 DEBUG nova.virt.libvirt.driver [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.950 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.984 244018 INFO nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Took 10.01 seconds to spawn the instance on the hypervisor.
Feb 25 12:23:24 compute-0 nova_compute[244014]: 2026-02-25 12:23:24.984 244018 DEBUG nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.054 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.055 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.065 244018 DEBUG nova.network.neutron [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Successfully updated port: dc98a932-5f79-4db0-8662-1a5f5de8adff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.097 244018 INFO nova.compute.manager [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Took 12.95 seconds to build instance.
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.102 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.103 244018 DEBUG nova.compute.manager [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-changed-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.103 244018 DEBUG nova.compute.manager [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing instance network info cache due to event network-changed-7298ef49-4582-410a-821d-bd43bc865fba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.104 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.104 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.104 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.107 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.134 244018 DEBUG oslo_concurrency.lockutils [None req-729ac522-8db5-4738-9a7c-bbc97f0cf68f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:25 compute-0 ovn_controller[147040]: 2026-02-25T12:23:25Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:d5:7d 10.100.0.8
Feb 25 12:23:25 compute-0 ovn_controller[147040]: 2026-02-25T12:23:25Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:d5:7d 10.100.0.8
Feb 25 12:23:25 compute-0 ceph-mon[76335]: pgmap v1166: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:23:25 compute-0 ovn_controller[147040]: 2026-02-25T12:23:25Z|00268|binding|INFO|Releasing lport 4d9e80e0-04af-4da1-a83a-8178307c4d12 from this chassis (sb_readonly=0)
Feb 25 12:23:25 compute-0 ovn_controller[147040]: 2026-02-25T12:23:25Z|00269|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 12:23:25 compute-0 ovn_controller[147040]: 2026-02-25T12:23:25Z|00270|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:23:25 compute-0 nova_compute[244014]: 2026-02-25 12:23:25.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.320 244018 DEBUG nova.objects.instance [None req-235393d7-43c9-4caa-a0f7-219566b389a9 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc067b57-66e5-499d-ac85-a81e91c1c978 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.356 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022206.3558912, cc067b57-66e5-499d-ac85-a81e91c1c978 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.356 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] VM Paused (Lifecycle Event)
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.379 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.385 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.413 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.560 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updated VIF entry in instance network info cache for port 7298ef49-4582-410a-821d-bd43bc865fba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.561 244018 DEBUG nova.network.neutron [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.622 244018 DEBUG oslo_concurrency.lockutils [req-c32b252d-c442-474c-9a61-56f1c1a2a10f req-033b36d6-3902-48cd-be85-0e3cee201c08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.623 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.623 244018 DEBUG nova.network.neutron [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:26 compute-0 kernel: tap57720a37-b2 (unregistering): left promiscuous mode
Feb 25 12:23:26 compute-0 NetworkManager[49836]: <info>  [1772022206.8357] device (tap57720a37-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:26 compute-0 ceph-mon[76335]: pgmap v1167: 305 pgs: 305 active+clean; 405 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 25 12:23:26 compute-0 ovn_controller[147040]: 2026-02-25T12:23:26Z|00271|binding|INFO|Releasing lport 57720a37-b288-4d33-9b7f-4493075cd82e from this chassis (sb_readonly=0)
Feb 25 12:23:26 compute-0 ovn_controller[147040]: 2026-02-25T12:23:26Z|00272|binding|INFO|Setting lport 57720a37-b288-4d33-9b7f-4493075cd82e down in Southbound
Feb 25 12:23:26 compute-0 ovn_controller[147040]: 2026-02-25T12:23:26Z|00273|binding|INFO|Removing iface tap57720a37-b2 ovn-installed in OVS
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:26.854 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:0f 10.100.0.13'], port_security=['fa:16:3e:33:0b:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc067b57-66e5-499d-ac85-a81e91c1c978', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=57720a37-b288-4d33-9b7f-4493075cd82e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:26.856 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 57720a37-b288-4d33-9b7f-4493075cd82e in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:23:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:26.858 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:23:26 compute-0 nova_compute[244014]: 2026-02-25 12:23:26.858 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:26.860 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1f8de4-67e8-486b-95aa-28e824bc88b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:26.860 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
Feb 25 12:23:26 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Feb 25 12:23:26 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 2.718s CPU time.
Feb 25 12:23:26 compute-0 systemd-machined[210048]: Machine qemu-38-instance-00000022 terminated.
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [NOTICE]   (275668) : haproxy version is 2.8.14-c23fe91
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [NOTICE]   (275668) : path to executable is /usr/sbin/haproxy
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [WARNING]  (275668) : Exiting Master process...
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [WARNING]  (275668) : Exiting Master process...
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [ALERT]    (275668) : Current worker (275670) exited with code 143 (Terminated)
Feb 25 12:23:26 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[275664]: [WARNING]  (275668) : All workers exited. Exiting... (0)
Feb 25 12:23:26 compute-0 systemd[1]: libpod-4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9.scope: Deactivated successfully.
Feb 25 12:23:26 compute-0 podman[275791]: 2026-02-25 12:23:26.996945947 +0000 UTC m=+0.045013652 container died 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 12:23:27 compute-0 kernel: tap57720a37-b2: entered promiscuous mode
Feb 25 12:23:27 compute-0 kernel: tap57720a37-b2 (unregistering): left promiscuous mode
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00274|binding|INFO|Claiming lport 57720a37-b288-4d33-9b7f-4493075cd82e for this chassis.
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00275|binding|INFO|57720a37-b288-4d33-9b7f-4493075cd82e: Claiming fa:16:3e:33:0b:0f 10.100.0.13
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.023 244018 DEBUG nova.compute.manager [None req-235393d7-43c9-4caa-a0f7-219566b389a9 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00276|binding|INFO|Setting lport 57720a37-b288-4d33-9b7f-4493075cd82e ovn-installed in OVS
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00277|if_status|INFO|Dropped 1 log messages in last 208 seconds (most recently, 208 seconds ago) due to excessive rate
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00278|if_status|INFO|Not setting lport 57720a37-b288-4d33-9b7f-4493075cd82e down as sb is readonly
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9-userdata-shm.mount: Deactivated successfully.
Feb 25 12:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5578fd17469e145d58daaa01bac4cbc9d4deab7d41fcf06e0b1b07504c0ef56-merged.mount: Deactivated successfully.
Feb 25 12:23:27 compute-0 podman[275791]: 2026-02-25 12:23:27.05891249 +0000 UTC m=+0.106980165 container cleanup 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:23:27 compute-0 systemd[1]: libpod-conmon-4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9.scope: Deactivated successfully.
Feb 25 12:23:27 compute-0 ovn_controller[147040]: 2026-02-25T12:23:27Z|00279|binding|INFO|Releasing lport 57720a37-b288-4d33-9b7f-4493075cd82e from this chassis (sb_readonly=0)
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.093 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:0f 10.100.0.13'], port_security=['fa:16:3e:33:0b:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc067b57-66e5-499d-ac85-a81e91c1c978', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=57720a37-b288-4d33-9b7f-4493075cd82e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:27 compute-0 podman[275826]: 2026-02-25 12:23:27.115806159 +0000 UTC m=+0.041812571 container remove 4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.121 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7755e6-289a-4ebc-bf87-6bc09ce6752c]: (4, ('Wed Feb 25 12:23:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9)\n4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9\nWed Feb 25 12:23:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9)\n4aac3ac77cfa060f1efc7ca04a633c11b057b2663f974d886b09bb6e6cfa1da9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.123 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e848ba76-7833-4252-8861-c43a2a0f703c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.125 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:27 compute-0 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.130 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:0f 10.100.0.13'], port_security=['fa:16:3e:33:0b:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc067b57-66e5-499d-ac85-a81e91c1c978', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=57720a37-b288-4d33-9b7f-4493075cd82e) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7edf16-ef75-4815-aaea-cc2513311c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.153 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f512f9-f540-439b-8798-13cc9456a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[222336d5-0799-4560-999c-76c31ce11682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.164 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bd801757-81de-46ee-8641-b87f97249e03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417193, 'reachable_time': 43214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275843, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.167 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:23:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.168 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[77254b6a-3425-49b9-a3d7-fda1fd1fc832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.169 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 57720a37-b288-4d33-9b7f-4493075cd82e in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.171 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.172 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fec967d-81e2-4422-aa71-502b72b9e44f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.173 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 57720a37-b288-4d33-9b7f-4493075cd82e in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.175 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:23:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:27.175 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f30ae87-2ba0-4366-939a-436c58e71eef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.344 244018 WARNING nova.network.neutron [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.352 244018 DEBUG nova.compute.manager [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-changed-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.352 244018 DEBUG nova.compute.manager [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing instance network info cache due to event network-changed-dc98a932-5f79-4db0-8662-1a5f5de8adff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:27 compute-0 nova_compute[244014]: 2026-02-25 12:23:27.353 244018 DEBUG oslo_concurrency.lockutils [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Feb 25 12:23:28 compute-0 sudo[275844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:23:28 compute-0 sudo[275844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:28 compute-0 sudo[275844]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:28 compute-0 sudo[275869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 12:23:28 compute-0 sudo[275869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:28 compute-0 ceph-mon[76335]: pgmap v1168: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Feb 25 12:23:29 compute-0 podman[275937]: 2026-02-25 12:23:29.331623414 +0000 UTC m=+0.080019237 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.391 244018 DEBUG nova.compute.manager [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:29 compute-0 podman[275937]: 2026-02-25 12:23:29.418153136 +0000 UTC m=+0.166548889 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.446 244018 INFO nova.compute.manager [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] instance snapshotting
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.447 244018 WARNING nova.compute.manager [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] trying to snapshot a non-running instance: (state: 4 expected: 1)
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.694 244018 DEBUG nova.network.neutron [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.732 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.733 244018 DEBUG oslo_concurrency.lockutils [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.733 244018 DEBUG nova.network.neutron [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Refreshing network info cache for port dc98a932-5f79-4db0-8662-1a5f5de8adff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.738 244018 DEBUG nova.virt.libvirt.vif [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.739 244018 DEBUG nova.network.os_vif_util [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.740 244018 DEBUG nova.network.os_vif_util [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.740 244018 DEBUG os_vif [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.741 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.742 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.746 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc98a932-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.747 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc98a932-5f, col_values=(('external_ids', {'iface-id': 'dc98a932-5f79-4db0-8662-1a5f5de8adff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:c3:28', 'vm-uuid': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 NetworkManager[49836]: <info>  [1772022209.7507] manager: (tapdc98a932-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.761 244018 INFO os_vif [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f')
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.762 244018 DEBUG nova.virt.libvirt.vif [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.762 244018 DEBUG nova.network.os_vif_util [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.763 244018 DEBUG nova.network.os_vif_util [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.767 244018 DEBUG nova.virt.libvirt.guest [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:29 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 25 12:23:29 compute-0 NetworkManager[49836]: <info>  [1772022209.7843] manager: (tapdc98a932-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Feb 25 12:23:29 compute-0 systemd-udevd[275772]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:29 compute-0 kernel: tapdc98a932-5f: entered promiscuous mode
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 ovn_controller[147040]: 2026-02-25T12:23:29Z|00280|binding|INFO|Claiming lport dc98a932-5f79-4db0-8662-1a5f5de8adff for this chassis.
Feb 25 12:23:29 compute-0 ovn_controller[147040]: 2026-02-25T12:23:29Z|00281|binding|INFO|dc98a932-5f79-4db0-8662-1a5f5de8adff: Claiming fa:16:3e:a3:c3:28 10.100.0.13
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.802 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:c3:28 10.100.0.13'], port_security=['fa:16:3e:a3:c3:28 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dc98a932-5f79-4db0-8662-1a5f5de8adff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.804 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dc98a932-5f79-4db0-8662-1a5f5de8adff in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 12:23:29 compute-0 NetworkManager[49836]: <info>  [1772022209.8052] device (tapdc98a932-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:23:29 compute-0 NetworkManager[49836]: <info>  [1772022209.8059] device (tapdc98a932-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.807 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:23:29 compute-0 ovn_controller[147040]: 2026-02-25T12:23:29Z|00282|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff ovn-installed in OVS
Feb 25 12:23:29 compute-0 ovn_controller[147040]: 2026-02-25T12:23:29Z|00283|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff up in Southbound
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.824 244018 INFO nova.virt.libvirt.driver [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Beginning cold snapshot process
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.830 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6d5d34-0b4d-41b4-a28a-2456c7c189ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.856 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8de284c3-45c7-470b-8b68-dd4fb4ec0509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.862 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f5154325-f544-4192-be59-464552716ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.895 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[143ac7dd-a149-45a4-a7a5-63b97e1f9763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.900 244018 DEBUG nova.virt.libvirt.driver [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.901 244018 DEBUG nova.virt.libvirt.driver [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.902 244018 DEBUG nova.virt.libvirt.driver [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:ba:e5:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.902 244018 DEBUG nova.virt.libvirt.driver [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:a3:c3:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.914 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6472d258-0947-4886-8f22-1a57720460bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276069, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.932 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[653bafa7-2d41-47d2-8b67-4dd4a8316e6c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276072, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276072, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.934 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.940 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.940 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.940 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:29.941 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:29 compute-0 nova_compute[244014]: 2026-02-25 12:23:29.947 244018 DEBUG nova.virt.libvirt.guest [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-1363616292</nova:name>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:29</nova:creationTime>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:port uuid="7298ef49-4582-410a-821d-bd43bc865fba">
Feb 25 12:23:29 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:29 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:29 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:29 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:29 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:29 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.013 244018 DEBUG nova.virt.libvirt.imagebackend [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.028 244018 DEBUG oslo_concurrency.lockutils [None req-220a6b40-98e8-444e-a8d3-440c94f8029c ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.236 244018 DEBUG nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-vif-unplugged-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.236 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.237 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.237 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.237 244018 DEBUG nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] No waiting events found dispatching network-vif-unplugged-57720a37-b288-4d33-9b7f-4493075cd82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.238 244018 WARNING nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received unexpected event network-vif-unplugged-57720a37-b288-4d33-9b7f-4493075cd82e for instance with vm_state suspended and task_state image_uploading.
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.238 244018 DEBUG nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.238 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.238 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.239 244018 DEBUG oslo_concurrency.lockutils [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.239 244018 DEBUG nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] No waiting events found dispatching network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.240 244018 WARNING nova.compute.manager [req-7c16367a-2ca4-458e-884e-3ea3ca0a5053 req-64c1d95d-148b-4a31-9428-a3309334b6aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received unexpected event network-vif-plugged-57720a37-b288-4d33-9b7f-4493075cd82e for instance with vm_state suspended and task_state image_uploading.
Feb 25 12:23:30 compute-0 sudo[275869]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.342 244018 DEBUG nova.storage.rbd_utils [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(a911b0a0ed484dd8a4825ec16c4fbc45) on rbd image(cc067b57-66e5-499d-ac85-a81e91c1c978_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:23:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:23:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:23:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:30 compute-0 sudo[276195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:23:30 compute-0 sudo[276195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:30 compute-0 sudo[276195]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:30 compute-0 sudo[276222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:23:30 compute-0 sudo[276222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
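These sudo entries record cephadm's periodic fact gathering; the invocation can be replayed by hand (arguments copied verbatim from the COMMAND= field above):

    import subprocess

    subprocess.run(
        ['sudo', '/bin/python3',
         '/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
         'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
         '--timeout', '895', 'gather-facts'],
        check=True)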
Feb 25 12:23:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:23:30
Feb 25 12:23:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:23:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:23:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Feb 25 12:23:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
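The balancer entries show the upmap optimizer walking all eleven pools and finding nothing to move ("prepared 0/10 upmap changes"). The same state can be queried programmatically through a mgr command; a sketch, assuming client.admin access via librados:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, out, err = cluster.mgr_command(
        json.dumps({'prefix': 'balancer status', 'format': 'json'}), b'')
    print(out.decode())  # reports mode "upmap" and the last optimize plan
    cluster.shutdown()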
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:30 compute-0 nova_compute[244014]: 2026-02-25 12:23:30.984 244018 DEBUG nova.objects.instance [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.013 244018 DEBUG nova.virt.libvirt.vif [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.013 244018 DEBUG nova.network.os_vif_util [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.014 244018 DEBUG nova.network.os_vif_util [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
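The "Converted object" line is the os-vif representation nova hands to the plug/unplug machinery. Rebuilt by hand with the os-vif library it looks roughly like this (field values are copied from the log; the unplug call at the end is what the detach flow below ultimately drives):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id='08121372-a435-401a-b405-778e10d8c2e2',
                          bridge='br-int', mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='dc98a932-5f79-4db0-8662-1a5f5de8adff',
        address='fa:16:3e:a3:c3:28',
        network=net,
        bridge_name='br-int',
        vif_name='tapdc98a932-5f',
        has_traffic_filtering=True,
        preserve_on_delete=True,
        active=False)
    info = instance_info.InstanceInfo(
        uuid='8715ba65-46d1-406d-8a4d-b5ff5067e2c1',
        name='instance-00000020')
    os_vif.unplug(ovs_vif, info)  # interface detach ends up here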
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.016 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.018 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.020 244018 DEBUG nova.virt.libvirt.driver [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tapdc98a932-5f from instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.020 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:31 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
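The "detach device xml" block above is passed to libvirt's detach API; against the persistent definition this is a detachDeviceFlags call with VIR_DOMAIN_AFFECT_CONFIG. A minimal sketch with the python libvirt bindings (the connection URI is an assumption):

    import libvirt

    DEVICE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:a3:c3:28"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapdc98a932-5f"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8715ba65-46d1-406d-8a4d-b5ff5067e2c1')
    # AFFECT_CONFIG edits only the persistent definition; the live detach
    # that follows in the log uses AFFECT_LIVE and then waits for the
    # asynchronous DeviceRemovedEvent.
    dom.detachDeviceFlags(DEVICE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()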
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.027 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.030 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <name>instance-00000020</name>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <uuid>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</uuid>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-1363616292</nova:name>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:29</nova:creationTime>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:port uuid="7298ef49-4582-410a-821d-bd43bc865fba">
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='serial'>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='uuid'>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk' index='2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config' index='1'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:ba:e5:c6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='tap7298ef49-45'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:a3:c3:28'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='tapdc98a932-5f'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source path='/dev/pts/2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log' append='off'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </target>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source path='/dev/pts/2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log' append='off'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </console>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c710,c736</label>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c710,c736</imagelabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:31 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.031 244018 INFO nova.virt.libvirt.driver [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tapdc98a932-5f from instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 from the persistent domain config.
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.031 244018 DEBUG nova.virt.libvirt.driver [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tapdc98a932-5f with device alias net1 from instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.031 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:a3:c3:28"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <target dev="tapdc98a932-5f"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </interface>
Feb 25 12:23:31 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:23:31 compute-0 sudo[276222]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:31 compute-0 kernel: tapdc98a932-5f (unregistering): left promiscuous mode
Feb 25 12:23:31 compute-0 NetworkManager[49836]: <info>  [1772022211.1398] device (tapdc98a932-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 ovn_controller[147040]: 2026-02-25T12:23:31Z|00284|binding|INFO|Releasing lport dc98a932-5f79-4db0-8662-1a5f5de8adff from this chassis (sb_readonly=0)
Feb 25 12:23:31 compute-0 ovn_controller[147040]: 2026-02-25T12:23:31Z|00285|binding|INFO|Setting lport dc98a932-5f79-4db0-8662-1a5f5de8adff down in Southbound
Feb 25 12:23:31 compute-0 ovn_controller[147040]: 2026-02-25T12:23:31Z|00286|binding|INFO|Removing iface tapdc98a932-5f ovn-installed in OVS
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.150 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022211.1505265, 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.153 244018 DEBUG nova.virt.libvirt.driver [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tapdc98a932-5f with device alias net1 for instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
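The "(1/8)" counter and the "Start waiting for the detach event" line outline nova's live-detach strategy: send the detach, block on libvirt's asynchronous DeviceRemovedEvent (received above at 12:23:31.150), and retry with the same XML if the event never arrives. In outline (the timeout value and helper names are assumptions, not nova's exact code):

    MAX_ATTEMPTS = 8  # matches the "(1/8)" counter in the log

    def detach_from_live_with_retry(guest, device_xml, wait_for_removal_event):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            guest.detach_device(device_xml, live=True)
            if wait_for_removal_event(timeout=20):  # timeout is an assumption
                return  # DeviceRemovedEvent arrived; detach confirmed
            # Device still present; log the attempt and retry, as nova does.
        raise RuntimeError('device not detached after %d attempts' % MAX_ATTEMPTS)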
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.154 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.159 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:c3:28 10.100.0.13'], port_security=['fa:16:3e:a3:c3:28 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-452456135', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dc98a932-5f79-4db0-8662-1a5f5de8adff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.160 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dc98a932-5f79-4db0-8662-1a5f5de8adff in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.161 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
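The "Matched UPDATE" line shows the metadata agent's Port_Binding watcher firing as the port is unbound from this chassis (up goes [True] -> [False]). Such watchers are ovsdbapp RowEvent subclasses; a rough reconstruction of the shape, with an illustrative class body rather than neutron's exact code:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding' as printed in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event_type, row, old=None):
            # Only react when the binding's chassis or up state changed,
            # e.g. the unbind from this chassis logged above.
            return hasattr(old, 'chassis') or hasattr(old, 'up')

        def run(self, event_type, row, old):
            # The agent then tears down or (re)provisions metadata for the
            # datapath, as the two INFO lines above show.
            self.agent.refresh_datapath(row.datapath)  # hypothetical helper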
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.158 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a3:c3:28"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdc98a932-5f"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <name>instance-00000020</name>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <uuid>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</uuid>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-1363616292</nova:name>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:29</nova:creationTime>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:port uuid="7298ef49-4582-410a-821d-bd43bc865fba">
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:port uuid="dc98a932-5f79-4db0-8662-1a5f5de8adff">
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='serial'>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='uuid'>8715ba65-46d1-406d-8a4d-b5ff5067e2c1</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk' index='2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk.config' index='1'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:ba:e5:c6'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target dev='tap7298ef49-45'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source path='/dev/pts/2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log' append='off'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       </target>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <source path='/dev/pts/2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1/console.log' append='off'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </console>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </input>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c710,c736</label>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c710,c736</imagelabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:31 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
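[editor's note] The block above is the live domain XML that nova's get_interface_by_cfg helper walks when looking up the port to detach. For reference, a minimal sketch of pulling the same XML out of libvirt with the libvirt-python binding; the UUID is taken from the log, while the qemu:///system URI and the flag value are illustrative assumptions, not nova's actual call:

    import libvirt  # libvirt-python

    # Illustrative only: fetch the live domain XML nova logged above.
    conn = libvirt.open('qemu:///system')  # local system hypervisor
    dom = conn.lookupByUUIDString('8715ba65-46d1-406d-8a4d-b5ff5067e2c1')
    print(dom.XMLDesc(0))  # pass libvirt.VIR_DOMAIN_XML_SECURE to include auth secrets
    conn.close()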
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.159 244018 INFO nova.virt.libvirt.driver [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tapdc98a932-5f from instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 from the live domain config.
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.159 244018 DEBUG nova.virt.libvirt.vif [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.160 244018 DEBUG nova.network.os_vif_util [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.161 244018 DEBUG nova.network.os_vif_util [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.161 244018 DEBUG os_vif [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.163 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc98a932-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.166 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.175 244018 INFO os_vif [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f')
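[editor's note] The DelPortCommand logged a few lines up is ovsdbapp's OVSDB transaction removing the tap device from br-int. A rough standalone equivalent driven directly through ovsdbapp is sketched below; the socket path and timeout are assumptions, and os-vif wires this connection up differently internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local OVSDB server and load the Open_vSwitch schema.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the logged txn:
    # DelPortCommand(port=tapdc98a932-5f, bridge=br-int, if_exists=True)
    api.del_port('tapdc98a932-5f', bridge='br-int',
                 if_exists=True).execute(check_error=True)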
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.176 244018 DEBUG nova.virt.libvirt.guest [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:name>tempest-tempest.common.compute-instance-1363616292</nova:name>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:23:31</nova:creationTime>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     <nova:port uuid="7298ef49-4582-410a-821d-bd43bc865fba">
Feb 25 12:23:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:23:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:23:31 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:23:31 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:23:31 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
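[editor's note] After unplugging the VIF, nova rewrites the domain's <metadata> block so the XML above no longer lists the detached port (only port 7298ef49 remains). A sketch of the underlying libvirt call follows; the namespace URI matches the nova:instance document logged above, while the 'instance' key and the flags are assumptions about how the element is addressed, not a copy of nova's code:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    xml = '...'  # the <nova:instance> document logged above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8715ba65-46d1-406d-8a4d-b5ff5067e2c1')
    # Replace the namespaced child of <metadata> on the live definition.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml, 'instance',
                    NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)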
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.175 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67b3fbd3-b25b-43c3-be85-77cd67dcafb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.197 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4aea16-9f0d-439c-9bf0-480b5d1020c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.200 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cbebff7a-575b-4f88-855d-8fa9a0b6afbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.220 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[302b3afc-1f10-4c78-9e11-a4e656c1b941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.234 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0266d3-7f21-4520-a178-1df92f5470a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276295, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.251 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df815c52-a39a-435a-9abb-5eca241b5ee6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276303, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276303, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.252 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.253 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:31.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:31 compute-0 sudo[276296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:23:31 compute-0 sudo[276296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:31 compute-0 sudo[276296]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:31 compute-0 sudo[276323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:23:31 compute-0 sudo[276323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:31 compute-0 ceph-mon[76335]: pgmap v1169: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb 25 12:23:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb 25 12:23:31 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.407 244018 DEBUG nova.storage.rbd_utils [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/cc067b57-66e5-499d-ac85-a81e91c1c978_disk@a911b0a0ed484dd8a4825ec16c4fbc45 to images/89a762c3-5024-4ff6-acbe-3b2c2f22732b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.492 244018 DEBUG nova.storage.rbd_utils [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/89a762c3-5024-4ff6-acbe-3b2c2f22732b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.533 244018 DEBUG nova.network.neutron [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updated VIF entry in instance network info cache for port dc98a932-5f79-4db0-8662-1a5f5de8adff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.534 244018 DEBUG nova.network.neutron [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.571 244018 DEBUG oslo_concurrency.lockutils [req-df89ebe2-48d4-4cab-bd7a-1b99fe83a937 req-79ec2f31-149b-4b98-8c33-3d73c03b8587 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:23:31 compute-0 podman[276411]: 2026-02-25 12:23:31.635591857 +0000 UTC m=+0.058449094 container create 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:23:31 compute-0 podman[276411]: 2026-02-25 12:23:31.597451181 +0000 UTC m=+0.020308508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:31 compute-0 systemd[1]: Started libpod-conmon-27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73.scope.
Feb 25 12:23:31 compute-0 nova_compute[244014]: 2026-02-25 12:23:31.713 244018 DEBUG nova.storage.rbd_utils [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(a911b0a0ed484dd8a4825ec16c4fbc45) on rbd image(cc067b57-66e5-499d-ac85-a81e91c1c978_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:23:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Feb 25 12:23:31 compute-0 podman[276411]: 2026-02-25 12:23:31.735037646 +0000 UTC m=+0.157894943 container init 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:23:31 compute-0 podman[276411]: 2026-02-25 12:23:31.744204197 +0000 UTC m=+0.167061474 container start 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:23:31 compute-0 podman[276411]: 2026-02-25 12:23:31.747532042 +0000 UTC m=+0.170389319 container attach 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:23:31 compute-0 vibrant_matsumoto[276443]: 167 167
Feb 25 12:23:31 compute-0 systemd[1]: libpod-27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73.scope: Deactivated successfully.
Feb 25 12:23:31 compute-0 podman[276450]: 2026-02-25 12:23:31.795652711 +0000 UTC m=+0.031443196 container died 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2d8a94cfeb5247d0e5b02ac3cf1613813516a9377f2eb48c2648b26afbf87fd-merged.mount: Deactivated successfully.
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:23:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:23:31 compute-0 podman[276450]: 2026-02-25 12:23:31.834969709 +0000 UTC m=+0.070760104 container remove 27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:23:31 compute-0 systemd[1]: libpod-conmon-27aa902f63de998cee85b64b0008549f5bdc2a9fc8796880a80b3209e5e5ce73.scope: Deactivated successfully.
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.026021765 +0000 UTC m=+0.054217413 container create f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:23:32 compute-0 systemd[1]: Started libpod-conmon-f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f.scope.
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.002175937 +0000 UTC m=+0.030371645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.130117597 +0000 UTC m=+0.158313305 container init f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.145639679 +0000 UTC m=+0.173835337 container start f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.150012893 +0000 UTC m=+0.178208541 container attach f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 12:23:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb 25 12:23:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb 25 12:23:32 compute-0 ceph-mon[76335]: osdmap e160: 3 total, 3 up, 3 in
Feb 25 12:23:32 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.415 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.416 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.416 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.417 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.417 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.418 244018 WARNING nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.419 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.419 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.420 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.420 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.421 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.422 244018 WARNING nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.422 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.423 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.423 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.424 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.424 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.425 244018 WARNING nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-unplugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.425 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.426 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.426 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.427 244018 DEBUG oslo_concurrency.lockutils [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.427 244018 DEBUG nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.428 244018 WARNING nova.compute.manager [req-6c832a3f-edd9-449d-8e2e-f932a5f02641 req-0d0296a1-2522-406c-8ce1-831407425526 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-plugged-dc98a932-5f79-4db0-8662-1a5f5de8adff for instance with vm_state active and task_state None.
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.430 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.431 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.431 244018 DEBUG nova.network.neutron [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.437 244018 DEBUG nova.storage.rbd_utils [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(89a762c3-5024-4ff6-acbe-3b2c2f22732b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:23:32 compute-0 wonderful_hellman[276490]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:23:32 compute-0 wonderful_hellman[276490]: --> All data devices are unavailable
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.624 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.625 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.626 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.626 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.627 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.629 244018 INFO nova.compute.manager [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Terminating instance
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.631 244018 DEBUG nova.compute.manager [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:23:32 compute-0 systemd[1]: libpod-f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f.scope: Deactivated successfully.
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.676139853 +0000 UTC m=+0.704335501 container died f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:23:32 compute-0 kernel: tap7298ef49-45 (unregistering): left promiscuous mode
Feb 25 12:23:32 compute-0 NetworkManager[49836]: <info>  [1772022212.6894] device (tap7298ef49-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 ovn_controller[147040]: 2026-02-25T12:23:32Z|00287|binding|INFO|Releasing lport 7298ef49-4582-410a-821d-bd43bc865fba from this chassis (sb_readonly=0)
Feb 25 12:23:32 compute-0 ovn_controller[147040]: 2026-02-25T12:23:32Z|00288|binding|INFO|Setting lport 7298ef49-4582-410a-821d-bd43bc865fba down in Southbound
Feb 25 12:23:32 compute-0 ovn_controller[147040]: 2026-02-25T12:23:32Z|00289|binding|INFO|Removing iface tap7298ef49-45 ovn-installed in OVS
Feb 25 12:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5362ec7b3ff706c720afaba736f3b54fc25e04b5bfd9aa2202881060b800d422-merged.mount: Deactivated successfully.
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.718 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 podman[276473]: 2026-02-25 12:23:32.727087132 +0000 UTC m=+0.755282780 container remove f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:23:32 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Feb 25 12:23:32 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 13.774s CPU time.
Feb 25 12:23:32 compute-0 systemd[1]: libpod-conmon-f7f57fabec0bd9057a9153452136ed0a80dd14156c92c85fc4eeb913d7e4c16f.scope: Deactivated successfully.
Feb 25 12:23:32 compute-0 systemd-machined[210048]: Machine qemu-36-instance-00000020 terminated.
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.784 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e5:c6 10.100.0.3'], port_security=['fa:16:3e:ba:e5:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8715ba65-46d1-406d-8a4d-b5ff5067e2c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc690998-27c4-44c1-9496-60c99dea61b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7298ef49-4582-410a-821d-bd43bc865fba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7298ef49-4582-410a-821d-bd43bc865fba in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.787 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 12:23:32 compute-0 sudo[276323]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.800 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93cb9ca5-af8c-4edf-b3ad-0fc6651887fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.821 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d48c7830-3b54-4b4f-be81-9339f4adc4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.824 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[96f54b2c-3761-4fc8-8d0c-c700a9346638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.845 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b8b96e-be7e-4853-9916-14b507abdc9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 sudo[276545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:23:32 compute-0 sudo[276545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:32 compute-0 sudo[276545]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.863 244018 INFO nova.virt.libvirt.driver [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Instance destroyed successfully.
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcdf196-bfb2-4f93-8d25-bf4bc0b515e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276577, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.863 244018 DEBUG nova.objects.instance [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.877 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70a3ac6e-0784-4629-8994-d852a93d3fbb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411698, 'tstamp': 411698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276591, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411701, 'tstamp': 411701}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276591, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.878 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.886 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.886 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.887 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:32.887 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.904 244018 DEBUG nova.virt.libvirt.vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.904 244018 DEBUG nova.network.os_vif_util [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.905 244018 DEBUG nova.network.os_vif_util [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:32 compute-0 sudo[276587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.905 244018 DEBUG os_vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.906 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7298ef49-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 sudo[276587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.913 244018 INFO os_vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:e5:c6,bridge_name='br-int',has_traffic_filtering=True,id=7298ef49-4582-410a-821d-bd43bc865fba,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7298ef49-45')
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.914 244018 DEBUG nova.virt.libvirt.vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1363616292',display_name='tempest-tempest.common.compute-instance-1363616292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1363616292',id=32,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-lwlp6xmk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=8715ba65-46d1-406d-8a4d-b5ff5067e2c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.914 244018 DEBUG nova.network.os_vif_util [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "address": "fa:16:3e:a3:c3:28", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc98a932-5f", "ovs_interfaceid": "dc98a932-5f79-4db0-8662-1a5f5de8adff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.915 244018 DEBUG nova.network.os_vif_util [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.915 244018 DEBUG os_vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc98a932-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:32 compute-0 nova_compute[244014]: 2026-02-25 12:23:32.917 244018 INFO os_vif [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:c3:28,bridge_name='br-int',has_traffic_filtering=True,id=dc98a932-5f79-4db0-8662-1a5f5de8adff,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdc98a932-5f')
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.164 244018 INFO nova.virt.libvirt.driver [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Deleting instance files /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_del
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.165 244018 INFO nova.virt.libvirt.driver [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Deletion of /var/lib/nova/instances/8715ba65-46d1-406d-8a4d-b5ff5067e2c1_del complete
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.205443573 +0000 UTC m=+0.046427442 container create 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.233 244018 INFO nova.compute.manager [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.234 244018 DEBUG oslo.service.loopingcall [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.235 244018 DEBUG nova.compute.manager [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:23:33 compute-0 nova_compute[244014]: 2026-02-25 12:23:33.235 244018 DEBUG nova.network.neutron [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:23:33 compute-0 systemd[1]: Started libpod-conmon-78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd.scope.
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.184089475 +0000 UTC m=+0.025073404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.295438003 +0000 UTC m=+0.136421962 container init 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.303973706 +0000 UTC m=+0.144957605 container start 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.307482126 +0000 UTC m=+0.148466015 container attach 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:23:33 compute-0 sad_lehmann[276664]: 167 167
Feb 25 12:23:33 compute-0 systemd[1]: libpod-78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd.scope: Deactivated successfully.
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.311426898 +0000 UTC m=+0.152410787 container died 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2cf80fee40b2b292734d191c812e955e5a87324a52446486a06654ebd0f54b-merged.mount: Deactivated successfully.
Feb 25 12:23:33 compute-0 podman[276648]: 2026-02-25 12:23:33.355984766 +0000 UTC m=+0.196968655 container remove 78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:23:33 compute-0 systemd[1]: libpod-conmon-78a783df614e1e829eb286788177d7fc9d7a480028f373dbc0dfcc8ea84a41fd.scope: Deactivated successfully.
Feb 25 12:23:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb 25 12:23:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb 25 12:23:33 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb 25 12:23:33 compute-0 ceph-mon[76335]: pgmap v1171: 305 pgs: 305 active+clean; 438 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Feb 25 12:23:33 compute-0 ceph-mon[76335]: osdmap e161: 3 total, 3 up, 3 in
Feb 25 12:23:33 compute-0 podman[276689]: 2026-02-25 12:23:33.563601543 +0000 UTC m=+0.060489192 container create aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:23:33 compute-0 systemd[1]: Started libpod-conmon-aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0.scope.
Feb 25 12:23:33 compute-0 podman[276689]: 2026-02-25 12:23:33.539612831 +0000 UTC m=+0.036500520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89398e98803c46b768b112f71f03e14d3e6a1a8ab4308eff8b7b112736434982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89398e98803c46b768b112f71f03e14d3e6a1a8ab4308eff8b7b112736434982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89398e98803c46b768b112f71f03e14d3e6a1a8ab4308eff8b7b112736434982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89398e98803c46b768b112f71f03e14d3e6a1a8ab4308eff8b7b112736434982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:33 compute-0 podman[276689]: 2026-02-25 12:23:33.661933731 +0000 UTC m=+0.158821430 container init aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:23:33 compute-0 podman[276689]: 2026-02-25 12:23:33.67524882 +0000 UTC m=+0.172136459 container start aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:23:33 compute-0 podman[276689]: 2026-02-25 12:23:33.679673456 +0000 UTC m=+0.176561155 container attach aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:23:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 463 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]: {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     "0": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "devices": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "/dev/loop3"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             ],
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_name": "ceph_lv0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_size": "21470642176",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "name": "ceph_lv0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "tags": {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_name": "ceph",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.crush_device_class": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.encrypted": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.objectstore": "bluestore",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_id": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.vdo": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.with_tpm": "0"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             },
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "vg_name": "ceph_vg0"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         }
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     ],
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     "1": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "devices": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "/dev/loop4"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             ],
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_name": "ceph_lv1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_size": "21470642176",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "name": "ceph_lv1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "tags": {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_name": "ceph",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.crush_device_class": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.encrypted": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.objectstore": "bluestore",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_id": "1",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.vdo": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.with_tpm": "0"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             },
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "vg_name": "ceph_vg1"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         }
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     ],
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     "2": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "devices": [
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "/dev/loop5"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             ],
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_name": "ceph_lv2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_size": "21470642176",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "name": "ceph_lv2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "tags": {
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.cluster_name": "ceph",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.crush_device_class": "",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.encrypted": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.objectstore": "bluestore",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osd_id": "2",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.vdo": "0",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:                 "ceph.with_tpm": "0"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             },
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "type": "block",
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:             "vg_name": "ceph_vg2"
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:         }
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]:     ]
Feb 25 12:23:33 compute-0 stoic_wescoff[276706]: }
Feb 25 12:23:33 compute-0 systemd[1]: libpod-aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0.scope: Deactivated successfully.
Feb 25 12:23:34 compute-0 podman[276715]: 2026-02-25 12:23:34.042118947 +0000 UTC m=+0.036684835 container died aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 12:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-89398e98803c46b768b112f71f03e14d3e6a1a8ab4308eff8b7b112736434982-merged.mount: Deactivated successfully.
Feb 25 12:23:34 compute-0 podman[276715]: 2026-02-25 12:23:34.081454786 +0000 UTC m=+0.076020664 container remove aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wescoff, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:23:34 compute-0 systemd[1]: libpod-conmon-aa147623a808046f01d6a4b19530c95ee1e42a2fc1cc730188daf924060463c0.scope: Deactivated successfully.
Feb 25 12:23:34 compute-0 sudo[276587]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:34 compute-0 sudo[276731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:23:34 compute-0 sudo[276731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:34 compute-0 sudo[276731]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:34 compute-0 sudo[276756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:23:34 compute-0 sudo[276756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:34 compute-0 ceph-mon[76335]: osdmap e162: 3 total, 3 up, 3 in
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.409 244018 DEBUG nova.compute.manager [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-unplugged-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.410 244018 DEBUG oslo_concurrency.lockutils [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.410 244018 DEBUG oslo_concurrency.lockutils [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.410 244018 DEBUG oslo_concurrency.lockutils [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.410 244018 DEBUG nova.compute.manager [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-unplugged-7298ef49-4582-410a-821d-bd43bc865fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.411 244018 DEBUG nova.compute.manager [req-31f1dca0-563c-488f-a420-9546f8504d5c req-819f637a-d0cc-4055-a19a-3acf9f6496d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-unplugged-7298ef49-4582-410a-821d-bd43bc865fba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:23:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:34 compute-0 nova_compute[244014]: 2026-02-25 12:23:34.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.576824931 +0000 UTC m=+0.061535172 container create b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:23:34 compute-0 systemd[1]: Started libpod-conmon-b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3.scope.
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.550420719 +0000 UTC m=+0.035131040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.661192261 +0000 UTC m=+0.145902552 container init b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.667424368 +0000 UTC m=+0.152134619 container start b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.671611988 +0000 UTC m=+0.156322299 container attach b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:23:34 compute-0 vigilant_elgamal[276809]: 167 167
Feb 25 12:23:34 compute-0 systemd[1]: libpod-b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3.scope: Deactivated successfully.
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.67380496 +0000 UTC m=+0.158515201 container died b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 12:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1ea8b97f782ea9f6fb98e30be3b93fea8263344ef397a3146a7b3e3343547fb-merged.mount: Deactivated successfully.
Feb 25 12:23:34 compute-0 podman[276793]: 2026-02-25 12:23:34.7201864 +0000 UTC m=+0.204896641 container remove b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:23:34 compute-0 systemd[1]: libpod-conmon-b0abdfa78faa57b84c42e4a3082e7205051566f0e3ffecad2b0200593c41f0e3.scope: Deactivated successfully.
Feb 25 12:23:34 compute-0 podman[276832]: 2026-02-25 12:23:34.914893069 +0000 UTC m=+0.047960545 container create 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:23:34 compute-0 systemd[1]: Started libpod-conmon-9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4.scope.
Feb 25 12:23:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456ce67e2c73b9482ee6ce4c0d34428dd98e93363b5e82a2516c9fe532502ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456ce67e2c73b9482ee6ce4c0d34428dd98e93363b5e82a2516c9fe532502ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456ce67e2c73b9482ee6ce4c0d34428dd98e93363b5e82a2516c9fe532502ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456ce67e2c73b9482ee6ce4c0d34428dd98e93363b5e82a2516c9fe532502ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:34 compute-0 podman[276832]: 2026-02-25 12:23:34.89733675 +0000 UTC m=+0.030404256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:23:35 compute-0 podman[276832]: 2026-02-25 12:23:35.009422659 +0000 UTC m=+0.142490225 container init 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:23:35 compute-0 podman[276832]: 2026-02-25 12:23:35.018513078 +0000 UTC m=+0.151580584 container start 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:23:35 compute-0 podman[276832]: 2026-02-25 12:23:35.022268884 +0000 UTC m=+0.155336460 container attach 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.160 244018 INFO nova.virt.libvirt.driver [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Snapshot image upload complete
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.160 244018 INFO nova.compute.manager [None req-ed9fd4e8-bcd2-4103-a2ad-44f939653565 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Took 5.71 seconds to snapshot the instance on the hypervisor.
Feb 25 12:23:35 compute-0 ceph-mon[76335]: pgmap v1174: 305 pgs: 305 active+clean; 463 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.502 244018 INFO nova.network.neutron [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Port dc98a932-5f79-4db0-8662-1a5f5de8adff from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.503 244018 DEBUG nova.network.neutron [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [{"id": "7298ef49-4582-410a-821d-bd43bc865fba", "address": "fa:16:3e:ba:e5:c6", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7298ef49-45", "ovs_interfaceid": "7298ef49-4582-410a-821d-bd43bc865fba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.527 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-8715ba65-46d1-406d-8a4d-b5ff5067e2c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:35 compute-0 nova_compute[244014]: 2026-02-25 12:23:35.561 244018 DEBUG oslo_concurrency.lockutils [None req-aaff00a4-ee8a-4019-b2f7-f0265699d465 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-8715ba65-46d1-406d-8a4d-b5ff5067e2c1-dc98a932-5f79-4db0-8662-1a5f5de8adff" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:35 compute-0 lvm[276926]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:23:35 compute-0 lvm[276926]: VG ceph_vg0 finished
Feb 25 12:23:35 compute-0 lvm[276927]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:23:35 compute-0 lvm[276927]: VG ceph_vg1 finished
Feb 25 12:23:35 compute-0 lvm[276929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:23:35 compute-0 lvm[276929]: VG ceph_vg2 finished
Feb 25 12:23:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 463 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Feb 25 12:23:35 compute-0 serene_panini[276848]: {}
Feb 25 12:23:35 compute-0 systemd[1]: libpod-9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4.scope: Deactivated successfully.
Feb 25 12:23:35 compute-0 systemd[1]: libpod-9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4.scope: Consumed 1.127s CPU time.
Feb 25 12:23:35 compute-0 podman[276932]: 2026-02-25 12:23:35.885033672 +0000 UTC m=+0.036082317 container died 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-456ce67e2c73b9482ee6ce4c0d34428dd98e93363b5e82a2516c9fe532502ff9-merged.mount: Deactivated successfully.
Feb 25 12:23:35 compute-0 podman[276932]: 2026-02-25 12:23:35.934744727 +0000 UTC m=+0.085793312 container remove 9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:23:35 compute-0 systemd[1]: libpod-conmon-9c89bf5e9544ffb7b04b673f8eb6d8ce508db0a171c897266615c6f544d573a4.scope: Deactivated successfully.
Feb 25 12:23:35 compute-0 sudo[276756]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:23:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:23:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.047 244018 DEBUG nova.network.neutron [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.074 244018 INFO nova.compute.manager [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Took 2.84 seconds to deallocate network for instance.
Feb 25 12:23:36 compute-0 sudo[276948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:23:36 compute-0 sudo[276948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:23:36 compute-0 sudo[276948]: pam_unix(sudo:session): session closed for user root
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.134 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.134 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.363 244018 DEBUG oslo_concurrency.processutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284856103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.921 244018 DEBUG oslo_concurrency.processutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.928 244018 DEBUG nova.compute.provider_tree [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.949 244018 DEBUG nova.scheduler.client.report [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:36 compute-0 nova_compute[244014]: 2026-02-25 12:23:36.975 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:37 compute-0 ceph-mon[76335]: pgmap v1175: 305 pgs: 305 active+clean; 463 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Feb 25 12:23:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:23:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3284856103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.010 244018 INFO nova.scheduler.client.report [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 8715ba65-46d1-406d-8a4d-b5ff5067e2c1
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.095 244018 DEBUG oslo_concurrency.lockutils [None req-84a49e83-a5c3-422f-9e92-7977531741fe ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb 25 12:23:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb 25 12:23:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb 25 12:23:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 405 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.867 244018 DEBUG nova.compute.manager [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.868 244018 DEBUG oslo_concurrency.lockutils [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.868 244018 DEBUG oslo_concurrency.lockutils [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.868 244018 DEBUG oslo_concurrency.lockutils [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.869 244018 DEBUG nova.compute.manager [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] No waiting events found dispatching network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.869 244018 WARNING nova.compute.manager [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received unexpected event network-vif-plugged-7298ef49-4582-410a-821d-bd43bc865fba for instance with vm_state deleted and task_state None.
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.869 244018 DEBUG nova.compute.manager [req-0d0f8ecd-8c0d-4ab5-b315-7164e845d60e req-00d5e313-84fb-47a3-b7f9-de4b6f81d839 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Received event network-vif-deleted-7298ef49-4582-410a-821d-bd43bc865fba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:37 compute-0 nova_compute[244014]: 2026-02-25 12:23:37.907 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.152 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.153 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.153 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.154 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.154 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.156 244018 INFO nova.compute.manager [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Terminating instance
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.158 244018 DEBUG nova.compute.manager [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.168 244018 INFO nova.virt.libvirt.driver [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Instance destroyed successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.168 244018 DEBUG nova.objects.instance [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid cc067b57-66e5-499d-ac85-a81e91c1c978 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.189 244018 DEBUG nova.virt.libvirt.vif [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1006495350',display_name='tempest-ImagesTestJSON-server-1006495350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1006495350',id=34,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:23:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-9a0rzzlw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:23:35Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=cc067b57-66e5-499d-ac85-a81e91c1c978,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.189 244018 DEBUG nova.network.os_vif_util [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "57720a37-b288-4d33-9b7f-4493075cd82e", "address": "fa:16:3e:33:0b:0f", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57720a37-b2", "ovs_interfaceid": "57720a37-b288-4d33-9b7f-4493075cd82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.190 244018 DEBUG nova.network.os_vif_util [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.191 244018 DEBUG os_vif [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.194 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57720a37-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.196 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.201 244018 INFO os_vif [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:0f,bridge_name='br-int',has_traffic_filtering=True,id=57720a37-b288-4d33-9b7f-4493075cd82e,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57720a37-b2')
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.245 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.246 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.246 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.246 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.247 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.248 244018 INFO nova.compute.manager [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Terminating instance
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.250 244018 DEBUG nova.compute.manager [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:23:38 compute-0 kernel: tap67db480d-6a (unregistering): left promiscuous mode
Feb 25 12:23:38 compute-0 NetworkManager[49836]: <info>  [1772022218.3036] device (tap67db480d-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.307 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.307 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.308 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.308 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.308 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.310 244018 INFO nova.compute.manager [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Terminating instance
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.313 244018 DEBUG nova.compute.manager [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00290|binding|INFO|Releasing lport 67db480d-6a66-4c54-be9c-5375a0d664cd from this chassis (sb_readonly=0)
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00291|binding|INFO|Setting lport 67db480d-6a66-4c54-be9c-5375a0d664cd down in Southbound
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00292|binding|INFO|Removing iface tap67db480d-6a ovn-installed in OVS
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.326 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ac:51 10.100.0.11'], port_security=['fa:16:3e:4a:ac:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '33db1662-e67d-41de-b8d6-ea93b40cf7cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc690998-27c4-44c1-9496-60c99dea61b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=67db480d-6a66-4c54-be9c-5375a0d664cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.329 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 67db480d-6a66-4c54-be9c-5375a0d664cd in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.331 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08121372-a435-401a-b405-778e10d8c2e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.333 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56da8d5d-beb4-4c28-891c-9ac7b2136196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.334 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace which is not needed anymore
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 14.459s CPU time.
Feb 25 12:23:38 compute-0 systemd-machined[210048]: Machine qemu-35-instance-0000001f terminated.
Feb 25 12:23:38 compute-0 kernel: tape66ad0e5-f6 (unregistering): left promiscuous mode
Feb 25 12:23:38 compute-0 NetworkManager[49836]: <info>  [1772022218.3704] device (tape66ad0e5-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00293|binding|INFO|Releasing lport e66ad0e5-f667-4662-b49b-13be01718e41 from this chassis (sb_readonly=0)
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00294|binding|INFO|Setting lport e66ad0e5-f667-4662-b49b-13be01718e41 down in Southbound
Feb 25 12:23:38 compute-0 ovn_controller[147040]: 2026-02-25T12:23:38Z|00295|binding|INFO|Removing iface tape66ad0e5-f6 ovn-installed in OVS
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.393 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:d5:7d 10.100.0.8'], port_security=['fa:16:3e:2d:d5:7d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a0fa8542-8f2c-4a70-be2d-02c4139e33aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '892810ff108142de9ed0a316aeee727f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46687f36-faa2-4f92-82ce-c79469270051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a003c918-adf1-4a59-b001-17237d07e18b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e66ad0e5-f667-4662-b49b-13be01718e41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 12.190s CPU time.
Feb 25 12:23:38 compute-0 systemd-machined[210048]: Machine qemu-37-instance-00000021 terminated.
Feb 25 12:23:38 compute-0 ceph-mon[76335]: osdmap e163: 3 total, 3 up, 3 in
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : haproxy version is 2.8.14-c23fe91
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : path to executable is /usr/sbin/haproxy
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [WARNING]  (273952) : Exiting Master process...
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [ALERT]    (273952) : Current worker (273959) exited with code 143 (Terminated)
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [WARNING]  (273952) : All workers exited. Exiting... (0)
Feb 25 12:23:38 compute-0 systemd[1]: libpod-015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 podman[277039]: 2026-02-25 12:23:38.485097969 +0000 UTC m=+0.045475825 container died 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.488 244018 INFO nova.virt.libvirt.driver [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance destroyed successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.489 244018 DEBUG nova.objects.instance [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0-userdata-shm.mount: Deactivated successfully.
Feb 25 12:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef7d83654588edb9f69d59a0176fd12c6608dd39d362a35f0ac0ad74b24bf41d-merged.mount: Deactivated successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.518 244018 INFO nova.virt.libvirt.driver [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Deleting instance files /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978_del
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.519 244018 INFO nova.virt.libvirt.driver [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Deletion of /var/lib/nova/instances/cc067b57-66e5-499d-ac85-a81e91c1c978_del complete
Feb 25 12:23:38 compute-0 podman[277039]: 2026-02-25 12:23:38.531229571 +0000 UTC m=+0.091607427 container cleanup 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:23:38 compute-0 systemd[1]: libpod-conmon-015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.556 244018 INFO nova.virt.libvirt.driver [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Instance destroyed successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.557 244018 DEBUG nova.objects.instance [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lazy-loading 'resources' on Instance uuid a0fa8542-8f2c-4a70-be2d-02c4139e33aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:38 compute-0 podman[277089]: 2026-02-25 12:23:38.599647168 +0000 UTC m=+0.047104311 container remove 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29ce0ba4-de90-4eb3-99c9-fb8f99ad67c3]: (4, ('Wed Feb 25 12:23:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0)\n015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0\nWed Feb 25 12:23:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0)\n015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.605 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7be09-0f2d-43a4-890a-937c963c3e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.606 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 kernel: tap08121372-a0: left promiscuous mode
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c28f08b-7ccf-4161-8d26-ef3473aabf4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b28c8608-3bdd-49b0-adfa-c13f2d88b305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.641 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee67881-d79a-4414-bd52-a5738c3ac3cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c16cad-3c71-4358-9479-53ec91d08884]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411679, 'reachable_time': 16420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277119, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.654 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.655 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c3ed8e-8cfd-43ab-822d-0be24955a44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d08121372\x2da435\x2d401a\x2db405\x2d778e10d8c2e2.mount: Deactivated successfully.
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.655 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e66ad0e5-f667-4662-b49b-13be01718e41 in datapath a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a unbound from our chassis
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.657 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.658 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7e09a41f-f958-41a6-9ead-c4cf03e81010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.658 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a namespace which is not needed anymore
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [NOTICE]   (275235) : haproxy version is 2.8.14-c23fe91
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [NOTICE]   (275235) : path to executable is /usr/sbin/haproxy
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [WARNING]  (275235) : Exiting Master process...
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [ALERT]    (275235) : Current worker (275237) exited with code 143 (Terminated)
Feb 25 12:23:38 compute-0 neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a[275231]: [WARNING]  (275235) : All workers exited. Exiting... (0)
Feb 25 12:23:38 compute-0 systemd[1]: libpod-d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 podman[277137]: 2026-02-25 12:23:38.788211133 +0000 UTC m=+0.049541520 container died d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.802 244018 DEBUG nova.virt.libvirt.vif [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.803 244018 DEBUG nova.network.os_vif_util [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.803 244018 DEBUG nova.network.os_vif_util [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.804 244018 DEBUG os_vif [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.805 244018 DEBUG nova.virt.libvirt.vif [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2085604757',display_name='tempest-ServersTestJSON-server-2085604757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2085604757',id=33,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOzqgxgCIo7l7lqm0S8dDx/L4vtjhysLouB0IheKsPVXTqKSUAZWo1Tzmr0uetVGx7aDsqQALOyBqtcoDD9ZYLfXt3j1TD852rnw7RJo5Dmtus4guS6p74HOa1FoQEOTg==',key_name='tempest-keypair-1324311740',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:23:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='892810ff108142de9ed0a316aeee727f',ramdisk_id='',reservation_id='r-hj81v0hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-409113905',owner_user_name='tempest-ServersTestJSON-409113905-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:23:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6663c48526ff47d0a45fdb4e651bcb0a',uuid=a0fa8542-8f2c-4a70-be2d-02c4139e33aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.806 244018 DEBUG nova.network.os_vif_util [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converting VIF {"id": "e66ad0e5-f667-4662-b49b-13be01718e41", "address": "fa:16:3e:2d:d5:7d", "network": {"id": "a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a", "bridge": "br-int", "label": "tempest-ServersTestJSON-1423404129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "892810ff108142de9ed0a316aeee727f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66ad0e5-f6", "ovs_interfaceid": "e66ad0e5-f667-4662-b49b-13be01718e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.806 244018 DEBUG nova.network.os_vif_util [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.806 244018 DEBUG os_vif [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67db480d-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.815 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape66ad0e5-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.822 244018 INFO os_vif [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a')
Feb 25 12:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1984c0fe71000529616af06823881f6d1e83c47268daad7720b31014f79041c-merged.mount: Deactivated successfully.
Feb 25 12:23:38 compute-0 podman[277137]: 2026-02-25 12:23:38.829516038 +0000 UTC m=+0.090846395 container cleanup d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:23:38 compute-0 systemd[1]: libpod-conmon-d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d.scope: Deactivated successfully.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.859 244018 INFO os_vif [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:d5:7d,bridge_name='br-int',has_traffic_filtering=True,id=e66ad0e5-f667-4662-b49b-13be01718e41,network=Network(a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66ad0e5-f6')
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.894 244018 INFO nova.compute.manager [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Took 0.74 seconds to destroy the instance on the hypervisor.
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.895 244018 DEBUG oslo.service.loopingcall [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.896 244018 DEBUG nova.compute.manager [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.896 244018 DEBUG nova.network.neutron [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:23:38 compute-0 podman[277174]: 2026-02-25 12:23:38.918665245 +0000 UTC m=+0.062637903 container remove d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.925 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f49957e7-ed7d-4a90-9185-4bb9f574aaa4]: (4, ('Wed Feb 25 12:23:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a (d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d)\nd52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d\nWed Feb 25 12:23:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a (d52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d)\nd52785050b2469188006edd505af59f68f6931240c0135093b058de82bff7e3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.926 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d23d158-ef13-4207-8fa1-ef5f8bb13661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.927 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7340f0b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 kernel: tapa7340f0b-d0: left promiscuous mode
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.935 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b3f172-79db-4d80-9e72-8744f1692896]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 nova_compute[244014]: 2026-02-25 12:23:38.937 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.956 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[831b9e0d-b668-4607-b8ad-e26d0ca1f9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.957 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f8538d1f-5cc9-466a-b3a2-31be9388d678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.982 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2432462b-6607-49a4-8e33-ca1d8c20fd8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415949, 'reachable_time': 39520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277215, 'error': None, 'target': 'ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.984 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7340f0b-d2bf-4d8e-b253-1bd6a6bea50a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:23:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:38.985 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0f182cbd-9327-47a9-a877-5898aa47686f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.135 244018 INFO nova.virt.libvirt.driver [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Deleting instance files /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd_del
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.135 244018 INFO nova.virt.libvirt.driver [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Deletion of /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd_del complete
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.176 244018 INFO nova.virt.libvirt.driver [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Deleting instance files /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa_del
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.177 244018 INFO nova.virt.libvirt.driver [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Deletion of /var/lib/nova/instances/a0fa8542-8f2c-4a70-be2d-02c4139e33aa_del complete
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.186 244018 INFO nova.compute.manager [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 0.94 seconds to destroy the instance on the hypervisor.
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.186 244018 DEBUG oslo.service.loopingcall [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.186 244018 DEBUG nova.compute.manager [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.186 244018 DEBUG nova.network.neutron [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.230 244018 INFO nova.compute.manager [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Took 0.92 seconds to destroy the instance on the hypervisor.
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.230 244018 DEBUG oslo.service.loopingcall [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.230 244018 DEBUG nova.compute.manager [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.230 244018 DEBUG nova.network.neutron [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:23:39 compute-0 ceph-mon[76335]: pgmap v1177: 305 pgs: 305 active+clean; 405 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Feb 25 12:23:39 compute-0 systemd[1]: run-netns-ovnmeta\x2da7340f0b\x2dd2bf\x2d4d8e\x2db253\x2d1bd6a6bea50a.mount: Deactivated successfully.
Feb 25 12:23:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb 25 12:23:39 compute-0 nova_compute[244014]: 2026-02-25 12:23:39.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb 25 12:23:39 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb 25 12:23:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 405 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.287 244018 DEBUG nova.compute.manager [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-unplugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.288 244018 DEBUG oslo_concurrency.lockutils [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.288 244018 DEBUG oslo_concurrency.lockutils [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.288 244018 DEBUG oslo_concurrency.lockutils [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.289 244018 DEBUG nova.compute.manager [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-unplugged-67db480d-6a66-4c54-be9c-5375a0d664cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:40 compute-0 nova_compute[244014]: 2026-02-25 12:23:40.289 244018 DEBUG nova.compute.manager [req-a2c31a5f-625f-4b8f-9c5e-fd3cefc00faf req-47d59c92-a584-4a42-b37b-513e1b3ed52a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-unplugged-67db480d-6a66-4c54-be9c-5375a0d664cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:23:40 compute-0 ceph-mon[76335]: osdmap e164: 3 total, 3 up, 3 in
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.042 244018 DEBUG nova.network.neutron [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.064 244018 INFO nova.compute.manager [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 1.88 seconds to deallocate network for instance.
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.121 244018 DEBUG nova.network.neutron [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.135 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.136 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.146 244018 INFO nova.compute.manager [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Took 2.25 seconds to deallocate network for instance.
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.159 244018 DEBUG nova.compute.manager [req-c61a8752-626a-4df2-b9ff-77e32cb37d82 req-66fd6029-a678-4645-b51c-826cc2ce1c65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-deleted-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.212 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.250 244018 DEBUG oslo_concurrency.processutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.310 244018 DEBUG nova.network.neutron [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.344 244018 INFO nova.compute.manager [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Took 2.11 seconds to deallocate network for instance.
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.404 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:41 compute-0 ceph-mon[76335]: pgmap v1179: 305 pgs: 305 active+clean; 405 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Feb 25 12:23:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861128788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 301 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.749 244018 DEBUG oslo_concurrency.processutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.756 244018 DEBUG nova.compute.provider_tree [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.775 244018 DEBUG nova.scheduler.client.report [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.795 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.797 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.825 244018 INFO nova.scheduler.client.report [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 33db1662-e67d-41de-b8d6-ea93b40cf7cd
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.885 244018 DEBUG oslo_concurrency.processutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:41 compute-0 nova_compute[244014]: 2026-02-25 12:23:41.912 244018 DEBUG oslo_concurrency.lockutils [None req-9986f489-a621-464f-be59-046cd96a6b5d ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.025 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022207.0241892, cc067b57-66e5-499d-ac85-a81e91c1c978 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.025 244018 INFO nova.compute.manager [-] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] VM Stopped (Lifecycle Event)
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.047 244018 DEBUG nova.compute.manager [None req-f4c3885a-1af4-4814-9298-32ae883fcdeb - - - - - -] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050664812681567 of space, bias 1.0, pg target 0.331519944380447 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0026454434944734586 of space, bias 1.0, pg target 0.7936330483420376 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.689784792272686e-07 of space, bias 4.0, pg target 0.0006827741750727223 quantized to 16 (current 16)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:23:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:23:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459311617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.417 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.417 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.418 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.418 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.418 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.419 244018 WARNING nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd for instance with vm_state deleted and task_state None.
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.419 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-vif-unplugged-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.419 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.420 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.420 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.420 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] No waiting events found dispatching network-vif-unplugged-e66ad0e5-f667-4662-b49b-13be01718e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.421 244018 WARNING nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received unexpected event network-vif-unplugged-e66ad0e5-f667-4662-b49b-13be01718e41 for instance with vm_state deleted and task_state None.
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.421 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.421 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.421 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.422 244018 DEBUG oslo_concurrency.lockutils [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.422 244018 DEBUG nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] No waiting events found dispatching network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.422 244018 WARNING nova.compute.manager [req-7fc7c419-5562-480c-834e-28839af9b14a req-e828edb7-2c6b-4623-8efe-b1dec0696ea5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received unexpected event network-vif-plugged-e66ad0e5-f667-4662-b49b-13be01718e41 for instance with vm_state deleted and task_state None.
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.432 244018 DEBUG oslo_concurrency.processutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.439 244018 DEBUG nova.compute.provider_tree [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.453 244018 DEBUG nova.scheduler.client.report [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.489 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.492 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.514 244018 INFO nova.scheduler.client.report [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance cc067b57-66e5-499d-ac85-a81e91c1c978
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.542 244018 DEBUG oslo_concurrency.processutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2861128788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3459311617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:42 compute-0 nova_compute[244014]: 2026-02-25 12:23:42.608 244018 DEBUG oslo_concurrency.lockutils [None req-a9b5e59f-614a-4243-be0f-9f12588c363e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cc067b57-66e5-499d-ac85-a81e91c1c978" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114383197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.049 244018 DEBUG oslo_concurrency.processutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.055 244018 DEBUG nova.compute.provider_tree [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.081 244018 DEBUG nova.scheduler.client.report [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.118 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.152 244018 INFO nova.scheduler.client.report [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Deleted allocations for instance a0fa8542-8f2c-4a70-be2d-02c4139e33aa
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.274 244018 DEBUG oslo_concurrency.lockutils [None req-9cc5d22c-b00d-4c1a-bf94-8fe308ca92df 6663c48526ff47d0a45fdb4e651bcb0a 892810ff108142de9ed0a316aeee727f - - default default] Lock "a0fa8542-8f2c-4a70-be2d-02c4139e33aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
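
The acquired/released pairs with waited/held timings come from oslo.concurrency's lock wrapper (the inner function in lockutils.py cited by each line). Nova serializes lifecycle operations per instance by locking on the instance UUID; a minimal sketch of the pattern, with an illustrative body:

    from oslo_concurrency import lockutils

    # Sketch of the pattern behind the "Lock ... acquired/released" lines;
    # the decorator's wrapper logs the waited/held durations seen above.
    @lockutils.synchronized('a0fa8542-8f2c-4a70-be2d-02c4139e33aa')
    def do_terminate_instance():
        pass  # teardown runs with other lifecycle ops on this UUID excluded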
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.328 244018 DEBUG nova.compute.manager [req-bd052ce2-4d22-4ad6-a65d-e9b84bead56d req-17ec2faf-cbf7-49e1-8293-d597b778fa5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cc067b57-66e5-499d-ac85-a81e91c1c978] Received event network-vif-deleted-57720a37-b288-4d33-9b7f-4493075cd82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.328 244018 DEBUG nova.compute.manager [req-bd052ce2-4d22-4ad6-a65d-e9b84bead56d req-17ec2faf-cbf7-49e1-8293-d597b778fa5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Received event network-vif-deleted-e66ad0e5-f667-4662-b49b-13be01718e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:43 compute-0 ceph-mon[76335]: pgmap v1180: 305 pgs: 305 active+clean; 301 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Feb 25 12:23:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1114383197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.640 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.640 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.660 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:23:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 153 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 239 op/s
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.752 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.753 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.761 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.761 244018 INFO nova.compute.claims [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:43 compute-0 nova_compute[244014]: 2026-02-25 12:23:43.896 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:23:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395673310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.427 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.434 244018 DEBUG nova.compute.provider_tree [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.455 244018 DEBUG nova.scheduler.client.report [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.477 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.478 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.522 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.522 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
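
allocate_for_instance() is deliberately kicked off in the background so Neutron port creation overlaps with image and block-device preparation; the build joins the result before generating the guest XML. Nova does this with eventlet green threads; purely as an illustration of the overlap, a thread pool stands in for eventlet and both function bodies are hypothetical:

    from concurrent.futures import ThreadPoolExecutor

    def allocate_network():
        ...  # create and bind the Neutron port

    def prepare_disks():
        ...  # rbd import/resize, console log, etc.

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(allocate_network)   # "Allocating IP information in the background"
        prepare_disks()
        network_info = fut.result()           # joined before guest XML is built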
Feb 25 12:23:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:44 compute-0 ceph-mon[76335]: pgmap v1181: 305 pgs: 305 active+clean; 153 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 239 op/s
Feb 25 12:23:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2395673310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.769 244018 INFO nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.795 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.893 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.895 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.895 244018 INFO nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Creating image(s)
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.924 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.945 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.963 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:44 compute-0 nova_compute[244014]: 2026-02-25 12:23:44.966 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.048 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
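
The qemu-img probe runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed or hostile image cannot wedge the compute service. A sketch of the same call through the public API, assuming only that the base-image path exists:

    from oslo_concurrency import processutils

    # Reproduces the prlimit-guarded probe logged above.
    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)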
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.050 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.051 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.052 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.087 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.091 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.122 244018 DEBUG nova.policy [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.366 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.439 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
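
The root disk is materialized by importing the cached base image into the vms pool and then growing it to the flavor's root_gb; the resize target 1073741824 is exactly 1 GiB. Nova performs the resize through librbd (rbd_utils.resize); a sketch of the size arithmetic and the equivalent import step:

    from oslo_concurrency import processutils

    root_gb = 1                      # m1.nano's root disk
    size_bytes = root_gb * 1024 ** 3
    assert size_bytes == 1073741824  # the resize target logged above

    # Equivalent of the logged import (the resize itself goes via librbd):
    processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        'fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')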
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.535 244018 DEBUG nova.objects.instance [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid fb09cc9c-e6a9-4718-bb97-0df5558cb091 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.553 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.553 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Ensure instance console log exists: /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.554 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.555 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:45 compute-0 nova_compute[244014]: 2026-02-25 12:23:45.555 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 153 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.6 KiB/s wr, 148 op/s
Feb 25 12:23:46 compute-0 nova_compute[244014]: 2026-02-25 12:23:46.010 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Successfully created port: 8428e69d-5fa3-4f42-9801-6b72fc0fc05b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:23:46 compute-0 ceph-mon[76335]: pgmap v1182: 305 pgs: 305 active+clean; 153 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.6 KiB/s wr, 148 op/s
Feb 25 12:23:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:23:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/431338775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:23:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:23:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/431338775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.650 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Successfully updated port: 8428e69d-5fa3-4f42-9801-6b72fc0fc05b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.680 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.680 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.680 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:23:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 200 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.776 244018 DEBUG nova.compute.manager [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-changed-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.777 244018 DEBUG nova.compute.manager [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Refreshing instance network info cache due to event network-changed-8428e69d-5fa3-4f42-9801-6b72fc0fc05b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.777 244018 DEBUG oslo_concurrency.lockutils [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/431338775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:23:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/431338775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.861 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022212.8592236, 8715ba65-46d1-406d-8a4d-b5ff5067e2c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.862 244018 INFO nova.compute.manager [-] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] VM Stopped (Lifecycle Event)
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.882 244018 DEBUG nova.compute.manager [None req-eec23937-8827-4b3e-a595-57ad93324e54 - - - - - -] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:47 compute-0 nova_compute[244014]: 2026-02-25 12:23:47.965 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:23:48 compute-0 nova_compute[244014]: 2026-02-25 12:23:48.430 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:48 compute-0 nova_compute[244014]: 2026-02-25 12:23:48.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:48 compute-0 nova_compute[244014]: 2026-02-25 12:23:48.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:48 compute-0 ceph-mon[76335]: pgmap v1183: 305 pgs: 305 active+clean; 200 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:23:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb 25 12:23:49 compute-0 nova_compute[244014]: 2026-02-25 12:23:49.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb 25 12:23:49 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb 25 12:23:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 200 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.113 244018 DEBUG nova.network.neutron [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updating instance_info_cache with network_info: [{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.164 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.165 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Instance network_info: |[{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
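
One detail worth pulling out of the network_info blob: the devname tap8428e69d-5f is derived from the Neutron port UUID as "tap" plus its first 11 characters, keeping the name inside the kernel's 15-character interface-name limit (IFNAMSIZ):

    port_id = '8428e69d-5fa3-4f42-9801-6b72fc0fc05b'
    devname = 'tap' + port_id[:11]
    assert devname == 'tap8428e69d-5f'   # matches the log above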
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.165 244018 DEBUG oslo_concurrency.lockutils [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.165 244018 DEBUG nova.network.neutron [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Refreshing network info cache for port 8428e69d-5fa3-4f42-9801-6b72fc0fc05b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.169 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Start _get_guest_xml network_info=[{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.176 244018 WARNING nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.180 244018 DEBUG nova.virt.libvirt.host [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.181 244018 DEBUG nova.virt.libvirt.host [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.184 244018 DEBUG nova.virt.libvirt.host [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.184 244018 DEBUG nova.virt.libvirt.host [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
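
The host exposes no cgroups-v1 cpu controller but does have one under cgroups v2, so CPU tuning for the guest remains available. On a v2 host the enabled controllers are listed in a single file; a rough equivalent of the probe (the actual nova check may differ in detail):

    # cgroups v2: enabled controllers are space-separated in this file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    has_cpu = 'cpu' in controllers   # True on this host, per the log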
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.185 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.185 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.186 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.186 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.186 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.187 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.187 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.187 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.187 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.188 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.188 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.188 244018 DEBUG nova.virt.hardware [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
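
With no topology constraints from flavor or image (the 0:0:0 preferences fall back to limits of 65536 per axis), the search reduces to enumerating (sockets, cores, threads) triples whose product equals the vCPU count; for this 1-vCPU flavor the only candidate is 1:1:1. A sketch of that enumeration, not necessarily nova's exact iteration order:

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus,
        # subject to the per-axis maxima (effectively unbounded here).
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    assert list(possible_topologies(1)) == [(1, 1, 1)]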
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.192 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:50 compute-0 ceph-mon[76335]: osdmap e165: 3 total, 3 up, 3 in
Feb 25 12:23:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809554127' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.757 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.797 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:50 compute-0 nova_compute[244014]: 2026-02-25 12:23:50.803 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:23:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369727698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.347 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
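
The two ceph mon dump calls discover the monitor addresses that become the <host> entries of the guest's RBD disk source. A sketch of that discovery; the public_addr field assumes the older single-address form of the mon dump JSON (newer Ceph also exposes public_addrs):

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute('ceph', 'mon', 'dump', '--format=json',
                                  '--id', 'openstack',
                                  '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # "ip:port/nonce" -> "ip:port"
    hosts = [m['public_addr'].split('/')[0] for m in mons]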
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.348 244018 DEBUG nova.virt.libvirt.vif [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2091476591',display_name='tempest-ImagesTestJSON-server-2091476591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2091476591',id=35,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-55qce6pu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:44Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=fb09cc9c-e6a9-4718-bb97-0df5558cb091,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.348 244018 DEBUG nova.network.os_vif_util [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.349 244018 DEBUG nova.network.os_vif_util [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.350 244018 DEBUG nova.objects.instance [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb09cc9c-e6a9-4718-bb97-0df5558cb091 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
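
In the domain XML that follows, the <memory> element carries no unit attribute, so libvirt reads it as KiB; the flavor's memory_mb=128 therefore appears as:

    memory_kib = 128 * 1024
    assert memory_kib == 131072   # the <memory> value in the XML below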
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.368 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <uuid>fb09cc9c-e6a9-4718-bb97-0df5558cb091</uuid>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <name>instance-00000023</name>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-2091476591</nova:name>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:23:50</nova:creationTime>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <nova:port uuid="8428e69d-5fa3-4f42-9801-6b72fc0fc05b">
Feb 25 12:23:51 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <system>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="serial">fb09cc9c-e6a9-4718-bb97-0df5558cb091</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="uuid">fb09cc9c-e6a9-4718-bb97-0df5558cb091</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </system>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <os>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </os>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <features>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </features>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk">
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config">
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </source>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:23:51 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:20:a7:a4"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <target dev="tap8428e69d-5f"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/console.log" append="off"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <video>
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </video>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:23:51 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:23:51 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:23:51 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:23:51 compute-0 nova_compute[244014]: </domain>
Feb 25 12:23:51 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
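The XML dumped above is the complete domain definition nova passes to libvirt for instance-00000023: q35 machine type (from image_hw_machine_type), an RBD-backed vda plus a SATA cdrom for the config drive, an ethernet interface wired to tap8428e69d-5f, and the virtio rng requested by image_hw_rng_model. A minimal sketch of feeding such XML to libvirt through the Python bindings; the URI and function are assumptions for illustration, and nova itself goes through its own Guest wrapper rather than calling this directly:

    import libvirt

    def define_and_boot(xml: str) -> None:
        conn = libvirt.open('qemu:///system')   # local system hypervisor
        try:
            dom = conn.defineXML(xml)           # persist the definition
            dom.create()                        # power the guest on
        finally:
            conn.close()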
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.369 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Preparing to wait for external event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.370 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.370 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.371 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
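The Acquiring/acquired/released triple is oslo.concurrency's standard trace around the per-instance event table: the spawn thread registers a waiter for network-vif-plugged under the "<uuid>-events" lock before the port is actually plugged. The primitive itself, sketched with the same lock name; the body is a placeholder:

    from oslo_concurrency import lockutils

    # Same pattern as _create_or_get_event in the lines above.
    with lockutils.lock('fb09cc9c-e6a9-4718-bb97-0df5558cb091-events'):
        pass  # create or fetch the pending network-vif-plugged event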
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.372 244018 DEBUG nova.virt.libvirt.vif [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2091476591',display_name='tempest-ImagesTestJSON-server-2091476591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2091476591',id=35,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-55qce6pu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:23:44Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=fb09cc9c-e6a9-4718-bb97-0df5558cb091,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.372 244018 DEBUG nova.network.os_vif_util [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.373 244018 DEBUG nova.network.os_vif_util [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.374 244018 DEBUG os_vif [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.375 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.376 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.379 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8428e69d-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.380 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8428e69d-5f, col_values=(('external_ids', {'iface-id': '8428e69d-5fa3-4f42-9801-6b72fc0fc05b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:a7:a4', 'vm-uuid': 'fb09cc9c-e6a9-4718-bb97-0df5558cb091'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:51 compute-0 NetworkManager[49836]: <info>  [1772022231.3832] manager: (tap8428e69d-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.387 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.389 244018 INFO os_vif [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f')
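The plug itself is three idempotent OVSDB operations: ensure br-int exists, add the tap port, then set external_ids so ovn-controller can match the interface to its logical port. The ovs-vsctl equivalent of that transaction, driven from Python for consistency with the other sketches here (nova really goes through the ovsdbapp native IDL, not the CLI):

    import subprocess

    PORT = 'tap8428e69d-5f'
    subprocess.run(
        ['ovs-vsctl',
         '--may-exist', 'add-br', 'br-int',
         '--', '--may-exist', 'add-port', 'br-int', PORT,
         '--', 'set', 'Interface', PORT,
         'external_ids:iface-id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b',
         'external_ids:attached-mac=fa:16:3e:20:a7:a4'],
        check=True)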
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.444 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.444 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.445 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:20:a7:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.446 244018 INFO nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Using config drive
Feb 25 12:23:51 compute-0 nova_compute[244014]: 2026-02-25 12:23:51.475 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:51 compute-0 ceph-mon[76335]: pgmap v1185: 305 pgs: 305 active+clean; 200 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 12:23:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/809554127' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2369727698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:23:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.035 244018 INFO nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Creating config drive at /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.041 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw1ymhayr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.174 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw1ymhayr" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.206 244018 DEBUG nova.storage.rbd_utils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.210 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.369 244018 DEBUG oslo_concurrency.processutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.371 244018 INFO nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Deleting local config drive /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091/disk.config because it was imported into RBD.
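Because the instance is RBD-backed, the config drive follows the same path as the root disk: it is assembled locally with mkisofs, imported into the vms pool as <uuid>_disk.config (exactly the RBD source the SATA cdrom in the domain XML points at), and the local copy is removed. A condensed replay of the two commands from the log; the UUID and metadata directory are placeholders:

    import subprocess

    iso = '/var/lib/nova/instances/<uuid>/disk.config'
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-quiet', '-J', '-r', '-V', 'config-2',
                    '<metadata-dir>'], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '<uuid>_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)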
Feb 25 12:23:52 compute-0 kernel: tap8428e69d-5f: entered promiscuous mode
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.4245] manager: (tap8428e69d-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Feb 25 12:23:52 compute-0 ovn_controller[147040]: 2026-02-25T12:23:52Z|00296|binding|INFO|Claiming lport 8428e69d-5fa3-4f42-9801-6b72fc0fc05b for this chassis.
Feb 25 12:23:52 compute-0 ovn_controller[147040]: 2026-02-25T12:23:52Z|00297|binding|INFO|8428e69d-5fa3-4f42-9801-6b72fc0fc05b: Claiming fa:16:3e:20:a7:a4 10.100.0.8
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.441 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a7:a4 10.100.0.8'], port_security=['fa:16:3e:20:a7:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fb09cc9c-e6a9-4718-bb97-0df5558cb091', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8428e69d-5fa3-4f42-9801-6b72fc0fc05b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.443 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8428e69d-5fa3-4f42-9801-6b72fc0fc05b in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.445 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:23:52 compute-0 systemd-machined[210048]: New machine qemu-39-instance-00000023.
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[519d96db-5032-4ce2-ab37-869201241528]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.460 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.462 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9592a65b-06c9-4a64-a59a-f1732a09ccee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a13046aa-0d5e-4eff-bc3d-47019b42c8ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.476 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab4b290-d5b8-4ad4-b37b-9360cb16233d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:52 compute-0 ovn_controller[147040]: 2026-02-25T12:23:52Z|00298|binding|INFO|Setting lport 8428e69d-5fa3-4f42-9801-6b72fc0fc05b ovn-installed in OVS
Feb 25 12:23:52 compute-0 ovn_controller[147040]: 2026-02-25T12:23:52Z|00299|binding|INFO|Setting lport 8428e69d-5fa3-4f42-9801-6b72fc0fc05b up in Southbound
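ovn-controller's claim sequence closes the loop started by the DbSetCommand earlier: it matches the Interface's external_ids:iface-id against the Port_Binding's logical_port, claims the port for this chassis, marks it ovn-installed in OVS, and sets it up in the Southbound DB, which is what leads neutron to emit network-vif-plugged back to nova. An illustrative read-back of the key the match is made on:

    import subprocess

    out = subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', 'tap8428e69d-5f',
         'external_ids:iface-id'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # "8428e69d-5fa3-4f42-9801-6b72fc0fc05b"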
Feb 25 12:23:52 compute-0 systemd-udevd[277615]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.483 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.492 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[89c57c77-a09d-4d0c-aaf0-0e211dcbb223]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.5043] device (tap8428e69d-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.5056] device (tap8428e69d-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.520 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2467379e-4ead-4c26-9df1-6a6ee535430a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.528 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84ade817-d452-41b1-a34c-038275da9001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 systemd-udevd[277620]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.5301] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.562 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cc346036-ef91-4765-8729-78cdb6024ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.566 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[233420e8-6b14-42fc-93c5-97b8bc0e4528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.5916] device (tap6a1663dd-20): carrier: link connected
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.599 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e918122d-21a1-4fc7-919f-52702ea411c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.618 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2399e6f7-b184-48b4-96bb-1f770477fa87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420199, 'reachable_time': 39062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277646, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.632 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2eed0b-1608-4c55-9929-adc67b4eedea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420199, 'tstamp': 420199}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277647, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95cd040c-d491-40c3-a4c3-9e49b8b9e5dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420199, 'reachable_time': 39062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277648, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.670 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28bc6887-e5cd-4187-84cc-b711c3834d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b13444f-4621-4d20-8b4a-8e8c1b805ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.720 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:52 compute-0 NetworkManager[49836]: <info>  [1772022232.7231] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Feb 25 12:23:52 compute-0 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.735 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:23:52 compute-0 ovn_controller[147040]: 2026-02-25T12:23:52Z|00300|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.746 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.747 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da809637-9d02-4386-b7aa-12003f137940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.748 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
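The rendered haproxy config runs inside the ovnmeta-<network> namespace: it binds the link-local metadata address, forwards every request to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps on X-OVN-Network-ID so the agent can resolve the caller to a port in that network. From inside a guest on this network, the standard way to exercise the service the proxy fronts looks like this (illustrative; the JSON layout is the regular OpenStack metadata API):

    import json, urllib.request

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(json.loads(resp.read())['uuid'])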
Feb 25 12:23:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:52.748 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.766 244018 DEBUG nova.compute.manager [req-a5cb8814-c176-45f5-a8e2-49ca44b90af1 req-0bea2f0b-45e2-45e0-b95e-ee22381962af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.767 244018 DEBUG oslo_concurrency.lockutils [req-a5cb8814-c176-45f5-a8e2-49ca44b90af1 req-0bea2f0b-45e2-45e0-b95e-ee22381962af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.767 244018 DEBUG oslo_concurrency.lockutils [req-a5cb8814-c176-45f5-a8e2-49ca44b90af1 req-0bea2f0b-45e2-45e0-b95e-ee22381962af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.768 244018 DEBUG oslo_concurrency.lockutils [req-a5cb8814-c176-45f5-a8e2-49ca44b90af1 req-0bea2f0b-45e2-45e0-b95e-ee22381962af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:52 compute-0 nova_compute[244014]: 2026-02-25 12:23:52.768 244018 DEBUG nova.compute.manager [req-a5cb8814-c176-45f5-a8e2-49ca44b90af1 req-0bea2f0b-45e2-45e0-b95e-ee22381962af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Processing event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
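This is the other half of nova's external-event handshake: the waiter registered at 12:23:51 is popped and signalled by neutron's network-vif-plugged callback (the req-a5cb8814 context), unblocking the spawn thread. A stripped-down model of the prepare/pop pattern using threading.Event; nova's real implementation is eventlet-based and lives in nova.compute.manager.InstanceEvents:

    import threading

    _events = {}            # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()

    def prepare(uuid, name):
        with _lock:
            return _events.setdefault((uuid, name), threading.Event())

    def pop_and_signal(uuid, name):
        with _lock:
            ev = _events.pop((uuid, name), None)
        if ev:
            ev.set()

    # spawn thread:   prepare(u, 'network-vif-plugged-...').wait(300)
    # event callback: pop_and_signal(u, 'network-vif-plugged-...')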
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.060 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022233.0596159, fb09cc9c-e6a9-4718-bb97-0df5558cb091 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.061 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] VM Started (Lifecycle Event)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.064 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.069 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.072 244018 INFO nova.virt.libvirt.driver [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Instance spawned successfully.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.072 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.088 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.094 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.099 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.100 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.101 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.101 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.102 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.103 244018 DEBUG nova.virt.libvirt.driver [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:23:53 compute-0 podman[277722]: 2026-02-25 12:23:53.123905164 +0000 UTC m=+0.071699732 container create 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.137 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.137 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022233.06069, fb09cc9c-e6a9-4718-bb97-0df5558cb091 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.138 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] VM Paused (Lifecycle Event)
Feb 25 12:23:53 compute-0 systemd[1]: Started libpod-conmon-5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a.scope.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.162 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.165 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022233.0671656, fb09cc9c-e6a9-4718-bb97-0df5558cb091 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.165 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] VM Resumed (Lifecycle Event)
Feb 25 12:23:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:23:53 compute-0 podman[277722]: 2026-02-25 12:23:53.094347813 +0000 UTC m=+0.042142401 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde15e975be9682ca4b6427e9a4884dbac48c37fb6c688f492d5ea9b2508efce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.200 244018 INFO nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Took 8.31 seconds to spawn the instance on the hypervisor.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.201 244018 DEBUG nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.203 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 podman[277722]: 2026-02-25 12:23:53.203772136 +0000 UTC m=+0.151566724 container init 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:23:53 compute-0 podman[277722]: 2026-02-25 12:23:53.208328886 +0000 UTC m=+0.156123444 container start 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.212 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:23:53 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [NOTICE]   (277741) : New worker (277743) forked
Feb 25 12:23:53 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [NOTICE]   (277741) : Loading success.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.266 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] During sync_power_state the instance has a pending task (spawning). Skip.
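
The sync lines above compare the database's power_state 0 against the hypervisor's power_state 1 and then skip, because the instance still has a pending task (spawning). A simplified stand-in for that decision, not nova's actual _sync_instance_power_state; the constant values do match nova.compute.power_state (NOSTATE=0, RUNNING=1):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An in-flight task (here: 'spawning') owns the instance state,
            # so the periodic sync backs off rather than "fixing" it.
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'update DB: %s -> %s' % (db_power_state, vm_power_state)
        return 'in sync'

    assert sync_power_state(NOSTATE, RUNNING, 'spawning').startswith('skip')
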
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.309 244018 DEBUG nova.network.neutron [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updated VIF entry in instance network info cache for port 8428e69d-5fa3-4f42-9801-6b72fc0fc05b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.310 244018 DEBUG nova.network.neutron [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updating instance_info_cache with network_info: [{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
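
The instance_info_cache payload in the line above is plain JSON. A short sketch of pulling out the fields usually needed when reading these cache updates (the literal below is a trimmed excerpt of the logged structure):

    import json

    network_info = json.loads('''[{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b",
      "address": "fa:16:3e:20:a7:a4",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.8", "type": "fixed"}]}]},
      "devname": "tap8428e69d-5f", "active": false}]''')

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        # port id, MAC, tap device, fixed IPs, and whether neutron has
        # marked the port active yet (false here; true after vif-plugged)
        print(vif['id'], vif['address'], vif['devname'], ips, vif['active'])
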
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.314 244018 INFO nova.compute.manager [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Took 9.60 seconds to build instance.
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.331 244018 DEBUG oslo_concurrency.lockutils [req-5b7e946a-0943-4320-910a-dc49c9822f8c req-c0fe1d7a-e123-4640-8f2b-7c553d30d868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.335 244018 DEBUG oslo_concurrency.lockutils [None req-7e609e82-494b-48f2-b82f-a1360535c079 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.484 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022218.4838705, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.485 244018 INFO nova.compute.manager [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Stopped (Lifecycle Event)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.515 244018 DEBUG nova.compute.manager [None req-a8235722-39be-4bd3-be4a-3f1d688dc358 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.554 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022218.5541983, a0fa8542-8f2c-4a70-be2d-02c4139e33aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.555 244018 INFO nova.compute.manager [-] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] VM Stopped (Lifecycle Event)
Feb 25 12:23:53 compute-0 nova_compute[244014]: 2026-02-25 12:23:53.594 244018 DEBUG nova.compute.manager [None req-f425d6e8-42ec-4823-b88e-03e663f94a22 - - - - - -] [instance: a0fa8542-8f2c-4a70-be2d-02c4139e33aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:53 compute-0 ceph-mon[76335]: pgmap v1186: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Feb 25 12:23:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Feb 25 12:23:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:54 compute-0 ceph-mon[76335]: pgmap v1187: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.841 244018 DEBUG nova.compute.manager [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.879 244018 DEBUG nova.compute.manager [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.880 244018 DEBUG oslo_concurrency.lockutils [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.881 244018 DEBUG oslo_concurrency.lockutils [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.882 244018 DEBUG oslo_concurrency.lockutils [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.882 244018 DEBUG nova.compute.manager [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] No waiting events found dispatching network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.883 244018 WARNING nova.compute.manager [req-8a363d42-e010-43de-88fa-eaffd8c17117 req-bc724e35-f9b1-4efd-82dc-40f0387c2a35 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received unexpected event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b for instance with vm_state active and task_state image_snapshot.
Feb 25 12:23:54 compute-0 nova_compute[244014]: 2026-02-25 12:23:54.906 244018 INFO nova.compute.manager [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] instance snapshotting
Feb 25 12:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:23:55.009 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:23:55 compute-0 nova_compute[244014]: 2026-02-25 12:23:55.337 244018 INFO nova.virt.libvirt.driver [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Beginning live snapshot process
Feb 25 12:23:55 compute-0 nova_compute[244014]: 2026-02-25 12:23:55.507 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:23:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Feb 25 12:23:55 compute-0 podman[277785]: 2026-02-25 12:23:55.767361685 +0000 UTC m=+0.106999805 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:23:55 compute-0 podman[277786]: 2026-02-25 12:23:55.79739315 +0000 UTC m=+0.136243678 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:23:55 compute-0 nova_compute[244014]: 2026-02-25 12:23:55.996 244018 DEBUG nova.storage.rbd_utils [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(9f82783ac90b4c0ea69b160e03c67df8) on rbd image(fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:23:56 compute-0 nova_compute[244014]: 2026-02-25 12:23:56.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb 25 12:23:56 compute-0 ceph-mon[76335]: pgmap v1188: 305 pgs: 305 active+clean; 200 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Feb 25 12:23:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb 25 12:23:56 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb 25 12:23:56 compute-0 nova_compute[244014]: 2026-02-25 12:23:56.857 244018 DEBUG nova.storage.rbd_utils [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk@9f82783ac90b4c0ea69b160e03c67df8 to images/3dad1c68-e760-48b3-a8c1-46ce658c3e36 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:23:56 compute-0 nova_compute[244014]: 2026-02-25 12:23:56.981 244018 DEBUG nova.storage.rbd_utils [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/3dad1c68-e760-48b3-a8c1-46ce658c3e36 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:23:57 compute-0 nova_compute[244014]: 2026-02-25 12:23:57.203 244018 DEBUG nova.storage.rbd_utils [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(9f82783ac90b4c0ea69b160e03c67df8) on rbd image(fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:23:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 211 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 689 KiB/s wr, 142 op/s
Feb 25 12:23:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb 25 12:23:57 compute-0 ceph-mon[76335]: osdmap e166: 3 total, 3 up, 3 in
Feb 25 12:23:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb 25 12:23:57 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb 25 12:23:57 compute-0 nova_compute[244014]: 2026-02-25 12:23:57.853 244018 DEBUG nova.storage.rbd_utils [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(3dad1c68-e760-48b3-a8c1-46ce658c3e36) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
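
The create_snap/clone/flatten/remove_snap lines above (12:23:55.996 through 12:23:57.853) trace the Ceph side of the live snapshot. A sketch of the same sequence against the python rbd/rados bindings, with pool and image names taken from the log; error handling is omitted and the protect/unprotect calls assume v1 clone semantics, so treat this as an illustration rather than nova's rbd_utils:

    import rados
    import rbd

    SRC  = 'fb09cc9c-e6a9-4718-bb97-0df5558cb091_disk'
    SNAP = '9f82783ac90b4c0ea69b160e03c67df8'
    DST  = '3dad1c68-e760-48b3-a8c1-46ce658c3e36'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')

    src = rbd.Image(vms, SRC)
    src.create_snap(SNAP)
    src.protect_snap(SNAP)       # v1 clones require a protected snapshot

    # clone vms/SRC@SNAP -> images/DST, then flatten so the new image no
    # longer depends on the parent and the temporary snapshot can go away
    rbd.RBD().clone(vms, SRC, SNAP, images, DST)
    dst = rbd.Image(images, DST)
    dst.flatten()
    dst.create_snap('snap')      # the 'snap' seen above; Glance RBD layout
    dst.close()

    src.unprotect_snap(SNAP)
    src.remove_snap(SNAP)
    src.close()
    images.close()
    vms.close()
    cluster.shutdown()
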
Feb 25 12:23:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb 25 12:23:58 compute-0 ceph-mon[76335]: pgmap v1190: 305 pgs: 305 active+clean; 211 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 689 KiB/s wr, 142 op/s
Feb 25 12:23:58 compute-0 ceph-mon[76335]: osdmap e167: 3 total, 3 up, 3 in
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.897 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.897 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:23:58 compute-0 nova_compute[244014]: 2026-02-25 12:23:58.897 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb09cc9c-e6a9-4718-bb97-0df5558cb091 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:23:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb 25 12:23:58 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb 25 12:23:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:23:59 compute-0 nova_compute[244014]: 2026-02-25 12:23:59.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:23:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 211 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 938 KiB/s wr, 193 op/s
Feb 25 12:23:59 compute-0 ceph-mon[76335]: osdmap e168: 3 total, 3 up, 3 in
Feb 25 12:24:00 compute-0 nova_compute[244014]: 2026-02-25 12:24:00.461 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updating instance_info_cache with network_info: [{"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:00 compute-0 nova_compute[244014]: 2026-02-25 12:24:00.501 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-fb09cc9c-e6a9-4718-bb97-0df5558cb091" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:00 compute-0 nova_compute[244014]: 2026-02-25 12:24:00.502 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:24:00 compute-0 nova_compute[244014]: 2026-02-25 12:24:00.702 244018 INFO nova.virt.libvirt.driver [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Snapshot image upload complete
Feb 25 12:24:00 compute-0 nova_compute[244014]: 2026-02-25 12:24:00.703 244018 INFO nova.compute.manager [None req-4edbf9d7-fd22-4b2f-860b-053332bf27f8 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Took 5.80 seconds to snapshot the instance on the hypervisor.
Feb 25 12:24:00 compute-0 ceph-mon[76335]: pgmap v1193: 305 pgs: 305 active+clean; 211 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 938 KiB/s wr, 193 op/s
Feb 25 12:24:01 compute-0 nova_compute[244014]: 2026-02-25 12:24:01.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 246 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Feb 25 12:24:02 compute-0 ceph-mon[76335]: pgmap v1194: 305 pgs: 305 active+clean; 246 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Feb 25 12:24:03 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 25 12:24:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 246 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Feb 25 12:24:03 compute-0 ovn_controller[147040]: 2026-02-25T12:24:03Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:a7:a4 10.100.0.8
Feb 25 12:24:03 compute-0 ovn_controller[147040]: 2026-02-25T12:24:03Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:a7:a4 10.100.0.8
Feb 25 12:24:03 compute-0 nova_compute[244014]: 2026-02-25 12:24:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.944158) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022243944218, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2114, "num_deletes": 508, "total_data_size": 2798851, "memory_usage": 2859336, "flush_reason": "Manual Compaction"}
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022243954989, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2048237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23740, "largest_seqno": 25853, "table_properties": {"data_size": 2040305, "index_size": 4110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22066, "raw_average_key_size": 20, "raw_value_size": 2021485, "raw_average_value_size": 1864, "num_data_blocks": 183, "num_entries": 1084, "num_filter_entries": 1084, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022089, "oldest_key_time": 1772022089, "file_creation_time": 1772022243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10889 microseconds, and 6389 cpu microseconds.
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.955048) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2048237 bytes OK
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.955072) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.957105) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.957126) EVENT_LOG_v1 {"time_micros": 1772022243957119, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.957149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2788801, prev total WAL file size 2788801, number of live WAL files 2.
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.957882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2000KB)], [56(9023KB)]
Feb 25 12:24:03 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022243957945, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11288378, "oldest_snapshot_seqno": -1}
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4844 keys, 6987197 bytes, temperature: kUnknown
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022244004868, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 6987197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6955293, "index_size": 18674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 121336, "raw_average_key_size": 25, "raw_value_size": 6868538, "raw_average_value_size": 1417, "num_data_blocks": 767, "num_entries": 4844, "num_filter_entries": 4844, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.005119) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 6987197 bytes
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.006482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 240.2 rd, 148.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(8.9) write-amplify(3.4) OK, records in: 5820, records dropped: 976 output_compression: NoCompression
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.006512) EVENT_LOG_v1 {"time_micros": 1772022244006499, "job": 30, "event": "compaction_finished", "compaction_time_micros": 46999, "compaction_time_cpu_micros": 23670, "output_level": 6, "num_output_files": 1, "total_output_size": 6987197, "num_input_records": 5820, "num_output_records": 4844, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
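
The amplification figures in the compaction summary above can be reproduced from the byte counts in the surrounding EVENT_LOG entries (job 30 compacted one L0 table plus one L6 table into one L6 table):

    l0_in  = 2048237           # input table #58 (file_size, job 29 flush)
    l6_in  = 11288378 - l0_in  # input_data_size minus the L0 file (#56)
    out    = 6987197           # output table #59 (file_size, job 30)

    write_amp      = out / l0_in                    # ~3.4, matches the log
    read_write_amp = (l0_in + l6_in + out) / l0_in  # ~8.9, matches the log
    print(round(write_amp, 1), round(read_write_amp, 1))
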
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022244006925, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022244008203, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:03.957800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.008300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.008307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.008311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.008314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:24:04.008318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:24:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb 25 12:24:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.756 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.756 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.787 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.862 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.862 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.873 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.874 244018 INFO nova.compute.claims [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:04 compute-0 nova_compute[244014]: 2026-02-25 12:24:04.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:04 compute-0 ceph-mon[76335]: pgmap v1195: 305 pgs: 305 active+clean; 246 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.1 MiB/s wr, 156 op/s
Feb 25 12:24:04 compute-0 ceph-mon[76335]: osdmap e169: 3 total, 3 up, 3 in
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.064 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363375188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.617 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
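
The "Running cmd (subprocess)" / "returned: 0 in 0.552s" pair above is oslo.concurrency's processutils trace around nova querying pool capacity. A minimal reproduction of the same call, assuming the client.openstack keyring is readable on the host and that `ceph df --format=json` returns its usual top-level "stats" object (schema shown is an assumption, check your Ceph release):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
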
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.621 244018 DEBUG nova.compute.provider_tree [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.660 244018 DEBUG nova.scheduler.client.report [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
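
The inventory dict logged above is what placement schedules against; per resource class the usable capacity is (total - reserved) * allocation_ratio. Worked out with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
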
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.735 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.736 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.803 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.804 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.830 244018 INFO nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.853 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/363375188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.983 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.985 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:05 compute-0 nova_compute[244014]: 2026-02-25 12:24:05.986 244018 INFO nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Creating image(s)
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.025 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.055 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.075 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.078 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "cae6a91458b54c099a06581f61b4befbc4bd60d7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.078 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cae6a91458b54c099a06581f61b4befbc4bd60d7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.127 244018 DEBUG nova.policy [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
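The failed check above is oslo.policy at work: with only 'reader' and 'member' roles and is_admin False, the network:attach_external_network rule does not pass. A self-contained sketch of that evaluation; the 'is_admin:True' rule string is an assumed stand-in for nova's real default, not read from this deployment:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))  # assumed default
    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    # Returns False for these credentials, matching the DEBUG line above.
    print(enforcer.enforce('network:attach_external_network', {}, creds))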
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.373 244018 DEBUG nova.virt.libvirt.imagebackend [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/3dad1c68-e760-48b3-a8c1-46ce658c3e36/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/3dad1c68-e760-48b3-a8c1-46ce658c3e36/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 12:24:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:06.409 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:06.411 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.442 244018 DEBUG nova.virt.libvirt.imagebackend [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/3dad1c68-e760-48b3-a8c1-46ce658c3e36/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.443 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning images/3dad1c68-e760-48b3-a8c1-46ce658c3e36@snap to None/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
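The clone on this line is a copy-on-write child of the Glance image's 'snap' snapshot, so no data is copied up front. Roughly the same operation through the python-rbd bindings; the 'images' and 'vms' pool names are inferred from the rbd:// URLs and the import command elsewhere in this log:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('images') as parent, cluster.open_ioctx('vms') as child:
            # Layering must be enabled (and the snapshot protected) for COW children.
            rbd.RBD().clone(parent, '3dad1c68-e760-48b3-a8c1-46ce658c3e36', 'snap',
                            child, '1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk',
                            features=rbd.RBD_FEATURE_LAYERING)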
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.559 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "cae6a91458b54c099a06581f61b4befbc4bd60d7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.701 244018 DEBUG nova.objects.instance [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.720 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.721 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Ensure instance console log exists: /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.721 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.721 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:06 compute-0 nova_compute[244014]: 2026-02-25 12:24:06.721 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:06 compute-0 ceph-mon[76335]: pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Feb 25 12:24:07 compute-0 nova_compute[244014]: 2026-02-25 12:24:07.097 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Successfully created port: 5d14f032-9842-4c87-8531-26efd8b98065 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Feb 25 12:24:07 compute-0 nova_compute[244014]: 2026-02-25 12:24:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.353 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.353 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.375 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.380 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Successfully updated port: 5d14f032-9842-4c87-8531-26efd8b98065 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.390 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.391 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.421 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.422 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.422 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.425 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.445 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.446 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.532 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.537 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.538 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.553 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.554 244018 INFO nova.compute.claims [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.561 244018 DEBUG nova.compute.manager [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-changed-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.561 244018 DEBUG nova.compute.manager [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Refreshing instance network info cache due to event network-changed-5d14f032-9842-4c87-8531-26efd8b98065. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.562 244018 DEBUG oslo_concurrency.lockutils [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.579 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.632 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.646 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.787 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:08 compute-0 nova_compute[244014]: 2026-02-25 12:24:08.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:08 compute-0 ceph-mon[76335]: pgmap v1198: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Feb 25 12:24:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205458484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.318 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.325 244018 DEBUG nova.compute.provider_tree [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.348 244018 DEBUG nova.scheduler.client.report [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.372 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.374 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.378 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.394 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.395 244018 INFO nova.compute.claims [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.455 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.455 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.484 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.504 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.593 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.595 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.596 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Creating image(s)
Feb 25 12:24:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.625 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.657 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.688 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.692 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.721 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.1 MiB/s wr, 134 op/s
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.761 244018 DEBUG nova.network.neutron [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Updating instance_info_cache with network_info: [{"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
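The network_info cache entry above is a JSON list of VIFs; extracting the fixed address is a matter of walking that structure (trimmed literal below, values copied from the log):

    vif = {
        "id": "5d14f032-9842-4c87-8531-26efd8b98065",
        "address": "fa:16:3e:c8:53:75",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.10"}]}]},
    }
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(fixed_ip)  # 10.100.0.10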
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.774 244018 DEBUG nova.policy [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ab8469e67b142bab26cdd996097e148', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd255f122166b45af8d4d83610929aaea', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.779 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
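Nova wraps `qemu-img info` in its prlimit helper to cap the probe's memory and CPU time; stripped of the limits, the underlying call is just the following (key names per qemu-img's JSON output):

    import json
    import subprocess

    out = subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. raw 1073741824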
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.780 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.781 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.782 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.815 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.819 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.844 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.844 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Instance network_info: |[{"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.845 244018 DEBUG oslo_concurrency.lockutils [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.846 244018 DEBUG nova.network.neutron [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Refreshing network info cache for port 5d14f032-9842-4c87-8531-26efd8b98065 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.851 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Start _get_guest_xml network_info=[{"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:23:54Z,direct_url=<?>,disk_format='raw',id=3dad1c68-e760-48b3-a8c1-46ce658c3e36,min_disk=1,min_ram=0,name='tempest-test-snap-1699384219',owner='851e1817495944c1ad7ac421ab226d13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:24:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': '3dad1c68-e760-48b3-a8c1-46ce658c3e36'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.857 244018 WARNING nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.862 244018 DEBUG nova.virt.libvirt.host [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.862 244018 DEBUG nova.virt.libvirt.host [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.865 244018 DEBUG nova.virt.libvirt.host [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.865 244018 DEBUG nova.virt.libvirt.host [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.866 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.866 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:23:54Z,direct_url=<?>,disk_format='raw',id=3dad1c68-e760-48b3-a8c1-46ce658c3e36,min_disk=1,min_ram=0,name='tempest-test-snap-1699384219',owner='851e1817495944c1ad7ac421ab226d13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:24:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.866 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.867 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.868 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.868 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.868 244018 DEBUG nova.virt.hardware [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
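With no flavor or image topology preferences and a single vCPU, the only factorization under the 65536 limits is 1 socket x 1 core x 1 thread, which is what the driver settles on above. A toy re-derivation (nova's real search lives in nova.virt.hardware; this just enumerates factorizations):

    from itertools import product

    def possible_topologies(vcpus, limit=65536):
        span = range(1, min(vcpus, limit) + 1)
        return [(s, c, t) for s, c, t in product(span, span, span)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)]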
Feb 25 12:24:09 compute-0 nova_compute[244014]: 2026-02-25 12:24:09.872 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3205458484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.095 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.172 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] resizing rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
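After the import, the disk is grown to the flavor's 1 GiB root size; the equivalent resize through python-rbd, with the image and pool names taken from the surrounding lines:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, '2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk') as image:
                image.resize(1073741824)  # bytes, as logged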
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.263 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'migration_context' on Instance uuid 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425062096' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.285 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.287 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.287 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Ensure instance console log exists: /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.287 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.288 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.288 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.295 244018 DEBUG nova.compute.provider_tree [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.322 244018 DEBUG nova.scheduler.client.report [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.373 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.374 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.378 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.385 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.385 244018 INFO nova.compute.claims [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493404966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.469 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.497 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.501 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.538 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.539 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.569 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.589 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.609 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.610 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.612 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Successfully created port: f1012b5d-c005-4c9f-a5ec-1e01ce3c581f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.655 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.723 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.765 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.769 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.769 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Creating image(s)
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.807 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.844 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.879 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.897 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.944 244018 DEBUG nova.policy [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ab8469e67b142bab26cdd996097e148', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd255f122166b45af8d4d83610929aaea', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.952 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.953 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.965 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.986 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.987 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.987 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:10 compute-0 nova_compute[244014]: 2026-02-25 12:24:10.987 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.017 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.022 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 42abb4b3-2144-4c11-a04c-7641d725bcde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156225704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.060 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.064 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.066 244018 DEBUG nova.virt.libvirt.vif [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-768222662',display_name='tempest-ImagesTestJSON-server-768222662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-768222662',id=36,image_ref='3dad1c68-e760-48b3-a8c1-46ce658c3e36',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-6ochylfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fb09cc9c-e6a9-4718-bb97-0df5558cb091',image_min_disk='1',image_min_ram='0',image_owner_id='851e1817495944c1ad7ac421ab226d13',image_owner_project_name='tempest-ImagesTestJSON-1353368211',image_owner_user_name='tempest-ImagesTestJSON-1353368211-project-member',image_user_id='f1cfc3e643014e1c984d71182534fd24',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:05Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.066 244018 DEBUG nova.network.os_vif_util [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.067 244018 DEBUG nova.network.os_vif_util [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.068 244018 DEBUG nova.objects.instance [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:11 compute-0 ceph-mon[76335]: pgmap v1199: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.1 MiB/s wr, 134 op/s
Feb 25 12:24:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3425062096' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3493404966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1156225704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.127 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <uuid>1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d</uuid>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <name>instance-00000024</name>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-768222662</nova:name>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:09</nova:creationTime>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="3dad1c68-e760-48b3-a8c1-46ce658c3e36"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <nova:port uuid="5d14f032-9842-4c87-8531-26efd8b98065">
Feb 25 12:24:11 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="serial">1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="uuid">1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk">
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config">
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c8:53:75"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <target dev="tap5d14f032-98"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/console.log" append="off"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:11 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:11 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:11 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:11 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:11 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.128 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Preparing to wait for external event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.128 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.128 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.128 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.129 244018 DEBUG nova.virt.libvirt.vif [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-768222662',display_name='tempest-ImagesTestJSON-server-768222662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-768222662',id=36,image_ref='3dad1c68-e760-48b3-a8c1-46ce658c3e36',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-6ochylfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fb09cc9c-e6a9-4718-bb97-0df5558cb091',image_min_disk='1',image_min_ram='0',image_owner_id='851e1817495944c1ad7ac421ab226d13',image_owner_project_name='tempest-ImagesTestJSON-1353368211',image_owner_user_name='tempest-ImagesTestJSON-1353368211-project-member',image_user_id='f1cfc3e643014e1c984d71182534fd24',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:05Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.129 244018 DEBUG nova.network.os_vif_util [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.130 244018 DEBUG nova.network.os_vif_util [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.130 244018 DEBUG os_vif [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.131 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.131 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.135 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d14f032-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.136 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d14f032-98, col_values=(('external_ids', {'iface-id': '5d14f032-9842-4c87-8531-26efd8b98065', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:53:75', 'vm-uuid': '1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:11 compute-0 NetworkManager[49836]: <info>  [1772022251.1386] manager: (tap5d14f032-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.149 244018 INFO os_vif [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98')
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.212 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.219 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.219 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.220 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:c8:53:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.220 244018 INFO nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Using config drive
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.255 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729476925' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.291 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.297 244018 DEBUG nova.compute.provider_tree [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.313 244018 DEBUG nova.scheduler.client.report [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.436 244018 DEBUG nova.network.neutron [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Updated VIF entry in instance network info cache for port 5d14f032-9842-4c87-8531-26efd8b98065. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.436 244018 DEBUG nova.network.neutron [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Updating instance_info_cache with network_info: [{"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.437 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 42abb4b3-2144-4c11-a04c-7641d725bcde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.467 244018 DEBUG oslo_concurrency.lockutils [req-275d7622-56f7-41d5-836a-50721e50fc5b req-fa2725de-17d3-498c-ab1b-33064903c5eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.467 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.468 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.470 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.470 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.470 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.470 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.496 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.536 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.537 244018 INFO nova.compute.claims [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.545 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] resizing rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.607 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.607 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.659 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'migration_context' on Instance uuid 42abb4b3-2144-4c11-a04c-7641d725bcde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.673 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.694 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.695 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Ensure instance console log exists: /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.695 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.696 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.697 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.720 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 335 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 5.2 MiB/s wr, 129 op/s
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.782 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Successfully updated port: f1012b5d-c005-4c9f-a5ec-1e01ce3c581f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.788 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Successfully created port: 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.814 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.814 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquired lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.814 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.863 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.864 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.865 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Creating image(s)
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.894 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.926 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.957 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:11 compute-0 nova_compute[244014]: 2026-02-25 12:24:11.961 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.000 244018 DEBUG nova.policy [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ab8469e67b142bab26cdd996097e148', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd255f122166b45af8d4d83610929aaea', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.007 244018 INFO nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Creating config drive at /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.013 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoptd9713 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299698484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.043 244018 DEBUG nova.compute.manager [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-changed-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.043 244018 DEBUG nova.compute.manager [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Refreshing instance network info cache due to event network-changed-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.044 244018 DEBUG oslo_concurrency.lockutils [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.052 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.053 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.054 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.055 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.055 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1729476925' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2299698484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.085 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.090 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.131 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.158 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.162 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoptd9713" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.190 244018 DEBUG nova.storage.rbd_utils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.193 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.286 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.286 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.289 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.289 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.366 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.582 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] resizing rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.615 244018 DEBUG oslo_concurrency.processutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.616 244018 INFO nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Deleting local config drive /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d/disk.config because it was imported into RBD.
Feb 25 12:24:12 compute-0 kernel: tap5d14f032-98: entered promiscuous mode
Feb 25 12:24:12 compute-0 NetworkManager[49836]: <info>  [1772022252.6749] manager: (tap5d14f032-98): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Feb 25 12:24:12 compute-0 ovn_controller[147040]: 2026-02-25T12:24:12Z|00301|binding|INFO|Claiming lport 5d14f032-9842-4c87-8531-26efd8b98065 for this chassis.
Feb 25 12:24:12 compute-0 ovn_controller[147040]: 2026-02-25T12:24:12Z|00302|binding|INFO|5d14f032-9842-4c87-8531-26efd8b98065: Claiming fa:16:3e:c8:53:75 10.100.0.10
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.688 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:53:75 10.100.0.10'], port_security=['fa:16:3e:c8:53:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5d14f032-9842-4c87-8531-26efd8b98065) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:12 compute-0 ovn_controller[147040]: 2026-02-25T12:24:12Z|00303|binding|INFO|Setting lport 5d14f032-9842-4c87-8531-26efd8b98065 ovn-installed in OVS
Feb 25 12:24:12 compute-0 ovn_controller[147040]: 2026-02-25T12:24:12Z|00304|binding|INFO|Setting lport 5d14f032-9842-4c87-8531-26efd8b98065 up in Southbound
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.690 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5d14f032-9842-4c87-8531-26efd8b98065 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.693 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:24:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707182355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.708 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0c56af-c394-4e65-938b-ca02263ee4b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 systemd-machined[210048]: New machine qemu-40-instance-00000024.
Feb 25 12:24:12 compute-0 systemd-udevd[278883]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.723 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Successfully updated port: 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.728 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:12 compute-0 NetworkManager[49836]: <info>  [1772022252.7338] device (tap5d14f032-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:12 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Feb 25 12:24:12 compute-0 NetworkManager[49836]: <info>  [1772022252.7351] device (tap5d14f032-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.737 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'migration_context' on Instance uuid 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.745 244018 DEBUG nova.compute.provider_tree [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.746 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f0557b55-3507-4036-85bf-794d65eb8460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.749 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.749 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquired lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.749 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c47b89b0-c263-420b-829f-c7e1fde731b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.769 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.769 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Ensure instance console log exists: /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.770 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.770 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.770 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.774 244018 DEBUG nova.scheduler.client.report [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.780 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[21bf4732-45b5-4747-aadf-e6fa62a07e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.782 244018 DEBUG nova.compute.manager [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-changed-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.782 244018 DEBUG nova.compute.manager [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Refreshing instance network info cache due to event network-changed-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.782 244018 DEBUG oslo_concurrency.lockutils [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.800 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cddd588b-9bd9-412e-b400-14037d2ecc76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420199, 'reachable_time': 39062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278896, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.803 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.804 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.806 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.814 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.814 244018 INFO nova.compute.claims [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.818 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[625850fd-9d7a-48eb-9c64-c055fbf191e6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a1663dd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420209, 'tstamp': 420209}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278898, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a1663dd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420212, 'tstamp': 420212}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278898, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.821 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:12.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.875 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.880 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3959MB free_disk=59.942613834515214GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.882 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.882 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.898 244018 INFO nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.916 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.923 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:12 compute-0 nova_compute[244014]: 2026-02-25 12:24:12.946 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Successfully created port: 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.006 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.007 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.007 244018 INFO nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Creating image(s)
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.025 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.046 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.065 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.068 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:13 compute-0 ceph-mon[76335]: pgmap v1200: 305 pgs: 305 active+clean; 335 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 498 KiB/s rd, 5.2 MiB/s wr, 129 op/s
Feb 25 12:24:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/707182355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.144 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.145 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.145 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.145 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.165 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.167 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.189 244018 DEBUG nova.policy [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd9c37980bc44499d879a667da762ceb2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddc1ab49b1f14fa1b483e02aa65e18b8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.192 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:13.413 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.416 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.473 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] resizing rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.548 244018 DEBUG nova.objects.instance [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'migration_context' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.568 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.568 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Ensure instance console log exists: /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.569 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.569 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.569 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.678 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Updating instance_info_cache with network_info: [{"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.709 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Releasing lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.709 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Instance network_info: |[{"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.710 244018 DEBUG oslo_concurrency.lockutils [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.710 244018 DEBUG nova.network.neutron [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Refreshing network info cache for port f1012b5d-c005-4c9f-a5ec-1e01ce3c581f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.716 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Start _get_guest_xml network_info=[{"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753286574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.721 244018 WARNING nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.727 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.728 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.732 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.732 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.733 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.733 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.734 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.734 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.735 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.735 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.735 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.736 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.736 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.736 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.737 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.737 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.742 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 377 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 7.1 MiB/s wr, 174 op/s
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.771 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.789 244018 DEBUG nova.compute.provider_tree [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.814 244018 DEBUG nova.scheduler.client.report [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.842 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.843 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.848 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.912 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.913 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.927 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Updating instance_info_cache with network_info: [{"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.928 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022253.9255996, 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.929 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] VM Started (Lifecycle Event)
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.936 244018 INFO nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.959 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance fb09cc9c-e6a9-4718-bb97-0df5558cb091 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.959 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.960 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.960 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 42abb4b3-2144-4c11-a04c-7641d725bcde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.960 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8249bea8-a1c0-42f5-a4d1-12b74e20bccd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 2dd0d0eb-f5ba-419e-8233-256425af6119 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.964 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.966 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Releasing lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.967 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Instance network_info: |[{"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.967 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.971 244018 DEBUG oslo_concurrency.lockutils [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.971 244018 DEBUG nova.network.neutron [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Refreshing network info cache for port 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.974 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Start _get_guest_xml network_info=[{"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.981 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022253.926555, 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.981 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] VM Paused (Lifecycle Event)
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.985 244018 WARNING nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.988 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.989 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.994 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.995 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.995 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.996 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.996 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.996 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.997 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.997 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.997 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.998 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.998 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.998 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.999 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:13 compute-0 nova_compute[244014]: 2026-02-25 12:24:13.999 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.002 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.032 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.040 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.069 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.088 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.090 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.090 244018 INFO nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Creating image(s)
Feb 25 12:24:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3753286574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.119 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.143 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.165 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.169 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.196 244018 DEBUG nova.policy [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.201 244018 DEBUG nova.compute.manager [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.201 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.202 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.202 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.202 244018 DEBUG nova.compute.manager [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Processing event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.202 244018 DEBUG nova.compute.manager [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.203 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.203 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.203 244018 DEBUG oslo_concurrency.lockutils [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.203 244018 DEBUG nova.compute.manager [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] No waiting events found dispatching network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.204 244018 WARNING nova.compute.manager [req-98f5392c-6f69-46a3-8fcc-367cb201853c req-94d40cf7-1169-45c1-b4e1-3fde2e5039e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received unexpected event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 for instance with vm_state building and task_state spawning.
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.204 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.208 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022254.208308, 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.209 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] VM Resumed (Lifecycle Event)
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.210 244018 DEBUG nova.virt.libvirt.driver [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.213 244018 INFO nova.virt.libvirt.driver [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Instance spawned successfully.
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.213 244018 INFO nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Took 8.23 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.214 244018 DEBUG nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.243 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.244 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.244 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.245 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.245 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.267 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.271 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2dd0d0eb-f5ba-419e-8233-256425af6119_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765349679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.305 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.328 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.331 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.365 244018 INFO nova.compute.manager [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Took 9.53 seconds to build instance.
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.370 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.390 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.445 244018 DEBUG oslo_concurrency.lockutils [None req-f9a6828c-c3e8-4398-9492-34b5d5d78e4e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.521 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2dd0d0eb-f5ba-419e-8233-256425af6119_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.598 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601677197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.643 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.675 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.679 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.717 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Successfully created port: b98cf008-d49f-4577-b9e8-2ca8d67298cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.766 244018 DEBUG nova.objects.instance [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2dd0d0eb-f5ba-419e-8233-256425af6119 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.769 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Successfully updated port: 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483847348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.885 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.887 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-1',id=37,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:09Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.887 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.888 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.890 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.914 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.915 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Ensure instance console log exists: /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.915 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.916 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.916 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.918 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.918 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquired lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.918 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.923 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <uuid>2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a</uuid>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <name>instance-00000025</name>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServersNegativeTestJSON-server-23727972-1</nova:name>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:13</nova:creationTime>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:user uuid="2ab8469e67b142bab26cdd996097e148">tempest-ListServersNegativeTestJSON-266593939-project-member</nova:user>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:project uuid="d255f122166b45af8d4d83610929aaea">tempest-ListServersNegativeTestJSON-266593939</nova:project>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <nova:port uuid="f1012b5d-c005-4c9f-a5ec-1e01ce3c581f">
Feb 25 12:24:14 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="serial">2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="uuid">2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk">
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config">
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:7c:85:fd"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <target dev="tapf1012b5d-c0"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/console.log" append="off"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:14 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:14 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:14 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:14 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:14 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.924 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Preparing to wait for external event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.924 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.924 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.925 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.926 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-1',id=37,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:09Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.926 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.927 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.927 244018 DEBUG os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.933 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.934 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.942 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1012b5d-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.942 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1012b5d-c0, col_values=(('external_ids', {'iface-id': 'f1012b5d-c005-4c9f-a5ec-1e01ce3c581f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:85:fd', 'vm-uuid': '2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:14 compute-0 NetworkManager[49836]: <info>  [1772022254.9462] manager: (tapf1012b5d-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.955 244018 INFO os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0')
Feb 25 12:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321604877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.983 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:14 compute-0 nova_compute[244014]: 2026-02-25 12:24:14.988 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.008 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.022 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.023 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.023 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No VIF found with MAC fa:16:3e:7c:85:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.023 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Using config drive
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.046 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.052 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.053 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:15 compute-0 ceph-mon[76335]: pgmap v1201: 305 pgs: 305 active+clean; 377 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 7.1 MiB/s wr, 174 op/s
Feb 25 12:24:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/765349679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3601677197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1483847348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2321604877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:15 compute-0 sshd-session[279304]: Invalid user latitude from 80.94.92.186 port 50416
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.177 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183999142' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.220 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.222 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-2',id=38,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:10Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=42abb4b3-2144-4c11-a04c-7641d725bcde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.223 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.224 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.226 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'pci_devices' on Instance uuid 42abb4b3-2144-4c11-a04c-7641d725bcde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:15 compute-0 sshd-session[279304]: Connection closed by invalid user latitude 80.94.92.186 port 50416 [preauth]
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.253 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <uuid>42abb4b3-2144-4c11-a04c-7641d725bcde</uuid>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <name>instance-00000026</name>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServersNegativeTestJSON-server-23727972-2</nova:name>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:13</nova:creationTime>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:user uuid="2ab8469e67b142bab26cdd996097e148">tempest-ListServersNegativeTestJSON-266593939-project-member</nova:user>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:project uuid="d255f122166b45af8d4d83610929aaea">tempest-ListServersNegativeTestJSON-266593939</nova:project>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <nova:port uuid="7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa">
Feb 25 12:24:15 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="serial">42abb4b3-2144-4c11-a04c-7641d725bcde</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="uuid">42abb4b3-2144-4c11-a04c-7641d725bcde</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/42abb4b3-2144-4c11-a04c-7641d725bcde_disk">
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config">
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:70:ac:35"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <target dev="tap7d80048f-ed"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/console.log" append="off"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
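
The XML block above is the complete libvirt guest definition Nova generated for instance-00000026: q35 machine type, host-model CPU, two RBD-backed disks (root disk plus config-drive cdrom), a virtio interface targeting tap7d80048f-ed, and the <nova:instance> metadata element. The same definition can be read back through the libvirt Python binding; a short sketch using a read-only connection and the UUID from the log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('42abb4b3-2144-4c11-a04c-7641d725bcde')

    # Full <domain> definition, equivalent to the XML logged above.
    print(dom.XMLDesc(0))

    # Just the <nova:instance> metadata element, addressed by its namespace.
    print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                       'http://openstack.org/xmlns/libvirt/nova/1.1'))
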
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.253 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Preparing to wait for external event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.254 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.254 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.255 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
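
The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils protecting the per-instance event map while Nova registers its wait for network-vif-plugged. A sketch of the two usual forms, reusing the logged lock name (the function body is a placeholder, not Nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('42abb4b3-2144-4c11-a04c-7641d725bcde-events')
    def _create_or_get_event():
        pass  # mutate the shared per-instance event map under the lock

    # Equivalent context-manager form:
    with lockutils.lock('42abb4b3-2144-4c11-a04c-7641d725bcde-events'):
        pass
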
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.256 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-2',id=38,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:10Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=42abb4b3-2144-4c11-a04c-7641d725bcde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.257 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.258 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.259 244018 DEBUG os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.261 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.265 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.269 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.269 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d80048f-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.269 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d80048f-ed, col_values=(('external_ids', {'iface-id': '7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:ac:35', 'vm-uuid': '42abb4b3-2144-4c11-a04c-7641d725bcde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:15 compute-0 NetworkManager[49836]: <info>  [1772022255.2724] manager: (tap7d80048f-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.277 244018 INFO os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed')
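
Plugging the VIF comes down to the OVSDB operations visible above: an idempotent AddBridgeCommand for br-int (which "caused no change"), an AddPortCommand for the tap device, and a DbSetCommand stamping the Interface row with the Neutron port ID and MAC in external_ids, which is what lets ovn-controller claim the port moments later. A sketch of the same transaction issued through ovsdbapp directly; the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, mirroring the AddPortCommand + DbSetCommand above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap7d80048f-ed', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7d80048f-ed',
            ('external_ids', {
                'iface-id': '7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:70:ac:35',
                'vm-uuid': '42abb4b3-2144-4c11-a04c-7641d725bcde'})))
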
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.314 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Successfully created port: 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.330 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.331 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.331 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No VIF found with MAC fa:16:3e:70:ac:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.331 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Using config drive
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.350 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.529 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Creating config drive at /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.536 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp260qfoy2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.623 244018 DEBUG nova.network.neutron [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Updated VIF entry in instance network info cache for port f1012b5d-c005-4c9f-a5ec-1e01ce3c581f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.625 244018 DEBUG nova.network.neutron [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Updating instance_info_cache with network_info: [{"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.674 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp260qfoy2" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.713 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.718 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.747 244018 DEBUG oslo_concurrency.lockutils [req-c1915ae1-3b23-4c97-a367-62080cfc4edc req-c04dcdfb-e598-43de-ba08-706e01a36308 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 377 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 475 KiB/s rd, 6.4 MiB/s wr, 156 op/s
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.865 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.866 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Deleting local config drive /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a/disk.config because it was imported into RBD.
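
Config-drive creation on this Ceph-backed deployment is a two-step sequence, both commands logged above: mkisofs builds a config-2 ISO from a temporary metadata directory, then rbd import pushes it into the vms pool so libvirt can attach it as the network cdrom from the domain XML, after which the local file is deleted. A subprocess sketch of the same two commands (the metadata directory path is hypothetical; Nova uses a tempdir like the /tmp/tmp260qfoy2 above):

    import subprocess

    instance = '2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a'  # from the log above
    iso = f'/var/lib/nova/instances/{instance}/disk.config'

    # Build the ISO9660 config drive with the flags Nova passes to mkisofs.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata-dir'],
        check=True)

    # Import it into the Ceph "vms" pool; the local copy can then be removed.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{instance}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
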
Feb 25 12:24:15 compute-0 NetworkManager[49836]: <info>  [1772022255.9272] manager: (tapf1012b5d-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Feb 25 12:24:15 compute-0 kernel: tapf1012b5d-c0: entered promiscuous mode
Feb 25 12:24:15 compute-0 ovn_controller[147040]: 2026-02-25T12:24:15Z|00305|binding|INFO|Claiming lport f1012b5d-c005-4c9f-a5ec-1e01ce3c581f for this chassis.
Feb 25 12:24:15 compute-0 ovn_controller[147040]: 2026-02-25T12:24:15Z|00306|binding|INFO|f1012b5d-c005-4c9f-a5ec-1e01ce3c581f: Claiming fa:16:3e:7c:85:fd 10.100.0.5
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.947 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.948 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.948 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.949 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.949 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.953 244018 INFO nova.compute.manager [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Terminating instance
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.954 244018 DEBUG nova.compute.manager [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:15 compute-0 nova_compute[244014]: 2026-02-25 12:24:15.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:15 compute-0 systemd-machined[210048]: New machine qemu-41-instance-00000025.
Feb 25 12:24:15 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.968 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:85:fd 10.100.0.5'], port_security=['fa:16:3e:7c:85:fd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.970 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1012b5d-c005-4c9f-a5ec-1e01ce3c581f in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 bound to our chassis
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.973 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a87ddd03-d435-46c4-8efc-50bb38492ac4
Feb 25 12:24:15 compute-0 systemd-udevd[279538]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.986 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c73ff9f0-55b9-4f9a-8af3-d73db73c50ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.987 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa87ddd03-d1 in ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
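
To serve the metadata endpoint inside the tenant network, the agent provisions a per-datapath namespace (ovnmeta-<network-uuid>) and wires it to the host with a veth pair: the -d0 end stays in the root namespace, the -d1 end lives in the namespace. The agent drives this through privsep-wrapped pyroute2 calls; a rough standalone sketch of the same plumbing (assumes the namespace already exists under /var/run/netns):

    from pyroute2 import IPRoute, NetNS

    NS = 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4'  # from the log

    ipr = IPRoute()
    # Create the veth pair, then move the -d1 peer into the namespace.
    ipr.link('add', ifname='tapa87ddd03-d0', kind='veth',
             peer='tapa87ddd03-d1')
    idx = ipr.link_lookup(ifname='tapa87ddd03-d1')[0]
    ipr.link('set', index=idx, net_ns_fd=NS)

    # Bring the namespaced end up before binding the metadata proxy to it.
    with NetNS(NS) as ns:
        peer = ns.link_lookup(ifname='tapa87ddd03-d1')[0]
        ns.link('set', index=peer, state='up')
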
Feb 25 12:24:15 compute-0 NetworkManager[49836]: <info>  [1772022255.9900] device (tapf1012b5d-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:15 compute-0 NetworkManager[49836]: <info>  [1772022255.9909] device (tapf1012b5d-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.990 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa87ddd03-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[560ff70a-0171-47ff-bb29-df0c613efe65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:15.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a246e00-5b31-4ba2-a298-36b77b2f4b44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00307|binding|INFO|Setting lport f1012b5d-c005-4c9f-a5ec-1e01ce3c581f ovn-installed in OVS
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00308|binding|INFO|Setting lport f1012b5d-c005-4c9f-a5ec-1e01ce3c581f up in Southbound
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.008 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b637be15-6472-4f23-a1ce-27d566bc571c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 kernel: tap5d14f032-98 (unregistering): left promiscuous mode
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.0268] device (tap5d14f032-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00309|binding|INFO|Releasing lport 5d14f032-9842-4c87-8531-26efd8b98065 from this chassis (sb_readonly=0)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00310|binding|INFO|Setting lport 5d14f032-9842-4c87-8531-26efd8b98065 down in Southbound
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00311|binding|INFO|Removing iface tap5d14f032-98 ovn-installed in OVS
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.035 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f381eed0-3d23-4216-944e-8599d9cac30c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.040 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:53:75 10.100.0.10'], port_security=['fa:16:3e:c8:53:75 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5d14f032-9842-4c87-8531-26efd8b98065) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
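
Both "Matched UPDATE: PortBindingUpdatedEvent(...)" entries come from ovsdbapp's row-event machinery: the agent watches the southbound Port_Binding table and reacts when a row's chassis column changes (set on claim, cleared on release, as in the two events above). A sketch of how such an event class is typically declared and registered; the class body is illustrative, not Neutron's exact implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to any Port_Binding row, unconditionally.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old.chassis empty -> row.chassis set means the port was just
            # bound to a chassis; the reverse means it was released.
            print('Port_Binding %s changed' % row.logical_port)

    handler = row_event.RowEventHandler()
    handler.watch_event(PortBindingUpdatedEvent())
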
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.053 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.054 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.060 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a588343-ca4e-4466-aaa8-a0fc6c8734c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.067 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68dd1994-9c46-4415-a357-01bea4c1ebe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.0686] manager: (tapa87ddd03-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Feb 25 12:24:16 compute-0 systemd-udevd[279543]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Feb 25 12:24:16 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 3.189s CPU time.
Feb 25 12:24:16 compute-0 systemd-machined[210048]: Machine qemu-40-instance-00000024 terminated.
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.094 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc79442-681a-4a41-a7d2-97a5eed927df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.098 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8087d115-230f-4241-aa0f-c39efddb37c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2183999142' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.1180] device (tapa87ddd03-d0): carrier: link connected
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.123 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[586342b6-a6c4-44ad-97a2-22113cdf61a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.138 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8be15f-f52b-4bc1-a9e9-0e688de7cafc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279584, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16f4af23-5b2b-4277-af3a-70efe8b580a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:6223'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422552, 'tstamp': 422552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279585, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.162 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[918afe6a-e795-4324-8301-802fc0a6bb0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279586, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.1788] manager: (tap5d14f032-98): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.190 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34a609da-3531-450e-a213-c912195956d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.193 244018 INFO nova.virt.libvirt.driver [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Instance destroyed successfully.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.193 244018 DEBUG nova.objects.instance [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.213 244018 DEBUG nova.virt.libvirt.vif [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-768222662',display_name='tempest-ImagesTestJSON-server-768222662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-768222662',id=36,image_ref='3dad1c68-e760-48b3-a8c1-46ce658c3e36',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-6ochylfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fb09cc9c-e6a9-4718-bb97-0df5558cb091',image_min_disk='1',image_min_ram='0',image_owner_id='851e1817495944c1ad7ac421ab226d13',image_owner_project_name='tempest-ImagesTestJSON-1353368211',image_owner_user_name='tempest-ImagesTestJSON-1353368211-project-member',image_user_id='f1cfc3e643014e1c984d71182534fd24',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:14Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.214 244018 DEBUG nova.network.os_vif_util [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "5d14f032-9842-4c87-8531-26efd8b98065", "address": "fa:16:3e:c8:53:75", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d14f032-98", "ovs_interfaceid": "5d14f032-9842-4c87-8531-26efd8b98065", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.214 244018 DEBUG nova.network.os_vif_util [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.215 244018 DEBUG os_vif [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.217 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d14f032-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.224 244018 INFO os_vif [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:53:75,bridge_name='br-int',has_traffic_filtering=True,id=5d14f032-9842-4c87-8531-26efd8b98065,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d14f032-98')
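[Annotation: the unplug sequence above is an ordinary ovsdbapp transaction against the local Open vSwitch database; the DelPortCommand the log records corresponds to the del_port() call of ovsdbapp's Open_vSwitch schema API. A minimal standalone sketch follows. The unix-socket endpoint and the exact wiring are assumptions on my part, not something the log shows; this is an illustration, not nova's code.]

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local ovsdb-server (endpoint is an assumption; some
# deployments expose a TCP endpoint such as tcp:127.0.0.1:6640 instead).
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Same operation the log records as DelPortCommand(port=tap5d14f032-98,
# bridge=br-int, if_exists=True): remove the tap port from br-int,
# tolerating the port already being gone.
api.del_port('tap5d14f032-98', bridge='br-int',
             if_exists=True).execute(check_error=True)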
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.242 244018 DEBUG nova.network.neutron [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Updated VIF entry in instance network info cache for port 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.243 244018 DEBUG nova.network.neutron [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Updating instance_info_cache with network_info: [{"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.249 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e58f385-aa44-4a51-bd1e-183362927055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.251 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.251 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.252 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa87ddd03-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.2555] manager: (tapa87ddd03-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 kernel: tapa87ddd03-d0: entered promiscuous mode
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.262 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa87ddd03-d0, col_values=(('external_ids', {'iface-id': 'ad147878-63a9-4ab7-9f47-10bbd9941f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00312|binding|INFO|Releasing lport ad147878-63a9-4ab7-9f47-10bbd9941f76 from this chassis (sb_readonly=0)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.278 244018 DEBUG oslo_concurrency.lockutils [req-74353721-c066-4e98-8829-8de929be89f8 req-48e1ade3-e9fd-4895-a74d-ee399097bc79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-42abb4b3-2144-4c11-a04c-7641d725bcde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.281 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a87ddd03-d435-46c4-8efc-50bb38492ac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a87ddd03-d435-46c4-8efc-50bb38492ac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.282 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03b25d76-ce54-4231-b44f-a584dde4e858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.284 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a87ddd03-d435-46c4-8efc-50bb38492ac4
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a87ddd03-d435-46c4-8efc-50bb38492ac4.pid.haproxy
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a87ddd03-d435-46c4-8efc-50bb38492ac4
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.286 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'env', 'PROCESS_TAG=haproxy-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a87ddd03-d435-46c4-8efc-50bb38492ac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
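[Annotation: the haproxy_cfg dump above is a fully rendered per-network metadata-proxy configuration, and the rootwrap command that follows simply starts haproxy on that file inside the ovnmeta-<network> namespace. A simplified re-rendering of the same config from its one variable input, the neutron network UUID, is sketched below; the template is distilled from the log output itself and is not the template neutron.agent.ovn.metadata.driver actually ships.]

# Hypothetical, simplified rendering of the metadata-proxy haproxy config
# shown in the log. Every literal value below is copied from the dump above.
CFG_TEMPLATE = """\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        root
    group       root
    maxconn     1024
    pidfile     /var/lib/neutron/external/pids/{network_id}.pid.haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    retries                 3
    timeout http-request    30s
    timeout connect         30s
    timeout client          32s
    timeout server          32s
    timeout http-keep-alive 30s

listen listener
    bind 169.254.169.254:80
    server metadata /var/lib/neutron/metadata_proxy
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_metadata_proxy_config(network_id: str) -> str:
    """Fill the per-network fields: log tag, pidfile and the OVN header."""
    return CFG_TEMPLATE.format(network_id=network_id)

print(render_metadata_proxy_config('a87ddd03-d435-46c4-8efc-50bb38492ac4'))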
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.330 244018 DEBUG nova.network.neutron [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Updating instance_info_cache with network_info: [{"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.360 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Releasing lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.360 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Instance network_info: |[{"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.366 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Start _get_guest_xml network_info=[{"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.373 244018 WARNING nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.390 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.391 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.396 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.396 244018 DEBUG nova.virt.libvirt.host [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.397 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.397 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.398 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.399 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.399 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.399 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.400 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.400 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.401 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.401 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.401 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.401 244018 DEBUG nova.virt.hardware [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
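[Annotation: the nova.virt.hardware lines above record a topology search with no flavor or image constraints: the preferred sockets:cores:threads triple is 0:0:0 (unset), the ceiling defaults to 65536 on each axis, and a 1-vCPU guest therefore admits exactly one topology, 1:1:1. A toy re-implementation of that enumeration, not nova's actual code, reproduces the result.]

from collections import namedtuple

Topology = namedtuple('Topology', 'sockets cores threads')

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield every (sockets, cores, threads) triple whose product == vcpus.

    With the unset preference 0:0:0 from the log there is nothing to sort
    by, so the candidates come out in enumeration order.
    """
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield Topology(s, c, t)

# Matches "Got 1 possible topologies" and
# "[VirtCPUTopology(cores=1,sockets=1,threads=1)]" above.
print(list(possible_topologies(1)))  # [Topology(sockets=1, cores=1, threads=1)]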
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.406 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.440 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Creating config drive at /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.444 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbdgwkezg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.508 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Successfully updated port: b98cf008-d49f-4577-b9e8-2ca8d67298cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.521 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022256.5209377, 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.521 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] VM Started (Lifecycle Event)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.529 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.529 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquired lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.529 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.545 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.557 244018 INFO nova.virt.libvirt.driver [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Deleting instance files /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_del
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.558 244018 INFO nova.virt.libvirt.driver [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Deletion of /var/lib/nova/instances/1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d_del complete
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.565 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022256.5221517, 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.565 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] VM Paused (Lifecycle Event)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.577 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbdgwkezg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.611 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.616 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config 42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:16 compute-0 podman[279701]: 2026-02-25 12:24:16.64040768 +0000 UTC m=+0.054885305 container create 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.648 244018 INFO nova.compute.manager [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Took 0.69 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.649 244018 DEBUG oslo.service.loopingcall [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.649 244018 DEBUG nova.compute.manager [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.649 244018 DEBUG nova.network.neutron [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.655 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.677 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:16 compute-0 systemd[1]: Started libpod-conmon-9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4.scope.
Feb 25 12:24:16 compute-0 podman[279701]: 2026-02-25 12:24:16.61357575 +0000 UTC m=+0.028053435 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f76f8a3d92aee60d4d1619494090c8ef810355452f7426bf0426b95f85bdb2b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:16 compute-0 podman[279701]: 2026-02-25 12:24:16.730049986 +0000 UTC m=+0.144527661 container init 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:24:16 compute-0 podman[279701]: 2026-02-25 12:24:16.737789315 +0000 UTC m=+0.152266940 container start 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.744 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config 42abb4b3-2144-4c11-a04c-7641d725bcde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.745 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Deleting local config drive /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde/disk.config because it was imported into RBD.
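[Annotation: the entries above trace nova's config-drive path on a Ceph-backed deployment: build the ISO9660 drive with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of the same sequence follows; the staging-directory path is hypothetical (the log uses a throwaway /tmp/tmp... dir) and error handling is omitted.]

import os
import subprocess

def config_drive_to_rbd(instance_dir, instance_uuid, staging_dir,
                        pool='vms', ceph_id='openstack',
                        conf='/etc/ceph/ceph.conf'):
    iso = os.path.join(instance_dir, 'disk.config')
    # Flags copied from the mkisofs invocation in the log.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-quiet', '-J', '-r', '-V', 'config-2', staging_dir],
                   check=True)
    # Import into Ceph under the name the log shows: <uuid>_disk.config.
    subprocess.run(['rbd', 'import', '--pool', pool, iso,
                    f'{instance_uuid}_disk.config', '--image-format=2',
                    '--id', ceph_id, '--conf', conf],
                   check=True)
    # Matches "Deleting local config drive ... imported into RBD" above.
    os.remove(iso)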
Feb 25 12:24:16 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [NOTICE]   (279774) : New worker (279779) forked
Feb 25 12:24:16 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [NOTICE]   (279774) : Loading success.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.784 244018 DEBUG nova.compute.manager [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-changed-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.785 244018 DEBUG nova.compute.manager [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Refreshing instance network info cache due to event network-changed-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.785 244018 DEBUG oslo_concurrency.lockutils [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.786 244018 DEBUG oslo_concurrency.lockutils [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.786 244018 DEBUG nova.network.neutron [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Refreshing network info cache for port 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.789 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.7969] manager: (tap7d80048f-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Feb 25 12:24:16 compute-0 kernel: tap7d80048f-ed: entered promiscuous mode
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00313|binding|INFO|Claiming lport 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa for this chassis.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00314|binding|INFO|7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa: Claiming fa:16:3e:70:ac:35 10.100.0.10
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00315|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00316|binding|INFO|Releasing lport ad147878-63a9-4ab7-9f47-10bbd9941f76 from this chassis (sb_readonly=0)
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.808 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:ac:35 10.100.0.10'], port_security=['fa:16:3e:70:ac:35 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42abb4b3-2144-4c11-a04c-7641d725bcde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.8110] device (tap7d80048f-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:16 compute-0 NetworkManager[49836]: <info>  [1772022256.8123] device (tap7d80048f-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00317|binding|INFO|Setting lport 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa ovn-installed in OVS
Feb 25 12:24:16 compute-0 ovn_controller[147040]: 2026-02-25T12:24:16Z|00318|binding|INFO|Setting lport 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa up in Southbound
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.825 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5d14f032-9842-4c87-8531-26efd8b98065 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.827 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:24:16 compute-0 systemd-machined[210048]: New machine qemu-42-instance-00000026.
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.839 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44e1394b-540f-4653-97c0-59e927031fa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.861 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fe303ea5-299b-4e46-a64d-c6f62c470134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.864 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[da490a1f-bfa9-49ac-b7b6-fa5809a1fdd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.867 244018 DEBUG nova.compute.manager [req-3a12872d-06a4-4acf-867a-3ad0e78659fc req-dc17672f-0fa4-4963-8630-6ae433249b64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.867 244018 DEBUG oslo_concurrency.lockutils [req-3a12872d-06a4-4acf-867a-3ad0e78659fc req-dc17672f-0fa4-4963-8630-6ae433249b64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.868 244018 DEBUG oslo_concurrency.lockutils [req-3a12872d-06a4-4acf-867a-3ad0e78659fc req-dc17672f-0fa4-4963-8630-6ae433249b64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.868 244018 DEBUG oslo_concurrency.lockutils [req-3a12872d-06a4-4acf-867a-3ad0e78659fc req-dc17672f-0fa4-4963-8630-6ae433249b64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.868 244018 DEBUG nova.compute.manager [req-3a12872d-06a4-4acf-867a-3ad0e78659fc req-dc17672f-0fa4-4963-8630-6ae433249b64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Processing event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.869 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.873 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022256.8728564, 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.873 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] VM Resumed (Lifecycle Event)
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.881 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.885 244018 INFO nova.virt.libvirt.driver [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Instance spawned successfully.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.886 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.888 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[837986fd-137f-43cd-9aa4-d81568c072dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.895 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.897 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5251384a-53d1-4835-9ac6-45c428bc4e26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420199, 'reachable_time': 39062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279810, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.911 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.911 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.912 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.912 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.912 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.913 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84d8829f-c29c-48b7-a918-7a4bcc055446]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a1663dd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420209, 'tstamp': 420209}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279811, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a1663dd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 420212, 'tstamp': 420212}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279811, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
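The two privsep replies above are pyroute2 netlink messages read from inside the ovnmeta-6a1663dd-... namespace; they confirm the metadata tap carries both the subnet address 10.100.0.2/28 and the metadata VIP 169.254.169.254/32. A minimal standalone sketch of the same query with pyroute2 (illustrative only; the agent goes through oslo.privsep instead, and the snippet assumes the namespace and interface from the log still exist):

# Sketch (not the agent's code): read the tap's addresses from inside
# the OVN metadata namespace with pyroute2, as the privsep reply shows.
from pyroute2 import NetNS

ns_name = 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21'  # from the log
with NetNS(ns_name) as ns:
    idx = ns.link_lookup(ifname='tap6a1663dd-21')[0]
    for msg in ns.get_addr(index=idx):
        attrs = dict(msg['attrs'])
        # Expect 10.100.0.2 (subnet side) and 169.254.169.254 (metadata VIP)
        print(attrs.get('IFA_ADDRESS'), '/', msg['prefixlen'])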
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.917 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 unbound from our chassis
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.923 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a87ddd03-d435-46c4-8efc-50bb38492ac4
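The transaction trio above (DelPortCommand with if_exists, AddPortCommand with may_exist, then DbSetCommand on external_ids) is deliberately idempotent, which is why both commits report "Transaction caused no change". A sketch of the equivalent plumbing with the ovs-vsctl CLI, using the port and iface-id from the log (illustrative only; the agent drives ovsdb directly through ovsdbapp):

# Same idempotent re-plug via ovs-vsctl; --if-exists / --may-exist
# mirror the if_exists / may_exist flags in the logged commands.
import subprocess

port = 'tap6a1663dd-20'
iface_id = '7f515737-b36a-4caa-affc-8ad110539172'
for cmd in (
    ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
    ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
    ['ovs-vsctl', 'set', 'Interface', port,
     f'external_ids:iface-id={iface_id}'],
):
    subprocess.run(cmd, check=True)  # re-running changes nothing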
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[205e59fe-d94d-4648-a152-b033823356a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.937 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] During sync_power_state the instance has a pending task (spawning). Skip.
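The sync_power_state decision is visible above: the DB still says power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but the pending 'spawning' task short-circuits the sync, hence the "Skip." line. A compact sketch of that decision rule, using the constant values from nova/compute/power_state.py (the helper function itself is an illustrative reduction, not Nova's implementation):

# Illustrative reduction of the handle_lifecycle_event/sync_power_state
# decision logged above; constants match nova/compute/power_state.py.
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

def sync_power_state(db_state, vm_state, task_state):
    if task_state is not None:
        # e.g. 'spawning' -> "During sync_power_state ... Skip."
        return f'skip: pending task {task_state}'
    if db_state != vm_state:
        return f'update DB power_state {db_state} -> {vm_state}'
    return 'in sync'

print(sync_power_state(0, 1, 'spawning'))   # skip: pending task spawning
print(sync_power_state(0, 3, None))         # update DB power_state 0 -> 3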
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.956 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[87f3d12e-daa0-4d2b-88e3-0f1944213d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.960 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7e677291-402b-48fb-b5ee-47f6efa20a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.979 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d9cba2-dcaf-406e-b9e8-57cc29cd5a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.981 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Took 7.39 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:16 compute-0 nova_compute[244014]: 2026-02-25 12:24:16.981 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:16.993 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3051bf-416c-476e-9775-f0a6dd87a2aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279842, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344545548' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c54efb8d-af56-42d8-bd21-34aaf6d05ccc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422562, 'tstamp': 422562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279853, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422565, 'tstamp': 422565}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279853, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.010 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.012 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa87ddd03-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.011 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa87ddd03-d0, col_values=(('external_ids', {'iface-id': 'ad147878-63a9-4ab7-9f47-10bbd9941f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:17.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.043 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.047 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.075 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Successfully updated port: 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.086 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Took 8.58 seconds to build instance.
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.091 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022257.0916336, 42abb4b3-2144-4c11-a04c-7641d725bcde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.092 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] VM Started (Lifecycle Event)
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.106 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.106 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.107 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.111 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.112 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.116 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022257.0941176, 42abb4b3-2144-4c11-a04c-7641d725bcde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.117 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] VM Paused (Lifecycle Event)
Feb 25 12:24:17 compute-0 ceph-mon[76335]: pgmap v1202: 305 pgs: 305 active+clean; 377 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 475 KiB/s rd, 6.4 MiB/s wr, 156 op/s
Feb 25 12:24:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/344545548' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.133 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.136 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.162 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.282 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.335 244018 DEBUG nova.network.neutron [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.357 244018 INFO nova.compute.manager [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Took 0.71 seconds to deallocate network for instance.
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.405 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.405 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752908624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.620 244018 DEBUG oslo_concurrency.processutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.665 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
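Nova discovers the RBD monitor endpoints by shelling out to exactly the command logged above (ceph mon dump --format=json, about 0.6 s here). A minimal sketch of that call follows; the 'mons'/'name'/'addr' JSON fields are the usual mon dump layout, so treat the parsing as an assumption if your Ceph release differs:

# Sketch: fetch and parse the monitor map the way the logged
# subprocess call does. Credentials and paths are the ones from the log.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout
monmap = json.loads(out)
for mon in monmap.get('mons', []):          # assumed field names
    print(mon.get('name'), mon.get('addr'))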
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.666 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-3',id=39,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:11Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=19e66139-1e99-432c-bbd6-fc3a6d3e1b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.667 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.667 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.668 244018 DEBUG nova.objects.instance [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'pci_devices' on Instance uuid 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.688 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <uuid>19e66139-1e99-432c-bbd6-fc3a6d3e1b1d</uuid>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <name>instance-00000027</name>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServersNegativeTestJSON-server-23727972-3</nova:name>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:16</nova:creationTime>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:user uuid="2ab8469e67b142bab26cdd996097e148">tempest-ListServersNegativeTestJSON-266593939-project-member</nova:user>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:project uuid="d255f122166b45af8d4d83610929aaea">tempest-ListServersNegativeTestJSON-266593939</nova:project>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <nova:port uuid="1e1fa674-42f4-483b-8576-2dd9c4c8ffc0">
Feb 25 12:24:17 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="serial">19e66139-1e99-432c-bbd6-fc3a6d3e1b1d</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="uuid">19e66139-1e99-432c-bbd6-fc3a6d3e1b1d</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk">
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config">
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:29:b8:f3"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <target dev="tap1e1fa674-42"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/console.log" append="off"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:17 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:17 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:17 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:17 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:17 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
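The guest XML dump above (RBD-backed vda, config-drive cdrom on SATA, a virtio VIF on tap1e1fa674-42, q35 with a stack of pcie-root-ports) is what nova hands to libvirt next. A minimal sketch of defining and booting such a domain with the libvirt Python bindings; the connection URI and the XML file name are assumptions, and nova's real path wraps this in its own event handling:

# Sketch: define and start a guest from XML with libvirt-python,
# roughly the step that follows _get_guest_xml in the driver.
import libvirt

xml = open('instance-00000027.xml').read()  # hypothetical copy of the XML above
conn = libvirt.open('qemu:///system')       # usual nova connection URI
try:
    dom = conn.defineXML(xml)               # persist the domain definition
    dom.create()                            # equivalent of 'virsh start'
    print('started', dom.name(), 'id', dom.ID())
finally:
    conn.close()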
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.688 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Preparing to wait for external event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.689 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.689 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.689 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
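The lock dance above registers a waiter for network-vif-plugged-1e1fa674-... before the VIF is actually plugged, so Neutron's external event cannot race the boot. A generic reduction of that prepare-then-wait pattern with threading.Event (names hypothetical; Nova's real registry is nova.compute.manager.InstanceEvents on eventlet):

# Generic prepare/deliver/wait pattern for external events; a sketch,
# not Nova's InstanceEvents implementation.
import threading

_events = {}

def prepare(tag):
    # Register the waiter *before* triggering the action that will
    # eventually produce the event, to avoid a lost wakeup.
    _events[tag] = threading.Event()

def deliver(tag):
    _events[tag].set()          # e.g. Neutron reports the plug

def wait(tag, timeout=300):
    if not _events[tag].wait(timeout):
        raise TimeoutError(f'no {tag} within {timeout}s')

tag = 'network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0'
prepare(tag)
deliver(tag)                    # would normally arrive from another thread
wait(tag)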
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.689 244018 DEBUG nova.virt.libvirt.vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-3',id=39,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:11Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=19e66139-1e99-432c-bbd6-fc3a6d3e1b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.690 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.690 244018 DEBUG nova.network.os_vif_util [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.690 244018 DEBUG os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.691 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.691 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.693 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e1fa674-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.694 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e1fa674-42, col_values=(('external_ids', {'iface-id': '1e1fa674-42f4-483b-8576-2dd9c4c8ffc0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:b8:f3', 'vm-uuid': '19e66139-1e99-432c-bbd6-fc3a6d3e1b1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:17 compute-0 NetworkManager[49836]: <info>  [1772022257.6967] manager: (tap1e1fa674-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.702 244018 INFO os_vif [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42')
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.751 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.752 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.752 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] No VIF found with MAC fa:16:3e:29:b8:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 510 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 11 MiB/s wr, 334 op/s
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.753 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Using config drive
Feb 25 12:24:17 compute-0 nova_compute[244014]: 2026-02-25 12:24:17.780 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3752908624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.141 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Creating config drive at /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.147 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptlykvxa3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.176 244018 DEBUG nova.network.neutron [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Updating instance_info_cache with network_info: [{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.181 244018 DEBUG nova.network.neutron [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Updating instance_info_cache with network_info: [{"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529075044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.211 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Releasing lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.211 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance network_info: |[{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.212 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.212 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Instance network_info: |[{"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.213 244018 DEBUG oslo_concurrency.processutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.216 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start _get_guest_xml network_info=[{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.219 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Start _get_guest_xml network_info=[{"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.228 244018 DEBUG nova.compute.provider_tree [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.232 244018 WARNING nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.234 244018 WARNING nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.238 244018 DEBUG nova.virt.libvirt.host [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.238 244018 DEBUG nova.virt.libvirt.host [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.239 244018 DEBUG nova.virt.libvirt.host [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.240 244018 DEBUG nova.virt.libvirt.host [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.242 244018 DEBUG nova.virt.libvirt.host [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.243 244018 DEBUG nova.virt.libvirt.host [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.244 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.244 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.245 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.245 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.245 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.245 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.246 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.246 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.246 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.247 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.247 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.247 244018 DEBUG nova.virt.hardware [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.251 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.269 244018 DEBUG nova.virt.libvirt.host [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.271 244018 DEBUG nova.virt.libvirt.host [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.271 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.272 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.273 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.273 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.274 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.274 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.275 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.275 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.276 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.276 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.277 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.277 244018 DEBUG nova.virt.hardware [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.283 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.319 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptlykvxa3" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.322 244018 DEBUG nova.scheduler.client.report [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.352 244018 DEBUG nova.storage.rbd_utils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] rbd image 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.355 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.403 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.488 244018 DEBUG oslo_concurrency.processutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.488 244018 INFO nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Deleting local config drive /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d/disk.config because it was imported into RBD.
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.507 244018 INFO nova.scheduler.client.report [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d
Feb 25 12:24:18 compute-0 kernel: tap1e1fa674-42: entered promiscuous mode
Feb 25 12:24:18 compute-0 NetworkManager[49836]: <info>  [1772022258.5273] manager: (tap1e1fa674-42): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Feb 25 12:24:18 compute-0 ovn_controller[147040]: 2026-02-25T12:24:18Z|00319|binding|INFO|Claiming lport 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 for this chassis.
Feb 25 12:24:18 compute-0 ovn_controller[147040]: 2026-02-25T12:24:18Z|00320|binding|INFO|1e1fa674-42f4-483b-8576-2dd9c4c8ffc0: Claiming fa:16:3e:29:b8:f3 10.100.0.8
Feb 25 12:24:18 compute-0 NetworkManager[49836]: <info>  [1772022258.5359] device (tap1e1fa674-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:18 compute-0 NetworkManager[49836]: <info>  [1772022258.5365] device (tap1e1fa674-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.541 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:b8:f3 10.100.0.8'], port_security=['fa:16:3e:29:b8:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19e66139-1e99-432c-bbd6-fc3a6d3e1b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.543 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 bound to our chassis
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.546 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a87ddd03-d435-46c4-8efc-50bb38492ac4
Feb 25 12:24:18 compute-0 ovn_controller[147040]: 2026-02-25T12:24:18Z|00321|binding|INFO|Setting lport 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 ovn-installed in OVS
Feb 25 12:24:18 compute-0 ovn_controller[147040]: 2026-02-25T12:24:18Z|00322|binding|INFO|Setting lport 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 up in Southbound
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:18 compute-0 systemd-machined[210048]: New machine qemu-43-instance-00000027.
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0397c5b-b244-441b-b38b-f26b2ecd195e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.598 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[158b5778-7a6f-4e7a-b75d-debc968b98c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.600 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a65cb1f6-3878-40f9-9a0a-7bdebae3a776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.606 244018 DEBUG oslo_concurrency.lockutils [None req-9e75325c-fdb7-4666-835a-6502b0a6453e f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d21a411e-dd76-40ca-83d1-6f54caf079c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d5a8cf-94ba-464b-84f2-dd2b4a11e159]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280048, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.649 244018 DEBUG nova.network.neutron [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Updated VIF entry in instance network info cache for port 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.650 244018 DEBUG nova.network.neutron [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Updating instance_info_cache with network_info: [{"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a91c112d-16da-423a-b03d-90eb3a792292]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422562, 'tstamp': 422562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280049, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422565, 'tstamp': 422565}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280049, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.659 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa87ddd03-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa87ddd03-d0, col_values=(('external_ids', {'iface-id': 'ad147878-63a9-4ab7-9f47-10bbd9941f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:18.661 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.667 244018 DEBUG oslo_concurrency.lockutils [req-8d9981e8-6929-4220-8d8a-0b46a24c186d req-39d9cf0a-3b12-4393-95d0-070e6f984ab8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721810963' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.800 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.818 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.821 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427323549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.871 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.901 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.905 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.929 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022258.8765035, 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.929 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] VM Started (Lifecycle Event)
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.938 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-vif-unplugged-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.938 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.939 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.939 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.939 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] No waiting events found dispatching network-vif-unplugged-5d14f032-9842-4c87-8531-26efd8b98065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.939 244018 WARNING nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received unexpected event network-vif-unplugged-5d14f032-9842-4c87-8531-26efd8b98065 for instance with vm_state deleted and task_state None.
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.939 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-changed-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.940 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Refreshing instance network info cache due to event network-changed-b98cf008-d49f-4577-b9e8-2ca8d67298cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.940 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.940 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.940 244018 DEBUG nova.network.neutron [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Refreshing network info cache for port b98cf008-d49f-4577-b9e8-2ca8d67298cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.964 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.973 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022258.8766563, 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.973 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] VM Paused (Lifecycle Event)
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.995 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:18 compute-0 nova_compute[244014]: 2026-02-25 12:24:18.998 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.009 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.009 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.010 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.010 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.010 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] No waiting events found dispatching network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.010 244018 WARNING nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received unexpected event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f for instance with vm_state active and task_state None.
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.010 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-changed-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.011 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Refreshing instance network info cache due to event network-changed-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.011 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.011 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.011 244018 DEBUG nova.network.neutron [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Refreshing network info cache for port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.017 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:19 compute-0 ceph-mon[76335]: pgmap v1203: 305 pgs: 305 active+clean; 510 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 11 MiB/s wr, 334 op/s
Feb 25 12:24:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2529075044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/721810963' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3427323549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273368521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.306 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.310 244018 DEBUG nova.virt.libvirt.vif [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:12Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.313 244018 DEBUG nova.network.os_vif_util [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.315 244018 DEBUG nova.network.os_vif_util [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.317 244018 DEBUG nova.objects.instance [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.343 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <uuid>8249bea8-a1c0-42f5-a4d1-12b74e20bccd</uuid>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <name>instance-00000028</name>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:name>tempest-InstanceActionsTestJSON-server-903296548</nova:name>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:18</nova:creationTime>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:user uuid="d9c37980bc44499d879a667da762ceb2">tempest-InstanceActionsTestJSON-1092309800-project-member</nova:user>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:project uuid="ddc1ab49b1f14fa1b483e02aa65e18b8">tempest-InstanceActionsTestJSON-1092309800</nova:project>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:port uuid="b98cf008-d49f-4577-b9e8-2ca8d67298cc">
Feb 25 12:24:19 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="serial">8249bea8-a1c0-42f5-a4d1-12b74e20bccd</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="uuid">8249bea8-a1c0-42f5-a4d1-12b74e20bccd</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f6:f6:38"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="tapb98cf008-d4"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/console.log" append="off"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:19 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:19 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.344 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Preparing to wait for external event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.344 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.345 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.345 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.347 244018 DEBUG nova.virt.libvirt.vif [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:12Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.347 244018 DEBUG nova.network.os_vif_util [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.348 244018 DEBUG nova.network.os_vif_util [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.349 244018 DEBUG os_vif [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.355 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.356 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.360 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb98cf008-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.361 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb98cf008-d4, col_values=(('external_ids', {'iface-id': 'b98cf008-d49f-4577-b9e8-2ca8d67298cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f6:38', 'vm-uuid': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 NetworkManager[49836]: <info>  [1772022259.3644] manager: (tapb98cf008-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.372 244018 INFO os_vif [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4')
Feb 25 12:24:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890875996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.448 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.450 244018 DEBUG nova.virt.libvirt.vif [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-505619470',display_name='tempest-DeleteServersTestJSON-server-505619470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-505619470',id=41,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-026to650',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:14Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=2dd0d0eb-f5ba-419e-8233-256425af6119,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.451 244018 DEBUG nova.network.os_vif_util [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.452 244018 DEBUG nova.network.os_vif_util [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.454 244018 DEBUG nova.objects.instance [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2dd0d0eb-f5ba-419e-8233-256425af6119 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.584 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <uuid>2dd0d0eb-f5ba-419e-8233-256425af6119</uuid>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <name>instance-00000029</name>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersTestJSON-server-505619470</nova:name>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:18</nova:creationTime>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <nova:port uuid="6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc">
Feb 25 12:24:19 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="serial">2dd0d0eb-f5ba-419e-8233-256425af6119</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="uuid">2dd0d0eb-f5ba-419e-8233-256425af6119</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2dd0d0eb-f5ba-419e-8233-256425af6119_disk">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:19 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:20:d6:fa"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <target dev="tap6b0cdbf7-28"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/console.log" append="off"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:19 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:19 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:19 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:19 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:19 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.585 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Preparing to wait for external event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.585 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.586 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.586 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.587 244018 DEBUG nova.virt.libvirt.vif [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-505619470',display_name='tempest-DeleteServersTestJSON-server-505619470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-505619470',id=41,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-026to650',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:14Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=2dd0d0eb-f5ba-419e-8233-256425af6119,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.587 244018 DEBUG nova.network.os_vif_util [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.588 244018 DEBUG nova.network.os_vif_util [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.589 244018 DEBUG os_vif [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.592 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.593 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.598 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b0cdbf7-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.598 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b0cdbf7-28, col_values=(('external_ids', {'iface-id': '6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:d6:fa', 'vm-uuid': '2dd0d0eb-f5ba-419e-8233-256425af6119'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.600 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 NetworkManager[49836]: <info>  [1772022259.6015] manager: (tap6b0cdbf7-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.620 244018 INFO os_vif [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28')
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.635 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.635 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.636 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] No VIF found with MAC fa:16:3e:f6:f6:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.637 244018 INFO nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Using config drive
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.667 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.717 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.717 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.717 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:20:d6:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.718 244018 INFO nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Using config drive
Feb 25 12:24:19 compute-0 nova_compute[244014]: 2026-02-25 12:24:19.734 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 510 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.9 MiB/s wr, 264 op/s
Feb 25 12:24:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/273368521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3890875996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.321 244018 INFO nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Creating config drive at /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.327 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3wqme3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.357 244018 INFO nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Creating config drive at /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.363 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4g896lun execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.397 244018 DEBUG nova.network.neutron [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Updated VIF entry in instance network info cache for port b98cf008-d49f-4577-b9e8-2ca8d67298cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.398 244018 DEBUG nova.network.neutron [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Updating instance_info_cache with network_info: [{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.415 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.416 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.416 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.417 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.417 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.417 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] No waiting events found dispatching network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.418 244018 WARNING nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received unexpected event network-vif-plugged-5d14f032-9842-4c87-8531-26efd8b98065 for instance with vm_state deleted and task_state None.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.418 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.418 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.419 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.419 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.419 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Processing event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.420 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.420 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.420 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.421 244018 DEBUG oslo_concurrency.lockutils [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.421 244018 DEBUG nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] No waiting events found dispatching network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.421 244018 WARNING nova.compute.manager [req-9d2a24c7-86f4-44d1-a0ed-e6fb99925d3e req-27af2534-71c3-4d7c-9e3f-ac2593dc4fa1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received unexpected event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa for instance with vm_state building and task_state spawning.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.422 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.439 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022260.438018, 42abb4b3-2144-4c11-a04c-7641d725bcde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.439 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] VM Resumed (Lifecycle Event)
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.441 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.458 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.460 244018 INFO nova.virt.libvirt.driver [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Instance spawned successfully.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.461 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3wqme3j" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.462 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.495 244018 DEBUG nova.storage.rbd_utils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.500 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config 2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.535 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4g896lun" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.566 244018 DEBUG nova.storage.rbd_utils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] rbd image 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.569 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.596 244018 DEBUG nova.network.neutron [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Updated VIF entry in instance network info cache for port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.597 244018 DEBUG nova.network.neutron [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Updating instance_info_cache with network_info: [{"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.600 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.609 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.610 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.610 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.611 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.611 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.612 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.618 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2dd0d0eb-f5ba-419e-8233-256425af6119" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.619 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Received event network-vif-deleted-5d14f032-9842-4c87-8531-26efd8b98065 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.619 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.620 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.620 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.621 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.621 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Processing event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.621 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.622 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.622 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.622 244018 DEBUG oslo_concurrency.lockutils [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.623 244018 DEBUG nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] No waiting events found dispatching network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.623 244018 WARNING nova.compute.manager [req-bc873330-4d4a-4cc7-864f-31b705933ed3 req-31ac256d-4859-450c-b120-d7e6266f09c5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received unexpected event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 for instance with vm_state building and task_state spawning.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.624 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.625 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.629 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022260.628053, 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.629 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] VM Resumed (Lifecycle Event)
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.633 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.637 244018 INFO nova.virt.libvirt.driver [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Instance spawned successfully.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.637 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.656 244018 DEBUG oslo_concurrency.processutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config 2dd0d0eb-f5ba-419e-8233-256425af6119_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.657 244018 INFO nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Deleting local config drive /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119/disk.config because it was imported into RBD.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.665 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.674 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.693 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Took 9.93 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.693 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.695 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.695 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.696 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.697 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.697 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.697 244018 DEBUG nova.virt.libvirt.driver [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:20 compute-0 kernel: tap6b0cdbf7-28: entered promiscuous mode
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.7034] manager: (tap6b0cdbf7-28): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.709 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00323|binding|INFO|Claiming lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc for this chassis.
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00324|binding|INFO|6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc: Claiming fa:16:3e:20:d6:fa 10.100.0.9
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.722 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d6:fa 10.100.0.9'], port_security=['fa:16:3e:20:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd0d0eb-f5ba-419e-8233-256425af6119', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.724 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.725 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf394ebd-8d95-4226-b433-ecd2f6219815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.733 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.734 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.735 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[669fe410-b45c-4e1e-87e3-7e89d8a146b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.735 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b954c7af-4b0b-4e2e-ad51-290c7ee4a432]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 systemd-machined[210048]: New machine qemu-44-instance-00000029.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.746 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e057df-3c81-4c62-93b4-16f39c0f7ddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000029.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.765 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d53e47bf-37d2-4228-92ce-b84f3489a0fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 systemd-udevd[280316]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00325|binding|INFO|Setting lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc ovn-installed in OVS
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00326|binding|INFO|Setting lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc up in Southbound
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.7833] device (tap6b0cdbf7-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.7837] device (tap6b0cdbf7-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.795 244018 DEBUG oslo_concurrency.processutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config 8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.796 244018 INFO nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Deleting local config drive /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/disk.config because it was imported into RBD.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.802 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[36894f14-e48d-4db7-b8f0-768c72628460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.8076] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Feb 25 12:24:20 compute-0 systemd-udevd[280322]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[39fe4ced-c55f-403b-9f25-a3105c908320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.828 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Took 8.96 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.828 244018 DEBUG nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.830 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Took 12.29 seconds to build instance.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.836 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1e48d2-1ccc-478d-932d-3c135fd15065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 systemd-udevd[280345]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.8393] manager: (tapb98cf008-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Feb 25 12:24:20 compute-0 kernel: tapb98cf008-d4: entered promiscuous mode
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.840 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a349b23-e6f6-43e4-bee8-25d326e9e3fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00327|binding|INFO|Claiming lport b98cf008-d49f-4577-b9e8-2ca8d67298cc for this chassis.
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00328|binding|INFO|b98cf008-d49f-4577-b9e8-2ca8d67298cc: Claiming fa:16:3e:f6:f6:38 10.100.0.14
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.8517] device (tapb98cf008-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.8528] device (tapb98cf008-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.860 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.861 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f6:38 10.100.0.14'], port_security=['fa:16:3e:f6:f6:38 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da46635e-b256-469a-9470-4d538a2beccb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddc1ab49b1f14fa1b483e02aa65e18b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7defde8-e039-4217-befa-f29edcb63bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3956e8c4-15f2-4fcd-bec9-9698bba7bb8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b98cf008-d49f-4577-b9e8-2ca8d67298cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.8667] device (tapa0d45b1c-10): carrier: link connected
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.868 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[433c7fbf-701d-4079-90d8-c07f2aca36bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00329|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc ovn-installed in OVS
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00330|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc up in Southbound
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 systemd-machined[210048]: New machine qemu-45-instance-00000028.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.883 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe82d2-47b2-461b-8ee5-69a1b1fe24d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423027, 'reachable_time': 33154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280360, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000028.
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.894 244018 INFO nova.compute.manager [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Took 12.28 seconds to build instance.
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[204eedba-ee62-4984-b00b-ff33b58cd449]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423027, 'tstamp': 423027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280361, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.914 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bccfb333-a86c-4d7d-9a51-b4cd6c131887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423027, 'reachable_time': 33154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280362, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.916 244018 DEBUG oslo_concurrency.lockutils [None req-72dc3ab6-6a41-4b28-9b0e-5d1c38bd2cb7 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[caed615e-05e0-436a-a70b-62778a5052ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.986 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c571a4-9229-4f00-8a25-8714ed5e8959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.987 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.987 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.987 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 NetworkManager[49836]: <info>  [1772022260.9895] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Feb 25 12:24:20 compute-0 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:20.996 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:20 compute-0 ovn_controller[147040]: 2026-02-25T12:24:20Z|00331|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:24:20 compute-0 nova_compute[244014]: 2026-02-25 12:24:20.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.004 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79337cce-90c9-4da5-9c65-78a75cd82ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.005 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.005 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:24:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb 25 12:24:21 compute-0 ceph-mon[76335]: pgmap v1204: 305 pgs: 305 active+clean; 510 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.9 MiB/s wr, 264 op/s
Feb 25 12:24:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb 25 12:24:21 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.256 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.2556772, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.256 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Started (Lifecycle Event)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.277 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.281 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.2558887, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.282 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Paused (Lifecycle Event)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.305 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.308 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.330 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.331 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.3272498, 2dd0d0eb-f5ba-419e-8233-256425af6119 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.331 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] VM Started (Lifecycle Event)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.355 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 podman[280482]: 2026-02-25 12:24:21.358231109 +0000 UTC m=+0.065760052 container create f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.358 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.3273032, 2dd0d0eb-f5ba-419e-8233-256425af6119 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.358 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] VM Paused (Lifecycle Event)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.377 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.383 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:21 compute-0 systemd[1]: Started libpod-conmon-f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192.scope.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.406 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8169f2466085c991cdca49e9ea3eefac059674241e3cb9b71cb8eee786b42452/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:21 compute-0 podman[280482]: 2026-02-25 12:24:21.329000672 +0000 UTC m=+0.036529635 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:21 compute-0 podman[280482]: 2026-02-25 12:24:21.433800008 +0000 UTC m=+0.141328971 container init f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:24:21 compute-0 podman[280482]: 2026-02-25 12:24:21.43919876 +0000 UTC m=+0.146727703 container start f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:24:21 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [NOTICE]   (280502) : New worker (280504) forked
Feb 25 12:24:21 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [NOTICE]   (280502) : Loading success.
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.482 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b98cf008-d49f-4577-b9e8-2ca8d67298cc in datapath da46635e-b256-469a-9470-4d538a2beccb unbound from our chassis
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0863fc4d-bc22-435f-afe1-bc74634b226e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.491 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda46635e-b1 in ovnmeta-da46635e-b256-469a-9470-4d538a2beccb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.498 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda46635e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e609016d-ed0d-4ea0-9af0-6752ef14f5b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4e6458-9f70-4a75-9f57-1ef35ddd3e6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.507 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6b5856-8980-4790-b3d5-fdca9d947015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f3346f6f-eaaf-4c4e-8c0c-fc8fccf01e8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.527 244018 DEBUG nova.compute.manager [req-cd313b62-92e5-47e6-87f5-97ba30426514 req-50baff5a-0639-455f-88fc-d3af2f24c8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.527 244018 DEBUG oslo_concurrency.lockutils [req-cd313b62-92e5-47e6-87f5-97ba30426514 req-50baff5a-0639-455f-88fc-d3af2f24c8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.527 244018 DEBUG oslo_concurrency.lockutils [req-cd313b62-92e5-47e6-87f5-97ba30426514 req-50baff5a-0639-455f-88fc-d3af2f24c8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.528 244018 DEBUG oslo_concurrency.lockutils [req-cd313b62-92e5-47e6-87f5-97ba30426514 req-50baff5a-0639-455f-88fc-d3af2f24c8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.528 244018 DEBUG nova.compute.manager [req-cd313b62-92e5-47e6-87f5-97ba30426514 req-50baff5a-0639-455f-88fc-d3af2f24c8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Processing event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.528 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.530 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.530537, 2dd0d0eb-f5ba-419e-8233-256425af6119 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] VM Resumed (Lifecycle Event)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.534 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.536 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ace143da-44d6-4d04-a52d-b2110eda378f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.540 244018 INFO nova.virt.libvirt.driver [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Instance spawned successfully.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.541 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:21 compute-0 NetworkManager[49836]: <info>  [1772022261.5462] manager: (tapda46635e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.545 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16fa6488-38cf-4b82-bde8-8d0ba6b3f694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.554 244018 DEBUG nova.compute.manager [req-a53d3dfd-acba-43dd-9010-5f744cffdeda req-9e33bcf3-437d-45de-add0-481e0717fe41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.555 244018 DEBUG oslo_concurrency.lockutils [req-a53d3dfd-acba-43dd-9010-5f744cffdeda req-9e33bcf3-437d-45de-add0-481e0717fe41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.555 244018 DEBUG oslo_concurrency.lockutils [req-a53d3dfd-acba-43dd-9010-5f744cffdeda req-9e33bcf3-437d-45de-add0-481e0717fe41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.556 244018 DEBUG oslo_concurrency.lockutils [req-a53d3dfd-acba-43dd-9010-5f744cffdeda req-9e33bcf3-437d-45de-add0-481e0717fe41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.556 244018 DEBUG nova.compute.manager [req-a53d3dfd-acba-43dd-9010-5f744cffdeda req-9e33bcf3-437d-45de-add0-481e0717fe41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Processing event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.557 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.558 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.561 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.564 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.568 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.568 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.569 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.569 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.570 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.570 244018 DEBUG nova.virt.libvirt.driver [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.575 244018 INFO nova.virt.libvirt.driver [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance spawned successfully.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.575 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.577 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b30c3da5-05f2-49fb-9820-c3a12d858aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.581 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3896d25f-05e0-48aa-9971-cf1734fae47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 NetworkManager[49836]: <info>  [1772022261.6002] device (tapda46635e-b0): carrier: link connected
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.609 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[41001cf7-0462-4be2-ad11-cda4f032131f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[91e3c0c5-be71-4d2c-b230-4db2198ede9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda46635e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:ff:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423100, 'reachable_time': 33101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280523, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.635 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e97db875-be8e-42c0-b26c-fae179b2f190]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:ff2b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423100, 'tstamp': 423100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280524, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.647 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.647 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022261.5610952, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.647 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Resumed (Lifecycle Event)
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61b167e6-85f3-4b16-a2fe-ea0ce27645a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda46635e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:ff:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423100, 'reachable_time': 33101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280525, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.683 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b048bc-760a-47c1-a608-5501d188ea85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.720 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.728 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.728 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.729 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.729 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.730 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.730 244018 DEBUG nova.virt.libvirt.driver [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbf802c-01bf-4601-a66a-9185399d62cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.733 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda46635e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.734 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.734 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.735 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda46635e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 kernel: tapda46635e-b0: entered promiscuous mode
Feb 25 12:24:21 compute-0 NetworkManager[49836]: <info>  [1772022261.7385] manager: (tapda46635e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.748 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda46635e-b0, col_values=(('external_ids', {'iface-id': '223a6222-9475-459f-808c-fa2e57295008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:21 compute-0 ovn_controller[147040]: 2026-02-25T12:24:21Z|00332|binding|INFO|Releasing lport 223a6222-9475-459f-808c-fa2e57295008 from this chassis (sb_readonly=0)
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 487 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.0 MiB/s wr, 460 op/s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.755 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.756 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88b703a8-476f-4fa4-b4e1-3cda70a51782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.757 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.758 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'env', 'PROCESS_TAG=haproxy-da46635e-b256-469a-9470-4d538a2beccb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da46635e-b256-469a-9470-4d538a2beccb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.776 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.776 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.777 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.777 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.777 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.778 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.779 244018 INFO nova.compute.manager [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Terminating instance
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.781 244018 DEBUG nova.compute.manager [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.793 244018 INFO nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Took 7.70 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.793 244018 DEBUG nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.808 244018 INFO nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Took 8.80 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.808 244018 DEBUG nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:21 compute-0 kernel: tap8428e69d-5f (unregistering): left promiscuous mode
Feb 25 12:24:21 compute-0 NetworkManager[49836]: <info>  [1772022261.8270] device (tap8428e69d-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:21 compute-0 ovn_controller[147040]: 2026-02-25T12:24:21Z|00333|binding|INFO|Releasing lport 8428e69d-5fa3-4f42-9801-6b72fc0fc05b from this chassis (sb_readonly=0)
Feb 25 12:24:21 compute-0 ovn_controller[147040]: 2026-02-25T12:24:21Z|00334|binding|INFO|Setting lport 8428e69d-5fa3-4f42-9801-6b72fc0fc05b down in Southbound
Feb 25 12:24:21 compute-0 ovn_controller[147040]: 2026-02-25T12:24:21Z|00335|binding|INFO|Removing iface tap8428e69d-5f ovn-installed in OVS
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:21.841 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:a7:a4 10.100.0.8'], port_security=['fa:16:3e:20:a7:a4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fb09cc9c-e6a9-4718-bb97-0df5558cb091', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8428e69d-5fa3-4f42-9801-6b72fc0fc05b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:21 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Feb 25 12:24:21 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 12.685s CPU time.
Feb 25 12:24:21 compute-0 systemd-machined[210048]: Machine qemu-39-instance-00000023 terminated.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.894 244018 INFO nova.compute.manager [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Took 10.72 seconds to build instance.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.896 244018 INFO nova.compute.manager [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Took 10.96 seconds to build instance.
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.914 244018 DEBUG oslo_concurrency.lockutils [None req-167767c5-3938-4214-98e4-2504782bb6b4 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:21 compute-0 nova_compute[244014]: 2026-02-25 12:24:21.915 244018 DEBUG oslo_concurrency.lockutils [None req-fe8d7b32-b322-483b-90bd-c23a2db10683 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:21 compute-0 NetworkManager[49836]: <info>  [1772022261.9951] manager: (tap8428e69d-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.009 244018 INFO nova.virt.libvirt.driver [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Instance destroyed successfully.
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.009 244018 DEBUG nova.objects.instance [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid fb09cc9c-e6a9-4718-bb97-0df5558cb091 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.022 244018 DEBUG nova.virt.libvirt.vif [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:23:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2091476591',display_name='tempest-ImagesTestJSON-server-2091476591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2091476591',id=35,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:23:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-55qce6pu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:00Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=fb09cc9c-e6a9-4718-bb97-0df5558cb091,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.022 244018 DEBUG nova.network.os_vif_util [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "address": "fa:16:3e:20:a7:a4", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8428e69d-5f", "ovs_interfaceid": "8428e69d-5fa3-4f42-9801-6b72fc0fc05b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.023 244018 DEBUG nova.network.os_vif_util [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.023 244018 DEBUG os_vif [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.025 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8428e69d-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.030 244018 INFO os_vif [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=8428e69d-5fa3-4f42-9801-6b72fc0fc05b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8428e69d-5f')
Feb 25 12:24:22 compute-0 podman[280584]: 2026-02-25 12:24:22.094603149 +0000 UTC m=+0.036289978 container create ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:24:22 compute-0 systemd[1]: Started libpod-conmon-ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7.scope.
Feb 25 12:24:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c448954f71b58fc30a19c444c949a0e3a743475485fac0d77fc7bbe7122b3e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:22 compute-0 podman[280584]: 2026-02-25 12:24:22.164323892 +0000 UTC m=+0.106010731 container init ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:24:22 compute-0 ceph-mon[76335]: osdmap e170: 3 total, 3 up, 3 in
Feb 25 12:24:22 compute-0 podman[280584]: 2026-02-25 12:24:22.170664222 +0000 UTC m=+0.112351071 container start ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:24:22 compute-0 podman[280584]: 2026-02-25 12:24:22.077614078 +0000 UTC m=+0.019300907 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [NOTICE]   (280606) : New worker (280608) forked
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [NOTICE]   (280606) : Loading success.
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.254 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8428e69d-5fa3-4f42-9801-6b72fc0fc05b in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.258 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ffbcf7-d7fc-4399-ac8b-ae327c2a78f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.260 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.263 244018 INFO nova.virt.libvirt.driver [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Deleting instance files /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091_del
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.264 244018 INFO nova.virt.libvirt.driver [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Deletion of /var/lib/nova/instances/fb09cc9c-e6a9-4718-bb97-0df5558cb091_del complete
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.310 244018 INFO nova.compute.manager [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Took 0.53 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.311 244018 DEBUG oslo.service.loopingcall [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.311 244018 DEBUG nova.compute.manager [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.311 244018 DEBUG nova.network.neutron [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [NOTICE]   (277741) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [NOTICE]   (277741) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [WARNING]  (277741) : Exiting Master process...
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [ALERT]    (277741) : Current worker (277743) exited with code 143 (Terminated)
Feb 25 12:24:22 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[277737]: [WARNING]  (277741) : All workers exited. Exiting... (0)
Feb 25 12:24:22 compute-0 systemd[1]: libpod-5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a.scope: Deactivated successfully.
Feb 25 12:24:22 compute-0 podman[280634]: 2026-02-25 12:24:22.366793263 +0000 UTC m=+0.037079101 container died 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde15e975be9682ca4b6427e9a4884dbac48c37fb6c688f492d5ea9b2508efce-merged.mount: Deactivated successfully.
Feb 25 12:24:22 compute-0 podman[280634]: 2026-02-25 12:24:22.402865274 +0000 UTC m=+0.073151102 container cleanup 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:24:22 compute-0 systemd[1]: libpod-conmon-5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a.scope: Deactivated successfully.
Feb 25 12:24:22 compute-0 podman[280658]: 2026-02-25 12:24:22.458280962 +0000 UTC m=+0.035790954 container remove 5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29104b46-55bb-4b18-b5bc-b262d4193431]: (4, ('Wed Feb 25 12:24:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a)\n5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a\nWed Feb 25 12:24:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a)\n5f9efb752a3eb8b6f44f9fa3369a77eeb339dd1438eecee06a0176c3686f140a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.474 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdabea5-0eb1-46d7-be53-36fda468de03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.474 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:22 compute-0 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 12:24:22 compute-0 nova_compute[244014]: 2026-02-25 12:24:22.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cad588-733b-4263-b383-a65b46b4281a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[596c6c3b-b57e-4d81-a261-a5706ee55ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.500 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86501fc0-6c11-489f-b761-f6b3cab34661]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.511 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6350d272-4622-452c-806f-76cc9ad68607]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 420192, 'reachable_time': 15355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280671, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.513 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:22.513 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[971dcdc5-330f-4ebd-b60d-ed4d58949885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:23 compute-0 ceph-mon[76335]: pgmap v1206: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 487 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.0 MiB/s wr, 460 op/s
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.685 244018 DEBUG nova.network.neutron [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.703 244018 INFO nova.compute.manager [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Took 1.39 seconds to deallocate network for instance.
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.748 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.749 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 448 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.2 MiB/s wr, 544 op/s
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.843 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.843 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.844 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.844 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.844 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.848 244018 INFO nova.compute.manager [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Terminating instance
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.849 244018 DEBUG nova.compute.manager [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:23 compute-0 kernel: tap6b0cdbf7-28 (unregistering): left promiscuous mode
Feb 25 12:24:23 compute-0 NetworkManager[49836]: <info>  [1772022263.8874] device (tap6b0cdbf7-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:23 compute-0 ovn_controller[147040]: 2026-02-25T12:24:23Z|00336|binding|INFO|Releasing lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc from this chassis (sb_readonly=0)
Feb 25 12:24:23 compute-0 ovn_controller[147040]: 2026-02-25T12:24:23Z|00337|binding|INFO|Setting lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc down in Southbound
Feb 25 12:24:23 compute-0 ovn_controller[147040]: 2026-02-25T12:24:23Z|00338|binding|INFO|Removing iface tap6b0cdbf7-28 ovn-installed in OVS
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:23.901 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d6:fa 10.100.0.9'], port_security=['fa:16:3e:20:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd0d0eb-f5ba-419e-8233-256425af6119', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:23.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:24:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:23.903 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:23.903 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[641b286c-3525-45ac-84ff-397ebcc51414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:23.904 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:23 compute-0 nova_compute[244014]: 2026-02-25 12:24:23.920 244018 DEBUG oslo_concurrency.processutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:23 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000029.scope: Deactivated successfully.
Feb 25 12:24:23 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000029.scope: Consumed 2.726s CPU time.
Feb 25 12:24:23 compute-0 systemd-machined[210048]: Machine qemu-44-instance-00000029 terminated.
Feb 25 12:24:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [NOTICE]   (280502) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [NOTICE]   (280502) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [ALERT]    (280502) : Current worker (280504) exited with code 143 (Terminated)
Feb 25 12:24:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[280498]: [WARNING]  (280502) : All workers exited. Exiting... (0)
Feb 25 12:24:24 compute-0 systemd[1]: libpod-f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192.scope: Deactivated successfully.
Feb 25 12:24:24 compute-0 podman[280696]: 2026-02-25 12:24:24.050946106 +0000 UTC m=+0.056347256 container died f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:24:24 compute-0 kernel: tap6b0cdbf7-28: entered promiscuous mode
Feb 25 12:24:24 compute-0 systemd-udevd[280677]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:24 compute-0 ovn_controller[147040]: 2026-02-25T12:24:24Z|00339|binding|INFO|Claiming lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc for this chassis.
Feb 25 12:24:24 compute-0 NetworkManager[49836]: <info>  [1772022264.0721] manager: (tap6b0cdbf7-28): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Feb 25 12:24:24 compute-0 ovn_controller[147040]: 2026-02-25T12:24:24Z|00340|binding|INFO|6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc: Claiming fa:16:3e:20:d6:fa 10.100.0.9
Feb 25 12:24:24 compute-0 kernel: tap6b0cdbf7-28 (unregistering): left promiscuous mode
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.082 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d6:fa 10.100.0.9'], port_security=['fa:16:3e:20:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd0d0eb-f5ba-419e-8233-256425af6119', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:24 compute-0 ovn_controller[147040]: 2026-02-25T12:24:24Z|00341|binding|INFO|Releasing lport 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc from this chassis (sb_readonly=0)
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8169f2466085c991cdca49e9ea3eefac059674241e3cb9b71cb8eee786b42452-merged.mount: Deactivated successfully.
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.096 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d6:fa 10.100.0.9'], port_security=['fa:16:3e:20:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd0d0eb-f5ba-419e-8233-256425af6119', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:24 compute-0 podman[280696]: 2026-02-25 12:24:24.104476261 +0000 UTC m=+0.109877371 container cleanup f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.116 244018 INFO nova.virt.libvirt.driver [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Instance destroyed successfully.
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.117 244018 DEBUG nova.objects.instance [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid 2dd0d0eb-f5ba-419e-8233-256425af6119 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:24 compute-0 systemd[1]: libpod-conmon-f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192.scope: Deactivated successfully.
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.131 244018 DEBUG nova.virt.libvirt.vif [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-505619470',display_name='tempest-DeleteServersTestJSON-server-505619470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-505619470',id=41,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-026to650',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:21Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=2dd0d0eb-f5ba-419e-8233-256425af6119,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.131 244018 DEBUG nova.network.os_vif_util [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "address": "fa:16:3e:20:d6:fa", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0cdbf7-28", "ovs_interfaceid": "6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.132 244018 DEBUG nova.network.os_vif_util [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.132 244018 DEBUG os_vif [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b0cdbf7-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.140 244018 INFO os_vif [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0cdbf7-28')
Feb 25 12:24:24 compute-0 podman[280742]: 2026-02-25 12:24:24.198669106 +0000 UTC m=+0.063308612 container remove f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32efd7bc-b5a1-4dd7-86a2-b878bbddef96]: (4, ('Wed Feb 25 12:24:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192)\nf659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192\nWed Feb 25 12:24:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (f659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192)\nf659e3b215f48b2c283e40b4ae9428a2d792c9a2edc322a9257d2ceac7236192\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.216 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a089d21-12ba-466a-bbf1-6f4141d072f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.217 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:24 compute-0 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.222 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3cde4d-0914-46fd-9e90-42dc88e228bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.234 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0279b395-fef0-43cd-bc20-d25638eb9fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.235 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dae32988-51a1-4d00-85a8-b4ec3c53a614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5277a3-b8dc-4e5d-a45c-e9cb1558de66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423020, 'reachable_time': 20706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280775, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.250 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.250 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[6098bd7e-1c8a-45c8-a670-e50e0a915db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.251 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.252 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.253 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df157642-b609-4d73-ae82-6f7ed74cbadd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.254 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.255 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:24.256 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64080dd1-ba8e-4eb1-9328-06da6924725e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.452 244018 INFO nova.virt.libvirt.driver [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Deleting instance files /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119_del
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.453 244018 INFO nova.virt.libvirt.driver [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Deletion of /var/lib/nova/instances/2dd0d0eb-f5ba-419e-8233-256425af6119_del complete
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.513 244018 INFO nova.compute.manager [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.514 244018 DEBUG oslo.service.loopingcall [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.515 244018 DEBUG nova.compute.manager [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.515 244018 DEBUG nova.network.neutron [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369801446' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.551 244018 DEBUG oslo_concurrency.processutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.557 244018 DEBUG nova.compute.provider_tree [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.573 244018 DEBUG nova.scheduler.client.report [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.605 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.640 244018 INFO nova.scheduler.client.report [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance fb09cc9c-e6a9-4718-bb97-0df5558cb091
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.712 244018 DEBUG oslo_concurrency.lockutils [None req-b13a4d98-de0e-4955-a9a6-8b58aa994921 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.878 244018 DEBUG nova.compute.manager [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.879 244018 DEBUG oslo_concurrency.lockutils [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.879 244018 DEBUG oslo_concurrency.lockutils [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.880 244018 DEBUG oslo_concurrency.lockutils [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.880 244018 DEBUG nova.compute.manager [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] No waiting events found dispatching network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:24 compute-0 nova_compute[244014]: 2026-02-25 12:24:24.880 244018 WARNING nova.compute.manager [req-0f7a10cc-9430-4fc8-a8c9-b2b97c414f03 req-f963dea3-ae22-41a2-95d0-a9301af7f139 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received unexpected event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc for instance with vm_state active and task_state deleting.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.012 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.013 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.013 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.013 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.013 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.014 244018 WARNING nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state active and task_state None.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.014 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-vif-unplugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.014 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.014 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.015 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.015 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] No waiting events found dispatching network-vif-unplugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.015 244018 WARNING nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received unexpected event network-vif-unplugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b for instance with vm_state deleted and task_state None.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.015 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.016 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.016 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.016 244018 DEBUG oslo_concurrency.lockutils [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fb09cc9c-e6a9-4718-bb97-0df5558cb091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.016 244018 DEBUG nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] No waiting events found dispatching network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.017 244018 WARNING nova.compute.manager [req-0b2c1785-4fb3-492e-8063-a4c09a07db95 req-54866d6b-abe3-4690-918c-acbaa076ca86 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received unexpected event network-vif-plugged-8428e69d-5fa3-4f42-9801-6b72fc0fc05b for instance with vm_state deleted and task_state None.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.073 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.074 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.074 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.074 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.074 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.075 244018 INFO nova.compute.manager [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Terminating instance
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.076 244018 DEBUG nova.compute.manager [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:25 compute-0 kernel: tapf1012b5d-c0 (unregistering): left promiscuous mode
Feb 25 12:24:25 compute-0 NetworkManager[49836]: <info>  [1772022265.1068] device (tapf1012b5d-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:25 compute-0 ovn_controller[147040]: 2026-02-25T12:24:25Z|00342|binding|INFO|Releasing lport f1012b5d-c005-4c9f-a5ec-1e01ce3c581f from this chassis (sb_readonly=0)
Feb 25 12:24:25 compute-0 ovn_controller[147040]: 2026-02-25T12:24:25Z|00343|binding|INFO|Setting lport f1012b5d-c005-4c9f-a5ec-1e01ce3c581f down in Southbound
Feb 25 12:24:25 compute-0 ovn_controller[147040]: 2026-02-25T12:24:25Z|00344|binding|INFO|Removing iface tapf1012b5d-c0 ovn-installed in OVS
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.118 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:85:fd 10.100.0.5'], port_security=['fa:16:3e:7c:85:fd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.120 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1012b5d-c005-4c9f-a5ec-1e01ce3c581f in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 unbound from our chassis
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.123 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a87ddd03-d435-46c4-8efc-50bb38492ac4
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.133 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbf490c-e74a-404f-bbf5-d70fabaacd12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Feb 25 12:24:25 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 8.569s CPU time.
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.156 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[87a3e491-d8ba-4d33-b562-99198995e0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 systemd-machined[210048]: Machine qemu-41-instance-00000025 terminated.
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.162 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec51a2af-f4cd-4aea-9517-c49f2030fbfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 ceph-mon[76335]: pgmap v1207: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 448 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.2 MiB/s wr, 544 op/s
Feb 25 12:24:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2369801446' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.186 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d04646-f1f7-4932-bf33-752275d4deee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.204 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3ef58a-3fdb-4729-8e0b-0429b7c78781]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280788, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8493d31-e401-4ce1-ad5e-deee2f83de6e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422562, 'tstamp': 422562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280789, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422565, 'tstamp': 422565}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280789, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.217 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.222 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa87ddd03-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.222 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.223 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa87ddd03-d0, col_values=(('external_ids', {'iface-id': 'ad147878-63a9-4ab7-9f47-10bbd9941f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:25.224 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.302 244018 INFO nova.virt.libvirt.driver [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Instance destroyed successfully.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.302 244018 DEBUG nova.objects.instance [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'resources' on Instance uuid 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.326 244018 DEBUG nova.virt.libvirt.vif [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-1',id=37,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:17Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.327 244018 DEBUG nova.network.os_vif_util [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "address": "fa:16:3e:7c:85:fd", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1012b5d-c0", "ovs_interfaceid": "f1012b5d-c005-4c9f-a5ec-1e01ce3c581f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.328 244018 DEBUG nova.network.os_vif_util [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.332 244018 DEBUG os_vif [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.336 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1012b5d-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.344 244018 INFO os_vif [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:85:fd,bridge_name='br-int',has_traffic_filtering=True,id=f1012b5d-c005-4c9f-a5ec-1e01ce3c581f,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1012b5d-c0')
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.409 244018 DEBUG nova.network.neutron [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.461 244018 INFO nova.compute.manager [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Took 0.95 seconds to deallocate network for instance.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.478 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.479 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.500 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.511 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.512 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.579 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.668 244018 DEBUG oslo_concurrency.processutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.702 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.703 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.703 244018 INFO nova.compute.manager [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Rebooting instance
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.726 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.726 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquired lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.726 244018 DEBUG nova.network.neutron [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 448 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.2 MiB/s wr, 544 op/s
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.787 244018 INFO nova.virt.libvirt.driver [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Deleting instance files /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_del
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.788 244018 INFO nova.virt.libvirt.driver [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Deletion of /var/lib/nova/instances/2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a_del complete
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.849 244018 INFO nova.compute.manager [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Took 0.77 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.852 244018 DEBUG oslo.service.loopingcall [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.852 244018 DEBUG nova.compute.manager [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:25 compute-0 nova_compute[244014]: 2026-02-25 12:24:25.852 244018 DEBUG nova.network.neutron [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687368071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.215 244018 DEBUG oslo_concurrency.processutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.221 244018 DEBUG nova.compute.provider_tree [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.245 244018 DEBUG nova.scheduler.client.report [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.279 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.283 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.292 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.292 244018 INFO nova.compute.claims [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.312 244018 INFO nova.scheduler.client.report [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance 2dd0d0eb-f5ba-419e-8233-256425af6119
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.443 244018 DEBUG oslo_concurrency.lockutils [None req-8d17667c-6448-4021-9451-9dd17a8c7b4f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.564 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:26 compute-0 podman[280844]: 2026-02-25 12:24:26.722315099 +0000 UTC m=+0.057675323 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 25 12:24:26 compute-0 podman[280845]: 2026-02-25 12:24:26.730039708 +0000 UTC m=+0.074236292 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.861 244018 DEBUG nova.network.neutron [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.883 244018 INFO nova.compute.manager [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Took 1.03 seconds to deallocate network for instance.
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.948 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.959 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-vif-unplugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.959 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.959 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.960 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.960 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] No waiting events found dispatching network-vif-unplugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.960 244018 WARNING nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received unexpected event network-vif-unplugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc for instance with vm_state deleted and task_state None.
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.960 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.961 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.961 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.961 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2dd0d0eb-f5ba-419e-8233-256425af6119-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.961 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] No waiting events found dispatching network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.962 244018 WARNING nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received unexpected event network-vif-plugged-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc for instance with vm_state deleted and task_state None.
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.962 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-vif-unplugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.962 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.962 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.963 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.963 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] No waiting events found dispatching network-vif-unplugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.963 244018 WARNING nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received unexpected event network-vif-unplugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f for instance with vm_state deleted and task_state None.
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.963 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.964 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.964 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.964 244018 DEBUG oslo_concurrency.lockutils [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.964 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] No waiting events found dispatching network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.965 244018 WARNING nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received unexpected event network-vif-plugged-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f for instance with vm_state deleted and task_state None.
Feb 25 12:24:26 compute-0 nova_compute[244014]: 2026-02-25 12:24:26.965 244018 DEBUG nova.compute.manager [req-777b23a7-3247-419b-b943-33171fff44d8 req-79d14f3f-45a5-4ff2-b48e-6698070113d3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Received event network-vif-deleted-f1012b5d-c005-4c9f-a5ec-1e01ce3c581f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713362430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.183 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.190 244018 DEBUG nova.compute.provider_tree [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:27 compute-0 ceph-mon[76335]: pgmap v1208: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 448 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.2 MiB/s wr, 544 op/s
Feb 25 12:24:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2687368071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3713362430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.207 244018 DEBUG nova.scheduler.client.report [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.229 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.230 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.233 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.284 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.285 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.308 244018 INFO nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.328 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.349 244018 DEBUG oslo_concurrency.processutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.428 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.432 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.433 244018 INFO nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Creating image(s)
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.471 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.516 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.546 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.552 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.645 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
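Note: the two processutils lines above are nova's image cache probing the cached base file before cloning it; the qemu-img call is wrapped in oslo_concurrency.prlimit so a malformed image cannot exhaust memory or CPU. A minimal sketch of the same pattern using the public oslo.concurrency API (path and limits copied from the log; illustrative, not nova's actual code):

    # Run "qemu-img info" under address-space and CPU-time rlimits, as the
    # CMD line above does via "python3 -m oslo_concurrency.prlimit".
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1 << 30,  # 1073741824
                                           cpu_time=30),
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)  # e.g. info['format'], info['virtual-size']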
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.646 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.647 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.648 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
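Note: the acquire/wait/release triple above (waited 0.001s, held 0.001s) is the standard oslo_concurrency.lockutils pattern nova uses to ensure only one worker fetches a given base image; the lock name is the image's cache key. A sketch of how such a critical section is typically declared (the function body here is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_func_sync():
        # Critical section: at most one thread downloads/validates the cached
        # base image; concurrent callers block and log "waited N.NNNs".
        pass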
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.680 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.686 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.736 244018 DEBUG nova.policy [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.743 244018 DEBUG nova.compute.manager [req-b3a07c98-4fe5-46a5-8a8c-b5eda206ca31 req-72d93b4c-3b95-4408-aa66-b0849755f05f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Received event network-vif-deleted-8428e69d-5fa3-4f42-9801-6b72fc0fc05b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.743 244018 DEBUG nova.compute.manager [req-b3a07c98-4fe5-46a5-8a8c-b5eda206ca31 req-72d93b4c-3b95-4408-aa66-b0849755f05f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Received event network-vif-deleted-6b0cdbf7-28dc-43f7-b5e8-96d6aa8d5fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 293 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 66 KiB/s wr, 562 op/s
Feb 25 12:24:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2138259207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.907 244018 DEBUG oslo_concurrency.processutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
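Note: the 0.558s "ceph df" call above is how the RBD image backend samples pool capacity for the resource tracker. A sketch of reading the same numbers back from the JSON output (the top-level "stats" keys shown are the usual ceph df fields, but treat the exact key names as an assumption):

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)['stats']
    # Should line up with the pgmap line above: ~59 GiB of 60 GiB available.
    free_gib = stats['total_avail_bytes'] / 1024 ** 3
    total_gib = stats['total_bytes'] / 1024 ** 3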
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.911 244018 DEBUG nova.compute.provider_tree [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.929 244018 DEBUG nova.scheduler.client.report [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
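Note: to decode the inventory dict above, placement treats usable capacity per resource class as (total - reserved) * allocation_ratio, so this provider advertises (8 - 0) * 4.0 = 32 schedulable VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk; the DISK_GB total of 59 tracks the "59 GiB / 60 GiB avail" figure in the ceph pgmap line above.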
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.945 244018 DEBUG nova.network.neutron [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Updating instance_info_cache with network_info: [{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.964 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.967 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Releasing lock "refresh_cache-8249bea8-a1c0-42f5-a4d1-12b74e20bccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.968 244018 DEBUG nova.compute.manager [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:27 compute-0 nova_compute[244014]: 2026-02-25 12:24:27.980 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.004 244018 INFO nova.scheduler.client.report [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Deleted allocations for instance 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.039 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
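Note: the rbd import at 12:24:27.686 plus this resize is the backend's upload path: push the cached base image into the vms pool, then grow it to the flavor's root disk (1073741824 bytes = 1 GiB, matching root_gb=1). A sketch of the same resize through the librbd Python bindings instead of the CLI (client id, pool, and image name copied from the log; illustrative only):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '36c98024-f0f0-4516-ad48-e8ffa90c5058_disk') as image:
                image.resize(1024 ** 3)  # 1073741824 bytes, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()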
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.119 244018 DEBUG oslo_concurrency.lockutils [None req-2a0f3718-2aaf-49a9-994b-b1a6e1650589 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.127 244018 DEBUG nova.objects.instance [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid 36c98024-f0f0-4516-ad48-e8ffa90c5058 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.144 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.144 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Ensure instance console log exists: /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.145 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.145 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.145 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:28 compute-0 kernel: tapb98cf008-d4 (unregistering): left promiscuous mode
Feb 25 12:24:28 compute-0 ovn_controller[147040]: 2026-02-25T12:24:28Z|00345|binding|INFO|Releasing lport b98cf008-d49f-4577-b9e8-2ca8d67298cc from this chassis (sb_readonly=0)
Feb 25 12:24:28 compute-0 ovn_controller[147040]: 2026-02-25T12:24:28Z|00346|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc down in Southbound
Feb 25 12:24:28 compute-0 NetworkManager[49836]: <info>  [1772022268.1638] device (tapb98cf008-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:28 compute-0 ovn_controller[147040]: 2026-02-25T12:24:28Z|00347|binding|INFO|Removing iface tapb98cf008-d4 ovn-installed in OVS
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.167 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.177 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f6:38 10.100.0.14'], port_security=['fa:16:3e:f6:f6:38 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da46635e-b256-469a-9470-4d538a2beccb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddc1ab49b1f14fa1b483e02aa65e18b8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7defde8-e039-4217-befa-f29edcb63bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3956e8c4-15f2-4fcd-bec9-9698bba7bb8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b98cf008-d49f-4577-b9e8-2ca8d67298cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.178 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b98cf008-d49f-4577-b9e8-2ca8d67298cc in datapath da46635e-b256-469a-9470-4d538a2beccb unbound from our chassis
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.179 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da46635e-b256-469a-9470-4d538a2beccb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41df881f-64bf-4b47-a50b-03d23928bcfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.181 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da46635e-b256-469a-9470-4d538a2beccb namespace which is not needed anymore
Feb 25 12:24:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2138259207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:28 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Deactivated successfully.
Feb 25 12:24:28 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000028.scope: Consumed 6.845s CPU time.
Feb 25 12:24:28 compute-0 systemd-machined[210048]: Machine qemu-45-instance-00000028 terminated.
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.289 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Successfully created port: f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:28 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [NOTICE]   (280606) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:28 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [NOTICE]   (280606) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:28 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [ALERT]    (280606) : Current worker (280608) exited with code 143 (Terminated)
Feb 25 12:24:28 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[280601]: [WARNING]  (280606) : All workers exited. Exiting... (0)
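Note: exit code 143 is 128 + 15, i.e. the worker was killed by SIGTERM. That is the expected shutdown path here, not a crash: the metadata agent is deliberately stopping the per-network haproxy now that the last VIF in datapath da46635e has left this chassis.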
Feb 25 12:24:28 compute-0 systemd[1]: libpod-ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7.scope: Deactivated successfully.
Feb 25 12:24:28 compute-0 podman[281117]: 2026-02-25 12:24:28.302347915 +0000 UTC m=+0.051362804 container died ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c448954f71b58fc30a19c444c949a0e3a743475485fac0d77fc7bbe7122b3e7-merged.mount: Deactivated successfully.
Feb 25 12:24:28 compute-0 podman[281117]: 2026-02-25 12:24:28.335436642 +0000 UTC m=+0.084451541 container cleanup ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.356 244018 INFO nova.virt.libvirt.driver [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance destroyed successfully.
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.358 244018 DEBUG nova.objects.instance [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'resources' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:28 compute-0 systemd[1]: libpod-conmon-ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7.scope: Deactivated successfully.
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.372 244018 DEBUG nova.virt.libvirt.vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:28Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.373 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.374 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.374 244018 DEBUG os_vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.377 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb98cf008-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.383 244018 INFO os_vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4')
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.388 244018 DEBUG nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start _get_guest_xml network_info=[{"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.392 244018 WARNING nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.399 244018 DEBUG nova.virt.libvirt.host [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.399 244018 DEBUG nova.virt.libvirt.host [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.402 244018 DEBUG nova.virt.libvirt.host [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.403 244018 DEBUG nova.virt.libvirt.host [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.403 244018 DEBUG nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.403 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.404 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.404 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.405 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.405 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.405 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.405 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.406 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.406 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.406 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.407 244018 DEBUG nova.virt.hardware [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.407 244018 DEBUG nova.objects.instance [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:28 compute-0 podman[281146]: 2026-02-25 12:24:28.410533517 +0000 UTC m=+0.050610003 container remove ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2114b89-9e49-423b-b565-7a2a57e9a840]: (4, ('Wed Feb 25 12:24:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb (ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7)\nce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7\nWed Feb 25 12:24:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb (ce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7)\nce302c4f08551e052461f2d300fae22df7ff7ac021540a006059b8b98f5059f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.417 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bb39dd-038d-4769-87d2-92195f84acc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda46635e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:28 compute-0 kernel: tapda46635e-b0: left promiscuous mode
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe2c628-4a61-4c6d-91d6-481ab6208c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.436 244018 DEBUG oslo_concurrency.processutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.439 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05140eb4-0402-46ea-bf29-4e095e24d7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.441 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f105113-a45a-44db-ab03-dc5c1392dd88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a965fdf8-5d7c-43e8-9698-d75ce10c30ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423094, 'reachable_time': 26584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281166, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.459 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da46635e-b256-469a-9470-4d538a2beccb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:28.459 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3b716630-3e05-4fad-a460-651b1873e287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:28 compute-0 systemd[1]: run-netns-ovnmeta\x2dda46635e\x2db256\x2d469a\x2d9470\x2d4d538a2beccb.mount: Deactivated successfully.
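Note: the teardown above ends with neutron's privsep daemon deleting the ovnmeta- namespace, which systemd then observes as the run-netns mount going away. Reduced to its core, the privileged call is a network-namespace removal; a sketch with pyroute2 (the library neutron's ip_lib uses under the hood; namespace name copied from the log, root privileges required):

    from pyroute2 import netns

    ns = 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb'
    if ns in netns.listnetns():
        netns.remove(ns)  # unpins /run/netns/<ns>, like "ip netns delete"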
Feb 25 12:24:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017524355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.918 244018 DEBUG oslo_concurrency.processutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:28 compute-0 nova_compute[244014]: 2026-02-25 12:24:28.959 244018 DEBUG oslo_concurrency.processutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:29 compute-0 ceph-mon[76335]: pgmap v1209: 305 pgs: 305 active+clean; 293 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 66 KiB/s wr, 562 op/s
Feb 25 12:24:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4017524355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886138223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.463 244018 DEBUG oslo_concurrency.processutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.466 244018 DEBUG nova.virt.libvirt.vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:28Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.467 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.468 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
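The two os_vif_util entries above capture nova's translation step: the legacy VIF dict received from Neutron is converted into a typed os-vif VIFOpenVSwitch object before any plugging happens. A minimal standalone sketch of the same convert-and-plug flow, assuming os-vif's public API (field names are taken from the converted object logged above; the InstanceInfo values come from this instance's record):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below

    net = network.Network(id="da46635e-b256-469a-9470-4d538a2beccb",
                          bridge="br-int", mtu=1442)
    port = vif.VIFOpenVSwitch(
        id="b98cf008-d49f-4577-b9e8-2ca8d67298cc",
        address="fa:16:3e:f6:f6:38",
        network=net,
        plugin="ovs",
        vif_name="tapb98cf008-d4",
        bridge_name="br-int",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="b98cf008-d49f-4577-b9e8-2ca8d67298cc"))
    inst = instance_info.InstanceInfo(
        uuid="8249bea8-a1c0-42f5-a4d1-12b74e20bccd",
        name="instance-00000028")

    # Dispatches to the 'ovs' plugin, which issues the ovsdbapp
    # transactions that appear further down in this log.
    os_vif.plug(port, inst)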
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.470 244018 DEBUG nova.objects.instance [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.489 244018 DEBUG nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <uuid>8249bea8-a1c0-42f5-a4d1-12b74e20bccd</uuid>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <name>instance-00000028</name>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:name>tempest-InstanceActionsTestJSON-server-903296548</nova:name>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:28</nova:creationTime>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:user uuid="d9c37980bc44499d879a667da762ceb2">tempest-InstanceActionsTestJSON-1092309800-project-member</nova:user>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:project uuid="ddc1ab49b1f14fa1b483e02aa65e18b8">tempest-InstanceActionsTestJSON-1092309800</nova:project>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <nova:port uuid="b98cf008-d49f-4577-b9e8-2ca8d67298cc">
Feb 25 12:24:29 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="serial">8249bea8-a1c0-42f5-a4d1-12b74e20bccd</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="uuid">8249bea8-a1c0-42f5-a4d1-12b74e20bccd</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk">
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config">
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f6:f6:38"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <target dev="tapb98cf008-d4"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd/console.log" append="off"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:29 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:29 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:29 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:29 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:29 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
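The XML block above is the complete regenerated guest definition for the hard reboot: an RBD-backed root disk plus a config-drive CD-ROM (both authenticating against the same Ceph mon), a virtio NIC targeted at the OVS tap with MTU 1442, and a q35 machine carrying a bank of pcie-root-port controllers for hotplug headroom. Pulling the interesting fields back out of such a dump needs only the standard library; a quick sketch (the guest.xml path is illustrative):

    import xml.etree.ElementTree as ET

    # Assume the <domain> ... </domain> dump above was saved to guest.xml.
    root = ET.parse("guest.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        print(disk.get("device"), src.get("protocol"), src.get("name"))
    # disk rbd vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk
    # cdrom rbd vms/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_disk.config

    iface = root.find("./devices/interface")
    print(iface.find("mac").get("address"),   # fa:16:3e:f6:f6:38
          iface.find("target").get("dev"),    # tapb98cf008-d4
          iface.find("mtu").get("size"))      # 1442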
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.492 244018 DEBUG nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.492 244018 DEBUG nova.virt.libvirt.driver [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.493 244018 DEBUG nova.virt.libvirt.vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:28Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.494 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.495 244018 DEBUG nova.network.os_vif_util [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.495 244018 DEBUG os_vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.496 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.497 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.501 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb98cf008-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.501 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb98cf008-d4, col_values=(('external_ids', {'iface-id': 'b98cf008-d49f-4577-b9e8-2ca8d67298cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f6:38', 'vm-uuid': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
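The three ovsdbapp commands above are what os-vif's plug() amounts to on an OVN chassis: an idempotent add-bridge (a no-op here, hence "Transaction caused no change"), an add-port, and a db-set stamping the Interface row with the external_ids that ovn-controller keys on (iface-id is the Neutron port UUID). A standalone sketch of the same sequence, assuming ovsdbapp's documented Open_vSwitch schema API and the default local OVSDB socket; the commands are batched into one transaction here, whereas the log runs each as its own single-command transaction:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapb98cf008-d4", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapb98cf008-d4",
            ("external_ids", {
                "iface-id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:f6:f6:38",
                "vm-uuid": "8249bea8-a1c0-42f5-a4d1-12b74e20bccd"})))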
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.5048] manager: (tapb98cf008-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.510 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.511 244018 INFO os_vif [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4')
Feb 25 12:24:29 compute-0 kernel: tapb98cf008-d4: entered promiscuous mode
Feb 25 12:24:29 compute-0 systemd-udevd[281099]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.5824] manager: (tapb98cf008-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 ovn_controller[147040]: 2026-02-25T12:24:29Z|00348|binding|INFO|Claiming lport b98cf008-d49f-4577-b9e8-2ca8d67298cc for this chassis.
Feb 25 12:24:29 compute-0 ovn_controller[147040]: 2026-02-25T12:24:29Z|00349|binding|INFO|b98cf008-d49f-4577-b9e8-2ca8d67298cc: Claiming fa:16:3e:f6:f6:38 10.100.0.14
Feb 25 12:24:29 compute-0 ovn_controller[147040]: 2026-02-25T12:24:29Z|00350|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc ovn-installed in OVS
Feb 25 12:24:29 compute-0 ovn_controller[147040]: 2026-02-25T12:24:29Z|00351|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc up in Southbound
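Once the external_ids land in OVSDB, ovn-controller claims the logical port for this chassis, marks the OVS row ovn-installed, and flips the Southbound Port_Binding up; that is the four claim lines above. A hypothetical spot-check of that binding from the chassis (assumes ovn-sbctl is on the path and can reach the Southbound DB):

    import json
    import subprocess

    LPORT = "b98cf008-d49f-4577-b9e8-2ca8d67298cc"

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True).stdout
    # chassis is non-empty and up=[true] once the claim completes
    print(json.loads(out))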
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.595 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f6:38 10.100.0.14'], port_security=['fa:16:3e:f6:f6:38 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da46635e-b256-469a-9470-4d538a2beccb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddc1ab49b1f14fa1b483e02aa65e18b8', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c7defde8-e039-4217-befa-f29edcb63bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3956e8c4-15f2-4fcd-bec9-9698bba7bb8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b98cf008-d49f-4577-b9e8-2ca8d67298cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.597 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b98cf008-d49f-4577-b9e8-2ca8d67298cc in datapath da46635e-b256-469a-9470-4d538a2beccb bound to our chassis
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.6020] device (tapb98cf008-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.6029] device (tapb98cf008-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.602 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37d6f00c-b810-4f22-a0fd-ad451961e860]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.612 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda46635e-b1 in ovnmeta-da46635e-b256-469a-9470-4d538a2beccb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:24:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.614 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda46635e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9dec0824-931a-40e3-8a5c-be677acb2996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.615 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1513d1-ec2c-47f1-8c1a-0a812f8d7425]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.624 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9895ee56-b5e1-45e4-b218-aa18628c9606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 systemd-machined[210048]: New machine qemu-46-instance-00000028.
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.635 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df998c92-c982-4349-9cb2-c8bf95921dcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.657 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bdef582b-bf54-4e5b-92b4-ad86965d2516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf48f86-2f86-45b6-8caf-2ad1973d1a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.6652] manager: (tapda46635e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/158)
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.697 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5848c246-e2a8-4752-afab-08003215d48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.699 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd07169-8554-42d9-a4d0-72c734995a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.7231] device (tapda46635e-b0): carrier: link connected
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.729 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d20aac-dda9-46e2-913e-40a164c558c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 293 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 77 KiB/s wr, 654 op/s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[246e7fe3-6e64-4221-826b-a91ee139453a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda46635e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:ff:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423913, 'reachable_time': 26349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281274, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21a897aa-cf17-4f6f-b3ee-a6245bee4138]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:ff2b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423913, 'tstamp': 423913}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281275, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.791 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf174cb-cff3-4a40-b958-610122b26e99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda46635e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:ff:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423913, 'reachable_time': 26349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281276, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.816 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a638147-400e-4094-9938-12cf1fccf75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.854 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4ab3f0-f965-476c-a24e-e9bc570b715e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.856 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda46635e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.856 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.856 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda46635e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 NetworkManager[49836]: <info>  [1772022269.8587] manager: (tapda46635e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Feb 25 12:24:29 compute-0 kernel: tapda46635e-b0: entered promiscuous mode
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.861 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda46635e-b0, col_values=(('external_ids', {'iface-id': '223a6222-9475-459f-808c-fa2e57295008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:29 compute-0 ovn_controller[147040]: 2026-02-25T12:24:29Z|00352|binding|INFO|Releasing lport 223a6222-9475-459f-808c-fa2e57295008 from this chassis (sb_readonly=0)
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.877 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.878 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42e4b015-d8f1-43c3-9d08-373fedc64383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.879 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/da46635e-b256-469a-9470-4d538a2beccb.pid.haproxy
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID da46635e-b256-469a-9470-4d538a2beccb
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
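The rendered config above is the whole metadata path for this network: haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards every request to the agent's unix socket at /var/lib/neutron/metadata_proxy, and tags it with X-OVN-Network-ID (plus X-Forwarded-For, via option forwardfor) so the agent can map caller IP and network to a Neutron port. A rough illustration of the request haproxy ends up making, hand-rolled against the socket (illustrative only; real traffic arrives from the guest through the namespace):

    import socket

    SOCK = "/var/lib/neutron/metadata_proxy"
    NET_ID = "da46635e-b256-469a-9470-4d538a2beccb"

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall((
        "GET /openstack/latest/meta_data.json HTTP/1.1\r\n"
        "Host: 169.254.169.254\r\n"
        f"X-OVN-Network-ID: {NET_ID}\r\n"
        "X-Forwarded-For: 10.100.0.14\r\n"  # the instance's fixed IP
        "Connection: close\r\n\r\n").encode())
    print(s.makefile("rb").read().decode(errors="replace"))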
Feb 25 12:24:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:29.880 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'env', 'PROCESS_TAG=haproxy-da46635e-b256-469a-9470-4d538a2beccb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da46635e-b256-469a-9470-4d538a2beccb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.948 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Successfully updated port: f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.964 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.965 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:29 compute-0 nova_compute[244014]: 2026-02-25 12:24:29.965 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.130 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8249bea8-a1c0-42f5-a4d1-12b74e20bccd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.131 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022270.1288784, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.131 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Resumed (Lifecycle Event)
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.134 244018 DEBUG nova.compute.manager [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.139 244018 INFO nova.virt.libvirt.driver [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance rebooted successfully.
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.140 244018 DEBUG nova.compute.manager [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.142 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.156 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.159 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.195 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
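The sync handler above deliberately backs off: the hypervisor reports the guest running (VM power_state 1) and the DB agrees, but task_state is still reboot_started_hard, so acting on the lifecycle event could race the reboot that is finishing. The decision reduces to roughly the following (a paraphrase for illustration, not nova's actual code):

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task owns the instance; syncing now could fight
        # the operation that is still in flight.
        if task_state is not None:
            return f"skip: pending task {task_state}"
        if db_power_state != vm_power_state:
            return f"update DB power state to {vm_power_state}"
        return "in sync"

    print(sync_power_state(1, 1, "reboot_started_hard"))
    # skip: pending task reboot_started_hard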
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.196 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022270.1295564, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.196 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Started (Lifecycle Event)
Feb 25 12:24:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1886138223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:30 compute-0 ceph-mon[76335]: osdmap e171: 3 total, 3 up, 3 in
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.214 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.220 244018 DEBUG oslo_concurrency.lockutils [None req-36bb5d9a-2f11-4efa-95db-6af8b039b19c d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.222 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:30 compute-0 podman[281348]: 2026-02-25 12:24:30.291254034 +0000 UTC m=+0.067406548 container create c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:24:30 compute-0 systemd[1]: Started libpod-conmon-c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3.scope.
Feb 25 12:24:30 compute-0 podman[281348]: 2026-02-25 12:24:30.257480488 +0000 UTC m=+0.033633052 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddcf25511ea60eb4a8123708b566c58ec8603d9be04f81d4f271b91b0c6cd00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:30 compute-0 podman[281348]: 2026-02-25 12:24:30.378221916 +0000 UTC m=+0.154374490 container init c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:24:30 compute-0 podman[281348]: 2026-02-25 12:24:30.384326548 +0000 UTC m=+0.160479062 container start c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:24:30 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [NOTICE]   (281366) : New worker (281368) forked
Feb 25 12:24:30 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [NOTICE]   (281366) : Loading success.
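[annotation] The podman lines above show a per-network haproxy container being created, initialized, and started for OVN metadata. A quick way to interrogate such a container from Python, assuming podman is on PATH and reusing the container name from the log; this is an illustrative probe, not anything the agent itself runs:

import json
import subprocess

name = "neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb"
out = subprocess.run(["podman", "inspect", name],
                     capture_output=True, text=True, check=True)
info = json.loads(out.stdout)[0]  # podman inspect prints a JSON array
print(info["ImageName"], info["State"]["Status"])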
Feb 25 12:24:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:24:30
Feb 25 12:24:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:24:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:24:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta']
Feb 25 12:24:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
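[annotation] The mgr balancer pass above prepared 0 of a possible 10 upmap changes, i.e. placement was already balanced across the listed pools. The same information can be pulled on demand; a sketch assuming a reachable cluster and default admin credentials (key names read defensively since they vary by Ceph release):

import json
import subprocess

out = subprocess.run(["ceph", "balancer", "status", "--format=json"],
                     capture_output=True, text=True, check=True)
status = json.loads(out.stdout)
print(status.get("mode"), status.get("active"), status.get("optimize_result"))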
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.941 244018 DEBUG nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.942 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.942 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.942 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.943 244018 DEBUG nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.943 244018 WARNING nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state active and task_state None.
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.944 244018 DEBUG nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.944 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.944 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.945 244018 DEBUG oslo_concurrency.lockutils [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.945 244018 DEBUG nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:30 compute-0 nova_compute[244014]: 2026-02-25 12:24:30.945 244018 WARNING nova.compute.manager [req-72bf2d67-7aca-4e6b-9a10-3f8424394fda req-9324be7a-e6e0-42a5-81af-ac8a5905ab46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state active and task_state None.
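[annotation] The warning pair above is nova's external-event plumbing: neutron delivered network-vif-unplugged/plugged for the just-rebooted instance, but nothing was registered to wait on them, so each pop finds no waiter and the event is logged as unexpected. Reduced to its core, with illustrative names rather than nova's real classes:

import threading

_lock = threading.Lock()
_waiters = {}  # (instance_uuid, event_name) -> threading.Event

def pop_instance_event(instance_uuid, event_name):
    with _lock:  # the "-events" lock acquire/release pairs seen above
        return _waiters.pop((instance_uuid, event_name), None)

def external_instance_event(instance_uuid, event_name):
    waiter = pop_instance_event(instance_uuid, event_name)
    if waiter is None:
        print("No waiting events found dispatching", event_name)
        print("Received unexpected event", event_name)
    else:
        waiter.set()  # wake whoever called prepare_for_instance_event

external_instance_event("8249bea8-a1c0-42f5-a4d1-12b74e20bccd",
                        "network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc")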
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.190 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022256.189873, 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.191 244018 INFO nova.compute.manager [-] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] VM Stopped (Lifecycle Event)
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.209 244018 DEBUG nova.compute.manager [None req-9109deb4-1d5c-4eb7-a7eb-bb8f3de2663e - - - - - -] [instance: 1d7bcbb8-8755-4a0a-a4b8-df97ab88b49d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:31 compute-0 ceph-mon[76335]: pgmap v1211: 305 pgs: 305 active+clean; 293 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 77 KiB/s wr, 654 op/s
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.228 244018 DEBUG nova.network.neutron [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Updating instance_info_cache with network_info: [{"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.252 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.253 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Instance network_info: |[{"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.256 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Start _get_guest_xml network_info=[{"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.259 244018 WARNING nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.264 244018 DEBUG nova.virt.libvirt.host [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.265 244018 DEBUG nova.virt.libvirt.host [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.269 244018 DEBUG nova.virt.libvirt.host [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.269 244018 DEBUG nova.virt.libvirt.host [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
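[annotation] The two probes above look for a cgroups-v1 cpu controller (absent on this EL9 host) and then find the v2 one. On a unified-hierarchy host the v2 check reduces to reading a single file; a minimal equivalent:

from pathlib import Path

controllers = Path("/sys/fs/cgroup/cgroup.controllers")  # cgroups v2 only
if controllers.exists() and "cpu" in controllers.read_text().split():
    print("CPU controller found on host.")
else:
    print("CPU controller missing on host.")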
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.270 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.270 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.271 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.271 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.271 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.272 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.272 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.272 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.273 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.273 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.273 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.273 244018 DEBUG nova.virt.hardware [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
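[annotation] Flavor and image expressed no topology preference (all the 0:0:0 lines), so the search runs under the default 65536 caps and only has to cover one vCPU. A toy version of the enumeration that yields the single 1:1:1 topology logged above; nova.virt.hardware additionally sorts candidates against flavor/image preferences:

import itertools

def possible_topologies(vcpus, max_each=65536):
    # every (sockets, cores, threads) whose product equals the vCPU count
    for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
        if s * c * t == vcpus and max(s, c, t) <= max_each:
            yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]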
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.276 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 359 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.8 MiB/s wr, 490 op/s
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:24:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:24:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962880528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.897 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.919 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:31 compute-0 nova_compute[244014]: 2026-02-25 12:24:31.922 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
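[annotation] Each RBD touch above shells out to the ceph CLI, and oslo's processutils wrapper logs the exit code and wall time ("returned: 0 in 0.621s"). The same probe, timed the same way, assuming the ceph.conf and client.openstack keyring that nova uses here:

import subprocess
import time

cmd = ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
start = time.monotonic()
proc = subprocess.run(cmd, capture_output=True, text=True)
print('CMD "%s" returned: %d in %.3fs' %
      (" ".join(cmd), proc.returncode, time.monotonic() - start))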
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.000 244018 DEBUG nova.compute.manager [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-changed-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.001 244018 DEBUG nova.compute.manager [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Refreshing instance network info cache due to event network-changed-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.002 244018 DEBUG oslo_concurrency.lockutils [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.002 244018 DEBUG oslo_concurrency.lockutils [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.002 244018 DEBUG nova.network.neutron [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Refreshing network info cache for port f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:32 compute-0 ovn_controller[147040]: 2026-02-25T12:24:32Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:b8:f3 10.100.0.8
Feb 25 12:24:32 compute-0 ovn_controller[147040]: 2026-02-25T12:24:32Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:b8:f3 10.100.0.8
Feb 25 12:24:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3962880528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616767691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.456 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.458 244018 DEBUG nova.virt.libvirt.vif [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-899680496',display_name='tempest-ImagesTestJSON-server-899680496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-899680496',id=42,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-govupkud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:27Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=36c98024-f0f0-4516-ad48-e8ffa90c5058,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.459 244018 DEBUG nova.network.os_vif_util [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.460 244018 DEBUG nova.network.os_vif_util [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.462 244018 DEBUG nova.objects.instance [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36c98024-f0f0-4516-ad48-e8ffa90c5058 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.493 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <uuid>36c98024-f0f0-4516-ad48-e8ffa90c5058</uuid>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <name>instance-0000002a</name>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesTestJSON-server-899680496</nova:name>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:31</nova:creationTime>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <nova:port uuid="f24dea9b-1f59-4ca6-90f7-3f97ea1b6740">
Feb 25 12:24:32 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="serial">36c98024-f0f0-4516-ad48-e8ffa90c5058</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="uuid">36c98024-f0f0-4516-ad48-e8ffa90c5058</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/36c98024-f0f0-4516-ad48-e8ffa90c5058_disk">
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config">
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:32 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:09:47:8e"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <target dev="tapf24dea9b-1f"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/console.log" append="off"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:32 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:32 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:32 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:32 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:32 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
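[annotation] The domain XML above is the end product of _get_guest_xml: an RBD-backed root disk plus a config-drive CDROM, one OVS-bound tap, and a q35 machine padded with pcie-root-port controllers for hotplug. Pulling the Ceph disk back out of such a document is a few lines of ElementTree; the snippet trims the XML to just the elements the query touches:

import xml.etree.ElementTree as ET

xml = """<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/36c98024-f0f0-4516-ad48-e8ffa90c5058_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>"""

dom = ET.fromstring(xml)
for disk in dom.findall("./devices/disk[@device='disk']"):
    src = disk.find("source")
    print(src.get("protocol"), src.get("name"), disk.find("target").get("dev"))
# -> rbd vms/36c98024-f0f0-4516-ad48-e8ffa90c5058_disk vda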
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.502 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Preparing to wait for external event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.503 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.503 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.503 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.504 244018 DEBUG nova.virt.libvirt.vif [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-899680496',display_name='tempest-ImagesTestJSON-server-899680496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-899680496',id=42,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-govupkud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:27Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=36c98024-f0f0-4516-ad48-e8ffa90c5058,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.505 244018 DEBUG nova.network.os_vif_util [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.507 244018 DEBUG nova.network.os_vif_util [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.508 244018 DEBUG os_vif [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.510 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.511 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.516 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf24dea9b-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf24dea9b-1f, col_values=(('external_ids', {'iface-id': 'f24dea9b-1f59-4ca6-90f7-3f97ea1b6740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:47:8e', 'vm-uuid': '36c98024-f0f0-4516-ad48-e8ffa90c5058'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:32 compute-0 NetworkManager[49836]: <info>  [1772022272.5214] manager: (tapf24dea9b-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.528 244018 INFO os_vif [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f')
Feb 25 12:24:32 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.563 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.564 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
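
The Acquiring/acquired pair above is oslo.concurrency's named internal lock; nova keys it on the instance UUID so that build and terminate for the same instance cannot interleave within this process. A minimal sketch of the pattern (do_build is an illustrative placeholder):

    from oslo_concurrency import lockutils

    def build_and_run(instance_uuid):
        # Matches the "Acquiring lock ... / Lock ... acquired" lines above;
        # the lock name is the instance UUID.
        with lockutils.lock(instance_uuid):
            do_build(instance_uuid)  # illustrative placeholder
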
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.597 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.610 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.611 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.612 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:09:47:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.614 244018 INFO nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Using config drive
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.645 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.694 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.694 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.699 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.700 244018 INFO nova.compute.claims [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.810 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.811 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.811 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.811 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.811 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.812 244018 INFO nova.compute.manager [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Terminating instance
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.813 244018 DEBUG nova.compute.manager [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:32 compute-0 kernel: tapb98cf008-d4 (unregistering): left promiscuous mode
Feb 25 12:24:32 compute-0 NetworkManager[49836]: <info>  [1772022272.8466] device (tapb98cf008-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.858 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 ovn_controller[147040]: 2026-02-25T12:24:32Z|00353|binding|INFO|Releasing lport b98cf008-d49f-4577-b9e8-2ca8d67298cc from this chassis (sb_readonly=0)
Feb 25 12:24:32 compute-0 ovn_controller[147040]: 2026-02-25T12:24:32Z|00354|binding|INFO|Setting lport b98cf008-d49f-4577-b9e8-2ca8d67298cc down in Southbound
Feb 25 12:24:32 compute-0 ovn_controller[147040]: 2026-02-25T12:24:32Z|00355|binding|INFO|Removing iface tapb98cf008-d4 ovn-installed in OVS
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:32.868 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f6:38 10.100.0.14'], port_security=['fa:16:3e:f6:f6:38 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8249bea8-a1c0-42f5-a4d1-12b74e20bccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da46635e-b256-469a-9470-4d538a2beccb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddc1ab49b1f14fa1b483e02aa65e18b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c7defde8-e039-4217-befa-f29edcb63bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3956e8c4-15f2-4fcd-bec9-9698bba7bb8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b98cf008-d49f-4577-b9e8-2ca8d67298cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:32.870 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b98cf008-d49f-4577-b9e8-2ca8d67298cc in datapath da46635e-b256-469a-9470-4d538a2beccb unbound from our chassis
Feb 25 12:24:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:32.873 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da46635e-b256-469a-9470-4d538a2beccb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:32.877 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5b893514-7d9c-4293-b100-59fdc08a168e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:32.878 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da46635e-b256-469a-9470-4d538a2beccb namespace which is not needed anymore
Feb 25 12:24:32 compute-0 nova_compute[244014]: 2026-02-25 12:24:32.895 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
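
This `ceph df` run goes through oslo.concurrency's processutils, which also emits the matching `returned: 0 in 0.594s` line further down. A minimal sketch of the same call, assuming the ceph CLI and the openstack keyring are available:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # per-pool usage sits under 'pools'
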
Feb 25 12:24:32 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Feb 25 12:24:32 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 3.006s CPU time.
Feb 25 12:24:32 compute-0 systemd-machined[210048]: Machine qemu-46-instance-00000028 terminated.
Feb 25 12:24:33 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [NOTICE]   (281366) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:33 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [NOTICE]   (281366) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:33 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [WARNING]  (281366) : Exiting Master process...
Feb 25 12:24:33 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [ALERT]    (281366) : Current worker (281368) exited with code 143 (Terminated)
Feb 25 12:24:33 compute-0 neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb[281362]: [WARNING]  (281366) : All workers exited. Exiting... (0)
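
The ALERT above is expected during teardown, not a crash: 143 is the usual 128 + signal-number encoding, so the haproxy worker simply received SIGTERM (15) when its container was stopped. A one-line check:

    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM == 15
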
Feb 25 12:24:33 compute-0 systemd[1]: libpod-c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3.scope: Deactivated successfully.
Feb 25 12:24:33 compute-0 conmon[281362]: conmon c6574c7f89705b3abeec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3.scope/container/memory.events
Feb 25 12:24:33 compute-0 podman[281485]: 2026-02-25 12:24:33.012078028 +0000 UTC m=+0.055708958 container died c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ddcf25511ea60eb4a8123708b566c58ec8603d9be04f81d4f271b91b0c6cd00-merged.mount: Deactivated successfully.
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.045 244018 INFO nova.virt.libvirt.driver [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Instance destroyed successfully.
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.046 244018 DEBUG nova.objects.instance [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lazy-loading 'resources' on Instance uuid 8249bea8-a1c0-42f5-a4d1-12b74e20bccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:33 compute-0 podman[281485]: 2026-02-25 12:24:33.047453889 +0000 UTC m=+0.091084829 container cleanup c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:24:33 compute-0 systemd[1]: libpod-conmon-c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3.scope: Deactivated successfully.
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.075 244018 DEBUG nova.virt.libvirt.vif [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-903296548',display_name='tempest-InstanceActionsTestJSON-server-903296548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-903296548',id=40,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddc1ab49b1f14fa1b483e02aa65e18b8',ramdisk_id='',reservation_id='r-jtdvjz9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1092309800',owner_user_name='tempest-InstanceActionsTestJSON-1092309800-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:30Z,user_data=None,user_id='d9c37980bc44499d879a667da762ceb2',uuid=8249bea8-a1c0-42f5-a4d1-12b74e20bccd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.076 244018 DEBUG nova.network.os_vif_util [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converting VIF {"id": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "address": "fa:16:3e:f6:f6:38", "network": {"id": "da46635e-b256-469a-9470-4d538a2beccb", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2084409924-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddc1ab49b1f14fa1b483e02aa65e18b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb98cf008-d4", "ovs_interfaceid": "b98cf008-d49f-4577-b9e8-2ca8d67298cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.077 244018 DEBUG nova.network.os_vif_util [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.077 244018 DEBUG os_vif [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.080 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb98cf008-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.083 244018 INFO nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Creating config drive at /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.089 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf0trmc2j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.117 244018 INFO os_vif [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f6:38,bridge_name='br-int',has_traffic_filtering=True,id=b98cf008-d49f-4577-b9e8-2ca8d67298cc,network=Network(da46635e-b256-469a-9470-4d538a2beccb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb98cf008-d4')
Feb 25 12:24:33 compute-0 podman[281543]: 2026-02-25 12:24:33.122057041 +0000 UTC m=+0.055035309 container remove c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd050486-47a7-4c3d-b62b-2c97adb7d156]: (4, ('Wed Feb 25 12:24:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb (c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3)\nc6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3\nWed Feb 25 12:24:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-da46635e-b256-469a-9470-4d538a2beccb (c6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3)\nc6574c7f89705b3abeeca37f9871c86f27673c21937332b1ce6152a2cdcfdea3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f15273ea-6d9b-4281-a06a-81b92443864f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.128 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda46635e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:33 compute-0 kernel: tapda46635e-b0: left promiscuous mode
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6814c6f-4d56-4f78-a784-00e46a76bf30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb97acb5-5f7a-4143-90e8-1e8b2b5b1ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51ca121e-15b2-4eb8-a727-d4d2daa2fed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2779c583-83bf-436a-a568-a8e5f28d7fce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423906, 'reachable_time': 22162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281581, 'error': None, 'target': 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 systemd[1]: run-netns-ovnmeta\x2dda46635e\x2db256\x2d469a\x2d9470\x2d4d538a2beccb.mount: Deactivated successfully.
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.168 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da46635e-b256-469a-9470-4d538a2beccb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
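
remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2's network-namespace helpers. A minimal sketch of the same teardown, using the namespace name from the log:

    from pyroute2 import netns

    ns = 'ovnmeta-da46635e-b256-469a-9470-4d538a2beccb'
    if ns in netns.listnetns():  # make the delete idempotent
        netns.remove(ns)
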
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.168 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[48e91a4e-2c5b-4ef8-8fbf-793979be6b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.221 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf0trmc2j" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:33 compute-0 ceph-mon[76335]: pgmap v1212: 305 pgs: 305 active+clean; 359 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.8 MiB/s wr, 490 op/s
Feb 25 12:24:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2616767691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.242 244018 DEBUG nova.storage.rbd_utils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
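
The "does not exist" message is nova probing the image through the rbd python bindings before shelling out to `rbd import` on the next line. A minimal sketch of that existence probe (pool, client id and image name from the log; connection handling simplified):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config'):
            exists = True
    except rbd.ImageNotFound:
        exists = False  # nova then falls back to the "rbd import" logged below
    finally:
        ioctx.close()
        cluster.shutdown()
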
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.245 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:ac:35 10.100.0.10
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:ac:35 10.100.0.10
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.369 244018 INFO nova.virt.libvirt.driver [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Deleting instance files /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_del
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.370 244018 INFO nova.virt.libvirt.driver [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Deletion of /var/lib/nova/instances/8249bea8-a1c0-42f5-a4d1-12b74e20bccd_del complete
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.373 244018 DEBUG oslo_concurrency.processutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config 36c98024-f0f0-4516-ad48-e8ffa90c5058_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.374 244018 INFO nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Deleting local config drive /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058/disk.config because it was imported into RBD.
Feb 25 12:24:33 compute-0 kernel: tapf24dea9b-1f: entered promiscuous mode
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.4129] manager: (tapf24dea9b-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Feb 25 12:24:33 compute-0 systemd-udevd[281463]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00356|binding|INFO|Claiming lport f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 for this chassis.
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00357|binding|INFO|f24dea9b-1f59-4ca6-90f7-3f97ea1b6740: Claiming fa:16:3e:09:47:8e 10.100.0.4
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00358|binding|INFO|Setting lport f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 ovn-installed in OVS
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00359|binding|INFO|Setting lport f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 up in Southbound
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.420 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:47:8e 10.100.0.4'], port_security=['fa:16:3e:09:47:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '36c98024-f0f0-4516-ad48-e8ffa90c5058', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.422 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis
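
The Matched UPDATE line above is an ovsdbapp RowEvent firing on the Southbound Port_Binding table; the metadata agent uses the chassis column to decide whether a port became local. A minimal sketch of such an event class (the chassis filter and handler body are illustrative, not the agent's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.chassis_name = chassis_name

        def match_fn(self, event, row, old):
            # Fire only when the binding lands on our chassis.
            return bool(row.chassis) and row.chassis[0].name == self.chassis_name

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)
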
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.4264] device (tapf24dea9b-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.4272] device (tapf24dea9b-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.428 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.433 244018 INFO nova.compute.manager [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.434 244018 DEBUG oslo.service.loopingcall [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.434 244018 DEBUG nova.compute.manager [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.435 244018 DEBUG nova.network.neutron [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.437 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8caf3b24-214a-447c-9193-91fdbcca12c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.438 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.440 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1741a73-4d44-47aa-84b8-2f17a831bb87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.441 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfdd468-dadd-47f5-9a00-c709ce8fb20b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
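
The probes above returned False (neither end of the pair exists yet), so the agent now creates the veth announced in the "Creating VETH" line: tap6a1663dd-20 stays on the host, tap6a1663dd-21 is moved into the ovnmeta namespace. A minimal pyroute2 sketch of that wiring, assuming the namespace already exists:

    from pyroute2 import IPRoute

    ns = 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21'
    with IPRoute() as ip:
        ip.link('add', ifname='tap6a1663dd-20', kind='veth',
                peer='tap6a1663dd-21')  # names as in the log
        inner = ip.link_lookup(ifname='tap6a1663dd-21')[0]
        ip.link('set', index=inner, net_ns_fd=ns)  # move inner end into netns
        outer = ip.link_lookup(ifname='tap6a1663dd-20')[0]
        ip.link('set', index=outer, state='up')
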
Feb 25 12:24:33 compute-0 systemd-machined[210048]: New machine qemu-47-instance-0000002a.
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.449 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[431c6135-e397-49c2-bd06-6a61c8663f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.460 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35d28258-0c6f-4919-b9d4-625d574b4e4b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365605391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.482 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a48c8d-fb7b-4ad4-aac3-6d83b18a084e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.4867] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/162)
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9a832d-85f2-46de-99e7-0936aa98f5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.488 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.493 244018 DEBUG nova.compute.provider_tree [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.514 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b76dc9a1-562a-4323-b813-fb9afb021f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.515 244018 DEBUG nova.scheduler.client.report [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
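
The inventory dict above feeds placement's capacity formula, usable = (total - reserved) * allocation_ratio, per resource class. Worked through with the logged values:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
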
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.516 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e4081f-3b81-4c2d-b3f6-3cabe7e1800d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.5325] device (tap6a1663dd-20): carrier: link connected
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.539 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[747f8aba-47d5-46e4-8b44-ce3ac3e222ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.551 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.552 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.554 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd53d98-f7f8-4484-b446-f6f1d85bd1bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424294, 'reachable_time': 24045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281666, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19b98bc2-0823-4366-ac8b-509961f4a1ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424294, 'tstamp': 424294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281667, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.582 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a78091bd-c40a-48fb-8729-d9654561ad3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424294, 'reachable_time': 24045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281668, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
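The two privsep replies above are raw netlink messages (RTM_NEWADDR, then RTM_NEWLINK) fetched from inside the ovnmeta- namespace; the IFLA_*/IFA_* attribute lists are the format produced by the pyroute2 library. A minimal sketch of reading the same link attributes directly with pyroute2, assuming it is installed on the host (the namespace name is taken from the log):

    from pyroute2 import NetNS

    # Open a netlink socket inside the OVN metadata namespace seen above.
    with NetNS('ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21') as ns:
        for link in ns.get_links():
            # Each message carries the same 'attrs' list as the logged reply.
            name = link.get_attr('IFLA_IFNAME')      # e.g. 'tap6a1663dd-21'
            mac = link.get_attr('IFLA_ADDRESS')      # e.g. 'fa:16:3e:47:0c:b3'
            state = link.get_attr('IFLA_OPERSTATE')  # 'UP' once carrier is up
            print(name, mac, state)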
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.600 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.601 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf2332f-2c42-4a6e-b0f6-176875bf8a90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.623 244018 INFO nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.640 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd07305-93ce-4a76-b567-1df22c65b9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.651 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.651 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.652 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 NetworkManager[49836]: <info>  [1772022273.6549] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Feb 25 12:24:33 compute-0 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.657 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
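The three ovsdbapp commands above move the metadata tap off br-ex, plug it into br-int, and tag the Interface row with its Neutron iface-id. A rough equivalent using ovsdbapp's Open_vSwitch schema API; the command names and arguments mirror the log, while the ovsdb socket path and connection setup are assumptions (deployments vary):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap6a1663dd-20', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap6a1663dd-20', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap6a1663dd-20',
                           ('external_ids',
                            {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'})))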
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.659 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
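The ENOENT debug message above is expected on the first start of a metadata proxy: the agent probes the pid file before deciding whether an haproxy already serves this network. A hedged sketch of that probe pattern (the real helper is neutron.agent.linux.utils.get_value_from_file; this body is illustrative, not the project's code):

    def get_value_from_file(path, converter=None):
        # Returns the file's contents, or None if it does not exist yet,
        # which is what produces the "Unable to access ..." DEBUG line above.
        try:
            with open(path) as f:
                data = f.read().strip()
            return converter(data) if converter else data
        except OSError as err:
            print('Unable to access %s; Error: %s' % (path, err))
            return None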
Feb 25 12:24:33 compute-0 ovn_controller[147040]: 2026-02-25T12:24:33Z|00360|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48802e29-503c-48c1-83be-96b2b9010ce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.660 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:24:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:33.660 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
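That create_process call boils down to a single rootwrap-escalated command line: run haproxy against the config just rendered, inside the network namespace, tagged for later lookup. A subprocess sketch reproducing it; every path, tag, and UUID is copied verbatim from the log line above:

    import subprocess

    # Launch haproxy inside the metadata namespace, exactly as logged.
    subprocess.check_call([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21',
        'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf',
    ])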
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.752 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.754 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.754 244018 INFO nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Creating image(s)
Feb 25 12:24:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 382 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.0 MiB/s wr, 447 op/s
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.777 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.806 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.829 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.832 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
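Nova caps this qemu-img probe with oslo_concurrency's prlimit wrapper (1 GiB of address space, 30 s of CPU) so a malformed image cannot wedge the compute service, then parses the JSON output. A sketch mirroring the logged command; 'format' and 'virtual-size' are standard keys in qemu-img's JSON output:

    import json
    import subprocess

    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])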
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.856 244018 DEBUG nova.network.neutron [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Updated VIF entry in instance network info cache for port f24dea9b-1f59-4ca6-90f7-3f97ea1b6740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.857 244018 DEBUG nova.network.neutron [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Updating instance_info_cache with network_info: [{"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.862 244018 DEBUG nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.862 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.862 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.862 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.863 244018 DEBUG nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.863 244018 WARNING nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state active and task_state deleting.
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.863 244018 DEBUG nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.863 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.864 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.864 244018 DEBUG oslo_concurrency.lockutils [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.864 244018 DEBUG nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.864 244018 WARNING nova.compute.manager [req-32430955-ed5b-4bd7-82de-2852c9ca72ed req-60dda2e4-9e78-42ae-af32-788220eb7cc4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state active and task_state deleting.
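The acquire/release triplets above are oslo_concurrency's lockutils at work: each instance gets an '<uuid>-events' named lock that serializes mutation of its pending-event queue, which is why every pop is bracketed by an "acquired"/"released" pair held for ~0.000s. A minimal sketch of the same pattern (the event store and its contents are illustrative, not nova's actual structures):

    from oslo_concurrency import lockutils

    _events = {}  # illustrative per-instance event store

    @lockutils.synchronized('8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events')
    def _pop_event(name):
        # Runs under the named lock, matching the acquired/released pairs
        # logged above; returns None when no waiter was registered, which
        # leads to the "No waiting events found" / unexpected-event path.
        return _events.pop(name, None)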
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.873 244018 DEBUG oslo_concurrency.lockutils [req-f616c2c9-42d2-4153-921b-7946e807c92b req-f1bae59d-740e-4b78-9b60-ceb50f0ac4d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-36c98024-f0f0-4516-ad48-e8ffa90c5058" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.910 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.911 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.933 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.936 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.955 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022273.9158964, 36c98024-f0f0-4516-ad48-e8ffa90c5058 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.955 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] VM Started (Lifecycle Event)
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.959 244018 DEBUG nova.policy [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
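The failed policy check above is nova.policy deciding whether this member/reader token may attach to an external network; underneath it is an oslo.policy Enforcer call. A hedged sketch (the enforcer setup is heavily simplified relative to nova's, and only the credential values are copied from the log):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # simplified; nova preloads its rules
    creds = {'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b',
             'project_id': 'daab2f813dbd467685c22833bf875ec9',
             'roles': ['reader', 'member'], 'is_admin': False}
    # Evaluates to False for these credentials, producing the DEBUG line above.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds)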
Feb 25 12:24:33 compute-0 podman[281799]: 2026-02-25 12:24:33.967472757 +0000 UTC m=+0.047663820 container create cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.982 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.987 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022273.9160016, 36c98024-f0f0-4516-ad48-e8ffa90c5058 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:33 compute-0 nova_compute[244014]: 2026-02-25 12:24:33.987 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] VM Paused (Lifecycle Event)
Feb 25 12:24:33 compute-0 systemd[1]: Started libpod-conmon-cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21.scope.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.007 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.010 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed715b3adca6564ed54107ca6d9cf86e89a58a95c8bc012d225f6a6830e865/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:34 compute-0 podman[281799]: 2026-02-25 12:24:34.032612321 +0000 UTC m=+0.112803384 container init cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.034 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:34 compute-0 podman[281799]: 2026-02-25 12:24:34.040208405 +0000 UTC m=+0.120399468 container start cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:24:34 compute-0 podman[281799]: 2026-02-25 12:24:33.945582057 +0000 UTC m=+0.025773200 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:34 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [NOTICE]   (281855) : New worker (281857) forked
Feb 25 12:24:34 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [NOTICE]   (281855) : Loading success.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.181 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.216 244018 DEBUG nova.compute.manager [req-69fc8145-a4ce-435a-be59-5f67eac690e3 req-b3f03c99-fe48-4699-a3e5-7716d1feed2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.216 244018 DEBUG oslo_concurrency.lockutils [req-69fc8145-a4ce-435a-be59-5f67eac690e3 req-b3f03c99-fe48-4699-a3e5-7716d1feed2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.217 244018 DEBUG oslo_concurrency.lockutils [req-69fc8145-a4ce-435a-be59-5f67eac690e3 req-b3f03c99-fe48-4699-a3e5-7716d1feed2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.217 244018 DEBUG oslo_concurrency.lockutils [req-69fc8145-a4ce-435a-be59-5f67eac690e3 req-b3f03c99-fe48-4699-a3e5-7716d1feed2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.217 244018 DEBUG nova.compute.manager [req-69fc8145-a4ce-435a-be59-5f67eac690e3 req-b3f03c99-fe48-4699-a3e5-7716d1feed2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Processing event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.218 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/365605391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.263 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022274.2221699, 36c98024-f0f0-4516-ad48-e8ffa90c5058 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.263 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] VM Resumed (Lifecycle Event)
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.267 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.272 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
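nova.storage.rbd_utils performs that resize through the python rbd bindings rather than the CLI. A minimal sketch under the same client id, conf file, and pool as logged; error handling is omitted and the structure is illustrative:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk') as image:
                image.resize(1073741824)  # grow to the 1 GiB size from the log
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()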
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.306 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.307 244018 DEBUG nova.network.neutron [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.309 244018 INFO nova.virt.libvirt.driver [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Instance spawned successfully.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.309 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.312 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
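The numeric states in the two sync messages decode via nova.compute.power_state: the hypervisor reported PAUSED (3) and then RUNNING (1) while the database still held NOSTATE (0), which is normal while the spawn task is in flight:

    from nova.compute import power_state

    # Constants consulted by handle_lifecycle_event above.
    assert power_state.NOSTATE == 0   # DB value before the first sync
    assert power_state.RUNNING == 1   # after the 'Resumed' lifecycle event
    assert power_state.PAUSED == 3    # during the 'Paused' phase of spawn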
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.361 244018 INFO nova.compute.manager [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Took 0.93 seconds to deallocate network for instance.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.362 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.369 244018 DEBUG nova.objects.instance [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid c9d55f33-ecc8-4b0b-a335-8052fcdfaecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.373 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.373 244018 DEBUG nova.virt.libvirt.driver [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.410 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.410 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Ensure instance console log exists: /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.411 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.411 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.411 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.441 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.442 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.460 244018 INFO nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Took 7.03 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.461 244018 DEBUG nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.546 244018 INFO nova.compute.manager [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Took 9.00 seconds to build instance.
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.580 244018 DEBUG oslo_concurrency.lockutils [None req-7d2c40ad-b744-45ac-b5ea-205ebd1ca202 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.660 244018 DEBUG oslo_concurrency.processutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
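That 'ceph df' probe is how the resource tracker refreshes pool capacity during update_usage. The same call through oslo_concurrency's processutils, which is the module nova invokes it with here; the JSON keys shown are standard 'ceph df --format=json' output:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])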
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.804 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Successfully created port: d333d3cb-5840-49b2-a761-04da902fae80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.827 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.827 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.827 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.827 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.827 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.828 244018 INFO nova.compute.manager [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Terminating instance
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.829 244018 DEBUG nova.compute.manager [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:34 compute-0 kernel: tap7d80048f-ed (unregistering): left promiscuous mode
Feb 25 12:24:34 compute-0 NetworkManager[49836]: <info>  [1772022274.8732] device (tap7d80048f-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00361|binding|INFO|Releasing lport 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa from this chassis (sb_readonly=0)
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00362|binding|INFO|Setting lport 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa down in Southbound
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00363|binding|INFO|Removing iface tap7d80048f-ed ovn-installed in OVS
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.889 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:ac:35 10.100.0.10'], port_security=['fa:16:3e:70:ac:35 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42abb4b3-2144-4c11-a04c-7641d725bcde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.891 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 unbound from our chassis
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.893 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a87ddd03-d435-46c4-8efc-50bb38492ac4
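The matched UPDATE above comes from an ovsdbapp RowEvent watching the Southbound Port_Binding table: when 'up' flips to False and the chassis column clears, the agent treats the port as unbound and re-evaluates metadata provisioning for the datapath. A rough sketch of such an event class (the real handler lives in neutron.agent.ovn.metadata.agent; the run body here is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Same constructor arguments as shown in the "matches" log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds the prior values of changed columns,
            # e.g. up=[True] before the port went down.
            print('lport %s updated (up=%s)' % (row.logical_port, row.up))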
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.904 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77be186a-65cf-4929-85a5-60b55e7e8606]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.911 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.914 244018 INFO nova.compute.manager [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Terminating instance
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.915 244018 DEBUG nova.compute.manager [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:34 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Feb 25 12:24:34 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 11.633s CPU time.
Feb 25 12:24:34 compute-0 systemd-machined[210048]: Machine qemu-42-instance-00000026 terminated.
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.932 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae50a8e-3b20-45ad-82fa-0e6009587ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.935 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3584bb-8656-4d8a-ae2b-fce4bb1351ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:34 compute-0 kernel: tap1e1fa674-42 (unregistering): left promiscuous mode
Feb 25 12:24:34 compute-0 NetworkManager[49836]: <info>  [1772022274.9614] device (tap1e1fa674-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00364|binding|INFO|Releasing lport 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 from this chassis (sb_readonly=0)
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00365|binding|INFO|Setting lport 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 down in Southbound
Feb 25 12:24:34 compute-0 ovn_controller[147040]: 2026-02-25T12:24:34Z|00366|binding|INFO|Removing iface tap1e1fa674-42 ovn-installed in OVS
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.971 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9370dadf-7402-4d56-976e-7d92d1d8b9ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:34 compute-0 nova_compute[244014]: 2026-02-25 12:24:34.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b38ddd20-f7a9-4ac1-ae0b-f7df00c30f6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa87ddd03-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:62:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422552, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281972, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:34.992 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:b8:f3 10.100.0.8'], port_security=['fa:16:3e:29:b8:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19e66139-1e99-432c-bbd6-fc3a6d3e1b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd255f122166b45af8d4d83610929aaea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eac83cd3-8631-44c7-aec4-5978a4fbcffa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd7410d4-8e67-4686-bf02-8378887b01c1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e229063-918a-45df-813b-544c4e488d9d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422562, 'tstamp': 422562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281973, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa87ddd03-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422565, 'tstamp': 422565}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281973, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Feb 25 12:24:35 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 11.854s CPU time.
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.011 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 systemd-machined[210048]: Machine qemu-43-instance-00000027 terminated.
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.018 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa87ddd03-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.018 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.019 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa87ddd03-d0, col_values=(('external_ids', {'iface-id': 'ad147878-63a9-4ab7-9f47-10bbd9941f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.019 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.020 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 in datapath a87ddd03-d435-46c4-8efc-50bb38492ac4 unbound from our chassis
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.021 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a87ddd03-d435-46c4-8efc-50bb38492ac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15edebec-c7f8-4979-bba2-2c310286466f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.022 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4 namespace which is not needed anymore
Feb 25 12:24:35 compute-0 NetworkManager[49836]: <info>  [1772022275.0478] manager: (tap7d80048f-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.076 244018 INFO nova.virt.libvirt.driver [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Instance destroyed successfully.
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.076 244018 DEBUG nova.objects.instance [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'resources' on Instance uuid 42abb4b3-2144-4c11-a04c-7641d725bcde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.095 244018 DEBUG nova.virt.libvirt.vif [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-2',id=38,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-25T12:24:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:20Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=42abb4b3-2144-4c11-a04c-7641d725bcde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.096 244018 DEBUG nova.network.os_vif_util [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "address": "fa:16:3e:70:ac:35", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d80048f-ed", "ovs_interfaceid": "7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.097 244018 DEBUG nova.network.os_vif_util [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.098 244018 DEBUG os_vif [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.100 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d80048f-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.108 244018 INFO os_vif [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ac:35,bridge_name='br-int',has_traffic_filtering=True,id=7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d80048f-ed')
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.142 244018 INFO nova.virt.libvirt.driver [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Instance destroyed successfully.
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.142 244018 DEBUG nova.objects.instance [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lazy-loading 'resources' on Instance uuid 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.157 244018 DEBUG nova.virt.libvirt.vif [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-23727972',display_name='tempest-ListServersNegativeTestJSON-server-23727972-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-23727972-3',id=39,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-02-25T12:24:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d255f122166b45af8d4d83610929aaea',ramdisk_id='',reservation_id='r-hbnkbyp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-266593939',owner_user_name='tempest-ListServersNegativeTestJSON-266593939-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:20Z,user_data=None,user_id='2ab8469e67b142bab26cdd996097e148',uuid=19e66139-1e99-432c-bbd6-fc3a6d3e1b1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.157 244018 DEBUG nova.network.os_vif_util [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converting VIF {"id": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "address": "fa:16:3e:29:b8:f3", "network": {"id": "a87ddd03-d435-46c4-8efc-50bb38492ac4", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1597195212-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d255f122166b45af8d4d83610929aaea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e1fa674-42", "ovs_interfaceid": "1e1fa674-42f4-483b-8576-2dd9c4c8ffc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.159 244018 DEBUG nova.network.os_vif_util [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.159 244018 DEBUG os_vif [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.161 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e1fa674-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.167 244018 INFO os_vif [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:b8:f3,bridge_name='br-int',has_traffic_filtering=True,id=1e1fa674-42f4-483b-8576-2dd9c4c8ffc0,network=Network(a87ddd03-d435-46c4-8efc-50bb38492ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e1fa674-42')
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [NOTICE]   (279774) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [NOTICE]   (279774) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [WARNING]  (279774) : Exiting Master process...
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [WARNING]  (279774) : Exiting Master process...
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [ALERT]    (279774) : Current worker (279779) exited with code 143 (Terminated)
Feb 25 12:24:35 compute-0 neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4[279767]: [WARNING]  (279774) : All workers exited. Exiting... (0)
Feb 25 12:24:35 compute-0 systemd[1]: libpod-9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4.scope: Deactivated successfully.
Feb 25 12:24:35 compute-0 podman[282006]: 2026-02-25 12:24:35.192346942 +0000 UTC m=+0.068858500 container died 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f76f8a3d92aee60d4d1619494090c8ef810355452f7426bf0426b95f85bdb2b8-merged.mount: Deactivated successfully.
Feb 25 12:24:35 compute-0 podman[282006]: 2026-02-25 12:24:35.243757217 +0000 UTC m=+0.120268765 container cleanup 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:24:35 compute-0 systemd[1]: libpod-conmon-9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4.scope: Deactivated successfully.
Feb 25 12:24:35 compute-0 ceph-mon[76335]: pgmap v1213: 305 pgs: 305 active+clean; 382 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.0 MiB/s wr, 447 op/s
Feb 25 12:24:35 compute-0 podman[282079]: 2026-02-25 12:24:35.311151474 +0000 UTC m=+0.048648758 container remove 9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.319 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c20e09e8-1f01-4c36-b78f-20f3f74d0082]: (4, ('Wed Feb 25 12:24:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4 (9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4)\n9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4\nWed Feb 25 12:24:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4 (9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4)\n9f2a041127b8240c71d7ba6982b339f083a2de1cd63dac067af31eb2fafdd7b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8465e6a5-971c-437f-baad-3abcc86574b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.323 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa87ddd03-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.325 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 kernel: tapa87ddd03-d0: left promiscuous mode
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[567000c6-add8-4160-9e7d-f3cdb7dc7fdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627686520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.352 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e3ed7-1eef-43b3-a880-e0fcb4eb1a98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dca63ec5-f936-43e0-a099-686b98a7e691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.367 244018 DEBUG oslo_concurrency.processutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a9f244-312c-41e5-9ab4-bb34aeea466a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422546, 'reachable_time': 33526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282096, 'error': None, 'target': 'ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 systemd[1]: run-netns-ovnmeta\x2da87ddd03\x2dd435\x2d46c4\x2d8efc\x2d50bb38492ac4.mount: Deactivated successfully.
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.372 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a87ddd03-d435-46c4-8efc-50bb38492ac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:35.372 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[546a4229-ac93-4fa5-be75-74df498bf9fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.374 244018 DEBUG nova.compute.provider_tree [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.396 244018 DEBUG nova.scheduler.client.report [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.433 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.467 244018 INFO nova.scheduler.client.report [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Deleted allocations for instance 8249bea8-a1c0-42f5-a4d1-12b74e20bccd
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.489 244018 INFO nova.virt.libvirt.driver [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Deleting instance files /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde_del
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.490 244018 INFO nova.virt.libvirt.driver [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Deletion of /var/lib/nova/instances/42abb4b3-2144-4c11-a04c-7641d725bcde_del complete
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.511 244018 INFO nova.virt.libvirt.driver [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Deleting instance files /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_del
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.511 244018 INFO nova.virt.libvirt.driver [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Deletion of /var/lib/nova/instances/19e66139-1e99-432c-bbd6-fc3a6d3e1b1d_del complete
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.628 244018 INFO nova.compute.manager [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Took 0.80 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.629 244018 DEBUG oslo.service.loopingcall [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.630 244018 DEBUG nova.compute.manager [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.631 244018 DEBUG nova.network.neutron [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.647 244018 DEBUG oslo_concurrency.lockutils [None req-5e1f38c2-4c4a-4503-a672-5416daf04bc7 d9c37980bc44499d879a667da762ceb2 ddc1ab49b1f14fa1b483e02aa65e18b8 - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.650 244018 INFO nova.compute.manager [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Took 0.73 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.650 244018 DEBUG oslo.service.loopingcall [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.651 244018 DEBUG nova.compute.manager [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.651 244018 DEBUG nova.network.neutron [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 382 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.0 MiB/s wr, 447 op/s
Feb 25 12:24:35 compute-0 nova_compute[244014]: 2026-02-25 12:24:35.981 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Successfully updated port: d333d3cb-5840-49b2-a761-04da902fae80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.008 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.008 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.009 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.106 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.107 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.108 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.108 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.109 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.109 244018 WARNING nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-unplugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state deleted and task_state None.
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.110 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.110 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.111 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.111 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8249bea8-a1c0-42f5-a4d1-12b74e20bccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.112 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] No waiting events found dispatching network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.112 244018 WARNING nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received unexpected event network-vif-plugged-b98cf008-d49f-4577-b9e8-2ca8d67298cc for instance with vm_state deleted and task_state None.
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.113 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Received event network-vif-deleted-b98cf008-d49f-4577-b9e8-2ca8d67298cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.113 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-unplugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.114 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.114 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.115 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.115 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] No waiting events found dispatching network-vif-unplugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.116 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-unplugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.116 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.117 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.117 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.118 244018 DEBUG oslo_concurrency.lockutils [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.118 244018 DEBUG nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] No waiting events found dispatching network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.119 244018 WARNING nova.compute.manager [req-7218fe83-570e-4a2d-97e7-f0752e7b53d6 req-4f511a75-23f9-41f1-a555-f025f2ca0e80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received unexpected event network-vif-plugged-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa for instance with vm_state active and task_state deleting.
Feb 25 12:24:36 compute-0 sudo[282098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:24:36 compute-0 sudo[282098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:36 compute-0 sudo[282098]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.220 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:36 compute-0 sudo[282123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:24:36 compute-0 sudo[282123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2627686520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.269 244018 DEBUG nova.compute.manager [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.320 244018 INFO nova.compute.manager [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] instance snapshotting
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.339 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.339 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.340 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.341 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.341 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] No waiting events found dispatching network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.342 244018 WARNING nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received unexpected event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 for instance with vm_state active and task_state image_snapshot.
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.342 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-unplugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.343 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.344 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.344 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.345 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] No waiting events found dispatching network-vif-unplugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.345 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-unplugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.346 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.347 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.347 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.348 244018 DEBUG oslo_concurrency.lockutils [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.348 244018 DEBUG nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] No waiting events found dispatching network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.349 244018 WARNING nova.compute.manager [req-22fa2133-ba6d-4c9a-913d-4c0f2634674b req-9b1a0625-928d-4655-a24b-a2368fa2aac4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received unexpected event network-vif-plugged-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 for instance with vm_state active and task_state deleting.
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.567 244018 INFO nova.virt.libvirt.driver [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Beginning live snapshot process
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.757 244018 DEBUG nova.virt.libvirt.imagebackend [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.765 244018 DEBUG nova.network.neutron [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.768 244018 DEBUG nova.network.neutron [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.791 244018 INFO nova.compute.manager [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Took 1.16 seconds to deallocate network for instance.
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.791 244018 INFO nova.compute.manager [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Took 1.14 seconds to deallocate network for instance.
Feb 25 12:24:36 compute-0 sudo[282123]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.864 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.865 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:36 compute-0 nova_compute[244014]: 2026-02-25 12:24:36.867 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:24:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:24:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:24:36 compute-0 sudo[282213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:24:36 compute-0 sudo[282213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:36 compute-0 sudo[282213]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.008 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022262.0069196, fb09cc9c-e6a9-4718-bb97-0df5558cb091 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.009 244018 INFO nova.compute.manager [-] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] VM Stopped (Lifecycle Event)
Feb 25 12:24:37 compute-0 sudo[282238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:24:37 compute-0 sudo[282238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.030 244018 DEBUG nova.compute.manager [None req-ce847d4c-1c6d-4d04-9ec0-38d869e3453a - - - - - -] [instance: fb09cc9c-e6a9-4718-bb97-0df5558cb091] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.037 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(7ff9fed6cce44dc19c7aeb0f652ad6e2) on rbd image(36c98024-f0f0-4516-ad48-e8ffa90c5058_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.107 244018 DEBUG oslo_concurrency.processutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.139 244018 DEBUG nova.network.neutron [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Updating instance_info_cache with network_info: [{"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.168 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.168 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Instance network_info: |[{"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.173 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Start _get_guest_xml network_info=[{"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.180 244018 WARNING nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.185 244018 DEBUG nova.virt.libvirt.host [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.186 244018 DEBUG nova.virt.libvirt.host [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.189 244018 DEBUG nova.virt.libvirt.host [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.190 244018 DEBUG nova.virt.libvirt.host [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.193 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.193 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.194 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.194 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.195 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.195 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.195 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.196 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.196 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.198 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.198 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.198 244018 DEBUG nova.virt.hardware [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.202 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:37 compute-0 ceph-mon[76335]: pgmap v1214: 305 pgs: 305 active+clean; 382 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.0 MiB/s wr, 447 op/s
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:24:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.350558802 +0000 UTC m=+0.052102415 container create ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:24:37 compute-0 systemd[1]: Started libpod-conmon-ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972.scope.
Feb 25 12:24:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.330215387 +0000 UTC m=+0.031759090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.432050629 +0000 UTC m=+0.133594312 container init ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.440274541 +0000 UTC m=+0.141818194 container start ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.44517879 +0000 UTC m=+0.146722443 container attach ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:24:37 compute-0 pensive_gagarin[282349]: 167 167
Feb 25 12:24:37 compute-0 systemd[1]: libpod-ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972.scope: Deactivated successfully.
Feb 25 12:24:37 compute-0 conmon[282349]: conmon ed83a20ecf1b25ebeb00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972.scope/container/memory.events
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.4476468 +0000 UTC m=+0.149190443 container died ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c59b1108cee51a2d53ab885c9ff374b78a6a4477734027700a0af9597971f3e-merged.mount: Deactivated successfully.
Feb 25 12:24:37 compute-0 podman[282314]: 2026-02-25 12:24:37.49144945 +0000 UTC m=+0.192993103 container remove ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_gagarin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:24:37 compute-0 systemd[1]: libpod-conmon-ed83a20ecf1b25ebeb006db29a839620cd23df0702b82bb9e81ee59f45fd2972.scope: Deactivated successfully.
Feb 25 12:24:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765501651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:37 compute-0 podman[282372]: 2026-02-25 12:24:37.684126583 +0000 UTC m=+0.058059764 container create 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.703 244018 DEBUG oslo_concurrency.processutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.714 244018 DEBUG nova.compute.provider_tree [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35254800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:37 compute-0 systemd[1]: Started libpod-conmon-68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531.scope.
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.739 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:37 compute-0 podman[282372]: 2026-02-25 12:24:37.657258012 +0000 UTC m=+0.031191043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 246 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 491 op/s
Feb 25 12:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.781 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.786 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:37 compute-0 podman[282372]: 2026-02-25 12:24:37.804673684 +0000 UTC m=+0.178606725 container init 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:24:37 compute-0 podman[282372]: 2026-02-25 12:24:37.811525908 +0000 UTC m=+0.185458949 container start 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.814 244018 DEBUG nova.scheduler.client.report [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:37 compute-0 podman[282372]: 2026-02-25 12:24:37.849843392 +0000 UTC m=+0.223776413 container attach 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:24:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb 25 12:24:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb 25 12:24:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.930 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.935 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.951 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/36c98024-f0f0-4516-ad48-e8ffa90c5058_disk@7ff9fed6cce44dc19c7aeb0f652ad6e2 to images/8165bc86-2fcc-4af3-8832-53a88fb125b1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:24:37 compute-0 nova_compute[244014]: 2026-02-25 12:24:37.998 244018 INFO nova.scheduler.client.report [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Deleted allocations for instance 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.096 244018 DEBUG oslo_concurrency.processutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
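
[Annotation, not part of the log.] The `ceph df` probe above is a plain subprocess call from Nova. A standalone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring referenced in the command are available on the host; the JSON field names follow the usual `ceph df --format=json` schema and should be verified against your Ceph release:

    import json
    import subprocess

    # Same invocation Nova logs above via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)

    # Cluster-wide totals, then per-pool usage.
    print("total_bytes:", stats["stats"]["total_bytes"])
    for pool in stats["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
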
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.165 244018 DEBUG oslo_concurrency.lockutils [None req-83af5e0e-b236-4188-a047-674391ce2e92 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "19e66139-1e99-432c-bbd6-fc3a6d3e1b1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.232 244018 DEBUG nova.compute.manager [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-changed-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.233 244018 DEBUG nova.compute.manager [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Refreshing instance network info cache due to event network-changed-d333d3cb-5840-49b2-a761-04da902fae80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.233 244018 DEBUG oslo_concurrency.lockutils [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.233 244018 DEBUG oslo_concurrency.lockutils [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.234 244018 DEBUG nova.network.neutron [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Refreshing network info cache for port d333d3cb-5840-49b2-a761-04da902fae80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.247 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/8165bc86-2fcc-4af3-8832-53a88fb125b1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:24:38 compute-0 strange_diffie[282392]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:24:38 compute-0 strange_diffie[282392]: --> All data devices are unavailable
Feb 25 12:24:38 compute-0 systemd[1]: libpod-68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531.scope: Deactivated successfully.
Feb 25 12:24:38 compute-0 conmon[282392]: conmon 68c335b741fdfdf1bd98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531.scope/container/memory.events
Feb 25 12:24:38 compute-0 podman[282372]: 2026-02-25 12:24:38.309722447 +0000 UTC m=+0.683655458 container died 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:24:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1765501651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/35254800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:38 compute-0 ceph-mon[76335]: osdmap e172: 3 total, 3 up, 3 in
Feb 25 12:24:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:24:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160969171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c49d7b2dc584f785ee4b454a561ddf32ad4859f98bd84b993b59dc7d5eec313-merged.mount: Deactivated successfully.
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.486 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.489 244018 DEBUG nova.virt.libvirt.vif [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2064847536',display_name='tempest-DeleteServersTestJSON-server-2064847536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2064847536',id=43,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-dww8rqi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:33Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=c9d55f33-ecc8-4b0b-a335-8052fcdfaecb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.490 244018 DEBUG nova.network.os_vif_util [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.490 244018 DEBUG nova.network.os_vif_util [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.491 244018 DEBUG nova.objects.instance [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9d55f33-ecc8-4b0b-a335-8052fcdfaecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.510 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <uuid>c9d55f33-ecc8-4b0b-a335-8052fcdfaecb</uuid>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <name>instance-0000002b</name>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersTestJSON-server-2064847536</nova:name>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:37</nova:creationTime>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <nova:port uuid="d333d3cb-5840-49b2-a761-04da902fae80">
Feb 25 12:24:38 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <system>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="serial">c9d55f33-ecc8-4b0b-a335-8052fcdfaecb</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="uuid">c9d55f33-ecc8-4b0b-a335-8052fcdfaecb</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </system>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <os>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </os>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <features>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </features>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk">
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config">
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </source>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:24:38 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e7:bc:ac"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <target dev="tapd333d3cb-58"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/console.log" append="off"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <video>
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </video>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:24:38 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:24:38 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:24:38 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:24:38 compute-0 nova_compute[244014]: </domain>
Feb 25 12:24:38 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
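
[Annotation, not part of the log.] The domain XML dumped above is what libvirt defines for the guest; the notable detail is that both disks are network-backed RBD images served by the co-located Ceph cluster (monitor 192.168.122.100:6789), not local files. A short standard-library sketch that pulls the root-disk location back out of an excerpt of that XML:

    import xml.etree.ElementTree as ET

    # Excerpt of the <disk> element from the guest XML logged above.
    disk_xml = """
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    """

    disk = ET.fromstring(disk_xml)
    src = disk.find("source")
    mon = src.find("host")
    print("protocol:", src.get("protocol"))              # rbd
    print("image:", src.get("name"))                     # vms/..._disk
    print("monitor:", mon.get("name"), mon.get("port"))  # 192.168.122.100 6789

Once the guest is defined, the full XML can also be fetched live on the host with `virsh dumpxml instance-0000002b`.
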
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.511 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Preparing to wait for external event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.511 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.511 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.512 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.512 244018 DEBUG nova.virt.libvirt.vif [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2064847536',display_name='tempest-DeleteServersTestJSON-server-2064847536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2064847536',id=43,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-dww8rqi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:33Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=c9d55f33-ecc8-4b0b-a335-8052fcdfaecb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.513 244018 DEBUG nova.network.os_vif_util [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.513 244018 DEBUG nova.network.os_vif_util [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.514 244018 DEBUG os_vif [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.515 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.515 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.525 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd333d3cb-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.526 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd333d3cb-58, col_values=(('external_ids', {'iface-id': 'd333d3cb-5840-49b2-a761-04da902fae80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:bc:ac', 'vm-uuid': 'c9d55f33-ecc8-4b0b-a335-8052fcdfaecb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
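
[Annotation, not part of the log.] The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the entire VIF plug: ensure br-int exists, attach the tap port, and tag the Interface row so OVN can bind the logical port. A hedged sketch of the equivalent operations done with the ovs-vsctl CLI instead of the ovsdbapp IDL (assumes the OVS tools are installed on the host; all values are taken from the transaction logged above):

    import subprocess

    def vsctl(*args):
        # Thin wrapper over the ovs-vsctl CLI.
        subprocess.run(["ovs-vsctl", *args], check=True)

    port = "tapd333d3cb-58"
    external_ids = {
        "iface-id": "d333d3cb-5840-49b2-a761-04da902fae80",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:e7:bc:ac",
        "vm-uuid": "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb",
    }

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand(bridge=br-int, port=tapd333d3cb-58, may_exist=True)
    vsctl("--may-exist", "add-port", "br-int", port)
    # DbSetCommand(table=Interface, record=tapd333d3cb-58, col_values=...)
    vsctl("set", "Interface", port,
          *[f"external_ids:{k}={v}" for k, v in external_ids.items()])

The iface-id in external_ids is what ovn-controller matches against the logical switch port; the NetworkManager and "Successfully plugged vif" lines that follow are the direct consequence of this transaction.
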
Feb 25 12:24:38 compute-0 NetworkManager[49836]: <info>  [1772022278.5292] manager: (tapd333d3cb-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.538 244018 INFO os_vif [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58')
Feb 25 12:24:38 compute-0 podman[282372]: 2026-02-25 12:24:38.57412113 +0000 UTC m=+0.948054181 container remove 68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:24:38 compute-0 systemd[1]: libpod-conmon-68c335b741fdfdf1bd98922c3cce16a9a29823605ef12a1a4fedbdb9beef8531.scope: Deactivated successfully.
Feb 25 12:24:38 compute-0 sudo[282238]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.683 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.684 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.684 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:e7:bc:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.685 244018 INFO nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Using config drive
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.704 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.709 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(7ff9fed6cce44dc19c7aeb0f652ad6e2) on rbd image(36c98024-f0f0-4516-ad48-e8ffa90c5058_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:24:38 compute-0 sudo[282556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:24:38 compute-0 sudo[282556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:38 compute-0 sudo[282556]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579282083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:38 compute-0 sudo[282603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:24:38 compute-0 sudo[282603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.783 244018 DEBUG oslo_concurrency.processutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.788 244018 DEBUG nova.compute.provider_tree [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.809 244018 DEBUG nova.scheduler.client.report [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.832 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.861 244018 INFO nova.scheduler.client.report [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Deleted allocations for instance 42abb4b3-2144-4c11-a04c-7641d725bcde
Feb 25 12:24:38 compute-0 nova_compute[244014]: 2026-02-25 12:24:38.931 244018 DEBUG oslo_concurrency.lockutils [None req-e5577f2f-bbea-4bc9-840a-5f9359e1a431 2ab8469e67b142bab26cdd996097e148 d255f122166b45af8d4d83610929aaea - - default default] Lock "42abb4b3-2144-4c11-a04c-7641d725bcde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.092041478 +0000 UTC m=+0.061794750 container create fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.094 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022264.0928845, 2dd0d0eb-f5ba-419e-8233-256425af6119 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.094 244018 INFO nova.compute.manager [-] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] VM Stopped (Lifecycle Event)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.120 244018 DEBUG nova.compute.manager [None req-f4ef1247-2e26-4c9b-8861-3de3ffc151e9 - - - - - -] [instance: 2dd0d0eb-f5ba-419e-8233-256425af6119] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00367|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 12:24:39 compute-0 systemd[1]: Started libpod-conmon-fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692.scope.
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.059804855 +0000 UTC m=+0.029558188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.163 244018 INFO nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Creating config drive at /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.169 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h8omj4k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.228218862 +0000 UTC m=+0.197972174 container init fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.236135986 +0000 UTC m=+0.205889268 container start fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.240251572 +0000 UTC m=+0.210004854 container attach fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:24:39 compute-0 gracious_easley[282660]: 167 167
Feb 25 12:24:39 compute-0 systemd[1]: libpod-fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692.scope: Deactivated successfully.
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.243569096 +0000 UTC m=+0.213322408 container died fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcf922f8d57abb80468f1e43d5cee2072fde4f31eda0508de1621ccbdfa48d37-merged.mount: Deactivated successfully.
Feb 25 12:24:39 compute-0 podman[282642]: 2026-02-25 12:24:39.28858974 +0000 UTC m=+0.258343012 container remove fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:24:39 compute-0 systemd[1]: libpod-conmon-fc609cb832ef1d2c838191d7261e6dbda4e2251005d566cb97b72665745e0692.scope: Deactivated successfully.
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.310 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h8omj4k" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:39 compute-0 ceph-mon[76335]: pgmap v1215: 305 pgs: 305 active+clean; 246 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 9.4 MiB/s wr, 491 op/s
Feb 25 12:24:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/160969171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:24:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/579282083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.372 244018 DEBUG nova.storage.rbd_utils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.390 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb 25 12:24:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb 25 12:24:39 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb 25 12:24:39 compute-0 podman[282706]: 2026-02-25 12:24:39.4825488 +0000 UTC m=+0.057361525 container create ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.490 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(8165bc86-2fcc-4af3-8832-53a88fb125b1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:24:39 compute-0 systemd[1]: Started libpod-conmon-ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522.scope.
Feb 25 12:24:39 compute-0 podman[282706]: 2026-02-25 12:24:39.451013247 +0000 UTC m=+0.025825992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d3bb2a4de241fcd24ea59e45697e24bbfca67aba63e35bf7b93921ef506a2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d3bb2a4de241fcd24ea59e45697e24bbfca67aba63e35bf7b93921ef506a2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d3bb2a4de241fcd24ea59e45697e24bbfca67aba63e35bf7b93921ef506a2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d3bb2a4de241fcd24ea59e45697e24bbfca67aba63e35bf7b93921ef506a2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:39 compute-0 podman[282706]: 2026-02-25 12:24:39.567466273 +0000 UTC m=+0.142278968 container init ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:24:39 compute-0 podman[282706]: 2026-02-25 12:24:39.575157841 +0000 UTC m=+0.149970566 container start ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:24:39 compute-0 podman[282706]: 2026-02-25 12:24:39.579666938 +0000 UTC m=+0.154479643 container attach ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.590 244018 DEBUG oslo_concurrency.processutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.591 244018 INFO nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Deleting local config drive /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb/disk.config because it was imported into RBD.
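The import sequence above is deliberate: probe for the image (it "does not exist"), rbd import the local file, and only after a zero exit status delete the local config drive. A condensed sketch of that ordering, reusing the exact CLI arguments from the log:

    # Sketch: import a local file into RBD, removing the local copy only on
    # success, mirroring the order of operations in the surrounding log lines.
    import os
    import subprocess

    def import_config_drive(local_path: str, image_name: str) -> None:
        subprocess.run(
            ["rbd", "import", "--pool", "vms", local_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)  # raises on failure, leaving the local file in place
        os.unlink(local_path)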
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.598 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:39 compute-0 kernel: tapd333d3cb-58: entered promiscuous mode
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.6319] manager: (tapd333d3cb-58): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00368|binding|INFO|Claiming lport d333d3cb-5840-49b2-a761-04da902fae80 for this chassis.
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00369|binding|INFO|d333d3cb-5840-49b2-a761-04da902fae80: Claiming fa:16:3e:e7:bc:ac 10.100.0.13
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.651 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:bc:ac 10.100.0.13'], port_security=['fa:16:3e:e7:bc:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c9d55f33-ecc8-4b0b-a335-8052fcdfaecb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d333d3cb-5840-49b2-a761-04da902fae80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.653 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d333d3cb-5840-49b2-a761-04da902fae80 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis
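The metadata agent reacted to a Port_Binding row whose chassis column now points at this host. The same binding can be checked from the shell with ovn-sbctl; a small wrapper (table and column names as in the matched event above, command assumed present on the host):

    # Sketch: look up which chassis a logical port is bound to in the OVN SB DB.
    # Returns the chassis row UUID, or an empty string if unbound.
    import subprocess

    def port_chassis(logical_port: str) -> str:
        out = subprocess.run(
            ["ovn-sbctl", "--bare", "--columns=chassis", "find",
             "Port_Binding", f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(port_chassis("d333d3cb-5840-49b2-a761-04da902fae80"))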
Feb 25 12:24:39 compute-0 systemd-machined[210048]: New machine qemu-48-instance-0000002b.
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.655 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:39 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.672 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19d69349-85c8-44da-8454-9dcab0ebbdb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.673 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.675 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5eaa95ac-4bd7-4693-a619-6c1c8c3a688a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef1a8d9-b353-4a26-abea-c39b2749562d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00370|binding|INFO|Setting lport d333d3cb-5840-49b2-a761-04da902fae80 ovn-installed in OVS
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00371|binding|INFO|Setting lport d333d3cb-5840-49b2-a761-04da902fae80 up in Southbound
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 systemd-udevd[282780]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.689 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[66b0dafa-c86a-4ee4-afb8-38d4f63a4d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.6986] device (tapd333d3cb-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.6996] device (tapd333d3cb-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.701 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[edffd9c5-0b38-4320-a7b7-942a720ac959]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.719 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1d8a82-8a95-4ebd-a1fd-9f2e1a3870a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.722 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3382ac6b-678d-4d64-a5ad-4d9de34ca53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.7233] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/167)
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.745 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[359290cf-ebfb-4255-8426-2df7300c5558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.748 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[95b9aee2-1686-4cc9-85a5-dc6a5d940cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 246 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.8 MiB/s wr, 471 op/s
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.7643] device (tapa0d45b1c-10): carrier: link connected
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.767 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[108e742b-5ddc-4c62-bee2-b7890dcfa188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12f03698-7f84-4974-9e7e-7a6aa40ceed7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424917, 'reachable_time': 29608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282811, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.786 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee2c478-bc47-454f-a83f-900df1721c75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424917, 'tstamp': 424917}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282812, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.798 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a24a5f4-f2d5-44a2-a084-9252ee15f152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424917, 'reachable_time': 29608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282815, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.820 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8be2ce1-1edf-4908-a327-300e143bafb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
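The privsep replies above are netlink traffic: the daemon creates the tapa0d45b1c-10/-11 VETH pair, moves one end into the ovnmeta- namespace, and reads back the RTM_NEWLINK state dumped in the long lines. A rough equivalent with pyroute2, the library neutron's privileged ip_lib drives; interface and namespace names are the ones in the log, and the namespace is assumed to exist already:

    # Sketch: create a veth pair and push one end into a network namespace,
    # approximating what the privsep daemon is doing here. Requires root.
    from pyroute2 import IPRoute

    NS = "ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99"

    with IPRoute() as ipr:
        ipr.link("add", ifname="tapa0d45b1c-10", kind="veth",
                 peer="tapa0d45b1c-11")
        inner = ipr.link_lookup(ifname="tapa0d45b1c-11")[0]
        ipr.link("set", index=inner, net_ns_fd=NS)  # namespace must already exist
        outer = ipr.link_lookup(ifname="tapa0d45b1c-10")[0]
        ipr.link("set", index=outer, state="up")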
Feb 25 12:24:39 compute-0 hopeful_moser[282752]: {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     "0": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "devices": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "/dev/loop3"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             ],
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_name": "ceph_lv0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_size": "21470642176",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "name": "ceph_lv0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "tags": {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_name": "ceph",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.crush_device_class": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.encrypted": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.objectstore": "bluestore",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_id": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.vdo": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.with_tpm": "0"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             },
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "vg_name": "ceph_vg0"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         }
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     ],
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     "1": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "devices": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "/dev/loop4"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             ],
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_name": "ceph_lv1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_size": "21470642176",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "name": "ceph_lv1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "tags": {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_name": "ceph",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.crush_device_class": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.encrypted": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.objectstore": "bluestore",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_id": "1",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.vdo": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.with_tpm": "0"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             },
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "vg_name": "ceph_vg1"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         }
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     ],
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     "2": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "devices": [
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "/dev/loop5"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             ],
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_name": "ceph_lv2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_size": "21470642176",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "name": "ceph_lv2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "tags": {
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.cluster_name": "ceph",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.crush_device_class": "",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.encrypted": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.objectstore": "bluestore",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osd_id": "2",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.vdo": "0",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:                 "ceph.with_tpm": "0"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             },
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "type": "block",
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:             "vg_name": "ceph_vg2"
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:         }
Feb 25 12:24:39 compute-0 hopeful_moser[282752]:     ]
Feb 25 12:24:39 compute-0 hopeful_moser[282752]: }
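The container output above is ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the ceph.* lv_tags repeated in parsed form under "tags". A short consumer that reduces it to one line per OSD, using the field names exactly as printed above:

    # Sketch: summarize the `ceph-volume lvm list --format json` output above
    # into osd_id -> LV/device rows. Reads the JSON document from stdin.
    import json
    import sys

    data = json.load(sys.stdin)
    for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")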
Feb 25 12:24:39 compute-0 systemd[1]: libpod-ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522.scope: Deactivated successfully.
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[036758df-ba90-4e2a-a821-0e2ce94cb4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.870 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.870 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.870 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 12:24:39 compute-0 NetworkManager[49836]: <info>  [1772022279.8738] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.877 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ovn_controller[147040]: 2026-02-25T12:24:39Z|00372|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.880 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.881 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1b9902-7b87-4670-8ed2-11a9e8a1e85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.882 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:24:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:39.882 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
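The agent renders that haproxy configuration from a template, then launches haproxy inside the ovnmeta namespace through rootwrap, as the command line above shows. A toy rendering of the same shape with string.Template; the template text is abridged from the config logged above and the substituted values are this datapath's:

    # Sketch: render a metadata-proxy haproxy config like the one logged above.
    import textwrap
    from string import Template

    TEMPLATE = Template(textwrap.dedent("""\
        global
            log         /dev/log local0 debug
            log-tag     haproxy-metadata-proxy-$network_id
            pidfile     $pid_dir/$network_id.pid.haproxy
            daemon

        listen listener
            bind 169.254.169.254:80
            server metadata $socket_path
            http-request add-header X-OVN-Network-ID $network_id
        """))

    print(TEMPLATE.substitute(
        network_id="a0d45b1c-1680-4599-a27a-6e3335c94c99",
        pid_dir="/var/lib/neutron/external/pids",
        socket_path="/var/lib/neutron/metadata_proxy"))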
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:39 compute-0 podman[282822]: 2026-02-25 12:24:39.897777541 +0000 UTC m=+0.036344339 container died ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.992 244018 DEBUG nova.compute.manager [req-c268b43f-e120-4b8f-9306-c77a34ec643b req-5f12b5a3-2208-4f97-9c2a-ae50c1c8dbb2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.992 244018 DEBUG oslo_concurrency.lockutils [req-c268b43f-e120-4b8f-9306-c77a34ec643b req-5f12b5a3-2208-4f97-9c2a-ae50c1c8dbb2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.993 244018 DEBUG oslo_concurrency.lockutils [req-c268b43f-e120-4b8f-9306-c77a34ec643b req-5f12b5a3-2208-4f97-9c2a-ae50c1c8dbb2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.993 244018 DEBUG oslo_concurrency.lockutils [req-c268b43f-e120-4b8f-9306-c77a34ec643b req-5f12b5a3-2208-4f97-9c2a-ae50c1c8dbb2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:39 compute-0 nova_compute[244014]: 2026-02-25 12:24:39.993 244018 DEBUG nova.compute.manager [req-c268b43f-e120-4b8f-9306-c77a34ec643b req-5f12b5a3-2208-4f97-9c2a-ae50c1c8dbb2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Processing event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
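The four lines above are Nova's external-event plumbing: the network-vif-plugged event arrives from Neutron over the API, is popped under a per-instance "-events" lock, and unblocks whatever is sitting in wait_for_instance_event ("wait completed in 0 seconds" further down). The pattern reduces to a keyed threading.Event; all names in this sketch are illustrative, not Nova's API:

    # Sketch: keyed wait/signal, the shape of Nova's instance-event machinery.
    import threading
    from collections import defaultdict

    class EventWaiter:
        def __init__(self) -> None:
            self._lock = threading.Lock()                 # cf. the "-events" lock
            self._events = defaultdict(threading.Event)

        def signal(self, key: str) -> None:
            with self._lock:
                self._events[key].set()

        def wait(self, key: str, timeout: float = 300.0) -> bool:
            with self._lock:
                event = self._events[key]
            return event.wait(timeout)

    w = EventWaiter()
    w.signal("network-vif-plugged-d333d3cb")       # the event lands first...
    assert w.wait("network-vif-plugged-d333d3cb")  # ...so the wait returns at once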
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.113 244018 DEBUG nova.network.neutron [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Updated VIF entry in instance network info cache for port d333d3cb-5840-49b2-a761-04da902fae80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.114 244018 DEBUG nova.network.neutron [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Updating instance_info_cache with network_info: [{"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.137 244018 DEBUG oslo_concurrency.lockutils [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.137 244018 DEBUG nova.compute.manager [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Received event network-vif-deleted-7d80048f-ed99-48bf-91aa-4ba1ca2f7bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.137 244018 DEBUG nova.compute.manager [req-6c1232d9-d0f3-4236-a73e-1dd9057bf1ae req-3b40eb6d-a342-4123-88ac-e962e37a652c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Received event network-vif-deleted-1e1fa674-42f4-483b-8576-2dd9c4c8ffc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-12d3bb2a4de241fcd24ea59e45697e24bbfca67aba63e35bf7b93921ef506a2e-merged.mount: Deactivated successfully.
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.298 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022265.297357, 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.299 244018 INFO nova.compute.manager [-] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] VM Stopped (Lifecycle Event)
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.338 244018 DEBUG nova.compute.manager [None req-eda1cdea-349d-4668-b575-e37064d36eae - - - - - -] [instance: 2fe12dd6-0dd4-468d-8fd4-78c42db2cd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:40 compute-0 podman[282822]: 2026-02-25 12:24:40.467564547 +0000 UTC m=+0.606131325 container remove ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:24:40 compute-0 systemd[1]: libpod-conmon-ad69224083a6a332d46be4143575d3ef09bc348ba055bf44f24240b28b562522.scope: Deactivated successfully.
Feb 25 12:24:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb 25 12:24:40 compute-0 sudo[282603]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb 25 12:24:40 compute-0 ceph-mon[76335]: osdmap e173: 3 total, 3 up, 3 in
Feb 25 12:24:40 compute-0 sudo[282886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:24:40 compute-0 sudo[282886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:40 compute-0 sudo[282886]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:40 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb 25 12:24:40 compute-0 podman[282885]: 2026-02-25 12:24:40.651983747 +0000 UTC m=+0.120084610 container create 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:24:40 compute-0 podman[282885]: 2026-02-25 12:24:40.559767067 +0000 UTC m=+0.027868000 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:24:40 compute-0 sudo[282936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:24:40 compute-0 sudo[282936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.703 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.703 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022280.7007945, c9d55f33-ecc8-4b0b-a335-8052fcdfaecb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.704 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] VM Started (Lifecycle Event)
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.734 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.739 244018 INFO nova.virt.libvirt.driver [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Instance spawned successfully.
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.740 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.762 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.766 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:40 compute-0 systemd[1]: Started libpod-conmon-9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008.scope.
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.797 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.798 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.798 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.799 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.800 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.801 244018 DEBUG nova.virt.libvirt.driver [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:24:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29995863cb4bedd7fdfa7ea2ac6e4f55664d3eb95cee463932488cece1212528/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022280.7010202, c9d55f33-ecc8-4b0b-a335-8052fcdfaecb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.849 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] VM Paused (Lifecycle Event)
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.912 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.918 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022280.7074852, c9d55f33-ecc8-4b0b-a335-8052fcdfaecb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:40 compute-0 nova_compute[244014]: 2026-02-25 12:24:40.918 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] VM Resumed (Lifecycle Event)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.005 244018 INFO nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Took 7.25 seconds to spawn the instance on the hypervisor.
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.006 244018 DEBUG nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:41 compute-0 podman[282885]: 2026-02-25 12:24:41.051988397 +0000 UTC m=+0.520089310 container init 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:24:41 compute-0 podman[282885]: 2026-02-25 12:24:41.057842083 +0000 UTC m=+0.525942976 container start 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:24:41 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [NOTICE]   (282981) : New worker (282983) forked
Feb 25 12:24:41 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [NOTICE]   (282981) : Loading success.
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.093 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.097 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 8165bc86-2fcc-4af3-8832-53a88fb125b1 could not be found.
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 8165bc86-2fcc-4af3-8832-53a88fb125b1
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 8165bc86-2fcc-4af3-8832-53a88fb125b1 could not be found.
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.269 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.348272273 +0000 UTC m=+0.096985196 container create 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.284885279 +0000 UTC m=+0.033598272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.422 244018 DEBUG nova.storage.rbd_utils [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(snap) on rbd image(8165bc86-2fcc-4af3-8832-53a88fb125b1) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.427 244018 INFO nova.compute.manager [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Took 8.75 seconds to build instance.
Feb 25 12:24:41 compute-0 nova_compute[244014]: 2026-02-25 12:24:41.585 244018 DEBUG oslo_concurrency.lockutils [None req-e340a55d-baf7-4225-9ea8-580e6d646662 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:41 compute-0 systemd[1]: Started libpod-conmon-7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c.scope.
Feb 25 12:24:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:41 compute-0 ceph-mon[76335]: pgmap v1218: 305 pgs: 305 active+clean; 246 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.8 MiB/s wr, 471 op/s
Feb 25 12:24:41 compute-0 ceph-mon[76335]: osdmap e174: 3 total, 3 up, 3 in
Feb 25 12:24:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 280 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 6.1 MiB/s wr, 625 op/s
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.766871469 +0000 UTC m=+0.515584462 container init 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.776129421 +0000 UTC m=+0.524842324 container start 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:24:41 compute-0 gallant_noyce[283026]: 167 167
Feb 25 12:24:41 compute-0 systemd[1]: libpod-7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c.scope: Deactivated successfully.
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.844008552 +0000 UTC m=+0.592721485 container attach 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:24:41 compute-0 podman[282992]: 2026-02-25 12:24:41.844857956 +0000 UTC m=+0.593570889 container died 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-72c10ca442cc6a1abd23218e555b1d88ccdec41016f07b1de8e1f082376a9868-merged.mount: Deactivated successfully.
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000700711365625434 of space, bias 1.0, pg target 0.21021340968763022 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0027070760400902134 of space, bias 1.0, pg target 0.812122812027064 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.043356275747878e-07 of space, bias 4.0, pg target 0.0010852027530897455 quantized to 16 (current 16)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:24:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.304 244018 DEBUG nova.compute.manager [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.305 244018 DEBUG oslo_concurrency.lockutils [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.306 244018 DEBUG oslo_concurrency.lockutils [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.306 244018 DEBUG oslo_concurrency.lockutils [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.307 244018 DEBUG nova.compute.manager [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] No waiting events found dispatching network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.307 244018 WARNING nova.compute.manager [req-48590c8f-9ba6-4335-9d0c-e432efa29a4d req-fa1026b5-1a65-49ec-ad7f-fca55f8a793b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received unexpected event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 for instance with vm_state active and task_state None.
Feb 25 12:24:42 compute-0 podman[282992]: 2026-02-25 12:24:42.355288522 +0000 UTC m=+1.104001445 container remove 7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_noyce, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:24:42 compute-0 systemd[1]: libpod-conmon-7015a8f34a4282a3771190a1470ac906b3c4c2485e815a6174e6e8515b6baa1c.scope: Deactivated successfully.
Feb 25 12:24:42 compute-0 podman[283052]: 2026-02-25 12:24:42.503152736 +0000 UTC m=+0.020618204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:24:42 compute-0 podman[283052]: 2026-02-25 12:24:42.689656065 +0000 UTC m=+0.207121533 container create 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.726 244018 INFO nova.compute.manager [None req-632867ea-62aa-4292-8f3a-e3658bb96529 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Pausing
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.728 244018 DEBUG nova.objects.instance [None req-632867ea-62aa-4292-8f3a-e3658bb96529 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'flavor' on Instance uuid c9d55f33-ecc8-4b0b-a335-8052fcdfaecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.761 244018 DEBUG nova.compute.manager [None req-632867ea-62aa-4292-8f3a-e3658bb96529 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.762 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022282.7607996, c9d55f33-ecc8-4b0b-a335-8052fcdfaecb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.763 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] VM Paused (Lifecycle Event)
Feb 25 12:24:42 compute-0 systemd[1]: Started libpod-conmon-24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f.scope.
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.807 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.811 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:24:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb 25 12:24:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c65924b520e5bd4c217d01c0f5d1725ceee77cf8eb11b301ca800dbd4562f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c65924b520e5bd4c217d01c0f5d1725ceee77cf8eb11b301ca800dbd4562f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c65924b520e5bd4c217d01c0f5d1725ceee77cf8eb11b301ca800dbd4562f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c65924b520e5bd4c217d01c0f5d1725ceee77cf8eb11b301ca800dbd4562f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:24:42 compute-0 nova_compute[244014]: 2026-02-25 12:24:42.840 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 25 12:24:42 compute-0 ceph-mon[76335]: pgmap v1220: 305 pgs: 305 active+clean; 280 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 6.1 MiB/s wr, 625 op/s
Feb 25 12:24:42 compute-0 podman[283052]: 2026-02-25 12:24:42.986507496 +0000 UTC m=+0.503973024 container init 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:24:42 compute-0 podman[283052]: 2026-02-25 12:24:42.993634508 +0000 UTC m=+0.511099956 container start 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:24:43 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb 25 12:24:43 compute-0 podman[283052]: 2026-02-25 12:24:43.070107752 +0000 UTC m=+0.587573290 container attach 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:24:43 compute-0 nova_compute[244014]: 2026-02-25 12:24:43.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:43 compute-0 lvm[283166]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:24:43 compute-0 lvm[283166]: VG ceph_vg1 finished
Feb 25 12:24:43 compute-0 lvm[283164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:24:43 compute-0 lvm[283164]: VG ceph_vg0 finished
Feb 25 12:24:43 compute-0 lvm[283168]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:24:43 compute-0 lvm[283168]: VG ceph_vg2 finished
Feb 25 12:24:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.7 MiB/s wr, 230 op/s
Feb 25 12:24:43 compute-0 unruffled_boyd[283069]: {}
Feb 25 12:24:43 compute-0 systemd[1]: libpod-24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f.scope: Deactivated successfully.
Feb 25 12:24:43 compute-0 systemd[1]: libpod-24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f.scope: Consumed 1.083s CPU time.
Feb 25 12:24:43 compute-0 podman[283052]: 2026-02-25 12:24:43.84630764 +0000 UTC m=+1.363773108 container died 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:24:43 compute-0 ceph-mon[76335]: osdmap e175: 3 total, 3 up, 3 in
Feb 25 12:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-88c65924b520e5bd4c217d01c0f5d1725ceee77cf8eb11b301ca800dbd4562f3-merged.mount: Deactivated successfully.
Feb 25 12:24:44 compute-0 podman[283052]: 2026-02-25 12:24:44.514633794 +0000 UTC m=+2.032099242 container remove 24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:24:44 compute-0 sudo[282936]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:24:44 compute-0 nova_compute[244014]: 2026-02-25 12:24:44.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb 25 12:24:44 compute-0 systemd[1]: libpod-conmon-24ae70cf128ca8a52eececb7056db0ca8cc0e61b141bfb3993c4eb86ada2f17f.scope: Deactivated successfully.
Feb 25 12:24:44 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb 25 12:24:44 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.036 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.037 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.037 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.037 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.038 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.039 244018 INFO nova.compute.manager [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Terminating instance
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.041 244018 DEBUG nova.compute.manager [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:45 compute-0 sudo[283184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:24:45 compute-0 sudo[283184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:24:45 compute-0 sudo[283184]: pam_unix(sudo:session): session closed for user root
Feb 25 12:24:45 compute-0 ceph-mon[76335]: pgmap v1222: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.7 MiB/s wr, 230 op/s
Feb 25 12:24:45 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:45 compute-0 ceph-mon[76335]: osdmap e176: 3 total, 3 up, 3 in
Feb 25 12:24:45 compute-0 kernel: tapd333d3cb-58 (unregistering): left promiscuous mode
Feb 25 12:24:45 compute-0 NetworkManager[49836]: <info>  [1772022285.3662] device (tapd333d3cb-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:45 compute-0 ovn_controller[147040]: 2026-02-25T12:24:45Z|00373|binding|INFO|Releasing lport d333d3cb-5840-49b2-a761-04da902fae80 from this chassis (sb_readonly=0)
Feb 25 12:24:45 compute-0 ovn_controller[147040]: 2026-02-25T12:24:45Z|00374|binding|INFO|Setting lport d333d3cb-5840-49b2-a761-04da902fae80 down in Southbound
Feb 25 12:24:45 compute-0 ovn_controller[147040]: 2026-02-25T12:24:45Z|00375|binding|INFO|Removing iface tapd333d3cb-58 ovn-installed in OVS
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.389 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:bc:ac 10.100.0.13'], port_security=['fa:16:3e:e7:bc:ac 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c9d55f33-ecc8-4b0b-a335-8052fcdfaecb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d333d3cb-5840-49b2-a761-04da902fae80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.392 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d333d3cb-5840-49b2-a761-04da902fae80 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.394 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[124d5b32-c073-41d6-9d87-9c306ea07e3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.396 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 12:24:45 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Feb 25 12:24:45 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 2.993s CPU time.
Feb 25 12:24:45 compute-0 systemd-machined[210048]: Machine qemu-48-instance-0000002b terminated.
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.465 244018 WARNING nova.compute.manager [None req-bbda3b86-86f9-4506-bb2a-29ea7d4f7b1b f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Image not found during snapshot: nova.exception.ImageNotFound: Image 8165bc86-2fcc-4af3-8832-53a88fb125b1 could not be found.
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.482 244018 INFO nova.virt.libvirt.driver [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Instance destroyed successfully.
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.483 244018 DEBUG nova.objects.instance [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid c9d55f33-ecc8-4b0b-a335-8052fcdfaecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.495 244018 DEBUG nova.virt.libvirt.vif [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2064847536',display_name='tempest-DeleteServersTestJSON-server-2064847536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2064847536',id=43,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-dww8rqi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:42Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=c9d55f33-ecc8-4b0b-a335-8052fcdfaecb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.495 244018 DEBUG nova.network.os_vif_util [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "d333d3cb-5840-49b2-a761-04da902fae80", "address": "fa:16:3e:e7:bc:ac", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd333d3cb-58", "ovs_interfaceid": "d333d3cb-5840-49b2-a761-04da902fae80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.496 244018 DEBUG nova.network.os_vif_util [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.498 244018 DEBUG os_vif [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.501 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd333d3cb-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.510 244018 INFO os_vif [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:bc:ac,bridge_name='br-int',has_traffic_filtering=True,id=d333d3cb-5840-49b2-a761-04da902fae80,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd333d3cb-58')
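[editor's note] The unplug sequence above is the standard os-vif flow: nova converts its own VIF dict into an os-vif VIFOpenVSwitch object (os_vif_util.py:511/548) and the 'ovs' plugin removes the tap port from br-int through an ovsdbapp transaction, which is the DelPortCommand logged at 12:24:45.501. A minimal sketch of the same OVSDB call, assuming ovsdbapp's Open_vSwitch schema API and the default local db.sock path:

    from ovs.db import idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumption: default socket location
    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(connection.Connection(idl.Idl(OVSDB, helper), timeout=5))
    # same operation as the logged DelPortCommand(port=..., bridge=br-int, if_exists=True)
    api.del_port('tapd333d3cb-58', bridge='br-int', if_exists=True).execute(check_error=True)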
Feb 25 12:24:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [NOTICE]   (282981) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [NOTICE]   (282981) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [WARNING]  (282981) : Exiting Master process...
Feb 25 12:24:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [ALERT]    (282981) : Current worker (282983) exited with code 143 (Terminated)
Feb 25 12:24:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[282965]: [WARNING]  (282981) : All workers exited. Exiting... (0)
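[editor's note] The haproxy lines above are the expected teardown of the per-network metadata proxy, not a crash: exit code 143 follows the 128 + signal-number convention for a process stopped by SIGTERM. A one-line check of the arithmetic:

    import signal
    # 128 + SIGTERM (15) == 143: haproxy was terminated cleanly, it did not fail
    assert 128 + int(signal.SIGTERM) == 143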
Feb 25 12:24:45 compute-0 systemd[1]: libpod-9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008.scope: Deactivated successfully.
Feb 25 12:24:45 compute-0 podman[283241]: 2026-02-25 12:24:45.619622546 +0000 UTC m=+0.115718966 container died 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.744 244018 DEBUG nova.compute.manager [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-unplugged-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.745 244018 DEBUG oslo_concurrency.lockutils [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.745 244018 DEBUG oslo_concurrency.lockutils [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.746 244018 DEBUG oslo_concurrency.lockutils [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.746 244018 DEBUG nova.compute.manager [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] No waiting events found dispatching network-vif-unplugged-d333d3cb-5840-49b2-a761-04da902fae80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.746 244018 DEBUG nova.compute.manager [req-7c6393de-a4a8-478e-9889-974250be2b3a req-4b339bb3-1984-409e-985c-01bccacb125d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-unplugged-d333d3cb-5840-49b2-a761-04da902fae80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
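[editor's note] The network-vif-unplugged lines show Nova's external-event plumbing: Neutron posts the event to the compute API, and the compute manager pops any registered waiter under the per-instance "<uuid>-events" lock; since nothing was waiting here, the event is dispatched and simply logged. A toy model of that pop semantics (an illustrative sketch, not Nova's actual code):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Simplified stand-in for nova.compute.manager.InstanceEvents."""
        def __init__(self):
            self._lock = threading.Lock()       # the "<uuid>-events" lock in the log
            self._waiters = defaultdict(dict)   # instance uuid -> {event name: waiter}

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:
                waiter = self._waiters[instance_uuid].pop(event_name, None)
            if waiter is None:
                print('No waiting events found dispatching', event_name)
            return waiter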
Feb 25 12:24:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Feb 25 12:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-29995863cb4bedd7fdfa7ea2ac6e4f55664d3eb95cee463932488cece1212528-merged.mount: Deactivated successfully.
Feb 25 12:24:45 compute-0 podman[283241]: 2026-02-25 12:24:45.820712897 +0000 UTC m=+0.316809317 container cleanup 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:24:45 compute-0 systemd[1]: libpod-conmon-9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008.scope: Deactivated successfully.
Feb 25 12:24:45 compute-0 podman[283290]: 2026-02-25 12:24:45.962138869 +0000 UTC m=+0.122264781 container remove 9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.970 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acf28cc4-2ecd-4aed-93a2-779a96594fdc]: (4, ('Wed Feb 25 12:24:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008)\n9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008\nWed Feb 25 12:24:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008)\n9c382b059bb852a36a77d167f36f2bf0311747066a5051bb68fdd5f9df7a2008\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.972 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76959cf8-a5bb-4a98-9e32-cd5cd06353ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.973 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 12:24:45 compute-0 nova_compute[244014]: 2026-02-25 12:24:45.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:45.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49cd9b14-4111-476e-9f08-fa3b924028e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.006 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5ea919-2ef0-4a44-9500-9faf8e8c83eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.007 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[814c140a-0e16-4120-bd56-c7a3ce10322e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.026 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ad4132-aea8-4519-94c0-0b90278b9b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424912, 'reachable_time': 17931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283305, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.031 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.031 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[da498843-35a9-43fd-a8ff-fb69d1806bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
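[editor's note] With the last VM gone from this network on the chassis, the agent tears down the ovnmeta-<network-uuid> namespace through its privsep daemon; the remove_netns line corresponds to a privileged netlink call. Roughly equivalent, assuming root privileges and pyroute2 (the library neutron's privileged ip_lib wraps):

    from pyroute2 import netns

    # assumption: run as root; unprivileged-API equivalent of the
    # privsep-mediated remove_netns() logged above
    netns.remove('ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99')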
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.128 244018 INFO nova.virt.libvirt.driver [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Deleting instance files /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_del
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.129 244018 INFO nova.virt.libvirt.driver [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Deletion of /var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb_del complete
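[editor's note] The "<uuid>_del" suffix in the two lines above reflects the libvirt driver's rename-then-delete pattern: the instance directory is first renamed out of the way, so the instance path disappears atomically and a partially failed deletion can be retried later, and only then is the tree removed. A minimal sketch of the pattern (paths taken from the log, logic simplified):

    import os
    import shutil

    inst_dir = '/var/lib/nova/instances/c9d55f33-ecc8-4b0b-a335-8052fcdfaecb'
    target = inst_dir + '_del'
    os.rename(inst_dir, target)                 # atomic: instance path vanishes at once
    shutil.rmtree(target, ignore_errors=True)   # then reclaim the space; safe to retry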
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.195 244018 INFO nova.compute.manager [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Took 1.15 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.196 244018 DEBUG oslo.service.loopingcall [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.196 244018 DEBUG nova.compute.manager [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.196 244018 DEBUG nova.network.neutron [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
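[editor's note] Network deallocation is wrapped in an oslo.service looping call so transient Neutron failures can be retried, which is what "Waiting for function ... _deallocate_network_with_retries to return" refers to. The general retry-until-done shape of such a call (illustrative only, not Nova's exact wrapper):

    from oslo_service import loopingcall

    def _deallocate_with_retries():
        try:
            pass  # ask Neutron to unbind/delete the instance's ports here
        except Exception:
            return  # swallow the error; the looping call invokes us again
        raise loopingcall.LoopingCallDone()  # success: stop looping

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    timer.start(interval=1.0).wait()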
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.337 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.337 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.338 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.338 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.339 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.340 244018 INFO nova.compute.manager [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Terminating instance
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.342 244018 DEBUG nova.compute.manager [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:24:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:24:46 compute-0 kernel: tapf24dea9b-1f (unregistering): left promiscuous mode
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 NetworkManager[49836]: <info>  [1772022286.3864] device (tapf24dea9b-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 ovn_controller[147040]: 2026-02-25T12:24:46Z|00376|binding|INFO|Releasing lport f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 from this chassis (sb_readonly=0)
Feb 25 12:24:46 compute-0 ovn_controller[147040]: 2026-02-25T12:24:46Z|00377|binding|INFO|Setting lport f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 down in Southbound
Feb 25 12:24:46 compute-0 ovn_controller[147040]: 2026-02-25T12:24:46Z|00378|binding|INFO|Removing iface tapf24dea9b-1f ovn-installed in OVS
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.400 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:47:8e 10.100.0.4'], port_security=['fa:16:3e:09:47:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '36c98024-f0f0-4516-ad48-e8ffa90c5058', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.401 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.402 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.403 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b38b3ea0-139c-4e49-9898-021a79c5adaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.404 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
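[editor's note] The metadata agent reacts to the port going down through an ovsdbapp row event on the southbound Port_Binding table (the "Matched UPDATE: PortBindingUpdatedEvent" line above); once no VIF ports remain on the datapath it schedules the namespace cleanup that follows. A skeletal subscription in the same style (a sketch assuming ovsdbapp's RowEvent API, not neutron's full event class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # mirrors the logged parameters: events=('update',), table='Port_Binding'
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries previous values of changed columns, e.g. up=[True]
            print('port', row.logical_port, 'binding changed; up ->', row.up)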
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Feb 25 12:24:46 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 11.087s CPU time.
Feb 25 12:24:46 compute-0 systemd-machined[210048]: Machine qemu-47-instance-0000002a terminated.
Feb 25 12:24:46 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [NOTICE]   (281855) : haproxy version is 2.8.14-c23fe91
Feb 25 12:24:46 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [NOTICE]   (281855) : path to executable is /usr/sbin/haproxy
Feb 25 12:24:46 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [WARNING]  (281855) : Exiting Master process...
Feb 25 12:24:46 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [ALERT]    (281855) : Current worker (281857) exited with code 143 (Terminated)
Feb 25 12:24:46 compute-0 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[281833]: [WARNING]  (281855) : All workers exited. Exiting... (0)
Feb 25 12:24:46 compute-0 systemd[1]: libpod-cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21.scope: Deactivated successfully.
Feb 25 12:24:46 compute-0 podman[283328]: 2026-02-25 12:24:46.548316899 +0000 UTC m=+0.053220807 container died cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21-userdata-shm.mount: Deactivated successfully.
Feb 25 12:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-66ed715b3adca6564ed54107ca6d9cf86e89a58a95c8bc012d225f6a6830e865-merged.mount: Deactivated successfully.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.579 244018 INFO nova.virt.libvirt.driver [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Instance destroyed successfully.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.580 244018 DEBUG nova.objects.instance [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid 36c98024-f0f0-4516-ad48-e8ffa90c5058 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:46 compute-0 podman[283328]: 2026-02-25 12:24:46.587903099 +0000 UTC m=+0.092807007 container cleanup cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.597 244018 DEBUG nova.virt.libvirt.vif [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-899680496',display_name='tempest-ImagesTestJSON-server-899680496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-899680496',id=42,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-govupkud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:24:45Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=36c98024-f0f0-4516-ad48-e8ffa90c5058,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.598 244018 DEBUG nova.network.os_vif_util [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "address": "fa:16:3e:09:47:8e", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf24dea9b-1f", "ovs_interfaceid": "f24dea9b-1f59-4ca6-90f7-3f97ea1b6740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.599 244018 DEBUG nova.network.os_vif_util [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.599 244018 DEBUG os_vif [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.601 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf24dea9b-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:24:46 compute-0 systemd[1]: libpod-conmon-cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21.scope: Deactivated successfully.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.607 244018 INFO os_vif [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:47:8e,bridge_name='br-int',has_traffic_filtering=True,id=f24dea9b-1f59-4ca6-90f7-3f97ea1b6740,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf24dea9b-1f')
Feb 25 12:24:46 compute-0 podman[283366]: 2026-02-25 12:24:46.646396275 +0000 UTC m=+0.041895917 container remove cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[342fdd3d-37ee-4ca3-a372-442ee56477b2]: (4, ('Wed Feb 25 12:24:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21)\ncf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21\nWed Feb 25 12:24:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (cf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21)\ncf8c3b659c082c01fd8820bd7d2bd9b69803f6d46d90655e9dccc8c7cdc34a21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8953a9-54e2-475d-82e6-dcf489e25d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.653 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:24:46 compute-0 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96855dd3-af7b-4f9d-bd8b-8d0973d998d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f1e625-a328-42ac-a611-b70acd67c082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.678 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31dddb07-30d6-4c9e-a2bd-c2f25ae266de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.694 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0605e244-edf5-40a5-a13d-934ca17da3cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424288, 'reachable_time': 30216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283401, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.696 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:24:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:46.697 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f300dc-6029-4f0c-91e0-29fc0f03864d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:24:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.874 244018 INFO nova.virt.libvirt.driver [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Deleting instance files /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058_del
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.874 244018 INFO nova.virt.libvirt.driver [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Deletion of /var/lib/nova/instances/36c98024-f0f0-4516-ad48-e8ffa90c5058_del complete
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.945 244018 INFO nova.compute.manager [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.946 244018 DEBUG oslo.service.loopingcall [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.947 244018 DEBUG nova.compute.manager [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:24:46 compute-0 nova_compute[244014]: 2026-02-25 12:24:46.947 244018 DEBUG nova.network.neutron [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.190 244018 DEBUG nova.network.neutron [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.224 244018 INFO nova.compute.manager [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Took 1.03 seconds to deallocate network for instance.
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.285 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.285 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.349 244018 DEBUG nova.compute.manager [req-01c90655-bdfc-4c31-9d53-fb030d8b3d50 req-97a06588-8e25-410e-9f1d-bca570c60ff3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-deleted-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:47 compute-0 ceph-mon[76335]: pgmap v1224: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.370 244018 DEBUG oslo_concurrency.processutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:24:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131801508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:24:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:24:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131801508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:24:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 185 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 375 op/s
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.865 244018 DEBUG nova.network.neutron [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748358731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.910 244018 DEBUG oslo_concurrency.processutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
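[editor's note] The resource tracker refreshes RBD pool capacity by shelling out to the ceph CLI via oslo.concurrency, which is the "Running cmd (subprocess): ceph df ..." pair above (0.539s round trip, audited by ceph-mon as client.openstack). The equivalent call, assuming the same client id and conf path:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # cluster-wide totals; per-pool figures live under stats['pools']
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])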
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.916 244018 DEBUG nova.compute.provider_tree [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.936 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.937 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.937 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.937 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.937 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] No waiting events found dispatching network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.938 244018 WARNING nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Received unexpected event network-vif-plugged-d333d3cb-5840-49b2-a761-04da902fae80 for instance with vm_state deleted and task_state None.
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.938 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-unplugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.938 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.938 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.938 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.939 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] No waiting events found dispatching network-vif-unplugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.939 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-unplugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.939 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.939 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.940 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.940 244018 DEBUG oslo_concurrency.lockutils [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.940 244018 DEBUG nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] No waiting events found dispatching network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.940 244018 WARNING nova.compute.manager [req-864c40be-6ea3-4565-96d1-826a0017b1d1 req-e6470928-3c5b-4ff6-8d62-8daea1c89181 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received unexpected event network-vif-plugged-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 for instance with vm_state active and task_state deleting.
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.950 244018 DEBUG nova.scheduler.client.report [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.956 244018 INFO nova.compute.manager [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Took 1.01 seconds to deallocate network for instance.
Feb 25 12:24:47 compute-0 nova_compute[244014]: 2026-02-25 12:24:47.987 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.034 244018 INFO nova.scheduler.client.report [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance c9d55f33-ecc8-4b0b-a335-8052fcdfaecb
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.037 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022273.0364687, 8249bea8-a1c0-42f5-a4d1-12b74e20bccd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.038 244018 INFO nova.compute.manager [-] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] VM Stopped (Lifecycle Event)
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.047 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.048 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.078 244018 DEBUG nova.compute.manager [None req-70ac7af1-5207-4cfb-9684-94df2b0e24df - - - - - -] [instance: 8249bea8-a1c0-42f5-a4d1-12b74e20bccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.100 244018 DEBUG oslo_concurrency.processutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.184 244018 DEBUG oslo_concurrency.lockutils [None req-a303b496-8259-418d-9a76-e232aef7bb78 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "c9d55f33-ecc8-4b0b-a335-8052fcdfaecb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3131801508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:24:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3131801508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:24:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/748358731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697953749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.659 244018 DEBUG oslo_concurrency.processutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.666 244018 DEBUG nova.compute.provider_tree [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.684 244018 DEBUG nova.scheduler.client.report [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.712 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.744 244018 INFO nova.scheduler.client.report [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance 36c98024-f0f0-4516-ad48-e8ffa90c5058
Feb 25 12:24:48 compute-0 nova_compute[244014]: 2026-02-25 12:24:48.820 244018 DEBUG oslo_concurrency.lockutils [None req-cd45f60c-f12a-47ec-9145-1a10f70d4e40 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "36c98024-f0f0-4516-ad48-e8ffa90c5058" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:49 compute-0 ceph-mon[76335]: pgmap v1225: 305 pgs: 305 active+clean; 185 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 375 op/s
Feb 25 12:24:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/697953749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:49 compute-0 nova_compute[244014]: 2026-02-25 12:24:49.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Feb 25 12:24:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Feb 25 12:24:49 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Feb 25 12:24:49 compute-0 nova_compute[244014]: 2026-02-25 12:24:49.752 244018 DEBUG nova.compute.manager [req-7c15a063-2e3c-4680-90c3-00cd76abd6d3 req-3065da44-ffe9-46b7-b7c6-516db853849e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Received event network-vif-deleted-f24dea9b-1f59-4ca6-90f7-3f97ea1b6740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 185 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.072 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022275.0685842, 42abb4b3-2144-4c11-a04c-7641d725bcde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.073 244018 INFO nova.compute.manager [-] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] VM Stopped (Lifecycle Event)
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.096 244018 DEBUG nova.compute.manager [None req-a3195cfd-8ce9-4c3e-8e71-04f6b1a75a21 - - - - - -] [instance: 42abb4b3-2144-4c11-a04c-7641d725bcde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.140 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022275.1393907, 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.140 244018 INFO nova.compute.manager [-] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] VM Stopped (Lifecycle Event)
Feb 25 12:24:50 compute-0 nova_compute[244014]: 2026-02-25 12:24:50.165 244018 DEBUG nova.compute.manager [None req-64cbd5f7-18ec-4188-a7a8-cee59388400c - - - - - -] [instance: 19e66139-1e99-432c-bbd6-fc3a6d3e1b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:24:50 compute-0 ceph-mon[76335]: osdmap e177: 3 total, 3 up, 3 in
Feb 25 12:24:50 compute-0 ceph-mon[76335]: pgmap v1227: 305 pgs: 305 active+clean; 185 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Feb 25 12:24:51 compute-0 nova_compute[244014]: 2026-02-25 12:24:51.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 153 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 206 op/s
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.138 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.139 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.167 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.244 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.245 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.256 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.257 244018 INFO nova.compute.claims [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.384 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:24:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924301816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.943 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:52 compute-0 ceph-mon[76335]: pgmap v1228: 305 pgs: 305 active+clean; 153 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 206 op/s
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.950 244018 DEBUG nova.compute.provider_tree [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:24:52 compute-0 nova_compute[244014]: 2026-02-25 12:24:52.987 244018 DEBUG nova.scheduler.client.report [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.106 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.107 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.187 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.187 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.208 244018 INFO nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.233 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.349 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.351 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.352 244018 INFO nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Creating image(s)
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.387 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.418 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.445 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.450 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.529 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.530 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.530 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.531 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.564 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.567 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.816 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.858 244018 DEBUG nova.policy [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:24:53 compute-0 nova_compute[244014]: 2026-02-25 12:24:53.906 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:24:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3924301816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.021 244018 DEBUG nova.objects.instance [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.045 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.046 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Ensure instance console log exists: /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.046 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.047 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.047 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:54 compute-0 nova_compute[244014]: 2026-02-25 12:24:54.938 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Successfully created port: fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:24:54 compute-0 ceph-mon[76335]: pgmap v1229: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Feb 25 12:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:24:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:24:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Feb 25 12:24:56 compute-0 nova_compute[244014]: 2026-02-25 12:24:56.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:56 compute-0 ceph-mon[76335]: pgmap v1230: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Feb 25 12:24:57 compute-0 nova_compute[244014]: 2026-02-25 12:24:57.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:57 compute-0 podman[283639]: 2026-02-25 12:24:57.736585251 +0000 UTC m=+0.079536552 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:24:57 compute-0 podman[283640]: 2026-02-25 12:24:57.767680931 +0000 UTC m=+0.106398413 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller)
Feb 25 12:24:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 25 12:24:57 compute-0 nova_compute[244014]: 2026-02-25 12:24:57.800 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Successfully updated port: fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:24:57 compute-0 nova_compute[244014]: 2026-02-25 12:24:57.815 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:57 compute-0 nova_compute[244014]: 2026-02-25 12:24:57.816 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:57 compute-0 nova_compute[244014]: 2026-02-25 12:24:57.816 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:24:58 compute-0 nova_compute[244014]: 2026-02-25 12:24:58.037 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:24:58 compute-0 nova_compute[244014]: 2026-02-25 12:24:58.130 244018 DEBUG nova.compute.manager [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-changed-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:24:58 compute-0 nova_compute[244014]: 2026-02-25 12:24:58.131 244018 DEBUG nova.compute.manager [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Refreshing instance network info cache due to event network-changed-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:24:58 compute-0 nova_compute[244014]: 2026-02-25 12:24:58.131 244018 DEBUG oslo_concurrency.lockutils [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:24:58 compute-0 ceph-mon[76335]: pgmap v1231: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.518 244018 DEBUG nova.network.neutron [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Updating instance_info_cache with network_info: [{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.545 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.545 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance network_info: |[{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.546 244018 DEBUG oslo_concurrency.lockutils [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.546 244018 DEBUG nova.network.neutron [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Refreshing network info cache for port fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.551 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Start _get_guest_xml network_info=[{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.559 244018 WARNING nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.569 244018 DEBUG nova.virt.libvirt.host [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.570 244018 DEBUG nova.virt.libvirt.host [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.576 244018 DEBUG nova.virt.libvirt.host [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.576 244018 DEBUG nova.virt.libvirt.host [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.577 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.577 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.578 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.579 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.579 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.580 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.580 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.581 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.581 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.581 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.582 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.582 244018 DEBUG nova.virt.hardware [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
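The topology walk lands on 1:1:1 because neither the flavor nor the image supplies any hw:cpu_sockets/hw:cpu_cores/hw:cpu_threads hints (preferences stay 0:0:0, limits at the 65536 ceiling), so the single vCPU collapses to one socket, one core, one thread. For illustration only, such hints are set as flavor extra specs; the flavor name below is hypothetical and the property only affects instances booted after the change:

    openstack flavor set m1.large --property hw:cpu_sockets=2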
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.588 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:24:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:24:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.909 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:24:59 compute-0 nova_compute[244014]: 2026-02-25 12:24:59.910 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
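_heal_instance_info_cache is a periodic task that refreshes the network info cache of at most one instance per run; here it skips b5cb3a5a-... because the instance is still building, then finds nothing left to heal. Its cadence is governed by nova's heal_instance_info_cache_interval option (60 seconds by default); a quick way to see whether this deployment overrides it, assuming the stock config location:

    grep -r heal_instance_info_cache_interval /etc/nova/ || echo 'not set, using the 60s default'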
Feb 25 12:25:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761519420' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.140 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
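Before it can compose RBD disk definitions, nova's storage driver shells out to ceph mon dump (the half-second subprocess above) to discover the monitor address that later appears in the <host name=... port="6789"/> elements of the guest XML. The same call can be reproduced by hand with the openstack cephx identity; piping through jq is only for readability and is not something nova does:

    ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf | jq '.mons'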
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.171 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.176 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.247 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.248 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.277 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.278 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.281 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.303 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.386 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.387 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.389 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.398 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.398 244018 INFO nova.compute.claims [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Claim successful on node compute-0.ctlplane.example.com
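With no NUMA topology in play (the "Require both a host and instance NUMA topology" line means neither side defines one), the resource tracker simply claims VCPU, RAM and disk for 22450f11-... against this node. Once such a claim lands, the matching consumer allocation is visible from placement; assuming the osc-placement CLI plugin is installed:

    openstack resource provider allocation show 22450f11-d042-48c5-941e-fd544e58d84a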
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.479 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022285.477825, c9d55f33-ecc8-4b0b-a335-8052fcdfaecb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.479 244018 INFO nova.compute.manager [-] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] VM Stopped (Lifecycle Event)
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.523 244018 DEBUG nova.compute.manager [None req-d5dd7f62-4081-40f9-8a17-e3485cc1f6b4 - - - - - -] [instance: c9d55f33-ecc8-4b0b-a335-8052fcdfaecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.567 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448294717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.687 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.689 244018 DEBUG nova.virt.libvirt.vif [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-818536104',display_name='tempest-DeleteServersTestJSON-server-818536104',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-818536104',id=44,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-f8ey9dlk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:53Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.690 244018 DEBUG nova.network.os_vif_util [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.691 244018 DEBUG nova.network.os_vif_util [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.693 244018 DEBUG nova.objects.instance [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.714 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <uuid>b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6</uuid>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <name>instance-0000002c</name>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersTestJSON-server-818536104</nova:name>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:24:59</nova:creationTime>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <nova:port uuid="fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5">
Feb 25 12:25:00 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="serial">b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="uuid">b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk">
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config">
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:fc:58:40"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <target dev="tapfcfcdd66-6c"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/console.log" append="off"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
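Two things stand out in the finished domain: storage is entirely Ceph-backed (both the vda root disk and the sda config-drive cdrom are type="network" RBD sources authenticating as the openstack cephx user), and the NIC is a plain ethernet tap device that os-vif wires into br-int in the next step rather than a libvirt-managed bridge. Once libvirt defines the guest, the same document can be read back on compute-0 (domain name from the <name> element above):

    virsh dumpxml instance-0000002c   # live definition as libvirt stores it
    virsh secret-list                 # the cephx secret 8ac33163-... referenced by <auth>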
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.715 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Preparing to wait for external event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.715 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.715 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.716 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.717 244018 DEBUG nova.virt.libvirt.vif [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-818536104',display_name='tempest-DeleteServersTestJSON-server-818536104',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-818536104',id=44,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-f8ey9dlk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:24:53Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.717 244018 DEBUG nova.network.os_vif_util [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.718 244018 DEBUG nova.network.os_vif_util [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.719 244018 DEBUG os_vif [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.720 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.720 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.723 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcfcdd66-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.724 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcfcdd66-6c, col_values=(('external_ids', {'iface-id': 'fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:58:40', 'vm-uuid': 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:00 compute-0 NetworkManager[49836]: <info>  [1772022300.7270] manager: (tapfcfcdd66-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.733 244018 INFO os_vif [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c')
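The plug is two ovsdb transactions: an idempotent AddBridgeCommand for br-int ("Transaction caused no change") followed by AddPortCommand plus a DbSetCommand that stamps the new Interface row with the neutron port id, MAC and instance UUID; ovn-controller keys on iface-id to bind the logical switch port. A hand-rolled equivalent with the ovs-vsctl CLI, using the same values as the transaction above (nova itself talks to ovsdb via the Python IDL, not this CLI):

    ovs-vsctl --may-exist add-port br-int tapfcfcdd66-6c -- \
      set Interface tapfcfcdd66-6c \
        external_ids:iface-id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 \
        external_ids:iface-status=active \
        external_ids:attached-mac="fa:16:3e:fc:58:40" \
        external_ids:vm-uuid=b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6
    ovs-vsctl get Interface tapfcfcdd66-6c external_ids   # verify the row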
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.788 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.789 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.789 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:fc:58:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.790 244018 INFO nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Using config drive
Feb 25 12:25:00 compute-0 nova_compute[244014]: 2026-02-25 12:25:00.816 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:00 compute-0 ceph-mon[76335]: pgmap v1232: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 25 12:25:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3761519420' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1448294717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919823302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.190 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.196 244018 DEBUG nova.compute.provider_tree [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.216 244018 DEBUG nova.scheduler.client.report [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
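Placement computes schedulable capacity as (total - reserved) * allocation_ratio, so this inventory advertises 32 VCPU (8 * 4.0), 7167 MB of RAM ((7679 - 512) * 1.0) and roughly 52 GB of disk ((59 - 1) * 0.9) for provider cb4dae98-2ac3-4218-9445-2320139e12ad. Assuming the osc-placement plugin, the same numbers are queryable directly:

    openstack resource provider inventory list cb4dae98-2ac3-4218-9445-2320139e12ad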
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.242 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.243 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.249 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.258 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.259 244018 INFO nova.compute.claims [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.320 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.321 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.349 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.372 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.441 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.483 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.485 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.486 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Creating image(s)
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.512 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.538 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.562 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.566 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
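Note the wrapper: oslo_concurrency.prlimit re-execs the command with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap before qemu-img ever parses the image, a guard against crafted images exhausting the host, while --force-share permits reading a base image that may be open elsewhere. The call is reproducible verbatim on the host:

    /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- \
      env LC_ALL=C LANG=C \
      qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 \
        --force-share --output=json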
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.596 244018 INFO nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Creating config drive at /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.604 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptwd55gvb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
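Note: the mkisofs command above packs Nova's staged metadata directory (the per-boot /tmp/tmptwd55gvb path) into an ISO9660 config drive labelled config-2, the volume label cloud-init probes for. A hedged equivalent via Python's subprocess; '/tmp/metadata' is a placeholder for the generated temp dir:

    # Sketch only: build a config-drive ISO the way the logged
    # command does.
    import subprocess

    subprocess.run(
        ['mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
         '-V', 'config-2',      # label cloud-init searches for
         '/tmp/metadata'],      # placeholder staging directory
        check=True,
    )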
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.633 244018 DEBUG nova.network.neutron [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Updated VIF entry in instance network info cache for port fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.635 244018 DEBUG nova.network.neutron [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Updating instance_info_cache with network_info: [{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.638 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022286.5708232, 36c98024-f0f0-4516-ad48-e8ffa90c5058 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.639 244018 INFO nova.compute.manager [-] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] VM Stopped (Lifecycle Event)
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.646 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.647 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.648 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.649 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.683 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.688 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 22450f11-d042-48c5-941e-fd544e58d84a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.724 244018 DEBUG nova.policy [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
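Note: this "Policy check ... failed" line is expected rather than an error: network:attach_external_network defaults to admin-only, and the requesting token carries only the reader and member roles, so Nova simply excludes external networks when allocating ports. A small oslo.policy sketch of the same decision, with the admin-only default assumed for illustration rather than read from Nova's policy files:

    # Sketch only: evaluate an admin-only rule against member/reader
    # credentials, mirroring the failed check logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'context_is_admin', 'role:admin'))
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'rule:context_is_admin'))

    creds = {'roles': ['reader', 'member']}   # as in the logged credentials
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False: the instance gets its port on the tenant network instead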
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.731 244018 DEBUG oslo_concurrency.lockutils [req-a680c191-84a0-4af0-a769-acea467f1e69 req-305d81cb-b4a1-4c87-945f-554b6e138580 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.733 244018 DEBUG nova.compute.manager [None req-c32dc427-c31a-4925-bf25-dd0cd453fb6b - - - - - -] [instance: 36c98024-f0f0-4516-ad48-e8ffa90c5058] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.738 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptwd55gvb" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.775 244018 DEBUG nova.storage.rbd_utils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:01 compute-0 nova_compute[244014]: 2026-02-25 12:25:01.782 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/919823302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043044653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.014 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 22450f11-d042-48c5-941e-fd544e58d84a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.052 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
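Note: the ceph df call that just returned is how Nova's RBD image backend measures pool capacity; the DISK_GB inventory reported to placement a moment later (59 GiB total at 12:25:02.200) derives from it. A short sketch of running and parsing the same command, with the same --id/--conf as the log and the standard ceph df JSON keys:

    # Sketch only: fetch cluster capacity the way Nova's rbd driver does.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])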
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.053 244018 DEBUG oslo_concurrency.processutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.054 244018 INFO nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Deleting local config drive /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6/disk.config because it was imported into RBD.
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.104 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
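Note: the import-then-resize pair above is the RBD backend's cold path for an image not yet in the vms pool: the cached flat base file is imported as a format-2 RBD image named <uuid>_disk, then grown to the flavor's root-disk size (1073741824 bytes, i.e. 1 GiB). Nova does the resize through the librbd Python bindings; the CLI sketch below is an equivalent, with names copied from the log:

    # Sketch only: import the cached base image and grow it to 1 GiB.
    import subprocess

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    DISK = '22450f11-d042-48c5-941e-fd544e58d84a_disk'
    CEPH = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', BASE, DISK,
                    '--image-format=2', *CEPH], check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1024',
                    DISK, *CEPH], check=True)   # --size defaults to MiB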
Feb 25 12:25:02 compute-0 kernel: tapfcfcdd66-6c: entered promiscuous mode
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.1095] manager: (tapfcfcdd66-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Feb 25 12:25:02 compute-0 ovn_controller[147040]: 2026-02-25T12:25:02Z|00379|binding|INFO|Claiming lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for this chassis.
Feb 25 12:25:02 compute-0 ovn_controller[147040]: 2026-02-25T12:25:02Z|00380|binding|INFO|fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5: Claiming fa:16:3e:fc:58:40 10.100.0.9
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.127 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:58:40 10.100.0.9'], port_security=['fa:16:3e:fc:58:40 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.130 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.132 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:25:02 compute-0 systemd-machined[210048]: New machine qemu-49-instance-0000002c.
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.145 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0710efef-074b-4681-ac9c-b27b56d32186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.146 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
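Note: the privsep daemon is doing the namespace plumbing here: a veth pair whose -11 end lives inside the ovnmeta- namespace (where haproxy will later bind 169.254.169.254) and whose -10 end is plugged into br-int a few lines below. A hedged pyroute2 approximation of that step, since pyroute2 is what neutron's ip_lib drives through privsep; requires root, and the interface and namespace names are copied from the log:

    # Sketch only: create the metadata veth pair and move one end into
    # the ovnmeta- namespace, approximating provision_datapath().
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99'
    netns.create(NS)

    ipr = IPRoute()
    ipr.link('add', ifname='tapa0d45b1c-10', kind='veth',
             peer='tapa0d45b1c-11')
    idx = ipr.link_lookup(ifname='tapa0d45b1c-11')[0]
    ipr.link('set', index=idx, net_ns_fd=NS)   # -11 end into the namespace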
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.148 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.148 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[549beb3a-25e7-47ee-8f9b-ebf52c7a8863]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84fded39-f726-4735-a7c7-bcdce6baddd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.163 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a28451-01d8-49b1-a1da-89e526a53576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Feb 25 12:25:02 compute-0 ovn_controller[147040]: 2026-02-25T12:25:02Z|00381|binding|INFO|Setting lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 ovn-installed in OVS
Feb 25 12:25:02 compute-0 ovn_controller[147040]: 2026-02-25T12:25:02Z|00382|binding|INFO|Setting lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 up in Southbound
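Note: the ovn_controller binding messages above trace the full port lifecycle: claim the logical port for this chassis, mark it ovn-installed in OVS, then flip it up in the Southbound DB, which is what ultimately triggers Neutron's network-vif-plugged notification back to Nova. The binding can be confirmed directly from the Southbound DB; a sketch assuming ovn-sbctl is available on the host, using the port UUID from the log:

    # Sketch only: show which chassis holds the binding and its up state.
    import subprocess

    PORT = 'fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5'
    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    f'logical_port={PORT}'], check=True)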
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.176 244018 DEBUG nova.compute.provider_tree [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 systemd-udevd[284013]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.179 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21c2dfa0-075b-4eb7-b714-b0e45a4bfe30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.1991] device (tapfcfcdd66-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.1998] device (tapfcfcdd66-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.200 244018 DEBUG nova.scheduler.client.report [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.217 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[eb29a0a0-8050-410b-a4ee-03be2c2a2e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.2244] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/171)
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.224 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8277d807-6075-4b51-9794-1160a8fd6d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.259 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.260 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.263 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d4282705-86bb-4a1b-a0b1-80366b3f5f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.267 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f81357-bd9d-406d-b6ec-fd85fa972c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.272 244018 DEBUG nova.objects.instance [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid 22450f11-d042-48c5-941e-fd544e58d84a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.2918] device (tapa0d45b1c-10): carrier: link connected
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.296 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7f93b47d-9de1-44ca-9eeb-42f18425c7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.299 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.300 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Ensure instance console log exists: /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.300 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.301 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.301 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4dac5f-77eb-4368-860c-bddf34b9348f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427169, 'reachable_time': 43366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284061, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f219474d-d323-4233-b377-4a3ea17db376]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427169, 'tstamp': 427169}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284062, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.329 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.330 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.343 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb78f2bc-436d-4264-bf89-ff21cadf7e86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427169, 'reachable_time': 43366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284063, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.357 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.361 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Successfully created port: 4f6b67d0-056b-4277-a604-6221f16e30b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.371 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[339a01ce-021c-45dc-a6ff-c5e8c251f08c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.387 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.426 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8635b30c-9ebd-4b61-9ee9-dd3ee4dc1356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.428 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.428 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.429 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 NetworkManager[49836]: <info>  [1772022302.4321] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Feb 25 12:25:02 compute-0 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.437 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 ovn_controller[147040]: 2026-02-25T12:25:02Z|00383|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.440 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.441 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c22282ef-ee1e-4c2a-908b-ff53510c674c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.442 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
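Note: the haproxy configuration just dumped binds the link-local metadata address on port 80 inside the ovnmeta- namespace and relays requests to the agent's Unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve which network a request came from. Once the proxy is running (see the container start below), a quick reachability probe can be made from the namespace; the agent identifies instances by source IP, so a host-side probe only proves the listener answers:

    # Sketch only: poke the metadata listener inside the namespace.
    import subprocess

    NS = 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99'
    subprocess.run(
        ['ip', 'netns', 'exec', NS, 'curl', '-s', '-o', '/dev/null',
         '-w', '%{http_code}\n', 'http://169.254.169.254/'],
        check=True,
    )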
Feb 25 12:25:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:02.442 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.493 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.495 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.496 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Creating image(s)
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.527 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.559 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.589 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.598 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.627 244018 DEBUG nova.policy [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.675 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.676 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.676 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.677 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.711 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.720 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 1238794a-063b-4ac0-a7d9-3590353e3207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:02 compute-0 podman[284207]: 2026-02-25 12:25:02.813198015 +0000 UTC m=+0.055472461 container create e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.860 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022302.8592353, b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.862 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] VM Started (Lifecycle Event)
Feb 25 12:25:02 compute-0 podman[284207]: 2026-02-25 12:25:02.783524565 +0000 UTC m=+0.025799011 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:25:02 compute-0 systemd[1]: Started libpod-conmon-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148.scope.
Feb 25 12:25:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd73bb85b3ddca49f944375bbbcd5f0c148a4c567244e730f0a7b2967982d80b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.924 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.930 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022302.8602152, b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.930 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] VM Paused (Lifecycle Event)
Feb 25 12:25:02 compute-0 podman[284207]: 2026-02-25 12:25:02.936554136 +0000 UTC m=+0.178828612 container init e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:25:02 compute-0 podman[284207]: 2026-02-25 12:25:02.940906019 +0000 UTC m=+0.183180455 container start e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 12:25:02 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : New worker (284250) forked
Feb 25 12:25:02 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : Loading success.
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.974 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
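Note: the state integers in this line decode via Nova's power-state constants: the database still says 0 (NOSTATE, the instance has never run) while libvirt reports 3 (PAUSED), because Nova starts the domain paused until VIF plugging completes and then resumes it; the "pending task (spawning). Skip." line a few entries below is the sync declining to act on that transient. A tiny decoder mirroring nova.compute.power_state:

    # Sketch only: decode the power_state integers seen in the log.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    print(POWER_STATE[0], '->', POWER_STATE[3])   # NOSTATE -> PAUSED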
Feb 25 12:25:02 compute-0 nova_compute[244014]: 2026-02-25 12:25:02.976 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 1238794a-063b-4ac0-a7d9-3590353e3207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:03 compute-0 ceph-mon[76335]: pgmap v1233: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Feb 25 12:25:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2043044653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.026 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] During sync_power_state the instance has a pending task (spawning). Skip.
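The "Synchronizing instance power state" / "pending task (spawning). Skip." pair above is nova's lifecycle-event reconciliation: the DB still records power_state 0 (NOSTATE) while the hypervisor reports 3 (PAUSED, normal mid-spawn), and the sync is skipped because a task is in flight. A minimal sketch of that decision — the constants match nova.compute.power_state, but the function is an illustration, not nova's actual ComputeManager method:

```python
# Hedged sketch of the sync_power_state skip seen above.
# Constants match nova.compute.power_state; logic is simplified.
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Decide whether to reconcile the DB record with the hypervisor."""
    if task_state is not None:          # e.g. 'spawning'
        # An operation is in flight; the transient hypervisor state
        # (PAUSED during spawn) must not clobber the DB record.
        return "skip: pending task (%s)" % task_state
    if db_power_state != vm_power_state:
        return "update DB: %d -> %d" % (db_power_state, vm_power_state)
    return "in sync"

print(sync_power_state(NOSTATE, PAUSED, "spawning"))  # skip, as logged
```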
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.030 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
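The rbd import above plus this resize is nova's image-backed Ceph flow: the cached base image is imported into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root disk (1073741824 bytes). A hedged CLI equivalent of the two steps — the import flags are copied from the log, but nova performs the resize through librbd, so the rbd resize call here is only the shell counterpart:

```python
import subprocess

base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
image = "1238794a-063b-4ac0-a7d9-3590353e3207_disk"
auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Step 1: exactly the import command logged above.
subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                "--image-format=2", *auth], check=True)

# Step 2: grow to 1073741824 bytes; rbd's --size defaults to MiB,
# so 1024 MiB == 1 GiB, matching the flavor's root_gb=1.
subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                image, *auth], check=True)
```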
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.097 244018 DEBUG nova.objects.instance [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid 1238794a-063b-4ac0-a7d9-3590353e3207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.116 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.116 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Ensure instance console log exists: /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.117 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.117 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.117 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.232 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Successfully updated port: 4f6b67d0-056b-4277-a604-6221f16e30b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.327 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.327 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.328 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.585 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.724 244018 DEBUG nova.compute.manager [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-changed-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.724 244018 DEBUG nova.compute.manager [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Refreshing instance network info cache due to event network-changed-4f6b67d0-056b-4277-a604-6221f16e30b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.725 244018 DEBUG oslo_concurrency.lockutils [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 219 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Feb 25 12:25:03 compute-0 nova_compute[244014]: 2026-02-25 12:25:03.923 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Successfully created port: 9f614955-c92f-41c2-a47e-6d65c378bf82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.482 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Updating instance_info_cache with network_info: [{"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.510 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.511 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance network_info: |[{"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
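The network_info structure logged above is plain JSON once the surrounding |[...]| markers are stripped. A small sketch of pulling the port, MAC, and fixed IPs back out of such a blob — the field names are taken directly from the entry above; the helper itself is illustrative:

```python
import json

# network_info as logged above, truncated to the fields used here.
blob = '''[{"id": "4f6b67d0-056b-4277-a604-6221f16e30b2",
            "address": "fa:16:3e:8b:cd:f8",
            "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                     "ips": [{"address": "10.100.0.12",
                                              "type": "fixed"}]}]}}]'''

def fixed_ips(network_info):
    """Yield (port_id, mac, ip) for every fixed IP in a network_info list."""
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    yield vif["id"], vif["address"], ip["address"]

for port, mac, ip in fixed_ips(json.loads(blob)):
    print(port, mac, ip)   # 4f6b67d0-... fa:16:3e:8b:cd:f8 10.100.0.12
```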
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.512 244018 DEBUG oslo_concurrency.lockutils [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.513 244018 DEBUG nova.network.neutron [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Refreshing network info cache for port 4f6b67d0-056b-4277-a604-6221f16e30b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.520 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start _get_guest_xml network_info=[{"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.527 244018 WARNING nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.538 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.538 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.544 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.545 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.545 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.546 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.546 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.547 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.547 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.547 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.548 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.548 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.548 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.549 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.551 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.551 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.557 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:04 compute-0 nova_compute[244014]: 2026-02-25 12:25:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:05 compute-0 ceph-mon[76335]: pgmap v1234: 305 pgs: 305 active+clean; 219 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Feb 25 12:25:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950566152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.101 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
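The repeated "ceph mon dump --format=json" calls above are how the RBD driver discovers monitor addresses for the disk XML it generates later (the <host name="192.168.122.100" port="6789"/> elements further down). A hedged sketch of the same lookup outside nova, using only the flags that appear in the log; the JSON field names are the usual monmap keys, guarded with .get in case a given Ceph release differs:

```python
import json
import subprocess

# Same command line as in the log; ids and paths are this deployment's.
cmd = ["ceph", "mon", "dump", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
mon_map = json.loads(out)

# Print each monitor's name and address from the monmap.
for mon in mon_map.get("mons", []):
    print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))
```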
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.130 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.135 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.563 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Successfully updated port: 9f614955-c92f-41c2-a47e-6d65c378bf82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.586 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.586 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.587 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511503570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.691 244018 DEBUG nova.compute.manager [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-changed-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.692 244018 DEBUG nova.compute.manager [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Refreshing instance network info cache due to event network-changed-9f614955-c92f-41c2-a47e-6d65c378bf82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.693 244018 DEBUG oslo_concurrency.lockutils [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.697 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.699 244018 DEBUG nova.virt.libvirt.vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-1',id=45,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:01Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=22450f11-d042-48c5-941e-fd544e58d84a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.699 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.700 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.702 244018 DEBUG nova.objects.instance [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid 22450f11-d042-48c5-941e-fd544e58d84a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.718 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <uuid>22450f11-d042-48c5-941e-fd544e58d84a</uuid>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <name>instance-0000002d</name>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-2025017343-1</nova:name>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:04</nova:creationTime>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <nova:port uuid="4f6b67d0-056b-4277-a604-6221f16e30b2">
Feb 25 12:25:05 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="serial">22450f11-d042-48c5-941e-fd544e58d84a</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="uuid">22450f11-d042-48c5-941e-fd544e58d84a</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/22450f11-d042-48c5-941e-fd544e58d84a_disk">
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/22450f11-d042-48c5-941e-fd544e58d84a_disk.config">
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8b:cd:f8"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <target dev="tap4f6b67d0-05"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/console.log" append="off"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:05 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:05 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:05 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:05 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:05 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
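The domain definition emitted between "Start _get_guest_xml" and the line above is ordinary libvirt XML plus a nova metadata namespace. A sketch of reading the key identifiers back out with the standard library, assuming the XML has been extracted from the log into a string domain_xml; the summarize helper is illustrative, not a nova API:

```python
import xml.etree.ElementTree as ET

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

def summarize(domain_xml: str) -> dict:
    """Pull the identifiers nova logs most often out of a guest XML."""
    root = ET.fromstring(domain_xml)
    return {
        "uuid": root.findtext("uuid"),
        "name": root.findtext("name"),
        "memory_kib": int(root.findtext("memory")),   # 131072 KiB = 128 MiB
        "vcpus": int(root.findtext("vcpu")),
        "flavor": root.find(".//nova:flavor", NOVA_NS).get("name"),
        # rbd sources carry pool/image in the 'name' attribute, e.g.
        # vms/22450f11-d042-48c5-941e-fd544e58d84a_disk
        "disks": [d.find("source").get("name")
                  for d in root.findall(".//disk")],
    }
```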
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.719 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Preparing to wait for external event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.720 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.720 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.720 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.722 244018 DEBUG nova.virt.libvirt.vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-1',id=45,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:01Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=22450f11-d042-48c5-941e-fd544e58d84a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.722 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.724 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.724 244018 DEBUG os_vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.726 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.727 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.732 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f6b67d0-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.733 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f6b67d0-05, col_values=(('external_ids', {'iface-id': '4f6b67d0-056b-4277-a604-6221f16e30b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:cd:f8', 'vm-uuid': '22450f11-d042-48c5-941e-fd544e58d84a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:05 compute-0 NetworkManager[49836]: <info>  [1772022305.7364] manager: (tap4f6b67d0-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.744 244018 INFO os_vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05')
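The two ovsdbapp transactions above (AddPortCommand followed by DbSetCommand on the Interface row) are what os-vif performs over OVSDB to plug the tap into br-int. A hedged sketch of the rough ovs-vsctl equivalent, with every value copied from the logged transaction; --may-exist mirrors may_exist=True:

```python
import subprocess

bridge, port = "br-int", "tap4f6b67d0-05"
external_ids = {
    "iface-id": "4f6b67d0-056b-4277-a604-6221f16e30b2",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:8b:cd:f8",
    "vm-uuid": "22450f11-d042-48c5-941e-fd544e58d84a",
}

# add-port plus the 'set Interface' clause run as one ovs-vsctl
# invocation, mirroring the single OVSDB transaction in the log.
cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
       "set", "Interface", port]
cmd += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
subprocess.run(cmd, check=True)
```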
Feb 25 12:25:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 219 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.812 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.813 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.813 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:8b:cd:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.814 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Using config drive
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.845 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.853 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:05 compute-0 nova_compute[244014]: 2026-02-25 12:25:05.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2950566152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/511503570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.068 244018 DEBUG nova.network.neutron [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Updated VIF entry in instance network info cache for port 4f6b67d0-056b-4277-a604-6221f16e30b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.069 244018 DEBUG nova.network.neutron [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Updating instance_info_cache with network_info: [{"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.094 244018 DEBUG oslo_concurrency.lockutils [req-3abf7854-652a-45e6-a89d-48eb6b8b3576 req-40286374-2f42-48d3-9666-63eca667b296 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-22450f11-d042-48c5-941e-fd544e58d84a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.226 244018 DEBUG nova.compute.manager [req-8a6a862e-13a2-423e-9444-dc15e4b9819c req-72920b8d-c396-4099-98a5-90ad579655c6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.226 244018 DEBUG oslo_concurrency.lockutils [req-8a6a862e-13a2-423e-9444-dc15e4b9819c req-72920b8d-c396-4099-98a5-90ad579655c6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.226 244018 DEBUG oslo_concurrency.lockutils [req-8a6a862e-13a2-423e-9444-dc15e4b9819c req-72920b8d-c396-4099-98a5-90ad579655c6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.226 244018 DEBUG oslo_concurrency.lockutils [req-8a6a862e-13a2-423e-9444-dc15e4b9819c req-72920b8d-c396-4099-98a5-90ad579655c6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.227 244018 DEBUG nova.compute.manager [req-8a6a862e-13a2-423e-9444-dc15e4b9819c req-72920b8d-c396-4099-98a5-90ad579655c6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Processing event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.227 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.231 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022306.23084, b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.231 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] VM Resumed (Lifecycle Event)
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.232 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.236 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance spawned successfully.
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.237 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.251 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.256 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.258 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.259 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.259 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.259 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.260 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.260 244018 DEBUG nova.virt.libvirt.driver [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.287 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.316 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Creating config drive at /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.319 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrsmy6jp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.356 244018 INFO nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Took 13.01 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.357 244018 DEBUG nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.422 244018 INFO nova.compute.manager [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Took 14.19 seconds to build instance.
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.439 244018 DEBUG oslo_concurrency.lockutils [None req-317912f5-d795-4b75-8dd2-5186e81cff5f 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.463 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrsmy6jp" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.499 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 22450f11-d042-48c5-941e-fd544e58d84a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.505 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config 22450f11-d042-48c5-941e-fd544e58d84a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:06 compute-0 nova_compute[244014]: 2026-02-25 12:25:06.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.023 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config 22450f11-d042-48c5-941e-fd544e58d84a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.024 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deleting local config drive /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a/disk.config because it was imported into RBD.
Feb 25 12:25:07 compute-0 kernel: tap4f6b67d0-05: entered promiscuous mode
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.0898] manager: (tap4f6b67d0-05): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.096 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 ceph-mon[76335]: pgmap v1235: 305 pgs: 305 active+clean; 219 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Feb 25 12:25:07 compute-0 ovn_controller[147040]: 2026-02-25T12:25:07Z|00384|binding|INFO|Claiming lport 4f6b67d0-056b-4277-a604-6221f16e30b2 for this chassis.
Feb 25 12:25:07 compute-0 ovn_controller[147040]: 2026-02-25T12:25:07Z|00385|binding|INFO|4f6b67d0-056b-4277-a604-6221f16e30b2: Claiming fa:16:3e:8b:cd:f8 10.100.0.12
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.106 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.115 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:cd:f8 10.100.0.12'], port_security=['fa:16:3e:8b:cd:f8 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22450f11-d042-48c5-941e-fd544e58d84a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f6b67d0-056b-4277-a604-6221f16e30b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.118 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f6b67d0-056b-4277-a604-6221f16e30b2 in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.121 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:07 compute-0 systemd-machined[210048]: New machine qemu-50-instance-0000002d.
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.134 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d933721-f075-4a22-bc8b-05fbdbc05421]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.135 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc97c5b11-71 in ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.136 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc97c5b11-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.137 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88610a09-8156-4e6b-921f-d427550ed123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[998f39ed-89e4-4b7d-99fa-58b7fc418622]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002d.
Feb 25 12:25:07 compute-0 systemd-udevd[284469]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.153 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[83d7abe5-2e45-4f65-b9d2-6ed30ad4ac86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_controller[147040]: 2026-02-25T12:25:07Z|00386|binding|INFO|Setting lport 4f6b67d0-056b-4277-a604-6221f16e30b2 ovn-installed in OVS
Feb 25 12:25:07 compute-0 ovn_controller[147040]: 2026-02-25T12:25:07Z|00387|binding|INFO|Setting lport 4f6b67d0-056b-4277-a604-6221f16e30b2 up in Southbound
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.1734] device (tap4f6b67d0-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.1755] device (tap4f6b67d0-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.198 244018 DEBUG nova.network.neutron [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updating instance_info_cache with network_info: [{"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.199 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42036c9a-ecd2-4846-9c99-cfb028aaec6a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.224 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.225 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance network_info: |[{"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.226 244018 DEBUG oslo_concurrency.lockutils [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.226 244018 DEBUG nova.network.neutron [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Refreshing network info cache for port 9f614955-c92f-41c2-a47e-6d65c378bf82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.231 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start _get_guest_xml network_info=[{"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.242 244018 WARNING nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.249 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[725b02c0-8ab0-4378-899e-07937b8f9af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.2629] manager: (tapc97c5b11-70): new Veth device (/org/freedesktop/NetworkManager/Devices/175)
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adee2693-6dda-4d67-ad8a-8e29960b4e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.275 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.276 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.283 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.284 244018 DEBUG nova.virt.libvirt.host [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.284 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.285 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.285 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.285 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.285 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.286 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.286 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.286 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.286 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.287 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.287 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.287 244018 DEBUG nova.virt.hardware [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.290 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45599d8c-88f6-4c2d-bafb-356d952ef261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.290 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.293 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bb664cfc-2975-4b36-ac41-5655b15abb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.3151] device (tapc97c5b11-70): carrier: link connected
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.318 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[871809b0-6cfe-4cd9-ba6e-65c0ffc9442a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcc1d6f-99cf-4c63-a5ec-78b616f9e501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284501, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.350 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0e7377-d1fe-4e1f-bd6b-c9cdf2eb78bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:610a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427672, 'tstamp': 427672}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284502, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd3547d-f61d-4964-a769-e0f83b876fd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284503, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d31a23a-1892-475f-b38c-8e3c4491477b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59f74612-8967-42ba-9779-df1fc59a3f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.457 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.457 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.457 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:07 compute-0 kernel: tapc97c5b11-70: entered promiscuous mode
Feb 25 12:25:07 compute-0 NetworkManager[49836]: <info>  [1772022307.4614] manager: (tapc97c5b11-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:07 compute-0 ovn_controller[147040]: 2026-02-25T12:25:07Z|00388|binding|INFO|Releasing lport db412aa7-4ad4-4eb8-b61f-dd3e71d5329d from this chassis (sb_readonly=0)
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.470 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[254a3398-dd9d-43e2-ae63-2006002c2a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.471 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:25:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:07.472 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'env', 'PROCESS_TAG=haproxy-c97c5b11-7517-46fe-a6ca-63894792908c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c97c5b11-7517-46fe-a6ca-63894792908c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.613 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022307.6129165, 22450f11-d042-48c5-941e-fd544e58d84a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.614 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] VM Started (Lifecycle Event)
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.651 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.657 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022307.613358, 22450f11-d042-48c5-941e-fd544e58d84a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.658 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] VM Paused (Lifecycle Event)
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.681 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.685 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.720 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] During sync_power_state the instance has a pending task (spawning). Skip.
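The numeric states in the sync message above ("current DB power_state: 0, VM power_state: 3") are nova's libvirt-derived power-state constants from nova/compute/power_state.py. A minimal decoding sketch; the mapping is from that module, while the helper function is illustrative only:

# Constants as defined in nova/compute/power_state.py
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

def describe_sync(db_state, vm_state):
    # Mirrors the logged case: the DB still says NOSTATE while libvirt reports PAUSED
    return f"DB={POWER_STATES[db_state]} hypervisor={POWER_STATES[vm_state]}"

print(describe_sync(0, 3))  # -> DB=NOSTATE hypervisor=PAUSED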
Feb 25 12:25:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 5.3 MiB/s wr, 99 op/s
Feb 25 12:25:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901792126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.836 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.864 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:07 compute-0 podman[284596]: 2026-02-25 12:25:07.86580751 +0000 UTC m=+0.111898228 container create b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:25:07 compute-0 podman[284596]: 2026-02-25 12:25:07.778756707 +0000 UTC m=+0.024847485 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:25:07 compute-0 nova_compute[244014]: 2026-02-25 12:25:07.884 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:07 compute-0 systemd[1]: Started libpod-conmon-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87.scope.
Feb 25 12:25:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8c15061c2508c9a22a1e470245ba40e3bbbed4564cf9ed0210b81c7f79f5d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:07 compute-0 podman[284596]: 2026-02-25 12:25:07.971352847 +0000 UTC m=+0.217443615 container init b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:25:07 compute-0 podman[284596]: 2026-02-25 12:25:07.97959123 +0000 UTC m=+0.225681958 container start b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:25:08 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : New worker (284649) forked
Feb 25 12:25:08 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : Loading success.
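With the worker forked, the haproxy config rendered above is live: it binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with the X-OVN-Network-ID header so the agent can resolve the caller's network. From a guest attached to that network the path can be exercised against the standard OpenStack metadata endpoint; a minimal sketch (the URL is the well-known metadata address, not anything specific to this log):

import urllib.request

# Must run from inside a guest on network c97c5b11-...;
# 169.254.169.254 is the address the haproxy instance above listens on.
url = 'http://169.254.169.254/openstack/latest/meta_data.json'
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.read().decode())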
Feb 25 12:25:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1901792126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.217 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.218 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.220 244018 INFO nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Shelving
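The "Acquiring lock"/"Lock ... acquired" pairs above are emitted by oslo.concurrency's lockutils decorator (the inner() frames in the logged file paths), which nova uses to serialize operations per instance UUID. A minimal sketch of the same mechanism; the function body is illustrative only:

from oslo_concurrency import lockutils

# Lock keyed on the instance UUID, as in do_shelve_instance above; lockutils
# logs the same "Acquiring lock ... by ..." debug lines seen in this journal.
@lockutils.synchronized('b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6')
def do_shelve_instance():
    pass  # nova's real body powers off the guest and snapshots it

do_shelve_instance()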
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.255 244018 DEBUG nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.261 244018 DEBUG nova.compute.manager [req-668573cc-dfcf-4dca-838e-fc6bf14a0bc8 req-480aa9cb-ac58-493d-94d6-be73f1786df5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.261 244018 DEBUG oslo_concurrency.lockutils [req-668573cc-dfcf-4dca-838e-fc6bf14a0bc8 req-480aa9cb-ac58-493d-94d6-be73f1786df5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.262 244018 DEBUG oslo_concurrency.lockutils [req-668573cc-dfcf-4dca-838e-fc6bf14a0bc8 req-480aa9cb-ac58-493d-94d6-be73f1786df5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.262 244018 DEBUG oslo_concurrency.lockutils [req-668573cc-dfcf-4dca-838e-fc6bf14a0bc8 req-480aa9cb-ac58-493d-94d6-be73f1786df5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.262 244018 DEBUG nova.compute.manager [req-668573cc-dfcf-4dca-838e-fc6bf14a0bc8 req-480aa9cb-ac58-493d-94d6-be73f1786df5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Processing event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.264 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.281 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022308.2805872, 22450f11-d042-48c5-941e-fd544e58d84a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.282 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] VM Resumed (Lifecycle Event)
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.289 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:08.295 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:08.297 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
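The SbGlobalUpdateEvent match above is ovsdbapp's row-event machinery: the agent registers an event class against the SB_Global table, and the IDL calls matches()/run() on each update (here the nb_cfg bump from 12 to 13). A hedged sketch of such an event class, following the ovsdbapp module referenced in the file paths above:

from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fires on SB_Global updates, like the match logged above."""
    def __init__(self):
        # (events, table, conditions); conditions=None matches every row
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        print('nb_cfg moved to', row.nb_cfg)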
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.308 244018 INFO nova.virt.libvirt.driver [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance spawned successfully.
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.308 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.324 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.330 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.338 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.339 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.339 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.340 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.340 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.340 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.352 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247705501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.400 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
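nova discovers the Ceph monitor endpoints for its rbd disk sources by shelling out to the same "ceph mon dump --format=json" command logged above. A minimal parsing sketch; it assumes the usual monmap JSON layout ('mons' entries carrying 'name' and 'public_addr'), which may vary across Ceph releases:

import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
for mon in json.loads(out)['mons']:
    # 'public_addr' is assumed here; newer releases also expose 'public_addrs'
    print(mon['name'], mon['public_addr'])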
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.401 244018 DEBUG nova.virt.libvirt.vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-2',id=46,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:02Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=1238794a-063b-4ac0-a7d9-3590353e3207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.402 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.403 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.404 244018 DEBUG nova.objects.instance [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1238794a-063b-4ac0-a7d9-3590353e3207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.408 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 6.92 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.408 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.423 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <uuid>1238794a-063b-4ac0-a7d9-3590353e3207</uuid>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <name>instance-0000002e</name>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-2025017343-2</nova:name>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:07</nova:creationTime>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <nova:port uuid="9f614955-c92f-41c2-a47e-6d65c378bf82">
Feb 25 12:25:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="serial">1238794a-063b-4ac0-a7d9-3590353e3207</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="uuid">1238794a-063b-4ac0-a7d9-3590353e3207</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/1238794a-063b-4ac0-a7d9-3590353e3207_disk">
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/1238794a-063b-4ac0-a7d9-3590353e3207_disk.config">
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:db:31:fe"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <target dev="tap9f614955-c9"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/console.log" append="off"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
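The guest XML dumped above is what nova hands to libvirt; it can be post-processed to confirm the rbd-backed disks and the tap wiring that the surrounding log lines reference. A minimal sketch with the standard library, assuming the XML has been saved to a local file guest.xml (a hypothetical name):

import xml.etree.ElementTree as ET

root = ET.parse('guest.xml').getroot()
for disk in root.findall('./devices/disk'):
    src, tgt = disk.find('source'), disk.find('target')
    # e.g. vda disk rbd vms/1238794a-..._disk
    print(tgt.get('dev'), disk.get('device'), src.get('protocol'), src.get('name'))
for iface in root.findall('./devices/interface'):
    print(iface.find('target').get('dev'), iface.find('mac').get('address'))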
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.424 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Preparing to wait for external event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.424 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.425 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.425 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.426 244018 DEBUG nova.virt.libvirt.vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-2',id=46,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:02Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=1238794a-063b-4ac0-a7d9-3590353e3207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.426 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.427 244018 DEBUG nova.network.os_vif_util [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.428 244018 DEBUG os_vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.429 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.429 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.433 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.434 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f614955-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.434 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f614955-c9, col_values=(('external_ids', {'iface-id': '9f614955-c92f-41c2-a47e-6d65c378bf82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:31:fe', 'vm-uuid': '1238794a-063b-4ac0-a7d9-3590353e3207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:08 compute-0 NetworkManager[49836]: <info>  [1772022308.4375] manager: (tap9f614955-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.445 244018 INFO os_vif [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9')
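The AddPortCommand/DbSetCommand transactions above, issued by os-vif over the OVSDB IDL, are functionally equivalent to the idempotent ovs-vsctl calls below; this is a sketch for manual verification, not how os-vif itself is implemented:

import subprocess

dev, bridge = 'tap9f614955-c9', 'br-int'
# --may-exist mirrors may_exist=True in the logged AddPortCommand
subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', bridge, dev], check=True)
subprocess.run(['ovs-vsctl', 'set', 'Interface', dev,
                'external_ids:iface-id=9f614955-c92f-41c2-a47e-6d65c378bf82',
                'external_ids:iface-status=active',
                'external_ids:attached-mac=fa:16:3e:db:31:fe'], check=True)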
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.495 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 8.16 seconds to build instance.
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.518 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.518 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.519 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:db:31:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.519 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Using config drive
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.544 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.554 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:25:08 compute-0 nova_compute[244014]: 2026-02-25 12:25:08.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.024 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Creating config drive at /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.032 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8ifvi6la execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.073 244018 DEBUG nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.074 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.076 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.077 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.078 244018 DEBUG nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.078 244018 WARNING nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving.
Feb 25 12:25:09 compute-0 rsyslogd[1020]: imjournal: 10175 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.173 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8ifvi6la" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.279 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.290 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:09 compute-0 ceph-mon[76335]: pgmap v1236: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 5.3 MiB/s wr, 99 op/s
Feb 25 12:25:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2247705501' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816542441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.521 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.542 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.543 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deleting local config drive /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config because it was imported into RBD.
Feb 25 12:25:09 compute-0 kernel: tap9f614955-c9: entered promiscuous mode
Feb 25 12:25:09 compute-0 systemd-udevd[284495]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:09 compute-0 NetworkManager[49836]: <info>  [1772022309.5903] manager: (tap9f614955-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Feb 25 12:25:09 compute-0 ovn_controller[147040]: 2026-02-25T12:25:09Z|00389|binding|INFO|Claiming lport 9f614955-c92f-41c2-a47e-6d65c378bf82 for this chassis.
Feb 25 12:25:09 compute-0 ovn_controller[147040]: 2026-02-25T12:25:09Z|00390|binding|INFO|9f614955-c92f-41c2-a47e-6d65c378bf82: Claiming fa:16:3e:db:31:fe 10.100.0.7
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:31:fe 10.100.0.7'], port_security=['fa:16:3e:db:31:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1238794a-063b-4ac0-a7d9-3590353e3207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9f614955-c92f-41c2-a47e-6d65c378bf82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.606 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9f614955-c92f-41c2-a47e-6d65c378bf82 in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.609 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:09 compute-0 ovn_controller[147040]: 2026-02-25T12:25:09Z|00391|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 ovn-installed in OVS
Feb 25 12:25:09 compute-0 ovn_controller[147040]: 2026-02-25T12:25:09Z|00392|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 up in Southbound
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:09 compute-0 NetworkManager[49836]: <info>  [1772022309.6176] device (tap9f614955-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:09 compute-0 NetworkManager[49836]: <info>  [1772022309.6186] device (tap9f614955-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:09 compute-0 systemd-machined[210048]: New machine qemu-51-instance-0000002e.
Feb 25 12:25:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a280c7ce-25da-4dbf-bbcb-c3cd5fa3f560]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.650 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[df6f9241-6896-414c-8ec7-5de3fe3fe571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.653 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[70a9c200-db29-4a4e-9a73-2023fd96c1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.677 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7383a91-b363-4439-a38a-c909d9303abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbf93be-c0d1-4a99-834f-121bd2184d7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284776, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.715 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.715 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e20ea54-78ca-46ca-b84a-14c290ab0242]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427682, 'tstamp': 427682}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284777, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427685, 'tstamp': 427685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284777, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.722 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.722 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.723 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.724 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.726 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.726 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.730 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.730 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.734 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.734 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.737 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.836 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.836 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.841 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.842 244018 INFO nova.compute.claims [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.927 244018 DEBUG nova.network.neutron [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updated VIF entry in instance network info cache for port 9f614955-c92f-41c2-a47e-6d65c378bf82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.928 244018 DEBUG nova.network.neutron [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updating instance_info_cache with network_info: [{"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.930 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.938 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4016MB free_disk=59.92535855714232GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.940 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.941 244018 DEBUG oslo_concurrency.lockutils [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.946 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.946 244018 DEBUG nova.compute.provider_tree [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.965 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:25:09 compute-0 nova_compute[244014]: 2026-02-25 12:25:09.988 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.094 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3816542441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.366 244018 DEBUG nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.367 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.367 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 DEBUG nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 WARNING nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received unexpected event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with vm_state active and task_state None.
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.372 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022310.3720725, 1238794a-063b-4ac0-a7d9-3590353e3207 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.372 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Started (Lifecycle Event)
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.407 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022310.3721282, 1238794a-063b-4ac0-a7d9-3590353e3207 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Paused (Lifecycle Event)
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.428 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.430 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.449 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552167995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.673 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.678 244018 DEBUG nova.compute.provider_tree [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.711 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.741 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.742 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.746 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.824 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.825 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.860 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.869 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.869 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 22450f11-d042-48c5-941e-fd544e58d84a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.870 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 1238794a-063b-4ac0-a7d9-3590353e3207 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.870 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.871 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.872 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:25:10 compute-0 nova_compute[244014]: 2026-02-25 12:25:10.896 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.003 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.040 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.043 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.048 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating image(s)
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.078 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.108 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.130 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.133 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.161 244018 DEBUG nova.policy [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.215 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.216 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.218 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.218 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.247 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.252 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8086e43-4c45-422f-a3b5-fa665c256b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:11 compute-0 ceph-mon[76335]: pgmap v1237: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Feb 25 12:25:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3552167995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.529 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8086e43-4c45-422f-a3b5-fa665c256b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632242059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.593 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.602 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
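The resize to 1073741824 bytes matches the flavor's 1 GiB root disk. The same operation can be expressed through the python-rbd bindings; a self-contained sketch assuming the pool, client id, and image name from the log (error handling omitted):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'b8086e43-4c45-422f-a3b5-fa665c256b30_disk') as image:
                image.resize(1073741824)  # bytes, as in the logged resize
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()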
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.633 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.685 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
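Placement turns this inventory into usable capacity as (total - reserved) * allocation_ratio per resource class, so the figures above imply 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A quick check with the values copied from the log:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2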
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.694 244018 DEBUG nova.objects.instance [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.712 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.712 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ensure instance console log exists: /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.713 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.714 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.714 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.716 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.716 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 300 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 197 op/s
Feb 25 12:25:11 compute-0 nova_compute[244014]: 2026-02-25 12:25:11.837 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Successfully created port: abdb97b5-8e9d-4929-af6f-bfb06c067878 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1632242059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:12 compute-0 nova_compute[244014]: 2026-02-25 12:25:12.643 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Successfully updated port: abdb97b5-8e9d-4929-af6f-bfb06c067878 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:12 compute-0 nova_compute[244014]: 2026-02-25 12:25:12.664 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:12 compute-0 nova_compute[244014]: 2026-02-25 12:25:12.665 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:12 compute-0 nova_compute[244014]: 2026-02-25 12:25:12.665 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:12 compute-0 nova_compute[244014]: 2026-02-25 12:25:12.893 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.057 244018 DEBUG nova.compute.manager [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.057 244018 DEBUG nova.compute.manager [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.058 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.758 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.758 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 215 op/s
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.793 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:13 compute-0 ceph-mon[76335]: pgmap v1238: 305 pgs: 305 active+clean; 300 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 197 op/s
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.850 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.871 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.872 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance network_info: |[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
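The network_info payload logged above is a list of VIF dicts; the fixed address that later appears in the guest metadata can be read straight out of it. A sketch with a literal trimmed to the fields actually used (the real payload carries many more keys):

    network_info = [{
        'id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878',
        'network': {'subnets': [{'cidr': '10.100.0.0/28',
                                 'ips': [{'address': '10.100.0.6',
                                          'type': 'fixed'}]}]},
    }]
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'])  # port id -> 10.100.0.6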
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.875 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.876 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.878 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.878 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.885 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start _get_guest_xml network_info=[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.894 244018 WARNING nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.900 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.900 244018 INFO nova.compute.claims [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.914 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.916 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.919 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.920 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
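The v1 probe fails and the v2 probe succeeds because this host mounts a unified cgroup hierarchy, where every enabled controller is listed in one flat file. A minimal illustration of the v2 side of the check (not nova's implementation):

    from pathlib import Path

    def has_cgroupv2_cpu_controller():
        # cgroup v2 lists enabled controllers, space-separated, at the root.
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        return controllers.exists() and 'cpu' in controllers.read_text().split()

    print(has_cgroupv2_cpu_controller())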
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.921 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.921 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.922 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.923 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.923 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.924 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.924 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.925 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.925 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.926 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.926 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.927 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
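The topology walk above amounts to enumerating every (sockets, cores, threads) triple whose product equals the vCPU count and that fits the 65536 limits; for one vCPU only 1:1:1 survives. A compact re-derivation of the logged result (nova's own version lives in nova/virt/hardware.py and differs in detail):

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        """Yield (sockets, cores, threads) triples multiplying to vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))
    # [(1, 1, 1)] -- the single topology the log reports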
Feb 25 12:25:13 compute-0 nova_compute[244014]: 2026-02-25 12:25:13.931 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.146 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380632174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.488 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.507 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.511 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884668611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.706 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.711 244018 DEBUG nova.compute.provider_tree [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.717 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.717 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.965 244018 DEBUG nova.scheduler.client.report [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:14 compute-0 nova_compute[244014]: 2026-02-25 12:25:14.998 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.000 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:15 compute-0 ceph-mon[76335]: pgmap v1239: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 215 op/s
Feb 25 12:25:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/380632174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/884668611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888031494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.092 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.093 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.099 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.100 244018 DEBUG nova.virt.libvirt.vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.101 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.102 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.103 244018 DEBUG nova.objects.instance [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.118 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <uuid>b8086e43-4c45-422f-a3b5-fa665c256b30</uuid>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <name>instance-0000002f</name>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherB-server-2111996537</nova:name>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:13</nova:creationTime>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <nova:port uuid="abdb97b5-8e9d-4929-af6f-bfb06c067878">
Feb 25 12:25:15 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="serial">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="uuid">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk">
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config">
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f8:53:87"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <target dev="tapabdb97b5-8e"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log" append="off"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
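Everything the later plug and spawn steps rely on (rbd disk source, monitor endpoint, tap device name) is recoverable from that domain XML. For instance, pulling the disk source back out with the standard library, using a literal trimmed to the relevant elements:

    import xml.etree.ElementTree as ET

    domain_xml = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk"/>
        </disk>
      </devices>
    </domain>"""

    source = ET.fromstring(domain_xml).find('./devices/disk/source')
    print(source.get('protocol'), source.get('name'))
    # rbd vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk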
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Preparing to wait for external event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.125 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.125 244018 DEBUG nova.virt.libvirt.vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.126 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.127 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.127 244018 DEBUG os_vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.129 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.130 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabdb97b5-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabdb97b5-8e, col_values=(('external_ids', {'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:53:87', 'vm-uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
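Annotation: the ovsdbapp transactions above (AddBridgeCommand at 12:25:15.128, then AddPortCommand/DbSetCommand at 12:25:15.133-134) are os-vif's OVS plug path; with may_exist=True a pre-existing bridge commits as "Transaction caused no change". A minimal sketch of the same commands through ovsdbapp's Open_vSwitch schema API, assuming a local OVSDB unix socket (the endpoint path is an assumption, not from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed endpoint

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Idempotent: re-adding an existing bridge is the logged no-op.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapabdb97b5-8e', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapabdb97b5-8e',
            ('external_ids', {
                'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878',
                'attached-mac': 'fa:16:3e:f8:53:87'})))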
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:15 compute-0 NetworkManager[49836]: <info>  [1772022315.1365] manager: (tapabdb97b5-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.142 244018 INFO os_vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')
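Annotation: the "Successfully plugged vif" line closes the os-vif call that began at 12:25:15.127. A sketch of that public entry point, assuming os-vif's os_vif.initialize()/os_vif.plug() API; the object field values are copied from the VIFOpenVSwitch repr logged above, while the InstanceInfo name is illustrative:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the 'ovs' plugin, among others

    net = network.Network(id='64c22162-7e15-45de-8fd2-8c9a24f27006',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='abdb97b5-8e9d-4929-af6f-bfb06c067878',
        address='fa:16:3e:f8:53:87',
        bridge_name='br-int',
        vif_name='tapabdb97b5-8e',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='b8086e43-4c45-422f-a3b5-fa665c256b30',
        name='instance-0000002f')  # illustrative domain name

    os_vif.plug(port, inst)  # drives the ovsdbapp transactions above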
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.148 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.262 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.263 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.264 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:f8:53:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.265 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Using config drive
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.294 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.302 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.304 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.305 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating image(s)
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.329 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.355 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.379 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
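Annotation: the repeated "rbd image ... does not exist" lines are nova.storage.rbd_utils probing Ceph before the import; the probe amounts to opening the image and catching ImageNotFound. A sketch with the python rbd/rados bindings, reusing the pool ('vms'), client id ('openstack') and conf path the log shows:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx,
                               'e291d969-fcea-4f60-a478-f7b81a91ccd9_disk',
                               read_only=True):
                    exists = True
            except rbd.ImageNotFound:
                exists = False  # logged as "does not exist"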
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.382 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.411 244018 DEBUG nova.policy [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.457 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
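Annotation: the qemu-img info probe runs under oslo_concurrency.prlimit so a crafted image cannot exhaust the compute host: --as caps the address space at 1 GiB and --cpu caps CPU seconds at 30. The same invocation expressed through oslo.concurrency's documented ProcessLimits/execute API:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824
        cpu_time=30)               # --cpu=30

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)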
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.457 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.458 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.459 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
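Annotation: the Acquiring/acquired/released triple around fetch_func_sync is oslo.concurrency's named-lock pattern; nova serializes image-cache work per base file (the lock name is the base image hash), and the 0.000s hold suggests the base file was already present. A sketch of both spellings of that pattern; fetch_base_image is a hypothetical stand-in:

    from oslo_concurrency import lockutils

    BASE = 'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'

    # Context-manager form (what the log lines trace):
    with lockutils.lock(BASE):
        fetch_base_image()  # hypothetical download/verify step

    # Equivalent decorator form:
    @lockutils.synchronized(BASE)
    def fetch_func_sync():
        fetch_base_image()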
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.479 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.482 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e291d969-fcea-4f60-a478-f7b81a91ccd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.505 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.506 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.530 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 187 op/s
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.987 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating config drive at /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config
Feb 25 12:25:15 compute-0 nova_compute[244014]: 2026-02-25 12:25:15.993 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpi8gres87 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.130 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpi8gres87" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.241 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/888031494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.260 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.288 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Successfully created port: 07860675-4ac4-43a4-ab6b-bacd17801fc2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.300 244018 DEBUG nova.compute.manager [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.301 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.301 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.302 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.303 244018 DEBUG nova.compute.manager [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Processing event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:16.302 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.305 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
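Annotation: lines 12:25:16.300-305 complete the network-vif-plugged handshake: the build path registers an expected event, blocks, and neutron's callback pops it ("Instance event wait completed in 5 seconds"). A schematic analogy with threading.Event; nova's real implementation is InstanceEvents in nova.compute.manager, and these helper names are illustrative:

    import threading

    _events = {}               # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()   # cf. the "-events" lock in the log

    def prepare(instance_uuid, name):
        with _lock:
            return _events.setdefault((instance_uuid, name),
                                      threading.Event())

    def pop_and_fire(instance_uuid, name):
        with _lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev:
            ev.set()  # external event arrived; waiter unblocks

    # Build path:
    ev = prepare('1238794a-063b-4ac0-a7d9-3590353e3207',
                 'network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82')
    ev.wait(timeout=300)  # returned after ~5 s in the log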
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.309 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022316.3093193, 1238794a-063b-4ac0-a7d9-3590353e3207 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.310 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Resumed (Lifecycle Event)
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.329 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.333 244018 INFO nova.virt.libvirt.driver [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance spawned successfully.
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.333 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.347 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.390 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.402 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.402 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.403 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.403 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.405 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.405 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.456 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e291d969-fcea-4f60-a478-f7b81a91ccd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.497 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 14.00 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.497 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.549 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
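Annotation: after the base image lands in the vms pool, nova grows the disk to the flavor's root size (root_gb=1, hence "resizing rbd image ... to 1073741824"); in RBD terms that is a plain resize. With the same python bindings as the existence probe above, names and paths as logged:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'e291d969-fcea-4f60-a478-f7b81a91ccd9_disk') as img:
                img.resize(1073741824)  # bytes; matches the logged target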
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.696 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 16.34 seconds to build instance.
Feb 25 12:25:16 compute-0 nova_compute[244014]: 2026-02-25 12:25:16.720 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:17 compute-0 ceph-mon[76335]: pgmap v1240: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 187 op/s
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.332 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.333 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting local config drive /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config because it was imported into RBD.
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.3799] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Feb 25 12:25:17 compute-0 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ovn_controller[147040]: 2026-02-25T12:25:17Z|00393|binding|INFO|Claiming lport abdb97b5-8e9d-4929-af6f-bfb06c067878 for this chassis.
Feb 25 12:25:17 compute-0 ovn_controller[147040]: 2026-02-25T12:25:17Z|00394|binding|INFO|abdb97b5-8e9d-4929-af6f-bfb06c067878: Claiming fa:16:3e:f8:53:87 10.100.0.6
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 systemd-udevd[285335]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.407 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.410 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
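Annotation: the "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's row-event machinery: the metadata agent watches Southbound Port_Binding rows and, once a port is bound to its own chassis, provisions the ovnmeta namespace and haproxy seen below. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base; the handler body is illustrative, not neutron's code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Port_Binding rows (no column conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Illustrative: the real agent compares row.chassis against its
            # own chassis record before provisioning the datapath.
            print('port %s updated on datapath %s'
                  % (row.logical_port, row.datapath))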
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.417 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48096c7a-2b31-434c-ac93-5ffe8cfec417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.418 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64c22162-71 in ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.4291] device (tapabdb97b5-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.4300] device (tapabdb97b5-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.431 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64c22162-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e83f23a-c5fb-4468-bc4e-cc577f15bf9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8301af38-9968-4fe9-b420-79121176d3af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
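Annotation: the recurring "privsep: reply[...]" lines are oslo.privsep round-trips: unprivileged agent code calls decorated entrypoints, a privileged daemon executes them, and each reply tuple pairs a status code with the serialized return value. A minimal sketch of the pattern, assuming oslo.privsep's PrivContext/entrypoint API; the context name and capability list here are illustrative:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        'demo',                          # illustrative prefix
        cfg_section='demo_privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN])

    @default.entrypoint
    def get_link_id(ifname, namespace):
        # Body runs in the privileged daemon; the caller sees only the
        # reply, logged agent-side as "privsep: reply[<msg-id>]: (4, ...)".
        raise NotImplementedError  # illustrative stub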
Feb 25 12:25:17 compute-0 systemd-machined[210048]: New machine qemu-52-instance-0000002f.
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.441 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3e39619b-a8a3-4b91-879f-102a49fecb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ovn_controller[147040]: 2026-02-25T12:25:17Z|00395|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 ovn-installed in OVS
Feb 25 12:25:17 compute-0 ovn_controller[147040]: 2026-02-25T12:25:17Z|00396|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 up in Southbound
Feb 25 12:25:17 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002f.
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.450 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81205c3b-01ab-4699-951d-dfe245c3db93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.478 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d309e13-733e-47e7-a54b-0e16a07c6436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.483 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ddca2a32-e1d1-4577-8ecf-0898df3c4445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 systemd-udevd[285341]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.4846] manager: (tap64c22162-70): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e06a410c-fade-4ab4-8445-99a8a19ff4e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.513 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9815e7ee-a624-48ab-bfc2-922d9d44dc7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.5322] device (tap64c22162-70): carrier: link connected
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.537 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[561538d9-00a9-4130-939f-0c8beea8da7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.549 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61459568-feab-4785-8206-ad66532931e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285371, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c24f3fb-2ae9-462c-b500-b7a915fdbd64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:1cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428693, 'tstamp': 428693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285372, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a74ba5f-a695-4864-8496-82c703e92baf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285373, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.598 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3225e2-ed61-4b6f-bc60-ec0c1051b875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.651 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[757248a8-be73-474d-aa9d-832fd185580a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.653 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 kernel: tap64c22162-70: entered promiscuous mode
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 NetworkManager[49836]: <info>  [1772022317.6598] manager: (tap64c22162-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.662 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ovn_controller[147040]: 2026-02-25T12:25:17Z|00397|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.666 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.667 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e903aa24-38d5-47d8-b24f-afc1b66b68aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.668 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
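The block above is the per-network haproxy configuration the agent writes before spawning the proxy. As an illustrative sketch only (not the neutron implementation; CFG_TEMPLATE and render_cfg are hypothetical, and the defaults section is abbreviated), the same config could be rendered with string.Template:

    # Hypothetical sketch of rendering a per-network metadata-proxy config like
    # the one logged above. Only the network-specific fields vary.
    import string

    CFG_TEMPLATE = string.Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_cfg(network_id: str, state_dir: str = "/var/lib/neutron") -> str:
        # Substitute the per-network values, mirroring the fields seen in the log.
        return CFG_TEMPLATE.substitute(
            network_id=network_id,
            pidfile=f"{state_dir}/external/pids/{network_id}.pid.haproxy",
            socket_path=f"{state_dir}/metadata_proxy",
        )

    print(render_cfg("64c22162-7e15-45de-8fd2-8c9a24f27006"))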
Feb 25 12:25:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.670 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'env', 'PROCESS_TAG=haproxy-64c22162-7e15-45de-8fd2-8c9a24f27006', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64c22162-7e15-45de-8fd2-8c9a24f27006.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
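The command logged above wraps haproxy in the network namespace of the OVN metadata port. A simplified sketch of that shape (spawn_in_netns is hypothetical; the sudo/neutron-rootwrap privilege-escalation prefix from the log is deliberately omitted):

    # Hypothetical sketch: launch a process inside a network namespace, the same
    # 'ip netns exec <ns> env PROCESS_TAG=<tag> haproxy -f <cfg>' shape as above.
    import subprocess

    def spawn_in_netns(netns: str, cfg_path: str, tag: str) -> None:
        cmd = [
            "ip", "netns", "exec", netns,
            "env", f"PROCESS_TAG={tag}",
            "haproxy", "-f", cfg_path,
        ]
        subprocess.run(cmd, check=True)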
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.898 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.911 244018 DEBUG nova.objects.instance [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.931 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.931 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Ensure instance console log exists: /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:17 compute-0 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
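The three lines above show oslo_concurrency's lock bookkeeping: acquisition reports how long the caller waited, release reports how long the lock was held. A minimal sketch of that pattern under the assumption of plain threading (timed_lock and _locks are hypothetical, not the oslo_concurrency API):

    # Hypothetical sketch of the waited/held accounting logged above.
    import threading, time
    from contextlib import contextmanager

    _locks: dict[str, threading.Lock] = {}

    @contextmanager
    def timed_lock(name: str):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("vgpu_resources"):
        pass  # critical section, e.g. mdev allocation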
Feb 25 12:25:18 compute-0 podman[285441]: 2026-02-25 12:25:18.01733181 +0000 UTC m=+0.033226941 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:25:18 compute-0 podman[285441]: 2026-02-25 12:25:18.411673391 +0000 UTC m=+0.427568542 container create 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.430 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022318.4300559, b8086e43-4c45-422f-a3b5-fa665c256b30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.431 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Started (Lifecycle Event)
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.439 244018 DEBUG nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.462 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.465 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022318.4301653, b8086e43-4c45-422f-a3b5-fa665c256b30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.466 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Paused (Lifecycle Event)
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.500 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.503 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:18 compute-0 systemd[1]: Started libpod-conmon-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope.
Feb 25 12:25:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9ea23fb31ef780d3141b9da7996fc6467cb5ff171d46fe3017a00206862844/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.583 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.
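The Paused lifecycle event above does not overwrite the DB power state because the instance still has a pending task. A simplified sketch of that decision (constants and sync_power_state are hypothetical reductions of the manager logic, not nova code):

    # Hypothetical sketch of the sync decision logged above: a pending task
    # (here 'spawning') short-circuits the power-state reconciliation.
    from typing import Optional

    RUNNING, PAUSED = 1, 3  # power-state codes matching the log's 1 and 3

    def sync_power_state(db_power_state: int, vm_power_state: int,
                         task_state: Optional[str]) -> str:
        if task_state is not None:
            # "During sync_power_state the instance has a pending task ... Skip."
            return f"skip (pending task: {task_state})"
        if db_power_state != vm_power_state:
            return "update DB to match hypervisor"
        return "in sync"

    print(sync_power_state(0, PAUSED, "spawning"))  # skip (pending task: spawning)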
Feb 25 12:25:18 compute-0 podman[285441]: 2026-02-25 12:25:18.596158192 +0000 UTC m=+0.612053393 container init 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:25:18 compute-0 podman[285441]: 2026-02-25 12:25:18.60314415 +0000 UTC m=+0.619039301 container start 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:25:18 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : New worker (285486) forked
Feb 25 12:25:18 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : Loading success.
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.735 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Successfully updated port: 07860675-4ac4-43a4-ab6b-bacd17801fc2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.765 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.766 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:18 compute-0 nova_compute[244014]: 2026-02-25 12:25:18.766 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.078 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.079 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.079 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 WARNING nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received unexpected event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with vm_state active and task_state None.
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.082 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.082 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Processing event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.083 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
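The cluster above is the event hand-off between neutron notifications and a spawning thread: a waiter registers the network-vif-plugged event it expects; when the external event arrives it is popped and the waiter released, and an event with no registered waiter is reported as "unexpected". A minimal sketch of that registry, assuming plain threading (prepare_for_event/pop_instance_event here are hypothetical simplifications):

    # Hypothetical sketch of the waiter registry behind the lines above.
    import threading

    _waiters: dict[tuple[str, str], threading.Event] = {}

    def prepare_for_event(instance: str, name: str) -> threading.Event:
        ev = threading.Event()
        _waiters[(instance, name)] = ev
        return ev

    def pop_instance_event(instance: str, name: str) -> None:
        ev = _waiters.pop((instance, name), None)
        if ev is None:
            print(f"No waiting events found dispatching {name}")  # as in the log
        else:
            ev.set()  # releases wait_for_instance_event

    ev = prepare_for_event("b8086e43", "network-vif-plugged-abdb97b5")
    pop_instance_event("b8086e43", "network-vif-plugged-abdb97b5")
    assert ev.wait(timeout=1)  # "Instance event wait completed in 0 seconds"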
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.087 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022319.0871675, b8086e43-4c45-422f-a3b5-fa665c256b30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.087 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Resumed (Lifecycle Event)
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.095 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.098 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance spawned successfully.
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.098 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.133 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.133 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.135 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
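The "Found default for ..." run above records the driver pinning down bus/model choices the image left unspecified, so later rebuilds keep the same virtual hardware. A sketch of that merge under the assumption of a plain dict of detected defaults (names hypothetical):

    # Hypothetical sketch: fill only the image properties left unset, using the
    # defaults the driver detected (values taken from the log lines above).
    DETECTED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props: dict) -> dict:
        merged = dict(image_props)
        for key, default in DETECTED_DEFAULTS.items():
            merged.setdefault(key, default)  # explicit image settings win
        return merged

    print(register_undefined_details({"hw_disk_bus": "scsi"}))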
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.139 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.173 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.204 244018 INFO nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 8.16 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.205 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.275 244018 INFO nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 9.47 seconds to build instance.
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.312 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:19 compute-0 ceph-mon[76335]: pgmap v1241: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Feb 25 12:25:19 compute-0 ovn_controller[147040]: 2026-02-25T12:25:19Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:cd:f8 10.100.0.12
Feb 25 12:25:19 compute-0 ovn_controller[147040]: 2026-02-25T12:25:19Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:cd:f8 10.100.0.12
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.714 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.768 244018 DEBUG nova.compute.manager [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-changed-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.768 244018 DEBUG nova.compute.manager [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Refreshing instance network info cache due to event network-changed-07860675-4ac4-43a4-ab6b-bacd17801fc2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:19 compute-0 nova_compute[244014]: 2026-02-25 12:25:19.769 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 249 op/s
Feb 25 12:25:20 compute-0 nova_compute[244014]: 2026-02-25 12:25:20.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:20 compute-0 ovn_controller[147040]: 2026-02-25T12:25:20Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:58:40 10.100.0.9
Feb 25 12:25:20 compute-0 ovn_controller[147040]: 2026-02-25T12:25:20Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:58:40 10.100.0.9
Feb 25 12:25:21 compute-0 ceph-mon[76335]: pgmap v1242: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 249 op/s
Feb 25 12:25:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 423 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.0 MiB/s wr, 377 op/s
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.154 244018 DEBUG nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.155 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.156 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.156 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.157 244018 DEBUG nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.158 244018 WARNING nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state None.
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.202 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.224 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.224 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance network_info: |[{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.226 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.227 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Refreshing network info cache for port 07860675-4ac4-43a4-ab6b-bacd17801fc2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.232 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start _get_guest_xml network_info=[{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
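The network_info blobs above are JSON lists of VIFs, each carrying its MAC, OVS details, and per-subnet fixed IPs. A sketch of walking that structure (field names mirror the logged JSON; the fixed_ips helper is hypothetical):

    # Hypothetical sketch: extract (MAC, fixed IP) pairs from a network_info
    # payload shaped like the ones logged above.
    import json
    from typing import List, Tuple

    def fixed_ips(network_info_json: str) -> List[Tuple[str, str]]:
        pairs = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        pairs.append((vif["address"], ip["address"]))
        return pairs

    sample = '[{"address": "fa:16:3e:07:ea:76", "network": {"subnets": [{"ips": [{"address": "10.100.0.14", "type": "fixed"}]}]}}]'
    print(fixed_ips(sample))  # [('fa:16:3e:07:ea:76', '10.100.0.14')]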
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.238 244018 WARNING nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.245 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.246 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.254 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.255 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.256 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.256 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.257 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.258 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.258 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.259 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.260 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.260 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.261 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.261 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.262 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.262 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
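The topology lines above enumerate (sockets, cores, threads) triples whose product matches the vCPU count, bounded by the 65536 per-dimension limits; with 1 vCPU only 1:1:1 survives. A simplified sketch of that search (possible_topologies is a hypothetical reduction, not nova.virt.hardware):

    # Hypothetical sketch of the topology enumeration logged above.
    from typing import List, Tuple

    def possible_topologies(vcpus: int, max_s: int = 65536, max_c: int = 65536,
                            max_t: int = 65536) -> List[Tuple[int, int, int]]:
        found = []
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"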
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.267 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323372262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.773 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.774 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.775 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.776 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.776 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.779 244018 INFO nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Terminating instance
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.780 244018 DEBUG nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.782 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
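The processutils lines above show nova shelling out to the ceph CLI for the monmap before touching RBD. A sketch of that call under the assumption of the same flags seen in the log (mon_dump is a hypothetical wrapper):

    # Hypothetical sketch of the 'ceph mon dump --format=json' invocation above.
    import json, subprocess

    def mon_dump(client: str = "openstack",
                 conf: str = "/etc/ceph/ceph.conf") -> dict:
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # monmap = mon_dump(); print([m["name"] for m in monmap["mons"]])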
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.811 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.817 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:22 compute-0 kernel: tap4f6b67d0-05 (unregistering): left promiscuous mode
Feb 25 12:25:22 compute-0 NetworkManager[49836]: <info>  [1772022322.9003] device (tap4f6b67d0-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:22 compute-0 ovn_controller[147040]: 2026-02-25T12:25:22Z|00398|binding|INFO|Releasing lport 4f6b67d0-056b-4277-a604-6221f16e30b2 from this chassis (sb_readonly=0)
Feb 25 12:25:22 compute-0 ovn_controller[147040]: 2026-02-25T12:25:22Z|00399|binding|INFO|Setting lport 4f6b67d0-056b-4277-a604-6221f16e30b2 down in Southbound
Feb 25 12:25:22 compute-0 ovn_controller[147040]: 2026-02-25T12:25:22Z|00400|binding|INFO|Removing iface tap4f6b67d0-05 ovn-installed in OVS
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:cd:f8 10.100.0.12'], port_security=['fa:16:3e:8b:cd:f8 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22450f11-d042-48c5-941e-fd544e58d84a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f6b67d0-056b-4277-a604-6221f16e30b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f6b67d0-056b-4277-a604-6221f16e30b2 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
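The "Matched UPDATE: PortBindingUpdatedEvent" line above is the agent's row-event machinery reacting to a Port_Binding update whose chassis column was cleared (the port left this host). A much-simplified sketch of that matching shape (this class is hypothetical and far smaller than ovsdbapp's RowEvent):

    # Hypothetical sketch of an event matcher like the one logged above: declare
    # the (event, table) of interest and a predicate over new/old row values.
    class PortBindingUpdatedEvent:
        events = ("update",)
        table = "Port_Binding"

        def matches(self, event: str, table: str, row: dict, old: dict) -> bool:
            # React when the binding's chassis was set before and is now empty.
            return (event in self.events and table == self.table
                    and "chassis" in old and not row.get("chassis"))

    ev = PortBindingUpdatedEvent()
    print(ev.matches("update", "Port_Binding",
                     {"chassis": []}, {"chassis": ["compute-0"]}))  # True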
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.936 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b410593f-798d-467c-b2a5-5d7315363496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.947 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.947 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.949 244018 INFO nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Terminating instance
Feb 25 12:25:22 compute-0 nova_compute[244014]: 2026-02-25 12:25:22.950 244018 DEBUG nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:25:22 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Feb 25 12:25:22 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 11.743s CPU time.
Feb 25 12:25:22 compute-0 systemd-machined[210048]: Machine qemu-50-instance-0000002d terminated.
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.963 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9af9d4d-6344-4cda-9aed-24c9d25407d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.966 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c30a0132-ad00-441c-b6e7-f121c5e13425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.986 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2ab585-01db-479d-b4a4-2e7fe0ddc6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.001 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57aa28b4-d99b-4294-ab1e-7863af16d17e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285567, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 kernel: tap9f614955-c9 (unregistering): left promiscuous mode
Feb 25 12:25:23 compute-0 NetworkManager[49836]: <info>  [1772022323.0121] device (tap9f614955-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[660aa12f-4bc4-4fd1-a0f6-cbf4d6b30d16]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427682, 'tstamp': 427682}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285568, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427685, 'tstamp': 427685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285568, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_controller[147040]: 2026-02-25T12:25:23Z|00401|binding|INFO|Releasing lport 9f614955-c92f-41c2-a47e-6d65c378bf82 from this chassis (sb_readonly=0)
Feb 25 12:25:23 compute-0 ovn_controller[147040]: 2026-02-25T12:25:23Z|00402|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 down in Southbound
Feb 25 12:25:23 compute-0 ovn_controller[147040]: 2026-02-25T12:25:23Z|00403|binding|INFO|Removing iface tap9f614955-c9 ovn-installed in OVS
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.020 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:31:fe 10.100.0.7'], port_security=['fa:16:3e:db:31:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1238794a-063b-4ac0-a7d9-3590353e3207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9f614955-c92f-41c2-a47e-6d65c378bf82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.034 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.036 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9f614955-c92f-41c2-a47e-6d65c378bf82 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.039 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97c5b11-7517-46fe-a6ca-63894792908c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.040 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[250ba1a5-1e81-482a-b1c6-f43dd163874b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.040 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace which is not needed anymore
Feb 25 12:25:23 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Feb 25 12:25:23 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 7.403s CPU time.
Feb 25 12:25:23 compute-0 systemd-machined[210048]: Machine qemu-51-instance-0000002e terminated.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.076 244018 INFO nova.virt.libvirt.driver [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance destroyed successfully.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.077 244018 DEBUG nova.objects.instance [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 22450f11-d042-48c5-941e-fd544e58d84a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.098 244018 DEBUG nova.virt.libvirt.vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-1',id=45,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:08Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=22450f11-d042-48c5-941e-fd544e58d84a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.098 244018 DEBUG nova.network.os_vif_util [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.099 244018 DEBUG nova.network.os_vif_util [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.099 244018 DEBUG os_vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.102 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f6b67d0-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.108 244018 INFO os_vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05')
Feb 25 12:25:23 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : haproxy version is 2.8.14-c23fe91
Feb 25 12:25:23 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : path to executable is /usr/sbin/haproxy
Feb 25 12:25:23 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [WARNING]  (284638) : Exiting Master process...
Feb 25 12:25:23 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [ALERT]    (284638) : Current worker (284649) exited with code 143 (Terminated)
Feb 25 12:25:23 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [WARNING]  (284638) : All workers exited. Exiting... (0)
Feb 25 12:25:23 compute-0 systemd[1]: libpod-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87.scope: Deactivated successfully.
Feb 25 12:25:23 compute-0 podman[285607]: 2026-02-25 12:25:23.160929811 +0000 UTC m=+0.044920743 container died b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.184 244018 INFO nova.virt.libvirt.driver [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance destroyed successfully.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.185 244018 DEBUG nova.objects.instance [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 1238794a-063b-4ac0-a7d9-3590353e3207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87-userdata-shm.mount: Deactivated successfully.
Feb 25 12:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f8c15061c2508c9a22a1e470245ba40e3bbbed4564cf9ed0210b81c7f79f5d3-merged.mount: Deactivated successfully.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.231 244018 DEBUG nova.virt.libvirt.vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-2',id=46,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-25T12:25:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:16Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=1238794a-063b-4ac0-a7d9-3590353e3207,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.232 244018 DEBUG nova.network.os_vif_util [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:23 compute-0 podman[285607]: 2026-02-25 12:25:23.232672631 +0000 UTC m=+0.116663553 container cleanup b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.233 244018 DEBUG nova.network.os_vif_util [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.233 244018 DEBUG os_vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.235 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f614955-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:23 compute-0 systemd[1]: libpod-conmon-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87.scope: Deactivated successfully.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.243 244018 INFO os_vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9')
Feb 25 12:25:23 compute-0 podman[285664]: 2026-02-25 12:25:23.316835973 +0000 UTC m=+0.065343340 container remove b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[066d5a57-1671-4e65-83e3-8cd029ea8f74]: (4, ('Wed Feb 25 12:25:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87)\nb4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87\nWed Feb 25 12:25:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87)\nb4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3574ddf-abf0-4dbd-a096-a174b96fd29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.330 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 kernel: tapc97c5b11-70: left promiscuous mode
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.342 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3674ba5-7701-419b-98f5-e5be1643e33b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.355 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01ca78ac-f750-43fe-8670-86d7ffeccf86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.357 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7db5edf7-f409-4414-8011-312086be8fda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968102273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c55aa01-fb80-413f-9b78-9507647906b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427665, 'reachable_time': 19920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285699, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 systemd[1]: run-netns-ovnmeta\x2dc97c5b11\x2d7517\x2d46fe\x2da6ca\x2d63894792908c.mount: Deactivated successfully.
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.379 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:25:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.380 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[209a15be-f223-49c6-bf90-0adeaa33314c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.385 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.387 244018 DEBUG nova.virt.libvirt.vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:15Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.388 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.388 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.392 244018 DEBUG nova.objects.instance [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.418 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <uuid>e291d969-fcea-4f60-a478-f7b81a91ccd9</uuid>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <name>instance-00000030</name>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-20106714</nova:name>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:22</nova:creationTime>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <nova:port uuid="07860675-4ac4-43a4-ab6b-bacd17801fc2">
Feb 25 12:25:23 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="serial">e291d969-fcea-4f60-a478-f7b81a91ccd9</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="uuid">e291d969-fcea-4f60-a478-f7b81a91ccd9</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e291d969-fcea-4f60-a478-f7b81a91ccd9_disk">
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config">
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:07:ea:76"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <target dev="tap07860675-4a"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/console.log" append="off"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:23 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:23 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
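The block above is the guest XML that nova's libvirt driver generated for instance e291d969-fcea-4f60-a478-f7b81a91ccd9: a q35 machine with an RBD-backed virtio root disk, an RBD config-drive CD-ROM on a SATA bus, and an OVS-backed tap interface. A minimal sketch of pulling the same definition back out of libvirt once the guest is defined, assuming libvirt-python is installed on the compute node and qemu:///system is reachable (not part of the log itself):

    # Sketch: dump the live domain XML for the instance logged above.
    import libvirt

    UUID = "e291d969-fcea-4f60-a478-f7b81a91ccd9"

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(UUID)
        # XMLDesc() returns the same <domain> document nova logged above.
        print(dom.XMLDesc())
    finally:
        conn.close()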
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.420 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Preparing to wait for external event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.420 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.421 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:23 compute-0 ceph-mon[76335]: pgmap v1243: 305 pgs: 305 active+clean; 423 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.0 MiB/s wr, 377 op/s
Feb 25 12:25:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3323372262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/968102273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.421 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.423 244018 DEBUG nova.virt.libvirt.vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:15Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.424 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.425 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.426 244018 DEBUG os_vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.433 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.433 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07860675-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.434 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07860675-4a, col_values=(('external_ids', {'iface-id': '07860675-4ac4-43a4-ab6b-bacd17801fc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:ea:76', 'vm-uuid': 'e291d969-fcea-4f60-a478-f7b81a91ccd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:23 compute-0 NetworkManager[49836]: <info>  [1772022323.4374] manager: (tap07860675-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.443 244018 INFO os_vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a')
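The three ovsdbapp transactions logged just above (AddBridgeCommand, AddPortCommand, DbSetCommand) are how os-vif plugs the tap device into br-int and stamps it with the neutron port metadata. A rough shell-level equivalent, sketched here in Python with subprocess — not what os-vif itself runs (it goes through the OVSDB IDL), and the port name and external_ids values are simply copied from this log:

    # Sketch: ovs-vsctl equivalent of the os-vif plug transactions above.
    import subprocess

    port = "tap07860675-4a"
    ext_ids = {
        "iface-id": "07860675-4ac4-43a4-ab6b-bacd17801fc2",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:07:ea:76",
        "vm-uuid": "e291d969-fcea-4f60-a478-f7b81a91ccd9",
    }

    # AddBridgeCommand(may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                    "--", "set", "Bridge", "br-int", "datapath_type=system"],
                   check=True)
    # AddPortCommand(may_exist=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    # DbSetCommand on the Interface row
    subprocess.run(["ovs-vsctl", "set", "Interface", port]
                   + [f"external_ids:{k}={v}" for k, v in ext_ids.items()],
                   check=True)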
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.505 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.506 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.506 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:07:ea:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.507 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Using config drive
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.535 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.545 244018 INFO nova.virt.libvirt.driver [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deleting instance files /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a_del
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.545 244018 INFO nova.virt.libvirt.driver [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deletion of /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a_del complete
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.604 244018 INFO nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 0.82 seconds to destroy the instance on the hypervisor.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.604 244018 DEBUG oslo.service.loopingcall [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.605 244018 DEBUG nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.605 244018 DEBUG nova.network.neutron [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.659 244018 INFO nova.virt.libvirt.driver [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deleting instance files /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207_del
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.660 244018 INFO nova.virt.libvirt.driver [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deletion of /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207_del complete
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.731 244018 INFO nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG oslo.service.loopingcall [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:25:23 compute-0 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG nova.network.neutron [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:25:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.3 MiB/s wr, 332 op/s
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.469 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating config drive at /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.476 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps5rn5u30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.517 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updated VIF entry in instance network info cache for port 07860675-4ac4-43a4-ab6b-bacd17801fc2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.519 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.526 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.527 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.527 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.528 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.528 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.529 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.541 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.616 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps5rn5u30" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
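The config drive is built by shelling out to mkisofs with the flags in the command logged above; /tmp/tmps5rn5u30 is the temporary metadata tree nova staged for it. A minimal sketch of the same call (values copied from this run; genisoimage generally accepts the same flags where mkisofs is absent):

    # Sketch: rebuild the config-drive ISO the way nova did above.
    import subprocess

    subprocess.run(
        [
            "/usr/bin/mkisofs",
            "-o", "/var/lib/nova/instances/"
                  "e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config",
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
            "-quiet", "-J", "-r",
            "-V", "config-2",   # the volume label cloud-init probes for
            "/tmp/tmps5rn5u30",
        ],
        check=True,
    )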
Feb 25 12:25:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.644 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.648 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.677 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.679 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.7053] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.7059] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00404|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00405|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.819 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.820 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deleting local config drive /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config because it was imported into RBD.
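Once rbd import returns 0, the config drive exists only as the image vms/e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config and the local copy is deleted. A sketch of confirming the image from Python, assuming the rados/rbd bindings and the same client.openstack keyring this log uses:

    # Sketch: verify the imported config-drive image in the vms pool.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        names = rbd.RBD().list(ioctx)
        assert "e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config" in names
    finally:
        ioctx.close()
        cluster.shutdown()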
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.861 244018 DEBUG nova.network.neutron [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:24 compute-0 systemd-udevd[285539]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:24 compute-0 kernel: tap07860675-4a: entered promiscuous mode
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.8723] manager: (tap07860675-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00406|binding|INFO|Claiming lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 for this chassis.
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00407|binding|INFO|07860675-4ac4-43a4-ab6b-bacd17801fc2: Claiming fa:16:3e:07:ea:76 10.100.0.14
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.879 244018 INFO nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 1.27 seconds to deallocate network for instance.
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.880 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ea:76 10.100.0.14'], port_security=['fa:16:3e:07:ea:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e291d969-fcea-4f60-a478-f7b81a91ccd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07860675-4ac4-43a4-ab6b-bacd17801fc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.881 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07860675-4ac4-43a4-ab6b-bacd17801fc2 in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 bound to our chassis
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.882 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.8852] device (tap07860675-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.8878] device (tap07860675-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.894 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[779b9d43-5afe-4da0-a7ea-47442ab53b56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.895 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.897 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1780b18d-1fa4-4657-a311-f3c36306c4a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8ba17a-3f10-4e20-9f75-ca93aee4c675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00408|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 ovn-installed in OVS
Feb 25 12:25:24 compute-0 ovn_controller[147040]: 2026-02-25T12:25:24Z|00409|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 up in Southbound
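At this point ovn-controller has claimed the logical port and marked it up in the Southbound DB; nova is still waiting on the network-vif-plugged event that neutron emits once it observes the binding go active. A quick sketch of inspecting the binding by hand, assuming ovn-sbctl on a node with access to the SB DB:

    # Sketch: check which chassis holds the logical port claimed above.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=07860675-4ac4-43a4-ab6b-bacd17801fc2"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)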
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.911 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a01e0248-be83-4960-b3bf-a9b1ecf5e02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 systemd-machined[210048]: New machine qemu-53-instance-00000030.
Feb 25 12:25:24 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.935 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f37601-8521-4f36-97c9-12bafbe93aa8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.939 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:24 compute-0 nova_compute[244014]: 2026-02-25 12:25:24.940 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.962 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c8aa73bb-2739-4534-b91f-038433ce1585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc3e762-dc6c-47ce-9576-cb6b4d5b9b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 NetworkManager[49836]: <info>  [1772022324.9688] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/187)
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.988 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a0c67a-60d6-4256-bf4d-a2bf2970c96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.991 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0941addc-b11d-4dda-88d9-95b95cef1ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 NetworkManager[49836]: <info>  [1772022325.0070] device (tap7693903d-d0): carrier: link connected
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b1529e-e8b5-432f-bb92-0cfb0f28eeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8276ab82-9e12-4058-b00b-48f9f174a40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429441, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285807, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e248e757-f563-456f-945c-2b1eca435123]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429441, 'tstamp': 429441}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285808, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0991a23-d5a0-422b-ba46-c54dca2be63d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429441, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6e9b6-f1c0-4833-92d8-55a6ffc8c61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.092 244018 DEBUG oslo_concurrency.processutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.134 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38c43af1-8f0f-4fed-a043-5cd0d290f2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:25 compute-0 NetworkManager[49836]: <info>  [1772022325.1413] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Feb 25 12:25:25 compute-0 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.144 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:25 compute-0 ovn_controller[147040]: 2026-02-25T12:25:25Z|00410|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.159 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.159 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e52780bf-0809-4ff6-86e2-47bdf3d74d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.161 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:25:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.162 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:25:25 compute-0 ceph-mon[76335]: pgmap v1244: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.3 MiB/s wr, 332 op/s
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.500 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022325.4997602, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.500 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Started (Lifecycle Event)
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.523 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.530 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022325.500846, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Paused (Lifecycle Event)
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.535 244018 DEBUG nova.network.neutron [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.549 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.563 244018 INFO nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 1.83 seconds to deallocate network for instance.
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.568 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:25 compute-0 podman[285903]: 2026-02-25 12:25:25.584996794 +0000 UTC m=+0.060566665 container create e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.603 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:25 compute-0 systemd[1]: Started libpod-conmon-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope.
Feb 25 12:25:25 compute-0 podman[285903]: 2026-02-25 12:25:25.54737451 +0000 UTC m=+0.022944411 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:25:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332be3fc7ea941666457dff9d1433e95fd9071ac0ddde7234a55bd6539d7c2bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:25 compute-0 podman[285903]: 2026-02-25 12:25:25.663383343 +0000 UTC m=+0.138953234 container init e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:25:25 compute-0 podman[285903]: 2026-02-25 12:25:25.666911733 +0000 UTC m=+0.142481634 container start e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:25:25 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : New worker (285924) forked
Feb 25 12:25:25 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : Loading success.
Feb 25 12:25:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452448757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.734 244018 DEBUG oslo_concurrency.processutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.739 244018 DEBUG nova.compute.provider_tree [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.758 244018 DEBUG nova.scheduler.client.report [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.780 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.783 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.827 244018 INFO nova.scheduler.client.report [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 22450f11-d042-48c5-941e-fd544e58d84a
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.930 244018 DEBUG oslo_concurrency.processutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:25 compute-0 nova_compute[244014]: 2026-02-25 12:25:25.964 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/452448757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937285555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.519 244018 DEBUG oslo_concurrency.processutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.525 244018 DEBUG nova.compute.provider_tree [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.541 244018 DEBUG nova.scheduler.client.report [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.575 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.617 244018 INFO nova.scheduler.client.report [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 1238794a-063b-4ac0-a7d9-3590353e3207
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.653 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 WARNING nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received unexpected event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with vm_state deleted and task_state None.
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-deleted-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.718 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 WARNING nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received unexpected event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with vm_state deleted and task_state None.
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:26 compute-0 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:27 compute-0 ceph-mon[76335]: pgmap v1245: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Feb 25 12:25:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3937285555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 379 op/s
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.262 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.263 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.300 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.301 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.301 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.302 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.302 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.303 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Processing event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.303 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-deleted-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.304 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.304 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.305 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.306 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.306 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.307 244018 WARNING nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received unexpected event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with vm_state building and task_state spawning.
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.308 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.313 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022328.312849, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Resumed (Lifecycle Event)
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.317 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.321 244018 INFO nova.virt.libvirt.driver [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance spawned successfully.
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.322 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.386 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.390 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.406 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.407 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.407 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.478 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.514 244018 INFO nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 13.21 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.515 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.581 244018 INFO nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 14.73 seconds to build instance.
Feb 25 12:25:28 compute-0 podman[285957]: 2026-02-25 12:25:28.73771177 +0000 UTC m=+0.080400177 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:25:28 compute-0 podman[285958]: 2026-02-25 12:25:28.758886099 +0000 UTC m=+0.093824466 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:25:28 compute-0 nova_compute[244014]: 2026-02-25 12:25:28.810 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:29 compute-0 ceph-mon[76335]: pgmap v1246: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 379 op/s
Feb 25 12:25:29 compute-0 nova_compute[244014]: 2026-02-25 12:25:29.543 244018 DEBUG nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:25:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:29 compute-0 nova_compute[244014]: 2026-02-25 12:25:29.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Feb 25 12:25:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:25:30
Feb 25 12:25:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:25:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:25:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.meta']
Feb 25 12:25:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:25:31 compute-0 ovn_controller[147040]: 2026-02-25T12:25:31Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:53:87 10.100.0.6
Feb 25 12:25:31 compute-0 ovn_controller[147040]: 2026-02-25T12:25:31Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:53:87 10.100.0.6
Feb 25 12:25:31 compute-0 ceph-mon[76335]: pgmap v1247: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 346 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.4 MiB/s wr, 373 op/s
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:25:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:25:31 compute-0 kernel: tapfcfcdd66-6c (unregistering): left promiscuous mode
Feb 25 12:25:31 compute-0 NetworkManager[49836]: <info>  [1772022331.8709] device (tapfcfcdd66-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:31 compute-0 ovn_controller[147040]: 2026-02-25T12:25:31Z|00411|binding|INFO|Releasing lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 from this chassis (sb_readonly=0)
Feb 25 12:25:31 compute-0 nova_compute[244014]: 2026-02-25 12:25:31.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:31 compute-0 ovn_controller[147040]: 2026-02-25T12:25:31Z|00412|binding|INFO|Setting lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 down in Southbound
Feb 25 12:25:31 compute-0 ovn_controller[147040]: 2026-02-25T12:25:31Z|00413|binding|INFO|Removing iface tapfcfcdd66-6c ovn-installed in OVS
Feb 25 12:25:31 compute-0 nova_compute[244014]: 2026-02-25 12:25:31.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.899 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:58:40 10.100.0.9'], port_security=['fa:16:3e:fc:58:40 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:25:31 compute-0 nova_compute[244014]: 2026-02-25 12:25:31.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.906 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:25:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.907 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16e46228-4704-468f-b1ae-5676b104df5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.909 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 12:25:31 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Feb 25 12:25:31 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 12.439s CPU time.
Feb 25 12:25:31 compute-0 systemd-machined[210048]: Machine qemu-49-instance-0000002c terminated.
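
Note: the scope names above are systemd-escaped; the '-' characters inside the machine name qemu-49-instance-0000002c appear as \x2d in the unit name. A minimal sketch of the decoding (systemd_unescape is a hypothetical helper, not a nova or systemd API):

    import re

    def systemd_unescape(name: str) -> str:
        # systemd escapes bytes in unit names as \xNN; map them back.
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d49\x2dinstance\x2d0000002c.scope'))
    # -> machine-qemu-49-instance-0000002c.scope
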
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : haproxy version is 2.8.14-c23fe91
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : path to executable is /usr/sbin/haproxy
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : Exiting Master process...
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : Exiting Master process...
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [ALERT]    (284248) : Current worker (284250) exited with code 143 (Terminated)
Feb 25 12:25:32 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : All workers exited. Exiting... (0)
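
Note: the worker's exit code 143 in the ALERT above follows the usual 128 + signal-number convention, so the worker was ended by SIGTERM during the master's shutdown rather than crashing. In Python terms:

    import signal

    # 128 + SIGTERM(15) == 143: a clean termination, not a segfault or abort.
    print(128 + signal.SIGTERM)  # 143
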
Feb 25 12:25:32 compute-0 systemd[1]: libpod-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148.scope: Deactivated successfully.
Feb 25 12:25:32 compute-0 podman[286024]: 2026-02-25 12:25:32.03440744 +0000 UTC m=+0.046535258 container died e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148-userdata-shm.mount: Deactivated successfully.
Feb 25 12:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd73bb85b3ddca49f944375bbbcd5f0c148a4c567244e730f0a7b2967982d80b-merged.mount: Deactivated successfully.
Feb 25 12:25:32 compute-0 podman[286024]: 2026-02-25 12:25:32.065979104 +0000 UTC m=+0.078106922 container cleanup e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:25:32 compute-0 systemd[1]: libpod-conmon-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148.scope: Deactivated successfully.
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:32 compute-0 podman[286053]: 2026-02-25 12:25:32.127463834 +0000 UTC m=+0.046718733 container remove e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
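
Note: the died, cleanup, remove events above are podman tearing down the haproxy side-car container for the metadata namespace; the privsep reply that follows shows the wrapper output ("Stopping container ... Deleting container ..."). A minimal equivalent of that stop-then-delete flow, as a sketch rather than neutron's actual wrapper script:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99'
    subprocess.run(['podman', 'stop', name], check=True)  # "container died" event
    subprocess.run(['podman', 'rm', name], check=True)    # "container remove" event
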
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ab57b4-409f-4639-b63b-0b630d505425]: (4, ('Wed Feb 25 12:25:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148)\ne24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148\nWed Feb 25 12:25:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148)\ne24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.133 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72b42977-01d8-4203-8dcf-ca9782e7180c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.134 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:32 compute-0 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.148 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcea0f9-157d-4e68-94c1-4b58a690ac06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9f7ec2-4b00-4732-aa02-fa7585649634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13da39f1-a679-4b0a-b60a-8edc2d53ca7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1af9b3-5c20-48dd-810e-ab5881a89956]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427161, 'reachable_time': 28255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286082, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
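
Note: the large reply above is a netlink RTM_NEWLINK dump of the loopback device inside the ovnmeta- namespace, returned through privsep just before the namespace is deleted. Neutron's ip_lib performs this with pyroute2; a minimal sketch of the same link dump, assuming the namespace still exists:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99')
    try:
        for link in ns.get_links():
            # Prints e.g. "lo up", matching the IFLA_IFNAME and
            # 'state' fields in the dump above.
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    finally:
        ns.close()
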
Feb 25 12:25:32 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.183 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:25:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.183 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[046468d3-0103-4565-b4e1-da72a8f20d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.207 244018 DEBUG nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.208 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.208 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.209 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.209 244018 DEBUG nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.210 244018 WARNING nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving.
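
Note: the acquire/release pairs around the "...-events" lock show nova serializing external-event handling per instance; the vif-unplugged event is logged as unexpected because the instance is mid-shelve rather than being deleted, and nova simply drops it. The locking pattern itself is plain oslo.concurrency; a minimal sketch (nova wraps this in its own helpers):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events')
    def _pop_event():
        # Entry and exit of this critical section produce the
        # "acquired ... waited" / "released ... held" DEBUG lines above.
        pass
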
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.556 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance shutdown successfully after 24 seconds.
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.563 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.563 244018 DEBUG nova.objects.instance [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'numa_topology' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.943 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:32 compute-0 nova_compute[244014]: 2026-02-25 12:25:32.943 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.256 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.257 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.286 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.301 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:33 compute-0 ceph-mon[76335]: pgmap v1248: 305 pgs: 305 active+clean; 346 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.4 MiB/s wr, 373 op/s
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.566 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Beginning cold snapshot process
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.573 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.574 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.583 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.584 244018 INFO nova.compute.claims [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.592 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:33 compute-0 nova_compute[244014]: 2026-02-25 12:25:33.745 244018 DEBUG nova.virt.libvirt.imagebackend [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:25:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.031 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] creating snapshot(1299e66f82a04239b9607b3d45305c57) on rbd image(b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:25:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb 25 12:25:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb 25 12:25:34 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.389 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] cloning vms/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk@1299e66f82a04239b9607b3d45305c57 to images/ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.489 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] flattening images/ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
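
Note: the create_snap, clone, and flatten lines above are the cold-snapshot fast path for RBD-backed instances: snapshot the instance disk in the vms pool, clone that snapshot into the images (Glance) pool, then flatten the clone so the parent snapshot can be removed later (see the remove_snap line further down). A hedged reconstruction with the rbd Python bindings, using the names from this log; nova's rbd_utils adds its own connection and error handling:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        with rbd.Image(vms, 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk') as src:
            src.create_snap('1299e66f82a04239b9607b3d45305c57')
            # Clone parents must be protected (pre clone-v2 semantics).
            src.protect_snap('1299e66f82a04239b9607b3d45305c57')
        rbd.RBD().clone(vms, 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk',
                        '1299e66f82a04239b9607b3d45305c57',
                        images, 'ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5')
        with rbd.Image(images, 'ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5') as dst:
            dst.flatten()  # detach the clone from its parent snapshot
    finally:
        cluster.shutdown()
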
Feb 25 12:25:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.815 244018 DEBUG nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.817 244018 DEBUG nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.817 244018 WARNING nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving_image_uploading.
Feb 25 12:25:34 compute-0 nova_compute[244014]: 2026-02-25 12:25:34.906 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300447857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:35 compute-0 nova_compute[244014]: 2026-02-25 12:25:35.430 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
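
Note: the "Running cmd (subprocess)" / "returned: 0" pair is oslo.concurrency's processutils wrapping a ceph df call so the resource tracker can report RBD pool capacity (the mon's dispatch of the same command is visible in the ceph-mon audit lines). A minimal equivalent:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats'], [p['name'] for p in stats['pools']])
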
Feb 25 12:25:35 compute-0 nova_compute[244014]: 2026-02-25 12:25:35.438 244018 DEBUG nova.compute.provider_tree [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:35 compute-0 ceph-mon[76335]: pgmap v1249: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Feb 25 12:25:35 compute-0 ceph-mon[76335]: osdmap e178: 3 total, 3 up, 3 in
Feb 25 12:25:35 compute-0 nova_compute[244014]: 2026-02-25 12:25:35.679 244018 DEBUG nova.scheduler.client.report [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
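
Note: placement turns the inventory above into usable capacity as (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host can hold 32 vCPUs worth of instances. Worked out:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
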
Feb 25 12:25:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Feb 25 12:25:35 compute-0 nova_compute[244014]: 2026-02-25 12:25:35.914 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] removing snapshot(1299e66f82a04239b9607b3d45305c57) on rbd image(b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:25:36 compute-0 nova_compute[244014]: 2026-02-25 12:25:36.520 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:36 compute-0 nova_compute[244014]: 2026-02-25 12:25:36.521 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:36 compute-0 nova_compute[244014]: 2026-02-25 12:25:36.524 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 25 12:25:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb 25 12:25:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1300447857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:36 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb 25 12:25:36 compute-0 nova_compute[244014]: 2026-02-25 12:25:36.946 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:36 compute-0 nova_compute[244014]: 2026-02-25 12:25:36.947 244018 INFO nova.compute.claims [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:37 compute-0 nova_compute[244014]: 2026-02-25 12:25:37.001 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:37 compute-0 nova_compute[244014]: 2026-02-25 12:25:37.002 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:37 compute-0 nova_compute[244014]: 2026-02-25 12:25:37.011 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] creating snapshot(snap) on rbd image(ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:25:37 compute-0 nova_compute[244014]: 2026-02-25 12:25:37.040 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:37 compute-0 nova_compute[244014]: 2026-02-25 12:25:37.065 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 25 12:25:37 compute-0 ceph-mon[76335]: pgmap v1251: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Feb 25 12:25:37 compute-0 ceph-mon[76335]: osdmap e179: 3 total, 3 up, 3 in
Feb 25 12:25:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb 25 12:25:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb 25 12:25:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.9 MiB/s wr, 203 op/s
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.013 244018 DEBUG nova.policy [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
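
Note: the policy DEBUG above is nova asking oslo.policy whether this user (roles reader/member, no admin role) may attach to external networks; the check fails, so external networks are excluded from allocation rather than raising an error. The shape of such a check with oslo.policy directly, as a sketch; the rule string here is an assumed default, not nova's exact wiring:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed default
    creds = {'roles': ['reader', 'member'],
             'project_id': 'c8744bdbc0f1499388aab5f477246beb'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
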
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.074 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022323.072457, 22450f11-d042-48c5-941e-fd544e58d84a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.074 244018 INFO nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] VM Stopped (Lifecycle Event)
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.095 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.096 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.096 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating image(s)
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.116 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.151 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.185 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.189 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.223 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022323.1808317, 1238794a-063b-4ac0-a7d9-3590353e3207 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.224 244018 INFO nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Stopped (Lifecycle Event)
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.226 244018 DEBUG nova.compute.manager [None req-bee6cd0a-00be-4b0e-a459-3c29806908ad - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.227 244018 DEBUG nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.252 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
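
Note: the qemu-img probe above runs under oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30 s CPU cap, guarding the host against pathological or malicious image files. The same limits expressed through processutils, as a sketch:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
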
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.280 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.285 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ec873c3c-bf46-4537-8c29-b23a3133d281_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.846 244018 DEBUG nova.compute.manager [None req-bf77f507-d49e-4eb0-9557-fee9c174e121 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.849 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Successfully created port: 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.855 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:38 compute-0 nova_compute[244014]: 2026-02-25 12:25:38.895 244018 INFO nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] instance snapshotting
Feb 25 12:25:39 compute-0 ceph-mon[76335]: osdmap e180: 3 total, 3 up, 3 in
Feb 25 12:25:39 compute-0 ceph-mon[76335]: pgmap v1254: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.9 MiB/s wr, 203 op/s
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.064 244018 WARNING nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Image not found during snapshot: nova.exception.ImageNotFound: Image 30923a52-0362-49d2-a639-798fbe2c4beb could not be found.
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439742937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.702 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.847s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.707 244018 DEBUG nova.compute.provider_tree [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.765 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Successfully updated port: 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.777 244018 DEBUG nova.scheduler.client.report [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.782 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.783 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.783 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 144 op/s
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.804 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.805 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.848 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.849 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG nova.compute.manager [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-changed-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG nova.compute.manager [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Refreshing instance network info cache due to event network-changed-0b2cc5f4-bf41-4e03-8c24-1e711f742942. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.862 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.862 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
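
The Acquiring/acquired/released triples above are oslo.concurrency named locks; nova keys them on the instance UUID, with an "-events" suffix for the per-instance event dict. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed (lock names copied from the log, function bodies are placeholders):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'e291d969-fcea-4f60-a478-f7b81a91ccd9'

    @lockutils.synchronized(INSTANCE_UUID + '-events')
    def _clear_events():
        # Held just long enough to swap the per-instance event dict out,
        # which is why the log reports "held 0.000s".
        return {}

    def do_terminate_instance():
        # Context-manager form of the same primitive.
        with lockutils.lock(INSTANCE_UUID):
            _clear_events()
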
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.864 244018 INFO nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Terminating instance
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.865 244018 DEBUG nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.870 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.885 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:39 compute-0 nova_compute[244014]: 2026-02-25 12:25:39.994 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.001 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.003 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.004 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating image(s)
Feb 25 12:25:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/439742937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.570 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.602 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.626 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.629 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.650 244018 DEBUG nova.policy [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.680 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
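
The oslo_concurrency.prlimit wrapper in that command line corresponds to processutils.execute() being given resource limits: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps CPU seconds, so a pathological qemu-img info cannot take down the compute service. A sketch of a call that would produce it (paths copied from the log):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,  # --as
                                           cpu_time=30))             # --cpu
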
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.680 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.681 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.681 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.701 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:40 compute-0 nova_compute[244014]: 2026-02-25 12:25:40.706 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
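
The repeated "rbd image ... does not exist" probes and the rbd import that follows can be reproduced with the rados/rbd Python bindings that ship with Ceph; the connection parameters here are copied from the command above (--id openstack, pool vms). A sketch, assuming the bindings are installed:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx, '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk',
                               read_only=True) as image:
                    print(image.size())
            except rbd.ImageNotFound:
                # Matches the DEBUG probes above; nova then shells out to
                # `rbd import` and later resizes the image to root_gb.
                print('image not created yet')
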
Feb 25 12:25:41 compute-0 nova_compute[244014]: 2026-02-25 12:25:41.731 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
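
The network_info payload logged above is plain JSON, so the addressing details fall out with a short parse. A sketch, where network_info_text is a hypothetical variable holding the logged list:

    import json

    vifs = json.loads(network_info_text)  # hypothetical: the JSON logged above
    for vif in vifs:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        # -> 0b2cc5f4-bf41-4e03-8c24-1e711f742942 fa:16:3e:1f:2e:f8
        #    ['10.100.0.3'] mtu 1442
        print(vif['id'], vif['address'], ips,
              'mtu', vif['network']['meta']['mtu'])
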
Feb 25 12:25:41 compute-0 nova_compute[244014]: 2026-02-25 12:25:41.738 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Successfully created port: 58ea35ed-ff5d-4827-a801-431f7536d78d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:41 compute-0 ceph-mon[76335]: pgmap v1255: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 144 op/s
Feb 25 12:25:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 444 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.6 MiB/s wr, 138 op/s
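
The ceph-mon "df" dispatch at 12:25:40 and the pgmap summaries here report the same cluster-level statistics that nova's RBD image backend queries for capacity. A sketch of the equivalent query from the compute node, assuming the JSON field names emitted by recent Ceph releases:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
         'df', '--format', 'json'])
    stats = json.loads(raw)['stats']
    # Should agree with the pgmap line: ~60 GiB total, ~59 GiB avail.
    print(stats['total_bytes'], stats['total_avail_bytes'])
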
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002031964281383588 of space, bias 1.0, pg target 0.6095892844150764 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003248144253363575 of space, bias 1.0, pg target 0.9744432760090725 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.932041486911633e-07 of space, bias 4.0, pg target 0.001071844978429396 quantized to 16 (current 16)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:25:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
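
Every pg_autoscaler line above follows the same arithmetic: pg target = usage ratio * bias * (target PGs per OSD * OSD count), after which the result is quantized to a power of two and left at the current pg_num unless the target is far enough away (3x by default) to justify a change. Assuming the default 100 PGs per OSD and 3 OSDs (the factor of 300 that makes every logged value come out exactly on this 60 GiB cluster), the targets are reproducible:

    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        # e.g. pool 'vms': 0.002031964281383588 * 1.0 * 300
        #                = 0.6095892844150764, as logged.
        return usage_ratio * bias * osds * target_pg_per_osd

    for pool, ratio, bias in [('.mgr', 7.185749983720779e-06, 1.0),
                              ('vms', 0.002031964281383588, 1.0),
                              ('images', 0.003248144253363575, 1.0),
                              ('cephfs.cephfs.meta', 8.932041486911633e-07, 4.0)]:
        print(pool, pg_target(ratio, bias))
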
Feb 25 12:25:42 compute-0 nova_compute[244014]: 2026-02-25 12:25:42.236 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:42 compute-0 nova_compute[244014]: 2026-02-25 12:25:42.237 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance network_info: |[{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:25:42 compute-0 nova_compute[244014]: 2026-02-25 12:25:42.237 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:42 compute-0 nova_compute[244014]: 2026-02-25 12:25:42.238 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Refreshing network info cache for port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:42 compute-0 sshd-session[286456]: Received disconnect from 45.148.10.157 port 54892:11:  [preauth]
Feb 25 12:25:42 compute-0 sshd-session[286456]: Disconnected from authenticating user root 45.148.10.157 port 54892 [preauth]
Feb 25 12:25:42 compute-0 ceph-mon[76335]: pgmap v1256: 305 pgs: 305 active+clean; 444 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.6 MiB/s wr, 138 op/s
Feb 25 12:25:42 compute-0 nova_compute[244014]: 2026-02-25 12:25:42.996 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Successfully updated port: 58ea35ed-ff5d-4827-a801-431f7536d78d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:43 compute-0 kernel: tap07860675-4a (unregistering): left promiscuous mode
Feb 25 12:25:43 compute-0 NetworkManager[49836]: <info>  [1772022343.0125] device (tap07860675-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.012 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ec873c3c-bf46-4537-8c29-b23a3133d281_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:43 compute-0 ovn_controller[147040]: 2026-02-25T12:25:43Z|00414|binding|INFO|Releasing lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 from this chassis (sb_readonly=0)
Feb 25 12:25:43 compute-0 ovn_controller[147040]: 2026-02-25T12:25:43Z|00415|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 down in Southbound
Feb 25 12:25:43 compute-0 ovn_controller[147040]: 2026-02-25T12:25:43Z|00416|binding|INFO|Removing iface tap07860675-4a ovn-installed in OVS
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ea:76 10.100.0.14'], port_security=['fa:16:3e:07:ea:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e291d969-fcea-4f60-a478-f7b81a91ccd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07860675-4ac4-43a4-ab6b-bacd17801fc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.034 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07860675-4ac4-43a4-ab6b-bacd17801fc2 in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.036 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.038 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5027d710-b499-491b-b444-7f2a96b7228d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.039 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore
Feb 25 12:25:43 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Feb 25 12:25:43 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 10.855s CPU time.
Feb 25 12:25:43 compute-0 systemd-machined[210048]: Machine qemu-53-instance-00000030 terminated.
Feb 25 12:25:43 compute-0 systemd-udevd[286468]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:43 compute-0 NetworkManager[49836]: <info>  [1772022343.0839] manager: (tap07860675-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.132 244018 DEBUG nova.compute.manager [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-changed-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.132 244018 DEBUG nova.compute.manager [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Refreshing instance network info cache due to event network-changed-58ea35ed-ff5d-4827-a801-431f7536d78d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.133 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.134 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.134 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Refreshing network info cache for port 58ea35ed-ff5d-4827-a801-431f7536d78d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.138 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.138 244018 DEBUG nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:43 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : haproxy version is 2.8.14-c23fe91
Feb 25 12:25:43 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : path to executable is /usr/sbin/haproxy
Feb 25 12:25:43 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [WARNING]  (285922) : Exiting Master process...
Feb 25 12:25:43 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [ALERT]    (285922) : Current worker (285924) exited with code 143 (Terminated)
Feb 25 12:25:43 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [WARNING]  (285922) : All workers exited. Exiting... (0)
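
The worker's exit code 143 is the usual 128 + signal-number encoding, i.e. SIGTERM delivered by the container stop rather than a crash; a one-liner to decode it:

    import signal

    assert 143 - 128 == signal.SIGTERM
    print(signal.Signals(143 - 128).name)  # -> 'SIGTERM'
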
Feb 25 12:25:43 compute-0 systemd[1]: libpod-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope: Deactivated successfully.
Feb 25 12:25:43 compute-0 podman[286513]: 2026-02-25 12:25:43.171616716 +0000 UTC m=+0.041253038 container died e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.178 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.183 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a-userdata-shm.mount: Deactivated successfully.
Feb 25 12:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-332be3fc7ea941666457dff9d1433e95fd9071ac0ddde7234a55bd6539d7c2bc-merged.mount: Deactivated successfully.
Feb 25 12:25:43 compute-0 podman[286513]: 2026-02-25 12:25:43.198560728 +0000 UTC m=+0.068197060 container cleanup e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:25:43 compute-0 systemd[1]: libpod-conmon-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope: Deactivated successfully.
Feb 25 12:25:43 compute-0 podman[286576]: 2026-02-25 12:25:43.254504542 +0000 UTC m=+0.037245705 container remove e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e22a0b-b0be-43e7-8058-575bd7fcc225]: (4, ('Wed Feb 25 12:25:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a)\ne8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a\nWed Feb 25 12:25:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a)\ne8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.260 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[536cc2b3-1f23-4348-b349-357586b0a92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.261 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:43 compute-0 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.276 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8aa1b-01ed-46ae-b895-573675ac9f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f23253b5-1137-408f-80a8-710665535799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8bfea3-a861-4656-b7b1-1cf8f3f1eba6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.290 244018 INFO nova.virt.libvirt.driver [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance destroyed successfully.
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.291 244018 INFO nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.291 244018 DEBUG nova.objects.instance [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.293 244018 DEBUG nova.objects.instance [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.296 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45ebb007-2dee-47d0-a0d7-2b3d0309fb85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429436, 'reachable_time': 16281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286653, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.306 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:25:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.306 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fa2db7-1be4-4123-92a8-01297a3ef527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
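
The namespace teardown logged by remove_netns goes through pyroute2, the library neutron's privsep helpers wrap (the RTM_NEWLINK dump above is pyroute2's netlink output). A minimal sketch of the same cleanup, namespace name copied from the log:

    from pyroute2 import netns

    NS = 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'

    # Listing first avoids an ENOENT race if the agent already removed it,
    # as it did at 12:25:43.306 above.
    if NS in netns.listnetns():
        netns.remove(NS)
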
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.318 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.350 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.613 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.615 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.615 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 9.4 MiB/s wr, 165 op/s
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.877 244018 DEBUG nova.virt.libvirt.vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.878 244018 DEBUG nova.network.os_vif_util [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.878 244018 DEBUG nova.network.os_vif_util [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.879 244018 DEBUG os_vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.882 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07860675-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
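
DelPortCommand is ovsdbapp's transactional wrapper around an OVSDB port delete; if_exists=True makes it idempotent, which matters here because ovn_controller is tearing down the same tap interface concurrently (entry 00416 above). A sketch of issuing the equivalent call directly; the socket path is an assumption, os-vif reads it from configuration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged DelPortCommand(port=tap07860675-4a,
    # bridge=br-int, if_exists=True).
    api.del_port('tap07860675-4a', bridge='br-int',
                 if_exists=True).execute(check_error=True)
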
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.885 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.885 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Ensure instance console log exists: /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start _get_guest_xml network_info=[{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Ensure instance console log exists: /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.893 244018 INFO os_vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a')
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.912 244018 WARNING nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.918 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.918 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.926 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:25:43 compute-0 nova_compute[244014]: 2026-02-25 12:25:43.928 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.230 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.343 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.350 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.351 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.499 244018 INFO nova.virt.libvirt.driver [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process
Feb 25 12:25:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.991 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535292027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb 25 12:25:44 compute-0 nova_compute[244014]: 2026-02-25 12:25:44.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:45 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.023 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.039 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.042 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.199 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Snapshot image upload complete
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.200 244018 DEBUG nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.204 244018 INFO nova.virt.libvirt.driver [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deleting instance files /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9_del
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.205 244018 INFO nova.virt.libvirt.driver [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deletion of /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9_del complete
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.209 244018 DEBUG nova.virt.libvirt.imagebackend [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.305 244018 INFO nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 5.44 seconds to destroy the instance on the hypervisor.
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.306 244018 DEBUG oslo.service.loopingcall [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.307 244018 DEBUG nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.307 244018 DEBUG nova.network.neutron [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.317 244018 INFO nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Shelve offloading
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.332 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.333 244018 DEBUG nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.335 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.336 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.337 244018 DEBUG nova.network.neutron [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:45 compute-0 ceph-mon[76335]: pgmap v1257: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 9.4 MiB/s wr, 165 op/s
Feb 25 12:25:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/535292027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:45 compute-0 ceph-mon[76335]: osdmap e181: 3 total, 3 up, 3 in
Feb 25 12:25:45 compute-0 sudo[286803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:25:45 compute-0 sudo[286803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:45 compute-0 sudo[286803]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:45 compute-0 sudo[286828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:25:45 compute-0 sudo[286828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.499 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(1242784c9e544b4a9c689b1d2aa99219) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:25:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902237722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.550 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.555 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:37Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.556 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.558 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.560 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:45 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.584 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <uuid>ec873c3c-bf46-4537-8c29-b23a3133d281</uuid>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <name>instance-00000032</name>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:name>tempest-MultipleCreateTestJSON-server-2038151869-2</nova:name>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:43</nova:creationTime>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <nova:port uuid="0b2cc5f4-bf41-4e03-8c24-1e711f742942">
Feb 25 12:25:45 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="serial">ec873c3c-bf46-4537-8c29-b23a3133d281</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="uuid">ec873c3c-bf46-4537-8c29-b23a3133d281</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ec873c3c-bf46-4537-8c29-b23a3133d281_disk">
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config">
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:1f:2e:f8"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <target dev="tap0b2cc5f4-bf"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/console.log" append="off"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Preparing to wait for external event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.594 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.595 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:37Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.595 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.596 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.596 244018 DEBUG os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.599 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.599 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b2cc5f4-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b2cc5f4-bf, col_values=(('external_ids', {'iface-id': '0b2cc5f4-bf41-4e03-8c24-1e711f742942', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:2e:f8', 'vm-uuid': 'ec873c3c-bf46-4537-8c29-b23a3133d281'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:45 compute-0 NetworkManager[49836]: <info>  [1772022345.6083] manager: (tap0b2cc5f4-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.613 244018 INFO os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf')
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.672 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:1f:2e:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.674 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Using config drive
Feb 25 12:25:45 compute-0 nova_compute[244014]: 2026-02-25 12:25:45.698 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Feb 25 12:25:46 compute-0 sudo[286828]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.097 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updated VIF entry in instance network info cache for port 0b2cc5f4-bf41-4e03-8c24-1e711f742942. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.098 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.116 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.135 244018 DEBUG nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.135 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.136 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.136 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.137 244018 DEBUG nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.137 244018 WARNING nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received unexpected event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with vm_state active and task_state deleting.
Feb 25 12:25:46 compute-0 sudo[286927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:25:46 compute-0 sudo[286927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:46 compute-0 sudo[286927]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:46 compute-0 sudo[286952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:25:46 compute-0 sudo[286952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.311 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating config drive at /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.320 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_mv_swr_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 25 12:25:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3902237722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:25:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.373 244018 DEBUG nova.network.neutron [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:46 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.409 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@1242784c9e544b4a9c689b1d2aa99219 to images/5c4d8498-0ce5-4898-96f9-042972db1ddb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.450 244018 INFO nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 1.14 seconds to deallocate network for instance.
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.463 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_mv_swr_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.491 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.495 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.524 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/5c4d8498-0ce5-4898-96f9-042972db1ddb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.569 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.571 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.571 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.571210669 +0000 UTC m=+0.052729434 container create b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.554051513 +0000 UTC m=+0.035570258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:46 compute-0 systemd[1]: Started libpod-conmon-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope.
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.668 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.668 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance network_info: |[{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start _get_guest_xml network_info=[{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.685 244018 WARNING nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.691 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.696 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.704 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.704 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.705 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.705 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.707 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.708 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.708 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.709 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.709 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.710 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.712 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.717 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.718556259 +0000 UTC m=+0.200075064 container init b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.725647889 +0000 UTC m=+0.207166664 container start b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 12:25:46 compute-0 bold_kare[287098]: 167 167
Feb 25 12:25:46 compute-0 systemd[1]: libpod-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope: Deactivated successfully.
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.740882271 +0000 UTC m=+0.222401056 container attach b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.741196079 +0000 UTC m=+0.222714834 container died b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.781 244018 DEBUG oslo_concurrency.processutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db3bddc775f896f3dabe0e77d5dbe14bc4c1a709fed230308146e9ff20ca5ed-merged.mount: Deactivated successfully.
Feb 25 12:25:46 compute-0 podman[287045]: 2026-02-25 12:25:46.848345352 +0000 UTC m=+0.329864097 container remove b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:25:46 compute-0 systemd[1]: libpod-conmon-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope: Deactivated successfully.
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.913 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.914 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deleting local config drive /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config because it was imported into RBD.
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.927 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(1242784c9e544b4a9c689b1d2aa99219) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:25:46 compute-0 NetworkManager[49836]: <info>  [1772022346.9584] manager: (tap0b2cc5f4-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Feb 25 12:25:46 compute-0 kernel: tap0b2cc5f4-bf: entered promiscuous mode
Feb 25 12:25:46 compute-0 ovn_controller[147040]: 2026-02-25T12:25:46Z|00417|binding|INFO|Claiming lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 for this chassis.
Feb 25 12:25:46 compute-0 ovn_controller[147040]: 2026-02-25T12:25:46Z|00418|binding|INFO|0b2cc5f4-bf41-4e03-8c24-1e711f742942: Claiming fa:16:3e:1f:2e:f8 10.100.0.3
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.975 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2e:f8 10.100.0.3'], port_security=['fa:16:3e:1f:2e:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ec873c3c-bf46-4537-8c29-b23a3133d281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0b2cc5f4-bf41-4e03-8c24-1e711f742942) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.977 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.978 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:46 compute-0 ovn_controller[147040]: 2026-02-25T12:25:46Z|00419|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 ovn-installed in OVS
Feb 25 12:25:46 compute-0 ovn_controller[147040]: 2026-02-25T12:25:46Z|00420|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 up in Southbound
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:46 compute-0 nova_compute[244014]: 2026-02-25 12:25:46.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d395df89-7870-4994-8578-34c228025b0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.992 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc97c5b11-71 in ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.994 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc97c5b11-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d99ba5d-cf2d-4343-863f-e2f630eddca2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:46 compute-0 systemd-udevd[287206]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.996 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb754f0a-8281-4e3c-801c-84fad379631a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:46 compute-0 systemd-machined[210048]: New machine qemu-54-instance-00000032.
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.0064] device (tap0b2cc5f4-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.0071] device (tap0b2cc5f4-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.005 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[52a5ccdc-49b2-4718-93bd-36e89b46b752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000032.
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.018 244018 DEBUG nova.network.neutron [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Updating instance_info_cache with network_info: [{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.025742402 +0000 UTC m=+0.046918499 container create 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.031 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40bc70cc-7c0b-45cf-8ac1-22917d8ff1de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 systemd[1]: Started libpod-conmon-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope.
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.045 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.060 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dc901aba-866c-4eae-b1f0-aa9ef23b6bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.0659] manager: (tapc97c5b11-70): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Feb 25 12:25:47 compute-0 systemd-udevd[287210]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.064 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5630f102-9c0e-4c2d-8b13-b15d46af3572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.092 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[11921cbd-005d-4891-97ee-9c6572b86190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.092659416 +0000 UTC m=+0.113835523 container init 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.095 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e494e7-25bb-455a-9a27-205613ee97b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.010708567 +0000 UTC m=+0.031884694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.112168538 +0000 UTC m=+0.133344645 container start 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.1130] device (tapc97c5b11-70): carrier: link connected
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.115454941 +0000 UTC m=+0.136631048 container attach 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.116 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022332.1154826, b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.116 244018 INFO nova.compute.manager [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] VM Stopped (Lifecycle Event)
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.117 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[df5e2b16-d348-4c67-8d48-fd3755f8f31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f193c05-537d-4788-bfb1-71d5de6f2d49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287255, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3edf1e77-6c46-43e3-92c2-2f7860368891]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:610a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431652, 'tstamp': 431652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287256, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2310cc77-8f2c-4e23-8967-cedf4dcd98f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287257, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.153 244018 DEBUG nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.157 244018 DEBUG nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09d9d5f7-cddf-4bec-b7b1-1ecb20d7fac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.186 244018 INFO nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[894032d3-d3de-4a4c-a45e-406fb956d956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.203 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.203 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.2065] manager: (tapc97c5b11-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 kernel: tapc97c5b11-70: entered promiscuous mode
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.210 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 ovn_controller[147040]: 2026-02-25T12:25:47Z|00421|binding|INFO|Releasing lport db412aa7-4ad4-4eb8-b61f-dd3e71d5329d from this chassis (sb_readonly=0)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.222 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.222 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[366719e8-1c4d-419e-90bb-0e9362e968df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.223 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:25:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.223 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'env', 'PROCESS_TAG=haproxy-c97c5b11-7517-46fe-a6ca-63894792908c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c97c5b11-7517-46fe-a6ca-63894792908c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159697921' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.313 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314089494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.334 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.341 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.365 244018 DEBUG oslo_concurrency.processutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb 25 12:25:47 compute-0 ceph-mon[76335]: pgmap v1259: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Feb 25 12:25:47 compute-0 ceph-mon[76335]: osdmap e182: 3 total, 3 up, 3 in
Feb 25 12:25:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4159697921' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3314089494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.378 244018 DEBUG nova.compute.provider_tree [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.396 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(5c4d8498-0ce5-4898-96f9-042972db1ddb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.430 244018 DEBUG nova.scheduler.client.report [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.474 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.502 244018 INFO nova.scheduler.client.report [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance e291d969-fcea-4f60-a478-f7b81a91ccd9
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:25:47 compute-0 sweet_proskuriakova[287224]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:25:47 compute-0 sweet_proskuriakova[287224]: --> All data devices are unavailable
Feb 25 12:25:47 compute-0 systemd[1]: libpod-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope: Deactivated successfully.
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.568537834 +0000 UTC m=+0.589713941 container died 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.577 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:47 compute-0 podman[287364]: 2026-02-25 12:25:47.61680097 +0000 UTC m=+0.099997051 container create 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5-merged.mount: Deactivated successfully.
Feb 25 12:25:47 compute-0 podman[287364]: 2026-02-25 12:25:47.535273673 +0000 UTC m=+0.018469794 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:25:47 compute-0 podman[287193]: 2026-02-25 12:25:47.645912784 +0000 UTC m=+0.667088891 container remove 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.644 244018 DEBUG nova.compute.manager [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.646 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.646 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.647 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.647 244018 DEBUG nova.compute.manager [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Processing event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:47 compute-0 systemd[1]: libpod-conmon-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope: Deactivated successfully.
Feb 25 12:25:47 compute-0 systemd[1]: Started libpod-conmon-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope.
Feb 25 12:25:47 compute-0 sudo[286952]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed524380e5edf8481fd2be2b83ddd592e929273edf1401db6f69e8d7d1c59845/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:47 compute-0 podman[287364]: 2026-02-25 12:25:47.713356513 +0000 UTC m=+0.196552614 container init 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:25:47 compute-0 podman[287364]: 2026-02-25 12:25:47.717371877 +0000 UTC m=+0.200567958 container start 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:25:47 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : New worker (287439) forked
Feb 25 12:25:47 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : Loading success.
Feb 25 12:25:47 compute-0 sudo[287394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:25:47 compute-0 sudo[287394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:47 compute-0 sudo[287394]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 255 op/s
Feb 25 12:25:47 compute-0 sudo[287464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:25:47 compute-0 sudo[287464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.860 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.860 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8598304, ec873c3c-bf46-4537-8c29-b23a3133d281 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.861 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Started (Lifecycle Event)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.865 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.868 244018 INFO nova.virt.libvirt.driver [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance spawned successfully.
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.868 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206562268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.886 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.887 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.887 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.888 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.889 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.890 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.895 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.905 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <uuid>3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</uuid>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <name>instance-00000031</name>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:name>tempest-MultipleCreateTestJSON-server-2038151869-1</nova:name>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:46</nova:creationTime>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <nova:port uuid="58ea35ed-ff5d-4827-a801-431f7536d78d">
Feb 25 12:25:47 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <system>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="serial">3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="uuid">3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </system>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <os>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </os>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <features>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </features>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk">
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config">
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:25:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:03:e4:94"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <target dev="tap58ea35ed-ff"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/console.log" append="off"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <video>
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </video>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:25:47 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:25:47 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:25:47 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:25:47 compute-0 nova_compute[244014]: </domain>
Feb 25 12:25:47 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Preparing to wait for external event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.910 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ea35ed-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.911 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58ea35ed-ff, col_values=(('external_ids', {'iface-id': '58ea35ed-ff5d-4827-a801-431f7536d78d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:e4:94', 'vm-uuid': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 NetworkManager[49836]: <info>  [1772022347.9129] manager: (tap58ea35ed-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.918 244018 INFO os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff')
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8600066, ec873c3c-bf46-4537-8c29-b23a3133d281 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Paused (Lifecycle Event)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.959 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.964 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8652508, ec873c3c-bf46-4537-8c29-b23a3133d281 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.964 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Resumed (Lifecycle Event)
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.971 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 9.88 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.971 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:47 compute-0 nova_compute[244014]: 2026-02-25 12:25:47.995 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.000 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.000 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.001 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:03:e4:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.001 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Using config drive
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.021 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.036 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 14.67 seconds to build instance.
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.039 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.068 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.075144502 +0000 UTC m=+0.041066693 container create a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:25:48 compute-0 systemd[1]: Started libpod-conmon-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope.
Feb 25 12:25:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.056430332 +0000 UTC m=+0.022352503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.16198422 +0000 UTC m=+0.127906421 container init a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.168159364 +0000 UTC m=+0.134081565 container start a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.170905662 +0000 UTC m=+0.136827863 container attach a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:25:48 compute-0 inspiring_raman[287550]: 167 167
Feb 25 12:25:48 compute-0 systemd[1]: libpod-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope: Deactivated successfully.
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.173128885 +0000 UTC m=+0.139051096 container died a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6eec19b16acb79f6922783fde96875702d154d7892be7583ba291d2cf132e38-merged.mount: Deactivated successfully.
Feb 25 12:25:48 compute-0 podman[287535]: 2026-02-25 12:25:48.219645262 +0000 UTC m=+0.185567473 container remove a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:25:48 compute-0 systemd[1]: libpod-conmon-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope: Deactivated successfully.
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.272 244018 DEBUG nova.compute.manager [req-40f69660-4c29-4ef1-bd5b-b3e9f5df5050 req-7b1e29fe-5370-4cdc-ae66-575e16513f88 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-deleted-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 25 12:25:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb 25 12:25:48 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb 25 12:25:48 compute-0 ceph-mon[76335]: osdmap e183: 3 total, 3 up, 3 in
Feb 25 12:25:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:25:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:25:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/206562268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.468378901 +0000 UTC m=+0.090165793 container create caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.434602505 +0000 UTC m=+0.056389457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:48 compute-0 systemd[1]: Started libpod-conmon-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope.
Feb 25 12:25:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.59731641 +0000 UTC m=+0.219103342 container init caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.605044899 +0000 UTC m=+0.226831791 container start caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.608884498 +0000 UTC m=+0.230671450 container attach caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.677 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating config drive at /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.684 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpumloveo3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.743 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.745 244018 DEBUG nova.objects.instance [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.778 244018 DEBUG nova.virt.libvirt.vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-818536104',display_name='tempest-DeleteServersTestJSON-server-818536104',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-818536104',id=44,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-f8ey9dlk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member',shelved_at='2026-02-25T12:25:45.200330',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:33Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.779 244018 DEBUG nova.network.os_vif_util [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.781 244018 DEBUG nova.network.os_vif_util [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.782 244018 DEBUG os_vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.786 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcfcdd66-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.796 244018 INFO os_vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c')
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.836 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpumloveo3" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.869 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:48 compute-0 nova_compute[244014]: 2026-02-25 12:25:48.874 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:48 compute-0 cranky_black[287590]: {
Feb 25 12:25:48 compute-0 cranky_black[287590]:     "0": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:         {
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "devices": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "/dev/loop3"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             ],
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_name": "ceph_lv0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_size": "21470642176",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "name": "ceph_lv0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "tags": {
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_name": "ceph",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.crush_device_class": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.encrypted": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.objectstore": "bluestore",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_id": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.vdo": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.with_tpm": "0"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             },
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "vg_name": "ceph_vg0"
Feb 25 12:25:48 compute-0 cranky_black[287590]:         }
Feb 25 12:25:48 compute-0 cranky_black[287590]:     ],
Feb 25 12:25:48 compute-0 cranky_black[287590]:     "1": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:         {
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "devices": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "/dev/loop4"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             ],
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_name": "ceph_lv1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_size": "21470642176",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "name": "ceph_lv1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "tags": {
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_name": "ceph",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.crush_device_class": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.encrypted": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.objectstore": "bluestore",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_id": "1",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.vdo": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.with_tpm": "0"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             },
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "vg_name": "ceph_vg1"
Feb 25 12:25:48 compute-0 cranky_black[287590]:         }
Feb 25 12:25:48 compute-0 cranky_black[287590]:     ],
Feb 25 12:25:48 compute-0 cranky_black[287590]:     "2": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:         {
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "devices": [
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "/dev/loop5"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             ],
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_name": "ceph_lv2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_size": "21470642176",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "name": "ceph_lv2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "tags": {
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.cluster_name": "ceph",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.crush_device_class": "",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.encrypted": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.objectstore": "bluestore",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osd_id": "2",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.vdo": "0",
Feb 25 12:25:48 compute-0 cranky_black[287590]:                 "ceph.with_tpm": "0"
Feb 25 12:25:48 compute-0 cranky_black[287590]:             },
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "type": "block",
Feb 25 12:25:48 compute-0 cranky_black[287590]:             "vg_name": "ceph_vg2"
Feb 25 12:25:48 compute-0 cranky_black[287590]:         }
Feb 25 12:25:48 compute-0 cranky_black[287590]:     ]
Feb 25 12:25:48 compute-0 cranky_black[287590]: }
Feb 25 12:25:48 compute-0 systemd[1]: libpod-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope: Deactivated successfully.
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.905348378 +0000 UTC m=+0.527135230 container died caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b-merged.mount: Deactivated successfully.
Feb 25 12:25:48 compute-0 podman[287573]: 2026-02-25 12:25:48.942259363 +0000 UTC m=+0.564046215 container remove caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:25:48 compute-0 systemd[1]: libpod-conmon-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope: Deactivated successfully.
Feb 25 12:25:48 compute-0 sudo[287464]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:49 compute-0 sudo[287667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.050 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.052 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deleting local config drive /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config because it was imported into RBD.
Feb 25 12:25:49 compute-0 sudo[287667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:49 compute-0 sudo[287667]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:49 compute-0 kernel: tap58ea35ed-ff: entered promiscuous mode
Feb 25 12:25:49 compute-0 NetworkManager[49836]: <info>  [1772022349.0992] manager: (tap58ea35ed-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Feb 25 12:25:49 compute-0 systemd-udevd[287237]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:49 compute-0 ovn_controller[147040]: 2026-02-25T12:25:49Z|00422|binding|INFO|Claiming lport 58ea35ed-ff5d-4827-a801-431f7536d78d for this chassis.
Feb 25 12:25:49 compute-0 ovn_controller[147040]: 2026-02-25T12:25:49Z|00423|binding|INFO|58ea35ed-ff5d-4827-a801-431f7536d78d: Claiming fa:16:3e:03:e4:94 10.100.0.7
Feb 25 12:25:49 compute-0 sudo[287695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.107 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:e4:94 10.100.0.7'], port_security=['fa:16:3e:03:e4:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=58ea35ed-ff5d-4827-a801-431f7536d78d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.108 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 58ea35ed-ff5d-4827-a801-431f7536d78d in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis
Feb 25 12:25:49 compute-0 sudo[287695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:49 compute-0 ovn_controller[147040]: 2026-02-25T12:25:49Z|00424|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d ovn-installed in OVS
Feb 25 12:25:49 compute-0 ovn_controller[147040]: 2026-02-25T12:25:49Z|00425|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d up in Southbound
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.111 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:49 compute-0 NetworkManager[49836]: <info>  [1772022349.1119] device (tap58ea35ed-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:25:49 compute-0 NetworkManager[49836]: <info>  [1772022349.1124] device (tap58ea35ed-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.122 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d70379b9-8c1e-43b7-b531-303b236143fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 systemd-machined[210048]: New machine qemu-55-instance-00000031.
Feb 25 12:25:49 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.141 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c7144-72ea-4865-9868-035ee58b2904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.144 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[360e8de9-9b2d-4e08-ba99-a0dd0591c088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.145 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Deleting instance files /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_del
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.146 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Deletion of /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_del complete
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.165 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[05b2e913-41ff-44be-b368-48452e79384c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2305c3-1ac7-4974-b37d-458243c9bd4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287743, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c74cd329-3bd2-4726-a12d-04df196918b2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431658, 'tstamp': 431658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287744, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431660, 'tstamp': 431660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287744, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.191 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.193 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.193 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.263 244018 INFO nova.scheduler.client.report [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.331 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.332 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:49 compute-0 ceph-mon[76335]: pgmap v1262: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 255 op/s
Feb 25 12:25:49 compute-0 ceph-mon[76335]: osdmap e184: 3 total, 3 up, 3 in
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.424140071 +0000 UTC m=+0.055215004 container create 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:25:49 compute-0 systemd[1]: Started libpod-conmon-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.471 244018 DEBUG oslo_concurrency.processutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.396349654 +0000 UTC m=+0.027424627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.512 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022349.511386, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Started (Lifecycle Event)
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.523483512 +0000 UTC m=+0.154558445 container init 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.528784342 +0000 UTC m=+0.159859275 container start 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.532165938 +0000 UTC m=+0.163240861 container attach 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:25:49 compute-0 compassionate_jepsen[287813]: 167 167
Feb 25 12:25:49 compute-0 systemd[1]: libpod-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope: Deactivated successfully.
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.535014238 +0000 UTC m=+0.166089161 container died 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.539 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.546 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022349.511532, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Paused (Lifecycle Event)
Feb 25 12:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1532f849c4a6033fcc8126a72acc55f4f9f8ee435d311c0351bee119183f7fe-merged.mount: Deactivated successfully.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.577 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:49 compute-0 podman[287773]: 2026-02-25 12:25:49.58100902 +0000 UTC m=+0.212083943 container remove 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:25:49 compute-0 systemd[1]: libpod-conmon-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope: Deactivated successfully.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.611 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:49 compute-0 podman[287858]: 2026-02-25 12:25:49.711962606 +0000 UTC m=+0.034752804 container create 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:25:49 compute-0 systemd[1]: Started libpod-conmon-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope.
Feb 25 12:25:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:25:49 compute-0 podman[287858]: 2026-02-25 12:25:49.789726066 +0000 UTC m=+0.112516294 container init 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:25:49 compute-0 podman[287858]: 2026-02-25 12:25:49.697321642 +0000 UTC m=+0.020111860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:25:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 9.3 MiB/s wr, 258 op/s
Feb 25 12:25:49 compute-0 podman[287858]: 2026-02-25 12:25:49.797725923 +0000 UTC m=+0.120516151 container start 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:25:49 compute-0 podman[287858]: 2026-02-25 12:25:49.801571361 +0000 UTC m=+0.124361589 container attach 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.866 244018 DEBUG nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.867 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:49 compute-0 nova_compute[244014]: 2026-02-25 12:25:49.869 244018 WARNING nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received unexpected event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with vm_state active and task_state None.
Feb 25 12:25:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124586646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.073 244018 DEBUG oslo_concurrency.processutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.079 244018 DEBUG nova.compute.provider_tree [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.099 244018 DEBUG nova.scheduler.client.report [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.125 244018 INFO nova.virt.libvirt.driver [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.126 244018 INFO nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 6.29 seconds to snapshot the instance on the hypervisor.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.130 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.211 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 41.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.366 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.366 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Processing event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.368 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.368 244018 WARNING nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received unexpected event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with vm_state building and task_state spawning.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.370 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.374 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022350.373836, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.374 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Resumed (Lifecycle Event)
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.375 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.379 244018 INFO nova.virt.libvirt.driver [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance spawned successfully.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.380 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.397 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.401 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:25:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1124586646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:25:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb 25 12:25:50 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.433 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.469 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 10.47 seconds to spawn the instance on the hypervisor.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.470 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.517 244018 DEBUG nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.530 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 17.17 seconds to build instance.
Feb 25 12:25:50 compute-0 nova_compute[244014]: 2026-02-25 12:25:50.555 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:50 compute-0 lvm[287951]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:25:50 compute-0 lvm[287951]: VG ceph_vg0 finished
Feb 25 12:25:50 compute-0 lvm[287953]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:25:50 compute-0 lvm[287953]: VG ceph_vg1 finished
Feb 25 12:25:50 compute-0 lvm[287955]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:25:50 compute-0 lvm[287955]: VG ceph_vg2 finished
Feb 25 12:25:50 compute-0 lvm[287957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:25:50 compute-0 lvm[287957]: VG ceph_vg2 finished
Feb 25 12:25:50 compute-0 quizzical_lamarr[287872]: {}
Feb 25 12:25:50 compute-0 lvm[287959]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:25:50 compute-0 lvm[287959]: VG ceph_vg2 finished
Feb 25 12:25:50 compute-0 systemd[1]: libpod-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Deactivated successfully.
Feb 25 12:25:50 compute-0 podman[287858]: 2026-02-25 12:25:50.741021799 +0000 UTC m=+1.063811997 container died 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 12:25:50 compute-0 systemd[1]: libpod-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Consumed 1.190s CPU time.
Feb 25 12:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06-merged.mount: Deactivated successfully.
Feb 25 12:25:50 compute-0 podman[287858]: 2026-02-25 12:25:50.779770476 +0000 UTC m=+1.102560674 container remove 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:25:50 compute-0 systemd[1]: libpod-conmon-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Deactivated successfully.
Feb 25 12:25:50 compute-0 sudo[287695]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:25:50 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:25:50 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:50 compute-0 sudo[287973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:25:50 compute-0 sudo[287973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:25:50 compute-0 sudo[287973]: pam_unix(sudo:session): session closed for user root
Feb 25 12:25:51 compute-0 ceph-mon[76335]: pgmap v1264: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 9.3 MiB/s wr, 258 op/s
Feb 25 12:25:51 compute-0 ceph-mon[76335]: osdmap e185: 3 total, 3 up, 3 in
Feb 25 12:25:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:25:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 456 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 12 MiB/s wr, 552 op/s
Feb 25 12:25:53 compute-0 ceph-mon[76335]: pgmap v1266: 305 pgs: 305 active+clean; 456 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 12 MiB/s wr, 552 op/s
Feb 25 12:25:53 compute-0 nova_compute[244014]: 2026-02-25 12:25:53.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.0 MiB/s wr, 480 op/s
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.159 244018 DEBUG nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.177 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.178 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.228 244018 INFO nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.229 244018 DEBUG nova.objects.instance [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.232 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.314 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.315 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.316 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.316 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.317 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.318 244018 INFO nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Terminating instance
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.320 244018 DEBUG nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.336 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.336 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.346 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.347 244018 INFO nova.compute.claims [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:54 compute-0 kernel: tap58ea35ed-ff (unregistering): left promiscuous mode
Feb 25 12:25:54 compute-0 NetworkManager[49836]: <info>  [1772022354.3583] device (tap58ea35ed-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00426|binding|INFO|Releasing lport 58ea35ed-ff5d-4827-a801-431f7536d78d from this chassis (sb_readonly=0)
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00427|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d down in Southbound
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00428|binding|INFO|Removing iface tap58ea35ed-ff ovn-installed in OVS
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.372 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:e4:94 10.100.0.7'], port_security=['fa:16:3e:03:e4:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=58ea35ed-ff5d-4827-a801-431f7536d78d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 58ea35ed-ff5d-4827-a801-431f7536d78d in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c
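
The "Matched UPDATE: PortBindingUpdatedEvent(...)" record above is ovsdbapp's row-event machinery comparing a Port_Binding row against its old version. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base; the run() handler is hypothetical:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the parameters printed in the debug record above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Hypothetical handler: react to the binding change, e.g.
            # provision or tear down metadata for row.datapath.
            pass
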
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a23d6e9a-fe6c-4a74-9343-c07699178d08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Feb 25 12:25:54 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 4.384s CPU time.
Feb 25 12:25:54 compute-0 systemd-machined[210048]: Machine qemu-55-instance-00000031 terminated.
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.420 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc41a7-1ff5-4222-8698-3e9b134a29d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[448ab6a9-a54d-48ad-a957-ffb451b0c52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[134d773f-58bf-4b58-9b5e-409a1e3384e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.466 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[789257e8-946f-41f2-8e5e-72149cfdff73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288010, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.474 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.474 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.475 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.475 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.476 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
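
The acquire/release pairs above are oslo_concurrency's lockutils wrapper around a critical section. A minimal sketch of the pattern that produces these records, with hypothetical bodies (only the lock names are taken from the log):

    from oslo_concurrency import lockutils

    # Logs "Acquiring lock ... by ...", "Lock ... acquired ... waited Ns",
    # then "Lock ... 'released' ... held Ns" around the wrapped call.
    @lockutils.synchronized('ec873c3c-bf46-4537-8c29-b23a3133d281-events')
    def _clear_events():
        pass  # hypothetical critical section

    # Equivalent context-manager form, as used for "compute_resources":
    with lockutils.lock('compute_resources'):
        pass  # hypothetical critical section
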
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.477 244018 INFO nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Terminating instance
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.479 244018 DEBUG nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.484 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee5ec55-dabf-4c6e-a80b-82c85082d71e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431658, 'tstamp': 431658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288011, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431660, 'tstamp': 431660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288011, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.486 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.492 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
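
The DelPortCommand/AddPortCommand/DbSetCommand records above are ovsdbapp transactions. Because the commands carry if_exists/may_exist flags, replaying them against an already-converged switch is a no-op, which is what "Transaction caused no change" records. A rough sketch of the same commands (the agent runs each in its own single-command transaction; they are batched here for brevity, and the connection setup is assumed):

    from ovsdbapp.schema.open_vswitch import impl_idl

    # 'conn' is an ovsdbapp Connection to the local ovsdb-server (assumed).
    ovs = impl_idl.OvsdbIdl(conn)
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapc97c5b11-70', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapc97c5b11-70', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapc97c5b11-70',
            ('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'})))
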
Feb 25 12:25:54 compute-0 kernel: tap0b2cc5f4-bf (unregistering): left promiscuous mode
Feb 25 12:25:54 compute-0 NetworkManager[49836]: <info>  [1772022354.5261] device (tap0b2cc5f4-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00429|binding|INFO|Releasing lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 from this chassis (sb_readonly=0)
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00430|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 down in Southbound
Feb 25 12:25:54 compute-0 ovn_controller[147040]: 2026-02-25T12:25:54Z|00431|binding|INFO|Removing iface tap0b2cc5f4-bf ovn-installed in OVS
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.545 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2e:f8 10.100.0.3'], port_security=['fa:16:3e:1f:2e:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ec873c3c-bf46-4537-8c29-b23a3133d281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0b2cc5f4-bf41-4e03-8c24-1e711f742942) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.546 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.548 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97c5b11-7517-46fe-a6ca-63894792908c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1880ff0-c44e-4211-9136-e20814ef28dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.551 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace which is not needed anymore
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.566 244018 INFO nova.virt.libvirt.driver [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance destroyed successfully.
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.566 244018 DEBUG nova.objects.instance [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.570 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:54 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000032.scope: Deactivated successfully.
Feb 25 12:25:54 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000032.scope: Consumed 7.498s CPU time.
Feb 25 12:25:54 compute-0 systemd-machined[210048]: Machine qemu-54-instance-00000032 terminated.
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.606 244018 DEBUG nova.virt.libvirt.vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:50Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.607 244018 DEBUG nova.network.os_vif_util [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.608 244018 DEBUG nova.network.os_vif_util [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.608 244018 DEBUG os_vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.610 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.611 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ea35ed-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.620 244018 INFO os_vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff')
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.682 244018 INFO nova.virt.libvirt.driver [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.716 244018 INFO nova.virt.libvirt.driver [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance destroyed successfully.
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.717 244018 DEBUG nova.objects.instance [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:54 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : haproxy version is 2.8.14-c23fe91
Feb 25 12:25:54 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : path to executable is /usr/sbin/haproxy
Feb 25 12:25:54 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [WARNING]  (287418) : Exiting Master process...
Feb 25 12:25:54 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [ALERT]    (287418) : Current worker (287439) exited with code 143 (Terminated)
Feb 25 12:25:54 compute-0 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [WARNING]  (287418) : All workers exited. Exiting... (0)
Feb 25 12:25:54 compute-0 systemd[1]: libpod-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope: Deactivated successfully.
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.733 244018 DEBUG nova.virt.libvirt.vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-25T12:25:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:48Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.734 244018 DEBUG nova.network.os_vif_util [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.736 244018 DEBUG nova.network.os_vif_util [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.736 244018 DEBUG os_vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 podman[288062]: 2026-02-25 12:25:54.740501579 +0000 UTC m=+0.068087838 container died 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.740 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b2cc5f4-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.751 244018 INFO os_vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf')
Feb 25 12:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed524380e5edf8481fd2be2b83ddd592e929273edf1401db6f69e8d7d1c59845-merged.mount: Deactivated successfully.
Feb 25 12:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1-userdata-shm.mount: Deactivated successfully.
Feb 25 12:25:54 compute-0 podman[288062]: 2026-02-25 12:25:54.779868024 +0000 UTC m=+0.107454283 container cleanup 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:25:54 compute-0 systemd[1]: libpod-conmon-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope: Deactivated successfully.
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.840 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:25:54 compute-0 podman[288150]: 2026-02-25 12:25:54.862831311 +0000 UTC m=+0.058546247 container remove 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19876bc5-91d0-4512-bf57-88198c7de211]: (4, ('Wed Feb 25 12:25:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1)\n26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1\nWed Feb 25 12:25:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1)\n26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.871 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5de2e2-69e5-4fc8-98fd-47a8fd886c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.872 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:25:54 compute-0 kernel: tapc97c5b11-70: left promiscuous mode
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 nova_compute[244014]: 2026-02-25 12:25:54.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.888 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4518b2c8-d986-4a83-b01f-d3db8f9e3ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0387d0c-fb04-48af-972d-03c45a069415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf079a03-6f3e-4777-aac0-fb481bf64e72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.916 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db91499b-df57-455e-bf44-03d3b760c4af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431646, 'reachable_time': 44850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288191, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dc97c5b11\x2d7517\x2d46fe\x2da6ca\x2d63894792908c.mount: Deactivated successfully.
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.922 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
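
The remove_netns call above reaches the kernel through neutron's privsep daemon; a minimal sketch of the equivalent operation, assuming pyroute2's netns helper is what ultimately services it:

    from pyroute2 import netns

    # Deletes the named network namespace, as in the
    # "Namespace ovnmeta-... deleted." record above.
    netns.remove('ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c')
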
Feb 25 12:25:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.922 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dc683313-ff2b-4ed5-8e78-3bcf75bb4905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:25:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:25:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 25 12:25:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb 25 12:25:55 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb 25 12:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.078 244018 INFO nova.virt.libvirt.driver [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deleting instance files /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_del
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.079 244018 INFO nova.virt.libvirt.driver [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deletion of /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_del complete
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.084 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(1bd836acae524e0d8fac760064da62b9) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
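
The create_snap above goes through nova's rbd_utils wrapper over the librbd Python bindings. A standalone sketch of the same snapshot; the pool name 'vms' is an assumption (nova's usual images_rbd_pool), while the image and snapshot names come from the log record:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:  # pool name assumed
            with rbd.Image(ioctx, 'b8086e43-4c45-422f-a3b5-fa665c256b30_disk') as image:
                image.create_snap('1bd836acae524e0d8fac760064da62b9')
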
Feb 25 12:25:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141398883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.125 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.126 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.126 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.128 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
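
The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.558s" pair above is oslo_concurrency.processutils timing a shell-out. A minimal sketch of the same ceph df invocation and its JSON parse:

    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)  # 'stats' and 'pools' sections of ceph df output
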
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.132 244018 DEBUG nova.compute.provider_tree [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.149 244018 DEBUG nova.scheduler.client.report [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.186 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.195 244018 INFO nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 0.87 seconds to destroy the instance on the hypervisor.
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.196 244018 DEBUG oslo.service.loopingcall [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.197 244018 DEBUG nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.198 244018 DEBUG nova.network.neutron [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.250 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.251 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.262 244018 INFO nova.virt.libvirt.driver [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deleting instance files /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281_del
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.264 244018 INFO nova.virt.libvirt.driver [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deletion of /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281_del complete
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.303 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.329 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.339 244018 INFO nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 0.86 seconds to destroy the instance on the hypervisor.
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.340 244018 DEBUG oslo.service.loopingcall [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.341 244018 DEBUG nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.341 244018 DEBUG nova.network.neutron [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.416 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.418 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.419 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating image(s)
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.451 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:55 compute-0 ceph-mon[76335]: pgmap v1267: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.0 MiB/s wr, 480 op/s
Feb 25 12:25:55 compute-0 ceph-mon[76335]: osdmap e186: 3 total, 3 up, 3 in
Feb 25 12:25:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3141398883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.485 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.518 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.522 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.554 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.554 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.556 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.556 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.558 244018 WARNING nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received unexpected event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with vm_state active and task_state deleting.
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.602 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.602 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.603 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.604 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.635 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.640 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a826c2fd-1af8-4b55-b801-90ce87d04466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:55 compute-0 nova_compute[244014]: 2026-02-25 12:25:55.750 244018 DEBUG nova.policy [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.3 MiB/s wr, 416 op/s
Feb 25 12:25:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.421 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.422 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb 25 12:25:56 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.462 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.526 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@1bd836acae524e0d8fac760064da62b9 to images/47933427-b31f-475b-a243-3521fd903d10 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.579 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.579 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.585 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.586 244018 INFO nova.compute.claims [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.683 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a826c2fd-1af8-4b55-b801-90ce87d04466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.720 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/47933427-b31f-475b-a243-3521fd903d10 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.838 244018 DEBUG nova.network.neutron [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.846 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.976 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Successfully created port: 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:56 compute-0 nova_compute[244014]: 2026-02-25 12:25:56.980 244018 INFO nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 1.78 seconds to deallocate network for instance.
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.081 244018 DEBUG nova.network.neutron [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.083 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.127 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.165 244018 DEBUG nova.objects.instance [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.168 244018 INFO nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 1.83 seconds to deallocate network for instance.
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.179 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.180 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.180 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 WARNING nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received unexpected event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with vm_state active and task_state deleting.
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.182 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-deleted-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.184 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.184 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Ensure instance console log exists: /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.186 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.201 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(1bd836acae524e0d8fac760064da62b9) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.223 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:57 compute-0 ceph-mon[76335]: pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.3 MiB/s wr, 416 op/s
Feb 25 12:25:57 compute-0 ceph-mon[76335]: osdmap e187: 3 total, 3 up, 3 in
Feb 25 12:25:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb 25 12:25:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb 25 12:25:57 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.547 244018 DEBUG nova.compute.manager [req-5609c235-524b-45b9-abb1-7137b5747bd5 req-b90cff67-f2fd-4b93-a03a-f6cd7dea4077 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-deleted-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.576 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(47933427-b31f-475b-a243-3521fd903d10) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:25:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23727402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.673 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.680 244018 DEBUG nova.compute.provider_tree [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.708 244018 DEBUG nova.scheduler.client.report [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.736 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.737 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.739 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.793 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.793 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:25:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 329 op/s
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.821 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.824 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Successfully updated port: 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.846 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.846 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.847 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.848 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.870 244018 DEBUG oslo_concurrency.processutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.960 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.962 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.962 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating image(s)
Feb 25 12:25:57 compute-0 nova_compute[244014]: 2026-02-25 12:25:57.984 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.008 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.034 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.038 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.070 244018 DEBUG nova.policy [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.073 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.119 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.120 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.120 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.121 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.139 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.142 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.165 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022343.1000125, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.165 244018 INFO nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Stopped (Lifecycle Event)
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.186 244018 DEBUG nova.compute.manager [None req-92343db1-01ce-4928-a1f5-ffa60b4c7d5a - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:25:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788305962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.510 244018 DEBUG oslo_concurrency.processutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.514 244018 DEBUG nova.compute.provider_tree [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb 25 12:25:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb 25 12:25:58 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb 25 12:25:58 compute-0 ceph-mon[76335]: osdmap e188: 3 total, 3 up, 3 in
Feb 25 12:25:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/23727402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1788305962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.530 244018 DEBUG nova.scheduler.client.report [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.536 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.570 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.573 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.617 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.658 244018 INFO nova.scheduler.client.report [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.725 244018 DEBUG nova.objects.instance [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.746 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.746 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Ensure instance console log exists: /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.760 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.801 244018 DEBUG oslo_concurrency.processutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:58 compute-0 nova_compute[244014]: 2026-02-25 12:25:58.845 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Successfully created port: fea2b354-8a21-4bff-bd90-431f8a17aa19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:25:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:25:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085999945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.382 244018 DEBUG oslo_concurrency.processutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.388 244018 DEBUG nova.compute.provider_tree [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.405 244018 DEBUG nova.scheduler.client.report [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
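As a worked example of what that inventory dict implies: placement derives schedulable capacity as (total - reserved) * allocation_ratio, so this host can accept 32 VCPU, 7167 MB and 52.2 GB of allocations. A minimal check with the numbers copied from the log line:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2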
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.433 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.474 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.491 244018 INFO nova.scheduler.client.report [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance ec873c3c-bf46-4537-8c29-b23a3133d281
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.495 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.496 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance network_info: |[{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
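The network_info blob logged above is plain JSON-shaped data; a hypothetical helper to pull the fixed addresses out of it (field names taken from the entry itself):

    def fixed_ips(network_info):
        ips = []
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                ips.extend(ip['address']
                           for ip in subnet['ips'] if ip['type'] == 'fixed')
        return ips

    # For the cache entry above: fixed_ips(network_info) == ['10.100.0.4']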
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.500 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start _get_guest_xml network_info=[{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.510 244018 WARNING nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.516 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.516 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.521 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.522 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
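The two probes above distinguish the cgroup hierarchies: v1 mounts one directory per controller, v2 lists enabled controllers in a single file. A sketch of equivalent checks against the standard kernel paths (nova's own implementation may differ in detail); on this host v1 lacks the controller and v2 has it, matching the log:

    import os

    def has_cgroupsv1_cpu_controller():
        # v1 exposes a per-controller mount point.
        return os.path.isdir('/sys/fs/cgroup/cpu')

    def has_cgroupsv2_cpu_controller():
        # v2 lists enabled controllers in one whitespace-separated file.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False   # no unified (v2) hierarchy mounted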
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.523 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.523 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.527 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
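The topology walk above (preferred 0:0:0, limits of 65536 per dimension, one vCPU) reduces to enumerating sockets*cores*threads factorizations of the vCPU count. An illustrative stand-in for that enumeration, not nova's exact code:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        upper = lambda m: range(1, min(vcpus, m) + 1)
        for s, c, t in itertools.product(upper(max_sockets),
                                         upper(max_cores),
                                         upper(max_threads)):
            if s * c * t == vcpus:   # keep exact factorizations only
                yield (s, c, t)

    # vcpus=1 yields exactly one topology, (1, 1, 1), as logged above.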
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.531 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.570 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:25:59 compute-0 ceph-mon[76335]: pgmap v1272: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 329 op/s
Feb 25 12:25:59 compute-0 ceph-mon[76335]: osdmap e189: 3 total, 3 up, 3 in
Feb 25 12:25:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3085999945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.669 244018 DEBUG nova.compute.manager [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-changed-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.677 244018 DEBUG nova.compute.manager [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Refreshing instance network info cache due to event network-changed-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Refreshing network info cache for port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:25:59 compute-0 podman[288703]: 2026-02-25 12:25:59.729759502 +0000 UTC m=+0.071815184 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:25:59 compute-0 nova_compute[244014]: 2026-02-25 12:25:59.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:25:59 compute-0 podman[288713]: 2026-02-25 12:25:59.764787773 +0000 UTC m=+0.108808830 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:25:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 211 op/s
Feb 25 12:25:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567944676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.069 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
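nova runs ceph mon dump to learn the monitor endpoints that later appear as <host> elements in the guest disk XML. A sketch of extracting them; the public_addr layout ("ip:port/nonce") follows older mon dump output, and newer releases also expose a public_addrs vector, so treat the field handling as an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # '192.168.122.100:6789/0' -> ['192.168.122.100', '6789']
    hosts = [m['public_addr'].split('/')[0].rsplit(':', 1) for m in mons]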
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.137 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
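The "does not exist" line is the normal negative probe before the config drive image is created. With the python rados/rbd bindings the same check looks roughly like this (pool and image names from the log; client id and conf path mirror the ceph commands above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image raises ImageNotFound if it is absent.
        rbd.Image(ioctx,
                  'a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config').close()
        exists = True
    except rbd.ImageNotFound:
        exists = False      # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()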
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.141 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.250 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Successfully updated port: fea2b354-8a21-4bff-bd90-431f8a17aa19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.270 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.472 244018 INFO nova.virt.libvirt.driver [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.473 244018 INFO nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 6.20 seconds to snapshot the instance on the hypervisor.
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.479 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2567944676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227801658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.691 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.693 244018 DEBUG nova.virt.libvirt.vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:55Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.694 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.695 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.697 244018 DEBUG nova.objects.instance [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.717 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <uuid>a826c2fd-1af8-4b55-b801-90ce87d04466</uuid>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <name>instance-00000033</name>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-131355373</nova:name>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:25:59</nova:creationTime>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <nova:port uuid="4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf">
Feb 25 12:26:00 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="serial">a826c2fd-1af8-4b55-b801-90ce87d04466</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="uuid">a826c2fd-1af8-4b55-b801-90ce87d04466</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk">
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config">
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b6:1d:3c"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <target dev="tap4bf8f3ad-0e"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/console.log" append="off"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
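Once _get_guest_xml has produced the <domain> document above, launching it through the libvirt python bindings is conceptually two calls. A minimal sketch, not nova's full spawn path (which also wires up events, device plugging and rollback):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(guest_xml)   # guest_xml: the <domain> logged above
        dom.create()                      # boots instance-00000033
    finally:
        conn.close()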
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.718 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Preparing to wait for external event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.718 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.719 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.719 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
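The events lock above registers interest in network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf before the VIF is plugged, so the notification cannot be lost in the window between plug and wait. An illustrative stand-in for the pattern using plain threading (nova's implementation is eventlet-based):

    import threading

    events = {}

    def prepare_for_instance_event(name):
        events[name] = threading.Event()      # register before acting

    def external_instance_event(name):
        events[name].set()                    # neutron notification arrives

    def wait_for_instance_event(name, timeout=300):
        if not events[name].wait(timeout):
            raise TimeoutError(name)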
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.721 244018 DEBUG nova.virt.libvirt.vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:55Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.721 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.723 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.723 244018 DEBUG os_vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.726 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.730 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bf8f3ad-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.730 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bf8f3ad-0e, col_values=(('external_ids', {'iface-id': '4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:1d:3c', 'vm-uuid': 'a826c2fd-1af8-4b55-b801-90ce87d04466'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:00 compute-0 NetworkManager[49836]: <info>  [1772022360.7336] manager: (tap4bf8f3ad-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.744 244018 INFO os_vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e')
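The plug sequence above is os-vif's OVS plugin at work: an idempotent AddBridgeCommand for br-int (a no-op here, hence "Transaction caused no change"), then an AddPortCommand plus a DbSetCommand that stamps the Interface row's external_ids with the Neutron port ID, MAC, and instance UUID; the iface-id is what ovn-controller later matches to claim the logical port. A minimal sketch of the equivalent wiring done by hand with ovs-vsctl, assuming ovs-vsctl is on PATH; names and IDs are copied from the log, not a general recipe:

    # Sketch: ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
    # transaction logged above. All values are taken from this log.
    import subprocess

    dev = "tap4bf8f3ad-0e"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", dev,
         "--", "set", "Interface", dev,
         "external_ids:iface-id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:b6:1d:3c",
         "external_ids:vm-uuid=a826c2fd-1af8-4b55-b801-90ce87d04466"],
        check=True,
    )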
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.819 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:b6:1d:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.819 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Using config drive
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.849 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.864 244018 DEBUG nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:26:00 compute-0 nova_compute[244014]: 2026-02-25 12:26:00.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.147 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.470 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating config drive at /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.478 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp38szqe_b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.550 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updated VIF entry in instance network info cache for port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.551 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.569 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.616 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp38szqe_b" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.652 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.657 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:01 compute-0 ceph-mon[76335]: pgmap v1274: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 211 op/s
Feb 25 12:26:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3227801658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 463 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 14 MiB/s wr, 393 op/s
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.834 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.835 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deleting local config drive /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config because it was imported into RBD.
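The lines above show the RBD-backed config-drive path: Nova stages the metadata tree under /tmp/tmp38szqe_b, builds an ISO9660 volume labelled config-2 with mkisofs, imports it into the vms pool as <instance-uuid>_disk.config, and then deletes the local file. A minimal sketch of those two steps as run from Python, assuming the mkisofs and rbd binaries; the paths and pool are the ones in this log:

    # Sketch: the config-drive build and RBD import commands logged above,
    # reproduced with subprocess.
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config")
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmp38szqe_b"],  # staged metadata tree from the log
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )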
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.886 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.916 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.917 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance network_info: |[{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:01 compute-0 kernel: tap4bf8f3ad-0e: entered promiscuous mode
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.921 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start _get_guest_xml network_info=[{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:01 compute-0 NetworkManager[49836]: <info>  [1772022361.9225] manager: (tap4bf8f3ad-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:01 compute-0 ovn_controller[147040]: 2026-02-25T12:26:01Z|00432|binding|INFO|Claiming lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf for this chassis.
Feb 25 12:26:01 compute-0 ovn_controller[147040]: 2026-02-25T12:26:01Z|00433|binding|INFO|4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf: Claiming fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.931 244018 WARNING nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.935 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:1d:3c 10.100.0.4'], port_security=['fa:16:3e:b6:1d:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a826c2fd-1af8-4b55-b801-90ce87d04466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.937 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 bound to our chassis
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.943 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.944 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:01 compute-0 ovn_controller[147040]: 2026-02-25T12:26:01Z|00434|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf ovn-installed in OVS
Feb 25 12:26:01 compute-0 ovn_controller[147040]: 2026-02-25T12:26:01Z|00435|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf up in Southbound
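ovn-controller claims the logical port as soon as the tap interface shows up in OVS with a matching external_ids:iface-id, then marks it up in the Southbound database; that up transition is what Neutron's OVN driver ultimately reports back to Nova as the network-vif-plugged event. The binding can be checked from the chassis; a sketch, assuming ovn-sbctl can reach the local Southbound connection:

    # Sketch: inspect the Port_Binding row ovn-controller just claimed.
    # The logical_port value comes straight from the log above.
    import subprocess

    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf"],
        check=True,
    )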
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.955 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.956 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a542f6be-5ce0-4eb4-9e66-976179e5afdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.957 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.959 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ffdd9def-a7da-4da4-b7d2-06761290cfff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.957 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.959 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.959 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.960 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9871f67a-5abe-468e-9af2-e0c25f03892a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.962 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.962 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.963 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.963 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.964 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.964 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
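The nova.virt.hardware lines above trace guest CPU topology selection for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0, caps at 65536), the only factorization of one vCPU is sockets=1, cores=1, threads=1. A toy re-derivation of that enumeration, illustrating the logged search rather than Nova's actual implementation:

    # Toy illustration: enumerate sockets*cores*threads factorizations of
    # the vCPU count and keep those within the (here unconstrained) limits.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log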
Feb 25 12:26:01 compute-0 systemd-machined[210048]: New machine qemu-56-instance-00000033.
Feb 25 12:26:01 compute-0 nova_compute[244014]: 2026-02-25 12:26:01.969 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.977 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaf90a-0747-47d9-830e-dfad2d82ba13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:01 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Feb 25 12:26:01 compute-0 systemd-udevd[288881]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.002 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[048854ef-0418-44e8-b78a-389e2d82736c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 NetworkManager[49836]: <info>  [1772022362.0090] device (tap4bf8f3ad-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:02 compute-0 NetworkManager[49836]: <info>  [1772022362.0102] device (tap4bf8f3ad-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.027 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1afddf7e-e429-43df-bbac-7638735ab72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.029 244018 DEBUG nova.compute.manager [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-changed-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG nova.compute.manager [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Refreshing instance network info cache due to event network-changed-fea2b354-8a21-4bff-bd90-431f8a17aa19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.031 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Refreshing network info cache for port fea2b354-8a21-4bff-bd90-431f8a17aa19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:02 compute-0 systemd-udevd[288884]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.033 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb5ea11-4f47-4f45-bf9c-c57544170021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 NetworkManager[49836]: <info>  [1772022362.0353] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.067 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0b013e80-98dd-4d85-86c1-d348f6595361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.070 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[acbba05d-ce46-4a5d-a5ba-6cb9d4ea9748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 NetworkManager[49836]: <info>  [1772022362.0917] device (tap7693903d-d0): carrier: link connected
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.098 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4c4ee0-a36a-44d4-8ef7-72414e274211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.110 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[248e0952-30af-4c81-8dda-2726e10684fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433149, 'reachable_time': 31366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288911, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9d3407-3ce2-46fe-974c-81183d1a4c8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433149, 'tstamp': 433149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288913, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.145 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed21c935-af64-4ab9-9244-9929b39373d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433149, 'reachable_time': 31366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288932, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66869389-2c8a-4946-bb8a-4dabdfadc636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23a46632-4851-483b-9e6a-c18a57cd217d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.212 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.213 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.214 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:02 compute-0 NetworkManager[49836]: <info>  [1772022362.2165] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Feb 25 12:26:02 compute-0 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.220 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
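The provisioning steps since "Provisioning metadata for network" reduce to: create the ovnmeta- namespace, create a veth pair, move one end (tap7693903d-d1) inside it, and plug the outer end (tap7693903d-d0) into br-int with the metadata port's iface-id so OVN delivers 169.254.169.254 traffic into the namespace. A rough CLI-level sketch of what the agent does through pyroute2 and ovsdbapp, assuming root plus iproute2 and ovs-vsctl; address assignment inside the namespace is omitted:

    # Sketch of the datapath provisioning logged above. Names and the
    # metadata port's iface-id are taken from this log.
    import subprocess

    net_id = "7693903d-d5e2-4b50-a39b-bbbcc4148329"
    ns = f"ovnmeta-{net_id}"
    outer, inner = "tap7693903d-d0", "tap7693903d-d1"
    meta_port = "6dc5897c-8765-434f-a79d-19523884d8ae"  # from DbSetCommand

    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "-n", ns, "link", "set", inner, "up"],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", outer,
         "--", "set", "Interface", outer,
         f"external_ids:iface-id={meta_port}"],
    ):
        subprocess.run(cmd, check=True)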
Feb 25 12:26:02 compute-0 ovn_controller[147040]: 2026-02-25T12:26:02Z|00436|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.224 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8693c16-6f1b-4b80-bece-11a78831af7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.225 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:26:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.226 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
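The haproxy_cfg dump above is the per-network proxy configuration the agent writes before the rootwrap call on the last line: inside the ovnmeta- namespace, haproxy binds 169.254.169.254:80 and forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, tagging each request with X-OVN-Network-ID so the agent can resolve the requesting instance. Stripped of rootwrap, the launch is just haproxy under ip netns exec; a minimal sketch assuming root and the paths from the log:

    # Sketch: run haproxy with the generated config inside the network's
    # ovnmeta- namespace. The real agent goes through neutron-rootwrap.
    import subprocess

    net_id = "7693903d-d5e2-4b50-a39b-bbbcc4148329"
    conf = f"/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net_id}", "haproxy", "-f", conf],
        check=True,
    )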
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815433461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.533 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
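Nova shells out to ceph mon dump here to learn the monitor addresses it needs when rendering RBD disk sources in the guest XML. A short sketch of the same call plus JSON parsing; the "mons" key layout is the standard ceph mon dump format, stated here as an assumption rather than taken from this log:

    # Sketch: fetch and parse monitor addresses the way the logged
    # "ceph mon dump --format=json" invocation does. Assumes the ceph CLI
    # and the client.openstack credentials from the log.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    for mon in json.loads(out).get("mons", []):
        print(mon["name"], mon.get("public_addr"))  # assumed key names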
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.557 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.561 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.625 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:02 compute-0 podman[288964]: 2026-02-25 12:26:02.567098881 +0000 UTC m=+0.048232866 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:02 compute-0 podman[288964]: 2026-02-25 12:26:02.698864681 +0000 UTC m=+0.179998606 container create c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:26:02 compute-0 ceph-mon[76335]: pgmap v1275: 305 pgs: 305 active+clean; 463 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 14 MiB/s wr, 393 op/s
Feb 25 12:26:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1815433461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.743 244018 INFO nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.744 244018 DEBUG nova.objects.instance [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:02 compute-0 systemd[1]: Started libpod-conmon-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope.
Feb 25 12:26:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f375a03ac1bcd5215d8d83e4e0e939faf3fd84985807e733642b344e8fa21f6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.832 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022362.8320043, a826c2fd-1af8-4b55-b801-90ce87d04466 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.833 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Started (Lifecycle Event)
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.861 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.866 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022362.8325772, a826c2fd-1af8-4b55-b801-90ce87d04466 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Paused (Lifecycle Event)
Feb 25 12:26:02 compute-0 podman[288964]: 2026-02-25 12:26:02.878033571 +0000 UTC m=+0.359167506 container init c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:26:02 compute-0 podman[288964]: 2026-02-25 12:26:02.885236305 +0000 UTC m=+0.366370210 container start c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.899 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.904 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:02 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : New worker (289066) forked
Feb 25 12:26:02 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : Loading success.
Feb 25 12:26:02 compute-0 nova_compute[244014]: 2026-02-25 12:26:02.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] During sync_power_state the instance has a pending task (spawning). Skip.
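The "DB power_state: 0, VM power_state: 3" values in the sync message two lines up are nova's integer power-state enum (0 is NOSTATE, 3 is PAUSED in nova.compute.power_state). A sketch of the decision being logged, with the enum values copied in for readability:

# Sketch: the power-state reconciliation decision logged above.
NOSTATE = 0   # nova.compute.power_state.NOSTATE
PAUSED = 3    # nova.compute.power_state.PAUSED

db_power_state, vm_power_state = NOSTATE, PAUSED  # values from the log
task_state = "spawning"
if db_power_state != vm_power_state and task_state is not None:
    # a pending task owns the instance, so the sync is skipped rather
    # than forcing the DB state to match the hypervisor
    print("During sync_power_state the instance has a pending task. Skip.")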
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.077 244018 INFO nova.virt.libvirt.driver [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process
Feb 25 12:26:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1308210811' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.226 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.229 244018 DEBUG nova.virt.libvirt.imagebackend [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.231 244018 DEBUG nova.virt.libvirt.vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:57Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.231 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.232 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.233 244018 DEBUG nova.objects.instance [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.268 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <uuid>679fb15f-b258-473a-8cdc-a2c143eb4d92</uuid>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <name>instance-00000034</name>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersTestJSON-server-1785986325</nova:name>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:01</nova:creationTime>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <nova:port uuid="fea2b354-8a21-4bff-bd90-431f8a17aa19">
Feb 25 12:26:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="serial">679fb15f-b258-473a-8cdc-a2c143eb4d92</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="uuid">679fb15f-b258-473a-8cdc-a2c143eb4d92</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk">
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config">
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:28:ef:45"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <target dev="tapfea2b354-8a"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/console.log" append="off"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:03 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:03 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:03 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:03 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:03 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
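The domain definition dumped above is plain libvirt XML and can be inspected offline. A sketch that pulls the two RBD-backed disks out of it, assuming the XML has been saved to guest.xml (a hypothetical file name):

# Sketch: list the disk devices from the domain XML printed above.
import xml.etree.ElementTree as ET

tree = ET.parse("guest.xml")  # hypothetical copy of the XML above
for disk in tree.findall("./devices/disk"):
    src, tgt = disk.find("source"), disk.find("target")
    print(disk.get("device"), tgt.get("dev"),
          src.get("protocol"), src.get("name"))
# Expected for this guest:
#   disk vda rbd vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk
#   cdrom sda rbd vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config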
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Preparing to wait for external event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.270 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
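The Acquiring/acquired/"released" triplet above is oslo.concurrency's lockutils wrapper serializing access to the instance's event registry while nova prepares to wait for network-vif-plugged. The same pattern, reduced to a sketch with the lock name copied from the log:

# Sketch: the lockutils pattern behind the three lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("679fb15f-b258-473a-8cdc-a2c143eb4d92-events")
def _create_or_get_event():
    # nova registers a waitable event here, keyed by the event name
    # 'network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19'
    pass

_create_or_get_event()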
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.270 244018 DEBUG nova.virt.libvirt.vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:57Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.271 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.271 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.272 244018 DEBUG os_vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.273 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.273 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.276 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfea2b354-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.277 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfea2b354-8a, col_values=(('external_ids', {'iface-id': 'fea2b354-8a21-4bff-bd90-431f8a17aa19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:ef:45', 'vm-uuid': '679fb15f-b258-473a-8cdc-a2c143eb4d92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:03 compute-0 NetworkManager[49836]: <info>  [1772022363.2793] manager: (tapfea2b354-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.285 244018 INFO os_vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a')
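The plug itself is the two ovsdbapp transactions logged just above: an idempotent add-br that causes no change (br-int already exists), then an add-port plus a db-set of the Interface external_ids that OVN later matches against. A sketch of the same transactions written directly against ovsdbapp; the ovsdb-server socket path is an assumption:

# Sketch, assuming ovsdbapp and a local ovsdb-server socket; mirrors the
# AddBridgeCommand / AddPortCommand / DbSetCommand entries logged above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.add_br("br-int", may_exist=True, datapath_type="system"))
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.add_port("br-int", "tapfea2b354-8a", may_exist=True))
    txn.add(ovs.db_set(
        "Interface", "tapfea2b354-8a",
        ("external_ids", {
            "iface-id": "fea2b354-8a21-4bff-bd90-431f8a17aa19",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:28:ef:45",
            "vm-uuid": "679fb15f-b258-473a-8cdc-a2c143eb4d92"})))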
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:28:ef:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.370 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Using config drive
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.393 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:03 compute-0 nova_compute[244014]: 2026-02-25 12:26:03.567 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(620e1832902b4b9abcaddca4af5c4c8f) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:26:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 278 op/s
Feb 25 12:26:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb 25 12:26:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1308210811' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb 25 12:26:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.644 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.751 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating config drive at /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.759 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwrh01z72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.904 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwrh01z72" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.940 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:04 compute-0 nova_compute[244014]: 2026-02-25 12:26:04.945 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.085 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.112 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.112 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:26:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb 25 12:26:05 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.294 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@620e1832902b4b9abcaddca4af5c4c8f to images/37d89e85-5a66-4a69-8ead-230ec4360b24 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:26:05 compute-0 ceph-mon[76335]: pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 278 op/s
Feb 25 12:26:05 compute-0 ceph-mon[76335]: osdmap e190: 3 total, 3 up, 3 in
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.674 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.675 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deleting local config drive /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config because it was imported into RBD.
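The config-drive sequence is therefore: build an ISO9660 image locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. The same two commands as a sketch (the source directory is the temporary dir nova populated, /tmp/tmpwrh01z72 in the log; shown here as a placeholder):

# Sketch: replay of the two commands from the log lines above.
import subprocess

inst = "679fb15f-b258-473a-8cdc-a2c143eb4d92"
iso = f"/var/lib/nova/instances/{inst}/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", "/path/to/metadata-dir"],
    check=True)
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)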
Feb 25 12:26:05 compute-0 kernel: tapfea2b354-8a: entered promiscuous mode
Feb 25 12:26:05 compute-0 NetworkManager[49836]: <info>  [1772022365.7260] manager: (tapfea2b354-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:05 compute-0 ovn_controller[147040]: 2026-02-25T12:26:05Z|00437|binding|INFO|Claiming lport fea2b354-8a21-4bff-bd90-431f8a17aa19 for this chassis.
Feb 25 12:26:05 compute-0 ovn_controller[147040]: 2026-02-25T12:26:05Z|00438|binding|INFO|fea2b354-8a21-4bff-bd90-431f8a17aa19: Claiming fa:16:3e:28:ef:45 10.100.0.12
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.744 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:ef:45 10.100.0.12'], port_security=['fa:16:3e:28:ef:45 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '679fb15f-b258-473a-8cdc-a2c143eb4d92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fea2b354-8a21-4bff-bd90-431f8a17aa19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fea2b354-8a21-4bff-bd90-431f8a17aa19 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis
Feb 25 12:26:05 compute-0 ovn_controller[147040]: 2026-02-25T12:26:05Z|00439|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 ovn-installed in OVS
Feb 25 12:26:05 compute-0 ovn_controller[147040]: 2026-02-25T12:26:05Z|00440|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 up in Southbound
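The claim/up pair above is ovn-controller matching the Interface external_ids:iface-id written during the plug against a Southbound Port_Binding row, then marking the binding up. A sketch of how to verify the binding from the chassis (requires ovn-sbctl and access to the SB DB):

# Sketch: confirm the Port_Binding that ovn-controller just claimed.
import subprocess

lport = "fea2b354-8a21-4bff-bd90-431f8a17aa19"
print(subprocess.check_output(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={lport}"], text=True))
# chassis should reference compute-0's chassis record and up should now
# read true, matching the two binding|INFO lines above.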
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.751 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18e529dd-fc1b-492c-a8e5-8dff531fb946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.765 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.767 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f005b2b-ae95-4305-8edf-0a985850a14e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a7005a-b25d-4dcf-8b95-99771afe8347]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
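Provisioning metadata for the datapath means building a dedicated namespace with a veth pair: tapa0d45b1c-10 stays in the root namespace (NetworkManager notices it a few lines below) while tapa0d45b1c-11 goes into ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99. The agent does this through privsep and its ip_lib wrappers; a sketch of the equivalent iproute2 steps:

# Sketch: hand-rolled version of the namespace/veth setup logged above.
import subprocess

ns = "ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99"
subprocess.run(["ip", "netns", "add", ns], check=True)
subprocess.run(["ip", "link", "add", "tapa0d45b1c-10",
                "type", "veth", "peer", "name", "tapa0d45b1c-11"],
               check=True)
# move one end inside; the haproxy spawned earlier serves the metadata
# endpoint on this in-namespace end
subprocess.run(["ip", "link", "set", "tapa0d45b1c-11", "netns", ns],
               check=True)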
Feb 25 12:26:05 compute-0 systemd-machined[210048]: New machine qemu-57-instance-00000034.
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.780 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4c639fa9-0dbc-416a-be23-364d1e02a650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.791 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.792 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.792 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Processing event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:05 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.795 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] No waiting events found dispatching network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca5e77a-6bc2-4b10-812f-d4bdbfec94ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.795 244018 WARNING nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received unexpected event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf for instance with vm_state building and task_state spawning.
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.796 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.6 MiB/s wr, 189 op/s
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.803 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022365.802834, a826c2fd-1af8-4b55-b801-90ce87d04466 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.803 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Resumed (Lifecycle Event)
Feb 25 12:26:05 compute-0 systemd-udevd[289242]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.810 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.817 244018 INFO nova.virt.libvirt.driver [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance spawned successfully.
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.818 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfeeded-cba0-41e5-9ffe-258220845475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 NetworkManager[49836]: <info>  [1772022365.8235] device (tapfea2b354-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:05 compute-0 NetworkManager[49836]: <info>  [1772022365.8245] device (tapfea2b354-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.830 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6229621e-2335-4a03-9d4b-e0d2d7111f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 systemd-udevd[289245]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:05 compute-0 NetworkManager[49836]: <info>  [1772022365.8313] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.829 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.863 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/37d89e85-5a66-4a69-8ead-230ec4360b24 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.870 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[95455d47-613e-4f89-829c-a41cec13dead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.874 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[159467a2-75f2-4f3b-841e-89072495960a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 NetworkManager[49836]: <info>  [1772022365.8999] device (tapa0d45b1c-10): carrier: link connected
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.906 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fae049b7-ef2b-430f-a3f2-dec2bbb3cbd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.923 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed4552f-d5bf-46ce-a0e4-7ed9a8f137dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433530, 'reachable_time': 28858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289289, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4de577c5-99b9-46d0-bb50-ed49b0e3915f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433530, 'tstamp': 433530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289290, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.963 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c96fc5d-3a05-48c5-a9de-6708907f106a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433530, 'reachable_time': 28858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289291, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.978 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.981 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.982 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.983 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.983 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.984 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 nova_compute[244014]: 2026-02-25 12:26:05.984 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.998 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716f5e85-29e1-434e-b7a1-c9d456191cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.026 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.064 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5fabcba7-611c-46ed-9804-6f6b5ae9937f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.067 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.068 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:06 compute-0 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 12:26:06 compute-0 NetworkManager[49836]: <info>  [1772022366.0720] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.072 244018 INFO nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 10.66 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.073 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:06 compute-0 ovn_controller[147040]: 2026-02-25T12:26:06Z|00441|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.083 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47de337f-e114-406b-838d-faf5c3687f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.085 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:26:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.086 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.154 244018 INFO nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 11.84 seconds to build instance.
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.181 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updated VIF entry in instance network info cache for port fea2b354-8a21-4bff-bd90-431f8a17aa19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.182 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.200 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.204 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:06 compute-0 ovn_controller[147040]: 2026-02-25T12:26:06Z|00442|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:26:06 compute-0 ovn_controller[147040]: 2026-02-25T12:26:06Z|00443|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:26:06 compute-0 ovn_controller[147040]: 2026-02-25T12:26:06Z|00444|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.307 244018 DEBUG nova.compute.manager [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:06 compute-0 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG nova.compute.manager [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Processing event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:06 compute-0 podman[289341]: 2026-02-25 12:26:06.436161691 +0000 UTC m=+0.033488329 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:06 compute-0 ceph-mon[76335]: osdmap e191: 3 total, 3 up, 3 in
Feb 25 12:26:07 compute-0 podman[289341]: 2026-02-25 12:26:07.567911021 +0000 UTC m=+1.165237659 container create dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:07 compute-0 systemd[1]: Started libpod-conmon-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope.
Feb 25 12:26:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef35dca2b6cc01530d501e999723c27176f167b9ee93f4dca8bdbb9412fbbf6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.957 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9574857, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.958 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Started (Lifecycle Event)
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.960 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.963 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.968 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance spawned successfully.
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.969 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.982 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.987 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.991 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.991 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:07 compute-0 nova_compute[244014]: 2026-02-25 12:26:07.993 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.016 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.016 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9576776, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.017 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Paused (Lifecycle Event)
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.043 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.046 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9635012, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.046 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Resumed (Lifecycle Event)
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.054 244018 INFO nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 10.09 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.055 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.066 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.068 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.120 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.148 244018 INFO nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 11.59 seconds to build instance.
Feb 25 12:26:08 compute-0 ceph-mon[76335]: pgmap v1279: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.6 MiB/s wr, 189 op/s
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.173 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:08 compute-0 podman[289341]: 2026-02-25 12:26:08.182816093 +0000 UTC m=+1.780142691 container init dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:26:08 compute-0 podman[289341]: 2026-02-25 12:26:08.190070508 +0000 UTC m=+1.787397106 container start dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:08 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : New worker (289387) forked
Feb 25 12:26:08 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : Loading success.
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.809 244018 DEBUG nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 WARNING nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state active and task_state None.
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:26:08 compute-0 nova_compute[244014]: 2026-02-25 12:26:08.922 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:09 compute-0 ceph-mon[76335]: pgmap v1280: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Feb 25 12:26:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732650868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.522 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.563 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022354.5611358, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.563 244018 INFO nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Stopped (Lifecycle Event)
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.596 244018 DEBUG nova.compute.manager [None req-e4580369-bca1-4f97-9536-68e60431feae - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.629 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.629 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.634 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.634 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.709 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022354.708602, ec873c3c-bf46-4537-8c29-b23a3133d281 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.709 244018 INFO nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Stopped (Lifecycle Event)
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.734 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(620e1832902b4b9abcaddca4af5c4c8f) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.741 244018 DEBUG nova.compute.manager [None req-c37593e0-4014-4c7a-a1d2-5be128e34bba - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.0 MiB/s wr, 142 op/s
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:09.822 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:09.824 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.914 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.900443555787206GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:09 compute-0 nova_compute[244014]: 2026-02-25 12:26:09.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.033 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance a826c2fd-1af8-4b55-b801-90ce87d04466 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 679fb15f-b258-473a-8cdc-a2c143eb4d92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.035 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:26:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.134 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb 25 12:26:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1732650868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb 25 12:26:10 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.550 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(37d89e85-5a66-4a69-8ead-230ec4360b24) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.692 244018 DEBUG nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604942363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.719 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.724 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.744 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.753 244018 INFO nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] instance snapshotting
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.795 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.837 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.837 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.838 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.841 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.842 244018 DEBUG nova.objects.instance [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'flavor' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:10 compute-0 nova_compute[244014]: 2026-02-25 12:26:10.868 244018 DEBUG nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:26:11 compute-0 nova_compute[244014]: 2026-02-25 12:26:11.131 244018 INFO nova.virt.libvirt.driver [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Beginning live snapshot process
Feb 25 12:26:11 compute-0 nova_compute[244014]: 2026-02-25 12:26:11.285 244018 DEBUG nova.virt.libvirt.imagebackend [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:26:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb 25 12:26:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb 25 12:26:11 compute-0 nova_compute[244014]: 2026-02-25 12:26:11.742 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] creating snapshot(004186d8a7564dabbca17fe0d43e3e73) on rbd image(a826c2fd-1af8-4b55-b801-90ce87d04466_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:26:11 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb 25 12:26:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 507 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 1.3 MiB/s wr, 256 op/s
Feb 25 12:26:11 compute-0 ceph-mon[76335]: pgmap v1281: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.0 MiB/s wr, 142 op/s
Feb 25 12:26:11 compute-0 ceph-mon[76335]: osdmap e192: 3 total, 3 up, 3 in
Feb 25 12:26:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3604942363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:11 compute-0 nova_compute[244014]: 2026-02-25 12:26:11.938 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb 25 12:26:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb 25 12:26:12 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb 25 12:26:13 compute-0 ceph-mon[76335]: osdmap e193: 3 total, 3 up, 3 in
Feb 25 12:26:13 compute-0 ceph-mon[76335]: pgmap v1284: 305 pgs: 305 active+clean; 507 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 1.3 MiB/s wr, 256 op/s
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.538 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] cloning vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk@004186d8a7564dabbca17fe0d43e3e73 to images/cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:26:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.8 MiB/s wr, 264 op/s
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.831 244018 INFO nova.virt.libvirt.driver [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.832 244018 INFO nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 11.06 seconds to snapshot the instance on the hypervisor.
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.849 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] flattening images/cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.937 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:26:13 compute-0 nova_compute[244014]: 2026-02-25 12:26:13.937 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:26:14 compute-0 ceph-mon[76335]: osdmap e194: 3 total, 3 up, 3 in
Feb 25 12:26:14 compute-0 nova_compute[244014]: 2026-02-25 12:26:14.235 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 25 12:26:14 compute-0 nova_compute[244014]: 2026-02-25 12:26:14.236 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Feb 25 12:26:14 compute-0 nova_compute[244014]: 2026-02-25 12:26:14.236 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting image 5c4d8498-0ce5-4898-96f9-042972db1ddb _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Feb 25 12:26:14 compute-0 nova_compute[244014]: 2026-02-25 12:26:14.381 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] removing snapshot(004186d8a7564dabbca17fe0d43e3e73) on rbd image(a826c2fd-1af8-4b55-b801-90ce87d04466_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:26:14 compute-0 nova_compute[244014]: 2026-02-25 12:26:14.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb 25 12:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb 25 12:26:15 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb 25 12:26:15 compute-0 ceph-mon[76335]: pgmap v1286: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.8 MiB/s wr, 264 op/s
Feb 25 12:26:15 compute-0 ceph-mon[76335]: osdmap e195: 3 total, 3 up, 3 in
Feb 25 12:26:15 compute-0 nova_compute[244014]: 2026-02-25 12:26:15.244 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] creating snapshot(snap) on rbd image(cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:26:15 compute-0 nova_compute[244014]: 2026-02-25 12:26:15.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.7 MiB/s wr, 297 op/s
Feb 25 12:26:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb 25 12:26:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb 25 12:26:16 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 25 12:26:16 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver 
Feb 25 12:26:16 compute-0 nova_compute[244014]: 2026-02-25 12:26:16.613 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] removing snapshot(snap) on rbd image(cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:26:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:16.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 25 12:26:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb 25 12:26:17 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb 25 12:26:17 compute-0 ceph-mon[76335]: pgmap v1288: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.7 MiB/s wr, 297 op/s
Feb 25 12:26:17 compute-0 ceph-mon[76335]: osdmap e196: 3 total, 3 up, 3 in
Feb 25 12:26:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.8 MiB/s wr, 306 op/s
Feb 25 12:26:18 compute-0 nova_compute[244014]: 2026-02-25 12:26:18.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:18 compute-0 ovn_controller[147040]: 2026-02-25T12:26:18Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 12:26:18 compute-0 ovn_controller[147040]: 2026-02-25T12:26:18Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 12:26:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 25 12:26:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb 25 12:26:18 compute-0 ceph-mon[76335]: osdmap e197: 3 total, 3 up, 3 in
Feb 25 12:26:18 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb 25 12:26:19 compute-0 nova_compute[244014]: 2026-02-25 12:26:19.108 244018 WARNING nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Image not found during snapshot: nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.
Feb 25 12:26:19 compute-0 nova_compute[244014]: 2026-02-25 12:26:19.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb 25 12:26:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb 25 12:26:19 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb 25 12:26:19 compute-0 ceph-mon[76335]: pgmap v1291: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.8 MiB/s wr, 306 op/s
Feb 25 12:26:19 compute-0 ceph-mon[76335]: osdmap e198: 3 total, 3 up, 3 in
Feb 25 12:26:19 compute-0 ovn_controller[147040]: 2026-02-25T12:26:19Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:ef:45 10.100.0.12
Feb 25 12:26:19 compute-0 ovn_controller[147040]: 2026-02-25T12:26:19Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:ef:45 10.100.0.12
Feb 25 12:26:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 370 op/s
Feb 25 12:26:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb 25 12:26:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb 25 12:26:20 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:20 compute-0 ceph-mon[76335]: osdmap e199: 3 total, 3 up, 3 in
Feb 25 12:26:20 compute-0 ceph-mon[76335]: pgmap v1294: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 370 op/s
Feb 25 12:26:20 compute-0 ceph-mon[76335]: osdmap e200: 3 total, 3 up, 3 in
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.933 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.934 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.934 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.935 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.935 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.937 244018 INFO nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Terminating instance
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.939 244018 DEBUG nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:26:20 compute-0 nova_compute[244014]: 2026-02-25 12:26:20.979 244018 DEBUG nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:26:21 compute-0 kernel: tap4bf8f3ad-0e (unregistering): left promiscuous mode
Feb 25 12:26:21 compute-0 NetworkManager[49836]: <info>  [1772022381.4623] device (tap4bf8f3ad-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 ovn_controller[147040]: 2026-02-25T12:26:21Z|00445|binding|INFO|Releasing lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf from this chassis (sb_readonly=0)
Feb 25 12:26:21 compute-0 ovn_controller[147040]: 2026-02-25T12:26:21Z|00446|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf down in Southbound
Feb 25 12:26:21 compute-0 ovn_controller[147040]: 2026-02-25T12:26:21Z|00447|binding|INFO|Removing iface tap4bf8f3ad-0e ovn-installed in OVS
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.481 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:1d:3c 10.100.0.4'], port_security=['fa:16:3e:b6:1d:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a826c2fd-1af8-4b55-b801-90ce87d04466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis
Feb 25 12:26:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.486 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:26:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bda1118-083e-4d0d-968f-f16261eac76a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.488 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Feb 25 12:26:21 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 12.048s CPU time.
Feb 25 12:26:21 compute-0 systemd-machined[210048]: Machine qemu-56-instance-00000033 terminated.
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.581 244018 INFO nova.virt.libvirt.driver [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance destroyed successfully.
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.583 244018 DEBUG nova.objects.instance [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.603 244018 DEBUG nova.virt.libvirt.vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:19Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.604 244018 DEBUG nova.network.os_vif_util [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.606 244018 DEBUG nova.network.os_vif_util [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.607 244018 DEBUG os_vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.612 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bf8f3ad-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:21 compute-0 nova_compute[244014]: 2026-02-25 12:26:21.620 244018 INFO os_vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e')
Feb 25 12:26:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 499 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 241 op/s
Feb 25 12:26:21 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : haproxy version is 2.8.14-c23fe91
Feb 25 12:26:21 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : path to executable is /usr/sbin/haproxy
Feb 25 12:26:21 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [ALERT]    (289064) : Current worker (289066) exited with code 143 (Terminated)
Feb 25 12:26:21 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [WARNING]  (289064) : All workers exited. Exiting... (0)
Feb 25 12:26:21 compute-0 systemd[1]: libpod-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope: Deactivated successfully.
Feb 25 12:26:21 compute-0 podman[289691]: 2026-02-25 12:26:21.880579331 +0000 UTC m=+0.262107740 container died c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f375a03ac1bcd5215d8d83e4e0e939faf3fd84985807e733642b344e8fa21f6d-merged.mount: Deactivated successfully.
Feb 25 12:26:21 compute-0 podman[289691]: 2026-02-25 12:26:21.948926005 +0000 UTC m=+0.330454404 container cleanup c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:26:21 compute-0 systemd[1]: libpod-conmon-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope: Deactivated successfully.
Feb 25 12:26:22 compute-0 podman[289739]: 2026-02-25 12:26:22.029833934 +0000 UTC m=+0.052778105 container remove c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.035 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14e39747-63fa-4dd9-aace-022af9bc9d1f]: (4, ('Wed Feb 25 12:26:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c)\nc7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c\nWed Feb 25 12:26:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c)\nc7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09440b3b-9707-43a6-af14-948e52ce5a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:22 compute-0 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.057 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1c822a-2076-467a-9745-c31a80220f5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.071 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19b7d3ff-8671-4db5-94e4-2d75c7581198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d63e3b8-fe8a-4685-99b7-f89813b40d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.083 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40520456-b0ff-4f80-9af6-9c1862b0b50a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433143, 'reachable_time': 27786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289755, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.085 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:26:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.086 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[880c7a4e-07ac-4b3a-9575-4d1f6f21665a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.133 244018 INFO nova.virt.libvirt.driver [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deleting instance files /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466_del
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.134 244018 INFO nova.virt.libvirt.driver [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deletion of /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466_del complete
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.194 244018 INFO nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 1.25 seconds to destroy the instance on the hypervisor.
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.194 244018 DEBUG oslo.service.loopingcall [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.195 244018 DEBUG nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:26:22 compute-0 nova_compute[244014]: 2026-02-25 12:26:22.195 244018 DEBUG nova.network.neutron [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:26:22 compute-0 ceph-mon[76335]: pgmap v1296: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 499 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 241 op/s
Feb 25 12:26:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 5.0 MiB/s wr, 343 op/s
Feb 25 12:26:23 compute-0 nova_compute[244014]: 2026-02-25 12:26:23.946 244018 DEBUG nova.network.neutron [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:23 compute-0 nova_compute[244014]: 2026-02-25 12:26:23.964 244018 INFO nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 1.77 seconds to deallocate network for instance.
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.016 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.017 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.121 244018 DEBUG oslo_concurrency.processutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:24 compute-0 kernel: tapfea2b354-8a (unregistering): left promiscuous mode
Feb 25 12:26:24 compute-0 NetworkManager[49836]: <info>  [1772022384.3577] device (tapfea2b354-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:26:24 compute-0 ovn_controller[147040]: 2026-02-25T12:26:24Z|00448|binding|INFO|Releasing lport fea2b354-8a21-4bff-bd90-431f8a17aa19 from this chassis (sb_readonly=0)
Feb 25 12:26:24 compute-0 ovn_controller[147040]: 2026-02-25T12:26:24Z|00449|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 down in Southbound
Feb 25 12:26:24 compute-0 ovn_controller[147040]: 2026-02-25T12:26:24Z|00450|binding|INFO|Removing iface tapfea2b354-8a ovn-installed in OVS
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.378 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:ef:45 10.100.0.12'], port_security=['fa:16:3e:28:ef:45 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '679fb15f-b258-473a-8cdc-a2c143eb4d92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fea2b354-8a21-4bff-bd90-431f8a17aa19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.381 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fea2b354-8a21-4bff-bd90-431f8a17aa19 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.384 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.385 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9749f414-116a-4e5b-98f5-888ac0c96f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.386 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 12:26:24 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Feb 25 12:26:24 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 12.174s CPU time.
Feb 25 12:26:24 compute-0 systemd-machined[210048]: Machine qemu-57-instance-00000034 terminated.
Feb 25 12:26:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : haproxy version is 2.8.14-c23fe91
Feb 25 12:26:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : path to executable is /usr/sbin/haproxy
Feb 25 12:26:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [WARNING]  (289385) : Exiting Master process...
Feb 25 12:26:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [ALERT]    (289385) : Current worker (289387) exited with code 143 (Terminated)
Feb 25 12:26:24 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [WARNING]  (289385) : All workers exited. Exiting... (0)
Feb 25 12:26:24 compute-0 systemd[1]: libpod-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope: Deactivated successfully.
Feb 25 12:26:24 compute-0 podman[289798]: 2026-02-25 12:26:24.555502814 +0000 UTC m=+0.056173971 container died dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232-userdata-shm.mount: Deactivated successfully.
Feb 25 12:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef35dca2b6cc01530d501e999723c27176f167b9ee93f4dca8bdbb9412fbbf6a-merged.mount: Deactivated successfully.
Feb 25 12:26:24 compute-0 podman[289798]: 2026-02-25 12:26:24.618147846 +0000 UTC m=+0.118818993 container cleanup dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:26:24 compute-0 systemd[1]: libpod-conmon-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope: Deactivated successfully.
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334945106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.685 244018 DEBUG oslo_concurrency.processutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:24 compute-0 podman[289832]: 2026-02-25 12:26:24.688862848 +0000 UTC m=+0.049084130 container remove dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.691 244018 DEBUG nova.compute.provider_tree [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[deb4c5eb-a7fe-4904-8d8c-1e0358a84211]: (4, ('Wed Feb 25 12:26:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232)\ndcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232\nWed Feb 25 12:26:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232)\ndcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37655ca1-0964-4348-a1f0-2af4b43bb4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.695 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:24 compute-0 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.697 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.712 244018 DEBUG nova.scheduler.client.report [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.712 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7c52e0-1613-4974-88f1-21db1f9467e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.725 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54e9de-351c-4779-934d-a70ffbba52af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.726 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3418e8-c746-4464-ab65-660e71457c08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.737 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f1e411-4d9a-413a-921f-d411bc9c2537]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433522, 'reachable_time': 24205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289863, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.739 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:26:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.739 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8d1b4f-4aab-46d6-ac9c-a3b0ed470c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:24 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.740 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.801 244018 INFO nova.scheduler.client.report [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance a826c2fd-1af8-4b55-b801-90ce87d04466
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.886 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.976 244018 DEBUG nova.compute.manager [req-98d39001-bfa5-4481-985d-e79394288d37 req-b2df0f6d-8f14-4f9a-9fa0-f9af1acd18ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-deleted-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:24 compute-0 nova_compute[244014]: 2026-02-25 12:26:24.998 244018 INFO nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance shutdown successfully after 14 seconds.
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.005 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance destroyed successfully.
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.006 244018 DEBUG nova.objects.instance [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.020 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.058 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.105 244018 DEBUG nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.106 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.107 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.107 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.108 244018 DEBUG nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:25 compute-0 nova_compute[244014]: 2026-02-25 12:26:25.108 244018 WARNING nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state stopped and task_state None.
Feb 25 12:26:25 compute-0 ceph-mon[76335]: pgmap v1297: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 5.0 MiB/s wr, 343 op/s
Feb 25 12:26:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/334945106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb 25 12:26:25 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb 25 12:26:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 4.9 MiB/s wr, 338 op/s
Feb 25 12:26:26 compute-0 nova_compute[244014]: 2026-02-25 12:26:26.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:26 compute-0 ceph-mon[76335]: osdmap e201: 3 total, 3 up, 3 in
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.515 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.516 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.560 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.589 244018 DEBUG nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.590 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.591 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.591 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.592 244018 DEBUG nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.592 244018 WARNING nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state stopped and task_state None.
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.630 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.630 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:27 compute-0 ceph-mon[76335]: pgmap v1299: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 4.9 MiB/s wr, 338 op/s
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.640 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.640 244018 INFO nova.compute.claims [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.797 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 3.8 MiB/s wr, 294 op/s
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.897 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.898 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.921 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:26:27 compute-0 nova_compute[244014]: 2026-02-25 12:26:27.995 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.011 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.012 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.013 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.013 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.014 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.016 244018 INFO nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Terminating instance
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.018 244018 DEBUG nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.028 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance destroyed successfully.
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.029 244018 DEBUG nova.objects.instance [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.044 244018 DEBUG nova.virt.libvirt.vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:25Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.045 244018 DEBUG nova.network.os_vif_util [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.046 244018 DEBUG nova.network.os_vif_util [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.047 244018 DEBUG os_vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.051 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfea2b354-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.062 244018 INFO os_vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a')
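The unplug sequence above converts Nova's VIF dict into an os-vif VIFOpenVSwitch object, then removes the tap port from br-int via an ovsdbapp transaction (the DelPortCommand at 12:26:28.051). A minimal sketch of that same removal done directly with ovsdbapp, assuming a local ovsdb-server socket (the connection string is an assumption):

    # Equivalent of the logged DelPortCommand(port=tapfea2b354-8a,
    # bridge=br-int, if_exists=True), issued directly through ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    ovs.del_port('tapfea2b354-8a', bridge='br-int', if_exists=True).execute(
        check_error=True)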
Feb 25 12:26:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702035612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.444 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
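The 0.646s CMD line is Nova's RBD image backend shelling out to ceph df through oslo.concurrency, which is what triggers the matching ceph-mon audit entries. A sketch of the same call and the pool-usage parse, assuming the exact command from the log (stats keys vary by Ceph release):

    # Run "ceph df" the way the log shows and pull out the vms pool usage.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    for pool in stats['pools']:
        if pool['name'] == 'vms':
            print(pool['stats'])  # bytes_used, max_avail, ...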
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.450 244018 DEBUG nova.compute.provider_tree [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.472 244018 DEBUG nova.scheduler.client.report [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
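For reference, placement derives schedulable capacity from an inventory as (total - reserved) * allocation_ratio (standard placement semantics); the arithmetic below is just a worked check of the logged numbers:

    # Worked check of the inventory above: capacity per resource class.
    inv = {'VCPU': (8, 0, 4.0),
           'MEMORY_MB': (7679, 512, 1.0),
           'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, int((total - reserved) * ratio))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 52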
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.509 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
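The acquire/release pairs around "compute_resources" show three concurrent builds serializing on one named semaphore (this claim held it 0.878s; the next waiter at 12:26:28.516 had queued for 0.520s). A minimal sketch of the mechanism, using oslo.concurrency's synchronized decorator with the lock name from the log (the function body is illustrative, not Nova's code):

    # Named-semaphore serialization as seen in the lockutils lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim(instance_uuid):
        # claim CPU/RAM/disk against the resource tracker (illustrative)
        print('claiming for', instance_uuid)

    instance_claim('56267d17-0733-4abe-b916-d1a25e516514')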
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.513 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.516 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.524 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.524 244018 INFO nova.compute.claims [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.586 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.587 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.612 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.628 244018 INFO nova.virt.libvirt.driver [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deleting instance files /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92_del
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.630 244018 INFO nova.virt.libvirt.driver [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deletion of /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92_del complete
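The "_del" suffix in the two lines above reflects a rename-then-remove cleanup: the instance directory is first moved aside, then deleted, so an interrupted delete never leaves a half-removed tree under the live path. An illustrative sketch of the pattern (paths taken from the log; this is the general technique, not Nova's exact code):

    # Rename-then-remove: os.rename is atomic on the same filesystem.
    import os
    import shutil

    inst_dir = '/var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92'
    target = inst_dir + '_del'
    os.rename(inst_dir, target)
    shutil.rmtree(target, ignore_errors=True)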
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.640 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:26:28 compute-0 ceph-mon[76335]: pgmap v1300: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 3.8 MiB/s wr, 294 op/s
Feb 25 12:26:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/702035612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.721 244018 INFO nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 0.70 seconds to destroy the instance on the hypervisor.
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.722 244018 DEBUG oslo.service.loopingcall [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.722 244018 DEBUG nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.723 244018 DEBUG nova.network.neutron [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.734 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.765 244018 DEBUG nova.policy [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f8bbe7db4454108aca005daa72d5c22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56700581ea88438ba482d90bc702ced3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
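This DEBUG-level policy failure is expected: network:attach_external_network is admin-only by default, and the tempest credentials carry only reader/member roles, so Nova simply skips external networks for this boot. A sketch of the check with oslo.policy (the rule registration and check string here are assumptions for illustration):

    # Evaluate an admin-only rule against member/reader credentials.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': '56700581ea88438ba482d90bc702ced3'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False, matching the logged "Policy check ... failed"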
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.793 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.796 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.796 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating image(s)
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.830 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.862 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.888 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.894 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.945 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
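The qemu-img probe above runs under oslo_concurrency.prlimit (1 GiB address space, 30s CPU) so a malformed image cannot exhaust the host while being inspected. A sketch of the same bounded call from Python, using the base-image path from the log:

    # qemu-img info under resource limits, JSON output parsed.
    import json
    from oslo_concurrency import processutils

    base = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    out, _ = processutils.execute(
        'qemu-img', 'info', base, '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])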
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.946 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.947 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.947 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.974 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:28 compute-0 nova_compute[244014]: 2026-02-25 12:26:28.980 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.017 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.017 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.035 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.136 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.313 244018 DEBUG nova.network.neutron [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192472623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.336 244018 INFO nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 0.61 seconds to deallocate network for instance.
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.352 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.361 244018 DEBUG nova.compute.provider_tree [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.384 244018 DEBUG nova.scheduler.client.report [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.391 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.405 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.406 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.408 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.415 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.415 244018 INFO nova.compute.claims [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.461 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.462 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.486 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.495 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.536 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.598 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
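After the rbd import at 12:26:28.980 completes, the image is grown to the flavor's root disk (1073741824 bytes = 1 GiB). A sketch of that resize using the python rbd binding instead of the CLI, with the pool, client id, and image name from the log:

    # Resize a freshly imported RBD image to the flavor's root_gb.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk') as img:
                img.resize(1073741824)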
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.641 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.643 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.644 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating image(s)
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.676 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/192472623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.711 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.744 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.750 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.790 244018 DEBUG nova.policy [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.797 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Successfully created port: ee46268d-740d-4ff9-8b65-4a81fc61eec3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
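"Successfully created port" is Nova asking Neutron for a minimal port on the tenant network; binding details arrive later via the network-changed events below. A sketch of an equivalent request with python-neutronclient, assuming a keystone password auth (endpoint and credentials are placeholders; the network and instance ids are from the log):

    # Create a minimal port on the instance's network via Neutron's API.
    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client as neutron_client

    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://keystone.example.com:5000/v3',  # assumed
        username='nova', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    port = neutron.create_port({'port': {
        'network_id': 'a0d45b1c-1680-4599-a27a-6e3335c94c99',
        'device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b',
        'device_owner': 'compute:nova',
    }})
    print(port['port']['id'])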
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.805 244018 DEBUG nova.compute.manager [req-34e94e5c-e1f3-4d3d-bacb-5278c821e6d3 req-3d958801-3ad4-4f47-a22d-80d062663e9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-deleted-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 3.2 MiB/s wr, 245 op/s
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.830 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.831 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.831 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.832 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.858 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.869 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 56267d17-0733-4abe-b916-d1a25e516514_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:29 compute-0 nova_compute[244014]: 2026-02-25 12:26:29.924 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.002 244018 DEBUG nova.objects.instance [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.025 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.025 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Ensure instance console log exists: /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.026 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.027 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.027 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.186 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 56267d17-0733-4abe-b916-d1a25e516514_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.230 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.327 244018 DEBUG nova.objects.instance [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.342 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.342 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Ensure instance console log exists: /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.420 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Successfully created port: dd03c667-f058-4e13-bb03-517ed838be2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:26:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710562482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.549 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.553 244018 DEBUG nova.compute.provider_tree [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.566 244018 DEBUG nova.scheduler.client.report [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.570 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Successfully updated port: ee46268d-740d-4ff9-8b65-4a81fc61eec3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.588 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.589 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.590 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.635 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.636 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.650 244018 DEBUG nova.compute.manager [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.651 244018 DEBUG nova.compute.manager [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing instance network info cache due to event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.652 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.661 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:26:30 compute-0 ceph-mon[76335]: pgmap v1301: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 3.2 MiB/s wr, 245 op/s
Feb 25 12:26:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3710562482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.698 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:26:30 compute-0 podman[290281]: 2026-02-25 12:26:30.733129458 +0000 UTC m=+0.066923805 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.731 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.741 244018 DEBUG oslo_concurrency.processutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:30 compute-0 podman[290282]: 2026-02-25 12:26:30.757010713 +0000 UTC m=+0.089435052 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
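The two health_status=healthy records above are podman periodically executing each container's configured check (the 'test': '/openstack/healthcheck' entry in config_data). The same check can be driven or inspected by hand; a sketch via subprocess:

    # Run a container's healthcheck once and read back its state.
    import json
    import subprocess

    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                   check=True)
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_controller'],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout)['Status'])  # "healthy" when the check passes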
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.831 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.832 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.832 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating image(s)
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.853 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.873 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.893 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.896 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:26:30
Feb 25 12:26:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:26:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:26:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'images']
Feb 25 12:26:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.925 244018 DEBUG nova.policy [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.964 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:30 compute-0 nova_compute[244014]: 2026-02-25 12:26:30.966 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.003 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.006 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.144 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Successfully updated port: dd03c667-f058-4e13-bb03-517ed838be2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.164 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.164 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.165 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631888206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.312 244018 DEBUG oslo_concurrency.processutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.320 244018 DEBUG nova.compute.provider_tree [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.347 244018 DEBUG nova.scheduler.client.report [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.374 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.379 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.403 244018 INFO nova.scheduler.client.report [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance 679fb15f-b258-473a-8cdc-a2c143eb4d92
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.463 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.629 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1631888206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.714 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 297 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 3.0 MiB/s wr, 192 op/s
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.819 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.829 244018 DEBUG nova.objects.instance [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.842 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.843 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Ensure instance console log exists: /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.844 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.845 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.846 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.848 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.849 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance network_info: |[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:26:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.851 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.851 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.856 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.862 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Successfully created port: 69011b0a-5af7-4bef-a14c-8d83e63e08ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.868 244018 WARNING nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.878 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.879 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.882 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.883 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.883 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.884 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.884 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.885 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.885 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.887 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.887 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.888 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.888 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.893 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.994 244018 DEBUG nova.compute.manager [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-changed-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.995 244018 DEBUG nova.compute.manager [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Refreshing instance network info cache due to event network-changed-dd03c667-f058-4e13-bb03-517ed838be2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:31 compute-0 nova_compute[244014]: 2026-02-25 12:26:31.996 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.164 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.183 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.183 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance network_info: |[{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.184 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.184 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Refreshing network info cache for port dd03c667-f058-4e13-bb03-517ed838be2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.190 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start _get_guest_xml network_info=[{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.195 244018 WARNING nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.200 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.201 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.209 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.210 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.210 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.211 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.215 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.220 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878136816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.413 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.447 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.453 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:32 compute-0 ceph-mon[76335]: pgmap v1302: 305 pgs: 305 active+clean; 297 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 3.0 MiB/s wr, 192 op/s
Feb 25 12:26:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/878136816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.780 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Successfully updated port: 69011b0a-5af7-4bef-a14c-8d83e63e08ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145244569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.797 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.797 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.798 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.807 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.835 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.840 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.877 244018 DEBUG nova.compute.manager [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-changed-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.878 244018 DEBUG nova.compute.manager [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Refreshing instance network info cache due to event network-changed-69011b0a-5af7-4bef-a14c-8d83e63e08ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.878 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388521735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.975 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.978 244018 DEBUG nova.virt.libvirt.vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.979 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.981 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:32 compute-0 nova_compute[244014]: 2026-02-25 12:26:32.983 244018 DEBUG nova.objects.instance [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.000 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <name>instance-00000035</name>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:31</nova:creationTime>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 12:26:33 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="tapee46268d-74"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:33 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:33 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.002 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Preparing to wait for external event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.002 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.003 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.003 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.005 244018 DEBUG nova.virt.libvirt.vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.006 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.007 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.038 244018 DEBUG os_vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 NetworkManager[49836]: <info>  [1772022393.0497] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.054 244018 INFO os_vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:ba:87:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.106 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Using config drive
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.120 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.124 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.213 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated VIF entry in instance network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.214 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.243 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186710184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.397 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.398 244018 DEBUG nova.virt.libvirt.vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:29Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.399 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.400 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.402 244018 DEBUG nova.objects.instance [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.421 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <uuid>56267d17-0733-4abe-b916-d1a25e516514</uuid>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <name>instance-00000036</name>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1327428650</nova:name>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:32</nova:creationTime>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <nova:port uuid="dd03c667-f058-4e13-bb03-517ed838be2e">
Feb 25 12:26:33 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="serial">56267d17-0733-4abe-b916-d1a25e516514</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="uuid">56267d17-0733-4abe-b916-d1a25e516514</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/56267d17-0733-4abe-b916-d1a25e516514_disk">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/56267d17-0733-4abe-b916-d1a25e516514_disk.config">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:87:52:b8"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <target dev="tapdd03c667-f0"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/console.log" append="off"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:33 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:33 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:33 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:33 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:33 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.421 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Preparing to wait for external event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.424 244018 DEBUG nova.virt.libvirt.vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:29Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.424 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.425 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.426 244018 DEBUG os_vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.427 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd03c667-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.432 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd03c667-f0, col_values=(('external_ids', {'iface-id': 'dd03c667-f058-4e13-bb03-517ed838be2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:52:b8', 'vm-uuid': '56267d17-0733-4abe-b916-d1a25e516514'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 NetworkManager[49836]: <info>  [1772022393.4367] manager: (tapdd03c667-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.443 244018 INFO os_vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0')
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.461 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updated VIF entry in instance network info cache for port dd03c667-f058-4e13-bb03-517ed838be2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.461 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.485 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.491 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.492 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.492 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:87:52:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.493 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Using config drive
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.522 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.532 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating config drive at /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.541 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpckp0qi7m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.684 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpckp0qi7m" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.723 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.727 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 6.2 MiB/s wr, 150 op/s
Feb 25 12:26:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4145244569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2388521735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3186710184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.966 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating config drive at /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config
Feb 25 12:26:33 compute-0 nova_compute[244014]: 2026-02-25 12:26:33.973 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiy9b6owk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.007 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.033 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.034 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance network_info: |[{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.035 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.035 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Refreshing network info cache for port 69011b0a-5af7-4bef-a14c-8d83e63e08ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.040 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start _get_guest_xml network_info=[{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.046 244018 WARNING nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.051 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.051 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.062 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.062 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.063 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.063 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.067 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.071 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.113 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiy9b6owk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.147 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.152 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config 56267d17-0733-4abe-b916-d1a25e516514_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.189 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.197 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deleting local config drive /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config because it was imported into RBD.
Feb 25 12:26:34 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.2563] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00451|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00452|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.272 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.274 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.277 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00453|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00454|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55b3b8af-382c-4220-87ca-2d258197bde0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.290 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.293 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.293 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18aed446-261f-4da5-9542-39eeb3e855b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.294 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[141b1b41-e2f8-4d3e-9b81-1b68c271a00d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 systemd-machined[210048]: New machine qemu-58-instance-00000035.
Feb 25 12:26:34 compute-0 systemd-udevd[290793]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:34 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000035.
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.305 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd52680-5317-44be-9f80-c542ba8f5565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.3080] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.3098] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed60fc16-cefc-4767-8658-ce818203f5b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.351 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a2983ba6-4543-4eaf-ad20-5975a5b4ea0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.3564] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.355 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7fd718-f47e-43e9-9c32-9c39e7f3c088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 systemd-udevd[290796]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.378 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config 56267d17-0733-4abe-b916-d1a25e516514_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.379 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deleting local config drive /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config because it was imported into RBD.
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.379 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c6d1c7-fd87-40d2-b47e-dff37f27481e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.382 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d3123649-77ca-4a24-8def-7bd5ba7ff025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.395 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.395 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.3992] device (tapce318891-c0): carrier: link connected
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.402 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fab75f-558b-48ee-9b45-c426167a0323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.412 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.4203] manager: (tapdd03c667-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Feb 25 12:26:34 compute-0 kernel: tapdd03c667-f0: entered promiscuous mode
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00455|binding|INFO|Claiming lport dd03c667-f058-4e13-bb03-517ed838be2e for this chassis.
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00456|binding|INFO|dd03c667-f058-4e13-bb03-517ed838be2e: Claiming fa:16:3e:87:52:b8 10.100.0.13
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.4294] device (tapdd03c667-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.4305] device (tapdd03c667-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d25f3c9-8d79-43ae-ac75-ab633469507f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436380, 'reachable_time': 33874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290835, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.433 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:52:b8 10.100.0.13'], port_security=['fa:16:3e:87:52:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '56267d17-0733-4abe-b916-d1a25e516514', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd03c667-f058-4e13-bb03-517ed838be2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.432 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00457|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e ovn-installed in OVS
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00458|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e up in Southbound
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 systemd-machined[210048]: New machine qemu-59-instance-00000036.
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[91d54acb-8a9c-45cf-a854-1de78bb22852]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436380, 'tstamp': 436380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290843, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000036.
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcd1013-2bba-405d-a206-32833369db92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436380, 'reachable_time': 33874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290846, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.469 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.469 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.481 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.481 244018 INFO nova.compute.claims [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.496 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a9a91f-1c08-48de-a25e-eacbd84d8c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55a9f54e-56f3-46c9-abbe-7e9a95634561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.548 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:34 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 NetworkManager[49836]: <info>  [1772022394.5512] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:34 compute-0 ovn_controller[147040]: 2026-02-25T12:26:34Z|00459|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.557 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2114601c-3bc3-4df9-91f1-82b32c8ab7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.558 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:26:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.559 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
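Stripped of rootwrap and the PROCESS_TAG environment, the command above reduces to starting haproxy inside the metadata namespace with the config just rendered; a simplified reproduction (root required):

    import subprocess

    netns = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'ce318891-cf3c-4d99-af7c-c01770f38194.conf')

    # The "daemon" keyword in the global section makes haproxy fork and
    # return immediately; the NOTICE "New worker ... forked" lines later
    # in the log are those workers coming up.
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)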
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068374435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.659 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.680 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.683 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.721 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
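Both ceph invocations go through oslo.concurrency's processutils, which is what emits the paired "Running cmd (subprocess)" and "returned: 0 in N s" lines. The equivalent direct call:

    from oslo_concurrency import processutils

    # Same command line as the log; stdout carries the JSON monmap.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out)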
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.749 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.7491221, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.750 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.768 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.790 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.7492712, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.808 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.811 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.832 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (spawning). Skip.
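The integers in the sync_power_state lines decode via nova's power_state constants: the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED) during the Paused event and, later, 1 (RUNNING) after Resumed. For reference:

    # Constants from nova/compute/power_state.py, as used above.
    POWER_STATES = {
        0: 'NOSTATE',    # DB power_state before the first sync completes
        1: 'RUNNING',    # VM power_state once the guest is resumed
        3: 'PAUSED',     # VM power_state while libvirt pauses the guest
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }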
Feb 25 12:26:34 compute-0 ceph-mon[76335]: pgmap v1303: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 6.2 MiB/s wr, 150 op/s
Feb 25 12:26:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4068374435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:34 compute-0 podman[290985]: 2026-02-25 12:26:34.914949268 +0000 UTC m=+0.050980663 container create 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:34 compute-0 systemd[1]: Started libpod-conmon-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope.
Feb 25 12:26:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669065cfe9b355ae4db308310ad10b864c500e39dbf8d8abf3254990187da5c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:34 compute-0 podman[290985]: 2026-02-25 12:26:34.884427085 +0000 UTC m=+0.020458510 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.984 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.985 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.986 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.986 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.987 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Processing event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.987 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.988 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.988 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.989 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.990 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.990 244018 WARNING nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state building and task_state spawning.
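The pop_instance_event dance above is a latch: the spawning thread registers interest in network-vif-plugged, and the external-event handler pops it; when the event arrives with no registered waiter (as happens here for the duplicate delivery), nova logs the "Received unexpected event" warning instead of failing. The pattern, reduced to a standard-library sketch with hypothetical names:

    import threading

    class EventLatch:
        # Toy stand-in for InstanceEvents.prepare/pop_instance_event.
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event name -> threading.Event

        def prepare(self, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[name] = ev
            return ev

        def pop(self, name):
            with self._lock:
                waiter = self._waiters.pop(name, None)
            if waiter is None:
                print('Received unexpected event %s' % name)  # WARNING path
            else:
                waiter.set()  # releases wait_for_instance_event

    latch = EventLatch()
    latch.prepare('network-vif-plugged-ee46268d')
    latch.pop('network-vif-plugged-ee46268d')  # normal delivery
    latch.pop('network-vif-plugged-ee46268d')  # duplicate -> warning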
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.991 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.991 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.992 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.992 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.993 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Processing event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.994 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.997 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.9976788, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:34 compute-0 nova_compute[244014]: 2026-02-25 12:26:34.998 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:26:34 compute-0 podman[290985]: 2026-02-25 12:26:34.999228194 +0000 UTC m=+0.135259609 container init 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.001 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:35 compute-0 podman[290985]: 2026-02-25 12:26:35.004468312 +0000 UTC m=+0.140499707 container start 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.008 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance spawned successfully.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.009 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.016 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : New worker (291045) forked
Feb 25 12:26:35 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : Loading success.
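The podman lines interleaved above (container create at 12:26:34.914, init and start just before the haproxy NOTICE messages) suggest that in this containerized deployment the rootwrap'd haproxy command is realized as a dedicated neutron-haproxy-ovnmeta container. A heavily simplified equivalent of that launch, keeping only the name and image visible in the log (the real invocation adds mount, namespace, and label flags not shown here):

    import subprocess

    subprocess.run([
        'podman', 'run', '--detach',
        '--name',
        'neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194',
        'quay.io/podified-antelope-centos9/'
        'openstack-neutron-metadata-agent-ovn:current-podified',
    ], check=True)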
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.029 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.035 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.036 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.036 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.037 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.038 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.039 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.048 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.061 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd03c667-f058-4e13-bb03-517ed838be2e in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.063 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.070 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5786b26c-3990-422e-b2dd-347a33629b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.070 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.072 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48960a31-0060-4bb9-a29a-b28cb813c563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.073 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e676696c-78c7-4bd3-a91c-bda5781d0ec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.084 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7bacb9a1-662c-40f2-b108-81503437177c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
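The VETH creation logged above is performed through privsep'd pyroute2 calls: tap7693903d-d0 stays in the host namespace (it is the Veth device NetworkManager reports shortly after), while its peer tap7693903d-d1 serves metadata inside ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329. A rough direct equivalent with pyroute2, assuming the namespace already exists under /var/run/netns (root required):

    from pyroute2 import IPRoute

    ns = 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'
    with IPRoute() as ipr:
        # Create the pair with the peer placed straight into the
        # metadata namespace, then bring the host side up.
        ipr.link('add', ifname='tap7693903d-d0', kind='veth',
                 peer={'ifname': 'tap7693903d-d1', 'net_ns_fd': ns})
        idx = ipr.link_lookup(ifname='tap7693903d-d0')[0]
        ipr.link('set', index=idx, state='up')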
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.090 244018 INFO nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 6.30 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.090 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.098 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29932cc3-966a-4d56-9c88-960bb82bdb44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.100 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.101 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1008575, 56267d17-0733-4abe-b916-d1a25e516514 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.101 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Started (Lifecycle Event)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.103 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updated VIF entry in instance network info cache for port 69011b0a-5af7-4bef-a14c-8d83e63e08ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.103 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.106 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.121 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.123 244018 INFO nova.virt.libvirt.driver [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance spawned successfully.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.123 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.124 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[69a0cbfc-53fb-471b-b985-edd365302e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.127 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:35 compute-0 NetworkManager[49836]: <info>  [1772022395.1301] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/210)
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49f174a9-2853-48ac-96f6-28d5bf5a0584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.132 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.159 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5eee53-58f1-49fb-8c82-07d822cb4734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.164 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a596d94b-9ac5-48a9-a8ab-bfb881544142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 NetworkManager[49836]: <info>  [1772022395.1865] device (tap7693903d-d0): carrier: link connected
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.192 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b72a45a0-ebdb-433a-b759-95f473309154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff12b8c-9dea-47d8-b7e2-e427b0fcd070]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436459, 'reachable_time': 40040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291065, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.219 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2af1a4-4344-4216-8fe2-d262085eabb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436459, 'tstamp': 436459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291066, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be9117ad-6e72-42af-90ba-e77505af2048]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436459, 'reachable_time': 40040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291067, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.248 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0590cd2-0832-4e71-a436-4539162af819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360327645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.271 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.272 244018 DEBUG nova.virt.libvirt.vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:30Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.273 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.274 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
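The Converting/Converted pair above is nova's os_vif_util turning the Neutron port dict into an os-vif VIFOpenVSwitch object, which the 'ovs' plugin then knows how to plug. A condensed sketch of building the same object by hand, using only fields visible in the log (network and port_profile omitted for brevity):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='69011b0a-5af7-4bef-a14c-8d83e63e08ae',
        address='fa:16:3e:df:0d:06',
        bridge_name='br-int',
        vif_name='tap69011b0a-5a',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
    )
    print(vif.vif_name, vif.bridge_name)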
Feb 25 12:26:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1159604673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.276 244018 DEBUG nova.objects.instance [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.283 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61f221e3-c684-4c48-8752-4ecdca297281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 NetworkManager[49836]: <info>  [1772022395.2877] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Feb 25 12:26:35 compute-0 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 ovn_controller[147040]: 2026-02-25T12:26:35Z|00460|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.294 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.297 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8619b1d0-6ff2-413d-a194-b58ac50ca068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.297 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:26:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.298 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
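
The haproxy_cfg dump above is the per-network configuration the agent writes out before launching haproxy inside the ovnmeta-<network_id> namespace through rootwrap (the create_process command on the preceding line). A minimal Python sketch of that rendering step, using the network ID from the log, an illustrative scratch directory, and with the defaults/timeout stanza elided:

    # Hedged sketch: render a per-network haproxy config like the one logged
    # above. OUT_DIR is an illustrative assumption; the agent itself writes to
    # /var/lib/neutron/ovn-metadata-proxy/<network_id>.conf.
    import os

    NETWORK_ID = "7693903d-d5e2-4b50-a39b-bbbcc4148329"  # from the log above
    OUT_DIR = "/tmp/ovn-metadata-proxy"                  # assumed scratch dir

    TEMPLATE = """\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{network_id}
        user        root
        group       root
        maxconn     1024
        pidfile     {pidfile}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def render_config(network_id):
        pidfile = "/var/lib/neutron/external/pids/%s.pid.haproxy" % network_id
        return TEMPLATE.format(network_id=network_id, pidfile=pidfile)

    os.makedirs(OUT_DIR, exist_ok=True)
    with open(os.path.join(OUT_DIR, NETWORK_ID + ".conf"), "w") as fh:
        fh.write(render_config(NETWORK_ID))
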
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.302 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
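
The 0.581 s "ceph df" call that just returned is how the RBD image backend measures pool capacity for the resource tracker. A minimal sketch of the same probe; the JSON key layout (a top-level "stats" object carrying total_bytes / total_avail_bytes) is an assumption about the Ceph release in use:

    # Hedged sketch: run the same capacity probe as the logged command and
    # pull the cluster totals out of the JSON reply.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]          # key layout is an assumption
    print(stats["total_bytes"], stats["total_avail_bytes"])
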
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.319 244018 DEBUG nova.compute.provider_tree [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.324 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.324 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1012468, 56267d17-0733-4abe-b916-d1a25e516514 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.325 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Paused (Lifecycle Event)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.330 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <uuid>43b8959e-9cf0-42ca-aa1f-8a380321c971</uuid>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <name>instance-00000037</name>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherB-server-1336556788</nova:name>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:34</nova:creationTime>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <nova:port uuid="69011b0a-5af7-4bef-a14c-8d83e63e08ae">
Feb 25 12:26:35 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="serial">43b8959e-9cf0-42ca-aa1f-8a380321c971</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="uuid">43b8959e-9cf0-42ca-aa1f-8a380321c971</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/43b8959e-9cf0-42ca-aa1f-8a380321c971_disk">
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config">
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:df:0d:06"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <target dev="tap69011b0a-5a"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/console.log" append="off"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:35 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:35 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:35 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:35 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:35 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
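
The <domain> document that _get_guest_xml just emitted is handed to libvirtd to define and boot the guest. A minimal sketch of that hand-off with the libvirt Python binding; nova's real code path adds flags, event plumbing, and error handling, so treat this as the generic binding usage only:

    # Hedged sketch: define and start a domain from XML with libvirt-python.
    # "domain.xml" stands in for the <domain> document logged above.
    import libvirt

    xml = open("domain.xml").read()

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    try:
        dom = conn.defineXML(xml)           # persist the definition
        dom.create()                        # boot it, like 'virsh start'
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
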
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.336 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Preparing to wait for external event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.337 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.337 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.338 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
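
prepare_for_instance_event above registers interest in network-vif-plugged-<port> before the VIF is actually plugged, so the later wait cannot miss a fast callback from neutron. A toy sketch of that prepare-then-wait pattern; nova's own version is built on eventlet and per-instance locks, not this code:

    # Hedged sketch of the prepare-then-wait pattern: create the event object
    # first, trigger the work that produces the notification, then wait.
    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(tag):
        with _lock:
            return _events.setdefault(tag, threading.Event())

    def deliver_event(tag):          # invoked when the notification arrives
        with _lock:
            ev = _events.get(tag)
        if ev is not None:
            ev.set()

    ev = prepare_for_event("network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae")
    # ... plug the VIF here ...
    ev.wait(timeout=300)             # block until neutron confirms (or time out)
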
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.338 244018 DEBUG nova.virt.libvirt.vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:30Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.339 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.340 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.340 244018 DEBUG os_vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.343 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.343 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.351 244018 DEBUG nova.scheduler.client.report [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.358 244018 INFO nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 7.75 seconds to build instance.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.362 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.363 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.364 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.365 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.365 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.366 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69011b0a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69011b0a-5a, col_values=(('external_ids', {'iface-id': '69011b0a-5af7-4bef-a14c-8d83e63e08ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:0d:06', 'vm-uuid': '43b8959e-9cf0-42ca-aa1f-8a380321c971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
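
The AddPortCommand/DbSetCommand pair above is os-vif attaching the tap device to br-int and stamping the Interface row with the external_ids OVN needs to claim the port. A minimal sketch of the same two-command transaction using ovsdbapp's Open_vSwitch API; the OVSDB socket path is an assumption about a default install:

    # Hedged sketch: one ovsdbapp transaction carrying the same two commands
    # the log shows. Socket path assumed to be the packaged default.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/var/run/openvswitch/db.sock"

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap69011b0a-5a", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap69011b0a-5a",
            ("external_ids", {
                "iface-id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:df:0d:06",
            })))
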
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 NetworkManager[49836]: <info>  [1772022395.3739] manager: (tap69011b0a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.375 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1050055, 56267d17-0733-4abe-b916-d1a25e516514 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.375 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Resumed (Lifecycle Event)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.378 244018 INFO os_vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a')
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.398 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.399 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.406 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.424 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.427 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
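
The "DB power_state: 0, VM power_state: 1" comparison above uses nova's numeric power-state constants. For reference, a sketch of those values as defined in nova.compute.power_state; they are reproduced from memory, so verify against the source tree:

    # Hedged sketch: nova.compute.power_state constants, as used in the
    # sync_power_state comparison above (0 = NOSTATE in the DB row, 1 =
    # RUNNING reported by the hypervisor). Values reproduced from memory.
    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

    STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
                 SHUTDOWN: "shutdown", CRASHED: "crashed",
                 SUSPENDED: "suspended"}

    print(STATE_MAP[0x01])   # -> "running", matching "VM power_state: 1"
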
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.455 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.458 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:df:0d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Using config drive
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.479 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.489 244018 INFO nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 5.85 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.490 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.490 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.491 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.516 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.537 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.549 244018 INFO nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 7.58 seconds to build instance.
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.574 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.618 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.619 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.620 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating image(s)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.641 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.665 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.690 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.698 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:35 compute-0 podman[291122]: 2026-02-25 12:26:35.703500686 +0000 UTC m=+0.089670539 container create 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.740 244018 DEBUG nova.policy [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:26:35 compute-0 systemd[1]: Started libpod-conmon-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope.
Feb 25 12:26:35 compute-0 podman[291122]: 2026-02-25 12:26:35.664601455 +0000 UTC m=+0.050771418 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6407dc8ff3720d9297a51c2a82a3e2ac0165aaf6a4b9b13749f15d5541d8db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.796 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
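
The qemu-img probe above is wrapped in oslo_concurrency.prlimit so a malformed image cannot run the prober out of memory or CPU (1 GiB address space, 30 s CPU, per the logged flags). A minimal sketch of the same call through oslo.concurrency's processutils API:

    # Hedged sketch: the same resource-limited qemu-img probe, driven through
    # oslo.concurrency. Limits mirror the logged --as/--cpu values.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits, env_variables={"LC_ALL": "C", "LANG": "C"})
    print(out)
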
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.797 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.797 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.798 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:35 compute-0 podman[291122]: 2026-02-25 12:26:35.799237905 +0000 UTC m=+0.185407808 container init 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:26:35 compute-0 podman[291122]: 2026-02-25 12:26:35.809425793 +0000 UTC m=+0.195595686 container start 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 6.1 MiB/s wr, 147 op/s
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.819 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:35 compute-0 nova_compute[244014]: 2026-02-25 12:26:35.824 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:35 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : New worker (291219) forked
Feb 25 12:26:35 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : Loading success.
Feb 25 12:26:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2360327645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1159604673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.070 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating config drive at /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.075 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq0q_haq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.138 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.203 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
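
After the rbd import above lands the cached base image in the vms pool, nova resizes the new image up to the flavor's 1 GiB root disk. A minimal sketch of that resize with the python-rados/python-rbd bindings, reusing the client id, pool, and size from the logged commands:

    # Hedged sketch: resize the freshly imported RBD image with the Ceph
    # Python bindings (the import step itself was done via the rbd CLI above).
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk") as img:
                img.resize(1073741824)        # grow to the 1 GiB root disk
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
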
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.232 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq0q_haq" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.259 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.263 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.326 244018 DEBUG nova.objects.instance [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Ensure instance console log exists: /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.342 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.342 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.378 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.379 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deleting local config drive /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config because it was imported into RBD.
Feb 25 12:26:36 compute-0 kernel: tap69011b0a-5a: entered promiscuous mode
Feb 25 12:26:36 compute-0 NetworkManager[49836]: <info>  [1772022396.4129] manager: (tap69011b0a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Feb 25 12:26:36 compute-0 ovn_controller[147040]: 2026-02-25T12:26:36Z|00461|binding|INFO|Claiming lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae for this chassis.
Feb 25 12:26:36 compute-0 ovn_controller[147040]: 2026-02-25T12:26:36Z|00462|binding|INFO|69011b0a-5af7-4bef-a14c-8d83e63e08ae: Claiming fa:16:3e:df:0d:06 10.100.0.5
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:36 compute-0 NetworkManager[49836]: <info>  [1772022396.4243] device (tap69011b0a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:36 compute-0 NetworkManager[49836]: <info>  [1772022396.4250] device (tap69011b0a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:36 compute-0 ovn_controller[147040]: 2026-02-25T12:26:36Z|00463|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae ovn-installed in OVS
Feb 25 12:26:36 compute-0 ovn_controller[147040]: 2026-02-25T12:26:36Z|00464|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae up in Southbound
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.428 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0d:06 10.100.0.5'], port_security=['fa:16:3e:df:0d:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '43b8959e-9cf0-42ca-aa1f-8a380321c971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69011b0a-5af7-4bef-a14c-8d83e63e08ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69011b0a-5af7-4bef-a14c-8d83e63e08ae in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.432 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:26:36 compute-0 systemd-machined[210048]: New machine qemu-60-instance-00000037.
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.449 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a77c24c-883b-49c5-83e9-dd78f1f86c77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000037.
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.469 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[42cedd75-191d-4b4e-b0db-4bdff8caafab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.479 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[295e2408-1fba-4bf2-898b-cf64522c5fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.497 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6728bc-176b-4db8-8eed-3d9305d1d1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c82bbaf-378a-427a-ba50-470509a23e05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291382, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00256589-2aae-4dc8-8f15-716d51d31131]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291383, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291383, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.521 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.577 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022381.5763638, a826c2fd-1af8-4b55-b801-90ce87d04466 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.577 244018 INFO nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Stopped (Lifecycle Event)
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.596 244018 DEBUG nova.compute.manager [None req-2506aba5-5168-4ee6-a5fe-447a1ceb6c70 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.836 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022396.836163, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Started (Lifecycle Event)
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.868 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022396.8365889, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.868 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Paused (Lifecycle Event)
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.892 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.895 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:36 compute-0 nova_compute[244014]: 2026-02-25 12:26:36.929 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:36 compute-0 ceph-mon[76335]: pgmap v1304: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 6.1 MiB/s wr, 147 op/s
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.015 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Successfully created port: a4d3e156-0255-47ba-811a-4ddaf7c16468 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.119 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 WARNING nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state active and task_state None.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.123 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Processing event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.124 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.127 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022397.1269784, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.128 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Resumed (Lifecycle Event)
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.130 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.134 244018 INFO nova.virt.libvirt.driver [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance spawned successfully.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.135 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.158 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.163 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.164 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.164 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.165 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.166 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.166 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.216 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.250 244018 INFO nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 6.42 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.250 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.296 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.297 244018 INFO nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Terminating instance
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.298 244018 DEBUG nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.316 244018 INFO nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 8.21 seconds to build instance.
Feb 25 12:26:37 compute-0 kernel: tapdd03c667-f0 (unregistering): left promiscuous mode
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.336 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:37 compute-0 ovn_controller[147040]: 2026-02-25T12:26:37Z|00465|binding|INFO|Releasing lport dd03c667-f058-4e13-bb03-517ed838be2e from this chassis (sb_readonly=0)
Feb 25 12:26:37 compute-0 ovn_controller[147040]: 2026-02-25T12:26:37Z|00466|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e down in Southbound
Feb 25 12:26:37 compute-0 ovn_controller[147040]: 2026-02-25T12:26:37Z|00467|binding|INFO|Removing iface tapdd03c667-f0 ovn-installed in OVS
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 NetworkManager[49836]: <info>  [1772022397.3459] device (tapdd03c667-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.353 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:52:b8 10.100.0.13'], port_security=['fa:16:3e:87:52:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '56267d17-0733-4abe-b916-d1a25e516514', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd03c667-f058-4e13-bb03-517ed838be2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd03c667-f058-4e13-bb03-517ed838be2e in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.359 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.360 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[576c3a00-47f9-4a9d-afc9-308eb73e16e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Deactivated successfully.
Feb 25 12:26:37 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Consumed 2.783s CPU time.
Feb 25 12:26:37 compute-0 systemd-machined[210048]: Machine qemu-59-instance-00000036 terminated.
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : haproxy version is 2.8.14-c23fe91
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : path to executable is /usr/sbin/haproxy
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : Exiting Master process...
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : Exiting Master process...
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [ALERT]    (291212) : Current worker (291219) exited with code 143 (Terminated)
Feb 25 12:26:37 compute-0 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : All workers exited. Exiting... (0)
Feb 25 12:26:37 compute-0 systemd[1]: libpod-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope: Deactivated successfully.
Feb 25 12:26:37 compute-0 conmon[291192]: conmon 41085a498c43d4215289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope/container/memory.events
Feb 25 12:26:37 compute-0 podman[291450]: 2026-02-25 12:26:37.518016128 +0000 UTC m=+0.060714999 container died 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.532 244018 INFO nova.virt.libvirt.driver [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance destroyed successfully.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.533 244018 DEBUG nova.objects.instance [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115-userdata-shm.mount: Deactivated successfully.
Feb 25 12:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6407dc8ff3720d9297a51c2a82a3e2ac0165aaf6a4b9b13749f15d5541d8db-merged.mount: Deactivated successfully.
Feb 25 12:26:37 compute-0 podman[291450]: 2026-02-25 12:26:37.573470407 +0000 UTC m=+0.116169278 container cleanup 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:26:37 compute-0 systemd[1]: libpod-conmon-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope: Deactivated successfully.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.589 244018 DEBUG nova.virt.libvirt.vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.590 244018 DEBUG nova.network.os_vif_util [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.591 244018 DEBUG nova.network.os_vif_util [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.592 244018 DEBUG os_vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.595 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd03c667-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.600 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.603 244018 INFO os_vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0')
Feb 25 12:26:37 compute-0 podman[291490]: 2026-02-25 12:26:37.656049415 +0000 UTC m=+0.059649280 container remove 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef018af5-d8ae-4382-a5f6-cb745f8f409a]: (4, ('Wed Feb 25 12:26:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115)\n41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115\nWed Feb 25 12:26:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115)\n41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.666 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b13b8b1f-5716-4bcc-b19b-33bc491f3334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad82f17c-bbf9-4c7b-a4b8-22baaf174887]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.694 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7e493225-a91e-48ab-9afe-f3a5414131d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[53433b70-22ae-4917-89e2-27a1c6a62c52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.721 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41d3b2b5-4d11-459e-bd27-2f0a2c9d3006]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436452, 'reachable_time': 31271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291520, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.727 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:26:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.728 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[972ef46e-33e1-451e-9659-54fea56c75e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.913 244018 INFO nova.virt.libvirt.driver [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deleting instance files /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514_del
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.914 244018 INFO nova.virt.libvirt.driver [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deletion of /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514_del complete
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.982 244018 INFO nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG oslo.service.loopingcall [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:26:37 compute-0 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG nova.network.neutron [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG nova.compute.manager [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG nova.compute.manager [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing instance network info cache due to event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.351 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.351 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.626 244018 INFO nova.compute.manager [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Get console output
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.632 244018 INFO oslo.privsep.daemon [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp6qasf0gg/privsep.sock']
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.936 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Successfully updated port: a4d3e156-0255-47ba-811a-4ddaf7c16468 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.960 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.961 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:38 compute-0 nova_compute[244014]: 2026-02-25 12:26:38.961 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:38 compute-0 ceph-mon[76335]: pgmap v1305: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.200 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.430 244018 DEBUG nova.network.neutron [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.449 244018 INFO nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 1.47 seconds to deallocate network for instance.
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.493 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.494 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.596 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022384.5960622, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.597 244018 INFO nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Stopped (Lifecycle Event)
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.616 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.616 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received unexpected event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with vm_state active and task_state None.
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state deleted and task_state None.
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state deleted and task_state None.
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-changed-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.622 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Refreshing instance network info cache due to event network-changed-a4d3e156-0255-47ba-811a-4ddaf7c16468. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.622 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.625 244018 DEBUG nova.compute.manager [None req-740ac07a-bb3f-48bb-a219-c99c1806dd91 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.665 244018 DEBUG oslo_concurrency.processutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.797 244018 INFO oslo.privsep.daemon [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Spawned new privsep daemon via rootwrap
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.671 291526 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.675 291526 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.677 291526 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.677 291526 INFO oslo.privsep.daemon [-] privsep daemon running as pid 291526
Feb 25 12:26:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 289 op/s
Feb 25 12:26:39 compute-0 nova_compute[244014]: 2026-02-25 12:26:39.940 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:26:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570298839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.195 244018 DEBUG oslo_concurrency.processutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.200 244018 DEBUG nova.compute.provider_tree [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.219 244018 DEBUG nova.scheduler.client.report [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.246 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.270 244018 INFO nova.scheduler.client.report [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance 56267d17-0733-4abe-b916-d1a25e516514
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.365 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.447 244018 DEBUG nova.compute.manager [req-6301e5b5-0962-4265-b118-a55d486c89b1 req-51d7d4fd-c05f-4607-85ab-0383e8952358 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-deleted-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.493 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.519 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.520 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance network_info: |[{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.520 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.521 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Refreshing network info cache for port a4d3e156-0255-47ba-811a-4ddaf7c16468 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.527 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start _get_guest_xml network_info=[{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.541 244018 WARNING nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.553 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.553 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.558 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.558 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.559 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.559 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.560 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.560 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.563 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.563 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.568 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.770 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated VIF entry in instance network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.771 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:40 compute-0 nova_compute[244014]: 2026-02-25 12:26:40.796 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:40 compute-0 ceph-mon[76335]: pgmap v1306: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 289 op/s
Feb 25 12:26:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3570298839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166624250' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.122 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.158 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.171 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477816515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.694 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.697 244018 DEBUG nova.virt.libvirt.vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.698 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.699 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:41 compute-0 nova_compute[244014]: 2026-02-25 12:26:41.701 244018 DEBUG nova.objects.instance [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 407 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 320 op/s
Feb 25 12:26:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1166624250' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3477816515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020518122360856455 of space, bias 1.0, pg target 0.6155436708256936 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002491905081122308 of space, bias 1.0, pg target 0.7475715243366924 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0829516171927781e-06 of space, bias 4.0, pg target 0.0012995419406313337 quantized to 16 (current 16)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:26:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.363 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <uuid>d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</uuid>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <name>instance-00000038</name>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:name>tempest-DeleteServersTestJSON-server-1913570277</nova:name>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:40</nova:creationTime>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <nova:port uuid="a4d3e156-0255-47ba-811a-4ddaf7c16468">
Feb 25 12:26:42 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="serial">d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="uuid">d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk">
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config">
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:61:0e:2a"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <target dev="tapa4d3e156-02"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/console.log" append="off"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:42 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:42 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:42 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:42 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:42 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
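In the domain XML above, <memory> is in KiB (131072 KiB = 128 MiB, matching flavor m1.nano), and the root disk is attached straight from the Ceph 'vms' pool rather than from a local file. A throwaway sketch for reading those fields back with the standard library, assuming the XML was saved locally as domain.xml (a hypothetical file, not in this log):

    import xml.etree.ElementTree as ET

    dom = ET.parse("domain.xml").getroot()       # hypothetical local copy of the XML above
    mem_kib = int(dom.findtext("memory"))        # libvirt memory defaults to KiB
    src = dom.find("./devices/disk/source")      # first <disk>: the RBD-backed root disk
    print(mem_kib // 1024, "MiB")                # -> 128 MiB
    print(src.get("protocol"), src.get("name"))  # -> rbd vms/d0b80f5d-..._disk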
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Preparing to wait for external event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.365 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.365 244018 DEBUG nova.virt.libvirt.vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.366 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.366 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.367 244018 DEBUG os_vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.368 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.368 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4d3e156-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.372 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4d3e156-02, col_values=(('external_ids', {'iface-id': 'a4d3e156-0255-47ba-811a-4ddaf7c16468', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:0e:2a', 'vm-uuid': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:42 compute-0 NetworkManager[49836]: <info>  [1772022402.3748] manager: (tapa4d3e156-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.382 244018 INFO os_vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02')
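The plug boils down to the two OVSDB transactions logged above: an add-port on br-int, then setting external_ids on the Interface row (the earlier AddBridgeCommand was a no-op). A sketch of the same pair issued directly through ovsdbapp, assuming the usual local ovsdb-server socket (the connection string is an assumption, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa4d3e156-02", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa4d3e156-02",
            ("external_ids", {"iface-id": "a4d3e156-0255-47ba-811a-4ddaf7c16468",
                              "iface-status": "active",
                              "attached-mac": "fa:16:3e:61:0e:2a",
                              "vm-uuid": "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84"})))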
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:61:0e:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.593 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Using config drive
Feb 25 12:26:42 compute-0 nova_compute[244014]: 2026-02-25 12:26:42.621 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:42 compute-0 ceph-mon[76335]: pgmap v1307: 305 pgs: 305 active+clean; 407 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 320 op/s
Feb 25 12:26:43 compute-0 nova_compute[244014]: 2026-02-25 12:26:43.680 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating config drive at /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config
Feb 25 12:26:43 compute-0 nova_compute[244014]: 2026-02-25 12:26:43.683 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpke6237ft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 364 op/s
Feb 25 12:26:43 compute-0 nova_compute[244014]: 2026-02-25 12:26:43.831 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpke6237ft" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:43 compute-0 nova_compute[244014]: 2026-02-25 12:26:43.858 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:43 compute-0 nova_compute[244014]: 2026-02-25 12:26:43.862 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.148 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.149 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deleting local config drive /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config because it was imported into RBD.
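The config drive is built locally with mkisofs and immediately imported into the 'vms' pool, so nothing persists under /var/lib/nova once the import succeeds. One way to confirm the imported image, reusing the same credentials the rbd import above used (a sketch, not taken from this log):

    import subprocess

    # Print size/format/features of the freshly imported config-drive image.
    subprocess.run(
        ["rbd", "info", "--pool", "vms",
         "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )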
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.1992] manager: (tapa4d3e156-02): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Feb 25 12:26:44 compute-0 kernel: tapa4d3e156-02: entered promiscuous mode
Feb 25 12:26:44 compute-0 ovn_controller[147040]: 2026-02-25T12:26:44Z|00468|binding|INFO|Claiming lport a4d3e156-0255-47ba-811a-4ddaf7c16468 for this chassis.
Feb 25 12:26:44 compute-0 ovn_controller[147040]: 2026-02-25T12:26:44Z|00469|binding|INFO|a4d3e156-0255-47ba-811a-4ddaf7c16468: Claiming fa:16:3e:61:0e:2a 10.100.0.11
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.216 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:0e:2a 10.100.0.11'], port_security=['fa:16:3e:61:0e:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d3e156-0255-47ba-811a-4ddaf7c16468 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.219 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b57c957-fccc-42ae-bbfb-610912b12846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_controller[147040]: 2026-02-25T12:26:44Z|00470|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 ovn-installed in OVS
Feb 25 12:26:44 compute-0 ovn_controller[147040]: 2026-02-25T12:26:44Z|00471|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 up in Southbound
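At this point ovn-controller has claimed the lport on this chassis and marked it up in the Southbound database; the network-vif-plugged event nova is still waiting for follows from this binding. The Port_Binding row can be inspected from any host with Southbound access; a sketch (the --db endpoint below is hypothetical, not taken from this log):

    import subprocess

    subprocess.run(
        ["ovn-sbctl", "--db=tcp:192.168.122.100:6642",  # hypothetical SB endpoint
         "find", "Port_Binding",
         "logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468"],
        check=True,
    )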
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.228 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.231 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d93af6-4012-4475-84a2-a5301460d8d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f72d63f-ec94-4929-8530-83a9a0d77b73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.245 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1a41013b-25a8-4042-96a9-2768ec6a86b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 systemd-machined[210048]: New machine qemu-61-instance-00000038.
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.257 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[603bcaa8-2981-4ce1-9802-ffb6bbd98632]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000038.
Feb 25 12:26:44 compute-0 systemd-udevd[291691]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.2916] device (tapa4d3e156-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.2929] device (tapa4d3e156-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.296 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f90c70db-1960-47a3-828b-0ff5677b2721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.3032] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30e7872c-5cd4-4694-9fbc-c07642ac6769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.333 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b335d-b3d1-442a-a9d8-4093490237e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.337 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b5054d55-9ffa-4db0-a837-62c2bee024d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.3542] device (tapa0d45b1c-10): carrier: link connected
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.359 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d721806-584f-4180-86ff-a66032ada2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5be93ca-1793-4f47-93d1-d13bc05d534e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437376, 'reachable_time': 35387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291719, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d53c3e7-00d6-4260-920a-1332ddf91531]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437376, 'tstamp': 437376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291720, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.410 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63201e63-9f50-4198-a591-2735d0db3eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437376, 'reachable_time': 35387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291721, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.451 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49a72f49-083d-4297-b0b8-62723f8820df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bd70650d-6624-489a-90b5-5d4bd9665fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.500 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.501 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.501 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:44 compute-0 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 NetworkManager[49836]: <info>  [1772022404.5043] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.510 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:44 compute-0 ovn_controller[147040]: 2026-02-25T12:26:44Z|00472|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.515 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8011f0db-8bd2-4077-bf1f-93008ebb78e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.517 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
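The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace, forwards requests to the neutron metadata agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve which network it came from. Once the proxy launched just below is up, a smoke test from the compute node might look like this (a sketch assuming curl is installed; the agent may still return an error body, since the test's source address is not an instance IP):

    import subprocess

    # Hit the proxy from inside the ovnmeta namespace; any HTTP response,
    # even an error body, confirms the haproxy listener is answering.
    subprocess.run(
        ["ip", "netns", "exec", "ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99",
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=True,
    )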
Feb 25 12:26:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.518 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.755 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updated VIF entry in instance network info cache for port a4d3e156-0255-47ba-811a-4ddaf7c16468. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.756 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.758 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022404.7578166, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.758 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Started (Lifecycle Event)
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.784 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.791 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.801 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022404.7580752, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Paused (Lifecycle Event)
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.835 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.839 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:44 compute-0 nova_compute[244014]: 2026-02-25 12:26:44.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:44 compute-0 podman[291793]: 2026-02-25 12:26:44.885796845 +0000 UTC m=+0.057963022 container create d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 12:26:44 compute-0 systemd[1]: Started libpod-conmon-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope.
Feb 25 12:26:44 compute-0 podman[291793]: 2026-02-25 12:26:44.854502489 +0000 UTC m=+0.026668666 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163036a2769903f79b3e9bb16ada6a0f381d1efd3f2b026f8dd2d7aac6106578/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:44 compute-0 podman[291793]: 2026-02-25 12:26:44.977626274 +0000 UTC m=+0.149792431 container init d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:26:44 compute-0 podman[291793]: 2026-02-25 12:26:44.982241394 +0000 UTC m=+0.154407521 container start d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:26:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : New worker (291814) forked
Feb 25 12:26:45 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : Loading success.
Feb 25 12:26:45 compute-0 ceph-mon[76335]: pgmap v1308: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 364 op/s
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.033 244018 DEBUG nova.compute.manager [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.035 244018 DEBUG nova.compute.manager [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Processing event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.036 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.048 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022405.0394669, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.049 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Resumed (Lifecycle Event)
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.051 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.058 244018 INFO nova.virt.libvirt.driver [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance spawned successfully.
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.058 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.078 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.082 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.082 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.083 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.083 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.084 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.084 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.152 244018 INFO nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 9.53 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.152 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.224 244018 INFO nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 10.77 seconds to build instance.
Feb 25 12:26:45 compute-0 nova_compute[244014]: 2026-02-25 12:26:45.245 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 285 op/s
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.040 244018 DEBUG nova.objects.instance [None req-ea726091-0f1a-4f29-aed0-0c444d808ba9 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.067 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022407.0676155, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.068 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Paused (Lifecycle Event)
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.085 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.116 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:26:47 compute-0 ceph-mon[76335]: pgmap v1309: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 285 op/s
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:26:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:26:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:26:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:26:47 compute-0 kernel: tapa4d3e156-02 (unregistering): left promiscuous mode
Feb 25 12:26:47 compute-0 NetworkManager[49836]: <info>  [1772022407.5272] device (tapa4d3e156-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:26:47 compute-0 ovn_controller[147040]: 2026-02-25T12:26:47Z|00473|binding|INFO|Releasing lport a4d3e156-0255-47ba-811a-4ddaf7c16468 from this chassis (sb_readonly=0)
Feb 25 12:26:47 compute-0 ovn_controller[147040]: 2026-02-25T12:26:47Z|00474|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 down in Southbound
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 ovn_controller[147040]: 2026-02-25T12:26:47Z|00475|binding|INFO|Removing iface tapa4d3e156-02 ovn-installed in OVS
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.553 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:0e:2a 10.100.0.11'], port_security=['fa:16:3e:61:0e:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.555 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d3e156-0255-47ba-811a-4ddaf7c16468 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.557 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8559cf85-ae46-4fa9-a0db-353c611e57a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.559 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 25 12:26:47 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Consumed 2.643s CPU time.
Feb 25 12:26:47 compute-0 systemd-machined[210048]: Machine qemu-61-instance-00000038 terminated.
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.696 244018 DEBUG nova.compute.manager [None req-ea726091-0f1a-4f29-aed0-0c444d808ba9 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:47 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : haproxy version is 2.8.14-c23fe91
Feb 25 12:26:47 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : path to executable is /usr/sbin/haproxy
Feb 25 12:26:47 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [WARNING]  (291812) : Exiting Master process...
Feb 25 12:26:47 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [ALERT]    (291812) : Current worker (291814) exited with code 143 (Terminated)
Feb 25 12:26:47 compute-0 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [WARNING]  (291812) : All workers exited. Exiting... (0)
Feb 25 12:26:47 compute-0 systemd[1]: libpod-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope: Deactivated successfully.
Feb 25 12:26:47 compute-0 podman[291847]: 2026-02-25 12:26:47.73638199 +0000 UTC m=+0.070471426 container died d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72-userdata-shm.mount: Deactivated successfully.
Feb 25 12:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-163036a2769903f79b3e9bb16ada6a0f381d1efd3f2b026f8dd2d7aac6106578-merged.mount: Deactivated successfully.
Feb 25 12:26:47 compute-0 podman[291847]: 2026-02-25 12:26:47.773907262 +0000 UTC m=+0.107996688 container cleanup d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:26:47 compute-0 systemd[1]: libpod-conmon-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope: Deactivated successfully.
Feb 25 12:26:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.0 MiB/s wr, 413 op/s
Feb 25 12:26:47 compute-0 podman[291885]: 2026-02-25 12:26:47.841821604 +0000 UTC m=+0.051009165 container remove d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.848 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c48b7e4b-13dd-4acb-9f6f-9de8045a7f52]: (4, ('Wed Feb 25 12:26:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72)\nd8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72\nWed Feb 25 12:26:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72)\nd8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.849 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d54a34bf-775c-4686-8701-979f3a4a197d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.851 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.867 244018 DEBUG nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.868 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.868 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.869 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.870 244018 DEBUG nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.870 244018 WARNING nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
Feb 25 12:26:47 compute-0 nova_compute[244014]: 2026-02-25 12:26:47.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a119115-bf8d-47b0-baa8-e4c8643544bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a055e47-7970-4c84-ad4f-b98c8a987a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.890 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f92c503e-4854-4f05-a98e-7b56771be6cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.905 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf12e7e4-d527-4973-88d5-d432752f3970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437369, 'reachable_time': 39228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291905, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:47 compute-0 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.911 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:26:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.911 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[980887b2-d563-4b20-bf05-c26071a92e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:48 compute-0 ovn_controller[147040]: 2026-02-25T12:26:48Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:26:48 compute-0 ovn_controller[147040]: 2026-02-25T12:26:48Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.057 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.058 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.073 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:26:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:26:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.146 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.147 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.157 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.159 244018 INFO nova.compute.claims [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:26:48 compute-0 ovn_controller[147040]: 2026-02-25T12:26:48Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:0d:06 10.100.0.5
Feb 25 12:26:48 compute-0 ovn_controller[147040]: 2026-02-25T12:26:48Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:0d:06 10.100.0.5
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.360 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568133218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.967 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.977 244018 DEBUG nova.compute.provider_tree [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:48 compute-0 nova_compute[244014]: 2026-02-25 12:26:48.996 244018 DEBUG nova.scheduler.client.report [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.052 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.053 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.135 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.136 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.167 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:26:49 compute-0 ceph-mon[76335]: pgmap v1310: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.0 MiB/s wr, 413 op/s
Feb 25 12:26:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2568133218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.188 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.409 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.411 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.412 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating image(s)
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.442 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.479 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.514 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.519 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.585 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.586 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.587 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.588 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.617 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.621 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0612ec20-e725-4516-a153-f1fe9be74f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.734 244018 DEBUG nova.policy [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:26:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 224 op/s
Feb 25 12:26:49 compute-0 nova_compute[244014]: 2026-02-25 12:26:49.964 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0612ec20-e725-4516-a153-f1fe9be74f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.002 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.002 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.003 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.003 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 WARNING nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.005 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.005 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 WARNING nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
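The Acquiring/acquired/"released" triplets and the two "unexpected event" warnings above are nova serializing external network events per instance with oslo.concurrency's in-process locks: the vif-unplugged/vif-plugged events arrive while the instance is suspended, so no waiter is registered and each event is logged and dropped. A minimal sketch of the locking pattern (the function body is hypothetical):

    from oslo_concurrency import lockutils

    instance_uuid = 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84'

    @lockutils.synchronized(instance_uuid + '-events')
    def _pop_event(name):
        # Runs with the per-instance event lock held; the waited/held
        # durations in the DEBUG lines are measured around this body.
        return None  # no waiter found -> caller emits the WARNING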
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.063 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
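nova.storage.rbd_utils performs the resize above through the librbd Python binding rather than the CLI. A minimal standalone sketch with the image name and byte count from the log line (client id and conffile assumed from the earlier rbd import):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '0612ec20-e725-4516-a153-f1fe9be74f75_disk') as image:
                image.resize(1073741824)  # bytes, i.e. the flavor's 1 GiB root disk
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()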
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.104 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.106 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.106 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.110 244018 INFO nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Terminating instance
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.112 244018 DEBUG nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.167 244018 INFO nova.virt.libvirt.driver [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance destroyed successfully.
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.169 244018 DEBUG nova.objects.instance [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.178 244018 DEBUG nova.objects.instance [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.192 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.193 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Ensure instance console log exists: /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:26:50 compute-0 ovn_controller[147040]: 2026-02-25T12:26:50Z|00476|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:26:50 compute-0 ovn_controller[147040]: 2026-02-25T12:26:50Z|00477|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.194 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.195 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.196 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.198 244018 DEBUG nova.virt.libvirt.vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:47Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.198 244018 DEBUG nova.network.os_vif_util [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.199 244018 DEBUG nova.network.os_vif_util [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.200 244018 DEBUG os_vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4d3e156-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
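The DelPortCommand transaction above is ovsdbapp removing the tap port from br-int over the OVSDB IDL. A minimal sketch of the same operation against a local switch (the database socket address is an assumption; the log does not show which endpoint nova connected to):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Matches the logged command: port=tapa4d3e156-02, bridge=br-int,
    # if_exists=True so an already-deleted port is not an error.
    api.del_port('tapa4d3e156-02', bridge='br-int', if_exists=True).execute(
        check_error=True)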
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.242 244018 INFO os_vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02')
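The unplug itself goes through the os-vif library, which receives the converted VIFOpenVSwitch object shown in the log. A minimal sketch of driving os-vif directly, with the ids, MAC, and tap name copied from the entries above (this needs the ovs os-vif plugin installed and enough privilege to reach OVSDB):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='a0d45b1c-1680-4599-a27a-6e3335c94c99',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='a4d3e156-0255-47ba-811a-4ddaf7c16468',
        address='fa:16:3e:61:0e:2a',
        vif_name='tapa4d3e156-02',
        bridge_name='br-int',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84',
        name='tempest-DeleteServersTestJSON-server-1913570277')
    os_vif.unplug(ovs_vif, inst)  # produces the "Successfully unplugged" INFO line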
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.424 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Successfully created port: 5951ca77-9d6a-41db-aaf8-3508d21701f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.611 244018 INFO nova.virt.libvirt.driver [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deleting instance files /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_del
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.612 244018 INFO nova.virt.libvirt.driver [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deletion of /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_del complete
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.686 244018 INFO nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG oslo.service.loopingcall [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:26:50 compute-0 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG nova.network.neutron [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:26:51 compute-0 sudo[292113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:26:51 compute-0 sudo[292113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:51 compute-0 sudo[292113]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:51 compute-0 sudo[292138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:26:51 compute-0 sudo[292138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: pgmap v1311: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 224 op/s
Feb 25 12:26:51 compute-0 sudo[292138]: pam_unix(sudo:session): session closed for user root
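The sudo sessions above are the cephadm mgr module driving this host as ceph-admin; each action is a python3 invocation of the cephadm binary copied under /var/lib/ceph. The gather-facts call can be reproduced by hand, with the arguments taken verbatim from the COMMAND= field:

    import subprocess

    subprocess.run(
        ['sudo', '/bin/python3',
         '/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
         'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
         '--timeout', '895', 'gather-facts'],
        check=True)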
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:26:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:26:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:26:51 compute-0 sudo[292194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:26:51 compute-0 sudo[292194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:51 compute-0 sudo[292194]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 413 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.4 MiB/s wr, 264 op/s
Feb 25 12:26:51 compute-0 sudo[292219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:26:51 compute-0 sudo[292219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.178395705 +0000 UTC m=+0.085116650 container create fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:26:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.1153433 +0000 UTC m=+0.022064265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:52 compute-0 systemd[1]: Started libpod-conmon-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope.
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.254 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Successfully updated port: 5951ca77-9d6a-41db-aaf8-3508d21701f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:26:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.276930004 +0000 UTC m=+0.183650989 container init fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.285035573 +0000 UTC m=+0.191756518 container start fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.288606094 +0000 UTC m=+0.195327029 container attach fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:52 compute-0 reverent_elgamal[292274]: 167 167
Feb 25 12:26:52 compute-0 systemd[1]: libpod-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope: Deactivated successfully.
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.292474133 +0000 UTC m=+0.199195058 container died fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-54e5bc5d52300805adb94b1af1293dd35ef8e5abd63bbb27ab7906e94f36c1af-merged.mount: Deactivated successfully.
Feb 25 12:26:52 compute-0 podman[292256]: 2026-02-25 12:26:52.336610723 +0000 UTC m=+0.243331648 container remove fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:26:52 compute-0 systemd[1]: libpod-conmon-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope: Deactivated successfully.
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.409 244018 DEBUG nova.compute.manager [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.410 244018 DEBUG nova.compute.manager [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing instance network info cache due to event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.411 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.453 244018 DEBUG nova.network.neutron [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.479 244018 INFO nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 1.79 seconds to deallocate network for instance.
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.520 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.527 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:52 compute-0 podman[292299]: 2026-02-25 12:26:52.527345841 +0000 UTC m=+0.055533023 container create 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.527 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.529 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022397.52653, 56267d17-0733-4abe-b916-d1a25e516514 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.529 244018 INFO nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Stopped (Lifecycle Event)
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.551 244018 DEBUG nova.compute.manager [None req-4e8a76df-d682-4c13-b90b-a24f00faf586 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:52 compute-0 systemd[1]: Started libpod-conmon-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope.
Feb 25 12:26:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:52 compute-0 podman[292299]: 2026-02-25 12:26:52.50504274 +0000 UTC m=+0.033229962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:52 compute-0 podman[292299]: 2026-02-25 12:26:52.606014467 +0000 UTC m=+0.134201699 container init 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:26:52 compute-0 podman[292299]: 2026-02-25 12:26:52.618619384 +0000 UTC m=+0.146806566 container start 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 12:26:52 compute-0 podman[292299]: 2026-02-25 12:26:52.623489232 +0000 UTC m=+0.151676464 container attach 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:26:52 compute-0 nova_compute[244014]: 2026-02-25 12:26:52.704 244018 DEBUG oslo_concurrency.processutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:53 compute-0 lucid_darwin[292316]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:26:53 compute-0 lucid_darwin[292316]: --> All data devices are unavailable
Feb 25 12:26:53 compute-0 systemd[1]: libpod-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope: Deactivated successfully.
Feb 25 12:26:53 compute-0 podman[292299]: 2026-02-25 12:26:53.098930127 +0000 UTC m=+0.627117269 container died 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff-merged.mount: Deactivated successfully.
Feb 25 12:26:53 compute-0 podman[292299]: 2026-02-25 12:26:53.129844752 +0000 UTC m=+0.658031934 container remove 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:26:53 compute-0 systemd[1]: libpod-conmon-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope: Deactivated successfully.
Feb 25 12:26:53 compute-0 sudo[292219]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:53 compute-0 ceph-mon[76335]: pgmap v1312: 305 pgs: 305 active+clean; 413 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.4 MiB/s wr, 264 op/s
Feb 25 12:26:53 compute-0 sudo[292369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:26:53 compute-0 sudo[292369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:53 compute-0 sudo[292369]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:26:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436349269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.265 244018 DEBUG oslo_concurrency.processutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
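With images in RBD, nova sizes its disk inventory from that ceph df call. A minimal sketch of the same command and the totals it reads (key names per ceph's JSON output; client id and conffile from the logged CMD):

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)['stats']
    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')  # ~59 of 60 here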
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.269 244018 DEBUG nova.compute.provider_tree [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:26:53 compute-0 sudo[292395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:26:53 compute-0 sudo[292395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.300 244018 DEBUG nova.scheduler.client.report [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
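Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Worked out for the values in the log line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2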
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.330 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.351 244018 INFO nova.scheduler.client.report [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.448 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.508 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.541 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.542 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance network_info: |[{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.542 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.543 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.547 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start _get_guest_xml network_info=[{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.553 244018 WARNING nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.572 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.573 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.580 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.581 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.582 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.582 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.583 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.585 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.585 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.586 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.586 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.587 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.587 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.591994932 +0000 UTC m=+0.062108539 container create f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:53 compute-0 nova_compute[244014]: 2026-02-25 12:26:53.593 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:53 compute-0 systemd[1]: Started libpod-conmon-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope.
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.565016158 +0000 UTC m=+0.035129675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.726807237 +0000 UTC m=+0.196920724 container init f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.73573782 +0000 UTC m=+0.205851297 container start f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:26:53 compute-0 wonderful_raman[292450]: 167 167
Feb 25 12:26:53 compute-0 systemd[1]: libpod-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope: Deactivated successfully.
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.744075386 +0000 UTC m=+0.214188913 container attach f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.747947845 +0000 UTC m=+0.218061322 container died f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:26:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 317 op/s
Feb 25 12:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-74438388b85b99b05cb213c34d0a25c08a291d2db83f9740623de3b66a656f36-merged.mount: Deactivated successfully.
Feb 25 12:26:53 compute-0 podman[292434]: 2026-02-25 12:26:53.968503808 +0000 UTC m=+0.438617295 container remove f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:26:54 compute-0 systemd[1]: libpod-conmon-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope: Deactivated successfully.
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.152800503 +0000 UTC m=+0.058389053 container create 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:26:54 compute-0 systemd[1]: Started libpod-conmon-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope.
Feb 25 12:26:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872472128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3436349269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:26:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.128885367 +0000 UTC m=+0.034473967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.224 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.243574571 +0000 UTC m=+0.149163111 container init 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.252576496 +0000 UTC m=+0.158165046 container start 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.256516748 +0000 UTC m=+0.162105278 container attach 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.257 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.262 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:54 compute-0 elegant_benz[292511]: {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     "0": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "devices": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "/dev/loop3"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             ],
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_name": "ceph_lv0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_size": "21470642176",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "name": "ceph_lv0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "tags": {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_name": "ceph",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.crush_device_class": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.encrypted": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.objectstore": "bluestore",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_id": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.vdo": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.with_tpm": "0"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             },
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "vg_name": "ceph_vg0"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         }
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     ],
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     "1": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "devices": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "/dev/loop4"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             ],
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_name": "ceph_lv1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_size": "21470642176",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "name": "ceph_lv1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "tags": {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_name": "ceph",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.crush_device_class": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.encrypted": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.objectstore": "bluestore",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_id": "1",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.vdo": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.with_tpm": "0"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             },
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "vg_name": "ceph_vg1"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         }
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     ],
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     "2": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "devices": [
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "/dev/loop5"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             ],
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_name": "ceph_lv2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_size": "21470642176",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "name": "ceph_lv2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "tags": {
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.cluster_name": "ceph",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.crush_device_class": "",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.encrypted": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.objectstore": "bluestore",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osd_id": "2",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.vdo": "0",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:                 "ceph.with_tpm": "0"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             },
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "type": "block",
Feb 25 12:26:54 compute-0 elegant_benz[292511]:             "vg_name": "ceph_vg2"
Feb 25 12:26:54 compute-0 elegant_benz[292511]:         }
Feb 25 12:26:54 compute-0 elegant_benz[292511]:     ]
Feb 25 12:26:54 compute-0 elegant_benz[292511]: }
Feb 25 12:26:54 compute-0 systemd[1]: libpod-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope: Deactivated successfully.
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.560279185 +0000 UTC m=+0.465867815 container died 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d-merged.mount: Deactivated successfully.
Feb 25 12:26:54 compute-0 podman[292494]: 2026-02-25 12:26:54.610157156 +0000 UTC m=+0.515745716 container remove 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:26:54 compute-0 systemd[1]: libpod-conmon-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope: Deactivated successfully.
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:54 compute-0 sudo[292395]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.692 244018 DEBUG nova.compute.manager [req-2b8a0610-fa99-4e1c-8299-f78ef6d1acf3 req-c5d88f2c-eb2f-4f84-b10b-6d7329d77c21 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-deleted-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:54 compute-0 sudo[292571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:26:54 compute-0 sudo[292571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:54 compute-0 sudo[292571]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626153450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.799 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.800 244018 DEBUG nova.virt.libvirt.vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:49Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.801 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.802 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.804 244018 DEBUG nova.objects.instance [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.823 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <uuid>0612ec20-e725-4516-a153-f1fe9be74f75</uuid>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <name>instance-00000039</name>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherB-server-526421737</nova:name>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:53</nova:creationTime>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <nova:port uuid="5951ca77-9d6a-41db-aaf8-3508d21701f3">
Feb 25 12:26:54 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="serial">0612ec20-e725-4516-a153-f1fe9be74f75</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="uuid">0612ec20-e725-4516-a153-f1fe9be74f75</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:54 compute-0 sudo[292596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk">
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk.config">
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a7:90:29"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <target dev="tap5951ca77-9d"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/console.log" append="off"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:54 compute-0 sudo[292596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:54 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:54 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:54 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:54 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:54 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.823 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Preparing to wait for external event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.825 244018 DEBUG nova.virt.libvirt.vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:49Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.826 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.827 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.827 244018 DEBUG os_vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.829 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.833 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5951ca77-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.834 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5951ca77-9d, col_values=(('external_ids', {'iface-id': '5951ca77-9d6a-41db-aaf8-3508d21701f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:90:29', 'vm-uuid': '0612ec20-e725-4516-a153-f1fe9be74f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
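
The transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand against the Interface row) are the standard ovsdbapp pattern os-vif uses to wire a tap device into br-int. A minimal standalone sketch of the same calls follows; the connection string and timeout are assumptions, while the bridge, port, and external_ids values are copied from the log, and a single transaction here stands in for the two separate txns logged above:

    # Sketch only: replays the AddBridge/AddPort/DbSet commands logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5951ca77-9d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5951ca77-9d',
            ('external_ids', {
                'iface-id': '5951ca77-9d6a-41db-aaf8-3508d21701f3',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:a7:90:29',
                'vm-uuid': '0612ec20-e725-4516-a153-f1fe9be74f75'})))
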
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:54 compute-0 NetworkManager[49836]: <info>  [1772022414.8370] manager: (tap5951ca77-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.844 244018 INFO os_vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d')
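
os_vif.plug(), whose INFO line closes the plug sequence above, takes the converted VIFOpenVSwitch object plus a small InstanceInfo. A minimal sketch of that call path; the field values are copied from the converted object and the libvirt domain name appearing later in this log, while everything else (plugin availability, privileges, a reachable local OVS) is assumed:

    # Sketch of the os_vif plug call behind "Successfully plugged vif" above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, noop, ...)

    net = network.Network(id='64c22162-7e15-45de-8fd2-8c9a24f27006',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='5951ca77-9d6a-41db-aaf8-3508d21701f3',
        address='fa:16:3e:a7:90:29',
        bridge_name='br-int',
        vif_name='tap5951ca77-9d',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='5951ca77-9d6a-41db-aaf8-3508d21701f3'))
    inst = instance_info.InstanceInfo(
        uuid='0612ec20-e725-4516-a153-f1fe9be74f75',
        name='instance-00000039')  # domain name, see systemd-machined below

    os_vif.plug(port, inst)
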
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.885 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updated VIF entry in instance network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:26:54 compute-0 nova_compute[244014]: 2026-02-25 12:26:54.885 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.005 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.009 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
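
The acquire/release pairs oslo.concurrency logs here (and throughout this journal) come from its lockutils wrappers: ProcessMonitor._check_child_processes is serialized on an in-process lock named "_check_child_processes". A minimal sketch of the two equivalent idioms, reusing that lock name (the function body is illustrative):

    # Sketch: the two lockutils idioms that produce the
    # Acquiring / acquired / "released" DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # callers serialize on the shared lock name

    # ...or the explicit context-manager form:
    with lockutils.lock('_check_child_processes'):
        pass
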
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.012 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.013 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.014 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:a7:90:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.015 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Using config drive
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.052 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.123266848 +0000 UTC m=+0.056789158 container create e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:55 compute-0 systemd[1]: Started libpod-conmon-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope.
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.099061863 +0000 UTC m=+0.032584233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:55 compute-0 ceph-mon[76335]: pgmap v1313: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 317 op/s
Feb 25 12:26:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2872472128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3626153450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
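
The two audit dispatches above show client.openstack issuing "mon dump", consistent with nova's rbd_utils discovering monitor addresses while it works on the config drive below. A minimal sketch of the same command through the Python rados binding, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable:

    # Sketch: issue the same {"prefix": "mon dump"} command seen dispatched above.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
        monmap = json.loads(outbuf)
        print(monmap['epoch'], [m['name'] for m in monmap['mons']])
    finally:
        cluster.shutdown()
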
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.212244956 +0000 UTC m=+0.145767256 container init e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.219889643 +0000 UTC m=+0.153411933 container start e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.224189974 +0000 UTC m=+0.157712294 container attach e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:26:55 compute-0 pedantic_ganguly[292671]: 167 167
Feb 25 12:26:55 compute-0 systemd[1]: libpod-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope: Deactivated successfully.
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.226249963 +0000 UTC m=+0.159772283 container died e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a03d63988efeff3422ea892508cd78ef1d783f8fe477150b460e1aa03bfad9d-merged.mount: Deactivated successfully.
Feb 25 12:26:55 compute-0 podman[292655]: 2026-02-25 12:26:55.271889744 +0000 UTC m=+0.205412054 container remove e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 12:26:55 compute-0 systemd[1]: libpod-conmon-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope: Deactivated successfully.
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.287 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.288 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.289 244018 INFO nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Rebooting instance
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG nova.network.neutron [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.405 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating config drive at /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.412 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpowdgiq3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:55 compute-0 podman[292694]: 2026-02-25 12:26:55.49114874 +0000 UTC m=+0.085426729 container create b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 25 12:26:55 compute-0 podman[292694]: 2026-02-25 12:26:55.426179681 +0000 UTC m=+0.020457690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:26:55 compute-0 systemd[1]: Started libpod-conmon-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope.
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.555 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpowdgiq3q" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
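
oslo.concurrency's processutils wraps the mkisofs run above, logging the command on entry and its exit code and elapsed time ("returned: 0 in 0.143s") on completion. A minimal sketch of the same call; the output path, publisher string, and staging directory are copied verbatim from the log, and the multi-word publisher is passed as a single argument:

    # Sketch of the processutils.execute() call behind the two CMD lines above.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpowdgiq3q')  # nova's temporary config-drive staging dir
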
Feb 25 12:26:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.599 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:26:55 compute-0 podman[292694]: 2026-02-25 12:26:55.605027062 +0000 UTC m=+0.199305101 container init b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.605 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:55 compute-0 podman[292694]: 2026-02-25 12:26:55.614930063 +0000 UTC m=+0.209208022 container start b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:26:55 compute-0 podman[292694]: 2026-02-25 12:26:55.620171731 +0000 UTC m=+0.214449780 container attach b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.759 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.761 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deleting local config drive /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config because it was imported into RBD.
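
Before the import, nova probes for an existing vms/<uuid>_disk.config image (the two "does not exist" DEBUG lines from rbd_utils.py:80), then shells out to rbd import and removes the local ISO. A minimal sketch of the same existence probe through the python-rbd binding; the pool, client id, and image name follow the log, the rest is assumed:

    # Sketch: the "rbd image ... does not exist" probe, done with python-rbd.
    import rados
    import rbd

    NAME = '0612ec20-e725-4516-a153-f1fe9be74f75_disk.config'
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            rbd.Image(ioctx, NAME, read_only=True).close()
            print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # the rbd_utils.py:80 case above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
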
Feb 25 12:26:55 compute-0 NetworkManager[49836]: <info>  [1772022415.8069] manager: (tap5951ca77-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Feb 25 12:26:55 compute-0 kernel: tap5951ca77-9d: entered promiscuous mode
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:55 compute-0 ovn_controller[147040]: 2026-02-25T12:26:55Z|00478|binding|INFO|Claiming lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 for this chassis.
Feb 25 12:26:55 compute-0 ovn_controller[147040]: 2026-02-25T12:26:55Z|00479|binding|INFO|5951ca77-9d6a-41db-aaf8-3508d21701f3: Claiming fa:16:3e:a7:90:29 10.100.0.11
Feb 25 12:26:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 252 op/s
Feb 25 12:26:55 compute-0 ovn_controller[147040]: 2026-02-25T12:26:55Z|00480|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 ovn-installed in OVS
Feb 25 12:26:55 compute-0 ovn_controller[147040]: 2026-02-25T12:26:55Z|00481|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 up in Southbound
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:55 compute-0 systemd-udevd[292776]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.834 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.839 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.841 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:26:55 compute-0 NetworkManager[49836]: <info>  [1772022415.8490] device (tap5951ca77-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:55 compute-0 NetworkManager[49836]: <info>  [1772022415.8496] device (tap5951ca77-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.853 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04d92c6a-1538-4ce0-a72e-04e029b42001]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:55 compute-0 systemd-machined[210048]: New machine qemu-62-instance-00000039.
Feb 25 12:26:55 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000039.
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.878 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b12e7f52-c358-44fc-b3cb-1c2c17b56878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.880 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ddece-2f7c-4fd4-8021-0dadbb12e837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[366ba607-3e13-485b-bc5c-6cf7e2cb5969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a61a8f4-cc80-4456-b25f-40c0e0b0295c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292793, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3401a1-348e-42ea-a6b4-aff76786fa2d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292795, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292795, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
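
The two privsep replies above are pyroute2 netlink dumps taken inside the metadata namespace (note target 'ovnmeta-64c22162-...' in each header): a veth leg tap64c22162-71 that is up, carrying 10.100.0.2/28 plus the 169.254.169.254 metadata address. A minimal sketch of the same inspection with pyroute2 directly, assuming root on this host and that the namespace still exists:

    # Sketch: reproduce the RTM_NEWLINK / RTM_NEWADDR dumps above by entering
    # the OVN metadata namespace with pyroute2.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'), addr.get_attr('IFA_ADDRESS'))
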
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:55 compute-0 nova_compute[244014]: 2026-02-25 12:26:55.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.934 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.934 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.141 244018 DEBUG nova.compute.manager [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG nova.compute.manager [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Processing event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:26:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb 25 12:26:56 compute-0 lvm[292853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:26:56 compute-0 lvm[292853]: VG ceph_vg0 finished
Feb 25 12:26:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb 25 12:26:56 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.239775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416239836, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2285, "num_deletes": 263, "total_data_size": 3230759, "memory_usage": 3291224, "flush_reason": "Manual Compaction"}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 25 12:26:56 compute-0 lvm[292855]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:26:56 compute-0 lvm[292855]: VG ceph_vg1 finished
Feb 25 12:26:56 compute-0 lvm[292857]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:26:56 compute-0 lvm[292857]: VG ceph_vg2 finished
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416263935, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3178108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25854, "largest_seqno": 28138, "table_properties": {"data_size": 3167750, "index_size": 6532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22830, "raw_average_key_size": 21, "raw_value_size": 3146507, "raw_average_value_size": 2916, "num_data_blocks": 284, "num_entries": 1079, "num_filter_entries": 1079, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022244, "oldest_key_time": 1772022244, "file_creation_time": 1772022416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24228 microseconds, and 8930 cpu microseconds.
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.264000) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3178108 bytes OK
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.264028) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267041) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267066) EVENT_LOG_v1 {"time_micros": 1772022416267058, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3220933, prev total WAL file size 3220933, number of live WAL files 2.
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3103KB)], [59(6823KB)]
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416267989, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10165305, "oldest_snapshot_seqno": -1}
Feb 25 12:26:56 compute-0 lvm[292859]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:26:56 compute-0 lvm[292859]: VG ceph_vg2 finished
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5391 keys, 8527064 bytes, temperature: kUnknown
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416311852, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8527064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8489836, "index_size": 22654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 133973, "raw_average_key_size": 24, "raw_value_size": 8391779, "raw_average_value_size": 1556, "num_data_blocks": 929, "num_entries": 5391, "num_filter_entries": 5391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.312035) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8527064 bytes
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.313816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.5 rd, 194.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5923, records dropped: 532 output_compression: NoCompression
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.313833) EVENT_LOG_v1 {"time_micros": 1772022416313823, "job": 32, "event": "compaction_finished", "compaction_time_micros": 43917, "compaction_time_cpu_micros": 13040, "output_level": 6, "num_output_files": 1, "total_output_size": 8527064, "num_input_records": 5923, "num_output_records": 5391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416314178, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416315001, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:26:56 compute-0 affectionate_fermi[292713]: {}
Feb 25 12:26:56 compute-0 systemd[1]: libpod-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Deactivated successfully.
Feb 25 12:26:56 compute-0 systemd[1]: libpod-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Consumed 1.071s CPU time.
Feb 25 12:26:56 compute-0 podman[292694]: 2026-02-25 12:26:56.367569084 +0000 UTC m=+0.961847113 container died b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b-merged.mount: Deactivated successfully.
Feb 25 12:26:56 compute-0 podman[292694]: 2026-02-25 12:26:56.422457267 +0000 UTC m=+1.016735236 container remove b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:26:56 compute-0 systemd[1]: libpod-conmon-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Deactivated successfully.
Feb 25 12:26:56 compute-0 sudo[292596]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:26:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:26:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:56 compute-0 sudo[292913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:26:56 compute-0 sudo[292913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:26:56 compute-0 sudo[292913]: pam_unix(sudo:session): session closed for user root
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.566 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.566098, 0612ec20-e725-4516-a153-f1fe9be74f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.567 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Started (Lifecycle Event)
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.571 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.574 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.578 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance spawned successfully.
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.578 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.701 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.702 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.705 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.706 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.707 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.708 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.713 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.718 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.761 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.761 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.566317, 0612ec20-e725-4516-a153-f1fe9be74f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Paused (Lifecycle Event)
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.773 244018 INFO nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Took 7.36 seconds to spawn the instance on the hypervisor.
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.773 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.786 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.790 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.57372, 0612ec20-e725-4516-a153-f1fe9be74f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Resumed (Lifecycle Event)
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.816 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.822 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.840 244018 INFO nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Took 8.72 seconds to build instance.
Feb 25 12:26:56 compute-0 nova_compute[244014]: 2026-02-25 12:26:56.867 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.136 244018 DEBUG nova.network.neutron [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.152 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.154 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:57 compute-0 ceph-mon[76335]: pgmap v1314: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 252 op/s
Feb 25 12:26:57 compute-0 ceph-mon[76335]: osdmap e202: 3 total, 3 up, 3 in
Feb 25 12:26:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:26:57 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:26:57 compute-0 NetworkManager[49836]: <info>  [1772022417.3348] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:26:57 compute-0 ovn_controller[147040]: 2026-02-25T12:26:57Z|00482|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:26:57 compute-0 ovn_controller[147040]: 2026-02-25T12:26:57Z|00483|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:26:57 compute-0 ovn_controller[147040]: 2026-02-25T12:26:57Z|00484|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.354 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.359 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[681f618f-a409-4177-88f6-f5580c5e3058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.362 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:26:57 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:26:57 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Consumed 12.252s CPU time.
Feb 25 12:26:57 compute-0 systemd-machined[210048]: Machine qemu-58-instance-00000035 terminated.
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : haproxy version is 2.8.14-c23fe91
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : path to executable is /usr/sbin/haproxy
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : Exiting Master process...
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : Exiting Master process...
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [ALERT]    (291038) : Current worker (291045) exited with code 143 (Terminated)
Feb 25 12:26:57 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : All workers exited. Exiting... (0)
Feb 25 12:26:57 compute-0 systemd[1]: libpod-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope: Deactivated successfully.
Feb 25 12:26:57 compute-0 podman[292964]: 2026-02-25 12:26:57.518669811 +0000 UTC m=+0.053750192 container died 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.528 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.529 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225-userdata-shm.mount: Deactivated successfully.
Feb 25 12:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-669065cfe9b355ae4db308310ad10b864c500e39dbf8d8abf3254990187da5c4-merged.mount: Deactivated successfully.
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.550 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.551 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.552 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.552 244018 DEBUG os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.554 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 podman[292964]: 2026-02-25 12:26:57.558332744 +0000 UTC m=+0.093413105 container cleanup 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.560 244018 INFO os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.566 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.570 244018 WARNING nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.576 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.576 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.579 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:26:57 compute-0 systemd[1]: libpod-conmon-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope: Deactivated successfully.
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.583 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:57 compute-0 podman[293004]: 2026-02-25 12:26:57.623426196 +0000 UTC m=+0.037659647 container remove 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.625 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a25b50d-6e95-485e-a186-a970ddcadfb4]: (4, ('Wed Feb 25 12:26:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225)\n0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225\nWed Feb 25 12:26:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225)\n0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.642 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb4918b-ad99-4019-8ea6-a004b6f84989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.643 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:57 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6553f18d-51bb-44be-b48f-74527c5754a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.660 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f17db2a3-390d-4bc2-90f2-5fcafa22658f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35e413c9-f5c2-4c1f-a3cd-c15e43205153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14779900-4fd7-4e32-a691-fa313ad19492]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436375, 'reachable_time': 26540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293020, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.679 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:26:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.680 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[64aadbeb-a3c0-4676-b371-4bc15a8b3b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.897 244018 DEBUG nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.897 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.898 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.898 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.899 244018 DEBUG nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:57 compute-0 nova_compute[244014]: 2026-02-25 12:26:57.899 244018 WARNING nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.155 244018 INFO nova.compute.manager [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Pausing
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.156 244018 DEBUG nova.objects.instance [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647707096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.173 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:58 compute-0 ovn_controller[147040]: 2026-02-25T12:26:58Z|00485|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:26:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3647707096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.232 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.268 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022418.26787, 0612ec20-e725-4516-a153-f1fe9be74f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.268 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Paused (Lifecycle Event)
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.271 244018 DEBUG nova.compute.manager [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.304 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.306 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.352 244018 DEBUG nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.352 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.353 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.353 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.354 244018 DEBUG nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.354 244018 WARNING nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state active and task_state pausing.
Feb 25 12:26:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:26:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124443606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.774 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.777 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.778 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.780 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.782 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.814 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <name>instance-00000035</name>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:26:57</nova:creationTime>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 12:26:58 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <system>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </system>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <os>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </os>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <features>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </features>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:26:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <target dev="tapee46268d-74"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <video>
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </video>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:26:58 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:26:58 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:26:58 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:26:58 compute-0 nova_compute[244014]: </domain>
Feb 25 12:26:58 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.816 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.816 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.818 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.818 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.820 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.820 244018 DEBUG os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.822 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.827 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:58 compute-0 NetworkManager[49836]: <info>  [1772022418.8315] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.840 244018 INFO os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:26:58 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:26:58 compute-0 NetworkManager[49836]: <info>  [1772022418.9303] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Feb 25 12:26:58 compute-0 ovn_controller[147040]: 2026-02-25T12:26:58Z|00486|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:26:58 compute-0 ovn_controller[147040]: 2026-02-25T12:26:58Z|00487|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.939 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.944 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:58 compute-0 ovn_controller[147040]: 2026-02-25T12:26:58Z|00488|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:26:58 compute-0 ovn_controller[147040]: 2026-02-25T12:26:58Z|00489|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 12:26:58 compute-0 nova_compute[244014]: 2026-02-25 12:26:58.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.956 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5474b0-588f-452c-9318-6ee281fa178f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.957 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.959 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4445bed-7088-436d-8327-f9c9a1482a88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[304831df-7a4c-4a16-91cd-cb9652e82a7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:58 compute-0 systemd-machined[210048]: New machine qemu-63-instance-00000035.
Feb 25 12:26:58 compute-0 systemd-udevd[293098]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.974 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[59cc087c-d0d7-4882-a81d-c27dc5490a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:58 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000035.
Feb 25 12:26:58 compute-0 NetworkManager[49836]: <info>  [1772022418.9867] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:26:58 compute-0 NetworkManager[49836]: <info>  [1772022418.9884] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:26:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[959419a3-7223-4628-baa6-90f4353411e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.018 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ad794d-351b-4e45-8409-8ca9834ee23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 systemd-udevd[293102]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:26:59 compute-0 NetworkManager[49836]: <info>  [1772022419.0270] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.028 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22c71db0-d241-4c86-ab23-063e17ca4fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.055 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[31a7eef7-9a2e-4264-b31b-8cb995560cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.059 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8f0af2-b3cb-40c9-893b-a22f5f2afb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 NetworkManager[49836]: <info>  [1772022419.0810] device (tapce318891-c0): carrier: link connected
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.087 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5714778-368e-4189-b05d-c09028e77893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.100 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd42f1ca-2221-42bd-a484-c19667f7aeb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438848, 'reachable_time': 24499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293130, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.110 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bae719af-088c-437c-9c8e-ba013c6acb28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438848, 'tstamp': 438848}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293131, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[393926f5-dc64-4db9-ba2a-0a6a58f14494]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438848, 'reachable_time': 24499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293132, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9628a36-f67f-43a0-8bcb-4c6b0ab44d3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.204 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc3e8ed-ab9c-44ba-918b-0fa708c7ebd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.206 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.206 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.207 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:59 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:26:59 compute-0 NetworkManager[49836]: <info>  [1772022419.2129] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.213 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:26:59 compute-0 ovn_controller[147040]: 2026-02-25T12:26:59Z|00490|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.234 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2eaff5-15de-43e1-8c45-61a94f47875c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.237 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:26:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:26:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.238 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
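The config dump and rootwrap invocation above show how the agent spawns a per-network haproxy inside the ovnmeta- namespace. A sketch of reproducing the launch by hand (root required; rootwrap and the PROCESS_TAG wrapper are dropped, paths copied from the log):

    # Re-run the logged haproxy launch manually inside the metadata netns.
    import subprocess

    netns = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'ce318891-cf3c-4d99-af7c-c01770f38194.conf')
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)
    # The config binds 169.254.169.254:80, so the proxy answers only from
    # inside the namespace:
    subprocess.run(['ip', 'netns', 'exec', netns,
                    'curl', '-s', 'http://169.254.169.254/'], check=False)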
Feb 25 12:26:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb 25 12:26:59 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb 25 12:26:59 compute-0 ceph-mon[76335]: pgmap v1316: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb 25 12:26:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3124443606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022419.592792, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.598 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.602 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.603 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:59 compute-0 podman[293205]: 2026-02-25 12:26:59.61466364 +0000 UTC m=+0.060286017 container create 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.644 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:59 compute-0 systemd[1]: Started libpod-conmon-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope.
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.668 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022419.5975893, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.668 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.677 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
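The lock lines bracketing the reboot come from oslo.concurrency: nova serializes per-instance operations on the instance UUID, and lockutils logs the acquire/release at DEBUG (held 4.389s here, the whole hard reboot). A minimal sketch of the pattern; the locked body is a hypothetical stand-in:

    # Per-instance serialization with oslo.concurrency; lockutils emits the
    # "Acquiring/acquired/released" DEBUG lines seen above.
    from oslo_concurrency import lockutils

    def do_reboot_instance():
        pass  # stand-in for the work serialized under the instance lock

    with lockutils.lock('8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'):
        do_reboot_instance()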
Feb 25 12:26:59 compute-0 podman[293205]: 2026-02-25 12:26:59.581825281 +0000 UTC m=+0.027447668 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:26:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba61f7f2d26200ebeb270f5b7762f03e0ef58df99ed676891fb1a63a7baeb70/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.693 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:26:59 compute-0 nova_compute[244014]: 2026-02-25 12:26:59.702 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:26:59 compute-0 podman[293205]: 2026-02-25 12:26:59.708327481 +0000 UTC m=+0.153949908 container init 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:26:59 compute-0 podman[293205]: 2026-02-25 12:26:59.716736499 +0000 UTC m=+0.162358896 container start 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:26:59 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : New worker (293228) forked
Feb 25 12:26:59 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : Loading success.
Feb 25 12:26:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 2.5 MiB/s wr, 165 op/s
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.019 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.019 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.020 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.021 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.021 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.022 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.022 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.023 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.023 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.025 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.025 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
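The repeated "No waiting events found ... Received unexpected event" pairs mean neutron delivered network-vif-plugged after the hard reboot had already finished waiting, so the per-instance event registry had no waiter left to pop; the event is harmless and only warned about. An illustrative sketch of that registry pattern (names are invented, not nova's actual classes):

    # Illustrative waiter registry behind "No waiting events found".
    import threading

    class EventRegistry:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            # Registered before the operation that expects the event.
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev

        def pop(self, uuid, name):
            # Called when the external event arrives from neutron.
            ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                return False  # nothing waiting -> "unexpected event" WARNING
            ev.set()  # wakes the thread blocked on ev.wait(timeout)
            return True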
Feb 25 12:27:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:00 compute-0 ceph-mon[76335]: osdmap e203: 3 total, 3 up, 3 in
Feb 25 12:27:00 compute-0 nova_compute[244014]: 2026-02-25 12:27:00.994 244018 INFO nova.compute.manager [None req-cca10bbf-0f06-4916-959d-8382b9e0e3a0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Get console output
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.002 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
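"can't concat NoneType to bytes" is the usual symptom of accumulating pty reads into a bytes buffer when one chunk arrives as None instead of b'' (e.g. from a wrapper that returns None on EAGAIN). A defensive sketch of such a read loop; this illustrates the failure mode and its guard, not nova's exact code:

    # Why "can't concat NoneType to bytes" appears, and the guard against it.
    import errno, os

    def read_console(fd, limit=65536):
        data = b''  # seeding this with None reproduces the TypeError
        while len(data) < limit:
            try:
                chunk = os.read(fd, 4096)  # b'' at EOF, never None
            except OSError as e:
                if e.errno == errno.EAGAIN:
                    break
                raise
            if not chunk:  # EOF; also guards a None from a wrapper
                break
            data += chunk
        return data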
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.079 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.079 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.080 244018 INFO nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Shelving
Feb 25 12:27:01 compute-0 kernel: tap5951ca77-9d (unregistering): left promiscuous mode
Feb 25 12:27:01 compute-0 NetworkManager[49836]: <info>  [1772022421.1364] device (tap5951ca77-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00491|binding|INFO|Releasing lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 from this chassis (sb_readonly=0)
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00492|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 down in Southbound
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00493|binding|INFO|Removing iface tap5951ca77-9d ovn-installed in OVS
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.156 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.156 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.159 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.162 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.173 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0350793-529c-4f88-bca9-015d4c9ced6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000039.scope: Deactivated successfully.
Feb 25 12:27:01 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000039.scope: Consumed 2.383s CPU time.
Feb 25 12:27:01 compute-0 systemd-machined[210048]: Machine qemu-62-instance-00000039 terminated.
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.199 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a565ab83-2cd4-46b7-9f20-2b0c9ae7189a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.203 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ccca2450-6aaa-47f6-a231-a2b090b94875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 podman[293239]: 2026-02-25 12:27:01.215449325 +0000 UTC m=+0.057866669 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.43.0)
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.225 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[293e1400-6799-4d76-8f55-3597e0688b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 podman[293242]: 2026-02-25 12:27:01.253598065 +0000 UTC m=+0.087115127 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
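Both health_status=healthy records come from podman's periodic healthcheck running the /openstack/healthcheck script mounted into each container. The same check can be run on demand; container names are taken from the log:

    # Trigger the same container health checks by hand.
    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')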
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2514a43f-7452-4db2-8ea6-e5869bd80aff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293283, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ceph-mon[76335]: pgmap v1318: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 2.5 MiB/s wr, 165 op/s
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c12bbe1c-1fd8-4801-87a7-b7022233cb9f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293291, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293291, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
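The two RTM_NEWADDR replies above confirm that tap64c22162-71 inside the ovnmeta- namespace carries both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32. A sketch of the same query made directly with pyroute2 (run as root; namespace and interface names copied from the log):

    # List the addresses the privsep reply reported, via pyroute2.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006') as ns:
        idx = ns.link_lookup(ifname='tap64c22162-71')[0]
        for addr in ns.get_addr(index=idx):
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])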
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.279 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.286 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 kernel: tap5951ca77-9d: entered promiscuous mode
Feb 25 12:27:01 compute-0 NetworkManager[49836]: <info>  [1772022421.3146] manager: (tap5951ca77-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Feb 25 12:27:01 compute-0 kernel: tap5951ca77-9d (unregistering): left promiscuous mode
Feb 25 12:27:01 compute-0 systemd-udevd[293117]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00494|binding|INFO|Claiming lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 for this chassis.
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00495|binding|INFO|5951ca77-9d6a-41db-aaf8-3508d21701f3: Claiming fa:16:3e:a7:90:29 10.100.0.11
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00496|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 ovn-installed in OVS
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00497|if_status|INFO|Dropped 2 log messages in last 214 seconds (most recently, 214 seconds ago) due to excessive rate
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00498|if_status|INFO|Not setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 down as sb is readonly
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_controller[147040]: 2026-02-25T12:27:01Z|00499|binding|INFO|Releasing lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 from this chassis (sb_readonly=0)
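The claim/release churn on lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 (claimed when the tap briefly reappears, released again as the shelve proceeds) can be watched from the southbound database. A sketch, assuming ovn-sbctl on this host can reach the SB DB (e.g. via the OVN_SB_DB environment variable):

    # Inspect the lport's southbound binding while it flaps.
    import subprocess

    lport = '5951ca77-9d6a-41db-aaf8-3508d21701f3'
    subprocess.run(['ovn-sbctl', '--columns=chassis,up',
                    'find', 'Port_Binding', f'logical_port={lport}'],
                   check=True)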
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.338 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.339 244018 DEBUG nova.objects.instance [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.341 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.350 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.353 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.369 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[167ed9a0-3c07-489e-983d-22a9203ba8ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.386 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9547e70-d821-4d70-9908-e9bf6987d9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.390 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e54c359-f103-4840-83a0-2b5196642fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.401 244018 DEBUG nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.401 244018 WARNING nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state paused and task_state shelving.
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c590edf-4e52-4f40-8dd2-679e88b5329c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3c611a27-489f-4947-a2fe-be706a3e2e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293302, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f526e6f9-a4eb-4752-b9f9-e9540312028c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293303, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293303, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.454 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.454 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.457 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.459 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5240f9f-3afd-4490-8d53-16d5326b72ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.488 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2621ab86-97ef-424e-93ee-92cbb23a8890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.491 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bda54c9b-f334-469b-afb8-0b4b601e9f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.511 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4264fac6-9673-4559-988e-0a150a118e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06242db4-2e80-4139-a5d2-b12ed82af825]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293310, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc17ada6-10c1-4290-bc9c-236abf9f16ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.545 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.545 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 59 KiB/s wr, 118 op/s
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.881 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:27:01 compute-0 nova_compute[244014]: 2026-02-25 12:27:01.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.001 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Beginning cold snapshot process
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.072 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.073 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.159 244018 INFO nova.compute.manager [None req-d1df27ec-1aa2-4e7c-bc7b-cc612375e94e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Get console output
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.168 244018 DEBUG nova.virt.libvirt.imagebackend [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.174 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:27:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 25 12:27:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb 25 12:27:02 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.376 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(e1535f1184504ef7b5b0e4adb03eb1de) on rbd image(0612ec20-e725-4516-a153-f1fe9be74f75_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.697 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022407.6968205, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.698 244018 INFO nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Stopped (Lifecycle Event)
Feb 25 12:27:02 compute-0 nova_compute[244014]: 2026-02-25 12:27:02.732 244018 DEBUG nova.compute.manager [None req-427c5340-36ca-4f1c-8de7-5c223952a13d - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 25 12:27:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb 25 12:27:03 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb 25 12:27:03 compute-0 ceph-mon[76335]: pgmap v1319: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 59 KiB/s wr, 118 op/s
Feb 25 12:27:03 compute-0 ceph-mon[76335]: osdmap e204: 3 total, 3 up, 3 in
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.376 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk@e1535f1184504ef7b5b0e4adb03eb1de to images/bac7e237-2902-4866-9b21-65c1a5c9d2ec clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.493 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/bac7e237-2902-4866-9b21-65c1a5c9d2ec flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.706 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.719 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(e1535f1184504ef7b5b0e4adb03eb1de) on rbd image(0612ec20-e725-4516-a153-f1fe9be74f75_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.733 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.734 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.778 244018 DEBUG nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.778 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.780 244018 WARNING nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state paused and task_state shelving_image_uploading.
Feb 25 12:27:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 3.3 KiB/s wr, 300 op/s
Feb 25 12:27:03 compute-0 nova_compute[244014]: 2026-02-25 12:27:03.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 25 12:27:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb 25 12:27:04 compute-0 ceph-mon[76335]: osdmap e205: 3 total, 3 up, 3 in
Feb 25 12:27:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb 25 12:27:04 compute-0 nova_compute[244014]: 2026-02-25 12:27:04.346 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(bac7e237-2902-4866-9b21-65c1a5c9d2ec) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:04 compute-0 nova_compute[244014]: 2026-02-25 12:27:04.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 25 12:27:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb 25 12:27:05 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb 25 12:27:05 compute-0 ceph-mon[76335]: pgmap v1322: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 3.3 KiB/s wr, 300 op/s
Feb 25 12:27:05 compute-0 ceph-mon[76335]: osdmap e206: 3 total, 3 up, 3 in
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.664 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.665 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.687 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.805 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.806 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.812 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:27:05 compute-0 nova_compute[244014]: 2026-02-25 12:27:05.813 244018 INFO nova.compute.claims [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:27:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.7 KiB/s wr, 289 op/s
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.000 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:06 compute-0 ceph-mon[76335]: osdmap e207: 3 total, 3 up, 3 in
Feb 25 12:27:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604082407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.615 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.625 244018 DEBUG nova.compute.provider_tree [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.664 244018 DEBUG nova.scheduler.client.report [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.699 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.700 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.763 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.764 244018 DEBUG nova.network.neutron [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.787 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:27:06 compute-0 nova_compute[244014]: 2026-02-25 12:27:06.818 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.119 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.122 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.122 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating image(s)
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.160 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.195 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.230 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.235 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.271 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Snapshot image upload complete
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.272 244018 DEBUG nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.278 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.279 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.313 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.324 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.324 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.325 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.325 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:07 compute-0 ceph-mon[76335]: pgmap v1325: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.7 KiB/s wr, 289 op/s
Feb 25 12:27:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1604082407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.351 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.356 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.397 244018 INFO nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Shelve offloading
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.410 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.410 244018 DEBUG nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG nova.network.neutron [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.545 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.547 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.547 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.555 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.557 244018 DEBUG nova.objects.instance [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.582 244018 DEBUG nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.627 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.628 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.637 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.638 244018 INFO nova.compute.claims [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.646 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.683 244018 DEBUG nova.network.neutron [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.684 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.731 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] resizing rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:27:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.844 244018 DEBUG nova.objects.instance [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'migration_context' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.860 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.860 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Ensure instance console log exists: /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.863 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.867 244018 WARNING nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.871 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.871 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
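
[Aside: the two probes above (cgroup v1 controller missing, v2 controller found) amount to reading the unified hierarchy's controller list. A minimal stand-alone check, assuming the standard cgroup-v2 mount point — this is an illustration, not nova's actual code:

    # Hypothetical minimal check, not nova's implementation: cgroup v2 lists
    # its enabled controllers in a single file under the unified mount point.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)  # True on this host, per "CPU controller found"
]
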
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.879 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
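
[Aside: for this 1-vCPU m1.nano flavor with no topology constraints (preferred 0:0:0, limits 65536 each), only one sockets*cores*threads factorization exists, which is why a single possible topology is reported. A simplified sketch of the enumeration — illustrative only, not nova's actual implementation:

    # Simplified, illustrative version of the topology enumeration above:
    # every (sockets, cores, threads) triple whose product equals the vCPU
    # count and respects the limits is a candidate.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
]
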
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.881 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.915 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:07 compute-0 nova_compute[244014]: 2026-02-25 12:27:07.980 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725141415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.492 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.525 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.530 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077633717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.582 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
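
[Aside: the "Running cmd (subprocess)" / "CMD ... returned" pairs are emitted by oslo.concurrency's processutils.execute, which nova shells out through. A minimal reproduction of the logged ceph df call, assuming a reachable cluster and valid client.openstack credentials in /etc/ceph/ceph.conf:

    # Reproduces the logged "ceph df" invocation with the same helper nova
    # uses; execute() returns (stdout, stderr) and raises on nonzero exit.
    import json
    from oslo_concurrency import processutils

    stdout, _ = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(stdout)['stats'])  # cluster-wide byte totals
]
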
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.589 244018 DEBUG nova.compute.provider_tree [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.607 244018 DEBUG nova.scheduler.client.report [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
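
[Aside: placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the numbers above work out as follows:

    # Capacity math for the inventory logged above (standard placement formula).
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
]
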
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.628 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.629 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.740 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.740 244018 DEBUG nova.network.neutron [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.850 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.936 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.973 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.974 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.974 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
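
[Aside: the acquire/acquired/released triples here (and around "vgpu_resources" earlier) are the standard trace from oslo.concurrency's synchronized decorator. A minimal sketch — 'do_work' is a hypothetical function, but the decorator and its DEBUG messages are oslo.concurrency's:

    # Sketch of the locking pattern behind these log lines: the wrapper logs
    # "acquired ... waited" on entry and "released ... held" on exit.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def do_work():
        pass  # runs with the named in-process semaphore held

    do_work()
]
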
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.975 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:27:08 compute-0 nova_compute[244014]: 2026-02-25 12:27:08.975 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139178603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.059 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.063 244018 DEBUG nova.objects.instance [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'pci_devices' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.113 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <uuid>160a4e3f-b197-4b82-a2ff-cebf79df47df</uuid>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <name>instance-0000003a</name>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1856460464</nova:name>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:07</nova:creationTime>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:user uuid="29db18fbf1f1410384934731b9c53cb5">tempest-ListImageFiltersTestJSON-1391082587-project-member</nova:user>
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <nova:project uuid="ddbe63406adc44468e143b66e5b1c207">tempest-ListImageFiltersTestJSON-1391082587</nova:project>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="serial">160a4e3f-b197-4b82-a2ff-cebf79df47df</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="uuid">160a4e3f-b197-4b82-a2ff-cebf79df47df</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk">
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config">
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/console.log" append="off"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:09 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:09 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:09 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:09 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:09 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
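
[Aside: the XML dumped above is the full libvirt domain definition nova hands to the hypervisor. For picking details back out of such a dump, the stdlib is enough; 'domain.xml' below is a hypothetical file holding just the <domain> element:

    # Extract the RBD-backed disks from a saved copy of the domain XML above.
    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()
    for disk in root.findall('./devices/disk'):
        source = disk.find('source')
        if source is not None and source.get('protocol') == 'rbd':
            print(disk.get('device'), source.get('name'))
    # disk vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk
    # cdrom vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config
]
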
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.115 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.116 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.117 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating image(s)
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.137 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.163 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.191 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.193 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.281 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.281 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.282 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.283 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.302 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.306 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.336 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.337 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.338 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Using config drive
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.357 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 ceph-mon[76335]: pgmap v1326: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 12:27:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/725141415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4077633717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3139178603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933452317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.575 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.604 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.639 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] resizing rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
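
[Aside: the import-then-resize sequence seeds the Ceph-backed root disk from the cached base image, then grows it to the flavor's 1 GiB root_gb. The resize step, roughly as nova.storage.rbd_utils performs it via the python-rbd binding — a sketch assuming python3-rados/python3-rbd and the client.openstack keyring are available:

    # Roughly the logged resize, using the binding rbd_utils wraps.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk') as image:
            image.resize(1073741824)  # 1 GiB, matching root_gb=1
    finally:
        ioctx.close()
        cluster.shutdown()
]
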
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.684 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating config drive at /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.688 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1s80scfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.779 244018 DEBUG nova.objects.instance [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'migration_context' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.793 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.794 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Ensure instance console log exists: /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.798 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.798 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.806 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.836 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1s80scfm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.861 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.865 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.894 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.895 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.899 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.900 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.903 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:09 compute-0 nova_compute[244014]: 2026-02-25 12:27:09.903 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.051 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.051 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deleting local config drive /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config because it was imported into RBD.
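
[Aside: end-to-end, the config drive is built locally with mkisofs (ISO9660, volume label config-2), imported into the vms pool, and the local copy removed. The build step, reproduced from the logged command; the staging directory name is illustrative, since nova uses a random tempdir holding the metadata:

    # Same mkisofs invocation as logged above, via the stdlib; assumes
    # /tmp/metadata-dir exists and contains the config-drive contents.
    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata-dir',
    ], check=True)
]
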
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.065 244018 DEBUG nova.network.neutron [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.066 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.068 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.084 244018 WARNING nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.089 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.089 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.095 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.095 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.096 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.096 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.099 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.102 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:10 compute-0 systemd-machined[210048]: New machine qemu-64-instance-0000003a.
Feb 25 12:27:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb 25 12:27:10 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-0000003a.
Feb 25 12:27:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb 25 12:27:10 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.196 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.197 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3387MB free_disk=59.827746075578034GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.197 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.198 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:10.240 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:10.240 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.305 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.305 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 43b8959e-9cf0-42ca-aa1f-8a380321c971 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0612ec20-e725-4516-a153-f1fe9be74f75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 160a4e3f-b197-4b82-a2ff-cebf79df47df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.307 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.307 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:27:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3933452317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:10 compute-0 ceph-mon[76335]: osdmap e208: 3 total, 3 up, 3 in
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.449 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.554 244018 DEBUG nova.network.neutron [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.580 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026256746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.633 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.671 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.684 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.710 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022430.6713681, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.711 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Resumed (Lifecycle Event)
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.715 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.715 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.719 244018 INFO nova.virt.libvirt.driver [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance spawned successfully.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.719 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.744 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.754 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022430.6724477, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Started (Lifecycle Event)
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.820 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.823 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.833 244018 INFO nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 3.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.833 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.843 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.886 244018 INFO nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 5.11 seconds to build instance.
Feb 25 12:27:10 compute-0 nova_compute[244014]: 2026-02-25 12:27:10.914 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415389268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.010 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.015 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.033 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.058 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.059 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226214579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.173 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.175 244018 DEBUG nova.objects.instance [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.193 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <uuid>bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</uuid>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <name>instance-0000003b</name>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:name>tempest-ListImageFiltersTestJSON-server-428345901</nova:name>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:10</nova:creationTime>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:user uuid="29db18fbf1f1410384934731b9c53cb5">tempest-ListImageFiltersTestJSON-1391082587-project-member</nova:user>
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <nova:project uuid="ddbe63406adc44468e143b66e5b1c207">tempest-ListImageFiltersTestJSON-1391082587</nova:project>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="serial">bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="uuid">bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk">
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config">
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/console.log" append="off"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:11 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:11 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:11 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:11 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:11 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.250 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.250 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.251 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Using config drive
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.290 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:11 compute-0 ceph-mon[76335]: pgmap v1327: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Feb 25 12:27:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3026256746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1415389268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1226214579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.748 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating config drive at /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.758 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95i13reb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 516 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 180 op/s
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.902 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95i13reb" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.926 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:11 compute-0 nova_compute[244014]: 2026-02-25 12:27:11.930 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.055 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.089 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.089 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deleting local config drive /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config because it was imported into RBD.
Feb 25 12:27:12 compute-0 systemd-machined[210048]: New machine qemu-65-instance-0000003b.
Feb 25 12:27:12 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000003b.
Feb 25 12:27:12 compute-0 ovn_controller[147040]: 2026-02-25T12:27:12Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.607 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022432.606436, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.608 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Resumed (Lifecycle Event)
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.611 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.611 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.616 244018 INFO nova.virt.libvirt.driver [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance spawned successfully.
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.616 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.655 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.656 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.657 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.657 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.658 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.658 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.667 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022432.6116762, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Started (Lifecycle Event)
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.737 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.741 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.747 244018 INFO nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 3.63 seconds to spawn the instance on the hypervisor.
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.747 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.823 244018 INFO nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 5.42 seconds to build instance.
Feb 25 12:27:12 compute-0 nova_compute[244014]: 2026-02-25 12:27:12.845 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:13 compute-0 ceph-mon[76335]: pgmap v1329: 305 pgs: 305 active+clean; 516 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 180 op/s
Feb 25 12:27:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.5 MiB/s wr, 313 op/s
Feb 25 12:27:13 compute-0 nova_compute[244014]: 2026-02-25 12:27:13.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:13 compute-0 nova_compute[244014]: 2026-02-25 12:27:13.975 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.
Feb 25 12:27:13 compute-0 nova_compute[244014]: 2026-02-25 12:27:13.976 244018 DEBUG nova.objects.instance [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:14 compute-0 nova_compute[244014]: 2026-02-25 12:27:14.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.024 244018 DEBUG nova.virt.libvirt.vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:07.272339',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='bac7e237-2902-4866-9b21-65c1a5c9d2ec'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:02Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.025 244018 DEBUG nova.network.os_vif_util [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.026 244018 DEBUG nova.network.os_vif_util [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.027 244018 DEBUG os_vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5951ca77-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
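The DelPortCommand above is ovsdbapp's Open vSwitch del_port command being committed in a transaction against the local switch database. A minimal sketch of the equivalent direct call, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock (the socket path and timeout are assumptions, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # connect the OVSDB IDL to the switch database
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # equivalent of the logged DelPortCommand(port=tap5951ca77-9d,
    # bridge=br-int, if_exists=True): drop the tap port, tolerating absence
    api.del_port('tap5951ca77-9d', bridge='br-int',
                 if_exists=True).execute(check_error=True)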
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.039 244018 INFO os_vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d')
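Entries 12:27:15.024 through 12:27:15.039 show the whole unplug path: nova converts its internal VIF dict to an os-vif object (nova_to_osvif_vif), then hands it to the os_vif library, which dispatches to the 'ovs' plugin. A minimal sketch of that library call; the uuid is the instance logged above, the name value is illustrative, and 'vif' stands for the VIFOpenVSwitch object produced by the conversion:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()   # loads the vif plugins (ovs, linux_bridge, ...) once
    info = InstanceInfo(uuid='0612ec20-e725-4516-a153-f1fe9be74f75',
                        name='instance-00000035')   # name is illustrative
    # 'vif' is the VIFOpenVSwitch shown in the "Converted object" entry above
    os_vif.unplug(vif, info)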
Feb 25 12:27:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:15.242 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.401 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deleting instance files /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75_del
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.402 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deletion of /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75_del complete
Feb 25 12:27:15 compute-0 ceph-mon[76335]: pgmap v1330: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.5 MiB/s wr, 313 op/s
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.526 244018 INFO nova.scheduler.client.report [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance 0612ec20-e725-4516-a153-f1fe9be74f75
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.584 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.585 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.725 244018 DEBUG oslo_concurrency.processutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.791 244018 DEBUG nova.compute.manager [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.792 244018 DEBUG nova.compute.manager [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing instance network info cache due to event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.793 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.793 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.794 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:27:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:15 compute-0 nova_compute[244014]: 2026-02-25 12:27:15.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
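The skip above means deferred delete is off: _reclaim_queued_deletes only purges SOFT_DELETED instances when reclaim_instance_interval is positive. A minimal nova.conf sketch (this host evidently runs the default of 0; the option name matches the CONF reference in the log):

    [DEFAULT]
    # > 0 turns deletes into soft deletes: instances are kept for this many
    # seconds and reclaimed by the periodic task logged above; <= 0 disables it
    reclaim_instance_interval = 0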
Feb 25 12:27:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956758993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.260 244018 DEBUG oslo_concurrency.processutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
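The 0.535s "ceph df" round trip bracketed by the two entries above is nova's RBD image backend polling pool capacity through oslo.concurrency. A minimal sketch of the same call, with the command line taken verbatim from the log:

    import json
    from oslo_concurrency import processutils

    # runs the exact command logged above; raises ProcessExecutionError
    # on a non-zero exit code, otherwise returns (stdout, stderr)
    out, _err = processutils.execute('ceph', 'df', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')
    pools = json.loads(out)['pools']   # pool usage feeds the DISK_GB inventory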
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.266 244018 DEBUG nova.compute.provider_tree [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.292 244018 DEBUG nova.scheduler.client.report [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
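For reference, placement derives usable capacity from an inventory record as (total - reserved) * allocation_ratio, so the unchanged inventory above corresponds to:

    # capacity implied by the logged inventory record
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(usable, 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2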
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.317 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
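The Acquiring/acquired/released triple around update_usage comes from oslo.concurrency's synchronized decorator; nova serializes all resource-tracker mutations behind the single "compute_resources" lock. A minimal sketch of the pattern (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage(tracker, instance):
        # runs with the named lock held; the decorator's inner wrapper emits
        # the "Acquiring"/"acquired"/"released" DEBUG lines seen above
        tracker.usage += instance.memory_mb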
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.332 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022421.332212, 0612ec20-e725-4516-a153-f1fe9be74f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.333 244018 INFO nova.compute.manager [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Stopped (Lifecycle Event)
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.371 244018 DEBUG nova.compute.manager [None req-583d0699-2ba5-4a25-97d9-21bf1daa1fb0 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.383 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3956758993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.543 244018 DEBUG nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.597 244018 INFO nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] instance snapshotting
Feb 25 12:27:16 compute-0 nova_compute[244014]: 2026-02-25 12:27:16.869 244018 INFO nova.virt.libvirt.driver [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Beginning live snapshot process
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.141 244018 DEBUG nova.virt.libvirt.imagebackend [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.350 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(a332b96ae3c14c26b3b29692ee7dfe37) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb 25 12:27:17 compute-0 ceph-mon[76335]: pgmap v1331: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Feb 25 12:27:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb 25 12:27:17 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.526 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk@a332b96ae3c14c26b3b29692ee7dfe37 to images/c955e791-f328-407c-9295-73bc5d2887e7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.608 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/c955e791-f328-407c-9295-73bc5d2887e7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.770 244018 DEBUG nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:27:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.4 MiB/s wr, 419 op/s
Feb 25 12:27:17 compute-0 nova_compute[244014]: 2026-02-25 12:27:17.981 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(a332b96ae3c14c26b3b29692ee7dfe37) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:27:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb 25 12:27:18 compute-0 ceph-mon[76335]: osdmap e209: 3 total, 3 up, 3 in
Feb 25 12:27:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb 25 12:27:18 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb 25 12:27:18 compute-0 nova_compute[244014]: 2026-02-25 12:27:18.563 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(c955e791-f328-407c-9295-73bc5d2887e7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
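Entries 12:27:17.350 through 12:27:18.563 record the RBD direct-snapshot path: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it no longer depends on its parent, drop the temporary snapshot, then snapshot the clone as 'snap' for the image service. A minimal sketch with the python-rbd bindings, using the pool and image names from the log; the protect/unprotect pair is only required under clone v1 semantics:

    import rados
    import rbd

    SNAP = 'a332b96ae3c14c26b3b29692ee7dfe37'
    SRC = '160a4e3f-b197-4b82-a2ff-cebf79df47df_disk'
    DST = 'c955e791-f328-407c-9295-73bc5d2887e7'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster, \
            cluster.open_ioctx('vms') as vms, \
            cluster.open_ioctx('images') as images:
        with rbd.Image(vms, SRC) as src:
            src.create_snap(SNAP)
            src.protect_snap(SNAP)        # needed before a v1 clone
        rbd.RBD().clone(vms, SRC, SNAP, images, DST)
        with rbd.Image(images, DST) as dst:
            dst.flatten()                 # copy blocks so the parent can go
            dst.create_snap('snap')       # the 'snap' created in the log entry
        with rbd.Image(vms, SRC) as src:
            src.unprotect_snap(SNAP)
            src.remove_snap(SNAP)         # matches the remove_snap entry above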
Feb 25 12:27:18 compute-0 nova_compute[244014]: 2026-02-25 12:27:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:18 compute-0 nova_compute[244014]: 2026-02-25 12:27:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:27:18 compute-0 nova_compute[244014]: 2026-02-25 12:27:18.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:27:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb 25 12:27:19 compute-0 ceph-mon[76335]: pgmap v1333: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.4 MiB/s wr, 419 op/s
Feb 25 12:27:19 compute-0 ceph-mon[76335]: osdmap e210: 3 total, 3 up, 3 in
Feb 25 12:27:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb 25 12:27:19 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb 25 12:27:19 compute-0 nova_compute[244014]: 2026-02-25 12:27:19.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 48 KiB/s wr, 261 op/s
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.101 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updated VIF entry in instance network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.101 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": null, "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap5951ca77-9d", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.123 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:20 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:27:20 compute-0 NetworkManager[49836]: <info>  [1772022440.1315] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 ovn_controller[147040]: 2026-02-25T12:27:20Z|00500|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:27:20 compute-0 ovn_controller[147040]: 2026-02-25T12:27:20Z|00501|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:27:20 compute-0 ovn_controller[147040]: 2026-02-25T12:27:20Z|00502|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.142 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.145 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.147 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e88eb5bc-da6a-468b-be68-725d95aca44b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.154 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:27:20 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:27:20 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000035.scope: Consumed 12.639s CPU time.
Feb 25 12:27:20 compute-0 systemd-machined[210048]: Machine qemu-63-instance-00000035 terminated.
Feb 25 12:27:20 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : haproxy version is 2.8.14-c23fe91
Feb 25 12:27:20 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : path to executable is /usr/sbin/haproxy
Feb 25 12:27:20 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [WARNING]  (293226) : Exiting Master process...
Feb 25 12:27:20 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [ALERT]    (293226) : Current worker (293228) exited with code 143 (Terminated)
Feb 25 12:27:20 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [WARNING]  (293226) : All workers exited. Exiting... (0)
Feb 25 12:27:20 compute-0 systemd[1]: libpod-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope: Deactivated successfully.
Feb 25 12:27:20 compute-0 conmon[293222]: conmon 3a7f482e2061e992266d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope/container/memory.events
Feb 25 12:27:20 compute-0 podman[294439]: 2026-02-25 12:27:20.283010669 +0000 UTC m=+0.045463868 container died 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a-userdata-shm.mount: Deactivated successfully.
Feb 25 12:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-eba61f7f2d26200ebeb270f5b7762f03e0ef58df99ed676891fb1a63a7baeb70-merged.mount: Deactivated successfully.
Feb 25 12:27:20 compute-0 podman[294439]: 2026-02-25 12:27:20.328835716 +0000 UTC m=+0.091288905 container cleanup 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:27:20 compute-0 systemd[1]: libpod-conmon-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope: Deactivated successfully.
Feb 25 12:27:20 compute-0 podman[294469]: 2026-02-25 12:27:20.390166231 +0000 UTC m=+0.044642514 container remove 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d0ee1a-69d1-48ca-b68e-b666d707df28]: (4, ('Wed Feb 25 12:27:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a)\n3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a\nWed Feb 25 12:27:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a)\n3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[750c4e91-5d50-4aa9-96f5-b8e65bbd4c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.397 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.417 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[906afe99-9bbf-43ed-b297-d8097d6a0a5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48a5da02-a959-43af-812b-c059dcf8adb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11a59933-1ce5-411c-94b4-9799b4f05362]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.443 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[65e35d28-cc93-4ff0-ab1c-89f67ae041c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438842, 'reachable_time': 35302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294496, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.447 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
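The namespace teardown ends in neutron's privileged ip_lib, which delegates to pyroute2; the systemd mount-unit entry that follows is the kernel unbinding /run/netns. A minimal sketch of that final step, using the namespace name from the log (the existence guard is added here for illustration):

    from pyroute2 import netns

    ns = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
    if ns in netns.listnetns():   # skip if already gone
        netns.remove(ns)          # unlinks the /run/netns entry for the namespace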
Feb 25 12:27:20 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:27:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.447 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1061c7-1489-4c62-a6b6-b9ccf076c50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:20 compute-0 ceph-mon[76335]: osdmap e211: 3 total, 3 up, 3 in
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.697 244018 DEBUG nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.699 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.699 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.700 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.700 244018 DEBUG nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.701 244018 WARNING nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state powering-off.
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.788 244018 INFO nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance shutdown successfully after 13 seconds.
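The entries at 12:27:17.770 and 12:27:20.788 are the two ends of nova's _clean_shutdown loop: request a guest shutdown, poll the domain state, and resend the request on an interval until the guest powers off or the timeout expires. A minimal sketch of that loop with the libvirt bindings; the domain name, timeout, and retry interval are illustrative (nova takes them from config and the API call):

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000039')   # illustrative domain name
    deadline = time.time() + 60                    # overall shutdown timeout
    while time.time() < deadline:
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:    # "shutdown successfully"
            break
        dom.shutdown()                             # resend the guest shutdown request
        time.sleep(5)                              # retry interval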
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.795 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.796 244018 DEBUG nova.objects.instance [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.817 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.867 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:27:20 compute-0 nova_compute[244014]: 2026-02-25 12:27:20.917 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:27:21 compute-0 nova_compute[244014]: 2026-02-25 12:27:21.065 244018 INFO nova.virt.libvirt.driver [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Snapshot image upload complete
Feb 25 12:27:21 compute-0 nova_compute[244014]: 2026-02-25 12:27:21.066 244018 INFO nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 4.47 seconds to snapshot the instance on the hypervisor.
Feb 25 12:27:21 compute-0 ceph-mon[76335]: pgmap v1336: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 48 KiB/s wr, 261 op/s
Feb 25 12:27:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 543 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 413 KiB/s wr, 291 op/s
Feb 25 12:27:22 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 12:27:23 compute-0 ceph-mon[76335]: pgmap v1337: 305 pgs: 305 active+clean; 543 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 413 KiB/s wr, 291 op/s
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.727 244018 DEBUG nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.728 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.730 244018 WARNING nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state None.
Feb 25 12:27:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 220 op/s
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.873 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG nova.network.neutron [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:23 compute-0 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'info_cache' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.030 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.031 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.050 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.128 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.129 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.137 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.137 244018 INFO nova.compute.claims [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.356 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.453 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.453 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.454 244018 INFO nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shelving
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.477 244018 DEBUG nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589183456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.914 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
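
The `ceph df` call is dispatched through oslo.concurrency's processutils, which produces the paired "Running cmd (subprocess)" and 'CMD "..." returned: 0 in 0.558s' lines above. A sketch of the same invocation, assuming the ceph CLI and the credentials shown in the log; the stats keys are standard `ceph df --format=json` output:

    import json
    from oslo_concurrency import processutils

    # processutils.execute() runs the command and logs the Running/returned pair.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
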
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.921 244018 DEBUG nova.compute.provider_tree [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.942 244018 DEBUG nova.scheduler.client.report [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
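
Placement derives schedulable capacity from that inventory as (total - reserved) × allocation_ratio per resource class, so the figures above work out to 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. A worked check:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
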
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.970 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:24 compute-0 nova_compute[244014]: 2026-02-25 12:27:24.972 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.026 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.027 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.053 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.072 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.080 244018 DEBUG nova.network.neutron [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.126 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb 25 12:27:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb 25 12:27:25 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.171 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.172 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.244 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.293 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.294 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.295 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.296 244018 DEBUG os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.300 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
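
The DelPortCommand above is an ovsdbapp transaction removing the tap interface from br-int, with if_exists=True so a missing port is not an error. A standalone sketch of the same call; the database socket path is an assumption, and nova/os-vif reuse a long-lived OVSDB connection rather than building one per call:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local OVSDB (socket path assumed) and drop the port.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    api.del_port('tapee46268d-74', bridge='br-int',
                 if_exists=True).execute(check_error=True)
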
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.305 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.307 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.308 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating image(s)
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.347 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.381 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.417 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.423 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.467 244018 DEBUG nova.policy [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9f1b75cc204c46bba5f57dbfecdde9ad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09adeeb80de1411fb55d81d407987bba', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.480 244018 INFO os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.488 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.492 244018 WARNING nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.512 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
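
The `/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30` wrapper in that command is what processutils emits when handed a ProcessLimits object: qemu-img probes untrusted image files, so its address space is capped at 1 GiB and its CPU time at 30 s. A sketch reproducing the logged call:

    import json
    from oslo_concurrency import processutils

    # Cap the child at 1 GiB of address space and 30 s of CPU time;
    # execute() inserts the "-m oslo_concurrency.prlimit" wrapper itself.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['virtual-size'])
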
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.513 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.513 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.514 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.535 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.540 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b2915349-3797-40de-a554-2de79463723b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.575 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.576 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.578 244018 DEBUG nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.581 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.582 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
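
Those two probes first look for a cgroup-v1 cpu controller, fail, and then find one on the cgroup-v2 unified hierarchy. A sketch of the v2 check under the standard mount point (not nova's exact code):

    # cgroup v2 lists the available controllers in a single flat file.
    def host_has_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy -> not a cgroup-v2 host

    print(host_has_cpu_controller())
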
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.583 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.583 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.585 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.585 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
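
The topology walk above enumerates (sockets, cores, threads) triples whose product equals the flavor's vCPU count, bounded by the 65536 per-dimension limits; with one vCPU the only candidate is 1:1:1. An illustrative version of that search (not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every triple whose product is exactly the vCPU count.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
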
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:25 compute-0 ceph-mon[76335]: pgmap v1338: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 220 op/s
Feb 25 12:27:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2589183456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:25 compute-0 ceph-mon[76335]: osdmap e212: 3 total, 3 up, 3 in
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.626 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.656 244018 INFO nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] instance snapshotting
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.823 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b2915349-3797-40de-a554-2de79463723b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 118 op/s
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.902 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] resizing rbd image b2915349-3797-40de-a554-2de79463723b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
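
Having imported the base image into the vms pool via the rbd CLI, nova grows it to the flavor's 1 GiB root disk through the python rbd binding. A self-contained sketch of that resize, assuming python-rados/python-rbd and the credentials from the log:

    import rados
    import rbd

    # Connect as client.openstack and resize the instance disk in pool "vms".
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'b2915349-3797-40de-a554-2de79463723b_disk') as image:
                image.resize(1073741824)  # 1 GiB
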
Feb 25 12:27:25 compute-0 nova_compute[244014]: 2026-02-25 12:27:25.989 244018 DEBUG nova.objects.instance [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'migration_context' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.041 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.042 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Ensure instance console log exists: /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699635394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.222 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.262 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.303 244018 INFO nova.virt.libvirt.driver [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Beginning live snapshot process
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.520 244018 DEBUG nova.virt.libvirt.imagebackend [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:27:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2699635394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249750884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:26 compute-0 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.828 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:26 compute-0 NetworkManager[49836]: <info>  [1772022446.8297] device (tapabdb97b5-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.830 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.831 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.832 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:26 compute-0 ovn_controller[147040]: 2026-02-25T12:27:26Z|00503|binding|INFO|Releasing lport abdb97b5-8e9d-4929-af6f-bfb06c067878 from this chassis (sb_readonly=0)
Feb 25 12:27:26 compute-0 ovn_controller[147040]: 2026-02-25T12:27:26Z|00504|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down in Southbound
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.833 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 ovn_controller[147040]: 2026-02-25T12:27:26Z|00505|binding|INFO|Removing iface tapabdb97b5-8e ovn-installed in OVS
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.865 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(a8b6c9ac7dad412996d3d497216ce479) on rbd image(bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
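
The live-snapshot path creates an RBD snapshot directly on the instance disk; the equivalent python-rbd call, with the image and snapshot names copied from the log line above:

    import rados
    import rbd

    # Snapshot the running instance's disk in pool "vms".
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk') as image:
                image.create_snap('a8b6c9ac7dad412996d3d497216ce479')
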
Feb 25 12:27:26 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Feb 25 12:27:26 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 16.659s CPU time.
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.882 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
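
That UPDATE match comes from an ovsdbapp RowEvent watching the southbound Port_Binding table; the constructor arguments printed in the log (events=('update',), table='Port_Binding') are exactly what such an event registers. A sketch of the shape of that handler (the run() body is illustrative, not the agent's code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Match any update to a Port_Binding row, as logged above.
            super().__init__(events=('update',), table='Port_Binding',
                             conditions=None, old_conditions=None)

        def run(self, event, row, old):
            # The metadata agent reacts here, e.g. reporting the port
            # "unbound from our chassis" when the chassis column empties.
            print('Port_Binding updated for lport', row.logical_port)
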
Feb 25 12:27:26 compute-0 systemd-machined[210048]: Machine qemu-52-instance-0000002f terminated.
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.885 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.888 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.903 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4dca787b-1924-4c95-bc3d-a900460ae08d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.907 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <name>instance-00000035</name>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:25</nova:creationTime>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 12:27:26 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:26 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <target dev="tapee46268d-74"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:26 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:26 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:26 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:26 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:26 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.910 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.910 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.912 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.912 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.914 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.914 244018 DEBUG os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.920 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.921 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 NetworkManager[49836]: <info>  [1772022446.9250] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:26 compute-0 nova_compute[244014]: 2026-02-25 12:27:26.933 244018 INFO os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.933 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[979cfdf8-9dc2-4c13-a321-a7413d265591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.938 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd382c7f-45bd-495b-8294-053a1c9c1f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.964 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c480a8be-0f6c-4408-8fe6-7957e67f68b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e555133-07a0-485b-99dc-ab4ffe172a8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294813, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0a30084d-7c7b-4040-bc45-22c63df1b424]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.005 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 systemd-udevd[294782]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.0162] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Feb 25 12:27:27 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00506|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00507|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.022 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.022 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.023 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.024 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.0311] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.0324] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00508|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.0474] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Feb 25 12:27:27 compute-0 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 12:27:27 compute-0 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00509|if_status|INFO|Not updating pb chassis for abdb97b5-8e9d-4929-af6f-bfb06c067878 now as sb is readonly
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00510|if_status|INFO|Dropped 2 log messages in last 26 seconds (most recently, 26 seconds ago) due to excessive rate
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00511|if_status|INFO|Not setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down as sb is readonly
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 systemd-machined[210048]: New machine qemu-66-instance-00000035.
Feb 25 12:27:27 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000035.
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00512|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.238 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.240 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.243 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e284922-15c2-4945-adaf-fd1e2a5d9d74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.255 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.258 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.258 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b5dc4f-927e-4016-a0f6-1e2120a612ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86afdd55-3fd9-4e29-90da-c994453b9ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.272 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[324b79c4-69b2-4119-a87b-721aa3c6e99b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.295 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d151f28-bc6a-425a-9473-920d89a62a8d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.322 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8ce840-5426-4156-8f9e-973d2be79b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.326 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Successfully created port: be60a13c-bf18-4eb6-ab12-d76c0abf9525 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[616682c2-d067-41a8-8e38-575f1894c579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.3323] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.365 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2609454a-3bde-44cd-a6c9-0c7154e11fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.369 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e777bd65-687f-4865-9a33-e37ef47abfaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.3922] device (tapce318891-c0): carrier: link connected
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.396 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8254e67-38d9-43c9-a0f0-e115dbace23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbd112a-edd3-4cd9-a1d6-3995b0946f7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441679, 'reachable_time': 17977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294870, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.434 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[084009a9-91c3-4927-91e1-9a359e9b62d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441679, 'tstamp': 441679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294871, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.451 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21c4a839-c1b7-43b8-877c-c7fc07544d62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441679, 'reachable_time': 17977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294872, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.474 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1823b4-71cf-4a92-b26b-685eb72ac1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.525 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5433787-1c51-49fc-ad41-6d5b06003872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.527 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.528 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.528 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.530 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance shutdown successfully after 3 seconds.
Feb 25 12:27:27 compute-0 NetworkManager[49836]: <info>  [1772022447.5324] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Feb 25 12:27:27 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.541 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 ovn_controller[147040]: 2026-02-25T12:27:27Z|00513|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.548 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.549 244018 DEBUG nova.objects.instance [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.566 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0121760-f049-4100-b890-1e68480ff1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.568 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:27:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.568 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:27:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb 25 12:27:27 compute-0 ceph-mon[76335]: pgmap v1340: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 118 op/s
Feb 25 12:27:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1249750884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb 25 12:27:27 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.657 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk@a8b6c9ac7dad412996d3d497216ce479 to images/a8d16fd6-a547-4180-934b-00fd58f79b87 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.701 244018 DEBUG nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.702 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.703 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.703 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.704 244018 DEBUG nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.704 244018 WARNING nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state shelving.
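The Acquiring/acquired/released triplets above are oslo.concurrency's named internal locks guarding the per-instance event queue; when no waiter is registered for the popped event, the manager emits the "No waiting events found" debug line followed by the "Received unexpected event" warning. A minimal sketch of the same named-lock pattern, assuming oslo.concurrency is installed (the _pop_event body is a placeholder):

from oslo_concurrency import lockutils

INSTANCE_UUID = "b8086e43-4c45-422f-a3b5-fa665c256b30"  # from the log

@lockutils.synchronized(f"{INSTANCE_UUID}-events")
def _pop_event():
    # Placeholder: look up and remove a registered waiter for the
    # network-vif event; returning None means no waiter was found.
    return None

event = _pop_event()
if event is None:
    print("No waiting events found")  # the DEBUG + WARNING path seen above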
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.714 244018 DEBUG nova.compute.manager [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.715 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022447.715182, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.725 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.725 244018 DEBUG nova.compute.manager [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.747 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/a8d16fd6-a547-4180-934b-00fd58f79b87 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.788 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.793 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.836 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022447.715263, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.836 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:27:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 MiB/s wr, 322 op/s
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.866 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.871 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
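The numeric states in the two "Synchronizing instance power state" lines map to nova's power-state constants: the database still records 4 (SHUTDOWN) while libvirt reports 1 (RUNNING), which the "Resumed" and "Started" handlers then reconcile. For reference, the values as defined in nova.compute.power_state:

# nova.compute.power_state constants referenced by the sync lines above.
NOSTATE = 0x00    # 0
RUNNING = 0x01    # 1  <- VM power_state in both sync lines
PAUSED = 0x03     # 3
SHUTDOWN = 0x04   # 4  <- DB power_state before the "Resumed" sync
CRASHED = 0x06    # 6
SUSPENDED = 0x07  # 7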
Feb 25 12:27:27 compute-0 nova_compute[244014]: 2026-02-25 12:27:27.887 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning cold snapshot process
Feb 25 12:27:28 compute-0 podman[295004]: 2026-02-25 12:27:27.947994508 +0000 UTC m=+0.021622363 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.070 244018 DEBUG nova.virt.libvirt.imagebackend [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:27:28 compute-0 podman[295004]: 2026-02-25 12:27:28.073052727 +0000 UTC m=+0.146680562 container create 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:27:28 compute-0 systemd[1]: Started libpod-conmon-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope.
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.146 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(a8b6c9ac7dad412996d3d497216ce479) on rbd image(bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:27:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb72b0e246b6d9347232c841bd42065a2553e7580308af9097a9325d7f9de613/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:28 compute-0 podman[295004]: 2026-02-25 12:27:28.166565904 +0000 UTC m=+0.240193789 container init 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:27:28 compute-0 podman[295004]: 2026-02-25 12:27:28.171221515 +0000 UTC m=+0.244849360 container start 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:27:28 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : New worker (295075) forked
Feb 25 12:27:28 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : Loading success.
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.236 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Successfully updated port: be60a13c-bf18-4eb6-ab12-d76c0abf9525 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.256 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.257 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquired lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.257 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.291 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(b5e987aa5269471aa49cf0d7cd2b10d6) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.433 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:27:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb 25 12:27:28 compute-0 ceph-mon[76335]: osdmap e213: 3 total, 3 up, 3 in
Feb 25 12:27:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb 25 12:27:28 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.641 244018 DEBUG nova.compute.manager [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-changed-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.642 244018 DEBUG nova.compute.manager [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Refreshing instance network info cache due to event network-changed-be60a13c-bf18-4eb6-ab12-d76c0abf9525. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.643 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.661 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(a8d16fd6-a547-4180-934b-00fd58f79b87) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.725 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@b5e987aa5269471aa49cf0d7cd2b10d6 to images/a1bfc991-8c23-4709-b2d6-80b0d18f6430 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:28 compute-0 nova_compute[244014]: 2026-02-25 12:27:28.851 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/a1bfc991-8c23-4709-b2d6-80b0d18f6430 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:29 compute-0 nova_compute[244014]: 2026-02-25 12:27:29.333 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(b5e987aa5269471aa49cf0d7cd2b10d6) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:27:29 compute-0 nova_compute[244014]: 2026-02-25 12:27:29.549 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
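The cache update above stores a list of VIF dicts per instance. A small sketch pulling the MAC and fixed IP out of an entry shaped like that one (fields trimmed to the ones used here):

# Sketch: extracting MAC and fixed IP from a network_info entry shaped like
# the instance_info_cache update logged above.
network_info = [{
    "id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525",
    "address": "fa:16:3e:78:7f:56",
    "network": {"subnets": [{"ips": [{"address": "10.100.0.5"}]}]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["address"], ip["address"])  # fa:16:3e:78:7f:56 10.100.0.5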
Feb 25 12:27:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb 25 12:27:29 compute-0 ceph-mon[76335]: pgmap v1342: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 MiB/s wr, 322 op/s
Feb 25 12:27:29 compute-0 ceph-mon[76335]: osdmap e214: 3 total, 3 up, 3 in
Feb 25 12:27:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb 25 12:27:29 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb 25 12:27:29 compute-0 nova_compute[244014]: 2026-02-25 12:27:29.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:29 compute-0 nova_compute[244014]: 2026-02-25 12:27:29.713 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(a1bfc991-8c23-4709-b2d6-80b0d18f6430) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
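Taken together, the rbd_utils lines trace nova's cold-snapshot path on Ceph: create_snap on the instance disk, clone the snapshot into the images pool, flatten the clone so it no longer depends on its parent, remove the source snapshot, and finally create the "snap" snapshot Glance expects on the uploaded image. A minimal sketch of that sequence with the Python rados/rbd bindings (an assumption — the log only shows the operation names), using the pool and image names from the b8086e43 snapshot above:

import rados
import rbd

SRC = "b8086e43-4c45-422f-a3b5-fa665c256b30_disk"
SNAP = "b5e987aa5269471aa49cf0d7cd2b10d6"
DST = "a1bfc991-8c23-4709-b2d6-80b0d18f6430"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
vms = cluster.open_ioctx("vms")
images = cluster.open_ioctx("images")

with rbd.Image(vms, SRC) as src:
    src.create_snap(SNAP)          # snapshot the instance disk
    src.protect_snap(SNAP)         # clones require a protected parent snap

rbd.RBD().clone(vms, SRC, SNAP, images, DST)   # vms/SRC@SNAP -> images/DST

with rbd.Image(images, DST) as dst:
    dst.flatten()                  # copy parent data so DST stands alone
    dst.create_snap("snap")        # the snapshot Glance expects

with rbd.Image(vms, SRC) as src:
    src.unprotect_snap(SNAP)       # removal requires unprotecting first
    src.remove_snap(SNAP)

cluster.shutdown()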
Feb 25 12:27:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 MiB/s wr, 363 op/s
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state shelving_image_uploading.
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.067 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.070 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Releasing lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance network_info: |[{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Refreshing network info cache for port be60a13c-bf18-4eb6-ab12-d76c0abf9525 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.074 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start _get_guest_xml network_info=[{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.081 244018 WARNING nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.093 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.094 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
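The v1 probe fails and the v2 probe succeeds because compute-0 runs a unified cgroups-v2 hierarchy. One plausible approximation of the v2 check (an assumption about the mechanism, not nova's literal code) is to look for "cpu" in the unified hierarchy's controller list:

from pathlib import Path

def has_cgroupsv2_cpu_controller() -> bool:
    """Report whether the unified (v2) hierarchy exposes the cpu controller."""
    try:
        controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
    except FileNotFoundError:
        return False               # no unified hierarchy -> not cgroups v2
    return "cpu" in controllers

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log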
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.100 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.100 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
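With no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536), the only topology for a single vCPU is 1:1:1, as the lines above show. An illustrative enumeration (not nova's exact code) of the valid (sockets, cores, threads) factorizations:

import itertools

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Yield every (sockets, cores, threads) whose product is the vCPU count
    # and which respects the per-dimension maxima.
    for s, c, t in itertools.product(
            range(1, min(vcpus, max_sockets) + 1),
            range(1, min(vcpus, max_cores) + 1),
            range(1, min(vcpus, max_threads) + 1)):
        if s * c * t == vcpus:
            yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log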
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.103 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb 25 12:27:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb 25 12:27:30 compute-0 ceph-mon[76335]: osdmap e215: 3 total, 3 up, 3 in
Feb 25 12:27:30 compute-0 ceph-mon[76335]: pgmap v1345: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 MiB/s wr, 363 op/s
Feb 25 12:27:30 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb 25 12:27:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519301337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.780 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
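Each RBD connection setup shells out to "ceph mon dump" to learn the monitor addresses, and the log times the call (0.677 s here, 0.585 s for the second call below). A minimal reproduction of the same command, assuming /etc/ceph/ceph.conf and the client.openstack keyring are in place:

import json
import subprocess
import time

start = time.monotonic()
proc = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
)
elapsed = time.monotonic() - start

mons = json.loads(proc.stdout)["mons"]         # one entry per monitor
print(f"{len(mons)} mon(s), dumped in {elapsed:.3f}s")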
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.808 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:30 compute-0 nova_compute[244014]: 2026-02-25 12:27:30.814 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:27:30
Feb 25 12:27:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:27:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:27:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'backups']
Feb 25 12:27:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:27:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850460076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.398 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.400 244018 DEBUG nova.virt.libvirt.vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:25Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.401 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.402 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.403 244018 DEBUG nova.objects.instance [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'pci_devices' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.428 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <uuid>b2915349-3797-40de-a554-2de79463723b</uuid>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <name>instance-0000003c</name>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1911361008</nova:name>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:30</nova:creationTime>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:user uuid="9f1b75cc204c46bba5f57dbfecdde9ad">tempest-InstanceActionsNegativeTestJSON-1721839948-project-member</nova:user>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:project uuid="09adeeb80de1411fb55d81d407987bba">tempest-InstanceActionsNegativeTestJSON-1721839948</nova:project>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <nova:port uuid="be60a13c-bf18-4eb6-ab12-d76c0abf9525">
Feb 25 12:27:31 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="serial">b2915349-3797-40de-a554-2de79463723b</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="uuid">b2915349-3797-40de-a554-2de79463723b</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b2915349-3797-40de-a554-2de79463723b_disk">
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b2915349-3797-40de-a554-2de79463723b_disk.config">
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:31 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:78:7f:56"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <target dev="tapbe60a13c-bf"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/console.log" append="off"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:31 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:31 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:31 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:31 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:31 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.430 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Preparing to wait for external event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.430 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.431 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.431 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.432 244018 DEBUG nova.virt.libvirt.vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:25Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.433 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.433 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.434 244018 DEBUG os_vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.440 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe60a13c-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.441 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe60a13c-bf, col_values=(('external_ids', {'iface-id': 'be60a13c-bf18-4eb6-ab12-d76c0abf9525', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:7f:56', 'vm-uuid': 'b2915349-3797-40de-a554-2de79463723b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:31 compute-0 NetworkManager[49836]: <info>  [1772022451.4437] manager: (tapbe60a13c-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.453 244018 INFO os_vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf')
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.516 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.517 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.518 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No VIF found with MAC fa:16:3e:78:7f:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.518 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Using config drive
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.551 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:31 compute-0 podman[295276]: 2026-02-25 12:27:31.561143365 +0000 UTC m=+0.075142228 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.563 244018 INFO nova.virt.libvirt.driver [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Snapshot image upload complete
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.564 244018 INFO nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 5.91 seconds to snapshot the instance on the hypervisor.
Feb 25 12:27:31 compute-0 podman[295277]: 2026-02-25 12:27:31.570055827 +0000 UTC m=+0.078529183 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.602 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updated VIF entry in instance network info cache for port be60a13c-bf18-4eb6-ab12-d76c0abf9525. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.602 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.629 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:31 compute-0 ceph-mon[76335]: osdmap e216: 3 total, 3 up, 3 in
Feb 25 12:27:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2519301337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1850460076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 737 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.8 MiB/s wr, 335 op/s
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:27:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.970 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating config drive at /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config
Feb 25 12:27:31 compute-0 nova_compute[244014]: 2026-02-25 12:27:31.980 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5dpdrarh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.133 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5dpdrarh" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.167 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.172 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config b2915349-3797-40de-a554-2de79463723b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.317 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config b2915349-3797-40de-a554-2de79463723b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.318 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deleting local config drive /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config because it was imported into RBD.
Feb 25 12:27:32 compute-0 kernel: tapbe60a13c-bf: entered promiscuous mode
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.3761] manager: (tapbe60a13c-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Feb 25 12:27:32 compute-0 ovn_controller[147040]: 2026-02-25T12:27:32Z|00514|binding|INFO|Claiming lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 for this chassis.
Feb 25 12:27:32 compute-0 ovn_controller[147040]: 2026-02-25T12:27:32Z|00515|binding|INFO|be60a13c-bf18-4eb6-ab12-d76c0abf9525: Claiming fa:16:3e:78:7f:56 10.100.0.5
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.423 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.423 244018 DEBUG nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.428 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:7f:56 10.100.0.5'], port_security=['fa:16:3e:78:7f:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2915349-3797-40de-a554-2de79463723b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6988a49-992e-4d2d-a359-182201377e6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09adeeb80de1411fb55d81d407987bba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a7fbc9cc-1faa-472a-913b-870c01f46476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=129e28e8-2627-4723-80a9-1e2113f78748, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be60a13c-bf18-4eb6-ab12-d76c0abf9525) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.429 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be60a13c-bf18-4eb6-ab12-d76c0abf9525 in datapath c6988a49-992e-4d2d-a359-182201377e6a bound to our chassis
Feb 25 12:27:32 compute-0 ovn_controller[147040]: 2026-02-25T12:27:32Z|00516|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 ovn-installed in OVS
Feb 25 12:27:32 compute-0 ovn_controller[147040]: 2026-02-25T12:27:32Z|00517|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 up in Southbound
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6988a49-992e-4d2d-a359-182201377e6a
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 systemd-udevd[295388]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc66d52b-b050-48c8-8239-2b0886ee430a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.445 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6988a49-91 in ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.447 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6988a49-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7112e1-47bd-4f5b-a4de-219cdcf3ef16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08b62aef-4296-4cef-a8cc-69f1f7a4dc2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.4503] device (tapbe60a13c-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.4522] device (tapbe60a13c-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:27:32 compute-0 systemd-machined[210048]: New machine qemu-67-instance-0000003c.
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.458 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[54c22e39-e3b9-4c27-8150-bd08eaabf8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.480 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44a17314-84a9-40c2-82f0-723dd61dfdaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.514 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec223556-ec71-4e1e-8d7e-a271bb2e4581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6298527-1037-4b46-885e-f9ee087997ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.5214] manager: (tapc6988a49-90): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.552 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[66f66186-f6be-45cb-bafa-bc20160b3b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.553 244018 INFO nova.compute.manager [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Pausing
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.554 244018 DEBUG nova.objects.instance [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c34bfe12-f6f3-4f19-ac4e-7e4dfede6d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.5761] device (tapc6988a49-90): carrier: link connected
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.584 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[85ca81bf-f045-4b55-940f-93ae69f6ac5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.590 244018 INFO nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shelve offloading
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.597 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.598 244018 DEBUG nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG nova.network.neutron [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.606 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc5d82c-b811-4028-9416-68d2ac9be700]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6988a49-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:db:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442198, 'reachable_time': 35522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295427, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad5fcd-2648-4ef7-9bbe-48fb50462eeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:db8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442198, 'tstamp': 442198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295428, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9edd0707-7ea9-4b4e-b6f1-b74d517f192a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6988a49-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:db:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442198, 'reachable_time': 35522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295429, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[740fffc7-e82d-41f3-b74e-a1a26b7576ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.687 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.6873376, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.687 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)
Feb 25 12:27:32 compute-0 ceph-mon[76335]: pgmap v1347: 305 pgs: 305 active+clean; 737 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.8 MiB/s wr, 335 op/s
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.689 244018 DEBUG nova.compute.manager [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3adbf6-679a-4a5e-ab6e-d36084bda2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6988a49-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.713 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6988a49-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:32 compute-0 NetworkManager[49836]: <info>  [1772022452.7166] manager: (tapc6988a49-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Feb 25 12:27:32 compute-0 kernel: tapc6988a49-90: entered promiscuous mode
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.725 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6988a49-90, col_values=(('external_ids', {'iface-id': '1b406748-e819-4bf2-b148-753ecf2ebbe2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
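The three single-command transactions above (DelPortCommand, AddPortCommand, DbSetCommand) show the metadata agent replugging the tap from br-ex into br-int and tagging it with the neutron port id. A minimal sketch of the same sequence through ovsdbapp's Open_vSwitch API, assuming a local OVSDB socket at /run/openvswitch/db.sock (the agent's real code path differs and runs each command in its own transaction):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same replug as the logged commands: drop the tap from br-ex, add it
    # to br-int, then point its iface-id at the neutron port.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapc6988a49-90', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapc6988a49-90', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapc6988a49-90',
            ('external_ids',
             {'iface-id': '1b406748-e819-4bf2-b148-753ecf2ebbe2'})))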
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.727 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:32 compute-0 ovn_controller[147040]: 2026-02-25T12:27:32Z|00518|binding|INFO|Releasing lport 1b406748-e819-4bf2-b148-753ecf2ebbe2 from this chassis (sb_readonly=0)
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.739 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
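The ENOENT above is expected on a fresh provision: the agent probes the haproxy pidfile to decide whether a proxy is already running before writing a new config. Roughly the same tolerant read, sketched (the helper name is ours; the path is taken from the log line):

    # Return the pidfile contents, or None if no proxy has been spawned yet.
    def get_pid_value(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    get_pid_value('/var/lib/neutron/external/pids/'
                  'c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy')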
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0876a098-79eb-4013-a91b-c8ea01cd702c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.741 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-c6988a49-992e-4d2d-a359-182201377e6a
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID c6988a49-992e-4d2d-a359-182201377e6a
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:27:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.741 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'env', 'PROCESS_TAG=haproxy-c6988a49-992e-4d2d-a359-182201377e6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6988a49-992e-4d2d-a359-182201377e6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
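create_config_file writes the config dumped above to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, and the rootwrap command then launches haproxy against it inside the ovnmeta namespace. One way to sanity-check such a generated file before it is handed to the proxy, sketched in Python (haproxy's -c flag only parses and validates the configuration; it does not start the daemon):

    import subprocess

    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'c6988a49-992e-4d2d-a359-182201377e6a.conf')
    # Exit code 0 means the configuration is syntactically valid.
    check = subprocess.run(['haproxy', '-c', '-f', conf],
                           capture_output=True, text=True)
    print(check.returncode, check.stderr.strip())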
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.977 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.9768987, b2915349-3797-40de-a554-2de79463723b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.978 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Started (Lifecycle Event)
Feb 25 12:27:32 compute-0 nova_compute[244014]: 2026-02-25 12:27:32.997 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:33 compute-0 nova_compute[244014]: 2026-02-25 12:27:33.001 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.9806504, b2915349-3797-40de-a554-2de79463723b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:33 compute-0 nova_compute[244014]: 2026-02-25 12:27:33.001 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Paused (Lifecycle Event)
Feb 25 12:27:33 compute-0 nova_compute[244014]: 2026-02-25 12:27:33.024 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:33 compute-0 nova_compute[244014]: 2026-02-25 12:27:33.027 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:33 compute-0 nova_compute[244014]: 2026-02-25 12:27:33.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] During sync_power_state the instance has a pending task (spawning). Skip.
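The numeric states in these "Synchronizing instance power state" lines are nova's power_state constants, which track libvirt's domain states; DB power_state 0 with VM power_state 3 above means the database still records NOSTATE while libvirt already reports the domain paused. For reference, a sketch importable wherever nova is installed:

    from nova.compute import power_state

    # The values seen in the sync messages above.
    assert power_state.NOSTATE == 0   # DB state while the instance is building
    assert power_state.RUNNING == 1
    assert power_state.PAUSED == 3    # guest spawned paused / explicitly paused
    assert power_state.SHUTDOWN == 4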
Feb 25 12:27:33 compute-0 podman[295508]: 2026-02-25 12:27:33.176470721 +0000 UTC m=+0.071282438 container create e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:27:33 compute-0 podman[295508]: 2026-02-25 12:27:33.143900039 +0000 UTC m=+0.038711796 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:27:33 compute-0 systemd[1]: Started libpod-conmon-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope.
Feb 25 12:27:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b4837bf9aa32acef51cbe6f98489e364315734f563f075261a323ae88457b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:33 compute-0 podman[295508]: 2026-02-25 12:27:33.289225912 +0000 UTC m=+0.184037659 container init e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:27:33 compute-0 podman[295508]: 2026-02-25 12:27:33.295960653 +0000 UTC m=+0.190772370 container start e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:27:33 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : New worker (295529) forked
Feb 25 12:27:33 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : Loading success.
Feb 25 12:27:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 16 MiB/s wr, 482 op/s
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.164 244018 DEBUG nova.compute.manager [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.164 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.165 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.165 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.166 244018 DEBUG nova.compute.manager [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Processing event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.167 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
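The Acquiring/acquired/released triplets around the per-instance event queue come from oslo_concurrency's synchronized decorator; the waited/held durations in the messages are measured by its wrapper. A minimal sketch of the idiom (lock name copied from the log, body hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('b2915349-3797-40de-a554-2de79463723b-events')
    def _pop_event():
        # Runs with the named in-process lock held; entry and exit produce
        # the "acquired by" / "released by" debug lines seen above.
        pass

    _pop_event()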
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.171 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022454.1715372, b2915349-3797-40de-a554-2de79463723b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.172 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Resumed (Lifecycle Event)
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.176 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.181 244018 INFO nova.virt.libvirt.driver [-] [instance: b2915349-3797-40de-a554-2de79463723b] Instance spawned successfully.
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.181 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.206 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.215 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.220 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.221 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.222 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.222 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.223 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.223 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.281 244018 INFO nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 8.98 seconds to spawn the instance on the hypervisor.
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.282 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.396 244018 INFO nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 10.30 seconds to build instance.
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.423 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.723 244018 DEBUG nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.784 244018 INFO nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] instance snapshotting
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.793 244018 INFO nova.compute.manager [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Unpausing
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.794 244018 DEBUG nova.objects.instance [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.830 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022454.8306704, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.831 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:27:34 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.835 244018 DEBUG nova.virt.libvirt.guest [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.835 244018 DEBUG nova.compute.manager [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.852 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:34 compute-0 nova_compute[244014]: 2026-02-25 12:27:34.892 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 12:27:34 compute-0 ceph-mon[76335]: pgmap v1348: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 16 MiB/s wr, 482 op/s
Feb 25 12:27:35 compute-0 nova_compute[244014]: 2026-02-25 12:27:35.070 244018 INFO nova.virt.libvirt.driver [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Beginning live snapshot process
Feb 25 12:27:35 compute-0 nova_compute[244014]: 2026-02-25 12:27:35.131 244018 DEBUG nova.network.neutron [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb 25 12:27:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb 25 12:27:35 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb 25 12:27:35 compute-0 nova_compute[244014]: 2026-02-25 12:27:35.157 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:35 compute-0 nova_compute[244014]: 2026-02-25 12:27:35.238 244018 DEBUG nova.virt.libvirt.imagebackend [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:27:35 compute-0 nova_compute[244014]: 2026-02-25 12:27:35.480 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(164300b8e5ac4fcebf239e5a5f4ee14f) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:27:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 15 MiB/s wr, 468 op/s
Feb 25 12:27:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 25 12:27:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb 25 12:27:36 compute-0 ceph-mon[76335]: osdmap e217: 3 total, 3 up, 3 in
Feb 25 12:27:36 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.209 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk@164300b8e5ac4fcebf239e5a5f4ee14f to images/d6d3f72c-5f81-4cf6-8807-0685977b2a2c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.318 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/d6d3f72c-5f81-4cf6-8807-0685977b2a2c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.436 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.438 244018 DEBUG nova.objects.instance [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.460 244018 DEBUG nova.virt.libvirt.vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.460 244018 DEBUG nova.network.os_vif_util [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.462 244018 DEBUG nova.network.os_vif_util [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.462 244018 DEBUG os_vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabdb97b5-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.477 244018 INFO os_vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')
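The unplug above goes through os-vif's public entry point: nova converts its network_info dict into a VIFOpenVSwitch object (the "Converted object" line) and hands it to os_vif.unplug(), which dispatches to the 'ovs' plugin. A hedged sketch of that call with the field set trimmed from the logged repr (the real object also carries port_profile and full subnet data; the InstanceInfo fields are partly assumed):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the vif plugins, including 'ovs'
    net = network.Network(id='64c22162-7e15-45de-8fd2-8c9a24f27006',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(id='abdb97b5-8e9d-4929-af6f-bfb06c067878',
                           address='fa:16:3e:f8:53:87', plugin='ovs',
                           vif_name='tapabdb97b5-8e', bridge_name='br-int',
                           network=net)
    inst = instance_info.InstanceInfo(
        uuid='b8086e43-4c45-422f-a3b5-fa665c256b30',
        name='tempest-ServerActionsTestOtherB-server-2111996537')
    os_vif.unplug(v, inst)  # ends in the DelPortCommand logged above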
Feb 25 12:27:36 compute-0 nova_compute[244014]: 2026-02-25 12:27:36.928 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(164300b8e5ac4fcebf239e5a5f4ee14f) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.002 244018 DEBUG nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.004 244018 DEBUG nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.004 244018 WARNING nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received unexpected event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with vm_state active and task_state None.
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.045 244018 DEBUG nova.compute.manager [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.045 244018 DEBUG nova.compute.manager [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.046 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.046 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.047 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.066 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting instance files /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.068 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deletion of /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del complete
Feb 25 12:27:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb 25 12:27:37 compute-0 ceph-mon[76335]: pgmap v1350: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 15 MiB/s wr, 468 op/s
Feb 25 12:27:37 compute-0 ceph-mon[76335]: osdmap e218: 3 total, 3 up, 3 in
Feb 25 12:27:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb 25 12:27:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.172 244018 INFO nova.scheduler.client.report [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance b8086e43-4c45-422f-a3b5-fa665c256b30
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.200 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(d6d3f72c-5f81-4cf6-8807-0685977b2a2c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
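The rbd_utils lines above trace nova's RBD-backed live snapshot: snap the source disk in the vms pool, clone the snap into the images pool, flatten the clone so it no longer depends on the parent, drop the temporary snap, then create the 'snap' snapshot Glance expects on the result. The same sequence in the python-rbd bindings, roughly (pool, image, and snap names from the log; the client id matches the ceph df call below):

    import rados
    import rbd

    SRC = '160a4e3f-b197-4b82-a2ff-cebf79df47df_disk'
    TMP = '164300b8e5ac4fcebf239e5a5f4ee14f'
    DST = 'd6d3f72c-5f81-4cf6-8807-0685977b2a2c'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms, images = cluster.open_ioctx('vms'), cluster.open_ioctx('images')
    try:
        src = rbd.Image(vms, SRC)
        src.create_snap(TMP)
        src.protect_snap(TMP)            # clone parents must be protected
        rbd.RBD().clone(vms, SRC, TMP, images, DST)
        dst = rbd.Image(images, DST)
        dst.flatten()                    # copy blocks; break the parent link
        src.unprotect_snap(TMP)
        src.remove_snap(TMP)
        dst.create_snap('snap')          # the snapshot Glance references
        dst.close(); src.close()
    finally:
        vms.close(); images.close(); cluster.shutdown()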
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.277 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.278 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.294 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.296 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.297 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.299 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.302 244018 INFO nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Terminating instance
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.304 244018 DEBUG nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:27:37 compute-0 kernel: tapbe60a13c-bf (unregistering): left promiscuous mode
Feb 25 12:27:37 compute-0 NetworkManager[49836]: <info>  [1772022457.3523] device (tapbe60a13c-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 ovn_controller[147040]: 2026-02-25T12:27:37Z|00519|binding|INFO|Releasing lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 from this chassis (sb_readonly=0)
Feb 25 12:27:37 compute-0 ovn_controller[147040]: 2026-02-25T12:27:37Z|00520|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 down in Southbound
Feb 25 12:27:37 compute-0 ovn_controller[147040]: 2026-02-25T12:27:37Z|00521|binding|INFO|Removing iface tapbe60a13c-bf ovn-installed in OVS
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.370 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:7f:56 10.100.0.5'], port_security=['fa:16:3e:78:7f:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2915349-3797-40de-a554-2de79463723b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6988a49-992e-4d2d-a359-182201377e6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09adeeb80de1411fb55d81d407987bba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a7fbc9cc-1faa-472a-913b-870c01f46476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=129e28e8-2627-4723-80a9-1e2113f78748, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be60a13c-bf18-4eb6-ab12-d76c0abf9525) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be60a13c-bf18-4eb6-ab12-d76c0abf9525 in datapath c6988a49-992e-4d2d-a359-182201377e6a unbound from our chassis
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.372 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6988a49-992e-4d2d-a359-182201377e6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c06cfaf-abc5-4de6-838e-b7b7cef3ae3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a namespace which is not needed anymore
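PortBindingUpdatedEvent is an ovsdbapp row event: the agent watches the southbound Port_Binding table and reacts when a port's chassis column changes, which is what drives the "unbound from our chassis" teardown here. A bare-bones sketch of such an event class, matching the constructor arguments printed in the Matched UPDATE line (handler body hypothetical):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as shown in the matched-event repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_type, row, old):
            # Comparing old.chassis with row.chassis distinguishes a port
            # being bound to this chassis from one being released.
            pass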
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.397 244018 DEBUG oslo_concurrency.processutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
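ceph df is shelled out through oslo's processutils rather than a bare Popen, which is what produces the "Running cmd (subprocess)" debug line and adds exit-status checking for free. The equivalent direct call, sketched:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')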
Feb 25 12:27:37 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Feb 25 12:27:37 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 3.726s CPU time.
Feb 25 12:27:37 compute-0 systemd-machined[210048]: Machine qemu-67-instance-0000003c terminated.
Feb 25 12:27:37 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : haproxy version is 2.8.14-c23fe91
Feb 25 12:27:37 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : path to executable is /usr/sbin/haproxy
Feb 25 12:27:37 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [WARNING]  (295527) : Exiting Master process...
Feb 25 12:27:37 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [ALERT]    (295527) : Current worker (295529) exited with code 143 (Terminated)
Feb 25 12:27:37 compute-0 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [WARNING]  (295527) : All workers exited. Exiting... (0)
Feb 25 12:27:37 compute-0 systemd[1]: libpod-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope: Deactivated successfully.
Feb 25 12:27:37 compute-0 podman[295723]: 2026-02-25 12:27:37.515627032 +0000 UTC m=+0.049754579 container died e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.542 244018 INFO nova.virt.libvirt.driver [-] [instance: b2915349-3797-40de-a554-2de79463723b] Instance destroyed successfully.
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.543 244018 DEBUG nova.objects.instance [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'resources' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b4837bf9aa32acef51cbe6f98489e364315734f563f075261a323ae88457b7-merged.mount: Deactivated successfully.
Feb 25 12:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9-userdata-shm.mount: Deactivated successfully.
Feb 25 12:27:37 compute-0 podman[295723]: 2026-02-25 12:27:37.560721238 +0000 UTC m=+0.094848785 container cleanup e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.563 244018 DEBUG nova.virt.libvirt.vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:34Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.564 244018 DEBUG nova.network.os_vif_util [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.566 244018 DEBUG nova.network.os_vif_util [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.567 244018 DEBUG os_vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe60a13c-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.581 244018 INFO os_vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf')
Feb 25 12:27:37 compute-0 systemd[1]: libpod-conmon-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope: Deactivated successfully.
Feb 25 12:27:37 compute-0 podman[295781]: 2026-02-25 12:27:37.623481175 +0000 UTC m=+0.042652588 container remove e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c969b4a-d140-4c04-b792-69f03b265a48]: (4, ('Wed Feb 25 12:27:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a (e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9)\ne11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9\nWed Feb 25 12:27:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a (e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9)\ne11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcadbdb4-f3bf-44f7-87be-5e702571dd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.630 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6988a49-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:37 compute-0 kernel: tapc6988a49-90: left promiscuous mode
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30881c5d-7963-429d-b1de-d0455770e6f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba62574b-d2ab-4b22-a36b-02c00b02def5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f64a33a8-ba30-480d-9dd2-d850e774d911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f24074-7ec0-4ca8-91ee-296bd29d61c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442191, 'reachable_time': 20694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295814, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.677 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:27:37 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6988a49\x2d992e\x2d4d2d\x2da359\x2d182201377e6a.mount: Deactivated successfully.
Feb 25 12:27:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.677 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[98a54d47-9b4b-4cf8-ab16-b07d06105278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.839 244018 INFO nova.virt.libvirt.driver [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deleting instance files /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b_del
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.839 244018 INFO nova.virt.libvirt.driver [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deletion of /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b_del complete
Feb 25 12:27:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 16 MiB/s wr, 536 op/s
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.903 244018 INFO nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.904 244018 DEBUG oslo.service.loopingcall [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.904 244018 DEBUG nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.905 244018 DEBUG nova.network.neutron [-] [instance: b2915349-3797-40de-a554-2de79463723b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:27:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392415029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.951 244018 DEBUG oslo_concurrency.processutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.957 244018 DEBUG nova.compute.provider_tree [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:37 compute-0 nova_compute[244014]: 2026-02-25 12:27:37.983 244018 DEBUG nova.scheduler.client.report [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:38 compute-0 nova_compute[244014]: 2026-02-25 12:27:38.032 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb 25 12:27:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb 25 12:27:38 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb 25 12:27:38 compute-0 ceph-mon[76335]: osdmap e219: 3 total, 3 up, 3 in
Feb 25 12:27:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/392415029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:38 compute-0 nova_compute[244014]: 2026-02-25 12:27:38.699 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:39 compute-0 ceph-mon[76335]: pgmap v1353: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 16 MiB/s wr, 536 op/s
Feb 25 12:27:39 compute-0 ceph-mon[76335]: osdmap e220: 3 total, 3 up, 3 in
Feb 25 12:27:39 compute-0 nova_compute[244014]: 2026-02-25 12:27:39.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.0 MiB/s wr, 369 op/s
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.005 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.005 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.008 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.008 244018 WARNING nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received unexpected event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with vm_state active and task_state deleting.
Feb 25 12:27:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.172 244018 DEBUG nova.network.neutron [-] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.210 244018 INFO nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] Took 2.30 seconds to deallocate network for instance.
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.265 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.265 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.385 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.385 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": null, "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.425 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.449 244018 DEBUG oslo_concurrency.processutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.497 244018 DEBUG nova.compute.manager [req-6c6d41ce-1601-4f77-b9f4-21749c28f3ba req-79a4d743-8446-41b3-85b8-0ff3b8d3501f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-deleted-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.625 244018 INFO nova.virt.libvirt.driver [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Snapshot image upload complete
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.626 244018 INFO nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 5.84 seconds to snapshot the instance on the hypervisor.
Feb 25 12:27:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691922157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:40 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.991 244018 DEBUG oslo_concurrency.processutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:41 compute-0 nova_compute[244014]: 2026-02-25 12:27:40.999 244018 DEBUG nova.compute.provider_tree [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:41 compute-0 nova_compute[244014]: 2026-02-25 12:27:41.019 244018 DEBUG nova.scheduler.client.report [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:41 compute-0 nova_compute[244014]: 2026-02-25 12:27:41.051 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:41 compute-0 nova_compute[244014]: 2026-02-25 12:27:41.094 244018 INFO nova.scheduler.client.report [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Deleted allocations for instance b2915349-3797-40de-a554-2de79463723b
Feb 25 12:27:41 compute-0 nova_compute[244014]: 2026-02-25 12:27:41.175 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:41 compute-0 ceph-mon[76335]: pgmap v1355: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.0 MiB/s wr, 369 op/s
Feb 25 12:27:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2691922157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:41 compute-0 ovn_controller[147040]: 2026-02-25T12:27:41Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:27:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 856 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Feb 25 12:27:42 compute-0 nova_compute[244014]: 2026-02-25 12:27:42.063 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022447.062671, b8086e43-4c45-422f-a3b5-fa665c256b30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:42 compute-0 nova_compute[244014]: 2026-02-25 12:27:42.064 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Stopped (Lifecycle Event)
Feb 25 12:27:42 compute-0 nova_compute[244014]: 2026-02-25 12:27:42.100 244018 DEBUG nova.compute.manager [None req-cfad8324-b99c-481b-9e4f-4217ff02b6eb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0035085793762395815 of space, bias 1.0, pg target 1.0525738128718745 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005452531556362146 of space, bias 1.0, pg target 1.6303069353522814 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0971725860315506e-06 of space, bias 4.0, pg target 0.0013078297225496084 quantized to 16 (current 16)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011370018558312169 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012507020414143388 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:27:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015160024744416227 quantized to 32 (current 32)
Feb 25 12:27:42 compute-0 nova_compute[244014]: 2026-02-25 12:27:42.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:43 compute-0 ceph-mon[76335]: pgmap v1356: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 856 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.343 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.344 244018 INFO nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Unshelving
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.492 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.493 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.499 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_requests' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.518 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.533 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.534 244018 INFO nova.compute.claims [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:27:43 compute-0 nova_compute[244014]: 2026-02-25 12:27:43.730 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.1 MiB/s wr, 418 op/s
Feb 25 12:27:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085921296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.343 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.349 244018 DEBUG nova.compute.provider_tree [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.365 244018 DEBUG nova.scheduler.client.report [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.387 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:44 compute-0 ovn_controller[147040]: 2026-02-25T12:27:44Z|00522|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 12:27:44 compute-0 ovn_controller[147040]: 2026-02-25T12:27:44Z|00523|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.622 244018 INFO nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating port abdb97b5-8e9d-4929-af6f-bfb06c067878 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 25 12:27:44 compute-0 nova_compute[244014]: 2026-02-25 12:27:44.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb 25 12:27:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb 25 12:27:45 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb 25 12:27:45 compute-0 ceph-mon[76335]: pgmap v1357: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.1 MiB/s wr, 418 op/s
Feb 25 12:27:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1085921296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:45 compute-0 ceph-mon[76335]: osdmap e221: 3 total, 3 up, 3 in
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.392 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.393 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.393 244018 DEBUG nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.545 244018 DEBUG nova.compute.manager [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.545 244018 DEBUG nova.compute.manager [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:27:45 compute-0 nova_compute[244014]: 2026-02-25 12:27:45.546 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 184 op/s
Feb 25 12:27:47 compute-0 ceph-mon[76335]: pgmap v1359: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 184 op/s
Feb 25 12:27:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:27:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:27:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:27:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:27:47 compute-0 nova_compute[244014]: 2026-02-25 12:27:47.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 25 12:27:48 compute-0 nova_compute[244014]: 2026-02-25 12:27:48.791 244018 DEBUG nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.223 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.226 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.226 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating image(s)
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.309 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:27:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.315 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.317 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.318 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.377 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.411 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.415 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "79eac0f5768899c715881ae5099eb80ea7cb356e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.417 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "79eac0f5768899c715881ae5099eb80ea7cb356e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.710 244018 DEBUG nova.virt.libvirt.imagebackend [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.779 244018 DEBUG nova.virt.libvirt.imagebackend [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 12:27:49 compute-0 nova_compute[244014]: 2026-02-25 12:27:49.780 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning images/a1bfc991-8c23-4709-b2d6-80b0d18f6430@snap to None/b8086e43-4c45-422f-a3b5-fa665c256b30_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:27:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.053 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "79eac0f5768899c715881ae5099eb80ea7cb356e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.223 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.306 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:27:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb 25 12:27:50 compute-0 ceph-mon[76335]: pgmap v1360: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 25 12:27:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb 25 12:27:50 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.610 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.610 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.611 244018 INFO nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Rebooting instance
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.629 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.630 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:50 compute-0 nova_compute[244014]: 2026-02-25 12:27:50.630 244018 DEBUG nova.network.neutron [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:50 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.010 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Image rbd:vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.011 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.012 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ensure instance console log exists: /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.012 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.018 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start _get_guest_xml network_info=[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:27:24Z,direct_url=<?>,disk_format='raw',id=a1bfc991-8c23-4709-b2d6-80b0d18f6430,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2111996537-shelved',owner='c85a955249394f0faf7c890f5cd0df32',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:27:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.023 244018 WARNING nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.169 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.170 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.174 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:27:24Z,direct_url=<?>,disk_format='raw',id=a1bfc991-8c23-4709-b2d6-80b0d18f6430,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2111996537-shelved',owner='c85a955249394f0faf7c890f5cd0df32',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:27:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.201 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:51 compute-0 ceph-mon[76335]: pgmap v1361: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Feb 25 12:27:51 compute-0 ceph-mon[76335]: osdmap e222: 3 total, 3 up, 3 in
Feb 25 12:27:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017960151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.787 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.806 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:51 compute-0 nova_compute[244014]: 2026-02-25 12:27:51.809 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 844 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 74 op/s
Feb 25 12:27:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435174376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.296 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.298 244018 DEBUG nova.virt.libvirt.vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='a1bfc991-8c23-4709-b2d6-80b0d18f6430',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.299 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.300 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.301 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.341 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <uuid>b8086e43-4c45-422f-a3b5-fa665c256b30</uuid>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <name>instance-0000002f</name>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherB-server-2111996537</nova:name>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:51</nova:creationTime>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="a1bfc991-8c23-4709-b2d6-80b0d18f6430"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <nova:port uuid="abdb97b5-8e9d-4929-af6f-bfb06c067878">
Feb 25 12:27:52 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="serial">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="uuid">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk">
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config">
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:52 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f8:53:87"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <target dev="tapabdb97b5-8e"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log" append="off"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:52 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:52 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:52 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:52 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:52 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.343 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Preparing to wait for external event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.345 244018 DEBUG nova.virt.libvirt.vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='a1bfc991-8c23-4709-b2d6-80b0d18f6430',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.345 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.346 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.346 244018 DEBUG os_vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.352 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabdb97b5-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabdb97b5-8e, col_values=(('external_ids', {'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:53:87', 'vm-uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
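The three ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) are os-vif plugging the tap device into br-int. A minimal sketch of the same calls through ovsdbapp's Open_vSwitch API, assuming the default local ovsdb-server socket path:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumption: ovsdb-server listening on the default local socket.
OVSDB = 'unix:/run/openvswitch/db.sock'
idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# One transaction: add the port (idempotent via may_exist) and tag the
# Interface row with the external_ids ovn-controller binds against.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapabdb97b5-8e', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapabdb97b5-8e',
        ('external_ids', {'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878',
                          'iface-status': 'active',
                          'attached-mac': 'fa:16:3e:f8:53:87',
                          'vm-uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30'})))
```

The `iface-id` written here is what ovn-controller matches against a Southbound Port_Binding a moment later (the "Claiming lport" lines below).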
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:52 compute-0 NetworkManager[49836]: <info>  [1772022472.3554] manager: (tapabdb97b5-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.362 244018 INFO os_vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.410 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:f8:53:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Using config drive
Feb 25 12:27:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2017960151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2435174376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
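These two `mon dump` dispatches are Nova's RBD driver opening fresh RADOS connections as client.openstack while preparing the config-drive upload below. A sketch of the equivalent monitor command via the python-rados bindings, using the identity and conf file shown in the log:

```python
import json

import rados  # python3-rados bindings

with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as c:
    ret, out, err = c.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    print(ret, json.loads(out)['epoch'])  # 0 and the current monmap epoch
```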
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.440 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.461 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:52 compute-0 sshd-session[296116]: Invalid user latitude from 80.94.92.186 port 53622
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.505 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'keypairs' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.538 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022457.5376704, b2915349-3797-40de-a554-2de79463723b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.538 244018 INFO nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] VM Stopped (Lifecycle Event)
Feb 25 12:27:52 compute-0 nova_compute[244014]: 2026-02-25 12:27:52.558 244018 DEBUG nova.compute.manager [None req-b1637fb9-7cce-4702-8bcc-ff31a0e649e4 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:52 compute-0 sshd-session[296116]: Connection closed by invalid user latitude 80.94.92.186 port 53622 [preauth]
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.135 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.136 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.169 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.200 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating config drive at /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.205 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_rgib05k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.343 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_rgib05k" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
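The config drive is built by shelling out to mkisofs through oslo.concurrency; the `-publisher` value contains spaces but appears unquoted above because processutils logs argv joined with spaces. A sketch of the same invocation (paths are the ones from the log; /tmp/tmp_rgib05k is the staging directory holding the metadata tree):

```python
from oslo_concurrency import processutils

out, err = processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/'
          'b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
    '-quiet', '-J', '-r',
    '-V', 'config-2',    # volume label that cloud-init/cirros probe for
    '/tmp/tmp_rgib05k')  # staging dir from the log line above
```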
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.363 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.366 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb 25 12:27:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb 25 12:27:53 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb 25 12:27:53 compute-0 ceph-mon[76335]: pgmap v1363: 305 pgs: 305 active+clean; 844 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 74 op/s
Feb 25 12:27:53 compute-0 ceph-mon[76335]: osdmap e223: 3 total, 3 up, 3 in
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.539 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.540 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting local config drive /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config because it was imported into RBD.
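With rbd_utils having reported the image absent, Nova imported the freshly built ISO into the `vms` pool and removed the local copy. A sketch that verifies the result with the python rbd/rados bindings, reusing the client identity from the import command above:

```python
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        images = rbd.RBD().list(ioctx)
        # the destination name rbd import was given above
        assert 'b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config' in images
```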
Feb 25 12:27:53 compute-0 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 12:27:53 compute-0 NetworkManager[49836]: <info>  [1772022473.6021] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Feb 25 12:27:53 compute-0 ovn_controller[147040]: 2026-02-25T12:27:53Z|00524|binding|INFO|Claiming lport abdb97b5-8e9d-4929-af6f-bfb06c067878 for this chassis.
Feb 25 12:27:53 compute-0 ovn_controller[147040]: 2026-02-25T12:27:53Z|00525|binding|INFO|abdb97b5-8e9d-4929-af6f-bfb06c067878: Claiming fa:16:3e:f8:53:87 10.100.0.6
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:53 compute-0 ovn_controller[147040]: 2026-02-25T12:27:53Z|00526|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 ovn-installed in OVS
Feb 25 12:27:53 compute-0 ovn_controller[147040]: 2026-02-25T12:27:53Z|00527|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 up in Southbound
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.623 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '7', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
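The wall of Port_Binding columns above is ovsdbapp's event matcher firing on the OVN Southbound update: `chassis` went from empty to this host's Chassis row. The agent's handler is built on ovsdbapp's RowEvent pattern; a simplified sketch of that pattern (not neutron's exact class):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Fire when a Port_Binding row is claimed by a chassis."""

    def __init__(self):
        # (events, table, conditions) - match any update to Port_Binding
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' carries only the changed columns; chassis [] -> [Chassis]
        # is exactly the transition logged above.
        if not getattr(old, 'chassis', None) and row.chassis:
            print('lport %s bound to chassis %s' % (row.logical_port,
                                                    row.chassis[0].name))
```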
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.626 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.628 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
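Provisioning here means creating the ovnmeta- network namespace, a tap interface holding 169.254.169.254, and a per-network haproxy forwarding to the metadata agent. Once that is in place, a guest on this network can fetch its metadata at the standard OpenStack paths; a sketch of the guest-side check:

```python
import json
import urllib.request

# Run from inside a guest on the provisioned network; 169.254.169.254 is
# the well-known metadata address this agent is setting up.
BASE = 'http://169.254.169.254/openstack/latest'
with urllib.request.urlopen(BASE + '/meta_data.json', timeout=5) as resp:
    md = json.load(resp)
print(md['uuid'], md.get('hostname'))
```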
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[324e64d3-404c-4551-a2ad-7e772598cbab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:53 compute-0 systemd-machined[210048]: New machine qemu-68-instance-0000002f.
Feb 25 12:27:53 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000002f.
Feb 25 12:27:53 compute-0 systemd-udevd[296216]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.683 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcc3188-e43f-4fcb-8e40-dbedd86edfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.687 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1745ca41-cf22-4868-9024-b2a0d691d5f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:53 compute-0 NetworkManager[49836]: <info>  [1772022473.6897] device (tapabdb97b5-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:27:53 compute-0 NetworkManager[49836]: <info>  [1772022473.6909] device (tapabdb97b5-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3988a1-67ec-409b-8ca7-7720df7ebe79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e0cf6c-3ddc-496e-9311-b7cdca772910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296224, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.752 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30a026f2-f948-4387-a1a1-8476d0169343]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296227, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296227, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
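The two privsep replies above are netlink dumps (RTM_NEWLINK/RTM_NEWADDR) taken inside the ovnmeta namespace: tap64c22162-71 carries 10.100.0.2/28 plus the metadata address 169.254.169.254/32. The agent drives this through pyroute2 under privsep; a sketch listing the same addresses directly:

```python
from pyroute2 import NetNS

# flags=0: attach to the existing namespace, do not create it.
with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', flags=0) as ns:
    for addr in ns.get_addr():
        attrs = dict(addr['attrs'])
        print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'),
              '/%d' % addr['prefixlen'])
```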
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.753 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.759 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 153 op/s
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.913 244018 DEBUG nova.network.neutron [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.938 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:53 compute-0 nova_compute[244014]: 2026-02-25 12:27:53.940 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.104 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.1039176, b8086e43-4c45-422f-a3b5-fa665c256b30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.105 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Started (Lifecycle Event)
Feb 25 12:27:54 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:27:54 compute-0 NetworkManager[49836]: <info>  [1772022474.1185] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.124 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:54 compute-0 ovn_controller[147040]: 2026-02-25T12:27:54Z|00528|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:27:54 compute-0 ovn_controller[147040]: 2026-02-25T12:27:54Z|00529|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 ovn_controller[147040]: 2026-02-25T12:27:54Z|00530|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.136 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.108122, b8086e43-4c45-422f-a3b5-fa665c256b30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.136 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Paused (Lifecycle Event)
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.136 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.138 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.141 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f99e3dcb-a9cf-4b4a-b822-3c9f6299c150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.143 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.163 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:54 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:27:54 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000035.scope: Consumed 12.334s CPU time.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.168 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
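The numeric states in this record come from nova.compute.power_state: the database still holds 4 (SHUTDOWN, left over from the shelve-offload) while libvirt briefly reports 3 (PAUSED) during the unshelve spawn, then 1 (RUNNING) once resumed below. For reference, the mapping as defined in Nova:

```python
# Values from nova.compute.power_state
STATE_MAP = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}
print(STATE_MAP[4], '->', STATE_MAP[3])  # DB: SHUTDOWN, hypervisor: PAUSED
```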
Feb 25 12:27:54 compute-0 systemd-machined[210048]: Machine qemu-66-instance-00000035 terminated.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.187 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.255 244018 DEBUG nova.compute.manager [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.255 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.256 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.257 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.257 244018 DEBUG nova.compute.manager [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Processing event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.258 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
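The network-vif-plugged event the compute manager just popped was delivered by Neutron through Nova's os-server-external-events API once OVN reported the port up. A sketch of that REST call; the endpoint URL and token are placeholders, not values from this log:

```python
import json
import urllib.request

body = {'events': [{'name': 'network-vif-plugged',
                    'server_uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30',
                    'tag': 'abdb97b5-8e9d-4929-af6f-bfb06c067878',  # port id
                    'status': 'completed'}]}
req = urllib.request.Request(
    'http://nova-api.example.com:8774/v2.1/os-server-external-events',  # placeholder endpoint
    data=json.dumps(body).encode(),
    headers={'Content-Type': 'application/json',
             'X-Auth-Token': '<keystone-token>'})  # placeholder token
urllib.request.urlopen(req)
```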
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.262 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.2612882, b8086e43-4c45-422f-a3b5-fa665c256b30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Resumed (Lifecycle Event)
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.265 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.273 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance spawned successfully.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.291 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.300 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.301 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.318 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:27:54 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : haproxy version is 2.8.14-c23fe91
Feb 25 12:27:54 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : path to executable is /usr/sbin/haproxy
Feb 25 12:27:54 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [WARNING]  (295073) : Exiting Master process...
Feb 25 12:27:54 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [ALERT]    (295073) : Current worker (295075) exited with code 143 (Terminated)
Feb 25 12:27:54 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [WARNING]  (295073) : All workers exited. Exiting... (0)
Feb 25 12:27:54 compute-0 systemd[1]: libpod-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope: Deactivated successfully.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.325 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.325 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.327 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.327 244018 DEBUG os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.330 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 podman[296291]: 2026-02-25 12:27:54.33393653 +0000 UTC m=+0.069335113 container died 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.339 244018 INFO os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.352 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.361 244018 WARNING nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8-userdata-shm.mount: Deactivated successfully.
Feb 25 12:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb72b0e246b6d9347232c841bd42065a2553e7580308af9097a9325d7f9de613-merged.mount: Deactivated successfully.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.375 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.376 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.382 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.382 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.386 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
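[annotation] The topology walk logged above is a simple constraint enumeration: with no flavor or image preferences (all "0:0:0") and the default maxima of 65536 per dimension, every sockets/cores/threads split whose product equals the vCPU count is a candidate, which for 1 vCPU leaves only 1:1:1. A minimal sketch of that enumeration, assuming product-equality is the only constraint; the function name is illustrative and this is not nova's actual nova/virt/hardware.py code, which also honors preferences and NUMA:

    # Sketch: enumerate candidate CPU topologies for a given vCPU count.
    # Assumption: a topology is valid when sockets * cores * threads == vcpus
    # and each dimension stays within its maximum.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"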
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:54 compute-0 podman[296291]: 2026-02-25 12:27:54.388246347 +0000 UTC m=+0.123644890 container cleanup 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:27:54 compute-0 systemd[1]: libpod-conmon-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope: Deactivated successfully.
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.410 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb 25 12:27:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb 25 12:27:54 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb 25 12:27:54 compute-0 podman[296326]: 2026-02-25 12:27:54.461708586 +0000 UTC m=+0.053370012 container remove 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4403e540-1701-416a-9627-6491f75bc007]: (4, ('Wed Feb 25 12:27:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8)\n32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8\nWed Feb 25 12:27:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8)\n32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.471 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9feccb1-dba4-4628-8f0c-26acefe61ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:54 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.478 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85e2afd5-14b4-4af8-917a-1db5424f52d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.511 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63789194-c637-4089-96fa-46baec28aabe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88484fda-28a6-4001-b929-6e0802811c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.528 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[571ab088-b755-4ed7-bcdd-3ad0cf903ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441672, 'reachable_time': 35801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296341, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.530 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:27:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.530 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f34e5279-982a-492e-89f2-edb8a3d97b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.942 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644554855' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:54 compute-0 nova_compute[244014]: 2026-02-25 12:27:54.973 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
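[annotation] nova shells out to the ceph CLI here to resolve monitor addresses for the rbd disks that appear in the guest XML further down. A hedged sketch of that pattern: the command line mirrors the one logged above, the "mons"/"addr" fields are standard monmap JSON, and the helper name is made up for illustration:

    import json
    import subprocess

    def get_mon_addrs(conf='/etc/ceph/ceph.conf', user='openstack'):
        # Same command as logged: ceph mon dump --format=json --id openstack --conf ...
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json', '--id', user, '--conf', conf])
        monmap = json.loads(out)
        # Each mon entry carries an address such as "192.168.122.100:6789/0";
        # strip the trailing nonce to get host:port.
        return [mon['addr'].rsplit('/', 1)[0] for mon in monmap['mons']]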
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.023 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.024 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.024 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.026 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.026 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
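[annotation] The acquire/release pairs above are oslo.concurrency locks keyed on the instance UUID (plus a companion "-events" lock), serializing concurrent per-instance operations. A minimal usage sketch under that assumption; the function below is illustrative, not nova's actual manager code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36'

    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        # Only one thread at a time may run per-instance teardown; the
        # "acquired ... waited"/"released ... held" DEBUG lines above are
        # emitted by this same lockutils wrapper.
        pass

    do_terminate_instance()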
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.029 244018 INFO nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Terminating instance
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.031 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.032 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquired lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.032 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.039 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:27:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb 25 12:27:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb 25 12:27:55 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.202 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:27:55 compute-0 ceph-mon[76335]: pgmap v1365: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 153 op/s
Feb 25 12:27:55 compute-0 ceph-mon[76335]: osdmap e224: 3 total, 3 up, 3 in
Feb 25 12:27:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3644554855' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:55 compute-0 ceph-mon[76335]: osdmap e225: 3 total, 3 up, 3 in
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.504 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.599 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:27:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210909642' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.620 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.622 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.622 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.623 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.625 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.639 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <name>instance-00000035</name>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:27:54</nova:creationTime>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 12:27:55 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <system>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </system>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <os>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </os>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <features>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </features>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:27:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <target dev="tapee46268d-74"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <video>
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </video>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:27:55 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:27:55 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:27:55 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:27:55 compute-0 nova_compute[244014]: </domain>
Feb 25 12:27:55 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
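[annotation] The domain definition above is ordinary libvirt XML, so structured facts can be pulled back out of a captured dump with the standard library alone. A short sketch, assuming the XML between <domain> and </domain> has been saved to guest.xml, that lists each rbd-backed disk with its monitor endpoint:

    import xml.etree.ElementTree as ET

    root = ET.parse('guest.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        if src is not None and src.get('protocol') == 'rbd':
            host = src.find('host')
            print(tgt.get('dev'), src.get('name'),
                  '{}:{}'.format(host.get('name'), host.get('port')))
    # vda vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk 192.168.122.100:6789
    # sda vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config 192.168.122.100:6789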
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.645 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.645 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.646 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.646 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.647 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.648 244018 DEBUG os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.650 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.651 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
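[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions issued by os-vif against the local ovsdb. A hedged reconstruction of the two-command plug transaction; the socket path is an assumption (deployments vary), and the command classes in the log correspond to the add_port/db_set API calls below:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb endpoint.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Mirrors the logged transaction: add the port (idempotently), then tag
    # the Interface row so ovn-controller can bind the logical port.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapee46268d-74', may_exist=True))
        txn.add(api.db_set('Interface', 'tapee46268d-74',
                           ('external_ids',
                            {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3',
                             'iface-status': 'active',
                             'attached-mac': 'fa:16:3e:ba:87:f1',
                             'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'})))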
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.6581] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.665 244018 INFO os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:27:55 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:27:55 compute-0 systemd-udevd[296218]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:27:55 compute-0 ovn_controller[147040]: 2026-02-25T12:27:55Z|00531|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:27:55 compute-0 ovn_controller[147040]: 2026-02-25T12:27:55Z|00532|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.7407] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.747 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.7521] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.7528] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:27:55 compute-0 ovn_controller[147040]: 2026-02-25T12:27:55Z|00533|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:27:55 compute-0 ovn_controller[147040]: 2026-02-25T12:27:55Z|00534|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
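
These ovn-controller messages complete the binding handshake: having matched the new Interface's external_ids:iface-id against a Southbound Port_Binding row requested for this chassis, it claims the logical port, marks the OVS interface ovn-installed, and flips the binding up in the Southbound DB (the up=[False] -> up transition that the metadata agent's PortBindingUpdatedEvent above reacted to). One way to check the result from Python, assuming the Southbound DB is reachable on its usual local socket (a sketch, not the agent's code):

    # Sketch: verify the Port_Binding claim with ovsdbapp's generic db_find.
    # The socket path, and the expectation that 'up' is now true, are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', 'ee46268d-740d-4ff9-8b65-4a81fc61eec3')
    ).execute(check_error=True)
    for row in rows:
        print(row['up'], row['chassis'])  # expect up once the claim lands
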
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[348a5fa6-b656-4844-963c-6046c5d26358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 nova_compute[244014]: 2026-02-25 12:27:55.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.761 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
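
Provisioning the datapath means building an isolated ovnmeta-<network-uuid> namespace joined to br-int by a veth pair: tapce318891-c1 lives inside the namespace and will answer on 169.254.169.254, while its peer tapce318891-c0 stays in the root namespace and is plugged into br-int a few lines below. The agent drives this through privsep-wrapped pyroute2 calls, roughly equivalent to this sketch (must run as root; names copied from the log):

    # Rough pyroute2 equivalent of the namespace/veth plumbing logged here;
    # a sketch, not the agent's actual privsep-decorated helpers.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
    netns.create(ns)

    ipr = IPRoute()
    ipr.link('add', ifname='tapce318891-c0', kind='veth',
             peer='tapce318891-c1')
    # Move the inner end into the namespace, then bring the outer end up.
    inner = ipr.link_lookup(ifname='tapce318891-c1')[0]
    ipr.link('set', index=inner, net_ns_fd=ns)
    outer = ipr.link_lookup(ifname='tapce318891-c0')[0]
    ipr.link('set', index=outer, state='up')
    ipr.close()
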
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf03320b-ddea-42bd-8efb-81bcf9810cd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b23570b-84ab-492b-8938-7135b98cb793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.778 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[569a3c9d-58fc-411e-8a8a-fb871ef35418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 systemd-machined[210048]: New machine qemu-69-instance-00000035.
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9d4590-6081-4f31-898a-423e33090336]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000035.
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.821 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f3bb35-c063-4aa1-8f3a-f29fd556c335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.8279] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.826 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcdde45-b0ba-4f01-b6a9-bcdf1b22b723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 8.7 MiB/s wr, 227 op/s
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.868 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd295640-5147-481b-a126-58915d030f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.871 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a62e68-9e49-4535-92a7-da4feb83bf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 NetworkManager[49836]: <info>  [1772022475.8923] device (tapce318891-c0): carrier: link connected
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.897 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d46193bf-0797-4d01-bb90-1780a23ebbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06a57f03-fea4-40ac-a65e-efcbbc9d0950]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296449, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b09eae9-74fa-4ee5-84db-223b7acdb2f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444530, 'tstamp': 444530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296450, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b1c192-8902-432b-94a3-3822aafc9e04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296451, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
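
The two large payloads above are raw rtnetlink messages (RTM_NEWLINK and RTM_NEWADDR) that the privsep daemon returns verbatim from pyroute2: the 'target' field names the ovnmeta namespace they were read from, and the IFLA_*/IFA_* attributes carry the veth's MAC, MTU, counters and link-local address. The same structures can be read directly, as in this sketch:

    # Sketch: dump links and addresses inside the metadata namespace,
    # yielding the same RTM_NEWLINK/RTM_NEWADDR messages seen in the log.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'))
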
Feb 25 12:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.978 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[836474ad-207b-47ef-85fc-a2ccf8979eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.044 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3170aa6-7286-4251-8e1a-2a6b4a22f674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.046 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:56 compute-0 NetworkManager[49836]: <info>  [1772022476.0488] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Feb 25 12:27:56 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.052 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:27:56 compute-0 ovn_controller[147040]: 2026-02-25T12:27:56Z|00535|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.057 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.058 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4732e7-35df-4d14-b3c3-c76648df6e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.059 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:27:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.060 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
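
The generated configuration has haproxy listen on 169.254.169.254:80 inside the namespace and proxy every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, adding an X-OVN-Network-ID header so the agent can tell which network the requesting instance belongs to (the earlier ENOENT on the .pid.haproxy file was only the idempotency check for an already-running proxy, hence DEBUG rather than an error). What the socket backend receives can be reproduced with a plain UNIX-socket HTTP request; a self-contained sketch:

    # Sketch: issue the kind of request haproxy forwards to the metadata
    # agent's UNIX socket. The header value is copied from the log; the
    # request path is the standard metadata API path.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        'X-OVN-Network-ID': 'ce318891-cf3c-4d99-af7c-c01770f38194'})
    print(conn.getresponse().status)
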
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.334 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.350 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Releasing lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.351 244018 DEBUG nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state None.
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.396 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.
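
These WARNINGs are benign. The hard reboot unplugs and replugs the VIF, so Neutron sends network-vif-unplugged/network-vif-plugged notifications; because the reboot path does not register a waiter for them, pop_instance_event finds nothing to complete and logs each event as unexpected. The underlying pattern is a keyed one-shot event registry, sketched here in simplified form (not Nova's actual class):

    # Simplified sketch of Nova's per-instance external-event waiters:
    # an arriving event either completes a registered waiter or is
    # reported as unexpected, exactly as in the WARNING lines above.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            with self._lock:
                self._waiters[(uuid, name)] = ev
            return ev  # the caller blocks on ev.wait(timeout)

        def pop(self, uuid, name):
            with self._lock:
                ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                print(f'Received unexpected event {name} for instance {uuid}')
            else:
                ev.set()
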
Feb 25 12:27:56 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Feb 25 12:27:56 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Consumed 14.759s CPU time.
Feb 25 12:27:56 compute-0 systemd-machined[210048]: Machine qemu-65-instance-0000003b terminated.
Feb 25 12:27:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4210909642' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:27:56 compute-0 podman[296482]: 2026-02-25 12:27:56.468786688 +0000 UTC m=+0.086636243 container create 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:27:56 compute-0 systemd[1]: Started libpod-conmon-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope.
Feb 25 12:27:56 compute-0 podman[296482]: 2026-02-25 12:27:56.41160793 +0000 UTC m=+0.029457505 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:27:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a9a945243e4aa6728c2c83d101f229caef91189a564a26a79cc76c0a932bd4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:56 compute-0 podman[296482]: 2026-02-25 12:27:56.566530085 +0000 UTC m=+0.184379690 container init 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 12:27:56 compute-0 podman[296482]: 2026-02-25 12:27:56.577107444 +0000 UTC m=+0.194957019 container start 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.581 244018 INFO nova.virt.libvirt.driver [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance destroyed successfully.
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.582 244018 DEBUG nova.objects.instance [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'resources' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:27:56 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : New worker (296562) forked
Feb 25 12:27:56 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : Loading success.
Feb 25 12:27:56 compute-0 sudo[296525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:27:56 compute-0 sudo[296525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:27:56 compute-0 sudo[296525]: pam_unix(sudo:session): session closed for user root
Feb 25 12:27:56 compute-0 sudo[296583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:27:56 compute-0 sudo[296583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.756 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.757 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022476.7564242, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.759 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.762 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.762 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.792 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.795 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.820 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.821 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022476.757529, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.821 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.965 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:27:56 compute-0 nova_compute[244014]: 2026-02-25 12:27:56.968 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
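
After each libvirt lifecycle event (Resumed, then Started) the manager re-reads the domain's power state and compares it with the database record; corrective action is taken only on a mismatch, and an in-flight task_state defers the decision to the operation that owns the instance. Here both sides report power_state 1 (RUNNING), so both syncs are no-ops. The decision, in outline (a sketch of the branch, not Nova's full method):

    # Outline of the power-state sync decision logged above.
    # Nova's convention: 1 == RUNNING, 4 == SHUTDOWN.
    RUNNING, SHUTDOWN = 1, 4

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            return 'defer'     # an operation (e.g. the reboot) owns the instance
        if db_state == vm_state:
            return 'no-op'     # the case in this log: 1 == 1
        if vm_state == SHUTDOWN:
            return 'stop'      # hypervisor says off; reflect reality via stop API
        return 'update-db'

    print(sync_power_state(1, 1, None))  # -> 'no-op'
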
Feb 25 12:27:57 compute-0 sudo[296583]: pam_unix(sudo:session): session closed for user root
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:27:57 compute-0 sudo[296658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:27:57 compute-0 sudo[296658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:27:57 compute-0 sudo[296658]: pam_unix(sudo:session): session closed for user root
Feb 25 12:27:57 compute-0 sudo[296683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:27:57 compute-0 sudo[296683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
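
In parallel with the Nova work, cephadm is preparing OSDs: the mgr ships a versioned cephadm binary under /var/lib/ceph/<fsid>/ and runs ceph-volume inside the Ceph container against pre-created LVs, with --no-systemd because cephadm manages the resulting OSD daemons itself; the minimal ceph.conf and bootstrap-osd keyring arrive on stdin via '--config-json -' (hence the 'auth get client.bootstrap-osd' and 'config generate-minimal-conf' mon commands just above). A sketch of replaying the logged invocation from Python; the --env/--image flags are omitted and the config payload is deliberately left empty:

    # Sketch: replay the cephadm ceph-volume call from the sudo line above.
    # Requires root; config_json stands in for the mon config and
    # bootstrap-osd keyring that the mgr really sends on stdin.
    import json
    import subprocess

    config_json = json.dumps({})  # placeholder; real content comes from the mgr

    subprocess.run(
        ['/bin/python3',
         '/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
         'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
         '--timeout', '895', 'ceph-volume',
         '--fsid', '8ac33163-6221-5d58-9a39-8b6933fe7762',
         '--config-json', '-', '--',
         'lvm', 'batch', '--no-auto',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2',
         '--objectstore', 'bluestore', '--yes', '--no-systemd'],
        input=config_json.encode(), check=True)
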
Feb 25 12:27:57 compute-0 ceph-mon[76335]: pgmap v1368: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 8.7 MiB/s wr, 227 op/s
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:27:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.690 244018 INFO nova.virt.libvirt.driver [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deleting instance files /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_del
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.691 244018 INFO nova.virt.libvirt.driver [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deletion of /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_del complete
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.777825536 +0000 UTC m=+0.054996748 container create a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:27:57 compute-0 systemd[1]: Started libpod-conmon-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope.
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.743724331 +0000 UTC m=+0.020895583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:27:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.3 MiB/s wr, 426 op/s
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.854 244018 INFO nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 1.50 seconds to destroy the instance on the hypervisor.
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG oslo.service.loopingcall [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:27:57 compute-0 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.866163946 +0000 UTC m=+0.143335248 container init a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.873516614 +0000 UTC m=+0.150687866 container start a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.877932299 +0000 UTC m=+0.155103541 container attach a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 12:27:57 compute-0 interesting_khorana[296738]: 167 167
Feb 25 12:27:57 compute-0 systemd[1]: libpod-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope: Deactivated successfully.
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.881230902 +0000 UTC m=+0.158402144 container died a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-adaa34dcf965cf40c66b6085bb41edaaf98b0324a3f5cee74187a774bc175bcb-merged.mount: Deactivated successfully.
Feb 25 12:27:57 compute-0 podman[296722]: 2026-02-25 12:27:57.927370858 +0000 UTC m=+0.204542080 container remove a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:27:57 compute-0 systemd[1]: libpod-conmon-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope: Deactivated successfully.
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.045 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.061 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.081 244018 INFO nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 0.23 seconds to deallocate network for instance.
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.112822027 +0000 UTC m=+0.058485006 container create 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.124 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.126 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:58 compute-0 systemd[1]: Started libpod-conmon-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope.
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.092279525 +0000 UTC m=+0.037942484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:27:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.250242976 +0000 UTC m=+0.195905985 container init 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.266367393 +0000 UTC m=+0.212030372 container start 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.270825229 +0000 UTC m=+0.216488208 container attach 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.283 244018 DEBUG oslo_concurrency.processutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.642 244018 DEBUG nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.644 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.644 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.645 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.645 244018 DEBUG nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.646 244018 WARNING nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:27:58 compute-0 admiring_satoshi[296777]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:27:58 compute-0 admiring_satoshi[296777]: --> All data devices are unavailable
Feb 25 12:27:58 compute-0 systemd[1]: libpod-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope: Deactivated successfully.
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.790261848 +0000 UTC m=+0.735924827 container died 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:27:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:27:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167045000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007-merged.mount: Deactivated successfully.
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.840 244018 DEBUG oslo_concurrency.processutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:27:58 compute-0 podman[296761]: 2026-02-25 12:27:58.843873316 +0000 UTC m=+0.789536295 container remove 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:27:58 compute-0 systemd[1]: libpod-conmon-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope: Deactivated successfully.
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.879 244018 DEBUG nova.compute.provider_tree [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:27:58 compute-0 sudo[296683]: pam_unix(sudo:session): session closed for user root
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.902 244018 DEBUG nova.scheduler.client.report [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:27:58 compute-0 sudo[296830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:27:58 compute-0 sudo[296830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:27:58 compute-0 sudo[296830]: pam_unix(sudo:session): session closed for user root
Feb 25 12:27:58 compute-0 nova_compute[244014]: 2026-02-25 12:27:58.944 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:58 compute-0 sudo[296855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:27:59 compute-0 sudo[296855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.025 244018 INFO nova.scheduler.client.report [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Deleted allocations for instance bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.091 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.303574906 +0000 UTC m=+0.054359990 container create 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:27:59 compute-0 systemd[1]: Started libpod-conmon-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope.
Feb 25 12:27:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.36521516 +0000 UTC m=+0.116000254 container init 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.372766934 +0000 UTC m=+0.123552008 container start 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.277883259 +0000 UTC m=+0.028668383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.376031196 +0000 UTC m=+0.126816290 container attach 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:27:59 compute-0 amazing_khayyam[296908]: 167 167
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.379638799 +0000 UTC m=+0.130423913 container died 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:27:59 compute-0 systemd[1]: libpod-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope: Deactivated successfully.
Feb 25 12:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aab89e1ce8e4f742a7ee84231cf2312625904772350d5335b2c5022bd9716c0-merged.mount: Deactivated successfully.
Feb 25 12:27:59 compute-0 podman[296892]: 2026-02-25 12:27:59.418703674 +0000 UTC m=+0.169488758 container remove 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:27:59 compute-0 systemd[1]: libpod-conmon-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope: Deactivated successfully.
Feb 25 12:27:59 compute-0 ceph-mon[76335]: pgmap v1369: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.3 MiB/s wr, 426 op/s
Feb 25 12:27:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1167045000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:27:59 compute-0 podman[296931]: 2026-02-25 12:27:59.590208168 +0000 UTC m=+0.054532674 container create 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:27:59 compute-0 systemd[1]: Started libpod-conmon-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope.
Feb 25 12:27:59 compute-0 podman[296931]: 2026-02-25 12:27:59.566417465 +0000 UTC m=+0.030742041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:27:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:27:59 compute-0 podman[296931]: 2026-02-25 12:27:59.708714662 +0000 UTC m=+0.173039198 container init 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:59 compute-0 podman[296931]: 2026-02-25 12:27:59.716504602 +0000 UTC m=+0.180829128 container start 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:27:59 compute-0 podman[296931]: 2026-02-25 12:27:59.720612979 +0000 UTC m=+0.184937485 container attach 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:27:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 52 KiB/s wr, 300 op/s
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.898 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.899 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.900 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.900 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.901 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.902 244018 INFO nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Terminating instance
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquired lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:27:59 compute-0 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:00 compute-0 great_jepsen[296947]: {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     "0": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "devices": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "/dev/loop3"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             ],
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_name": "ceph_lv0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_size": "21470642176",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "name": "ceph_lv0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "tags": {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_name": "ceph",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.crush_device_class": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.encrypted": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.objectstore": "bluestore",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_id": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.vdo": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.with_tpm": "0"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             },
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "vg_name": "ceph_vg0"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         }
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     ],
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     "1": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "devices": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "/dev/loop4"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             ],
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_name": "ceph_lv1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_size": "21470642176",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "name": "ceph_lv1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "tags": {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_name": "ceph",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.crush_device_class": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.encrypted": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.objectstore": "bluestore",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_id": "1",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.vdo": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.with_tpm": "0"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             },
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "vg_name": "ceph_vg1"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         }
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     ],
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     "2": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "devices": [
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "/dev/loop5"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             ],
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_name": "ceph_lv2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_size": "21470642176",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "name": "ceph_lv2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "tags": {
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.cluster_name": "ceph",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.crush_device_class": "",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.encrypted": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.objectstore": "bluestore",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osd_id": "2",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.vdo": "0",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:                 "ceph.with_tpm": "0"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             },
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "type": "block",
Feb 25 12:28:00 compute-0 great_jepsen[296947]:             "vg_name": "ceph_vg2"
Feb 25 12:28:00 compute-0 great_jepsen[296947]:         }
Feb 25 12:28:00 compute-0 great_jepsen[296947]:     ]
Feb 25 12:28:00 compute-0 great_jepsen[296947]: }
Feb 25 12:28:00 compute-0 systemd[1]: libpod-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope: Deactivated successfully.
Feb 25 12:28:00 compute-0 podman[296931]: 2026-02-25 12:28:00.050815684 +0000 UTC m=+0.515140170 container died 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0-merged.mount: Deactivated successfully.
Feb 25 12:28:00 compute-0 podman[296931]: 2026-02-25 12:28:00.086095072 +0000 UTC m=+0.550419548 container remove 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.098 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:00 compute-0 sudo[296855]: pam_unix(sudo:session): session closed for user root
Feb 25 12:28:00 compute-0 systemd[1]: libpod-conmon-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope: Deactivated successfully.
Feb 25 12:28:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb 25 12:28:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb 25 12:28:00 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb 25 12:28:00 compute-0 sudo[296968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:28:00 compute-0 sudo[296968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:28:00 compute-0 sudo[296968]: pam_unix(sudo:session): session closed for user root
Feb 25 12:28:00 compute-0 sudo[296993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:28:00 compute-0 sudo[296993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.463 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.480 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Releasing lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.481 244018 DEBUG nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:28:00 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 25 12:28:00 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Consumed 14.289s CPU time.
Feb 25 12:28:00 compute-0 systemd-machined[210048]: Machine qemu-64-instance-0000003a terminated.
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.581524704 +0000 UTC m=+0.067857012 container create 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 12:28:00 compute-0 systemd[1]: Started libpod-conmon-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope.
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.547271254 +0000 UTC m=+0.033603572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:28:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.66018698 +0000 UTC m=+0.146519288 container init 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.669049931 +0000 UTC m=+0.155382209 container start 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.67288791 +0000 UTC m=+0.159220268 container attach 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:28:00 compute-0 priceless_shamir[297046]: 167 167
Feb 25 12:28:00 compute-0 systemd[1]: libpod-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope: Deactivated successfully.
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.675104712 +0000 UTC m=+0.161437020 container died 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.701 244018 INFO nova.virt.libvirt.driver [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance destroyed successfully.
Feb 25 12:28:00 compute-0 nova_compute[244014]: 2026-02-25 12:28:00.701 244018 DEBUG nova.objects.instance [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'resources' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86e46fc80c4633abd3ba6dff11910203d7188ae0affae4cb7626efce0140231-merged.mount: Deactivated successfully.
Feb 25 12:28:00 compute-0 podman[297031]: 2026-02-25 12:28:00.73757087 +0000 UTC m=+0.223903148 container remove 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:28:00 compute-0 systemd[1]: libpod-conmon-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope: Deactivated successfully.
Feb 25 12:28:00 compute-0 podman[297092]: 2026-02-25 12:28:00.961858558 +0000 UTC m=+0.068338805 container create 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:28:01 compute-0 systemd[1]: Started libpod-conmon-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope.
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:00.938428465 +0000 UTC m=+0.044908802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:28:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:01.070814141 +0000 UTC m=+0.177294408 container init 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:01.080603368 +0000 UTC m=+0.187083615 container start 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:01.084079407 +0000 UTC m=+0.190559664 container attach 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.116 244018 INFO nova.virt.libvirt.driver [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deleting instance files /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df_del
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.117 244018 INFO nova.virt.libvirt.driver [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deletion of /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df_del complete
Feb 25 12:28:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb 25 12:28:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb 25 12:28:01 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb 25 12:28:01 compute-0 ceph-mon[76335]: pgmap v1370: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 52 KiB/s wr, 300 op/s
Feb 25 12:28:01 compute-0 ceph-mon[76335]: osdmap e226: 3 total, 3 up, 3 in
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.201 244018 INFO nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG oslo.service.loopingcall [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.579 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.595 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.613 244018 INFO nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 0.41 seconds to deallocate network for instance.
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.667 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.667 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:01 compute-0 podman[297174]: 2026-02-25 12:28:01.726160599 +0000 UTC m=+0.074549321 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:28:01 compute-0 lvm[297222]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:28:01 compute-0 lvm[297222]: VG ceph_vg0 finished
Feb 25 12:28:01 compute-0 lvm[297228]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:28:01 compute-0 lvm[297228]: VG ceph_vg1 finished
Feb 25 12:28:01 compute-0 podman[297178]: 2026-02-25 12:28:01.748306055 +0000 UTC m=+0.093903878 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 25 12:28:01 compute-0 lvm[297230]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:28:01 compute-0 lvm[297230]: VG ceph_vg2 finished
Feb 25 12:28:01 compute-0 nova_compute[244014]: 2026-02-25 12:28:01.767 244018 DEBUG oslo_concurrency.processutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:01 compute-0 nervous_thompson[297111]: {}
Feb 25 12:28:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 451 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 54 KiB/s wr, 458 op/s
Feb 25 12:28:01 compute-0 systemd[1]: libpod-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Deactivated successfully.
Feb 25 12:28:01 compute-0 systemd[1]: libpod-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Consumed 1.078s CPU time.
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:01.865981386 +0000 UTC m=+0.972461633 container died 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9-merged.mount: Deactivated successfully.
Feb 25 12:28:01 compute-0 podman[297092]: 2026-02-25 12:28:01.902473229 +0000 UTC m=+1.008953476 container remove 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:28:01 compute-0 systemd[1]: libpod-conmon-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Deactivated successfully.
Feb 25 12:28:01 compute-0 sudo[296993]: pam_unix(sudo:session): session closed for user root
Feb 25 12:28:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:28:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:28:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:28:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:28:02 compute-0 sudo[297263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:28:02 compute-0 sudo[297263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:28:02 compute-0 sudo[297263]: pam_unix(sudo:session): session closed for user root
Feb 25 12:28:02 compute-0 ceph-mon[76335]: osdmap e227: 3 total, 3 up, 3 in
Feb 25 12:28:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:28:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:28:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538028854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.363 244018 DEBUG oslo_concurrency.processutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.372 244018 DEBUG nova.compute.provider_tree [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.398 244018 DEBUG nova.scheduler.client.report [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.422 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.462 244018 INFO nova.scheduler.client.report [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Deleted allocations for instance 160a4e3f-b197-4b82-a2ff-cebf79df47df
Feb 25 12:28:02 compute-0 nova_compute[244014]: 2026-02-25 12:28:02.541 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:03 compute-0 ceph-mon[76335]: pgmap v1373: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 451 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 54 KiB/s wr, 458 op/s
Feb 25 12:28:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1538028854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.391 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.392 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.392 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.393 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.393 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.395 244018 INFO nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Terminating instance
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.397 244018 DEBUG nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:28:03 compute-0 kernel: tap69011b0a-5a (unregistering): left promiscuous mode
Feb 25 12:28:03 compute-0 NetworkManager[49836]: <info>  [1772022483.4530] device (tap69011b0a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:28:03 compute-0 ovn_controller[147040]: 2026-02-25T12:28:03Z|00536|binding|INFO|Releasing lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae from this chassis (sb_readonly=0)
Feb 25 12:28:03 compute-0 ovn_controller[147040]: 2026-02-25T12:28:03Z|00537|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae down in Southbound
Feb 25 12:28:03 compute-0 ovn_controller[147040]: 2026-02-25T12:28:03Z|00538|binding|INFO|Removing iface tap69011b0a-5a ovn-installed in OVS
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.478 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0d:06 10.100.0.5'], port_security=['fa:16:3e:df:0d:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '43b8959e-9cf0-42ca-aa1f-8a380321c971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69011b0a-5af7-4bef-a14c-8d83e63e08ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69011b0a-5af7-4bef-a14c-8d83e63e08ae in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.486 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d62bb3a4-08eb-46fe-a778-8d5e3917fae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Deactivated successfully.
Feb 25 12:28:03 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Consumed 14.469s CPU time.
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.518 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[135c072a-b4e2-4313-ac88-27ba041a814f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 systemd-machined[210048]: Machine qemu-60-instance-00000037 terminated.
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[efe1cdba-33f7-4740-85b8-bb3041362f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.548 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03962804-dd0a-4033-bd3e-fc362e0e53c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.565 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6341d750-2d8c-4705-9626-30692ad3e1e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297299, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.577 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30cac6ed-d47f-42f7-b982-1aabc9c75ea8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297300, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297300, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.579 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.586 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.587 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.632 244018 INFO nova.virt.libvirt.driver [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance destroyed successfully.
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.632 244018 DEBUG nova.objects.instance [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.648 244018 DEBUG nova.virt.libvirt.vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:37Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.649 244018 DEBUG nova.network.os_vif_util [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.649 244018 DEBUG nova.network.os_vif_util [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.650 244018 DEBUG os_vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.652 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69011b0a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.653 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.660 244018 INFO os_vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a')
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.739 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.741 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.749 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.750 244018 INFO nova.compute.claims [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:28:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 46 KiB/s wr, 436 op/s
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.906 244018 INFO nova.virt.libvirt.driver [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deleting instance files /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971_del
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.907 244018 INFO nova.virt.libvirt.driver [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deletion of /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971_del complete
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.914 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.915 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.948 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.948 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.949 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.949 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.950 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.950 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:28:03 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.952 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:03.999 244018 INFO nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.001 244018 DEBUG oslo.service.loopingcall [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.001 244018 DEBUG nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.002 244018 DEBUG nova.network.neutron [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:28:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb 25 12:28:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.188 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.193 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.194 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:28:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846933071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.534 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.541 244018 DEBUG nova.compute.provider_tree [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.855 244018 DEBUG nova.scheduler.client.report [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.899 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.900 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.981 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:28:04 compute-0 nova_compute[244014]: 2026-02-25 12:28:04.982 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.013 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.041 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:28:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:05 compute-0 ceph-mon[76335]: pgmap v1374: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 46 KiB/s wr, 436 op/s
Feb 25 12:28:05 compute-0 ceph-mon[76335]: osdmap e228: 3 total, 3 up, 3 in
Feb 25 12:28:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2846933071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.387 244018 DEBUG nova.policy [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '322943d9098b419f8d2d573cc6b7fe8e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.398 244018 DEBUG nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-deleted-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.399 244018 INFO nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Neutron deleted interface 69011b0a-5af7-4bef-a14c-8d83e63e08ae; detaching it from the instance and deleting it from the info cache
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.399 244018 DEBUG nova.network.neutron [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.401 244018 DEBUG nova.network.neutron [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.678 244018 INFO nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 1.68 seconds to deallocate network for instance.
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.680 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.682 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.683 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating image(s)
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.713 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.743 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.775 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.779 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.824 244018 DEBUG nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Detach interface failed, port_id=69011b0a-5af7-4bef-a14c-8d83e63e08ae, reason: Instance 43b8959e-9cf0-42ca-aa1f-8a380321c971 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:28:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 KiB/s wr, 259 op/s
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.857 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.858 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.859 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.860 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.891 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:05 compute-0 nova_compute[244014]: 2026-02-25 12:28:05.896 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 caff6378-2d93-4b73-8d58-da0e74a6d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.050 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.051 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.209 244018 DEBUG nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.211 244018 DEBUG nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.211 244018 WARNING nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received unexpected event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with vm_state deleted and task_state None.
Feb 25 12:28:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb 25 12:28:06 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.231 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 caff6378-2d93-4b73-8d58-da0e74a6d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.304 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] resizing rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.512 244018 DEBUG nova.objects.instance [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'migration_context' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.655 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Successfully created port: 21ae71a3-b142-47f7-a2b0-b62a7625a31d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:28:06 compute-0 ovn_controller[147040]: 2026-02-25T12:28:06Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:53:87 10.100.0.6
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.707 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Ensure instance console log exists: /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:06 compute-0 nova_compute[244014]: 2026-02-25 12:28:06.709 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:07 compute-0 nova_compute[244014]: 2026-02-25 12:28:07.084 244018 DEBUG oslo_concurrency.processutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:07 compute-0 ceph-mon[76335]: pgmap v1376: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 KiB/s wr, 259 op/s
Feb 25 12:28:07 compute-0 ceph-mon[76335]: osdmap e229: 3 total, 3 up, 3 in
Feb 25 12:28:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125683448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:07 compute-0 nova_compute[244014]: 2026-02-25 12:28:07.625 244018 DEBUG oslo_concurrency.processutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:07 compute-0 nova_compute[244014]: 2026-02-25 12:28:07.633 244018 DEBUG nova.compute.provider_tree [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:07 compute-0 nova_compute[244014]: 2026-02-25 12:28:07.637 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 475 op/s
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.496 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Successfully updated port: 21ae71a3-b142-47f7-a2b0-b62a7625a31d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:28:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1125683448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.502 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.502 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG nova.compute.manager [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-changed-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG nova.compute.manager [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Refreshing instance network info cache due to event network-changed-21ae71a3-b142-47f7-a2b0-b62a7625a31d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.505 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.505 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Refreshing network info cache for port 21ae71a3-b142-47f7-a2b0-b62a7625a31d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.507 244018 DEBUG nova.scheduler.client.report [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.512 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.514 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.517 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.556 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.558 244018 WARNING nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] While synchronizing instance power states, found 4 instances in the database and 2 instances on the hypervisor.
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid b8086e43-4c45-422f-a3b5-fa665c256b30 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid caff6378-2d93-4b73-8d58-da0e74a6d46e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.582 244018 INFO nova.scheduler.client.report [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance 43b8959e-9cf0-42ca-aa1f-8a380321c971
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.604 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.608 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.669 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:08 compute-0 nova_compute[244014]: 2026-02-25 12:28:08.924 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.046 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:09 compute-0 ovn_controller[147040]: 2026-02-25T12:28:09Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.069 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.070 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquired lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.070 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.263 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.496 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.497 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.497 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.498 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.498 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.500 244018 INFO nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Terminating instance
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.501 244018 DEBUG nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:28:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb 25 12:28:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb 25 12:28:09 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb 25 12:28:09 compute-0 ceph-mon[76335]: pgmap v1378: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 475 op/s
Feb 25 12:28:09 compute-0 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 12:28:09 compute-0 NetworkManager[49836]: <info>  [1772022489.5640] device (tapabdb97b5-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:28:09 compute-0 ovn_controller[147040]: 2026-02-25T12:28:09Z|00539|binding|INFO|Releasing lport abdb97b5-8e9d-4929-af6f-bfb06c067878 from this chassis (sb_readonly=0)
Feb 25 12:28:09 compute-0 ovn_controller[147040]: 2026-02-25T12:28:09Z|00540|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down in Southbound
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 ovn_controller[147040]: 2026-02-25T12:28:09Z|00541|binding|INFO|Removing iface tapabdb97b5-8e ovn-installed in OVS
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.582 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '9', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.586 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.589 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64c22162-7e15-45de-8fd2-8c9a24f27006, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.590 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e932f6b-21b0-4193-a016-a1b96312d96a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 namespace which is not needed anymore
Feb 25 12:28:09 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Feb 25 12:28:09 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000002f.scope: Consumed 11.788s CPU time.
Feb 25 12:28:09 compute-0 systemd-machined[210048]: Machine qemu-68-instance-0000002f terminated.
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.734 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.734 244018 DEBUG nova.objects.instance [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.752 244018 DEBUG nova.virt.libvirt.vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:27:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.753 244018 DEBUG nova.network.os_vif_util [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.754 244018 DEBUG nova.network.os_vif_util [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.755 244018 DEBUG os_vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.757 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabdb97b5-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
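The DelPortCommand/do_commit pair above is ovsdbapp's transaction API talking to the local Open vSwitch database; the POLLIN wakeups are the IDL noticing the server's reply. A minimal standalone sketch of the same delete, assuming the stock unix-socket endpoint and a 5-second timeout (both assumptions, not values read from this host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (endpoint is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    # Same shape as the logged txn:
    # DelPortCommand(port=tapabdb97b5-8e, bridge=br-int, if_exists=True)
    api.del_port('tapabdb97b5-8e', bridge='br-int',
                 if_exists=True).execute(check_error=True)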
Feb 25 12:28:09 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : haproxy version is 2.8.14-c23fe91
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.765 244018 INFO os_vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')
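Nova reaches that "Successfully unplugged" line through os-vif's small public surface: initialize once, then hand unplug() a VIF object plus an InstanceInfo. A hedged sketch of that call path, with the VIF trimmed to the fields visible in the log line above (Nova populates more, including the network object):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    vif = VIFOpenVSwitch(id='abdb97b5-8e9d-4929-af6f-bfb06c067878',
                         address='fa:16:3e:f8:53:87',
                         bridge_name='br-int',
                         vif_name='tapabdb97b5-8e')
    # The instance name below is an assumption for illustration.
    inst = InstanceInfo(uuid='b8086e43-4c45-422f-a3b5-fa665c256b30',
                        name='instance-0000002f')

    os_vif.unplug(vif, inst)  # success produces the INFO line above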
Feb 25 12:28:09 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : path to executable is /usr/sbin/haproxy
Feb 25 12:28:09 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [WARNING]  (285484) : Exiting Master process...
Feb 25 12:28:09 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [ALERT]    (285484) : Current worker (285486) exited with code 143 (Terminated)
Feb 25 12:28:09 compute-0 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [WARNING]  (285484) : All workers exited. Exiting... (0)
Feb 25 12:28:09 compute-0 systemd[1]: libpod-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope: Deactivated successfully.
Feb 25 12:28:09 compute-0 podman[297567]: 2026-02-25 12:28:09.776940975 +0000 UTC m=+0.069736565 container died 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174-userdata-shm.mount: Deactivated successfully.
Feb 25 12:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee9ea23fb31ef780d3141b9da7996fc6467cb5ff171d46fe3017a00206862844-merged.mount: Deactivated successfully.
Feb 25 12:28:09 compute-0 podman[297567]: 2026-02-25 12:28:09.827884887 +0000 UTC m=+0.120680487 container cleanup 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:28:09 compute-0 systemd[1]: libpod-conmon-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope: Deactivated successfully.
Feb 25 12:28:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 270 op/s
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:09 compute-0 podman[297623]: 2026-02-25 12:28:09.900243055 +0000 UTC m=+0.046290401 container remove 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.908 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6962df76-db3f-4ac6-a252-d4887b5a2ec2]: (4, ('Wed Feb 25 12:28:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 (61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174)\n61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174\nWed Feb 25 12:28:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 (61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174)\n61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.912 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5201f07-6677-4689-8fd2-0f6ec6844143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
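The privsep reply above captures the agent shelling out to podman; the container "died", "cleanup" and "remove" events nearby are the visible side effects. Reduced to a runnable sketch (calling subprocess directly is an illustrative assumption; the agent routes this through a privileged helper script):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006'

    # SIGTERM the haproxy master; the worker exits 143, matching the ALERT above.
    subprocess.run(['podman', 'stop', name], check=True)
    # Remove the stopped container; podman then emits the 'remove' event.
    subprocess.run(['podman', 'rm', name], check=True)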
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.914 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 kernel: tap64c22162-70: left promiscuous mode
Feb 25 12:28:09 compute-0 nova_compute[244014]: 2026-02-25 12:28:09.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67a0879d-fee6-403f-be5e-e69ee7a03675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4131b8f2-1482-4a72-a752-db705548f1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[80caa983-2a27-48a8-81d2-d73536b797d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[745d7ba7-1893-4198-b781-c662f979ced6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428688, 'reachable_time': 18847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297638, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d64c22162\x2d7e15\x2d45de\x2d8fd2\x2d8c9a24f27006.mount: Deactivated successfully.
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.973 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:28:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.973 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3b10c0-fc53-4477-8669-ef5aec32d128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
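remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2: once the tap device is pulled out, the ovnmeta namespace is unlinked and systemd reaps its bind mount (the run-netns .mount unit above). The equivalent standalone call, assuming root privileges and pyroute2 installed:

    from pyroute2 import netns

    # Unlinks /var/run/netns/<name>; requires CAP_SYS_ADMIN.
    netns.remove('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006')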
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.137 244018 INFO nova.virt.libvirt.driver [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting instance files /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.138 244018 INFO nova.virt.libvirt.driver [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deletion of /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del complete
Feb 25 12:28:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb 25 12:28:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb 25 12:28:10 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.218 244018 INFO nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.219 244018 DEBUG oslo.service.loopingcall [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.220 244018 DEBUG nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.220 244018 DEBUG nova.network.neutron [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.344 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.345 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.346 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.346 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.347 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.348 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.348 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.349 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.349 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.350 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.350 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.351 244018 WARNING nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state deleting.
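The acquire/release bracketing around pop_instance_event above is oslo.concurrency's named-lock pattern: one lock per instance event queue, so externally delivered Neutron events cannot race with waiters inside the compute manager. A minimal sketch of the shape, with the body standing in for Nova's _pop_event (this is not Nova's code):

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Same lock-name scheme as the log: "<instance uuid>-events".
        with lockutils.lock(f'{instance_uuid}-events'):
            # Look up a registered waiter for event_name; when none exists,
            # returning None yields the "No waiting events found" debug line.
            return None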
Feb 25 12:28:10 compute-0 ceph-mon[76335]: osdmap e230: 3 total, 3 up, 3 in
Feb 25 12:28:10 compute-0 ceph-mon[76335]: osdmap e231: 3 total, 3 up, 3 in
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:10 compute-0 nova_compute[244014]: 2026-02-25 12:28:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.054 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.054 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.055 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.055 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.056 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
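That "Running cmd (subprocess)" line is oslo.concurrency's processutils wrapper, which also logs the matching "returned: 0 in 0.529s" completion a few lines below. The same invocation, sketched:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero rc.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')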
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.105 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.132 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Releasing lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.133 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance network_info: |[{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.135 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start _get_guest_xml network_info=[{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.140 244018 WARNING nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.147 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.147 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.151 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.151 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
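The topology walk above (limits 65536:65536:65536, preferences 0:0:0, one vCPU) boils down to enumerating factorizations of the vCPU count into sockets x cores x threads under those caps. A toy re-derivation of the logged result, not Nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield s, c, t

    # One vCPU admits exactly one topology, matching "Got 1 possible
    # topologies" and VirtCPUTopology(cores=1,sockets=1,threads=1):
    print(list(possible_topologies(1)))  # [(1, 1, 1)]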
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.157 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247859297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.577 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022476.5753376, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.577 244018 INFO nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Stopped (Lifecycle Event)
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.585 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.596 244018 DEBUG nova.compute.manager [None req-7aa3b82a-cf7c-4506-8750-c047a9e57d1d - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:11 compute-0 ceph-mon[76335]: pgmap v1380: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 270 op/s
Feb 25 12:28:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4247859297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:28:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1376598932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.730 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.763 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.766 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 306 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 444 op/s
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.890 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.891 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=59.87575867306441GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance caff6378-2d93-4b73-8d58-da0e74a6d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:28:11 compute-0 nova_compute[244014]: 2026-02-25 12:28:11.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.037 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:12.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:12.263 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97689290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.290 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.292 244018 DEBUG nova.virt.libvirt.vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:05Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.293 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.294 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.296 244018 DEBUG nova.objects.instance [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'pci_devices' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.314 244018 DEBUG nova.network.neutron [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.319 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <uuid>caff6378-2d93-4b73-8d58-da0e74a6d46e</uuid>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <name>instance-0000003d</name>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-409359068</nova:name>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:28:11</nova:creationTime>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:user uuid="322943d9098b419f8d2d573cc6b7fe8e">tempest-ServerAddressesNegativeTestJSON-512544112-project-member</nova:user>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:project uuid="fbfdd47eb32e4d4aadc46f464899cf16">tempest-ServerAddressesNegativeTestJSON-512544112</nova:project>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <nova:port uuid="21ae71a3-b142-47f7-a2b0-b62a7625a31d">
Feb 25 12:28:12 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <system>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="serial">caff6378-2d93-4b73-8d58-da0e74a6d46e</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="uuid">caff6378-2d93-4b73-8d58-da0e74a6d46e</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </system>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <os>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </os>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <features>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </features>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/caff6378-2d93-4b73-8d58-da0e74a6d46e_disk">
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config">
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:12 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a1:6b:aa"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <target dev="tap21ae71a3-b1"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/console.log" append="off"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <video>
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </video>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:28:12 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:28:12 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:28:12 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:28:12 compute-0 nova_compute[244014]: </domain>
Feb 25 12:28:12 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Preparing to wait for external event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.321 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.321 244018 DEBUG nova.virt.libvirt.vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:05Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.322 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.322 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.323 244018 DEBUG os_vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.324 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.324 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.327 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.328 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ae71a3-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.328 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21ae71a3-b1, col_values=(('external_ids', {'iface-id': '21ae71a3-b142-47f7-a2b0-b62a7625a31d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:6b:aa', 'vm-uuid': 'caff6378-2d93-4b73-8d58-da0e74a6d46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:12 compute-0 NetworkManager[49836]: <info>  [1772022492.3305] manager: (tap21ae71a3-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.336 244018 INFO os_vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1')
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.342 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 2.12 seconds to deallocate network for instance.
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.413 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.419 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.420 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.420 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No VIF found with MAC fa:16:3e:a1:6b:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.421 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Using config drive
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.455 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.496 244018 DEBUG nova.compute.manager [req-e6b9ebf6-69fd-4508-8b1a-155e39e8fa11 req-3dc83c4d-f50f-45bd-ad2c-1b9340969744 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-deleted-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170653208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.636 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.643 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.662 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1376598932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:12 compute-0 ceph-mon[76335]: pgmap v1382: 305 pgs: 305 active+clean; 306 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 444 op/s
Feb 25 12:28:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/97689290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4170653208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.691 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.692 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.693 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:12 compute-0 nova_compute[244014]: 2026-02-25 12:28:12.783 244018 DEBUG oslo_concurrency.processutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.333 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating config drive at /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config
Feb 25 12:28:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048601496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.341 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3609w17g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.365 244018 DEBUG oslo_concurrency.processutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.372 244018 DEBUG nova.compute.provider_tree [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.385 244018 DEBUG nova.scheduler.client.report [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.406 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.427 244018 INFO nova.scheduler.client.report [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance b8086e43-4c45-422f-a3b5-fa665c256b30
Feb 25 12:28:13 compute-0 nova_compute[244014]: 2026-02-25 12:28:13.471 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3609w17g" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 374 op/s
Feb 25 12:28:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1048601496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.545 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.548 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.579 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.582 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.584 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.696 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.697 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deleting local config drive /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config because it was imported into RBD.
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:14 compute-0 kernel: tap21ae71a3-b1: entered promiscuous mode
Feb 25 12:28:14 compute-0 NetworkManager[49836]: <info>  [1772022494.7502] manager: (tap21ae71a3-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Feb 25 12:28:14 compute-0 ovn_controller[147040]: 2026-02-25T12:28:14Z|00542|binding|INFO|Claiming lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d for this chassis.
Feb 25 12:28:14 compute-0 ovn_controller[147040]: 2026-02-25T12:28:14Z|00543|binding|INFO|21ae71a3-b142-47f7-a2b0-b62a7625a31d: Claiming fa:16:3e:a1:6b:aa 10.100.0.7
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.762 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:6b:aa 10.100.0.7'], port_security=['fa:16:3e:a1:6b:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'caff6378-2d93-4b73-8d58-da0e74a6d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd427256-862c-42a4-8ee4-681f0377401d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61b17d81-cbe9-4d4b-9f04-df6306382ea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48a62b60-46ed-47ca-bd9c-153a4a8b3bce, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=21ae71a3-b142-47f7-a2b0-b62a7625a31d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 21ae71a3-b142-47f7-a2b0-b62a7625a31d in datapath fd427256-862c-42a4-8ee4-681f0377401d bound to our chassis
Feb 25 12:28:14 compute-0 ovn_controller[147040]: 2026-02-25T12:28:14Z|00544|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d ovn-installed in OVS
Feb 25 12:28:14 compute-0 ovn_controller[147040]: 2026-02-25T12:28:14Z|00545|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d up in Southbound
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.766 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd427256-862c-42a4-8ee4-681f0377401d
Feb 25 12:28:14 compute-0 nova_compute[244014]: 2026-02-25 12:28:14.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1715bd7a-b995-4f87-91fa-f2506d604339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.780 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd427256-81 in ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.782 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd427256-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.782 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8524f905-c4ca-4f74-b367-3f86c8721df3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.784 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01f9dc86-1555-4377-880d-9ce30da19f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 systemd-udevd[297843]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:14 compute-0 systemd-machined[210048]: New machine qemu-70-instance-0000003d.
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.796 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[02d754bb-8371-4cab-8895-317843421fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 NetworkManager[49836]: <info>  [1772022494.8038] device (tap21ae71a3-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:28:14 compute-0 NetworkManager[49836]: <info>  [1772022494.8045] device (tap21ae71a3-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:28:14 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.811 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa268d2-58d4-4bc9-8c48-8326faa19c2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.841 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a00817b-37dd-43f9-a597-2fc4b8234c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acc8a9ff-6155-4c05-85d4-18193ae86794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 NetworkManager[49836]: <info>  [1772022494.8496] manager: (tapfd427256-80): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.879 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[50fb0ac2-bc68-48a4-a96b-76db49b851f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.883 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a88aeef-7977-4853-8227-d06224cc011b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 NetworkManager[49836]: <info>  [1772022494.9043] device (tapfd427256-80): carrier: link connected
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.907 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b73af8-095a-4ec0-99ea-f4f4943cb467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.925 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4920f20-13fc-40c2-ac8b-c8333bd086c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd427256-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:8b:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446431, 'reachable_time': 31818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297875, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1fddcdbb-8928-4a80-b0f8-43ec8826ffe1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:8b6e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446431, 'tstamp': 446431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297876, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.963 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd276803-2f5b-4f69-9444-7d6dcef97d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd427256-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:8b:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446431, 'reachable_time': 31818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297877, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
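
The two privsep replies above are netlink dumps (an RTM_NEWADDR record and an RTM_NEWLINK record for tapfd427256-81) fetched from inside the ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d namespace. A minimal sketch of pulling the same state directly with pyroute2, bypassing the privsep daemon (namespace name taken from the log; must run as root; illustrative only, not the agent's code):

# Sketch: dump link and IPv6 address state from the named namespace,
# yielding records like the RTM_NEWLINK/RTM_NEWADDR replies logged above.
from pyroute2 import NetNS

NS = 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d'  # from the log

with NetNS(NS) as ns:
    for link in ns.get_links():            # RTM_NEWLINK records
        print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
    for addr in ns.get_addr(family=10):    # RTM_NEWADDR, family 10 = AF_INET6
        print(addr.get_attr('IFA_ADDRESS'))
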
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1415673-e4bf-4c48-b0f0-2256e9f66c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.060 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbabe378-8d00-4c1c-95df-80c804f1b951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd427256-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd427256-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:15 compute-0 kernel: tapfd427256-80: entered promiscuous mode
Feb 25 12:28:15 compute-0 NetworkManager[49836]: <info>  [1772022495.0698] manager: (tapfd427256-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.075 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd427256-80, col_values=(('external_ids', {'iface-id': '4c9fe647-e7fc-4a1b-aee3-9f86c40cf6d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
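
The three single-command transactions above (DelPortCommand, AddPortCommand, DbSetCommand) detach the metadata tap from br-ex if it is there, attach it to br-int, and tag the Interface row with its Neutron iface-id. A rough ovsdbapp equivalent of that sequence (connection string and timeout are assumptions; the port, bridge, and iface-id values are copied from the log):

# Sketch under an assumed local ovsdb-server endpoint; mirrors the logged commands.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',  # assumed
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tapfd427256-80', bridge='br-ex', if_exists=True))
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapfd427256-80', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapfd427256-80',
        ('external_ids', {'iface-id': '4c9fe647-e7fc-4a1b-aee3-9f86c40cf6d7'})))
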
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:15 compute-0 ovn_controller[147040]: 2026-02-25T12:28:15Z|00546|binding|INFO|Releasing lport 4c9fe647-e7fc-4a1b-aee3-9f86c40cf6d7 from this chassis (sb_readonly=0)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.078 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70a525a9-d4de-474c-a4ae-4022c3f78c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.081 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-fd427256-862c-42a4-8ee4-681f0377401d
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID fd427256-862c-42a4-8ee4-681f0377401d
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:28:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.081 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'env', 'PROCESS_TAG=haproxy-fd427256-862c-42a4-8ee4-681f0377401d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd427256-862c-42a4-8ee4-681f0377401d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
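
create_config_file logs the rendered haproxy configuration above, and the agent then starts haproxy inside the network namespace through rootwrap. A stripped-down sketch of the same launch without rootwrap (namespace, PROCESS_TAG, and config path copied from the log; must run as root):

# Sketch: launch the metadata haproxy in the ovnmeta namespace, mirroring
# the rootwrap command line in the log entry above.
import subprocess

NS = 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d'
CFG = '/var/lib/neutron/ovn-metadata-proxy/fd427256-862c-42a4-8ee4-681f0377401d.conf'

subprocess.run(
    ['ip', 'netns', 'exec', NS, 'env',
     'PROCESS_TAG=haproxy-fd427256-862c-42a4-8ee4-681f0377401d',  # from the log
     'haproxy', '-f', CFG],
    check=True)
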
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb 25 12:28:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb 25 12:28:15 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG nova.compute.manager [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.231 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.231 244018 DEBUG nova.compute.manager [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Processing event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.335 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.335 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.3342507, caff6378-2d93-4b73-8d58-da0e74a6d46e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.336 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Started (Lifecycle Event)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.340 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.344 244018 INFO nova.virt.libvirt.driver [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance spawned successfully.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.344 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.371 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.375 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.390 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.391 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.392 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.392 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.393 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.394 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.405 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.406 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.3378887, caff6378-2d93-4b73-8d58-da0e74a6d46e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.406 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Paused (Lifecycle Event)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.442 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.447 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.340199, caff6378-2d93-4b73-8d58-da0e74a6d46e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.447 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Resumed (Lifecycle Event)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.473 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.477 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.485 244018 INFO nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 9.80 seconds to spawn the instance on the hypervisor.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.487 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.498 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:15 compute-0 podman[297951]: 2026-02-25 12:28:15.535013105 +0000 UTC m=+0.068718865 container create c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.557 244018 INFO nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 11.85 seconds to build instance.
Feb 25 12:28:15 compute-0 systemd[1]: Started libpod-conmon-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 INFO nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:15 compute-0 podman[297951]: 2026-02-25 12:28:15.502960308 +0000 UTC m=+0.036666138 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:28:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741e5774616400b8936dd56216d1caef309170a34fbe27eb97875339193d1ec0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:15 compute-0 ceph-mon[76335]: pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 374 op/s
Feb 25 12:28:15 compute-0 ceph-mon[76335]: osdmap e232: 3 total, 3 up, 3 in
Feb 25 12:28:15 compute-0 podman[297951]: 2026-02-25 12:28:15.630348464 +0000 UTC m=+0.164054284 container init c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:28:15 compute-0 podman[297951]: 2026-02-25 12:28:15.637410363 +0000 UTC m=+0.171116123 container start c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:28:15 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : New worker (297972) forked
Feb 25 12:28:15 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : Loading success.
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.701 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022480.699482, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.701 244018 INFO nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Stopped (Lifecycle Event)
Feb 25 12:28:15 compute-0 nova_compute[244014]: 2026-02-25 12:28:15.725 244018 DEBUG nova.compute.manager [None req-040ae3d5-94b3-4c26-b839-d874f5f27b09 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 15 KiB/s wr, 195 op/s
Feb 25 12:28:16 compute-0 nova_compute[244014]: 2026-02-25 12:28:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:28:16 compute-0 nova_compute[244014]: 2026-02-25 12:28:16.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.264 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.265 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.265 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.266 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.266 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.268 244018 INFO nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Terminating instance
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.269 244018 DEBUG nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:28:17 compute-0 kernel: tap21ae71a3-b1 (unregistering): left promiscuous mode
Feb 25 12:28:17 compute-0 NetworkManager[49836]: <info>  [1772022497.3046] device (tap21ae71a3-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:28:17 compute-0 ovn_controller[147040]: 2026-02-25T12:28:17Z|00547|binding|INFO|Releasing lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d from this chassis (sb_readonly=0)
Feb 25 12:28:17 compute-0 ovn_controller[147040]: 2026-02-25T12:28:17Z|00548|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d down in Southbound
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 ovn_controller[147040]: 2026-02-25T12:28:17Z|00549|binding|INFO|Removing iface tap21ae71a3-b1 ovn-installed in OVS
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.318 244018 DEBUG nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.319 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:6b:aa 10.100.0.7'], port_security=['fa:16:3e:a1:6b:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'caff6378-2d93-4b73-8d58-da0e74a6d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd427256-862c-42a4-8ee4-681f0377401d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61b17d81-cbe9-4d4b-9f04-df6306382ea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48a62b60-46ed-47ca-bd9c-153a4a8b3bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=21ae71a3-b142-47f7-a2b0-b62a7625a31d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.320 244018 DEBUG nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.320 244018 WARNING nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state active and task_state deleting.
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.321 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 21ae71a3-b142-47f7-a2b0-b62a7625a31d in datapath fd427256-862c-42a4-8ee4-681f0377401d unbound from our chassis
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.324 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd427256-862c-42a4-8ee4-681f0377401d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.325 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b510df0-95b3-4d15-8092-6b7c8c104941]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.326 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d namespace which is not needed anymore
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Feb 25 12:28:17 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 2.485s CPU time.
Feb 25 12:28:17 compute-0 systemd-machined[210048]: Machine qemu-70-instance-0000003d terminated.
Feb 25 12:28:17 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : haproxy version is 2.8.14-c23fe91
Feb 25 12:28:17 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : path to executable is /usr/sbin/haproxy
Feb 25 12:28:17 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [WARNING]  (297970) : Exiting Master process...
Feb 25 12:28:17 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [ALERT]    (297970) : Current worker (297972) exited with code 143 (Terminated)
Feb 25 12:28:17 compute-0 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [WARNING]  (297970) : All workers exited. Exiting... (0)
Feb 25 12:28:17 compute-0 systemd[1]: libpod-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope: Deactivated successfully.
Feb 25 12:28:17 compute-0 conmon[297966]: conmon c2c93aca61e07f2afbed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope/container/memory.events
Feb 25 12:28:17 compute-0 podman[298003]: 2026-02-25 12:28:17.488572243 +0000 UTC m=+0.054977297 container died c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.507 244018 INFO nova.virt.libvirt.driver [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance destroyed successfully.
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.509 244018 DEBUG nova.objects.instance [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'resources' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6-userdata-shm.mount: Deactivated successfully.
Feb 25 12:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-741e5774616400b8936dd56216d1caef309170a34fbe27eb97875339193d1ec0-merged.mount: Deactivated successfully.
Feb 25 12:28:17 compute-0 podman[298003]: 2026-02-25 12:28:17.540534294 +0000 UTC m=+0.106939368 container cleanup c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.540 244018 DEBUG nova.virt.libvirt.vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:15Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.541 244018 DEBUG nova.network.os_vif_util [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.543 244018 DEBUG nova.network.os_vif_util [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.544 244018 DEBUG os_vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.548 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ae71a3-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 systemd[1]: libpod-conmon-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope: Deactivated successfully.
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.556 244018 INFO os_vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1')
Feb 25 12:28:17 compute-0 podman[298042]: 2026-02-25 12:28:17.62448854 +0000 UTC m=+0.059507566 container remove c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:28:17 compute-0 ceph-mon[76335]: pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 15 KiB/s wr, 195 op/s
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.631 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86f3cc12-689d-4ac3-b893-319d1c59a131]: (4, ('Wed Feb 25 12:28:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d (c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6)\nc2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6\nWed Feb 25 12:28:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d (c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6)\nc2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed14f5ee-73e5-4d6e-814a-3f09954ee61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.634 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd427256-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 kernel: tapfd427256-80: left promiscuous mode
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.654 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d91bfad-f777-42a0-8666-1c70b3ada04f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74d1852b-15e9-4e36-8404-84735c3d2483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7966e39-bff7-4dca-8e11-0d8fd6e984e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46345c1c-cb8c-4740-a4b1-909c152db1a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446424, 'reachable_time': 22937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298076, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 systemd[1]: run-netns-ovnmeta\x2dfd427256\x2d862c\x2d42a4\x2d8ee4\x2d681f0377401d.mount: Deactivated successfully.
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.699 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:28:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.699 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[252231f8-acfb-46e7-83e5-8d351f55776a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 46 KiB/s wr, 264 op/s
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.871 244018 INFO nova.virt.libvirt.driver [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deleting instance files /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e_del
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.872 244018 INFO nova.virt.libvirt.driver [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deletion of /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e_del complete
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.943 244018 INFO nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.944 244018 DEBUG oslo.service.loopingcall [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.944 244018 DEBUG nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:28:17 compute-0 nova_compute[244014]: 2026-02-25 12:28:17.945 244018 DEBUG nova.network.neutron [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.609 244018 DEBUG nova.network.neutron [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.625 244018 INFO nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 0.68 seconds to deallocate network for instance.
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.630 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022483.6297674, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.631 244018 INFO nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Stopped (Lifecycle Event)
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.659 244018 DEBUG nova.compute.manager [None req-9eca5cda-9cbb-45b4-9187-e96ff7bc1e36 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.665 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.665 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.764 244018 DEBUG oslo_concurrency.processutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:18 compute-0 nova_compute[244014]: 2026-02-25 12:28:18.804 244018 DEBUG nova.compute.manager [req-1e3174d4-f346-4a6b-937f-ca4ff8467ace req-c9002bfa-064e-41eb-90ac-8b85d15034dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-deleted-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:18 compute-0 ovn_controller[147040]: 2026-02-25T12:28:18Z|00550|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:19.265 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951023251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.367 244018 DEBUG oslo_concurrency.processutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.374 244018 DEBUG nova.compute.provider_tree [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.402 244018 DEBUG nova.scheduler.client.report [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.435 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.454 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.455 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.455 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.456 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.456 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 WARNING nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state deleted and task_state None.
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.459 244018 WARNING nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state deleted and task_state None.
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.463 244018 INFO nova.scheduler.client.report [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Deleted allocations for instance caff6378-2d93-4b73-8d58-da0e74a6d46e
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.543 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:19 compute-0 ceph-mon[76335]: pgmap v1386: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 46 KiB/s wr, 264 op/s
Feb 25 12:28:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/951023251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:19 compute-0 nova_compute[244014]: 2026-02-25 12:28:19.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 217 op/s
Feb 25 12:28:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:20 compute-0 ceph-mon[76335]: pgmap v1387: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 217 op/s
Feb 25 12:28:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 248 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 116 op/s
Feb 25 12:28:22 compute-0 nova_compute[244014]: 2026-02-25 12:28:22.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:22 compute-0 ceph-mon[76335]: pgmap v1388: 305 pgs: 305 active+clean; 248 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 116 op/s
Feb 25 12:28:23 compute-0 ovn_controller[147040]: 2026-02-25T12:28:23Z|00551|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:28:23 compute-0 nova_compute[244014]: 2026-02-25 12:28:23.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 121 op/s
Feb 25 12:28:24 compute-0 nova_compute[244014]: 2026-02-25 12:28:24.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:24 compute-0 nova_compute[244014]: 2026-02-25 12:28:24.733 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022489.7320051, b8086e43-4c45-422f-a3b5-fa665c256b30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:24 compute-0 nova_compute[244014]: 2026-02-25 12:28:24.734 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Stopped (Lifecycle Event)
Feb 25 12:28:24 compute-0 nova_compute[244014]: 2026-02-25 12:28:24.754 244018 DEBUG nova.compute.manager [None req-c660e2bb-19a0-4ae8-a36f-3580bc5492ed - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:24 compute-0 ceph-mon[76335]: pgmap v1389: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 121 op/s
Feb 25 12:28:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 30 KiB/s wr, 113 op/s
Feb 25 12:28:26 compute-0 nova_compute[244014]: 2026-02-25 12:28:26.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:27 compute-0 ceph-mon[76335]: pgmap v1390: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 30 KiB/s wr, 113 op/s
Feb 25 12:28:27 compute-0 nova_compute[244014]: 2026-02-25 12:28:27.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:27 compute-0 nova_compute[244014]: 2026-02-25 12:28:27.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 101 op/s
Feb 25 12:28:29 compute-0 ceph-mon[76335]: pgmap v1391: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 101 op/s
Feb 25 12:28:29 compute-0 nova_compute[244014]: 2026-02-25 12:28:29.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.8 KiB/s wr, 27 op/s
Feb 25 12:28:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:28:30
Feb 25 12:28:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:28:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:28:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr']
Feb 25 12:28:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:28:31 compute-0 nova_compute[244014]: 2026-02-25 12:28:31.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:31 compute-0 ceph-mon[76335]: pgmap v1392: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.8 KiB/s wr, 27 op/s
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:28:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:28:32 compute-0 nova_compute[244014]: 2026-02-25 12:28:32.505 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022497.5050864, caff6378-2d93-4b73-8d58-da0e74a6d46e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:32 compute-0 nova_compute[244014]: 2026-02-25 12:28:32.506 244018 INFO nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Stopped (Lifecycle Event)
Feb 25 12:28:32 compute-0 nova_compute[244014]: 2026-02-25 12:28:32.530 244018 DEBUG nova.compute.manager [None req-36b35eb2-c45f-453e-b18f-c29ad76644f0 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:32 compute-0 nova_compute[244014]: 2026-02-25 12:28:32.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:32 compute-0 podman[298100]: 2026-02-25 12:28:32.743927599 +0000 UTC m=+0.081365478 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:28:32 compute-0 podman[298101]: 2026-02-25 12:28:32.776358939 +0000 UTC m=+0.110101433 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 12:28:32 compute-0 nova_compute[244014]: 2026-02-25 12:28:32.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:33 compute-0 ceph-mon[76335]: pgmap v1393: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Feb 25 12:28:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Feb 25 12:28:33 compute-0 nova_compute[244014]: 2026-02-25 12:28:33.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:34 compute-0 nova_compute[244014]: 2026-02-25 12:28:34.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.057 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.058 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.075 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:28:35 compute-0 ceph-mon[76335]: pgmap v1394: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Feb 25 12:28:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.175 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.176 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.186 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.186 244018 INFO nova.compute.claims [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.335 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Feb 25 12:28:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522776792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.897 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.905 244018 DEBUG nova.compute.provider_tree [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.926 244018 DEBUG nova.scheduler.client.report [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.953 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:35 compute-0 nova_compute[244014]: 2026-02-25 12:28:35.954 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.000 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.001 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.029 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.057 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:28:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2522776792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.155 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.157 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.158 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating image(s)
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.188 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.220 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.253 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.258 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.346 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.347 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.348 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.349 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.382 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.388 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.416 244018 DEBUG nova.policy [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1a7d945e359492faaab7c197de3e9e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d46f1174f384dc3be789d4301748e2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.682 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.730 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.732 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.778 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] resizing rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.817 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.884 244018 DEBUG nova.objects.instance [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'migration_context' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.908 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.909 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Ensure instance console log exists: /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.910 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.910 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.922 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.923 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.933 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:28:36 compute-0 nova_compute[244014]: 2026-02-25 12:28:36.934 244018 INFO nova.compute.claims [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.132 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:37 compute-0 ceph-mon[76335]: pgmap v1395: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.462 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Successfully created port: 1b61d923-a06e-4948-b459-b6cf5b8c668d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772933509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.671 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.678 244018 DEBUG nova.compute.provider_tree [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.711 244018 DEBUG nova.scheduler.client.report [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.737 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.739 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.810 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.811 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.833 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.854 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:28:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 826 KiB/s wr, 14 op/s
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.976 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.978 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:28:37 compute-0 nova_compute[244014]: 2026-02-25 12:28:37.979 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating image(s)
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.013 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.051 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.086 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.091 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.130 244018 DEBUG nova.policy [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f8bbe7db4454108aca005daa72d5c22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56700581ea88438ba482d90bc702ced3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
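[annotation] The DEBUG policy failure above is benign: while allocating networks Nova probes `network:attach_external_network` for this project-scoped token, and the token carries only reader/member roles, so external networks are simply filtered out rather than raising an error. A hedged sketch of the same kind of check with oslo.policy; the `role:admin` rule here is an illustrative stand-in, not necessarily your release's nova default:

    # Illustrative oslo.policy check mirroring the DEBUG line above.
    # Assumption: "role:admin" stands in for the real nova policy default.
    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([])  # initialize an empty config so the enforcer can read options
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))
    creds = {"roles": ["reader", "member"],
             "project_id": "56700581ea88438ba482d90bc702ced3"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False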
Feb 25 12:28:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/772933509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.193 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
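[annotation] Note the wrapper in the qemu-img lines above: Nova does not trust image headers, so the probe runs under `oslo_concurrency.prlimit` with a 1 GiB address-space cap and a 30 s CPU cap. Roughly the same containment expressed directly with setrlimit in the child process (illustrative only; Nova really does shell out to the prlimit module exactly as logged):

    # Sketch of the bounded probe logged above: same limits, applied via
    # resource.setrlimit instead of the prlimit wrapper process.
    import json
    import resource
    import subprocess

    def _limits():
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))  # --as=1073741824
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))           # --cpu=30

    cmd = ["qemu-img", "info", "--force-share", "--output=json",
           "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"]
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True, preexec_fn=_limits).stdout)
    print(info["format"], info["virtual-size"])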
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.194 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.195 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.196 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.228 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.232 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.522 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.619 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.619 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.625 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
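[annotation] The import/resize pair above is the flat-image path into RBD: the cached base file is pushed whole into the vms pool as a format-2 image, then grown to the flavor's 1 GiB root size. Both steps replayed below; the import command is verbatim from the log, while Nova performs the resize through the librbd Python binding, so the `rbd resize` call is an illustrative CLI stand-in:

    # Replaying the two storage steps logged above for instance ee9cd98b-....
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *common], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1024M", *common], check=True)  # 1073741824 bytes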
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.676 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.709 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Successfully updated port: 1b61d923-a06e-4948-b459-b6cf5b8c668d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.715 244018 DEBUG nova.objects.instance [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.738 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.739 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquired lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.739 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.748 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ensure instance console log exists: /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.762 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.762 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.768 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.769 244018 INFO nova.compute.claims [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.942 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:38 compute-0 nova_compute[244014]: 2026-02-25 12:28:38.948 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.112 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Successfully created port: 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:28:39 compute-0 ceph-mon[76335]: pgmap v1396: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 826 KiB/s wr, 14 op/s
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.250 244018 DEBUG nova.compute.manager [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-changed-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.250 244018 DEBUG nova.compute.manager [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Refreshing instance network info cache due to event network-changed-1b61d923-a06e-4948-b459-b6cf5b8c668d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.251 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428883068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.499 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.507 244018 DEBUG nova.compute.provider_tree [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.528 244018 DEBUG nova.scheduler.client.report [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.553 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.555 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.648 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.649 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.673 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.696 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.813 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.815 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.815 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating image(s)
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.845 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 825 KiB/s wr, 14 op/s
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.882 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.917 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.923 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.973 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
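[annotation] The cache blob above is the Neutron VIF structure Nova persists per instance: one entry per port, each carrying the network, its subnets, and the fixed IPs. A small traversal over a trimmed copy of the logged structure, just to show where the address lives (the shape matches the log; only the traversal code is illustrative):

    # Pulling the fixed IP out of a trimmed copy of the network_info above.
    network_info = [{
        "id": "1b61d923-a06e-4948-b459-b6cf5b8c668d",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.7",
                                          "type": "fixed"}]}]},
    }]

    fixed = [ip["address"]
             for vif in network_info
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(fixed)  # ['10.100.0.7']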
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.982 244018 DEBUG nova.policy [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bcb4ded096bc4f7993f96ca892b82333', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:39 compute-0 nova_compute[244014]: 2026-02-25 12:28:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.002 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Releasing lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.003 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance network_info: |[{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.004 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.004 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Refreshing network info cache for port 1b61d923-a06e-4948-b459-b6cf5b8c668d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.010 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start _get_guest_xml network_info=[{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.012 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.022 244018 WARNING nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.031 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.032 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.033 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.033 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.061 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.066 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8086400b-ac70-4c79-928b-4f1966084384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.109 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Successfully updated port: 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.123 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.124 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.129 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.129 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.130 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.130 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.131 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.132 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.132 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.134 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
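[annotation] The hardware.py walk above (limits 0:0:0, maxima 65536, exactly one candidate for one vCPU) is essentially an enumeration of factorizations of the vCPU count into sockets x cores x threads. A toy version of that enumeration; this is not Nova's exact code, which additionally applies the flavor/image maxima and preference ordering seen in the log:

    # Toy factorization matching "Build topologies for 1 vcpu(s)" ->
    # "[VirtCPUTopology(cores=1,sockets=1,threads=1)]" above.
    def possible_topologies(vcpus):
        return [(s, c, vcpus // (s * c))
                for s in range(1, vcpus + 1) if vcpus % s == 0
                for c in range(1, vcpus // s + 1) if (vcpus // s) % c == 0]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single candidate logged
    print(possible_topologies(4))  # six (sockets, cores, threads) factorizations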
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.141 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3428883068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.184 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.186 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.192 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.193 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.193 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.206 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.207 244018 INFO nova.compute.claims [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.371 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.386 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.412 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8086400b-ac70-4c79-928b-4f1966084384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.482 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] resizing rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.580 244018 DEBUG nova.objects.instance [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'migration_context' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.596 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.596 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Ensure instance console log exists: /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.623 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Successfully created port: 05178abb-a113-4013-9194-9243afe9d0ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:28:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275815993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.744 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
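[annotation] The `ceph mon dump` just before guest-XML generation is how the libvirt RBD backend discovers the monitor endpoints that end up as <host> entries in the disk's <source> element. A sketch of the lookup; the command is verbatim from the log, while the JSON field names assume the standard mon dump layout:

    # Sketch: monitor discovery feeding the libvirt RBD <source> hosts.
    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout)
    print([m["addr"] for m in dump["mons"]])  # e.g. ['192.168.122.100:6789/0']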
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.765 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.769 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096300758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.947 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.953 244018 DEBUG nova.compute.provider_tree [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.969 244018 DEBUG nova.scheduler.client.report [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.995 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:40 compute-0 nova_compute[244014]: 2026-02-25 12:28:40.996 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.045 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.046 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.065 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.083 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.179 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.180 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.180 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating image(s)
Feb 25 12:28:41 compute-0 ceph-mon[76335]: pgmap v1397: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 825 KiB/s wr, 14 op/s
Feb 25 12:28:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/275815993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4096300758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.208 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.238 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.267 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.272 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.314 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.316 244018 DEBUG nova.virt.libvirt.vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:36Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.317 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.318 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.320 244018 DEBUG nova.objects.instance [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.340 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <uuid>9630899b-57d8-4e46-b9e0-8762f0f4f2cb</uuid>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <name>instance-0000003e</name>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerAddressesTestJSON-server-761477046</nova:name>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:28:40</nova:creationTime>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:user uuid="f1a7d945e359492faaab7c197de3e9e8">tempest-ServerAddressesTestJSON-785917861-project-member</nova:user>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:project uuid="3d46f1174f384dc3be789d4301748e2d">tempest-ServerAddressesTestJSON-785917861</nova:project>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <nova:port uuid="1b61d923-a06e-4948-b459-b6cf5b8c668d">
Feb 25 12:28:41 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <system>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="serial">9630899b-57d8-4e46-b9e0-8762f0f4f2cb</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="uuid">9630899b-57d8-4e46-b9e0-8762f0f4f2cb</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </system>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <os>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </os>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <features>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </features>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk">
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config">
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f6:48:42"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <target dev="tap1b61d923-a0"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/console.log" append="off"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <video>
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </video>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:28:41 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:28:41 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:28:41 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:28:41 compute-0 nova_compute[244014]: </domain>
Feb 25 12:28:41 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.341 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Preparing to wait for external event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.341 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.342 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.342 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.343 244018 DEBUG nova.virt.libvirt.vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:36Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.343 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.344 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.345 244018 DEBUG os_vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.346 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updated VIF entry in instance network info cache for port 1b61d923-a06e-4948-b459-b6cf5b8c668d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.347 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.349 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.352 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b61d923-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b61d923-a0, col_values=(('external_ids', {'iface-id': '1b61d923-a06e-4948-b459-b6cf5b8c668d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:48:42', 'vm-uuid': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:41 compute-0 NetworkManager[49836]: <info>  [1772022521.3563] manager: (tap1b61d923-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG nova.compute.manager [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-changed-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG nova.compute.manager [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Refreshing instance network info cache due to event network-changed-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.363 244018 INFO os_vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0')
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.367 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.371 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.371 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.372 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.372 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.403 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.408 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 826789b1-e26a-4569-bd77-bd1ef76388be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.489 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.490 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.490 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No VIF found with MAC fa:16:3e:f6:48:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.491 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Using config drive
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.526 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.537 244018 DEBUG nova.policy [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05ca7159581049009a4223cf01ebf146', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be8db082f3894d28b63a3709be538262', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.575 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance network_info: |[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.594 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Refreshing network info cache for port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.598 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start _get_guest_xml network_info=[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.604 244018 WARNING nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.618 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.619 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.623 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.630 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.714 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 826789b1-e26a-4569-bd77-bd1ef76388be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.807 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] resizing rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.862 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Successfully updated port: 05178abb-a113-4013-9194-9243afe9d0ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:28:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 350 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.3 MiB/s wr, 80 op/s
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.902 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.903 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.903 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.910 244018 DEBUG nova.objects.instance [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'migration_context' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.929 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.930 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Ensure instance console log exists: /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.930 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.931 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.931 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.981 244018 DEBUG nova.compute.manager [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.982 244018 DEBUG nova.compute.manager [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:41 compute-0 nova_compute[244014]: 2026-02-25 12:28:41.982 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.044 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating config drive at /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.048 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfkaefgbm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691004792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3528588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1691004792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.193 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.214 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.217 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.236 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfkaefgbm" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.238 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.266 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016095451353241661 of space, bias 1.0, pg target 0.48286354059724984 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002492519703322481 of space, bias 1.0, pg target 0.7477559109967442 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.270 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.773530469614834e-07 of space, bias 4.0, pg target 0.0010528236563537802 quantized to 16 (current 16)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:28:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.335 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Successfully created port: ba138ed1-c811-4043-9bd6-e1a5c6127f84 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.439 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.440 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deleting local config drive /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config because it was imported into RBD.
Feb 25 12:28:42 compute-0 kernel: tap1b61d923-a0: entered promiscuous mode
Feb 25 12:28:42 compute-0 ovn_controller[147040]: 2026-02-25T12:28:42Z|00552|binding|INFO|Claiming lport 1b61d923-a06e-4948-b459-b6cf5b8c668d for this chassis.
Feb 25 12:28:42 compute-0 ovn_controller[147040]: 2026-02-25T12:28:42Z|00553|binding|INFO|1b61d923-a06e-4948-b459-b6cf5b8c668d: Claiming fa:16:3e:f6:48:42 10.100.0.7
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 ovn_controller[147040]: 2026-02-25T12:28:42Z|00554|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d ovn-installed in OVS
Feb 25 12:28:42 compute-0 ovn_controller[147040]: 2026-02-25T12:28:42Z|00555|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d up in Southbound
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.509 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:48:42 10.100.0.7'], port_security=['fa:16:3e:f6:48:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0406cfb3-4360-4052-8e1d-7019c4224092', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d46f1174f384dc3be789d4301748e2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6a0b035a-6f2a-459e-b461-0e5414640091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4783117-d743-4200-8206-169bf90ef6ab, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1b61d923-a06e-4948-b459-b6cf5b8c668d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.512 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1b61d923-a06e-4948-b459-b6cf5b8c668d in datapath 0406cfb3-4360-4052-8e1d-7019c4224092 bound to our chassis
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.5164] manager: (tap1b61d923-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.519 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0406cfb3-4360-4052-8e1d-7019c4224092
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4487cd-4abb-4ad7-b0e2-3a276f3dc5c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.531 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0406cfb3-41 in ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.533 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0406cfb3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[904eff9a-0ce4-4de8-8091-bbb9e0a5b808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.534 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0089a305-a759-44e4-a497-33c85008b14b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 systemd-udevd[299097]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:42 compute-0 systemd-machined[210048]: New machine qemu-71-instance-0000003e.
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.553 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[cf09f10c-f2f0-457e-a702-4e1d8b083aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.5607] device (tap1b61d923-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.5616] device (tap1b61d923-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:28:42 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003e.
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716e4291-1518-4835-a88b-5c5df4413d39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.606 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8ed7e1-533f-411e-9ece-a88d957edbde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.6136] manager: (tap0406cfb3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Feb 25 12:28:42 compute-0 systemd-udevd[299100]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e365d2d9-e08f-4495-a2e0-7fb8106ca739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.651 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34263ff2-82d4-42ce-91fc-f21b400c6ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.656 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[244013a8-d397-483e-9146-7acd1e8aab0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.6782] device (tap0406cfb3-40): carrier: link connected
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0eb9b1-2588-49b8-8365-6fb88b7e983b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73ae7c56-8467-4d6f-b493-a813a158a133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0406cfb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:38:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449208, 'reachable_time': 32070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299128, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b520daf-a6b5-40f2-9149-e29866aa1f5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:38ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449208, 'tstamp': 449208}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299129, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0548b219-2754-40a1-b6ff-d78b0dda16d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0406cfb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:38:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449208, 'reachable_time': 32070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299130, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691462048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb88e89c-253a-4b2b-938d-cfd25bde7656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.769 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.771 244018 DEBUG nova.virt.libvirt.vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-tempest.common.compute-instance-745827155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:37Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.771 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.772 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.773 244018 DEBUG nova.objects.instance [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.789 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <uuid>ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</uuid>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <name>instance-0000003f</name>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-745827155</nova:name>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:28:41</nova:creationTime>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <nova:port uuid="9848fa6c-0a42-4cac-a3d8-2b90d5b7920c">
Feb 25 12:28:42 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <system>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="serial">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="uuid">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </system>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <os>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </os>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <features>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </features>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk">
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config">
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:2e:3b:33"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <target dev="tap9848fa6c-0a"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log" append="off"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <video>
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </video>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:28:42 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:28:42 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:28:42 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:28:42 compute-0 nova_compute[244014]: </domain>
Feb 25 12:28:42 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.790 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Preparing to wait for external event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.790 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.791 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.791 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
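The three lockutils lines show the prepare-before-plug pattern: nova-compute registers a waiter for network-vif-plugged under the per-instance events lock before it touches the port, so Neutron's notification cannot race past it. A simplified analogue of that InstanceEvents bookkeeping using plain threading (not the actual Nova code):

    import threading

    _events = {}                      # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, event_name):
        # Register the waiter first, exactly as logged above, so a
        # delivery that arrives mid-plug still finds something to set.
        with _lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver(instance_uuid, event_name):
        with _lock:
            ev = _events.get((instance_uuid, event_name))
        if ev is not None:
            ev.set()

    w = prepare_for_instance_event("ee9cd98b", "network-vif-plugged")
    # ... plug the VIF, then block: w.wait(timeout=300)

The warning further down for instance 9630899b ("No waiting events found ... Received unexpected event") is the other side of the same bookkeeping: an event delivered when no waiter was registered.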
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.792 244018 DEBUG nova.virt.libvirt.vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-tempest.common.compute-instance-745827155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:37Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.792 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.793 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.793 244018 DEBUG os_vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
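Everything os-vif needs is a small slice of the big VIF payload logged above; the nova_to_osvif_vif conversion boils it down to the VIFOpenVSwitch object shown. A sketch pulling the same fields from a trimmed copy of that payload (most keys omitted here):

    # Trimmed from the vif={...} JSON in the plug log line above.
    vif = {
        "id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c",
        "address": "fa:16:3e:2e:3b:33",
        "devname": "tap9848fa6c-0a",
        "type": "ovs",
        "details": {"bridge_name": "br-int", "datapath_type": "system"},
    }

    iface_id = vif["id"]                    # becomes external_ids:iface-id
    mac = vif["address"]                    # external_ids:attached-mac
    tap = vif["devname"]                    # OVS port name
    bridge = vif["details"]["bridge_name"]  # br-int

    print(f"plug {tap} into {bridge} as {iface_id} ({mac})")

These four values are exactly what reappears in the ovsdbapp transaction a few lines later.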
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.794 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.795 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9848fa6c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9848fa6c-0a, col_values=(('external_ids', {'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:3b:33', 'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
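The two commands above (AddPortCommand plus the external_ids DbSetCommand) are the IDL equivalent of a single ovs-vsctl invocation; ovn-controller then matches external_ids:iface-id against the logical port and binds it. A hedged command-line equivalent, assuming ovs-vsctl is available and run with privileges (left commented out for that reason):

    import subprocess

    cmd = [
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap9848fa6c-0a",
        "--", "set", "Interface", "tap9848fa6c-0a",
        "external_ids:iface-id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:2e:3b:33",
        "external_ids:vm-uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c",
    ]
    # subprocess.run(cmd, check=True)   # needs root and a local ovsdb

Going through the IDL lets nova-compute batch commands into one atomic ovsdb transaction, which is also why the earlier AddBridgeCommand could report "Transaction caused no change" when br-int already existed.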
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.8000] manager: (tap9848fa6c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.804 244018 INFO os_vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.814 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bee743e5-8a12-43b2-8721-b9328abdae1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0406cfb3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0406cfb3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 kernel: tap0406cfb3-40: entered promiscuous mode
Feb 25 12:28:42 compute-0 NetworkManager[49836]: <info>  [1772022522.8203] manager: (tap0406cfb3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.823 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0406cfb3-40, col_values=(('external_ids', {'iface-id': 'd0900c78-8fda-4698-ae1c-119352f6dad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:42 compute-0 ovn_controller[147040]: 2026-02-25T12:28:42Z|00556|binding|INFO|Releasing lport d0900c78-8fda-4698-ae1c-119352f6dad6 from this chassis (sb_readonly=0)
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.835 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
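The ENOENT above is the expected first-launch case: before (re)spawning the metadata haproxy, the agent probes the pidfile and treats a missing file as "no proxy running" rather than an error, hence the DEBUG severity. A minimal analogue of that get_value_from_file behavior (not the actual Neutron helper):

    def get_value_from_file(path, converter=None):
        # Absent file -> None, mirroring the DEBUG line above; any
        # other error would still propagate.
        try:
            with open(path) as f:
                data = f.read().strip()
        except FileNotFoundError as err:
            print(f"Unable to access {path}; Error: {err}")
            return None
        return converter(data) if converter is not None else data

    pid = get_value_from_file(
        "/var/lib/neutron/external/pids/"
        "0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy", int)
    print(pid)  # None on first launch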
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[058b6fae-c366-4465-b746-4e33e5bda1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.837 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-0406cfb3-4360-4052-8e1d-7019c4224092
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 0406cfb3-4360-4052-8e1d-7019c4224092
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:28:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.838 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'env', 'PROCESS_TAG=haproxy-0406cfb3-4360-4052-8e1d-7019c4224092', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0406cfb3-4360-4052-8e1d-7019c4224092.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
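Each tenant network gets its own haproxy instance inside its ovnmeta-<network-uuid> namespace: the config above binds the link-local 169.254.169.254:80, forwards to the metadata UNIX socket at /var/lib/neutron/metadata_proxy, and stamps every request with X-OVN-Network-ID so the metadata service can tell networks apart. The rootwrap command then execs haproxy against that file inside the namespace; a hedged equivalent (requires root, paths as logged):

    import subprocess

    netns = "ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "0406cfb3-4360-4052-8e1d-7019c4224092.conf")

    # subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg],
    #                check=True)
    # Smoke test from inside the same namespace:
    # subprocess.run(["ip", "netns", "exec", netns,
    #                 "curl", "-s", "http://169.254.169.254/"], check=True)

On this deployment the haproxy ends up inside a podman container (the neutron-haproxy-ovnmeta-... container created a few lines below) rather than as a bare host process.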
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.873 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.873 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.874 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:2e:3b:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.874 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Using config drive
Feb 25 12:28:42 compute-0 nova_compute[244014]: 2026-02-25 12:28:42.899 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:43 compute-0 podman[299227]: 2026-02-25 12:28:43.191831004 +0000 UTC m=+0.055200356 container create 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:28:43 compute-0 ceph-mon[76335]: pgmap v1398: 305 pgs: 305 active+clean; 350 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.3 MiB/s wr, 80 op/s
Feb 25 12:28:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/691462048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.210 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022523.2096448, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Started (Lifecycle Event)
Feb 25 12:28:43 compute-0 systemd[1]: Started libpod-conmon-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope.
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.234 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.241 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022523.2103922, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Paused (Lifecycle Event)
Feb 25 12:28:43 compute-0 podman[299227]: 2026-02-25 12:28:43.163669776 +0000 UTC m=+0.027039158 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:28:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.265 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
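The numeric states in that sync message come from Nova's power-state enum: the database still holds 0 because the instance has never been synced, while libvirt already reports the paused guest that exists before vCPUs are released. The relevant values (abridged, from nova.compute.power_state):

    POWER_STATE = {
        0: "NOSTATE",   # DB value for a never-synced instance, as above
        1: "RUNNING",   # what the VM reports once resumed (see below)
        3: "PAUSED",    # VM power_state logged here mid-spawn
        4: "SHUTDOWN",
    }

    db_state, vm_state = 0, 3
    print(POWER_STATE[db_state], "->", POWER_STATE[vm_state])

Because task_state is still 'spawning', the manager deliberately skips the sync ("During sync_power_state the instance has a pending task"), avoiding a fight with the in-flight build.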
Feb 25 12:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e401b80a9939de6acae8801df5427e297dd65f2008b59d33321f3f6cd457d4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:43 compute-0 podman[299227]: 2026-02-25 12:28:43.285358727 +0000 UTC m=+0.148728149 container init 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:28:43 compute-0 podman[299227]: 2026-02-25 12:28:43.293892259 +0000 UTC m=+0.157261631 container start 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:28:43 compute-0 nova_compute[244014]: 2026-02-25 12:28:43.297 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:43 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : New worker (299249) forked
Feb 25 12:28:43 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : Loading success.
Feb 25 12:28:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.118 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.131 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updated VIF entry in instance network info cache for port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.132 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.279 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.279 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance network_info: |[{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.281 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.282 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.288 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start _get_guest_xml network_info=[{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.291 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.299 244018 WARNING nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.311 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.312 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.317 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.318 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.319 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.319 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.320 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.321 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.321 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.322 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.322 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.323 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.323 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.324 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.324 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.325 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
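With every preference and limit unset (0:0:0 preferred, 65536 ceilings), choosing a topology reduces to enumerating factorizations of the vCPU count, which for 1 vCPU leaves only 1:1:1. A simplified sketch of that enumeration, not the actual nova.virt.hardware implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is vcpus.
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
    print(possible_topologies(4))  # 4 vCPUs admit several layouts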
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.330 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548982358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.936 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
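That 0.6 s subprocess is how Nova's RBD utilities discover the monitor map before touching any image; the nova-compute CMD line here and the audit entries in ceph-mon are two views of the same request. A hedged re-run that parses the monitor names out of the JSON (needs a reachable cluster and the client.openstack keyring):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             check=True).stdout
        print([m["name"] for m in json.loads(out)["mons"]])
    except (FileNotFoundError, subprocess.CalledProcessError):
        pass  # no ceph CLI or no reachable cluster from here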
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.960 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:44 compute-0 nova_compute[244014]: 2026-02-25 12:28:44.965 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.003 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating config drive at /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.009 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe6lfj44a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
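The config drive is a plain ISO9660 image with Joliet and Rock Ridge extensions, built from a temporary directory of metadata files (the /tmp/tmpe6lfj44a staging dir above); the volume label config-2 is what cloud-init probes for inside the guest. A sketch reproducing the build with the same flags, against a hypothetical staging directory:

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",
        "-quiet", "-J", "-r",
        "-V", "config-2",    # label cloud-init searches for
        "staging_dir",       # hypothetical metadata directory
    ], check=True)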
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.078 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.079 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.080 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.080 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.081 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Processing event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.081 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.083 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.083 244018 WARNING nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received unexpected event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with vm_state building and task_state spawning.
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.084 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.089 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022525.089278, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.090 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Resumed (Lifecycle Event)
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.093 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.097 244018 INFO nova.virt.libvirt.driver [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance spawned successfully.
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.097 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.116 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.126 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.131 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.131 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.132 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.133 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.133 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.134 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.149 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe6lfj44a" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.183 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.187 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:45 compute-0 ceph-mon[76335]: pgmap v1399: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 12:28:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1548982358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.220 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.223 244018 INFO nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 9.07 seconds to spawn the instance on the hypervisor.
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.223 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.295 244018 INFO nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 10.17 seconds to build instance.
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.313 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.354 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.354 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting local config drive /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config because it was imported into RBD.
Feb 25 12:28:45 compute-0 NetworkManager[49836]: <info>  [1772022525.4075] manager: (tap9848fa6c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Feb 25 12:28:45 compute-0 systemd-udevd[299114]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:45 compute-0 kernel: tap9848fa6c-0a: entered promiscuous mode
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 ovn_controller[147040]: 2026-02-25T12:28:45Z|00557|binding|INFO|Claiming lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for this chassis.
Feb 25 12:28:45 compute-0 ovn_controller[147040]: 2026-02-25T12:28:45Z|00558|binding|INFO|9848fa6c-0a42-4cac-a3d8-2b90d5b7920c: Claiming fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.431 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.434 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:28:45 compute-0 NetworkManager[49836]: <info>  [1772022525.4379] device (tap9848fa6c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:28:45 compute-0 NetworkManager[49836]: <info>  [1772022525.4387] device (tap9848fa6c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.441 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:28:45 compute-0 ovn_controller[147040]: 2026-02-25T12:28:45Z|00559|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c ovn-installed in OVS
Feb 25 12:28:45 compute-0 ovn_controller[147040]: 2026-02-25T12:28:45Z|00560|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c up in Southbound
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13a63070-70a2-4a97-bd57-2be4c14a8ccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 systemd-machined[210048]: New machine qemu-72-instance-0000003f.
Feb 25 12:28:45 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003f.
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.489 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c150bcb7-6b15-46c5-9b3c-3df29303fb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f656a9a1-5866-4c06-84af-4ad1f5713c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2a86d515-84ec-484c-92f0-92bf2be27876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aecf67-e9f7-4d42-b7d0-d35705f9ab83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299383, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3881799203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0387b0d-ca95-4b4e-96ca-141984565e69]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299384, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299384, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.574 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.575 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.575 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.594 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.597 244018 DEBUG nova.virt.libvirt.vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:39Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.598 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.599 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.602 244018 DEBUG nova.objects.instance [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.623 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <uuid>8086400b-ac70-4c79-928b-4f1966084384</uuid>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <name>instance-00000040</name>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:name>tempest-SecurityGroupsTestJSON-server-105273812</nova:name>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:28:44</nova:creationTime>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <nova:port uuid="05178abb-a113-4013-9194-9243afe9d0ff">
Feb 25 12:28:45 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="serial">8086400b-ac70-4c79-928b-4f1966084384</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="uuid">8086400b-ac70-4c79-928b-4f1966084384</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8086400b-ac70-4c79-928b-4f1966084384_disk">
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8086400b-ac70-4c79-928b-4f1966084384_disk.config">
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:7e:1d:a7"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <target dev="tap05178abb-a1"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/console.log" append="off"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:28:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:28:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:28:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:28:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:28:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.625 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Preparing to wait for external event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.625 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.626 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.627 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.628 244018 DEBUG nova.virt.libvirt.vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:39Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.629 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.630 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.631 244018 DEBUG os_vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.632 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05178abb-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05178abb-a1, col_values=(('external_ids', {'iface-id': '05178abb-a113-4013-9194-9243afe9d0ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:1d:a7', 'vm-uuid': '8086400b-ac70-4c79-928b-4f1966084384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:45 compute-0 NetworkManager[49836]: <info>  [1772022525.6427] manager: (tap05178abb-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.651 244018 INFO os_vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1')
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.709 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No VIF found with MAC fa:16:3e:7e:1d:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Using config drive
Feb 25 12:28:45 compute-0 nova_compute[244014]: 2026-02-25 12:28:45.731 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 12:28:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3881799203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.279 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022526.2788637, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.279 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Started (Lifecycle Event)
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.308 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.313 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022526.2797914, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.313 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Paused (Lifecycle Event)
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.338 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.347 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.356 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Successfully updated port: ba138ed1-c811-4043-9bd6-e1a5c6127f84 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.370 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating config drive at /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.376 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5kkvbou0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.406 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquired lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.509 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5kkvbou0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.547 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.550 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config 8086400b-ac70-4c79-928b-4f1966084384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.575 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.668 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config 8086400b-ac70-4c79-928b-4f1966084384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.669 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deleting local config drive /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config because it was imported into RBD.
Feb 25 12:28:46 compute-0 kernel: tap05178abb-a1: entered promiscuous mode
Feb 25 12:28:46 compute-0 NetworkManager[49836]: <info>  [1772022526.7233] manager: (tap05178abb-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Feb 25 12:28:46 compute-0 systemd-udevd[299450]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:46 compute-0 ovn_controller[147040]: 2026-02-25T12:28:46Z|00561|binding|INFO|Claiming lport 05178abb-a113-4013-9194-9243afe9d0ff for this chassis.
Feb 25 12:28:46 compute-0 ovn_controller[147040]: 2026-02-25T12:28:46Z|00562|binding|INFO|05178abb-a113-4013-9194-9243afe9d0ff: Claiming fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:46 compute-0 ovn_controller[147040]: 2026-02-25T12:28:46Z|00563|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff ovn-installed in OVS
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:1d:a7 10.100.0.10'], port_security=['fa:16:3e:7e:1d:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8086400b-ac70-4c79-928b-4f1966084384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=05178abb-a113-4013-9194-9243afe9d0ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:46 compute-0 ovn_controller[147040]: 2026-02-25T12:28:46Z|00564|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff up in Southbound
Feb 25 12:28:46 compute-0 NetworkManager[49836]: <info>  [1772022526.7411] device (tap05178abb-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:28:46 compute-0 NetworkManager[49836]: <info>  [1772022526.7420] device (tap05178abb-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.743 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 05178abb-a113-4013-9194-9243afe9d0ff in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 bound to our chassis
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[476994eb-e12c-434d-9d03-839507ee21a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.761 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a448894-81 in ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.763 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a448894-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.763 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59c2d0e8-1e89-4150-a70b-2d36baa73175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 systemd-machined[210048]: New machine qemu-73-instance-00000040.
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.765 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e32d2c0-9ace-4f47-a0e7-f6992c9afe91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.774 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[869689d0-45e0-4177-b79b-5e8ff7cb6780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8231bd5-632c-4519-a4fc-04df8274bf4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.792 244018 DEBUG nova.compute.manager [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-changed-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.793 244018 DEBUG nova.compute.manager [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Refreshing instance network info cache due to event network-changed-ba138ed1-c811-4043-9bd6-e1a5c6127f84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:46 compute-0 nova_compute[244014]: 2026-02-25 12:28:46.794 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.839 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f02cb8e9-0674-4c65-89b9-c3469130ce1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 NetworkManager[49836]: <info>  [1772022526.8480] manager: (tap9a448894-80): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Feb 25 12:28:46 compute-0 systemd-udevd[299505]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.849 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32069880-0e7e-466a-99b3-4f324740ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.898 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f69bf4b7-9426-417a-8a20-9be5a75f1805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed06973f-6749-457d-b79b-35f83a074c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 NetworkManager[49836]: <info>  [1772022526.9294] device (tap9a448894-80): carrier: link connected
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.934 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46fca849-b87d-4eb2-b2bf-787664f3c72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93a59b10-20b0-4aa5-8d03-d38ba17eeeaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299537, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96701aa6-9150-4ff4-9e54-f2535950c503]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:4c93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449633, 'tstamp': 449633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299538, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.983 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[391b59ab-4ca3-4fff-9bf6-c0303044f90b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299539, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.013 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d73fcd-632e-480b-a902-b7fb2ff4e874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[43968bf0-009d-4421-b54a-98a9212d8f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:47 compute-0 kernel: tap9a448894-80: entered promiscuous mode
Feb 25 12:28:47 compute-0 NetworkManager[49836]: <info>  [1772022527.0698] manager: (tap9a448894-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.075 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:47 compute-0 ovn_controller[147040]: 2026-02-25T12:28:47Z|00565|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.091 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.092 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb02802c-9830-4926-bcc5-e1453699e2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.093 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:28:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.095 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'env', 'PROCESS_TAG=haproxy-9a448894-87d7-4c8e-a168-2593011ffed7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a448894-87d7-4c8e-a168-2593011ffed7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.147 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.147 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.223 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.222624, 8086400b-ac70-4c79-928b-4f1966084384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Started (Lifecycle Event)
Feb 25 12:28:47 compute-0 ceph-mon[76335]: pgmap v1400: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 12:28:47 compute-0 podman[299613]: 2026-02-25 12:28:47.504733768 +0000 UTC m=+0.062605597 container create 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.512 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.514 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.514 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.515 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.515 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Processing event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.516 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.516 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.517 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.517 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.518 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.518 244018 WARNING nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state building and task_state spawning.
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.521 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.522 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.530 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.537 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance spawned successfully.
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.539 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.544 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.552 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.2237253, 8086400b-ac70-4c79-928b-4f1966084384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.553 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Paused (Lifecycle Event)
Feb 25 12:28:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:28:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:28:47 compute-0 systemd[1]: Started libpod-conmon-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope.
Feb 25 12:28:47 compute-0 podman[299613]: 2026-02-25 12:28:47.471225687 +0000 UTC m=+0.029097576 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:28:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2584817c9d371ffcf257a70731a89404fd3bb1f9d05f4f6659b9234327334bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 192 op/s
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.929 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:47 compute-0 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.5282967, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Resumed (Lifecycle Event)
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.665 244018 INFO nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 10.69 seconds to spawn the instance on the hypervisor.
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.666 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:28:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:28:48 compute-0 podman[299613]: 2026-02-25 12:28:48.676937849 +0000 UTC m=+1.234809758 container init 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:48 compute-0 podman[299613]: 2026-02-25 12:28:48.685532302 +0000 UTC m=+1.243404141 container start 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:28:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:28:48 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : New worker (299634) forked
Feb 25 12:28:48 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : Loading success.
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.715 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.749 244018 INFO nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 11.85 seconds to build instance.
Feb 25 12:28:48 compute-0 nova_compute[244014]: 2026-02-25 12:28:48.782 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:49 compute-0 ceph-mon[76335]: pgmap v1401: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 192 op/s
Feb 25 12:28:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:28:49 compute-0 nova_compute[244014]: 2026-02-25 12:28:49.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 179 op/s
Feb 25 12:28:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:50 compute-0 ceph-mon[76335]: pgmap v1402: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 179 op/s
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.923 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.951 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.951 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Processing event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.954 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.954 244018 WARNING nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state building and task_state spawning.
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.955 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.957 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Releasing lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.958 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance network_info: |[{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.959 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022530.9581943, 8086400b-ac70-4c79-928b-4f1966084384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.959 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Resumed (Lifecycle Event)
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.960 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.961 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Refreshing network info cache for port ba138ed1-c811-4043-9bd6-e1a5c6127f84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.965 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start _get_guest_xml network_info=[{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.965 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.969 244018 INFO nova.virt.libvirt.driver [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance spawned successfully.
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.970 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.971 244018 WARNING nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.978 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.978 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.982 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:28:50 compute-0 nova_compute[244014]: 2026-02-25 12:28:50.989 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.027 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.040 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.045 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.046 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.047 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.047 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.048 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.049 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.072 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.106 244018 INFO nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 11.29 seconds to spawn the instance on the hypervisor.
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.107 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.179 244018 INFO nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 12.43 seconds to build instance.
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.205 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.319 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.320 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.321 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.321 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.322 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.324 244018 INFO nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Terminating instance
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.325 244018 DEBUG nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:28:51 compute-0 kernel: tap1b61d923-a0 (unregistering): left promiscuous mode
Feb 25 12:28:51 compute-0 NetworkManager[49836]: <info>  [1772022531.3593] device (tap1b61d923-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:28:51 compute-0 ovn_controller[147040]: 2026-02-25T12:28:51Z|00566|binding|INFO|Releasing lport 1b61d923-a06e-4948-b459-b6cf5b8c668d from this chassis (sb_readonly=0)
Feb 25 12:28:51 compute-0 ovn_controller[147040]: 2026-02-25T12:28:51Z|00567|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d down in Southbound
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 ovn_controller[147040]: 2026-02-25T12:28:51Z|00568|binding|INFO|Removing iface tap1b61d923-a0 ovn-installed in OVS
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.375 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:48:42 10.100.0.7'], port_security=['fa:16:3e:f6:48:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0406cfb3-4360-4052-8e1d-7019c4224092', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d46f1174f384dc3be789d4301748e2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a0b035a-6f2a-459e-b461-0e5414640091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4783117-d743-4200-8206-169bf90ef6ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1b61d923-a06e-4948-b459-b6cf5b8c668d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1b61d923-a06e-4948-b459-b6cf5b8c668d in datapath 0406cfb3-4360-4052-8e1d-7019c4224092 unbound from our chassis
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.378 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0406cfb3-4360-4052-8e1d-7019c4224092, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.381 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[455a5187-238e-43e2-9ed9-8e036422b959]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.382 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 namespace which is not needed anymore
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Feb 25 12:28:51 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Consumed 7.071s CPU time.
Feb 25 12:28:51 compute-0 systemd-machined[210048]: Machine qemu-71-instance-0000003e terminated.
Feb 25 12:28:51 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : haproxy version is 2.8.14-c23fe91
Feb 25 12:28:51 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : path to executable is /usr/sbin/haproxy
Feb 25 12:28:51 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [WARNING]  (299247) : Exiting Master process...
Feb 25 12:28:51 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [ALERT]    (299247) : Current worker (299249) exited with code 143 (Terminated)
Feb 25 12:28:51 compute-0 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [WARNING]  (299247) : All workers exited. Exiting... (0)
Feb 25 12:28:51 compute-0 systemd[1]: libpod-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope: Deactivated successfully.
Feb 25 12:28:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151926361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:51 compute-0 podman[299687]: 2026-02-25 12:28:51.53444313 +0000 UTC m=+0.056041880 container died 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.548 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e401b80a9939de6acae8801df5427e297dd65f2008b59d33321f3f6cd457d4f-merged.mount: Deactivated successfully.
Feb 25 12:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070-userdata-shm.mount: Deactivated successfully.
Feb 25 12:28:51 compute-0 podman[299687]: 2026-02-25 12:28:51.585663092 +0000 UTC m=+0.107261812 container cleanup 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.595 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.604 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:51 compute-0 systemd[1]: libpod-conmon-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope: Deactivated successfully.
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.639 244018 INFO nova.virt.libvirt.driver [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance destroyed successfully.
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.639 244018 DEBUG nova.objects.instance [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'resources' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.655 244018 DEBUG nova.virt.libvirt.vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:45Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.656 244018 DEBUG nova.network.os_vif_util [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.657 244018 DEBUG nova.network.os_vif_util [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.657 244018 DEBUG os_vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.660 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b61d923-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.668 244018 INFO os_vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0')
Feb 25 12:28:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1151926361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:51 compute-0 podman[299744]: 2026-02-25 12:28:51.762233019 +0000 UTC m=+0.149794568 container remove 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47526ffa-2eda-485b-9d0c-956c3cce5ca5]: (4, ('Wed Feb 25 12:28:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 (92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070)\n92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070\nWed Feb 25 12:28:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 (92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070)\n92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33bef22d-b901-4172-b8e8-d91be5ac89f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.770 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0406cfb3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:51 compute-0 kernel: tap0406cfb3-40: left promiscuous mode
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f55e6983-1dc8-4823-a33c-7127ae4bfb37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.793 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.795 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.795 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.798 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b14a8525-04c1-48df-9c07-d64274d5d805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76865ea3-eb54-4d66-ba7a-1b94925885c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fafa093e-8c40-49a2-8d3c-b3762e84c18d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449200, 'reachable_time': 24261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299799, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.814 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:28:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0efcc455-c5bf-4b42-95b2-c027f7902008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d0406cfb3\x2d4360\x2d4052\x2d8e1d\x2d7019c4224092.mount: Deactivated successfully.
Feb 25 12:28:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 408 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.956 244018 INFO nova.virt.libvirt.driver [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deleting instance files /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_del
Feb 25 12:28:51 compute-0 nova_compute[244014]: 2026-02-25 12:28:51.956 244018 INFO nova.virt.libvirt.driver [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deletion of /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_del complete
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.004 244018 INFO nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.005 244018 DEBUG oslo.service.loopingcall [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.005 244018 DEBUG nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.006 244018 DEBUG nova.network.neutron [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.139 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updated VIF entry in instance network info cache for port ba138ed1-c811-4043-9bd6-e1a5c6127f84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.140 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:28:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694668567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.219 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.224 244018 DEBUG nova.virt.libvirt.vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:41Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.224 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.225 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.228 244018 DEBUG nova.objects.instance [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'pci_devices' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.615 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.632 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <uuid>826789b1-e26a-4569-bd77-bd1ef76388be</uuid>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <name>instance-00000041</name>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerPasswordTestJSON-server-846051174</nova:name>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:28:50</nova:creationTime>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:user uuid="05ca7159581049009a4223cf01ebf146">tempest-ServerPasswordTestJSON-1533125897-project-member</nova:user>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:project uuid="be8db082f3894d28b63a3709be538262">tempest-ServerPasswordTestJSON-1533125897</nova:project>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <nova:port uuid="ba138ed1-c811-4043-9bd6-e1a5c6127f84">
Feb 25 12:28:52 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <system>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="serial">826789b1-e26a-4569-bd77-bd1ef76388be</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="uuid">826789b1-e26a-4569-bd77-bd1ef76388be</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </system>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <os>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </os>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <features>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </features>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/826789b1-e26a-4569-bd77-bd1ef76388be_disk">
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/826789b1-e26a-4569-bd77-bd1ef76388be_disk.config">
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </source>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:28:52 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:bd:a8:78"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <target dev="tapba138ed1-c8"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/console.log" append="off"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <video>
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </video>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:28:52 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:28:52 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:28:52 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:28:52 compute-0 nova_compute[244014]: </domain>
Feb 25 12:28:52 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.633 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Preparing to wait for external event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.633 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.634 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.634 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.635 244018 DEBUG nova.virt.libvirt.vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:41Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.636 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.636 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.637 244018 DEBUG os_vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.643 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba138ed1-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.643 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba138ed1-c8, col_values=(('external_ids', {'iface-id': 'ba138ed1-c811-4043-9bd6-e1a5c6127f84', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:a8:78', 'vm-uuid': '826789b1-e26a-4569-bd77-bd1ef76388be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:52 compute-0 NetworkManager[49836]: <info>  [1772022532.6465] manager: (tapba138ed1-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:52 compute-0 nova_compute[244014]: 2026-02-25 12:28:52.652 244018 INFO os_vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8')
Feb 25 12:28:52 compute-0 ceph-mon[76335]: pgmap v1403: 305 pgs: 305 active+clean; 408 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Feb 25 12:28:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3694668567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.230 244018 DEBUG nova.network.neutron [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.236 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.237 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.237 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No VIF found with MAC fa:16:3e:bd:a8:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.238 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Using config drive
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.264 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.273 244018 INFO nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Rebuilding instance
Feb 25 12:28:53 compute-0 nova_compute[244014]: 2026-02-25 12:28:53.276 244018 INFO nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 1.27 seconds to deallocate network for instance.
Feb 25 12:28:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 239 op/s
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.compute.manager [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.compute.manager [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.448 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.449 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.600 244018 DEBUG oslo_concurrency.processutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.676 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.696 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.744 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_requests' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.760 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.775 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.786 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.798 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:28:54 compute-0 nova_compute[244014]: 2026-02-25 12:28:54.802 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:28:54 compute-0 ceph-mon[76335]: pgmap v1404: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 239 op/s
Feb 25 12:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.190 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.190 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.191 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.191 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 WARNING nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received unexpected event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with vm_state deleted and task_state None.
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-deleted-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:28:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715193955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.247 244018 DEBUG oslo_concurrency.processutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.251 244018 DEBUG nova.compute.provider_tree [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.408 244018 DEBUG nova.scheduler.client.report [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.431 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.455 244018 INFO nova.scheduler.client.report [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Deleted allocations for instance 9630899b-57d8-4e46-b9e0-8762f0f4f2cb
Feb 25 12:28:55 compute-0 nova_compute[244014]: 2026-02-25 12:28:55.522 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 825 KiB/s wr, 209 op/s
Feb 25 12:28:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3715193955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.471 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating config drive at /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.476 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnjw6iv01 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.617 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnjw6iv01" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.652 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.656 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.798 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.800 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deleting local config drive /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config because it was imported into RBD.
Feb 25 12:28:56 compute-0 kernel: tapba138ed1-c8: entered promiscuous mode
Feb 25 12:28:56 compute-0 NetworkManager[49836]: <info>  [1772022536.8384] manager: (tapba138ed1-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Feb 25 12:28:56 compute-0 ovn_controller[147040]: 2026-02-25T12:28:56Z|00569|binding|INFO|Claiming lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 for this chassis.
Feb 25 12:28:56 compute-0 ovn_controller[147040]: 2026-02-25T12:28:56Z|00570|binding|INFO|ba138ed1-c811-4043-9bd6-e1a5c6127f84: Claiming fa:16:3e:bd:a8:78 10.100.0.14
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:56 compute-0 ovn_controller[147040]: 2026-02-25T12:28:56Z|00571|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 ovn-installed in OVS
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:56 compute-0 nova_compute[244014]: 2026-02-25 12:28:56.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:56 compute-0 systemd-udevd[299895]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:56 compute-0 NetworkManager[49836]: <info>  [1772022536.8790] device (tapba138ed1-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:28:56 compute-0 NetworkManager[49836]: <info>  [1772022536.8799] device (tapba138ed1-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:28:56 compute-0 ceph-mon[76335]: pgmap v1405: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 825 KiB/s wr, 209 op/s
Feb 25 12:28:57 compute-0 ovn_controller[147040]: 2026-02-25T12:28:57Z|00572|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 up in Southbound
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:78 10.100.0.14'], port_security=['fa:16:3e:bd:a8:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '826789b1-e26a-4569-bd77-bd1ef76388be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be8db082f3894d28b63a3709be538262', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c589948-8c4c-4f73-8545-f79b3c943dbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bcb3a436-cbdb-491a-80cb-92f630c9f966, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ba138ed1-c811-4043-9bd6-e1a5c6127f84) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.033 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ba138ed1-c811-4043-9bd6-e1a5c6127f84 in datapath 23ab8bc1-e701-454d-a828-42d18cbd9afc bound to our chassis
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.034 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23ab8bc1-e701-454d-a828-42d18cbd9afc
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5d9203-0d38-4fc3-a9b4-bfabd9b4f2fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.042 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23ab8bc1-e1 in ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.045 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23ab8bc1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.045 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfb4ea2-7f2b-433d-87d2-056384b08983]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.046 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38a97cbc-52c5-4acc-b214-b650cf8b4696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 systemd-machined[210048]: New machine qemu-74-instance-00000041.
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.054 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1b891dea-19cf-469e-88c2-c197ac6a4a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.065 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29f777-9e57-44d7-9a29-f05fe5a08751]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.085 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7be0200-6fc9-499e-852c-662f92f75e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 systemd-udevd[299897]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:28:57 compute-0 NetworkManager[49836]: <info>  [1772022537.0908] manager: (tap23ab8bc1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67c4db90-d8b2-4a93-a735-fe183c068849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.140 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[88b47e2c-51a2-46b6-ad59-4ab2cc58d46b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.143 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[656713b4-02df-4016-9376-e2403c25aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 NetworkManager[49836]: <info>  [1772022537.1667] device (tap23ab8bc1-e0): carrier: link connected
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.174 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c1227d-1d1a-4ccb-85c5-90774a29442f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.191 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09345465-07a8-4848-b151-07535beb377a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ab8bc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:bf:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450657, 'reachable_time': 39331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299931, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38d140f6-962d-493a-9cef-231aacaaf5b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:bf8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450657, 'tstamp': 450657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299932, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.221 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69f2069e-ad7d-4695-b9b3-70c00b94aa6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ab8bc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:bf:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450657, 'reachable_time': 39331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299933, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb23e843-3b3e-463a-8a86-fa74ee0c37b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.305 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f0b87b-fb3f-44d1-8b26-10e5bc0d7e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ab8bc1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.308 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23ab8bc1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 NetworkManager[49836]: <info>  [1772022537.3121] manager: (tap23ab8bc1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Feb 25 12:28:57 compute-0 kernel: tap23ab8bc1-e0: entered promiscuous mode
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.316 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23ab8bc1-e0, col_values=(('external_ids', {'iface-id': 'aa84a84b-80b1-4514-9f74-691107edfa38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 ovn_controller[147040]: 2026-02-25T12:28:57Z|00573|binding|INFO|Releasing lport aa84a84b-80b1-4514-9f74-691107edfa38 from this chassis (sb_readonly=0)
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.333 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fefb0849-2094-414a-b72c-4ea975c8475c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.334 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-23ab8bc1-e701-454d-a828-42d18cbd9afc
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 23ab8bc1-e701-454d-a828-42d18cbd9afc
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:28:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.335 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'env', 'PROCESS_TAG=haproxy-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23ab8bc1-e701-454d-a828-42d18cbd9afc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.358 244018 DEBUG nova.compute.manager [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.359 244018 DEBUG nova.compute.manager [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.359 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.627 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.636 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.655 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.656 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.657 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:28:57 compute-0 podman[299994]: 2026-02-25 12:28:57.602583836 +0000 UTC m=+0.023566469 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.712 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022537.7121496, 826789b1-e26a-4569-bd77-bd1ef76388be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.713 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Started (Lifecycle Event)
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.733 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.737 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022537.713391, 826789b1-e26a-4569-bd77-bd1ef76388be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Paused (Lifecycle Event)
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.759 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:57 compute-0 nova_compute[244014]: 2026-02-25 12:28:57.785 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 838 KiB/s wr, 249 op/s
Feb 25 12:28:58 compute-0 podman[299994]: 2026-02-25 12:28:58.265771262 +0000 UTC m=+0.686753865 container create 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:28:58 compute-0 ovn_controller[147040]: 2026-02-25T12:28:58Z|00574|binding|INFO|Releasing lport aa84a84b-80b1-4514-9f74-691107edfa38 from this chassis (sb_readonly=0)
Feb 25 12:28:58 compute-0 ovn_controller[147040]: 2026-02-25T12:28:58Z|00575|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:28:58 compute-0 ovn_controller[147040]: 2026-02-25T12:28:58Z|00576|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 12:28:58 compute-0 nova_compute[244014]: 2026-02-25 12:28:58.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:58 compute-0 systemd[1]: Started libpod-conmon-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope.
Feb 25 12:28:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767066c256d5f6bdcbbcdf1ad704e26891b6cc82d028e5003584f4e4aff9b4b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:28:58 compute-0 podman[299994]: 2026-02-25 12:28:58.794193777 +0000 UTC m=+1.215176370 container init 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:28:58 compute-0 podman[299994]: 2026-02-25 12:28:58.803197933 +0000 UTC m=+1.224180526 container start 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:28:58 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : New worker (300026) forked
Feb 25 12:28:58 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : Loading success.
Feb 25 12:28:59 compute-0 ceph-mon[76335]: pgmap v1406: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 838 KiB/s wr, 249 op/s
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.456 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.457 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.458 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.458 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Processing event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.461 244018 WARNING nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received unexpected event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with vm_state building and task_state spawning.
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.461 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.464 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022539.4643273, 826789b1-e26a-4569-bd77-bd1ef76388be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.464 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Resumed (Lifecycle Event)
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.480 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.482 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.486 244018 INFO nova.virt.libvirt.driver [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance spawned successfully.
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.486 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.848 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.849 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.850 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.850 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.851 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.851 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:28:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 166 op/s
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.946 244018 INFO nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 18.77 seconds to spawn the instance on the hypervisor.
Feb 25 12:28:59 compute-0 nova_compute[244014]: 2026-02-25 12:28:59.947 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:00 compute-0 nova_compute[244014]: 2026-02-25 12:29:00.027 244018 INFO nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 19.91 seconds to build instance.
Feb 25 12:29:00 compute-0 nova_compute[244014]: 2026-02-25 12:29:00.050 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:00 compute-0 nova_compute[244014]: 2026-02-25 12:29:00.619 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:00 compute-0 nova_compute[244014]: 2026-02-25 12:29:00.620 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:00 compute-0 nova_compute[244014]: 2026-02-25 12:29:00.641 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:01 compute-0 ovn_controller[147040]: 2026-02-25T12:29:01Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 12:29:01 compute-0 ovn_controller[147040]: 2026-02-25T12:29:01Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 12:29:01 compute-0 ceph-mon[76335]: pgmap v1407: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 166 op/s
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 390 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.3 MiB/s wr, 217 op/s
Feb 25 12:29:02 compute-0 sudo[300035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:29:02 compute-0 sudo[300035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:02 compute-0 sudo[300035]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:02 compute-0 sudo[300060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:29:02 compute-0 sudo[300060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:02 compute-0 ovn_controller[147040]: 2026-02-25T12:29:02Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 12:29:02 compute-0 ovn_controller[147040]: 2026-02-25T12:29:02Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.557 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.557 244018 INFO nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Terminating instance
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.558 244018 DEBUG nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:29:02 compute-0 kernel: tapba138ed1-c8 (unregistering): left promiscuous mode
Feb 25 12:29:02 compute-0 NetworkManager[49836]: <info>  [1772022542.5980] device (tapba138ed1-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 ovn_controller[147040]: 2026-02-25T12:29:02Z|00577|binding|INFO|Releasing lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 from this chassis (sb_readonly=0)
Feb 25 12:29:02 compute-0 ovn_controller[147040]: 2026-02-25T12:29:02Z|00578|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 down in Southbound
Feb 25 12:29:02 compute-0 ovn_controller[147040]: 2026-02-25T12:29:02Z|00579|binding|INFO|Removing iface tapba138ed1-c8 ovn-installed in OVS
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.618 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:78 10.100.0.14'], port_security=['fa:16:3e:bd:a8:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '826789b1-e26a-4569-bd77-bd1ef76388be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be8db082f3894d28b63a3709be538262', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c589948-8c4c-4f73-8545-f79b3c943dbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bcb3a436-cbdb-491a-80cb-92f630c9f966, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ba138ed1-c811-4043-9bd6-e1a5c6127f84) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.621 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ba138ed1-c811-4043-9bd6-e1a5c6127f84 in datapath 23ab8bc1-e701-454d-a828-42d18cbd9afc unbound from our chassis
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.622 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ab8bc1-e701-454d-a828-42d18cbd9afc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.624 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4569eeb-9dca-42fa-815c-cfa5c3655da3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.624 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc namespace which is not needed anymore
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Feb 25 12:29:02 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 3.819s CPU time.
Feb 25 12:29:02 compute-0 systemd-machined[210048]: Machine qemu-74-instance-00000041 terminated.
Feb 25 12:29:02 compute-0 sudo[300060]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : haproxy version is 2.8.14-c23fe91
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : path to executable is /usr/sbin/haproxy
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : Exiting Master process...
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : Exiting Master process...
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [ALERT]    (300024) : Current worker (300026) exited with code 143 (Terminated)
Feb 25 12:29:02 compute-0 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : All workers exited. Exiting... (0)
Feb 25 12:29:02 compute-0 systemd[1]: libpod-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope: Deactivated successfully.
Feb 25 12:29:02 compute-0 podman[300138]: 2026-02-25 12:29:02.743486619 +0000 UTC m=+0.040587492 container died 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-767066c256d5f6bdcbbcdf1ad704e26891b6cc82d028e5003584f4e4aff9b4b3-merged.mount: Deactivated successfully.
Feb 25 12:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6-userdata-shm.mount: Deactivated successfully.
Feb 25 12:29:02 compute-0 podman[300138]: 2026-02-25 12:29:02.784997746 +0000 UTC m=+0.082098639 container cleanup 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:02 compute-0 systemd[1]: libpod-conmon-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope: Deactivated successfully.
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.811 244018 INFO nova.virt.libvirt.driver [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance destroyed successfully.
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.812 244018 DEBUG nova.objects.instance [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'resources' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:29:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:29:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:29:02 compute-0 podman[300177]: 2026-02-25 12:29:02.861465314 +0000 UTC m=+0.054088684 container remove 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.867 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb33748-9392-46e6-993d-c6bf6fd41d1c]: (4, ('Wed Feb 25 12:29:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc (8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6)\n8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6\nWed Feb 25 12:29:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc (8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6)\n8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d54b366-adb1-446f-9138-2650d66b0356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.869 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ab8bc1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 sudo[300203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 kernel: tap23ab8bc1-e0: left promiscuous mode
Feb 25 12:29:02 compute-0 sudo[300203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:02 compute-0 podman[300161]: 2026-02-25 12:29:02.880529865 +0000 UTC m=+0.091895427 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:29:02 compute-0 sudo[300203]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:02 compute-0 nova_compute[244014]: 2026-02-25 12:29:02.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.884 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eca6ad54-3784-45db-8492-294dff622a53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8179e798-284e-4cd3-8d3b-71d55bfcefc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[580ab355-c336-43a7-9786-809eb26fd7f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc85f059-17cb-4b1a-ac4d-bd8d651a6dca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450649, 'reachable_time': 32336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300269, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d23ab8bc1\x2de701\x2d454d\x2da828\x2d42d18cbd9afc.mount: Deactivated successfully.
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.918 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:29:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.918 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a576c1b-5e64-4d72-8462-74ce4fc02edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:02 compute-0 sudo[300251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:29:02 compute-0 sudo[300251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:02 compute-0 podman[300176]: 2026-02-25 12:29:02.941097753 +0000 UTC m=+0.123168744 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.064 244018 DEBUG nova.virt.libvirt.vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:02Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.065 244018 DEBUG nova.network.os_vif_util [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.066 244018 DEBUG nova.network.os_vif_util [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.066 244018 DEBUG os_vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.069 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba138ed1-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.077 244018 INFO os_vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8')
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.173823751 +0000 UTC m=+0.023222459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.280055694 +0000 UTC m=+0.129454362 container create 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:29:03 compute-0 ceph-mon[76335]: pgmap v1408: 305 pgs: 305 active+clean; 390 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.3 MiB/s wr, 217 op/s
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:29:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:29:03 compute-0 systemd[1]: Started libpod-conmon-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope.
Feb 25 12:29:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.370123658 +0000 UTC m=+0.219522356 container init 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.382654733 +0000 UTC m=+0.232053411 container start 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.387065968 +0000 UTC m=+0.236464636 container attach 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:29:03 compute-0 funny_hofstadter[300332]: 167 167
Feb 25 12:29:03 compute-0 systemd[1]: libpod-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope: Deactivated successfully.
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.39099373 +0000 UTC m=+0.240392398 container died 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6727e85ca8755540978617833a2740aa501d9128825fd4a5eb532fe2a667ace-merged.mount: Deactivated successfully.
Feb 25 12:29:03 compute-0 podman[300316]: 2026-02-25 12:29:03.429720728 +0000 UTC m=+0.279119396 container remove 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:29:03 compute-0 systemd[1]: libpod-conmon-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope: Deactivated successfully.
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.515 244018 INFO nova.virt.libvirt.driver [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deleting instance files /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be_del
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.516 244018 INFO nova.virt.libvirt.driver [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deletion of /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be_del complete
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.532 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.532 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:29:03 compute-0 podman[300356]: 2026-02-25 12:29:03.550656257 +0000 UTC m=+0.034585341 container create 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.572 244018 INFO nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 1.01 seconds to destroy the instance on the hypervisor.
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.572 244018 DEBUG oslo.service.loopingcall [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.573 244018 DEBUG nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:29:03 compute-0 nova_compute[244014]: 2026-02-25 12:29:03.573 244018 DEBUG nova.network.neutron [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:29:03 compute-0 systemd[1]: Started libpod-conmon-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope.
Feb 25 12:29:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:03 compute-0 podman[300356]: 2026-02-25 12:29:03.537443113 +0000 UTC m=+0.021372207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:03 compute-0 podman[300356]: 2026-02-25 12:29:03.642026898 +0000 UTC m=+0.125956002 container init 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:29:03 compute-0 podman[300356]: 2026-02-25 12:29:03.646591788 +0000 UTC m=+0.130520862 container start 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:29:03 compute-0 podman[300356]: 2026-02-25 12:29:03.649228863 +0000 UTC m=+0.133157937 container attach 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:29:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Feb 25 12:29:04 compute-0 reverent_newton[300372]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:29:04 compute-0 reverent_newton[300372]: --> All data devices are unavailable
Feb 25 12:29:04 compute-0 systemd[1]: libpod-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope: Deactivated successfully.
Feb 25 12:29:04 compute-0 podman[300356]: 2026-02-25 12:29:04.073035851 +0000 UTC m=+0.556964935 container died 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14-merged.mount: Deactivated successfully.
Feb 25 12:29:04 compute-0 podman[300356]: 2026-02-25 12:29:04.124469689 +0000 UTC m=+0.608398803 container remove 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:04 compute-0 systemd[1]: libpod-conmon-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope: Deactivated successfully.
Feb 25 12:29:04 compute-0 sudo[300251]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:04 compute-0 sudo[300404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:29:04 compute-0 sudo[300404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:04 compute-0 sudo[300404]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:04 compute-0 sudo[300429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:29:04 compute-0 sudo[300429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.323 244018 DEBUG nova.network.neutron [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.343 244018 INFO nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 0.77 seconds to deallocate network for instance.
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.410 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.410 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.415 244018 DEBUG nova.compute.manager [req-dd486d5a-61d5-4390-aa67-961ac8607cec req-33bf3965-7d88-4686-aefd-d7dc3a445000 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-deleted-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.536453792 +0000 UTC m=+0.035438146 container create 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.570 244018 DEBUG oslo_concurrency.processutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:04 compute-0 systemd[1]: Started libpod-conmon-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope.
Feb 25 12:29:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.520630353 +0000 UTC m=+0.019614697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.623291805 +0000 UTC m=+0.122276219 container init 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.634932735 +0000 UTC m=+0.133917089 container start 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.638433324 +0000 UTC m=+0.137417728 container attach 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:04 compute-0 cranky_mccarthy[300483]: 167 167
Feb 25 12:29:04 compute-0 systemd[1]: libpod-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope: Deactivated successfully.
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.640508743 +0000 UTC m=+0.139493097 container died 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-72d8009d5776e297b7b27261bcc33f44c9e0c2e7322f85689a12fccea5c18c58-merged.mount: Deactivated successfully.
Feb 25 12:29:04 compute-0 podman[300466]: 2026-02-25 12:29:04.68731308 +0000 UTC m=+0.186297444 container remove 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:29:04 compute-0 systemd[1]: libpod-conmon-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope: Deactivated successfully.
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.857 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:04 compute-0 nova_compute[244014]: 2026-02-25 12:29:04.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:29:04 compute-0 podman[300528]: 2026-02-25 12:29:04.9048686 +0000 UTC m=+0.085959139 container create b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:29:04 compute-0 podman[300528]: 2026-02-25 12:29:04.84495108 +0000 UTC m=+0.026041669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:04 compute-0 systemd[1]: Started libpod-conmon-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope.
Feb 25 12:29:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:05 compute-0 podman[300528]: 2026-02-25 12:29:05.032975472 +0000 UTC m=+0.214065991 container init b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:29:05 compute-0 podman[300528]: 2026-02-25 12:29:05.04313408 +0000 UTC m=+0.224224579 container start b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:29:05 compute-0 podman[300528]: 2026-02-25 12:29:05.046202267 +0000 UTC m=+0.227292766 container attach b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:29:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374794373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.133 244018 DEBUG oslo_concurrency.processutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.142 244018 DEBUG nova.compute.provider_tree [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.149 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:29:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:05 compute-0 priceless_neumann[300545]: {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     "0": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "devices": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "/dev/loop3"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             ],
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_name": "ceph_lv0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_size": "21470642176",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "name": "ceph_lv0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "tags": {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_name": "ceph",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.crush_device_class": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.encrypted": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.objectstore": "bluestore",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_id": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.vdo": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.with_tpm": "0"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             },
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "vg_name": "ceph_vg0"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         }
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     ],
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     "1": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "devices": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "/dev/loop4"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             ],
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_name": "ceph_lv1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_size": "21470642176",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "name": "ceph_lv1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "tags": {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_name": "ceph",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.crush_device_class": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.encrypted": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.objectstore": "bluestore",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_id": "1",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.vdo": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.with_tpm": "0"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             },
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "vg_name": "ceph_vg1"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         }
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     ],
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     "2": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "devices": [
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "/dev/loop5"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             ],
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_name": "ceph_lv2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_size": "21470642176",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "name": "ceph_lv2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "tags": {
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.cluster_name": "ceph",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.crush_device_class": "",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.encrypted": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.objectstore": "bluestore",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osd_id": "2",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.vdo": "0",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:                 "ceph.with_tpm": "0"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             },
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "type": "block",
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:             "vg_name": "ceph_vg2"
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:         }
Feb 25 12:29:05 compute-0 priceless_neumann[300545]:     ]
Feb 25 12:29:05 compute-0 priceless_neumann[300545]: }
Feb 25 12:29:05 compute-0 ceph-mon[76335]: pgmap v1409: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Feb 25 12:29:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/374794373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:05 compute-0 systemd[1]: libpod-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope: Deactivated successfully.
Feb 25 12:29:05 compute-0 podman[300528]: 2026-02-25 12:29:05.379433577 +0000 UTC m=+0.560524116 container died b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a-merged.mount: Deactivated successfully.
Feb 25 12:29:05 compute-0 podman[300528]: 2026-02-25 12:29:05.4306612 +0000 UTC m=+0.611751709 container remove b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 12:29:05 compute-0 systemd[1]: libpod-conmon-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope: Deactivated successfully.
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.514 244018 DEBUG nova.scheduler.client.report [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:05 compute-0 sudo[300429]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.557 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.590 244018 INFO nova.scheduler.client.report [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Deleted allocations for instance 826789b1-e26a-4569-bd77-bd1ef76388be
Feb 25 12:29:05 compute-0 sudo[300568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:29:05 compute-0 sudo[300568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:05 compute-0 sudo[300568]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:05 compute-0 sudo[300593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:29:05 compute-0 sudo[300593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.740 244018 DEBUG nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.740 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:05 compute-0 nova_compute[244014]: 2026-02-25 12:29:05.742 244018 WARNING nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received unexpected event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with vm_state deleted and task_state None.
Feb 25 12:29:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Feb 25 12:29:05 compute-0 podman[300631]: 2026-02-25 12:29:05.995846707 +0000 UTC m=+0.068104552 container create f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:05.958770726 +0000 UTC m=+0.031028641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:06 compute-0 nova_compute[244014]: 2026-02-25 12:29:06.253 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:06 compute-0 systemd[1]: Started libpod-conmon-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope.
Feb 25 12:29:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:06.348504058 +0000 UTC m=+0.420762003 container init f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:06.357785841 +0000 UTC m=+0.430043716 container start f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:06 compute-0 stupefied_shockley[300648]: 167 167
Feb 25 12:29:06 compute-0 systemd[1]: libpod-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope: Deactivated successfully.
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:06.481043416 +0000 UTC m=+0.553301281 container attach f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:06.481734996 +0000 UTC m=+0.553992861 container died f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:06 compute-0 nova_compute[244014]: 2026-02-25 12:29:06.636 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022531.5631955, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:06 compute-0 nova_compute[244014]: 2026-02-25 12:29:06.636 244018 INFO nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Stopped (Lifecycle Event)
Feb 25 12:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52a3132975f3e9e1b76132660b628d327853c2e9e2ce16428a735dbbd0b0a78-merged.mount: Deactivated successfully.
Feb 25 12:29:06 compute-0 nova_compute[244014]: 2026-02-25 12:29:06.669 244018 DEBUG nova.compute.manager [None req-351521fe-e0b9-4fde-bc76-de32c1dfb050 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:06 compute-0 podman[300631]: 2026-02-25 12:29:06.805868267 +0000 UTC m=+0.878126112 container remove f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 12:29:06 compute-0 systemd[1]: libpod-conmon-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope: Deactivated successfully.
Feb 25 12:29:07 compute-0 podman[300674]: 2026-02-25 12:29:07.012159216 +0000 UTC m=+0.079680370 container create b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:29:07 compute-0 podman[300674]: 2026-02-25 12:29:06.964072293 +0000 UTC m=+0.031593517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:29:07 compute-0 systemd[1]: Started libpod-conmon-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope.
Feb 25 12:29:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:07 compute-0 podman[300674]: 2026-02-25 12:29:07.190234426 +0000 UTC m=+0.257755570 container init b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:29:07 compute-0 podman[300674]: 2026-02-25 12:29:07.200403495 +0000 UTC m=+0.267924619 container start b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:29:07 compute-0 podman[300674]: 2026-02-25 12:29:07.223538191 +0000 UTC m=+0.291059325 container attach b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:29:07 compute-0 kernel: tap9848fa6c-0a (unregistering): left promiscuous mode
Feb 25 12:29:07 compute-0 NetworkManager[49836]: <info>  [1772022547.2422] device (tap9848fa6c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 ovn_controller[147040]: 2026-02-25T12:29:07Z|00580|binding|INFO|Releasing lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c from this chassis (sb_readonly=0)
Feb 25 12:29:07 compute-0 ovn_controller[147040]: 2026-02-25T12:29:07Z|00581|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c down in Southbound
Feb 25 12:29:07 compute-0 ovn_controller[147040]: 2026-02-25T12:29:07Z|00582|binding|INFO|Removing iface tap9848fa6c-0a ovn-installed in OVS
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.267 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.270 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a898f40c-a1b6-4907-88d7-3c835f4d7e65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 25 12:29:07 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Consumed 12.385s CPU time.
Feb 25 12:29:07 compute-0 systemd-machined[210048]: Machine qemu-72-instance-0000003f terminated.
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.318 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebb7dd3-8377-4a2c-b8f5-7aedc6b7d972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.322 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[84fb996b-d0c0-4ea7-ac6b-3d9efb6a81bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.347 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03748ac3-684f-4b0b-817e-7dcc8adcd6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69043fba-6a79-4d71-880a-e237cacfa55e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300708, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6ba236-3b3e-4c76-bbce-d731a1b93811]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300709, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300709, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.379 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.386 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.386 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:07 compute-0 ceph-mon[76335]: pgmap v1410: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 lvm[300792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:29:07 compute-0 lvm[300792]: VG ceph_vg0 finished
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.873 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance shutdown successfully after 13 seconds.
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.878 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.882 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.
Feb 25 12:29:07 compute-0 lvm[300794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:29:07 compute-0 lvm[300794]: VG ceph_vg1 finished
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.883 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:52Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.883 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.884 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.885 244018 DEBUG os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.887 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9848fa6c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:07 compute-0 nova_compute[244014]: 2026-02-25 12:29:07.892 244018 INFO os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')
Feb 25 12:29:07 compute-0 lvm[300795]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:29:07 compute-0 lvm[300795]: VG ceph_vg2 finished
Feb 25 12:29:07 compute-0 quirky_golick[300691]: {}
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.000 244018 DEBUG nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.001 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 WARNING nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuilding.
Feb 25 12:29:08 compute-0 systemd[1]: libpod-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Deactivated successfully.
Feb 25 12:29:08 compute-0 systemd[1]: libpod-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Consumed 1.151s CPU time.
Feb 25 12:29:08 compute-0 podman[300674]: 2026-02-25 12:29:08.032583003 +0000 UTC m=+1.100104157 container died b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d-merged.mount: Deactivated successfully.
Feb 25 12:29:08 compute-0 podman[300674]: 2026-02-25 12:29:08.07336505 +0000 UTC m=+1.140886204 container remove b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:29:08 compute-0 systemd[1]: libpod-conmon-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Deactivated successfully.
Feb 25 12:29:08 compute-0 sudo[300593]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:29:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:29:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:08 compute-0 sudo[300828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:29:08 compute-0 sudo[300828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:29:08 compute-0 sudo[300828]: pam_unix(sudo:session): session closed for user root
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.174 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting instance files /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.175 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deletion of /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del complete
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.338 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.339 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating image(s)
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.360 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.389 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.428 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.433 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.523 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.525 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.526 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.526 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.553 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.556 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.862 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:08 compute-0 nova_compute[244014]: 2026-02-25 12:29:08.945 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.038 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.039 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ensure instance console log exists: /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.039 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.040 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.040 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.043 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start _get_guest_xml network_info=[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.047 244018 WARNING nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.054 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.054 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.065 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.065 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.066 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.067 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.099 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:09 compute-0 ceph-mon[76335]: pgmap v1411: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Feb 25 12:29:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.132 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.244 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.245 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.253 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.253 244018 INFO nova.compute.claims [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.398 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579413670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.641 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.676 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.684 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Feb 25 12:29:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169542644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.979 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:09 compute-0 nova_compute[244014]: 2026-02-25 12:29:09.984 244018 DEBUG nova.compute.provider_tree [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.002 244018 DEBUG nova.scheduler.client.report [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.032 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.033 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.087 244018 DEBUG nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.088 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.088 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.089 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.089 244018 DEBUG nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.090 244018 WARNING nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuild_spawning.
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.097 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.098 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.120 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:29:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3579413670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3169542644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.140 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:29:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641828908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.199 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.200 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:08Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.201 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.201 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.205 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <uuid>ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</uuid>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <name>instance-0000003f</name>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1953116819</nova:name>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:09</nova:creationTime>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <nova:port uuid="9848fa6c-0a42-4cac-a3d8-2b90d5b7920c">
Feb 25 12:29:10 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="serial">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="uuid">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk">
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config">
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:2e:3b:33"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <target dev="tap9848fa6c-0a"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log" append="off"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:10 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:10 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:10 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:10 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:10 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
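[editor's note] The XML between `End _get_guest_xml xml=` and `</domain>` is the complete libvirt definition for the rebuilt guest: an RBD-backed root disk plus a config-drive CD-ROM, an OVS tap interface with MTU 1442, and a q35 machine with 24 pcie-root-port controllers for hotplug headroom. For offline inspection of such a dump, a small stdlib sketch (assuming the `<domain>` element has been saved to a hypothetical `domain.xml`):

```python
# Sketch: offline inspection of the dumped <domain> definition.
# 'domain.xml' is a hypothetical local copy of the XML shown above.
import xml.etree.ElementTree as ET

root = ET.parse('domain.xml').getroot()
assert root.tag == 'domain' and root.get('type') == 'kvm'

print('name  :', root.findtext('name'))            # instance-0000003f
print('memory:', root.findtext('memory'), 'KiB')   # 131072

# Both disks are network-backed: rbd -> vda (root), rbd -> sda (config drive).
for disk in root.findall('./devices/disk'):
    src, tgt = disk.find('source'), disk.find('target')
    print(disk.get('device'), src.get('protocol'), '->', tgt.get('dev'))
```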
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Preparing to wait for external event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.208 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
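[editor's note] The acquire/release pair around `_create_or_get_event` is `oslo_concurrency.lockutils` guarding the per-instance event table while a waiter for `network-vif-plugged` is registered. The same pattern in miniature (the lock name copies the `<uuid>-events` convention from the log; `_events` is a hypothetical stand-in for Nova's real bookkeeping):

```python
# Sketch: the lockutils pattern behind the "-events" lock lines above.
from oslo_concurrency import lockutils

_events = {}  # hypothetical stand-in for InstanceEvents' event table

@lockutils.synchronized('ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events')
def _create_or_get_event(name, tag):
    # Body runs with the named semaphore held; lockutils itself emits the
    # "Acquiring"/"acquired"/"released" DEBUG lines seen in the log.
    return _events.setdefault((name, tag), object())
```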
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.208 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:08Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.209 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.210 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.210 244018 DEBUG os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
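[editor's note] At this point control passes to the os-vif library proper ("Plugging vif ..."). Its public surface is small; a hedged sketch of invoking it directly, reusing the `VIFOpenVSwitch` object sketched earlier (instance values copied from the log):

```python
# Sketch: driving os-vif's public entry points directly.
import os_vif
from os_vif.objects import instance_info

os_vif.initialize()  # loads the 'ovs' plugin, among others

info = instance_info.InstanceInfo(
    uuid='ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
    name='instance-0000003f')

# 'vif' is the VIFOpenVSwitch object from the earlier sketch; plug() is
# what produces the "Successfully plugged vif ..." INFO line further down.
os_vif.plug(vif, info)
```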
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9848fa6c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9848fa6c-0a, col_values=(('external_ids', {'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:3b:33', 'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
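[editor's note] The `AddBridgeCommand`, `AddPortCommand`, and `DbSetCommand` records are ovsdbapp IDL transactions against the local OVSDB: ensure `br-int` exists, add the tap port, and tag its `Interface` row with the Neutron `iface-id` so ovn-controller can bind it. A sketch of issuing the same commands through ovsdbapp's Open vSwitch schema API (the unix-socket endpoint is an assumption; os-vif manages this connection itself):

```python
# Sketch: replaying the ovsdbapp transaction logged above.
# The unix-socket endpoint below is an assumption about the local OVSDB.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

external_ids = {
    'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:2e:3b:33',
    'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
}

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True,
                       datapath_type='system'))            # AddBridgeCommand
    txn.add(api.add_port('br-int', 'tap9848fa6c-0a',
                         may_exist=True))                   # AddPortCommand
    txn.add(api.db_set('Interface', 'tap9848fa6c-0a',
                       ('external_ids', external_ids)))     # DbSetCommand
```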
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:10 compute-0 NetworkManager[49836]: <info>  [1772022550.2184] manager: (tap9848fa6c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.224 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.226 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.226 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating image(s)
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.247 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.265 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.286 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.289 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.322 244018 INFO os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.355 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
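[editor's note] The `qemu-img info` probe on the cached base image runs under `oslo_concurrency.prlimit`, capping address space at 1 GiB and CPU time at 30 s so a corrupt or hostile image cannot wedge the compute service. Roughly the same invocation through `processutils` (path copied from the log):

```python
# Sketch: the resource-limited qemu-img probe logged above.
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1073741824,  # --as=1073741824
    cpu_time=30)               # --cpu=30

out, _err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
    '--force-share', '--output=json',
    prlimit=limits,
    env_variables={'LC_ALL': 'C', 'LANG': 'C'})
```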
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.356 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.357 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.358 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.385 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.389 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 abe229eb-2238-4237-a7f2-83b8476ac1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.422 244018 DEBUG nova.policy [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bcb4ded096bc4f7993f96ca892b82333', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
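[editor's note] The `Policy check for network:attach_external_network failed` DEBUG record means these member-role credentials do not satisfy the rule, so the build may only proceed against non-external networks. Checks like this go through oslo.policy; a minimal sketch (the default rule string here is an illustrative assumption, not Nova's registered default):

```python
# Sketch: an oslo.policy check analogous to the one logged above.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
# Illustrative default only; Nova registers its real rules in nova.policies.
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'is_admin:True'))

creds = {'roles': ['reader', 'member'], 'is_admin': False,
         'project_id': '780a93a8758a4bd78b22fe68ed6276cf',
         'user_id': 'bcb4ded096bc4f7993f96ca892b82333'}
print(enforcer.authorize('network:attach_external_network',
                         {}, creds, do_raise=False))  # -> False
```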
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.446 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.446 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.447 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:2e:3b:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.447 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Using config drive
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.469 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.487 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.516 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'keypairs' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.675 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 abe229eb-2238-4237-a7f2-83b8476ac1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.741 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] resizing rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
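[editor's note] `rbd import` followed by the resize to 1073741824 bytes realizes the 1 GiB root disk for flavor m1.nano on the Ceph backend. The same two steps can be sketched with the python-rbd bindings instead of the CLI (whole-file read kept deliberately naive; names and paths are copied from the log):

```python
# Sketch: "rbd import" plus the resize, via the python-rbd bindings.
import rados
import rbd

BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
NAME = 'abe229eb-2238-4237-a7f2-83b8476ac1dc_disk'

with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with open(BASE, 'rb') as f:           # whole-file read: fine for a
            data = f.read()                   # sketch, not for large images
        rbd.RBD().create(ioctx, NAME, len(data))  # format 2 is the default
        with rbd.Image(ioctx, NAME) as image:
            image.write(data, 0)
            image.resize(1073741824)          # flavor root_gb = 1 GiB
```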
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.784 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating config drive at /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config
Feb 25 12:29:10 compute-0 ovn_controller[147040]: 2026-02-25T12:29:10Z|00583|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:29:10 compute-0 ovn_controller[147040]: 2026-02-25T12:29:10Z|00584|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.789 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbfsah201 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.851 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.858 244018 DEBUG nova.objects.instance [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'migration_context' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.874 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.875 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Ensure instance console log exists: /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.875 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.876 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.876 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.929 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbfsah201" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.960 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:10 compute-0 nova_compute[244014]: 2026-02-25 12:29:10.964 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.114 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.115 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting local config drive /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config because it was imported into RBD.
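[editor's note] So the config-drive lifecycle is: build an ISO9660 image labelled `config-2` from a staged metadata tree with mkisofs, `rbd import` it as `<uuid>_disk.config` (the CD-ROM source in the domain XML above), then delete the local copy. The generation step, replayed through `processutils` with the arguments from the log (one argv element per flag; the `-publisher` value is a single argument even though the logged command line shows it unquoted):

```python
# Sketch: the mkisofs invocation from the log.
from oslo_concurrency import processutils

processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
    '-quiet', '-J', '-r',
    '-V', 'config-2',     # the volume label cloud-init probes for
    '/tmp/tmpbfsah201')   # staging dir holding the metadata tree
```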
Feb 25 12:29:11 compute-0 ceph-mon[76335]: pgmap v1412: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Feb 25 12:29:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3641828908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:11 compute-0 kernel: tap9848fa6c-0a: entered promiscuous mode
Feb 25 12:29:11 compute-0 NetworkManager[49836]: <info>  [1772022551.1689] manager: (tap9848fa6c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Feb 25 12:29:11 compute-0 ovn_controller[147040]: 2026-02-25T12:29:11Z|00585|binding|INFO|Claiming lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for this chassis.
Feb 25 12:29:11 compute-0 ovn_controller[147040]: 2026-02-25T12:29:11Z|00586|binding|INFO|9848fa6c-0a42-4cac-a3d8-2b90d5b7920c: Claiming fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.178 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:11 compute-0 ovn_controller[147040]: 2026-02-25T12:29:11Z|00587|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c ovn-installed in OVS
Feb 25 12:29:11 compute-0 ovn_controller[147040]: 2026-02-25T12:29:11Z|00588|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c up in Southbound
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.186 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.189 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
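[editor's note] The `Matched UPDATE: PortBindingUpdatedEvent` record above shows how the metadata agent notices the claim: an ovsdbapp row event watching the southbound `Port_Binding` table fires when the port gains a chassis, and the agent then provisions metadata for the datapath. A minimal sketch of such an event class (the match condition is simplified to "chassis became set"):

```python
# Sketch: an ovsdbapp row event like the PortBindingUpdatedEvent above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on updates to the southbound Port_Binding table, with no
        # static column conditions (conditions=None, as in the log).
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortBindingUpdatedEvent'

    def match_fn(self, event, row, old):
        # Simplified: react only when the port gains a chassis binding.
        return bool(row.chassis) and not getattr(old, 'chassis', None)

    def run(self, event, row, old):
        print('Port %s bound to our chassis' % row.logical_port)
```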
Feb 25 12:29:11 compute-0 systemd-udevd[301342]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:11 compute-0 systemd-machined[210048]: New machine qemu-75-instance-0000003f.
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc710d1d-46cc-4a99-a687-877269b74952]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:11 compute-0 NetworkManager[49836]: <info>  [1772022551.2105] device (tap9848fa6c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:11 compute-0 NetworkManager[49836]: <info>  [1772022551.2121] device (tap9848fa6c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:11 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-0000003f.
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.236 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5774e92b-2c27-437f-adfd-c72dd827cfc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.239 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8e103f64-b7ed-45b8-90f7-c6d0ec528291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[24b4d2b1-81e4-4448-af35-7a21a2aded92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3708e6e2-8e87-4274-a93c-d8396376ee52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301355, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[316699bf-3e00-49fa-92e0-4b20d891de3e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301357, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301357, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
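[editor's note] The `privsep: reply[...]` dumps above are the unprivileged agent receiving serialized netlink results (`RTM_NEWLINK` and `RTM_NEWADDR` for `tapce318891-c1` inside the `ovnmeta-...` namespace) from its privsep helper. The oslo.privsep pattern behind them, in miniature (context name and capability set are illustrative assumptions; the real privileged routines live in neutron.privileged):

```python
# Sketch: the oslo.privsep pattern behind the reply[...] lines above.
# The context below is illustrative; real routines live in neutron.privileged.
from oslo_privsep import capabilities
from oslo_privsep import priv_context

ns_admin = priv_context.PrivContext(
    __name__, cfg_section='privsep',
    capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN])

@ns_admin.entrypoint
def get_link(ifname):
    # Executes in the privileged daemon; the (4, <result>) tuples in the
    # log are its serialized replies to the unprivileged caller.
    import pyroute2
    with pyroute2.IPRoute() as ipr:
        index = ipr.link_lookup(ifname=ifname)[0]
        return ipr.get_links(index)[0]
```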
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.304 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.308 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:11 compute-0 nova_compute[244014]: 2026-02-25 12:29:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 411 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 294 op/s
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.214 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.215 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.215 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.216 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.217 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Processing event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.217 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.218 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.218 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.219 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.219 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.220 244018 WARNING nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuild_spawning.
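The eleven lines above trace one external event end to end: receive network-vif-plugged, take the per-instance "-events" lock (the acquire/acquired/released trio at lockutils.py:404/409/423), find no registered waiter, and log the "unexpected event" warning. A minimal stdlib sketch of that pattern, with hypothetical names rather than Nova's actual code:

    # Illustrative sketch only: a stdlib approximation of the per-instance
    # event-lock pattern logged above. Not Nova's implementation.
    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()      # stands in for the '<uuid>-events' named lock
            self._events = defaultdict(dict)   # instance uuid -> {event name: waiter}

        def pop_instance_event(self, instance_uuid, event_name):
            # Acquire, pop, release -- the trio logged at lockutils.py:404/409/423.
            with self._lock:
                return self._events[instance_uuid].pop(event_name, None)

    events = InstanceEvents()
    # No waiter was registered, so this returns None -- the "No waiting events
    # found" / "Received unexpected event" case in the log above.
    print(events.pop_instance_event(
        'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
        'network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c'))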
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.396 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Successfully created port: ab721623-aa6d-494f-8b90-6ffd63b7a33f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for ee9cd98b-1ca6-48e7-aa44-a09caf048a1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4099329, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Started (Lifecycle Event)
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.413 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.417 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.421 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance spawned successfully.
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.421 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.431 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.436 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.445 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.445 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.448 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.448 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.449 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.449 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.481 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4126215, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Paused (Lifecycle Event)
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.515 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.520 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4160485, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.521 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Resumed (Lifecycle Event)
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.525 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.548 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.551 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.576 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
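The "Skip" above is Nova declining to reconcile power state while a task is in flight: with task_state rebuild_spawning, sync_power_state defers rather than fight the rebuild. The guard the log describes reduces to a one-line check; a simplified sketch (field name from the log, logic condensed, not Nova's code):

    # Simplified sketch of the decision behind "During sync_power_state the
    # instance has a pending task (rebuild_spawning). Skip."
    def should_sync_power_state(task_state):
        # Any pending task defers the sync; only a steady-state instance
        # (task_state is None) gets reconciled against the hypervisor.
        return task_state is None

    print(should_sync_power_state('rebuild_spawning'))  # False -> Skip
    print(should_sync_power_state(None))                # True  -> reconcile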
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.587 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.588 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.588 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.642 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:12.668 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:12.670 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:29:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6637 writes, 29K keys, 6637 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6637 writes, 6637 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1781 writes, 8156 keys, 1781 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s
                                           Interval WAL: 1781 writes, 1781 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.4      0.36              0.09        16    0.023       0      0       0.0       0.0
                                             L6      1/0    8.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    130.2    106.3      1.03              0.30        15    0.068     70K   8315       0.0       0.0
                                            Sum      1/0    8.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     96.0    101.9      1.39              0.40        31    0.045     70K   8315       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    166.6    171.4      0.24              0.11         8    0.030     22K   2558       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.2    106.3      1.03              0.30        15    0.068     70K   8315       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.3      0.36              0.09        15    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.4 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 15.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000134 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(989,15.28 MB,5.02476%) FilterBlock(32,203.17 KB,0.0652665%) IndexBlock(32,371.19 KB,0.119239%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
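The interval figures in the DB Stats block above are self-consistent: 10.59 MB ingested over the 600-second interval rounds to the reported 0.02 MB/s, and the cumulative 0.04 GB over 2400 seconds gives the same rate. A quick check using only the numbers printed above:

    # Sanity-check the rocksdb rates printed in the stats dump above.
    interval_secs, interval_ingest_mb = 600.0, 10.59
    print(round(interval_ingest_mb / interval_secs, 2))   # 0.02 MB/s, as logged
    cumulative_secs, cumulative_ingest_gb = 2400.0, 0.04
    print(round(cumulative_ingest_gb * 1024 / cumulative_secs, 2))  # also 0.02 MB/s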
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:29:12 compute-0 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:13 compute-0 ceph-mon[76335]: pgmap v1413: 305 pgs: 305 active+clean; 411 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 294 op/s
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088956665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.492 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
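Nova gathers Ceph capacity by shelling out to ceph df (a 0.587 s round trip above, dispatched by the mon leader two lines earlier). A minimal stdlib stand-in for that processutils call, with the exact command line copied from the log; oslo's version layers logging and error handling on top of essentially this:

    # Minimal stdlib stand-in for the oslo_concurrency.processutils call
    # logged above; command line copied verbatim from the log.
    import json, shlex, subprocess, time

    cmd = 'ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf'
    start = time.monotonic()
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    elapsed = time.monotonic() - start
    print(f'returned: {result.returncode} in {elapsed:.3f}s')  # cf. "returned: 0 in 0.587s"
    if result.returncode == 0:
        stats = json.loads(result.stdout)
        # Cluster-wide free bytes; key name as in recent Ceph releases.
        print(stats['stats']['total_avail_bytes'])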
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.594 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.594 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.597 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.597 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:13.671 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.719 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.85440388228744GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.6 MiB/s wr, 271 op/s
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ee9cd98b-1ca6-48e7-aa44-a09caf048a1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8086400b-ac70-4c79-928b-4f1966084384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance abe229eb-2238-4237-a7f2-83b8476ac1dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:29:13 compute-0 nova_compute[244014]: 2026-02-25 12:29:13.993 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4088956665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763794133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.545 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.549 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.577 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
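The inventory dict above determines each resource class's schedulable capacity in placement as (total - reserved) × allocation_ratio: with these numbers the node can be scheduled up to 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked out from the logged values:

    # Effective placement capacity from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g}')   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2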
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.603 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.603 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.695 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Successfully updated port: ab721623-aa6d-494f-8b90-6ffd63b7a33f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.710 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.711 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.711 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.822 244018 DEBUG nova.compute.manager [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.823 244018 DEBUG nova.compute.manager [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.823 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:14 compute-0 nova_compute[244014]: 2026-02-25 12:29:14.917 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:29:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:15 compute-0 ceph-mon[76335]: pgmap v1414: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.6 MiB/s wr, 271 op/s
Feb 25 12:29:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3763794133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:15 compute-0 nova_compute[244014]: 2026-02-25 12:29:15.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:15 compute-0 nova_compute[244014]: 2026-02-25 12:29:15.604 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 3.9 MiB/s wr, 144 op/s
Feb 25 12:29:15 compute-0 nova_compute[244014]: 2026-02-25 12:29:15.992 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.019 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.020 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance network_info: |[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.021 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.022 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.027 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start _get_guest_xml network_info=[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.033 244018 WARNING nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.038 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.039 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.045 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.045 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
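The hardware.py lines above walk from flavor and image limits (all unset, hence the 0:0:0 preferences and 65536 default maxima) to the single viable topology for one vCPU. A simplified enumeration in the same spirit — factorizations of the vCPU count into (sockets, cores, threads) within the maxima — though not nova.virt.hardware itself:

    # Simplified topology enumeration: all (sockets, cores, threads) triples
    # whose product equals the vCPU count, within the logged 65536 maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the one topology logged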
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.050 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.466 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.466 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.469 244018 INFO nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Terminating instance
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.470 244018 DEBUG nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:29:16 compute-0 kernel: tap9848fa6c-0a (unregistering): left promiscuous mode
Feb 25 12:29:16 compute-0 NetworkManager[49836]: <info>  [1772022556.5451] device (tap9848fa6c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 ovn_controller[147040]: 2026-02-25T12:29:16Z|00589|binding|INFO|Releasing lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c from this chassis (sb_readonly=0)
Feb 25 12:29:16 compute-0 ovn_controller[147040]: 2026-02-25T12:29:16Z|00590|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c down in Southbound
Feb 25 12:29:16 compute-0 ovn_controller[147040]: 2026-02-25T12:29:16Z|00591|binding|INFO|Removing iface tap9848fa6c-0a ovn-installed in OVS
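The three ovn-controller lines record the chassis releasing the logical port once its tap device leaves br-int: the Southbound Port_Binding row is set down and the ovn-installed marker is removed from OVS. One way to observe that state from Python is to query the Southbound table via the ovn-sbctl client; a sketch assuming the OVN tools are installed and can reach the SB database (the lport UUID is taken from the log):

    import subprocess

    LPORT = '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c'

    # Ask the Southbound DB whether the logical port is still bound anywhere.
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis,up', 'find',
         'Port_Binding', 'logical_port=' + LPORT],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out or 'port not found')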
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.565 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.568 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.572 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
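The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's event machinery testing a Southbound Port_Binding update against a registered row event; on a match the metadata agent re-provisions the network's namespace. A simplified sketch of such an event class, assuming only python3-ovsdbapp; the class name and the chassis test are illustrative, not neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPortBindingEvent(row_event.RowEvent):
        def __init__(self):
            # Fire only on updates to Southbound Port_Binding rows.
            super().__init__(('update',), 'Port_Binding', None)
            self.event_name = 'ChassisPortBindingEvent'

        def match_fn(self, event, row, old):
            # Only care when the chassis column changed, i.e. the port was
            # bound to or unbound from a chassis (simplified stand-in).
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s changed chassis' % row.logical_port)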
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2943d324-7220-4d23-8446-79e7e689f602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 25 12:29:16 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003f.scope: Consumed 5.326s CPU time.
Feb 25 12:29:16 compute-0 systemd-machined[210048]: Machine qemu-75-instance-0000003f terminated.
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.621 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[76fe03af-147b-475b-a712-7623f4d8b215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.625 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[00d89bfe-e20e-407f-b9f7-f445f867625f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.649 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[206aad5a-3684-4a38-8f26-daf7604f356e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5645f3a6-3506-4b74-bd53-465e6c3b73d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301477, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.676 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3893f223-db96-452e-b6d0-e48ca0a03aeb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301478, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301478, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.678 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.685 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
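The DelPortCommand/AddPortCommand/DbSetCommand lines above are ovsdbapp transactions against the local Open vSwitch database; commands that would not change anything commit as "Transaction caused no change". A sketch of issuing the same three commands through ovsdbapp's OVS schema API, assuming python3-ovsdbapp and an ovsdb-server on the default unix socket (port and iface-id values copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Batch several commands into one transaction, as the agent does above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapce318891-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapce318891-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapce318891-c0',
            ('external_ids',
             {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'})))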
Feb 25 12:29:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728083525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.701 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.702 244018 DEBUG nova.objects.instance [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.719 244018 DEBUG nova.virt.libvirt.vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:12Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.719 244018 DEBUG nova.network.os_vif_util [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.720 244018 DEBUG nova.network.os_vif_util [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.720 244018 DEBUG os_vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
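The Converting VIF / Converted object / Unplugging vif sequence above is nova translating its network-info dict into an os-vif VIFOpenVSwitch object and handing it to the ovs plugin. A compressed sketch of that public entry point, assuming python3-os-vif; the objects carry only the fields visible in the log, and actually running it needs a local ovsdb just like the agent:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    net = network.Network(id='ce318891-cf3c-4d99-af7c-c01770f38194',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='9848fa6c-0a42-4cac-a3d8-2b90d5b7920c',
        address='fa:16:3e:2e:3b:33',
        network=net,
        vif_name='tap9848fa6c-0a',
        bridge_name='br-int',
    )
    inst = instance_info.InstanceInfo(
        uuid='ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
        name='instance-0000003f',
    )

    # Produces the DelPortCommand transaction logged a few lines below.
    os_vif.unplug(ovs_vif, inst)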
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9848fa6c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.742 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
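The "rbd image ... does not exist" line is nova's rbd_utils probing Ceph for a config-drive image before deciding how to build it. The same existence check with the upstream rados/rbd Python bindings, assuming python3-rados and python3-rbd plus the client.openstack keyring from the log:

    import rados
    import rbd

    def rbd_image_exists(pool, name):
        # Rados, Ioctx and Image are all context managers; ImageNotFound is
        # the signal rbd_utils turns into the DEBUG line above.
        with rados.Rados(conffile='/etc/ceph/ceph.conf',
                         rados_id='openstack') as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False

    print(rbd_image_exists(
        'vms', 'abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config'))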
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.746 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.771 244018 INFO os_vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.859 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.859 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:16 compute-0 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
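This Received event / No waiting events found pair shows neutron notifying nova that the VIF was unplugged while nothing was registered to wait for it: the instance is already mid-delete, so the event is simply noted. A toy model of that waiter registry, standard library only; the class and method names are illustrative, not nova's real InstanceEvents:

    import threading

    class EventRegistry:
        def __init__(self):
            self._events = {}            # (instance_uuid, event) -> Event
            self._lock = threading.Lock()

        def prepare(self, uuid, name):
            # Called before an operation that expects an external event.
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            # Called when the external notification arrives.
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev is None:
                print('No waiting events found dispatching', name)
            else:
                ev.set()                 # wake the registered waiter

    reg = EventRegistry()
    waiter = reg.prepare('abe229eb-2238-4237-a7f2-83b8476ac1dc',
                         'network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f')
    reg.pop('abe229eb-2238-4237-a7f2-83b8476ac1dc',
            'network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f')
    waiter.wait(timeout=1)               # returns at once, event already set
    reg.pop('ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
            'network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c')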
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.220 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.221 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.238 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.318 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.318 244018 INFO nova.compute.claims [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.323 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.324 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015791985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.360 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.377 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.379 244018 DEBUG nova.virt.libvirt.vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:10Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.380 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.381 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.383 244018 DEBUG nova.objects.instance [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.402 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <uuid>abe229eb-2238-4237-a7f2-83b8476ac1dc</uuid>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <name>instance-00000042</name>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1679149420</nova:name>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:16</nova:creationTime>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <nova:port uuid="ab721623-aa6d-494f-8b90-6ffd63b7a33f">
Feb 25 12:29:17 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="serial">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="uuid">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk">
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config">
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:53:6c:fd"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <target dev="tapab721623-aa"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log" append="off"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:17 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:17 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:17 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:17 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:17 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
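The block above is the complete libvirt domain XML nova generated for instance-00000042: q35 machine type, an RBD-backed virtio root disk plus a SATA config-drive CDROM against the same Ceph monitor, and an ethernet-type interface whose tap device OVN binds. A small sketch that pulls those device facts back out of such XML with the standard library, assuming the XML was saved to a local file named domain.xml (parsing only, no libvirt connection):

    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()

    print('name:', root.findtext('name'))                    # instance-00000042
    print('machine:', root.find('os/type').get('machine'))   # q35

    for disk in root.findall('devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(disk.get('device'), tgt.get('dev'), tgt.get('bus'),
              '<-', src.get('protocol'), src.get('name'))

    iface = root.find('devices/interface')
    print('vif:', iface.find('target').get('dev'),
          'mac:', iface.find('mac').get('address'),
          'mtu:', iface.find('mtu').get('size'))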
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.403 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Preparing to wait for external event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.403 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.404 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.404 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.405 244018 DEBUG nova.virt.libvirt.vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:10Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.405 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.414 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.415 244018 DEBUG os_vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.417 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.417 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.425 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab721623-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.426 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab721623-aa, col_values=(('external_ids', {'iface-id': 'ab721623-aa6d-494f-8b90-6ffd63b7a33f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:6c:fd', 'vm-uuid': 'abe229eb-2238-4237-a7f2-83b8476ac1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:17 compute-0 NetworkManager[49836]: <info>  [1772022557.4297] manager: (tapab721623-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:17 compute-0 ceph-mon[76335]: pgmap v1415: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 3.9 MiB/s wr, 144 op/s
Feb 25 12:29:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2728083525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.437 244018 INFO os_vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.523 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.556 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.557 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.557 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No VIF found with MAC fa:16:3e:53:6c:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.558 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Using config drive
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.582 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.798 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022542.797305, 826789b1-e26a-4569-bd77-bd1ef76388be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.798 244018 INFO nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Stopped (Lifecycle Event)
Feb 25 12:29:17 compute-0 nova_compute[244014]: 2026-02-25 12:29:17.816 244018 DEBUG nova.compute.manager [None req-3a84685d-6a19-4319-a4c3-aeb4e53d0c8a - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 215 op/s
Feb 25 12:29:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170211199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.078 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.085 244018 DEBUG nova.compute.provider_tree [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.103 244018 DEBUG nova.scheduler.client.report [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.126 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.127 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.180 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.181 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.202 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.219 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.316 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.319 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.320 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating image(s)
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.354 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.392 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.424 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.428 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.500 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.502 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.503 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.503 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.534 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.538 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bf261ccf-c216-4383-a22a-7f0553198152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3015791985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1170211199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.689 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating config drive at /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.693 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg_4gn57z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.744 244018 DEBUG nova.policy [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3aef97e1c87341848423edc65828540c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4445209a7384565a93895032b4f077e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.840 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg_4gn57z" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.870 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.874 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.906 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:18 compute-0 nova_compute[244014]: 2026-02-25 12:29:18.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.221 244018 DEBUG nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.222 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.222 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 DEBUG nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 WARNING nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state deleting.
Feb 25 12:29:19 compute-0 nova_compute[244014]: 2026-02-25 12:29:19.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:19 compute-0 ceph-mon[76335]: pgmap v1416: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 215 op/s
Feb 25 12:29:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 161 op/s
Feb 25 12:29:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:20 compute-0 nova_compute[244014]: 2026-02-25 12:29:20.492 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bf261ccf-c216-4383-a22a-7f0553198152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.954s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:20 compute-0 nova_compute[244014]: 2026-02-25 12:29:20.582 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] resizing rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:29:20 compute-0 nova_compute[244014]: 2026-02-25 12:29:20.968 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Successfully created port: c75a4bcb-9292-41aa-b0c3-14b1433392e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.125 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.126 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:21 compute-0 ceph-mon[76335]: pgmap v1417: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 161 op/s
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.147 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.212 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.213 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deleting local config drive /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config because it was imported into RBD.
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.225 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.225 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.257 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.257 244018 INFO nova.compute.claims [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:29:21 compute-0 kernel: tapab721623-aa: entered promiscuous mode
Feb 25 12:29:21 compute-0 NetworkManager[49836]: <info>  [1772022561.2782] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Feb 25 12:29:21 compute-0 ovn_controller[147040]: 2026-02-25T12:29:21Z|00592|binding|INFO|Claiming lport ab721623-aa6d-494f-8b90-6ffd63b7a33f for this chassis.
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:21 compute-0 ovn_controller[147040]: 2026-02-25T12:29:21Z|00593|binding|INFO|ab721623-aa6d-494f-8b90-6ffd63b7a33f: Claiming fa:16:3e:53:6c:fd 10.100.0.13
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.287 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.288 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 bound to our chassis
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.291 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:29:21 compute-0 ovn_controller[147040]: 2026-02-25T12:29:21Z|00594|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f ovn-installed in OVS
Feb 25 12:29:21 compute-0 ovn_controller[147040]: 2026-02-25T12:29:21Z|00595|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f up in Southbound
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e67adfb5-7e6e-4512-9cdd-66106f62f578]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 systemd-machined[210048]: New machine qemu-76-instance-00000042.
Feb 25 12:29:21 compute-0 systemd-udevd[301797]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:21 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000042.
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.342 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c6c366-a427-454c-bd73-b0bb8fedc7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 NetworkManager[49836]: <info>  [1772022561.3482] device (tapab721623-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.347 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[13cd4534-eddb-485c-a41a-8976248d491a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 NetworkManager[49836]: <info>  [1772022561.3495] device (tapab721623-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.383 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d3083412-87d9-4613-a860-3b7c099aaa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.401 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e139ce4-7a5e-4257-b9ed-c9c06b89ae52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301803, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a384e1c6-e3d5-4428-b8bb-87a578936821]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301808, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301808, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.423 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.428 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.429 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.430 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.443 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.484 244018 DEBUG nova.compute.manager [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.485 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.486 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.486 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.487 244018 DEBUG nova.compute.manager [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Processing event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.872 244018 DEBUG nova.objects.instance [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 395 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 197 op/s
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.889 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.889 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Ensure instance console log exists: /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.891 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.892 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:21 compute-0 nova_compute[244014]: 2026-02-25 12:29:21.892 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591715701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.009 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.015 244018 DEBUG nova.compute.provider_tree [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.033 244018 DEBUG nova.scheduler.client.report [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.048 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.048216, abe229eb-2238-4237-a7f2-83b8476ac1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.049 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Started (Lifecycle Event)
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.050 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.053 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.054 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.056 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.060 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance spawned successfully.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.060 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.076 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.079 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.090 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.090 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.091 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.091 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.092 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.092 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.106 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.107 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.0490148, abe229eb-2238-4237-a7f2-83b8476ac1dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Paused (Lifecycle Event)
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.151 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.151 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.157 244018 INFO nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 11.93 seconds to spawn the instance on the hypervisor.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.157 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.160 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.0526254, abe229eb-2238-4237-a7f2-83b8476ac1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.160 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Resumed (Lifecycle Event)
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.185 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.186 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.191 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.249 244018 INFO nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 13.02 seconds to build instance.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.280 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.299 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.301 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.301 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating image(s)
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.327 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.354 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.378 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.381 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.473 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.474 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.475 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.475 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.511 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.516 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.545 244018 DEBUG nova.policy [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3aef97e1c87341848423edc65828540c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4445209a7384565a93895032b4f077e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.553 244018 INFO nova.virt.libvirt.driver [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting instance files /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.554 244018 INFO nova.virt.libvirt.driver [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deletion of /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del complete
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.616 244018 INFO nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 6.15 seconds to destroy the instance on the hypervisor.
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.617 244018 DEBUG oslo.service.loopingcall [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.618 244018 DEBUG nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.618 244018 DEBUG nova.network.neutron [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.641 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Successfully updated port: c75a4bcb-9292-41aa-b0c3-14b1433392e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:29:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2591715701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.659 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.659 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.660 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.748 244018 DEBUG nova.compute.manager [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-changed-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.749 244018 DEBUG nova.compute.manager [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Refreshing instance network info cache due to event network-changed-c75a4bcb-9292-41aa-b0c3-14b1433392e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.749 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:22 compute-0 nova_compute[244014]: 2026-02-25 12:29:22.909 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:29:23 compute-0 nova_compute[244014]: 2026-02-25 12:29:23.654 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:23 compute-0 nova_compute[244014]: 2026-02-25 12:29:23.805 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] resizing rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:29:23 compute-0 ceph-mon[76335]: pgmap v1418: 305 pgs: 305 active+clean; 395 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 197 op/s
Feb 25 12:29:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 WARNING nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.135 244018 DEBUG nova.objects.instance [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.148 244018 DEBUG nova.network.neutron [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Ensure instance console log exists: /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.165 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.165 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.169 244018 INFO nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 1.55 seconds to deallocate network for instance.
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.208 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.209 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.318 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Successfully created port: 27c957cd-d68f-48d8-b2e1-170275200ed3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.350 244018 DEBUG oslo_concurrency.processutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.539 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.558 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.559 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance network_info: |[{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.559 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.560 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Refreshing network info cache for port c75a4bcb-9292-41aa-b0c3-14b1433392e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.563 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start _get_guest_xml network_info=[{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.569 244018 WARNING nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.578 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.579 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.583 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.585 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.585 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.591 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.832 244018 DEBUG nova.compute.manager [req-929ff49a-c566-47a5-b3ff-1278a6a267d7 req-27ae22af-2154-4b5b-9926-f2e0e745b8cd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-deleted-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:29:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826584454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.936 244018 DEBUG oslo_concurrency.processutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.940 244018 DEBUG nova.compute.provider_tree [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.957 244018 DEBUG nova.scheduler.client.report [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:24 compute-0 nova_compute[244014]: 2026-02-25 12:29:24.979 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.010 244018 INFO nova.scheduler.client.report [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Deleted allocations for instance ee9cd98b-1ca6-48e7-aa44-a09caf048a1c
Feb 25 12:29:25 compute-0 ceph-mon[76335]: pgmap v1419: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.080 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228440592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.203 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.226 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.229 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35679060' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.784 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.787 244018 DEBUG nova.virt.libvirt.vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:18Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.788 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.790 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.792 244018 DEBUG nova.objects.instance [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.811 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <uuid>bf261ccf-c216-4383-a22a-7f0553198152</uuid>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <name>instance-00000043</name>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-789545168</nova:name>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:24</nova:creationTime>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <nova:port uuid="c75a4bcb-9292-41aa-b0c3-14b1433392e2">
Feb 25 12:29:25 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="serial">bf261ccf-c216-4383-a22a-7f0553198152</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="uuid">bf261ccf-c216-4383-a22a-7f0553198152</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bf261ccf-c216-4383-a22a-7f0553198152_disk">
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bf261ccf-c216-4383-a22a-7f0553198152_disk.config">
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:03:8c:25"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <target dev="tapc75a4bcb-92"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/console.log" append="off"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:25 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:25 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:25 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:25 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:25 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
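The record above is the complete libvirt guest definition nova generated for instance-00000043. A minimal sketch of inspecting such a dump offline with Python's standard library, assuming the <domain> element has been copied out of the log into a file (instance-00000043.xml is a hypothetical path):

import xml.etree.ElementTree as ET

# Hypothetical path: the <domain>...</domain> record above, saved to a file.
root = ET.parse("instance-00000043.xml").getroot()

print("uuid:  ", root.findtext("uuid"))
print("memory:", root.findtext("memory"), "KiB")   # libvirt <memory> is in KiB
print("vcpus: ", root.findtext("vcpu"))

# Walk the <devices> section: disk sources/targets, then interfaces.
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    tgt = disk.find("target")
    name = (src.get("name") or src.get("file") or "?") if src is not None else "?"
    proto = src.get("protocol") if src is not None else "-"
    print(f"disk {tgt.get('dev')} ({disk.get('device')}): protocol={proto} source={name}")

for iface in root.findall("./devices/interface"):
    mac = iface.find("mac").get("address")
    tap = iface.find("target").get("dev")
    print(f"interface {tap}: mac={mac}")

Run against the XML above, this prints the RBD-backed vda disk, the sata config-drive cdrom, and the tapc75a4bcb-92 virtio interface.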
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.813 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Preparing to wait for external event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.814 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.815 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.815 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.816 244018 DEBUG nova.virt.libvirt.vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:18Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.817 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.818 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.819 244018 DEBUG os_vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.822 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc75a4bcb-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.829 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc75a4bcb-92, col_values=(('external_ids', {'iface-id': 'c75a4bcb-9292-41aa-b0c3-14b1433392e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:8c:25', 'vm-uuid': 'bf261ccf-c216-4383-a22a-7f0553198152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:25 compute-0 NetworkManager[49836]: <info>  [1772022565.8336] manager: (tapc75a4bcb-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.841 244018 INFO os_vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92')
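The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the transaction os-vif ran to plug the port. A rough hand-run equivalent using the ovs-vsctl CLI, with the names taken from the log; this is a sketch of the same database change, not how os-vif itself issues it (os-vif speaks OVSDB directly through ovsdbapp):

import subprocess

BRIDGE = "br-int"
PORT = "tapc75a4bcb-92"                             # vif_name from the log
IFACE_ID = "c75a4bcb-9292-41aa-b0c3-14b1433392e2"   # neutron port id
MAC = "fa:16:3e:03:8c:25"
VM_UUID = "bf261ccf-c216-4383-a22a-7f0553198152"

# Equivalent of AddBridgeCommand(may_exist=True, datapath_type=system).
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-br", BRIDGE,
     "--", "set", "Bridge", BRIDGE, "datapath_type=system"],
    check=True)

# Equivalent of AddPortCommand plus the DbSetCommand writing external_ids on
# the Interface row. The MAC value carries inner double quotes because OVSDB
# atoms containing ':' must be quoted on the ovs-vsctl command line.
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT,
     "--", "set", "Interface", PORT,
     f"external_ids:iface-id={IFACE_ID}",
     "external_ids:iface-status=active",
     f'external_ids:attached-mac="{MAC}"',
     f"external_ids:vm-uuid={VM_UUID}"],
    check=True)

The iface-id key is what ovn-controller matches a few records below when it claims lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 for this chassis.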
Feb 25 12:29:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.948 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.949 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.949 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:03:8c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.950 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Using config drive
Feb 25 12:29:25 compute-0 nova_compute[244014]: 2026-02-25 12:29:25.980 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1826584454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4228440592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/35679060' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.180 244018 DEBUG nova.compute.manager [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.181 244018 DEBUG nova.compute.manager [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.181 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.182 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.182 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.186 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Successfully updated port: 27c957cd-d68f-48d8-b2e1-170275200ed3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.213 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.213 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.214 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.354 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.356 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.356 244018 INFO nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Rebooting instance
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.374 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:26 compute-0 nova_compute[244014]: 2026-02-25 12:29:26.743 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.252 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating config drive at /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.258 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxkhxuyd4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.314 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updated VIF entry in instance network info cache for port c75a4bcb-9292-41aa-b0c3-14b1433392e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.315 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.336 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:27 compute-0 ceph-mon[76335]: pgmap v1420: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.357 244018 DEBUG nova.compute.manager [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-changed-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.358 244018 DEBUG nova.compute.manager [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Refreshing instance network info cache due to event network-changed-27c957cd-d68f-48d8-b2e1-170275200ed3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.359 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.407 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxkhxuyd4" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.448 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:27 compute-0 nova_compute[244014]: 2026-02-25 12:29:27.453 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config bf261ccf-c216-4383-a22a-7f0553198152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.291 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config bf261ccf-c216-4383-a22a-7f0553198152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.292 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deleting local config drive /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config because it was imported into RBD.
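Nova built the config drive with mkisofs, imported it into the vms pool, then deleted the local copy. A quick sketch for confirming the imported image afterwards, reusing the same credentials and conf path as the rbd import command logged above (assumes the rbd CLI is available on the node):

import subprocess

INSTANCE = "bf261ccf-c216-4383-a22a-7f0553198152"

# Show size/format of the imported config drive; the flags mirror the
# "rbd import" invocation in the log record above.
subprocess.run(
    ["rbd", "info", "--pool", "vms", f"{INSTANCE}_disk.config",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)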
Feb 25 12:29:28 compute-0 kernel: tapc75a4bcb-92: entered promiscuous mode
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.3402] manager: (tapc75a4bcb-92): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:28 compute-0 ovn_controller[147040]: 2026-02-25T12:29:28Z|00596|binding|INFO|Claiming lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 for this chassis.
Feb 25 12:29:28 compute-0 ovn_controller[147040]: 2026-02-25T12:29:28Z|00597|binding|INFO|c75a4bcb-9292-41aa-b0c3-14b1433392e2: Claiming fa:16:3e:03:8c:25 10.100.0.12
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:28 compute-0 ovn_controller[147040]: 2026-02-25T12:29:28Z|00598|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 ovn-installed in OVS
Feb 25 12:29:28 compute-0 ovn_controller[147040]: 2026-02-25T12:29:28Z|00599|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 up in Southbound
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.363 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:8c:25 10.100.0.12'], port_security=['fa:16:3e:03:8c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf261ccf-c216-4383-a22a-7f0553198152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=c75a4bcb-9292-41aa-b0c3-14b1433392e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Port c75a4bcb-9292-41aa-b0c3-14b1433392e2 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.368 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:28 compute-0 systemd-machined[210048]: New machine qemu-77-instance-00000043.
Feb 25 12:29:28 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000043.
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.387 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72e7bdb3-eea6-4d3d-b811-92d90fc252f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.389 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85b56c79-01 in ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.391 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85b56c79-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5448d1-8675-421d-a527-1743fac7098d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.392 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6578be-70c0-42aa-a2d1-20886f17babd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.405 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[204edc39-d02b-4498-abed-b7e1755c2eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.418 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1b1421-9c3b-41ab-a3b6-cc6382e7ddff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 systemd-udevd[302224]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.4458] device (tapc75a4bcb-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.4467] device (tapc75a4bcb-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.452 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cea68dd5-8253-49c9-800a-aeea971d158b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 systemd-udevd[302229]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.4601] manager: (tap85b56c79-00): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.458 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[633894f5-88d1-40e7-a9cb-aadfcd4081b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.463 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.464 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance network_info: |[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.464 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.465 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Refreshing network info cache for port 27c957cd-d68f-48d8-b2e1-170275200ed3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.468 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start _get_guest_xml network_info=[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.485 244018 WARNING nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b801dd27-a0d2-4cfd-acb6-8fa07e819d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.500 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.501 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.507 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a8654fea-21f2-48fc-9ee8-028386423de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.511 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.511 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.517 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.5248] device (tap85b56c79-00): carrier: link connected
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.532 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25b533-b1df-45a5-8d2f-82f9bd4e1533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.546 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b688eba6-e3e3-4eb6-a4bf-b34dfe90061e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302250, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0211d7c1-331f-4b86-89cb-7910f551b408]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:24ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453793, 'tstamp': 453793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302251, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.579 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75ba23f5-b007-4a02-be33-884aff627ba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302252, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69a642c1-756d-459c-b6f8-caabfe6c558a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81cc115d-e2e7-4a27-b571-23342271ac1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:28 compute-0 kernel: tap85b56c79-00: entered promiscuous mode
Feb 25 12:29:28 compute-0 NetworkManager[49836]: <info>  [1772022568.6550] manager: (tap85b56c79-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:28 compute-0 ovn_controller[147040]: 2026-02-25T12:29:28Z|00600|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.664 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.666 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37930e9c-7931-4b9d-974e-bb459132fc10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.666 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:29:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.670 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'env', 'PROCESS_TAG=haproxy-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85b56c79-01b6-47e7-ab3b-02e44acca3d3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.679 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.679 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.704 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.705 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.705 244018 DEBUG nova.network.neutron [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.851 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.854 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.854 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.872 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.874 244018 DEBUG nova.objects.instance [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022568.8867865, bf261ccf-c216-4383-a22a-7f0553198152 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.887 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Started (Lifecycle Event)
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.904 244018 DEBUG nova.virt.libvirt.driver [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.910 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.918 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022568.8875976, bf261ccf-c216-4383-a22a-7f0553198152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.918 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Paused (Lifecycle Event)
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.942 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.947 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:28 compute-0 nova_compute[244014]: 2026-02-25 12:29:28.982 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526574973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.073 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:29 compute-0 podman[302344]: 2026-02-25 12:29:28.996389601 +0000 UTC m=+0.024183847 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.113 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.117 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:29 compute-0 podman[302344]: 2026-02-25 12:29:29.52883654 +0000 UTC m=+0.556630726 container create e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:29 compute-0 ceph-mon[76335]: pgmap v1421: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 25 12:29:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1526574973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537716141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.706 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:29 compute-0 systemd[1]: Started libpod-conmon-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope.
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.710 244018 DEBUG nova.virt.libvirt.vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:22Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.711 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.713 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.715 244018 DEBUG nova.objects.instance [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.740 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <uuid>f7b18575-d1fc-423f-a596-8ca6d8ed08fa</uuid>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <name>instance-00000044</name>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1779266838</nova:name>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:28</nova:creationTime>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <nova:port uuid="27c957cd-d68f-48d8-b2e1-170275200ed3">
Feb 25 12:29:29 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="serial">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="uuid">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4ba589ec8c9c7feb5a1c1945ac7be1440c19e55dcd78f4845f04574fea8475/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk">
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config">
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:bf:b7:62"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <target dev="tap27c957cd-d6"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log" append="off"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:29 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:29 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:29 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:29 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:29 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.740 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Preparing to wait for external event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.virt.libvirt.vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:22Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG os_vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.744 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.753 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27c957cd-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.754 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27c957cd-d6, col_values=(('external_ids', {'iface-id': '27c957cd-d68f-48d8-b2e1-170275200ed3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:b7:62', 'vm-uuid': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:29 compute-0 NetworkManager[49836]: <info>  [1772022569.7559] manager: (tap27c957cd-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.765 244018 INFO os_vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6')
Feb 25 12:29:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.955 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:bf:b7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Using config drive
Feb 25 12:29:29 compute-0 nova_compute[244014]: 2026-02-25 12:29:29.986 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:30 compute-0 podman[302344]: 2026-02-25 12:29:30.048175627 +0000 UTC m=+1.075969823 container init e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:29:30 compute-0 podman[302344]: 2026-02-25 12:29:30.057257624 +0000 UTC m=+1.085051820 container start e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:29:30 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : New worker (302428) forked
Feb 25 12:29:30 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : Loading success.
Feb 25 12:29:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.414 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating config drive at /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.419 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdbfqr1r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.560 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdbfqr1r" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.632 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.637 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3537716141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.779 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updated VIF entry in instance network info cache for port 27c957cd-d68f-48d8-b2e1-170275200ed3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.780 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.804 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:29:30
Feb 25 12:29:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:29:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:29:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 12:29:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.934 244018 DEBUG nova.network.neutron [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.963 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:30 compute-0 nova_compute[244014]: 2026-02-25 12:29:30.965 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:31 compute-0 kernel: tapab721623-aa (unregistering): left promiscuous mode
Feb 25 12:29:31 compute-0 NetworkManager[49836]: <info>  [1772022571.3438] device (tapab721623-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00601|binding|INFO|Releasing lport ab721623-aa6d-494f-8b90-6ffd63b7a33f from this chassis (sb_readonly=0)
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00602|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f down in Southbound
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00603|binding|INFO|Removing iface tapab721623-aa ovn-installed in OVS
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.368 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '5', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcdcf09-7d6d-473e-8da3-48a5bd834a2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 25 12:29:31 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Consumed 9.871s CPU time.
Feb 25 12:29:31 compute-0 systemd-machined[210048]: Machine qemu-76-instance-00000042 terminated.
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[16c24d04-1c6b-46ff-bd34-3c40186cee75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f5eced-6631-4c74-8089-13ff795dacf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.458 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8bd20e-71be-45b0-b9e1-46ef0b289e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.479 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1755b1-e917-4828-b54c-43f5d4330c51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302490, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54ce0d52-0935-471e-aad3-881d9cc17165]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302491, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302491, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.500 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.510 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.511 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.511 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.512 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.513 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.570 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance destroyed successfully.
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.571 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.585 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.586 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.587 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.588 244018 DEBUG os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.592 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab721623-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.604 244018 INFO os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.614 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start _get_guest_xml network_info=[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.619 244018 WARNING nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.625 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.626 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.630 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.631 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.631 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.632 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.633 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.633 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.634 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.634 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.636 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.636 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.637 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.637 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'vcpu_model' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.675 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.712 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022556.6982589, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.713 244018 INFO nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Stopped (Lifecycle Event)
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.739 244018 DEBUG nova.compute.manager [None req-39648e57-89ff-41b4-9760-0fcf089e4b43 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:31 compute-0 ceph-mon[76335]: pgmap v1422: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.854 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.855 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting local config drive /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config because it was imported into RBD.
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:29:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 25 12:29:31 compute-0 kernel: tap27c957cd-d6: entered promiscuous mode
Feb 25 12:29:31 compute-0 systemd-udevd[302482]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:31 compute-0 NetworkManager[49836]: <info>  [1772022571.9242] manager: (tap27c957cd-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Feb 25 12:29:31 compute-0 NetworkManager[49836]: <info>  [1772022571.9352] device (tap27c957cd-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:31 compute-0 NetworkManager[49836]: <info>  [1772022571.9356] device (tap27c957cd-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00604|binding|INFO|Claiming lport 27c957cd-d68f-48d8-b2e1-170275200ed3 for this chassis.
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00605|binding|INFO|27c957cd-d68f-48d8-b2e1-170275200ed3: Claiming fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00606|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 ovn-installed in OVS
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00607|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 up in Southbound
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.961 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.962 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.970 244018 INFO nova.virt.libvirt.driver [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance shutdown successfully after 3 seconds.
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.977 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2591a4cc-67ce-4cb8-a038-df7371b76d38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:31 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00608|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00609|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:29:31 compute-0 systemd-machined[210048]: New machine qemu-78-instance-00000044.
Feb 25 12:29:31 compute-0 ovn_controller[147040]: 2026-02-25T12:29:31Z|00610|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:29:31 compute-0 nova_compute[244014]: 2026-02-25 12:29:31.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:31 compute-0 NetworkManager[49836]: <info>  [1772022571.9993] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:32 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000044.
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.006 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc906c9-7b0a-4ed9-8b4f-8c1aab8829cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.033 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3909477f-7655-471a-9e8a-213658b9e14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:29:32 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000035.scope: Consumed 15.848s CPU time.
Feb 25 12:29:32 compute-0 systemd-machined[210048]: Machine qemu-69-instance-00000035 terminated.
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.061 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[843a3a68-21ed-4979-aaa6-7d3ce0387280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0219aac7-f1a2-46a7-9690-6a73ba8a4571]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302556, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5bc890-8ec7-4090-8da6-d140262d506c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302558, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302558, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.122 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.122 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.123 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.124 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.125 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.128 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a120c8cb-520b-409e-83d6-ffe298f524ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.220 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.221 244018 DEBUG nova.objects.instance [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.248 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.317 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867522823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.356 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:32 compute-0 nova_compute[244014]: 2026-02-25 12:29:32.391 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:32 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : haproxy version is 2.8.14-c23fe91
Feb 25 12:29:32 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : path to executable is /usr/sbin/haproxy
Feb 25 12:29:32 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [WARNING]  (296548) : Exiting Master process...
Feb 25 12:29:32 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [ALERT]    (296548) : Current worker (296562) exited with code 143 (Terminated)
Feb 25 12:29:32 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [WARNING]  (296548) : All workers exited. Exiting... (0)
Feb 25 12:29:32 compute-0 systemd[1]: libpod-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope: Deactivated successfully.
Feb 25 12:29:32 compute-0 podman[302588]: 2026-02-25 12:29:32.42440745 +0000 UTC m=+0.149526741 container died 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 12:29:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446312852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:32 compute-0 ceph-mon[76335]: pgmap v1423: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 25 12:29:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/867522823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.000 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.003 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.004 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.005 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.007 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.026 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <uuid>abe229eb-2238-4237-a7f2-83b8476ac1dc</uuid>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <name>instance-00000042</name>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1679149420</nova:name>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:31</nova:creationTime>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <nova:port uuid="ab721623-aa6d-494f-8b90-6ffd63b7a33f">
Feb 25 12:29:33 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="serial">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="uuid">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk">
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config">
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:53:6c:fd"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <target dev="tapab721623-aa"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log" append="off"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:33 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:33 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:33 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:33 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:33 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.028 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.028 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.031 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.031 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.032 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.033 244018 DEBUG os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.035 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.035 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab721623-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab721623-aa, col_values=(('external_ids', {'iface-id': 'ab721623-aa6d-494f-8b90-6ffd63b7a33f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:6c:fd', 'vm-uuid': 'abe229eb-2238-4237-a7f2-83b8476ac1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:33 compute-0 NetworkManager[49836]: <info>  [1772022573.0442] manager: (tapab721623-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.052 244018 INFO os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')
Feb 25 12:29:33 compute-0 NetworkManager[49836]: <info>  [1772022573.4009] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Feb 25 12:29:33 compute-0 kernel: tapab721623-aa: entered promiscuous mode
Feb 25 12:29:33 compute-0 ovn_controller[147040]: 2026-02-25T12:29:33Z|00611|binding|INFO|Claiming lport ab721623-aa6d-494f-8b90-6ffd63b7a33f for this chassis.
Feb 25 12:29:33 compute-0 ovn_controller[147040]: 2026-02-25T12:29:33Z|00612|binding|INFO|ab721623-aa6d-494f-8b90-6ffd63b7a33f: Claiming fa:16:3e:53:6c:fd 10.100.0.13
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:33.415 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:33 compute-0 NetworkManager[49836]: <info>  [1772022573.4179] device (tapab721623-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:33 compute-0 NetworkManager[49836]: <info>  [1772022573.4189] device (tapab721623-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:33 compute-0 ovn_controller[147040]: 2026-02-25T12:29:33Z|00613|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f ovn-installed in OVS
Feb 25 12:29:33 compute-0 ovn_controller[147040]: 2026-02-25T12:29:33Z|00614|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f up in Southbound
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 systemd-machined[210048]: New machine qemu-79-instance-00000042.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.436 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.4359405, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.436 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Started (Lifecycle Event)
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:33 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000042.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.467 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.472 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.4373517, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.473 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Paused (Lifecycle Event)
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.495 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.498 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.518 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.539 244018 DEBUG nova.compute.manager [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.541 244018 DEBUG nova.compute.manager [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Processing event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.541 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.545 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.545179, bf261ccf-c216-4383-a22a-7f0553198152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Resumed (Lifecycle Event)
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.548 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.558 244018 INFO nova.virt.libvirt.driver [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance spawned successfully.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.558 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.567 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.582 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.582 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.590 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.591 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.592 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.592 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.598 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.601 244018 DEBUG nova.compute.manager [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.603 244018 DEBUG nova.compute.manager [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Processing event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.604 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.6130521, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Resumed (Lifecycle Event)
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.617 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance spawned successfully.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.617 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.643 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.660 244018 INFO nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 15.34 seconds to spawn the instance on the hypervisor.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.660 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.691 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.737 244018 INFO nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 11.44 seconds to spawn the instance on the hypervisor.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.738 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.740 244018 INFO nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 16.45 seconds to build instance.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.763 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.811 244018 INFO nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 12.61 seconds to build instance.
Feb 25 12:29:33 compute-0 nova_compute[244014]: 2026-02-25 12:29:33.831 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 129 op/s
Feb 25 12:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080-userdata-shm.mount: Deactivated successfully.
Feb 25 12:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a9a945243e4aa6728c2c83d101f229caef91189a564a26a79cc76c0a932bd4-merged.mount: Deactivated successfully.
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.002 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG nova.network.neutron [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'info_cache' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:34 compute-0 podman[302695]: 2026-02-25 12:29:34.044217994 +0000 UTC m=+0.940363837 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Feb 25 12:29:34 compute-0 podman[302696]: 2026-02-25 12:29:34.060125175 +0000 UTC m=+0.964333667 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.147 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.147 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.167 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.244 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.245 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.251 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.251 244018 INFO nova.compute.claims [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:29:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/446312852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.441 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.653 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.654 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for abe229eb-2238-4237-a7f2-83b8476ac1dc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.655 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022574.6543152, abe229eb-2238-4237-a7f2-83b8476ac1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.655 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Resumed (Lifecycle Event)
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.662 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance rebooted successfully.
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.662 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.708 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.779 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022574.6552982, abe229eb-2238-4237-a7f2-83b8476ac1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.801 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Started (Lifecycle Event)
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.821 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:34 compute-0 nova_compute[244014]: 2026-02-25 12:29:34.826 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:35 compute-0 podman[302588]: 2026-02-25 12:29:35.023625178 +0000 UTC m=+2.748744429 container cleanup 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:29:35 compute-0 systemd[1]: libpod-conmon-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope: Deactivated successfully.
Feb 25 12:29:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825455234' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.371 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.376 244018 DEBUG nova.compute.provider_tree [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.406 244018 DEBUG nova.scheduler.client.report [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.445 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.446 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.512 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.512 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.531 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.553 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 INFO nova.compute.manager [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Rescuing
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG nova.network.neutron [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.645 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state active and task_state None.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state powering-on.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state powering-on.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.654 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.654 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
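
The acquire/release pairs above come from oslo.concurrency wrapping the event pop in a per-instance lock. A minimal sketch of that pattern, assuming a plain dict as a stand-in for nova's real event registry (the names waiting_events and pop_instance_event below are illustrative, not nova's code):

    from oslo_concurrency import lockutils

    waiting_events = {}  # hypothetical stand-in for the per-instance event registry

    def pop_instance_event(instance_uuid, event_name):
        # Decorating a local function is what yields the
        # "pop_instance_event.<locals>._pop_event" in the lock lines;
        # lockutils logs "Acquiring", "acquired :: waited Ns" and
        # "released :: held Ns" at DEBUG, matching the journal above.
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            # An empty registry gives None, after which the caller logs
            # "No waiting events found" / "Received unexpected event".
            return waiting_events.pop(event_name, None)
        return _pop_event()
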
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.661 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.662 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.662 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating image(s)
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.690 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:35 compute-0 ceph-mon[76335]: pgmap v1424: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 129 op/s
Feb 25 12:29:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3825455234' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:35 compute-0 nova_compute[244014]: 2026-02-25 12:29:35.713 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.273 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
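
The repeated "rbd image ... does not exist" probes are nova checking whether the instance disk already exists in the vms pool before importing it. A rough equivalent with the python-rbd bindings, using the same client id and conf file that appear later in the rbd import command (the helper name is illustrative):

    import rados
    import rbd

    def rbd_image_exists(pool, name, conffile='/etc/ceph/ceph.conf',
                         user='openstack'):
        cluster = rados.Rados(conffile=conffile, rados_id=user)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                # Opening the image is the existence test; a missing image
                # raises rbd.ImageNotFound, which nova logs at DEBUG.
                rbd.Image(ioctx, name).close()
                return True
            except rbd.ImageNotFound:
                return False
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

    print(rbd_image_exists('vms', 'd945940d-a1b5-4a36-b980-efda3a9efda6_disk'))
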
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.279 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.314 244018 DEBUG nova.policy [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74af2f394ab04b06b55e62150e81b6b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61371c73c9fb4961886c5c22f8f871e1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
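
The policy line records oslo.policy denying network:attach_external_network to a token that only carries the reader and member roles. A hedged sketch of the same check, assuming an admin-only default rule (the real default comes from nova's own policy definitions):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed default for illustration; nova registers its own RuleDefault.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))
    creds = {'is_admin': False,
             'roles': ['reader', 'member'],
             'project_id': '61371c73c9fb4961886c5c22f8f871e1',
             'user_id': '74af2f394ab04b06b55e62150e81b6b1'}
    # enforce() returns False here, which nova reports as
    # "Policy check for network:attach_external_network failed".
    print(enforcer.enforce('network:attach_external_network', {}, creds))
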
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.334 244018 DEBUG nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.336 244018 DEBUG nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.336 244018 WARNING nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.368 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
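
The prlimit wrapper caps the qemu-img probe at 1 GiB of address space and 30 s of CPU time so a malformed image cannot wedge the compute service; the JSON output is then parsed for format and size. Reproducing the exact command from the log:

    import json
    import subprocess

    cmd = [
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
    ]
    info = json.loads(subprocess.check_output(cmd))
    # Typical keys in qemu-img's JSON output:
    print(info.get('format'), info.get('virtual-size'))
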
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.369 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.370 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.370 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.411 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.415 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:36 compute-0 podman[302842]: 2026-02-25 12:29:36.603603701 +0000 UTC m=+1.557957780 container remove 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.612 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be27f3dc-b54f-46b2-9cb2-d4cebfd52f52]: (4, ('Wed Feb 25 12:29:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080)\n43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080\nWed Feb 25 12:29:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080)\n43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7359ea8-c0b5-4da0-9254-948cc1f1a4c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.614 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:36 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[53d2a483-df62-4ca2-8a01-6e1cec06347e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.641 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11bf57f4-2a7f-47dd-8e26-9bb0cfe023c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5719c827-5606-4161-9e78-5efb3a4ae505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.655 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7e4d69-bfdc-4aaf-8a45-cdeba1aca87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444522, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302952, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.659 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.659 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[80f267af-32ce-4681-8b7d-79b33c3ab7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
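
remove_netns runs inside the privsep daemon because deleting a network namespace needs elevated privileges; under the hood it is essentially pyroute2's netns helper. A sketch, with the error handling reduced to the missing-namespace case:

    import errno

    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194')
    except OSError as exc:
        if exc.errno != errno.ENOENT:
            raise  # anything other than "already gone" is a real failure
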
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.660 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.661 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d944f74f-7520-466c-b2a1-13f2be13f86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.691 244018 DEBUG nova.network.neutron [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.717 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
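
The instance_info_cache payload logged above is a JSON list, one entry per VIF, with subnets, fixed IPs and any floating IPs nested inside the network. A self-contained subset (values copied from the log line) showing how the addresses hang together:

    network_info = [{
        "id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3",
        "address": "fa:16:3e:ba:87:f1",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.11", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.234",
                                       "type": "floating"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)  # port, fixed IP, FIPs
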
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.731 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3532cf3f-c0dc-43e0-a7f7-ee925cb57a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.734 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaf3f98-e250-4909-94b8-8cb6bbf63c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.756 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.757 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.773 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.777 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6933f26-5fde-4daa-b4fd-0f049ff72bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.789 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.789 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.790 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.791 244018 DEBUG os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef9df95-90f0-448c-b501-ef10139ff652]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302958, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.794 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.801 244018 INFO os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
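
The unplug itself goes through the os-vif library: nova converts its VIF dict to a VIFOpenVSwitch object (the "Converted object" line) and hands it to os_vif.unplug(), which issues the DelPortCommand seen above. A hedged sketch of that call chain, with field values taken from the log:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the installed plugins, 'ovs' among them
    my_vif = vif.VIFOpenVSwitch(
        id='ee46268d-740d-4ff9-8b65-4a81fc61eec3',
        address='fa:16:3e:ba:87:f1',
        vif_name='tapee46268d-74',
        bridge_name='br-int')
    inst = instance_info.InstanceInfo(
        uuid='8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b',
        name='tempest-ServerActionsTestJSON-server-1971346294')
    # Removes tapee46268d-74 from br-int via the ovs plugin.
    os_vif.unplug(my_vif, inst)
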
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.810 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.810 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6097d166-556f-4288-9261-135f3db01628]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302959, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302959, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.812 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
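
The DelPortCommand / AddPortCommand / DbSetCommand triple is the metadata agent moving tap9a448894-80 onto br-int and stamping its iface-id; "Transaction caused no change" just means the row was already in the desired state. Roughly the same sequence through ovsdbapp, assuming the usual local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap9a448894-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap9a448894-80', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap9a448894-80',
            ('external_ids',
             {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'})))
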
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.817 244018 WARNING nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.825 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.826 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.830 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.831 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
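
The two probes establish where the host keeps its CPU controller: absent from the legacy cgroup v1 hierarchy, present in the unified v2 one, which is what a cgroups-v2-only RHEL 9 host should report. Nova does this through its libvirt host helpers, but the filesystem-level equivalent is simple:

    import os

    def has_cgroupsv2_cpu_controller():
        # v2 lists every enabled controller in one file at the cgroup root
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except OSError:
            return False

    def has_cgroupsv1_cpu_controller():
        # v1 mounts the cpu controller as its own hierarchy
        return os.path.isdir('/sys/fs/cgroup/cpu')

    print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())
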
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.831 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.832 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.832 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
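
The topology walk above is a small search: with no flavor or image preferences (all 0:0:0) and effectively unbounded limits (65536 each), nova enumerates every sockets/cores/threads split whose product covers the vCPU count; for one vCPU only 1:1:1 qualifies. A compact illustration of that enumeration (not nova's exact algorithm):

    def possible_cpu_topologies(vcpus, max_sockets=65536, max_cores=65536,
                                max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    # Matches "Build topologies for 1 vcpu(s) 1:1:1" -> one possible topology
    assert possible_cpu_topologies(1) == [(1, 1, 1)]
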
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.836 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:36 compute-0 nova_compute[244014]: 2026-02-25 12:29:36.856 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:36 compute-0 ceph-mon[76335]: pgmap v1425: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 25 12:29:37 compute-0 nova_compute[244014]: 2026-02-25 12:29:37.325 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Successfully created port: 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:29:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611464997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:37 compute-0 nova_compute[244014]: 2026-02-25 12:29:37.697 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.841s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
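
The 0.8 s "ceph mon dump" is how the libvirt driver learns the monitor addresses to embed in the guest's RBD disk definitions. The same call, with the usual way to read its JSON:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mon_map = json.loads(out)
    # The mon map carries one entry per monitor with its name and address.
    print([(m['name'], m.get('addr')) for m in mon_map.get('mons', [])])
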
Feb 25 12:29:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.404 244018 DEBUG nova.network.neutron [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.409 244018 DEBUG nova.compute.manager [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.410 244018 DEBUG nova.compute.manager [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.410 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.411 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.411 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.416 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.444 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/611464997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.752 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.765 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:38 compute-0 nova_compute[244014]: 2026-02-25 12:29:38.850 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] resizing rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
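The two operations above are Nova's RBD image backend importing the cached base image into the `vms` pool and then growing it to the flavor's root disk (1073741824 bytes = 1024**3 = 1 GiB, matching root_gb=1 in the instance record). Nova drives the resize through the python-rbd bindings; a hedged sketch of the same two steps using the rbd CLI instead, with the pool, image, and base-file names copied from the log:

```python
import subprocess

pool = "vms"
image = "d945940d-a1b5-4a36-b980-efda3a9efda6_disk"
base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

# Import the cached base image into the pool (mirrors the logged "rbd import").
subprocess.run(
    ["rbd", "import", "--pool", pool, base, image, "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# Grow it to the flavor's root disk: 1 GiB = 1073741824 bytes as logged.
subprocess.run(
    ["rbd", "resize", "--pool", pool, "--image", image, "--size", "1G",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)
```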
Feb 25 12:29:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587152409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.011 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
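The `ceph mon dump --format=json` round trip above (dispatched at 12:29:38.416, audited by ceph-mon, answered 0.595s later) is how the RBD driver discovers monitor addresses, shelling out through oslo_concurrency.processutils. A minimal sketch of the equivalent call and of parsing the monmap it returns; the JSON layout assumed here follows `ceph mon dump` output, not anything shown in this log:

```python
import json

from oslo_concurrency import processutils

# Equivalent of the logged command; execute() returns (stdout, stderr)
# and raises ProcessExecutionError on a non-zero exit code.
out, _err = processutils.execute(
    "ceph", "mon", "dump", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")

monmap = json.loads(out)
# Assumed monmap shape: {"mons": [{"name": ..., "addr": ...}, ...]}
mons = [(m["name"], m.get("addr", "")) for m in monmap.get("mons", [])]
print(mons)
```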
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.013 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.014 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.016 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.018 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.034 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <name>instance-00000035</name>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:36</nova:creationTime>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 12:29:39 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:39 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <target dev="tapee46268d-74"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:39 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:39 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:39 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:39 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:39 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
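The `<domain type="kvm">` document ending at `</domain>` above is what the driver hands to libvirt next. A minimal, hedged sketch of that hand-off with the libvirt-python bindings; the real driver goes through its own Guest/Host wrappers rather than calling libvirt directly, and the file name here is hypothetical:

```python
import libvirt

# Hypothetical file holding the <domain> XML logged above.
with open("instance-00000035.xml") as f:
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # power it on (the 'powering-on' task_state above)
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```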
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.043 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.044 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.045 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.046 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.048 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.049 244018 DEBUG os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.051 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.052 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.057 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.058 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
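The AddBridgeCommand/AddPortCommand/DbSetCommand lines are ovsdbapp commands committed against the local OVS database. A hedged sketch of the same transaction with ovsdbapp's Open_vSwitch schema API; the ovsdb-server unix socket path is an assumption, while the bridge, port, and external_ids values are copied from the logged DbSetCommand:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local ovsdb-server; the socket path is an assumption.
idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

# One transaction mirroring the logged commands: add the tap port to br-int
# and set the Neutron/OVN keys on its Interface row.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tapee46268d-74", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tapee46268d-74",
        ("external_ids", {
            "iface-id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:ba:87:f1",
            "vm-uuid": "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b"})))
```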
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.0610] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.068 244018 INFO os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
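The "Plugging vif"/"Successfully plugged vif" pair comes from the os_vif library, which Nova calls with the converted VIFOpenVSwitch object shown above. A hedged sketch of the same call path, building the objects directly rather than through nova.network.os_vif_util; field values are copied from the log, and a real run needs the privsep helper (root):

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # load the registered VIF plugins (ovs, linux_bridge, ...)

net = network.Network(id="ce318891-cf3c-4d99-af7c-c01770f38194",
                      bridge="br-int")
my_vif = vif.VIFOpenVSwitch(
    id="ee46268d-740d-4ff9-8b65-4a81fc61eec3",
    address="fa:16:3e:ba:87:f1",
    bridge_name="br-int",
    vif_name="tapee46268d-74",
    network=net)
inst = instance_info.InstanceInfo(
    uuid="8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b",
    name="instance-00000035")

# Emits the "Plugging vif ..." / "Successfully plugged vif ..." pair above.
os_vif.plug(my_vif, inst)
```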
Feb 25 12:29:39 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.1547] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 ovn_controller[147040]: 2026-02-25T12:29:39Z|00615|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:29:39 compute-0 ovn_controller[147040]: 2026-02-25T12:29:39Z|00616|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:29:39 compute-0 ovn_controller[147040]: 2026-02-25T12:29:39Z|00617|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:29:39 compute-0 ovn_controller[147040]: 2026-02-25T12:29:39Z|00618|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.164 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.167 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.170 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0982127e-1b31-4483-b733-523668d5b7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.179 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:29:39 compute-0 systemd-udevd[303093]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.182 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b827c021-bc4a-4ed1-8985-f31e4c8160e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.184 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc07a779-e37a-4ee5-894a-cc00ce4811c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.1911] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.1917] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.196 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[71daed12-3fb2-4fd7-ab84-9f27baad6088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 systemd-machined[210048]: New machine qemu-80-instance-00000035.
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88125daa-1780-4855-af3e-a08c7c5e9489]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000035.
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.239 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f5111a29-c98c-4ce7-bcd2-927a957a0a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.246 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0b03db-152f-4d6f-9862-93f0e83f96d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.2488] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Feb 25 12:29:39 compute-0 systemd-udevd[303098]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.278 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8d3ecf-d169-4f21-a3b2-152eee6f4950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.281 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d799bc6c-70b6-477b-b4e4-d36cbc5754d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.3012] device (tapce318891-c0): carrier: link connected
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.303 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[66b36504-7508-4789-bca9-def41eb824c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c426acab-0bb4-4f38-b5c5-dc05585798d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454870, 'reachable_time': 42375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303127, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.327 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4767c139-ee34-48e3-99d4-744daf077a12]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454870, 'tstamp': 454870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303128, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22ee209a-92f5-4a5d-b957-e76a09ff7501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454870, 'reachable_time': 42375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303129, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.359 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6737fb57-9948-4f5b-ac19-c63b4240749f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa498d4-4152-4c61-9a0c-c15bf1d250db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:39 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 NetworkManager[49836]: <info>  [1772022579.4028] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.404 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:39 compute-0 ovn_controller[147040]: 2026-02-25T12:29:39Z|00619|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.410 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0e4c19-e400-492b-bf40-7e3ff80fc9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.416 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:29:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.417 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
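The haproxy_cfg dump above is the per-network metadata proxy config the agent writes before spawning haproxy inside the ovnmeta- namespace via rootwrap. A small sketch of rendering that same config from a template (the defaults section is trimmed for brevity; the paths and the X-OVN-Network-ID header mirror the log, and the rendering approach here is illustrative, not Neutron's own template code):

```python
from string import Template

HAPROXY_CFG = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

network_id = "ce318891-cf3c-4d99-af7c-c01770f38194"
cfg = HAPROXY_CFG.substitute(
    network_id=network_id,
    pidfile=f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy",
    socket_path="/var/lib/neutron/metadata_proxy")
print(cfg)
```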
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.459 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Successfully updated port: 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.479 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.479 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.480 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:39 compute-0 podman[303191]: 2026-02-25 12:29:39.739441905 +0000 UTC m=+0.022767506 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:29:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 224 op/s
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.900 244018 DEBUG nova.objects.instance [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'migration_context' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.985 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.986 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Ensure instance console log exists: /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.987 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.988 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:39 compute-0 nova_compute[244014]: 2026-02-25 12:29:39.988 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:40 compute-0 ceph-mon[76335]: pgmap v1426: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Feb 25 12:29:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1587152409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.125 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.184 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.185 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022580.177592, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.192 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.195 244018 DEBUG nova.compute.manager [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.200 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.200 244018 DEBUG nova.compute.manager [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.229 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.233 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (powering-on). Skip.
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.263 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022580.189978, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.263 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.296 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.299 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.307 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.308 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.308 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.309 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.309 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.311 244018 INFO nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Terminating instance
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.312 244018 DEBUG nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 DEBUG nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 WARNING nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:29:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:40 compute-0 podman[303191]: 2026-02-25 12:29:40.601362267 +0000 UTC m=+0.884687878 container create 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:29:40 compute-0 kernel: tapab721623-aa (unregistering): left promiscuous mode
Feb 25 12:29:40 compute-0 NetworkManager[49836]: <info>  [1772022580.6350] device (tapab721623-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:40 compute-0 ovn_controller[147040]: 2026-02-25T12:29:40Z|00620|binding|INFO|Releasing lport ab721623-aa6d-494f-8b90-6ffd63b7a33f from this chassis (sb_readonly=0)
Feb 25 12:29:40 compute-0 ovn_controller[147040]: 2026-02-25T12:29:40Z|00621|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f down in Southbound
Feb 25 12:29:40 compute-0 ovn_controller[147040]: 2026-02-25T12:29:40Z|00622|binding|INFO|Removing iface tapab721623-aa ovn-installed in OVS
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:40.664 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '8', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004 fb44570f-d13c-4d72-bde7-f388c86c70de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:40 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 25 12:29:40 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Consumed 6.572s CPU time.
Feb 25 12:29:40 compute-0 systemd-machined[210048]: Machine qemu-79-instance-00000042 terminated.
Feb 25 12:29:40 compute-0 NetworkManager[49836]: <info>  [1772022580.7282] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.744 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance destroyed successfully.
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.745 244018 DEBUG nova.objects.instance [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.759 244018 DEBUG nova.virt.libvirt.vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:34Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.760 244018 DEBUG nova.network.os_vif_util [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.763 244018 DEBUG nova.network.os_vif_util [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.764 244018 DEBUG os_vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.769 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab721623-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.771 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:40 compute-0 nova_compute[244014]: 2026-02-25 12:29:40.779 244018 INFO os_vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')
Feb 25 12:29:40 compute-0 systemd[1]: Started libpod-conmon-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope.
Feb 25 12:29:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47676d99690e7ef9a70c5e78706610c8c77da71cb8b8a0b03963190efdeac6fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.155 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.157 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.176 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.211 244018 DEBUG nova.compute.manager [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.212 244018 DEBUG nova.compute.manager [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.212 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:41 compute-0 podman[303191]: 2026-02-25 12:29:41.222865462 +0000 UTC m=+1.506191063 container init 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:29:41 compute-0 podman[303191]: 2026-02-25 12:29:41.230857679 +0000 UTC m=+1.514183260 container start 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 25 12:29:41 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : New worker (303273) forked
Feb 25 12:29:41 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : Loading success.
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.280 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.309 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.310 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance network_info: |[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.310 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.311 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.314 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start _get_guest_xml network_info=[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.318 244018 WARNING nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.324 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.325 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.329 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.329 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.330 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.330 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.336 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.648 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.650 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0479a1d9-4b26-4252-bcf8-3ebfdf433fb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f06e15-bc6f-4dca-af54-e896713b99bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.694 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[57a80eaa-e239-4ef1-83dd-3449086721e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ceph-mon[76335]: pgmap v1427: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 224 op/s
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.722 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7aab4d1a-a809-4fb8-b2a7-7faa4c859b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7613de-9629-4c2a-a761-7ca748d74457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303307, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4b6c96-a438-4f2d-bcca-9034bd93d177]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303308, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303308, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.760 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:41 compute-0 nova_compute[244014]: 2026-02-25 12:29:41.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.766 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.766 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.767 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.767 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
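[annotation] Both transactions above report "Transaction caused no change": AddPortCommand(may_exist=True) and DbSetCommand are deliberately idempotent, so re-wiring an already-plugged metadata tap is a no-op rather than an error. A sketch of the equivalent wiring through the ovs-vsctl CLI, driven from Python, with names taken from the log lines:

```python
# Idempotent OVS wiring: --may-exist makes add-port a no-op for an existing
# port, and 'set Interface' with the value it already holds changes nothing,
# which is exactly why the transactions above commit without effect.
import subprocess

def wire_metadata_port(bridge, port, iface_id):
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', bridge, port],
                   check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    'external_ids:iface-id=%s' % iface_id], check=True)

wire_metadata_port('br-int', 'tap9a448894-80',
                   '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f')
```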
Feb 25 12:29:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 473 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 857 KiB/s wr, 284 op/s
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:29:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223370145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027299510273700845 of space, bias 1.0, pg target 0.8189853082110253 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002492563996361888 of space, bias 1.0, pg target 0.7477691989085664 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.701649371663396e-07 of space, bias 4.0, pg target 0.0010441979245996076 quantized to 16 (current 16)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:29:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
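[annotation] The autoscaler lines above all follow one formula: pg target = usage ratio × bias × a per-cluster PG budget. A budget of 300 reproduces every logged value exactly (plausibly mon_target_pg_per_osd=100 across 3 OSDs, though that split is an assumption). The fractional target is then rounded toward a power of two, and the pool is left at its current pg_num unless the ideal value is far enough off to justify a change, which is why the near-idle pools above stay at 32 or 16:

```python
# Reconstructing the logged pg_autoscaler arithmetic. PG_BUDGET = 300 is an
# assumption (e.g. mon_target_pg_per_osd=100 * 3 OSDs); it is simply the
# constant that reproduces every 'pg target' value in the lines above.
PG_BUDGET = 300

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * PG_BUDGET

# Pool 'vms': usage 0.0027299510273700845, bias 1.0
print(pg_target(0.0027299510273700845, 1.0))  # ~0.8189853082110253, as logged
# Pool 'cephfs.cephfs.meta': usage 8.701649371663396e-07, bias 4.0
print(pg_target(8.701649371663396e-07, 4.0))  # ~0.0010441979245996, as logged
# Pool '.mgr': usage 7.185749983720779e-06, bias 1.0
print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557249951162, as logged
```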
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.320 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.367 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.372 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108628078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.923 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.926 244018 DEBUG nova.virt.libvirt.vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:35Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.926 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.927 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.929 244018 DEBUG nova.objects.instance [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.954 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <uuid>d945940d-a1b5-4a36-b980-efda3a9efda6</uuid>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <name>instance-00000045</name>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-579439343</nova:name>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:41</nova:creationTime>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:user uuid="74af2f394ab04b06b55e62150e81b6b1">tempest-ServerRescueTestJSONUnderV235-1059268888-project-member</nova:user>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:project uuid="61371c73c9fb4961886c5c22f8f871e1">tempest-ServerRescueTestJSONUnderV235-1059268888</nova:project>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <nova:port uuid="197929cb-aaa4-48f0-a831-7ac3f4ac5b37">
Feb 25 12:29:42 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="serial">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="uuid">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk">
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config">
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3f:10:e8"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <target dev="tap197929cb-aa"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log" append="off"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:42 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:42 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:42 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:42 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:42 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
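[annotation] The XML block nova just emitted is a complete libvirt domain definition: an RBD-backed vda plus a config-drive cdrom, one virtio VIF targeting tap197929cb-aa, a q35 machine type, and the nova:instance metadata. A small standard-library sketch of pulling those device mappings back out of such a document, using a trimmed inline copy of the relevant elements:

```python
# Extract the disk and VIF mappings from a libvirt domain XML like the one
# logged above, with only the standard library.
import xml.etree.ElementTree as ET

XML = """
<domain type="kvm">
  <name>instance-00000045</name>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:3f:10:e8"/>
      <target dev="tap197929cb-aa"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(XML)
print('domain:', root.findtext('name'))
for disk in root.iter('disk'):
    src, tgt = disk.find('source'), disk.find('target')
    print('disk:', src.get('protocol'), src.get('name'), '->', tgt.get('dev'))
for iface in root.iter('interface'):
    print('vif:', iface.find('mac').get('address'),
          '->', iface.find('target').get('dev'))
```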
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.955 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Preparing to wait for external event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.955 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.956 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.956 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
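[annotation] The three lockutils lines above are nova registering a waiter for network-vif-plugged-197929cb... before it actually plugs the port, so Neutron's callback cannot race past the registration. A hedged sketch of that prepare-then-wait pattern with plain threading; the names are illustrative, not nova's real classes:

```python
# Prepare-then-wait: the waiter registers an Event *before* starting the
# work that triggers the external notification, so a fast callback is never
# lost. This mirrors the prepare_for_instance_event locking shown above.
import threading

_events = {}
_lock = threading.Lock()

def prepare(instance, name):
    with _lock:                      # "Acquiring lock ...-events"
        return _events.setdefault((instance, name), threading.Event())

def deliver(instance, name):
    with _lock:
        ev = _events.pop((instance, name), None)
    if ev:
        ev.set()                     # wakes the waiter

# Waiter side: register first, then plug the VIF, then block with a timeout.
ev = prepare('d945940d', 'network-vif-plugged')
# ... plug the vif here ...
ev.wait(timeout=300)
```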
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.957 244018 DEBUG nova.virt.libvirt.vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:35Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.958 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.959 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.959 244018 DEBUG os_vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.960 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.961 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.964 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.964 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap197929cb-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.965 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap197929cb-aa, col_values=(('external_ids', {'iface-id': '197929cb-aaa4-48f0-a831-7ac3f4ac5b37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:10:e8', 'vm-uuid': 'd945940d-a1b5-4a36-b980-efda3a9efda6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:42 compute-0 NetworkManager[49836]: <info>  [1772022582.9688] manager: (tap197929cb-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:42 compute-0 nova_compute[244014]: 2026-02-25 12:29:42.975 244018 INFO os_vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa')
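[annotation] The plug sequence above (AddBridgeCommand, AddPortCommand, DbSetCommand) is ovsdbapp's command API at work; the external_ids written on the Interface row, iface-id in particular, are what lets ovn-controller recognize and bind the port. A minimal sketch of the same transaction through ovsdbapp's public API, assuming a local switch socket; the values come from the log:

```python
# The logged plug sequence expressed directly against ovsdbapp; the socket
# path is an assumption for a local Open vSwitch.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tap197929cb-aa', may_exist=True))
    txn.add(api.db_set('Interface', 'tap197929cb-aa',
                       ('external_ids', {
                           'iface-id': '197929cb-aaa4-48f0-a831-7ac3f4ac5b37',
                           'iface-status': 'active',
                           'attached-mac': 'fa:16:3e:3f:10:e8'})))
```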
Feb 25 12:29:42 compute-0 ceph-mon[76335]: pgmap v1428: 305 pgs: 305 active+clean; 473 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 857 KiB/s wr, 284 op/s
Feb 25 12:29:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2223370145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.137 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No VIF found with MAC fa:16:3e:3f:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Using config drive
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.532 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.541 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.542 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 WARNING nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 WARNING nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state deleting.
Feb 25 12:29:43 compute-0 nova_compute[244014]: 2026-02-25 12:29:43.579 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 317 op/s
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.306 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating config drive at /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.312 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcq3tr85h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1108628078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.449 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcq3tr85h" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.802 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.806 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
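[annotation] Config-drive creation is two subprocess steps, both visible above: mkisofs packs the staged metadata directory into an ISO9660 volume labelled config-2, and rbd import pushes the result into the vms pool so the cdrom device in the domain XML can attach it. A sketch of the same pair of calls via oslo's processutils (the module these log lines reference), with the flags copied from the logged commands:

```python
# Build the config-drive ISO, then upload it to RBD; every flag below is
# taken from the commands logged above.
from oslo_concurrency import processutils

iso = ('/var/lib/nova/instances/'
       'd945940d-a1b5-4a36-b980-efda3a9efda6/disk.config')

processutils.execute(
    '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
    '-allow-multidot', '-l', '-publisher',
    'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
    '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpcq3tr85h')

processutils.execute(
    'rbd', 'import', '--pool', 'vms', iso,
    'd945940d-a1b5-4a36-b980-efda3a9efda6_disk.config',
    '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
```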
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.867 244018 DEBUG nova.objects.instance [None req-cf4e8229-fdc1-426c-989d-74ca743cc7c1 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.894 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022584.893929, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.925 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:44 compute-0 nova_compute[244014]: 2026-02-25 12:29:44.931 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:45 compute-0 nova_compute[244014]: 2026-02-25 12:29:45.172 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (suspending). Skip.
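[annotation] The sync above compares nova's DB power state (1, RUNNING) with what libvirt reports after the lifecycle event (3, PAUSED) but declines to act because the instance still has a pending task. A hedged sketch of that decision; the numeric codes match nova.compute.power_state, the rest is illustrative:

```python
# The power-state sync decision logged above: never fight an in-flight
# task_state, only reconcile the DB when the instance is otherwise idle.
RUNNING, PAUSED = 1, 3   # nova.compute.power_state values

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return 'skip: pending task %s' % task_state   # the "Skip." line
    if db_power_state != vm_power_state:
        return 'update DB power_state to %s' % vm_power_state
    return 'in sync'

print(sync_power_state(RUNNING, PAUSED, 'suspending'))
# -> skip: pending task suspending
```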
Feb 25 12:29:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 310 op/s
Feb 25 12:29:45 compute-0 ceph-mon[76335]: pgmap v1429: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 317 op/s
Feb 25 12:29:46 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:29:46 compute-0 NetworkManager[49836]: <info>  [1772022586.4840] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:46 compute-0 ovn_controller[147040]: 2026-02-25T12:29:46Z|00623|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:29:46 compute-0 ovn_controller[147040]: 2026-02-25T12:29:46Z|00624|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:29:46 compute-0 ovn_controller[147040]: 2026-02-25T12:29:46Z|00625|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.498 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.500 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:29:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.503 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.505 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[efbe512e-808c-4924-907e-a57b12bf2c0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.508 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:29:46 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:29:46 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000035.scope: Consumed 5.400s CPU time.
Feb 25 12:29:46 compute-0 systemd-machined[210048]: Machine qemu-80-instance-00000035 terminated.
Feb 25 12:29:46 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.588 244018 DEBUG nova.compute.manager [None req-cf4e8229-fdc1-426c-989d-74ca743cc7c1 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:46 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : haproxy version is 2.8.14-c23fe91
Feb 25 12:29:46 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : path to executable is /usr/sbin/haproxy
Feb 25 12:29:46 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [WARNING]  (303271) : Exiting Master process...
Feb 25 12:29:46 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [ALERT]    (303271) : Current worker (303273) exited with code 143 (Terminated)
Feb 25 12:29:46 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [WARNING]  (303271) : All workers exited. Exiting... (0)
Feb 25 12:29:46 compute-0 systemd[1]: libpod-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope: Deactivated successfully.
Feb 25 12:29:46 compute-0 podman[303454]: 2026-02-25 12:29:46.764356304 +0000 UTC m=+0.143163061 container died 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.833 244018 DEBUG nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.834 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.834 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.835 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.835 244018 DEBUG nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:46 compute-0 nova_compute[244014]: 2026-02-25 12:29:46.836 244018 WARNING nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state suspended and task_state None.
Feb 25 12:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85-userdata-shm.mount: Deactivated successfully.
Feb 25 12:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-47676d99690e7ef9a70c5e78706610c8c77da71cb8b8a0b03963190efdeac6fe-merged.mount: Deactivated successfully.
Feb 25 12:29:47 compute-0 podman[303454]: 2026-02-25 12:29:47.33375856 +0000 UTC m=+0.712565327 container cleanup 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:47 compute-0 ceph-mon[76335]: pgmap v1430: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 310 op/s
Feb 25 12:29:47 compute-0 systemd[1]: libpod-conmon-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope: Deactivated successfully.
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.347 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.348 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting local config drive /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config because it was imported into RBD.
Feb 25 12:29:47 compute-0 systemd-udevd[303422]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:47 compute-0 NetworkManager[49836]: <info>  [1772022587.4131] manager: (tap197929cb-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Feb 25 12:29:47 compute-0 kernel: tap197929cb-aa: entered promiscuous mode
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:47 compute-0 ovn_controller[147040]: 2026-02-25T12:29:47Z|00626|binding|INFO|Claiming lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for this chassis.
Feb 25 12:29:47 compute-0 ovn_controller[147040]: 2026-02-25T12:29:47Z|00627|binding|INFO|197929cb-aaa4-48f0-a831-7ac3f4ac5b37: Claiming fa:16:3e:3f:10:e8 10.100.0.4
Feb 25 12:29:47 compute-0 NetworkManager[49836]: <info>  [1772022587.4287] device (tap197929cb-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:47 compute-0 NetworkManager[49836]: <info>  [1772022587.4314] device (tap197929cb-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:47 compute-0 ovn_controller[147040]: 2026-02-25T12:29:47Z|00628|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 ovn-installed in OVS
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:47 compute-0 ovn_controller[147040]: 2026-02-25T12:29:47Z|00629|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 up in Southbound
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.440 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:47 compute-0 systemd-machined[210048]: New machine qemu-81-instance-00000045.
Feb 25 12:29:47 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Feb 25 12:29:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:29:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:29:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:29:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:29:47 compute-0 podman[303486]: 2026-02-25 12:29:47.716118683 +0000 UTC m=+0.352786825 container remove 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.721 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71bcecc7-752f-4125-97e3-166e02ba4764]: (4, ('Wed Feb 25 12:29:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85)\n84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85\nWed Feb 25 12:29:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85)\n84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.723 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[806eac2d-8b0a-44db-8594-cbf5033ea585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.724 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:47 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26cd8a5e-e808-4853-bba1-ade62cf6b606]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6bddcb-c378-44f9-b14e-69fe5e72dd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf5526a-2713-40cc-af99-c1bbaa634331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.815 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8df4e874-f775-4653-8628-e343ff64baae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454864, 'reachable_time': 41913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303524, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.821 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.821 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[986f38cb-10b3-492b-b249-17e5bb392955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.823 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:29:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[815c4d3d-ed20-4797-b187-31dc154ae3c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.0 MiB/s wr, 370 op/s
Feb 25 12:29:47 compute-0 nova_compute[244014]: 2026-02-25 12:29:47.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.206 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022588.2057252, d945940d-a1b5-4a36-b980-efda3a9efda6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.208 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Started (Lifecycle Event)
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.238 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.246 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022588.2058125, d945940d-a1b5-4a36-b980-efda3a9efda6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.246 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Paused (Lifecycle Event)
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.317 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.350 244018 INFO nova.virt.libvirt.driver [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deleting instance files /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc_del
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.352 244018 INFO nova.virt.libvirt.driver [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deletion of /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc_del complete
Feb 25 12:29:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:29:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.459 244018 INFO nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 8.15 seconds to destroy the instance on the hypervisor.
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.460 244018 DEBUG oslo.service.loopingcall [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.461 244018 DEBUG nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.461 244018 DEBUG nova.network.neutron [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.568 244018 INFO nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Resuming
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.569 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.677 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.678 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.678 244018 DEBUG nova.network.neutron [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:48 compute-0 ovn_controller[147040]: 2026-02-25T12:29:48Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:8c:25 10.100.0.12
Feb 25 12:29:48 compute-0 ovn_controller[147040]: 2026-02-25T12:29:48Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:8c:25 10.100.0.12
Feb 25 12:29:48 compute-0 nova_compute[244014]: 2026-02-25 12:29:48.864 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:29:49 compute-0 ovn_controller[147040]: 2026-02-25T12:29:49Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 12:29:49 compute-0 ovn_controller[147040]: 2026-02-25T12:29:49Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.325 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 WARNING nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state suspended and task_state resuming.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Processing event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.331 244018 WARNING nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state building and task_state spawning.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.331 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.338 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022589.3354928, d945940d-a1b5-4a36-b980-efda3a9efda6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Resumed (Lifecycle Event)
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.340 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.345 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance spawned successfully.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.345 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.372 244018 DEBUG nova.network.neutron [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.380 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.389 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.389 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.390 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.391 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.391 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.392 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.423 244018 INFO nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 0.96 seconds to deallocate network for instance.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.431 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:29:49 compute-0 ceph-mon[76335]: pgmap v1431: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.0 MiB/s wr, 370 op/s
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.576 244018 DEBUG nova.compute.manager [req-2e824472-ceb5-42bd-9166-eef5c5a7caba req-43df687e-8bf5-461e-a840-7344d4ef49bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-deleted-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.610 244018 INFO nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 13.95 seconds to spawn the instance on the hypervisor.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.610 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.699 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.700 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.794 244018 INFO nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 15.58 seconds to build instance.
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.827 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:49 compute-0 nova_compute[244014]: 2026-02-25 12:29:49.857 244018 DEBUG oslo_concurrency.processutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.332 244018 DEBUG nova.network.neutron [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.359 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.366 244018 DEBUG nova.virt.libvirt.vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.367 244018 DEBUG nova.network.os_vif_util [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.369 244018 DEBUG nova.network.os_vif_util [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.369 244018 DEBUG os_vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.372 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.373 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.377 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.377 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.378 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.379 244018 INFO os_vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
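The three ovsdbapp commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the whole of the "plug": with may_exist=True they are idempotent, which is why both transactions report "Transaction caused no change" when the bridge and port already exist. A rough, abridged equivalent, assuming a local OVSDB UNIX socket (the socket path and timeout here are illustrative, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local OVSDB endpoint; adjust for the deployment at hand.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Same commands the log shows; may_exist=True makes them no-ops
        # when OVN/os-vif has already created the objects.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapee46268d-74', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapee46268d-74',
            ('external_ids',
             {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3',
              'attached-mac': 'fa:16:3e:ba:87:f1'})))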
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.402 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123865339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.417 244018 DEBUG oslo_concurrency.processutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.429 244018 DEBUG nova.compute.provider_tree [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.442 244018 DEBUG nova.scheduler.client.report [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
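For reference, Placement derives usable capacity from such an inventory as (total - reserved) * allocation_ratio, so the report above amounts to 32 schedulable VCPUs, 7167 MB of RAM, and about 52 GB of disk:

    # Values copied from the inventory data in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2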
Feb 25 12:29:50 compute-0 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.4676] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Feb 25 12:29:50 compute-0 ovn_controller[147040]: 2026-02-25T12:29:50Z|00630|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 ovn_controller[147040]: 2026-02-25T12:29:50Z|00631|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 12:29:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4123865339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.493 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.494 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.496 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:50 compute-0 ovn_controller[147040]: 2026-02-25T12:29:50Z|00632|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 12:29:50 compute-0 ovn_controller[147040]: 2026-02-25T12:29:50Z|00633|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.503 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[568078eb-5c4a-4d0d-9e23-342cd6caf7a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.504 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.506 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5d9de0-c20e-4a97-beec-71b0c915e33a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09a128ca-7de9-4c21-8227-717d7c76313d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.515 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7c8900-9b15-45ce-877d-28e7f6d68112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 systemd-machined[210048]: New machine qemu-82-instance-00000035.
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.524 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:50 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:29:50 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000035.
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ff1990-dfcb-4585-8555-ff1ba73131cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.545 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[998e1bc7-2081-4d07-b3cf-3cc1666364d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 systemd-udevd[303611]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.5508] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5760220-1436-4ae7-8586-a54a286700c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 systemd-udevd[303613]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.553 244018 INFO nova.scheduler.client.report [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Deleted allocations for instance abe229eb-2238-4237-a7f2-83b8476ac1dc
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.5581] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.5587] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.577 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[232b8af4-630d-4588-8030-dfe442b1ee4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.579 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0219eee3-c6e0-4421-b0d3-7a9206fea53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.5945] device (tapce318891-c0): carrier: link connected
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.596 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1f2adb-a0d8-4764-ae89-daba1f489594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.607 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc31043-6602-4558-938b-2e68ad4a16f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 24131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303639, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b321ddc-7d29-4a33-80ac-8c5e988ffb7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456000, 'tstamp': 456000}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303640, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f8518fcd-4533-45d5-9ffb-4bd118548a5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 24131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303641, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38825170-758a-4a66-9219-64175a91f6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d857c223-b04b-4aa9-b42b-63f2403ec233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.700 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.700 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.701 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 NetworkManager[49836]: <info>  [1772022590.7029] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Feb 25 12:29:50 compute-0 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:50 compute-0 ovn_controller[147040]: 2026-02-25T12:29:50Z|00634|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.716 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf1c13f-b68c-4c88-80d6-f00089e2733d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.718 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:29:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.718 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
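The rendered config above binds haproxy to 169.254.169.254:80 inside the ovnmeta namespace, proxies to the UNIX socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address with a leading '/' as a UNIX socket), and tags each request with X-OVN-Network-ID. One way to poke the chain from the host once the worker is up, a sketch using the namespace name from the log (needs root):

    import subprocess

    ns = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
    r = subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', 'http://169.254.169.254/openstack'],
        capture_output=True, text=True, check=True)
    print(r.stdout)   # metadata version listing if the proxy chain is healthy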
Feb 25 12:29:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.923 244018 INFO nova.compute.manager [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Rescuing
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.924 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.925 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:29:50 compute-0 nova_compute[244014]: 2026-02-25 12:29:50.925 244018 DEBUG nova.network.neutron [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.042 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022591.0407772, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.056 244018 DEBUG nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.056 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.069 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.074 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
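The numeric states in this sync message come from nova.compute.power_state: the DB still holds 4 (SHUTDOWN, which is how a suspended libvirt domain is recorded) while the hypervisor now reports 1 (RUNNING); the handler skips the sync because task_state is still 'resuming' (see the "pending task" line below). The mapping, for reference:

    # nova.compute.power_state constants (the gaps at 2 and 5 are retired values)
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    db_state, vm_state = 4, 1   # values from the log line above
    print(POWER_STATES[db_state], '->', POWER_STATES[vm_state])
    # SHUTDOWN -> RUNNING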
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.078 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance running successfully.
Feb 25 12:29:51 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.081 244018 DEBUG nova.virt.libvirt.guest [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.081 244018 DEBUG nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.113 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.114 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022591.0556786, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.114 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)
Feb 25 12:29:51 compute-0 podman[303717]: 2026-02-25 12:29:51.134725996 +0000 UTC m=+0.098150324 container create 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.140 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.148 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:51 compute-0 podman[303717]: 2026-02-25 12:29:51.073893051 +0000 UTC m=+0.037317389 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:29:51 compute-0 systemd[1]: Started libpod-conmon-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope.
Feb 25 12:29:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08bd54308eaf352c167ef973cce0fd266db351b043b8cd64c7edc8d50209f31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:29:51 compute-0 podman[303717]: 2026-02-25 12:29:51.230647116 +0000 UTC m=+0.194071464 container init 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:29:51 compute-0 podman[303717]: 2026-02-25 12:29:51.235446272 +0000 UTC m=+0.198870590 container start 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 12:29:51 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : New worker (303737) forked
Feb 25 12:29:51 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : Loading success.
Feb 25 12:29:51 compute-0 ceph-mon[76335]: pgmap v1432: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.807 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.807 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.808 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 WARNING nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:51 compute-0 nova_compute[244014]: 2026-02-25 12:29:51.811 244018 WARNING nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
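The acquire/release pattern above is Nova serializing external events per instance: each network-vif-plugged delivery is popped under a "<uuid>-events" lock, and because the resume has already completed (vm_state active, task_state None) no waiter is registered, hence the two WARNING lines. The same serialization can be sketched with oslo.concurrency directly (illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    uuid = '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'
    with lockutils.lock(f'{uuid}-events'):
        # Pop the waiting event for this instance, if any; Nova logs the
        # "unexpected event" warning when nothing is registered here.
        pass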
Feb 25 12:29:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 509 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 259 op/s
Feb 25 12:29:52 compute-0 kernel: tap27c957cd-d6 (unregistering): left promiscuous mode
Feb 25 12:29:52 compute-0 NetworkManager[49836]: <info>  [1772022592.3664] device (tap27c957cd-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:52 compute-0 ovn_controller[147040]: 2026-02-25T12:29:52Z|00635|binding|INFO|Releasing lport 27c957cd-d68f-48d8-b2e1-170275200ed3 from this chassis (sb_readonly=0)
Feb 25 12:29:52 compute-0 ovn_controller[147040]: 2026-02-25T12:29:52Z|00636|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 down in Southbound
Feb 25 12:29:52 compute-0 ovn_controller[147040]: 2026-02-25T12:29:52Z|00637|binding|INFO|Removing iface tap27c957cd-d6 ovn-installed in OVS
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.383 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.386 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.390 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.400 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[550c85b9-45e1-45ec-a7e7-f35c8670b583]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 25 12:29:52 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Consumed 11.379s CPU time.
Feb 25 12:29:52 compute-0 systemd-machined[210048]: Machine qemu-78-instance-00000044 terminated.
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd3f4b-c900-4e39-b218-f4529ae6baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.430 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7faf8b54-d30a-4fc1-9074-b85f5aadac3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.452 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1a8f9f-ca60-401b-8ba0-2783ce633774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.466 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dd83be-f5e5-40b0-8ebe-eb7136489e79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303755, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.476 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a04de21d-758d-4c55-8787-7d3626f19586]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303756, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303756, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.486 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.487 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.490 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.681 244018 DEBUG nova.network.neutron [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.732 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.776 244018 DEBUG nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.776 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.778 244018 WARNING nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.910 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance shutdown successfully after 14 seconds.
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.916 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.917 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'numa_topology' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.935 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Attempting rescue
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.936 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.941 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.941 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating image(s)
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.980 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.987 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'trusted_certs' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:52 compute-0 nova_compute[244014]: 2026-02-25 12:29:52.994 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.049 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.083 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.088 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.132 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.175 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.176 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.177 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.177 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.213 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.220 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.417 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.419 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.420 244018 INFO nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Terminating instance
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.421 244018 DEBUG nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:29:53 compute-0 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 12:29:53 compute-0 NetworkManager[49836]: <info>  [1772022593.4934] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:53 compute-0 ceph-mon[76335]: pgmap v1433: 305 pgs: 305 active+clean; 509 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 259 op/s
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 ovn_controller[147040]: 2026-02-25T12:29:53Z|00638|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 12:29:53 compute-0 ovn_controller[147040]: 2026-02-25T12:29:53Z|00639|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 12:29:53 compute-0 ovn_controller[147040]: 2026-02-25T12:29:53Z|00640|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.519 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.519 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.521 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 12:29:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:29:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.527 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13184264-f485-440d-9136-bff0fd6e99e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.528 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.539 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.539 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start _get_guest_xml network_info=[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.540 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:53 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 12:29:53 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000035.scope: Consumed 2.837s CPU time.
Feb 25 12:29:53 compute-0 systemd-machined[210048]: Machine qemu-82-instance-00000035 terminated.
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.562 244018 WARNING nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.572 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.573 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.581 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.581 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'vcpu_model' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.601 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.658 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.658 244018 DEBUG nova.objects.instance [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.676 244018 DEBUG nova.virt.libvirt.vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.677 244018 DEBUG nova.network.os_vif_util [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.677 244018 DEBUG nova.network.os_vif_util [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.678 244018 DEBUG os_vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.680 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:53 compute-0 nova_compute[244014]: 2026-02-25 12:29:53.689 244018 INFO os_vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')
Feb 25 12:29:53 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : haproxy version is 2.8.14-c23fe91
Feb 25 12:29:53 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : path to executable is /usr/sbin/haproxy
Feb 25 12:29:53 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [WARNING]  (303735) : Exiting Master process...
Feb 25 12:29:53 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [ALERT]    (303735) : Current worker (303737) exited with code 143 (Terminated)
Feb 25 12:29:53 compute-0 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [WARNING]  (303735) : All workers exited. Exiting... (0)
Feb 25 12:29:53 compute-0 systemd[1]: libpod-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope: Deactivated successfully.
Feb 25 12:29:53 compute-0 podman[303888]: 2026-02-25 12:29:53.704062235 +0000 UTC m=+0.074110812 container died 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08bd54308eaf352c167ef973cce0fd266db351b043b8cd64c7edc8d50209f31-merged.mount: Deactivated successfully.
Feb 25 12:29:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Feb 25 12:29:53 compute-0 podman[303888]: 2026-02-25 12:29:53.934905851 +0000 UTC m=+0.304954448 container cleanup 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:29:53 compute-0 systemd[1]: libpod-conmon-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope: Deactivated successfully.
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.004 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.005 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.005 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:29:54 compute-0 podman[303964]: 2026-02-25 12:29:54.104293045 +0000 UTC m=+0.137748227 container remove 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.113 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0e8ab-75c4-4303-8831-37129f44272a]: (4, ('Wed Feb 25 12:29:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d)\n22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d\nWed Feb 25 12:29:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d)\n22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.116 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f454ebd9-daec-4b28-b241-3162bde8485c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:54 compute-0 kernel: tapce318891-c0: left promiscuous mode
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12ee1714-ce11-4133-9df8-bd82c69cb989]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358877739' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.164 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2a99f8-922d-49e1-88ee-1b088fd9acd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76d4e0a4-60cd-47b3-b227-6e9fb3add5fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.179 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.180 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.183 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d69bcaa-589e-43eb-bc77-2540a8692afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455995, 'reachable_time': 28972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303985, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.189 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:29:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.189 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e55f8bbc-bb9a-4b8d-a89f-de2ca130b49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.352 244018 INFO nova.virt.libvirt.driver [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deleting instance files /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_del
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.353 244018 INFO nova.virt.libvirt.driver [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deletion of /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_del complete
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.442 244018 INFO nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 1.02 seconds to destroy the instance on the hypervisor.
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG oslo.service.loopingcall [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG nova.network.neutron [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:29:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/358877739' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.514099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594514147, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2484, "num_deletes": 515, "total_data_size": 3274670, "memory_usage": 3327520, "flush_reason": "Manual Compaction"}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594533810, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3211321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28139, "largest_seqno": 30622, "table_properties": {"data_size": 3200577, "index_size": 6406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25751, "raw_average_key_size": 19, "raw_value_size": 3176708, "raw_average_value_size": 2453, "num_data_blocks": 280, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 515, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022416, "oldest_key_time": 1772022416, "file_creation_time": 1772022594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19765 microseconds, and 8348 cpu microseconds.
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.533863) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3211321 bytes OK
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.533891) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536247) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536269) EVENT_LOG_v1 {"time_micros": 1772022594536262, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3263151, prev total WAL file size 3263151, number of live WAL files 2.
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.537315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3136KB)], [62(8327KB)]
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594537373, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11738385, "oldest_snapshot_seqno": -1}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5642 keys, 9910825 bytes, temperature: kUnknown
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594594911, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9910825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9869660, "index_size": 25951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 141476, "raw_average_key_size": 25, "raw_value_size": 9764991, "raw_average_value_size": 1730, "num_data_blocks": 1058, "num_entries": 5642, "num_filter_entries": 5642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.595172) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9910825 bytes
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.596523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 172.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 6686, records dropped: 1044 output_compression: NoCompression
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.596553) EVENT_LOG_v1 {"time_micros": 1772022594596539, "job": 34, "event": "compaction_finished", "compaction_time_micros": 57622, "compaction_time_cpu_micros": 28449, "output_level": 6, "num_output_files": 1, "total_output_size": 9910825, "num_input_records": 6686, "num_output_records": 5642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594597114, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594598169, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.537239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:29:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918405371' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.747 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.749 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:54 compute-0 nova_compute[244014]: 2026-02-25 12:29:54.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.128 244018 DEBUG nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.129 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.129 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.130 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.130 244018 DEBUG nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.131 244018 WARNING nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.
Feb 25 12:29:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:29:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776476394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.399 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.401 244018 DEBUG nova.virt.libvirt.vif [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:33Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.402 244018 DEBUG nova.network.os_vif_util [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.403 244018 DEBUG nova.network.os_vif_util [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.405 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.432 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <uuid>f7b18575-d1fc-423f-a596-8ca6d8ed08fa</uuid>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <name>instance-00000044</name>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1779266838</nova:name>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:29:53</nova:creationTime>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <nova:port uuid="27c957cd-d68f-48d8-b2e1-170275200ed3">
Feb 25 12:29:55 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <system>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="serial">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="uuid">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </system>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <os>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </os>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <features>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </features>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <target dev="vdb" bus="virtio"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:29:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:bf:b7:62"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <target dev="tap27c957cd-d6"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log" append="off"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <video>
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </video>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:29:55 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:29:55 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:29:55 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:29:55 compute-0 nova_compute[244014]: </domain>
Feb 25 12:29:55 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.444 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.507 244018 DEBUG nova.network.neutron [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:55 compute-0 ceph-mon[76335]: pgmap v1434: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Feb 25 12:29:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3918405371' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1776476394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.522 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.524 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.524 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.525 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:bf:b7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.526 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Using config drive
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.550 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.558 244018 INFO nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 1.11 seconds to deallocate network for instance.
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.581 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'ec2_ids' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.621 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'keypairs' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.632 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.632 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.741 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022580.7403479, abe229eb-2238-4237-a7f2-83b8476ac1dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.742 244018 INFO nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Stopped (Lifecycle Event)
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.769 244018 DEBUG nova.compute.manager [None req-12f5ec44-1447-4ed9-afbe-0b5b828bd547 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:55 compute-0 nova_compute[244014]: 2026-02-25 12:29:55.803 244018 DEBUG oslo_concurrency.processutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.033 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating config drive at /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.039 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgbk_prr1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.127 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 WARNING nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state deleted and task_state None.
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-deleted-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.178 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgbk_prr1" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.210 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.213 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754663529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.353 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.354 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting local config drive /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue because it was imported into RBD.
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.367 244018 DEBUG oslo_concurrency.processutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.372 244018 DEBUG nova.compute.provider_tree [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.392 244018 DEBUG nova.scheduler.client.report [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
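
The inventory payload above is what the resource tracker reports to placement; the schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick worked check with the logged values (illustrative script, not nova code):

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's effective-capacity formula.
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
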
Feb 25 12:29:56 compute-0 systemd-udevd[303868]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:29:56 compute-0 kernel: tap27c957cd-d6: entered promiscuous mode
Feb 25 12:29:56 compute-0 NetworkManager[49836]: <info>  [1772022596.4065] manager: (tap27c957cd-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:56 compute-0 ovn_controller[147040]: 2026-02-25T12:29:56Z|00641|binding|INFO|Claiming lport 27c957cd-d68f-48d8-b2e1-170275200ed3 for this chassis.
Feb 25 12:29:56 compute-0 ovn_controller[147040]: 2026-02-25T12:29:56Z|00642|binding|INFO|27c957cd-d68f-48d8-b2e1-170275200ed3: Claiming fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 12:29:56 compute-0 ovn_controller[147040]: 2026-02-25T12:29:56Z|00643|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 ovn-installed in OVS
Feb 25 12:29:56 compute-0 ovn_controller[147040]: 2026-02-25T12:29:56Z|00644|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 up in Southbound
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.419 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.416 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.418 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.421 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:29:56 compute-0 NetworkManager[49836]: <info>  [1772022596.4236] device (tap27c957cd-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:29:56 compute-0 NetworkManager[49836]: <info>  [1772022596.4242] device (tap27c957cd-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7769574d-1675-480a-b9eb-ce6b67d74381]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:56 compute-0 systemd-machined[210048]: New machine qemu-83-instance-00000044.
Feb 25 12:29:56 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000044.
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.456 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc9dcfc-0668-4ca3-8dbd-946741dcacd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.459 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f88636af-c322-4794-ad6d-fb1eabc1c2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.474 244018 INFO nova.scheduler.client.report [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Deleted allocations for instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[79d34b4b-9ddf-4532-9573-5302f1a82f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c394152-1742-4f5c-87a6-806b62363af1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304136, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b66be0-8e84-4dd5-ac71-282a32cf00dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304138, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304138, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
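
The two large privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta- namespace: the veth tap85b56c79-01 is UP and carries 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A minimal sketch of gathering the same data, assuming the pyroute2 library and that the namespace exists:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3') as ns:
        for link in ns.get_links():          # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():           # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
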
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.521 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1754663529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
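
The DelPortCommand/AddPortCommand/DbSetCommand entries above are single-command ovsdbapp transactions that move the metadata tap onto br-int and stamp it with its neutron iface-id (both no-ops here, hence "Transaction caused no change"). A hedged sketch of the same calls; the agent issues them one per transaction, batched below only for brevity, and the OVS DB socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirrors the three commands logged above.
        txn.add(api.del_port('tap85b56c79-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap85b56c79-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap85b56c79-00',
            ('external_ids',
             {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'})))
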
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.562 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.857 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.859 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.860 244018 INFO nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Terminating instance
Feb 25 12:29:56 compute-0 nova_compute[244014]: 2026-02-25 12:29:56.861 244018 DEBUG nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:29:56 compute-0 kernel: tap05178abb-a1 (unregistering): left promiscuous mode
Feb 25 12:29:56 compute-0 NetworkManager[49836]: <info>  [1772022596.9926] device (tap05178abb-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 ovn_controller[147040]: 2026-02-25T12:29:57Z|00645|binding|INFO|Releasing lport 05178abb-a113-4013-9194-9243afe9d0ff from this chassis (sb_readonly=0)
Feb 25 12:29:57 compute-0 ovn_controller[147040]: 2026-02-25T12:29:57Z|00646|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff down in Southbound
Feb 25 12:29:57 compute-0 ovn_controller[147040]: 2026-02-25T12:29:57Z|00647|binding|INFO|Removing iface tap05178abb-a1 ovn-installed in OVS
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.015 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:1d:a7 10.100.0.10'], port_security=['fa:16:3e:7e:1d:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8086400b-ac70-4c79-928b-4f1966084384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3f8dac82-3613-4042-9783-376a897175f3 809ecded-d7db-4d4f-aed2-3cfc6bac71b9 8484ff9f-cb46-4540-b482-1e509f2bf2a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=05178abb-a113-4013-9194-9243afe9d0ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.019 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 05178abb-a113-4013-9194-9243afe9d0ff in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.021 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a448894-87d7-4c8e-a168-2593011ffed7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.023 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9b3b89-9955-4a5b-b9c7-fd01e5470c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.024 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 namespace which is not needed anymore
Feb 25 12:29:57 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Feb 25 12:29:57 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 13.675s CPU time.
Feb 25 12:29:57 compute-0 systemd-machined[210048]: Machine qemu-73-instance-00000040 terminated.
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.101 244018 INFO nova.virt.libvirt.driver [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance destroyed successfully.
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.102 244018 DEBUG nova.objects.instance [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.131 244018 DEBUG nova.virt.libvirt.vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:51Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.134 244018 DEBUG nova.network.os_vif_util [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.136 244018 DEBUG nova.network.os_vif_util [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.136 244018 DEBUG os_vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.140 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05178abb-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.148 244018 INFO os_vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1')
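
The unplug above goes through os-vif's public entry points: the nova VIF dict is converted to a VIFOpenVSwitch object (12:29:57.136) and handed to the 'ovs' plugin, which issues the DelPortCommand seen at 12:29:57.140. A minimal sketch using the logged values; the field set is abbreviated and this is not nova's conversion code:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' plugin used here
    my_vif = vif.VIFOpenVSwitch(
        id='05178abb-a113-4013-9194-9243afe9d0ff',
        address='fa:16:3e:7e:1d:a7',
        vif_name='tap05178abb-a1',
        bridge_name='br-int',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='8086400b-ac70-4c79-928b-4f1966084384',
        name='instance-00000040')
    os_vif.unplug(my_vif, info)  # removes tap05178abb-a1 from br-int
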
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : haproxy version is 2.8.14-c23fe91
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : path to executable is /usr/sbin/haproxy
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : Exiting Master process...
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : Exiting Master process...
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [ALERT]    (299632) : Current worker (299634) exited with code 143 (Terminated)
Feb 25 12:29:57 compute-0 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : All workers exited. Exiting... (0)
Feb 25 12:29:57 compute-0 systemd[1]: libpod-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope: Deactivated successfully.
Feb 25 12:29:57 compute-0 podman[304228]: 2026-02-25 12:29:57.176144044 +0000 UTC m=+0.056793431 container died 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7-userdata-shm.mount: Deactivated successfully.
Feb 25 12:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2584817c9d371ffcf257a70731a89404fd3bb1f9d05f4f6659b9234327334bd-merged.mount: Deactivated successfully.
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for f7b18575-d1fc-423f-a596-8ca6d8ed08fa due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022597.2176583, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Resumed (Lifecycle Event)
Feb 25 12:29:57 compute-0 podman[304228]: 2026-02-25 12:29:57.22044083 +0000 UTC m=+0.101090227 container cleanup 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.222 244018 DEBUG nova.compute.manager [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:57 compute-0 systemd[1]: libpod-conmon-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope: Deactivated successfully.
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.271 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.273 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:57 compute-0 podman[304278]: 2026-02-25 12:29:57.307100228 +0000 UTC m=+0.057812391 container remove 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.313 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aaab6cea-813a-4480-8864-e7c9c9f7a2b4]: (4, ('Wed Feb 25 12:29:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 (620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7)\n620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7\nWed Feb 25 12:29:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 (620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7)\n620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
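
The privsep reply above captures the wrapper script stopping and then deleting the per-network haproxy container whose exit was logged at 12:29:57.17. The equivalent CLI steps, sketched with subprocess (container name from the log; the agent actually drives this through a privileged helper, not this code):

    import subprocess

    name = "neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7"
    subprocess.run(["podman", "stop", name], check=True)  # worker exits 143
    subprocess.run(["podman", "rm", name], check=True)    # "container remove"
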
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.314 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022597.2190325, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Started (Lifecycle Event)
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.315 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[076d6dc6-66ea-4f6e-8c20-63c908de4f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.315 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 kernel: tap9a448894-80: left promiscuous mode
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f02f46e-2be2-4675-972b-25fcfb64ab4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.336 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.339 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0375fb2-1a8d-49a3-8470-b637f327fe1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3441946c-c9b3-48b9-9d92-2b0985000dda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.340 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61decc58-42e5-41d1-a3b7-5c755070efda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449623, 'reachable_time': 18705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304293, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d9a448894\x2d87d7\x2d4c8e\x2da168\x2d2593011ffed7.mount: Deactivated successfully.
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.355 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:29:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.356 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e68b0565-2786-4dfd-8dea-7a1350ec244e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.482 244018 INFO nova.virt.libvirt.driver [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deleting instance files /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384_del
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.483 244018 INFO nova.virt.libvirt.driver [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deletion of /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384_del complete
Feb 25 12:29:57 compute-0 ceph-mon[76335]: pgmap v1435: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.555 244018 INFO nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 0.69 seconds to destroy the instance on the hypervisor.
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG oslo.service.loopingcall [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:29:57 compute-0 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG nova.network.neutron [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
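
The "Waiting for function ..." line above is oslo.service's looping-call machinery driving _deallocate_network_with_retries until it signals completion. A minimal sketch of that wait-for-completion pattern (the function body is illustrative; nova uses a backoff variant of the loop):

    from oslo_service import loopingcall

    def _attempt():
        # One deallocation attempt; raising LoopingCallDone stops the loop
        # and its payload becomes the return value of wait().
        raise loopingcall.LoopingCallDone(True)

    timer = loopingcall.FixedIntervalLoopingCall(_attempt)
    result = timer.start(interval=1).wait()
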
Feb 25 12:29:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 283 op/s
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.240 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.241 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.241 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.242 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.242 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.243 244018 WARNING nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state rescued and task_state None.
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.243 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.244 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.244 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.245 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.245 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.246 244018 WARNING nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state rescued and task_state None.
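
Both WARNING lines above fire because the network-vif-plugged event arrives while no waiter is registered for its key; nova keys external events as "<name>-<tag>", with the port UUID as the tag. A one-line sketch of that keying (hypothetical helper, mirroring the names in the log):

    def event_key(name, tag=None):
        # e.g. "network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3"
        return f"{name}-{tag}" if tag else name
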
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.867 244018 DEBUG nova.network.neutron [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.888 244018 INFO nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 1.33 seconds to deallocate network for instance.
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.938 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.939 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:29:58 compute-0 nova_compute[244014]: 2026-02-25 12:29:58.961 244018 DEBUG nova.compute.manager [req-3bb00d0e-1883-4dda-afa8-5d50771db852 req-b9d3249a-6d0e-4b18-89d9-4b3b946ee062 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-deleted-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.067 244018 DEBUG oslo_concurrency.processutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:29:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:29:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289405999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.729 244018 DEBUG oslo_concurrency.processutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.735 244018 DEBUG nova.compute.provider_tree [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.759 244018 DEBUG nova.scheduler.client.report [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.790 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:29:59 compute-0 ceph-mon[76335]: pgmap v1436: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 283 op/s
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.826 244018 INFO nova.scheduler.client.report [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Deleted allocations for instance 8086400b-ac70-4c79-928b-4f1966084384
Feb 25 12:29:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Feb 25 12:29:59 compute-0 nova_compute[244014]: 2026-02-25 12:29:59.914 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/289405999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:00 compute-0 ceph-mon[76335]: pgmap v1437: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 WARNING nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state deleted and task_state None.
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.066 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.066 244018 WARNING nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state deleted and task_state None.
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.366 244018 INFO nova.compute.manager [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Pausing
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.370 244018 DEBUG nova.objects.instance [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'flavor' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.418 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022601.4179032, bf261ccf-c216-4383-a22a-7f0553198152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.419 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Paused (Lifecycle Event)
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.422 244018 DEBUG nova.compute.manager [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.454 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:01 compute-0 nova_compute[244014]: 2026-02-25 12:30:01.459 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 421 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 320 op/s
Feb 25 12:30:02 compute-0 nova_compute[244014]: 2026-02-25 12:30:02.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:02 compute-0 ovn_controller[147040]: 2026-02-25T12:30:02Z|00648|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 12:30:02 compute-0 nova_compute[244014]: 2026-02-25 12:30:02.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:02 compute-0 ovn_controller[147040]: 2026-02-25T12:30:02Z|00649|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 12:30:02 compute-0 nova_compute[244014]: 2026-02-25 12:30:02.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:02 compute-0 ceph-mon[76335]: pgmap v1438: 305 pgs: 305 active+clean; 421 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 320 op/s
Feb 25 12:30:03 compute-0 nova_compute[244014]: 2026-02-25 12:30:03.205 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:30:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 277 op/s
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.468 244018 INFO nova.compute.manager [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Unpausing
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.469 244018 DEBUG nova.objects.instance [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'flavor' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.512 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022604.5119896, bf261ccf-c216-4383-a22a-7f0553198152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.512 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Resumed (Lifecycle Event)
Feb 25 12:30:04 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.518 244018 DEBUG nova.virt.libvirt.guest [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.518 244018 DEBUG nova.compute.manager [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.553 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.557 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.582 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 12:30:04 compute-0 podman[304318]: 2026-02-25 12:30:04.740797918 +0000 UTC m=+0.074327489 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:30:04 compute-0 nova_compute[244014]: 2026-02-25 12:30:04.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:04 compute-0 podman[304319]: 2026-02-25 12:30:04.777766266 +0000 UTC m=+0.108344193 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:30:05 compute-0 ceph-mon[76335]: pgmap v1439: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 277 op/s
Feb 25 12:30:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:05 compute-0 kernel: tap197929cb-aa (unregistering): left promiscuous mode
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:05 compute-0 NetworkManager[49836]: <info>  [1772022605.7532] device (tap197929cb-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:30:05 compute-0 ovn_controller[147040]: 2026-02-25T12:30:05Z|00650|binding|INFO|Releasing lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 from this chassis (sb_readonly=0)
Feb 25 12:30:05 compute-0 ovn_controller[147040]: 2026-02-25T12:30:05Z|00651|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 down in Southbound
Feb 25 12:30:05 compute-0 ovn_controller[147040]: 2026-02-25T12:30:05Z|00652|binding|INFO|Removing iface tap197929cb-aa ovn-installed in OVS
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.763 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis
Feb 25 12:30:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.765 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:30:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.766 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67743a01-91a9-4d44-826c-2b1d15139753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:05 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 25 12:30:05 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 12.277s CPU time.
Feb 25 12:30:05 compute-0 systemd-machined[210048]: Machine qemu-81-instance-00000045 terminated.
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:30:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:05 compute-0 nova_compute[244014]: 2026-02-25 12:30:05.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.095 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.095 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.096 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.097 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.219 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance shutdown successfully after 13 seconds.
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.227 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.228 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.250 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Attempting rescue
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.252 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.257 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.258 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating image(s)
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.290 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.294 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.355 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.390 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.395 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.481 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.482 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.483 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.484 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.520 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.527 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.568 244018 DEBUG nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.569 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.569 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 DEBUG nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:06 compute-0 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 WARNING nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.059 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.060 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'migration_context' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.077 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.078 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start _get_guest_xml network_info=[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.079 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'resources' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.081 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.083 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.085 244018 INFO nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Terminating instance
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.087 244018 DEBUG nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.097 244018 WARNING nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.103 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.104 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.113 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.114 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.115 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.115 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.116 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.116 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.117 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.117 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.118 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.118 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.120 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.120 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:07 compute-0 ceph-mon[76335]: pgmap v1440: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 211 op/s
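NOTE: The ceph-mon pgmap line is the cluster's periodic health and throughput digest (305 PGs active+clean, 59 GiB of 60 GiB free). The same figures can be polled in JSON form; a sketch, assuming a reachable cluster and a keyring for client.openstack:

    # Poll the summary behind the pgmap log line via the ceph CLI.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'status', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    pgmap = json.loads(out)['pgmap']
    print(pgmap['num_pgs'], pgmap['bytes_used'], pgmap['bytes_avail'])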
Feb 25 12:30:07 compute-0 kernel: tap27c957cd-d6 (unregistering): left promiscuous mode
Feb 25 12:30:07 compute-0 NetworkManager[49836]: <info>  [1772022607.1386] device (tap27c957cd-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.144 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
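NOTE: nova's RBD backend resolves monitor addresses by shelling out through oslo.concurrency, which is what produces this "Running cmd (subprocess)" line and the matching "CMD ... returned: 0 in 0.559s" later in the log. A sketch of that call pattern:

    # processutils.execute runs the command, logs it, and returns
    # (stdout, stderr); it raises ProcessExecutionError on failure.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print([m['name'] for m in json.loads(out)['mons']])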
Feb 25 12:30:07 compute-0 ovn_controller[147040]: 2026-02-25T12:30:07Z|00653|binding|INFO|Releasing lport 27c957cd-d68f-48d8-b2e1-170275200ed3 from this chassis (sb_readonly=0)
Feb 25 12:30:07 compute-0 ovn_controller[147040]: 2026-02-25T12:30:07Z|00654|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 down in Southbound
Feb 25 12:30:07 compute-0 ovn_controller[147040]: 2026-02-25T12:30:07Z|00655|binding|INFO|Removing iface tap27c957cd-d6 ovn-installed in OVS
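NOTE: ovn-controller is unbinding the logical port from this chassis in three steps: release it, set Port_Binding.up to false in the Southbound DB, and drop the ovn-installed marker from the OVS interface. The same Southbound row can be inspected from the shell; a sketch using the stock ovn-sbctl client rather than the IDL the daemon itself uses:

    # Confirm the lport is unbound: an empty chassis column in the
    # Port_Binding row matches the "Releasing lport" message above.
    import subprocess

    LPORT = '27c957cd-d68f-48d8-b2e1-170275200ed3'
    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding', 'logical_port=%s' % LPORT],
        capture_output=True, text=True, check=True).stdout
    print(out)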
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.162 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.164 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.166 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cf0d1f-46c1-4cb9-832d-38bad3efcb14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
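NOTE: The recurring "privsep: reply[...]" lines are oslo.privsep round-trips: the unprivileged agent serializes a call over a unix socket to a forked root helper, which ships back (status, result) — here (4, True). A sketch of how such an entrypoint is declared; the context and function below are illustrative, the real ones live in neutron:

    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def set_link_up(ifname):
        # Body runs inside the privileged daemon; the return value is
        # what comes back in the reply message the log records.
        from pyroute2 import IPRoute
        with IPRoute() as ip:
            idx = ip.link_lookup(ifname=ifname)[0]
            ip.link('set', index=idx, state='up')
        return True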
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:07 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 25 12:30:07 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Consumed 10.661s CPU time.
Feb 25 12:30:07 compute-0 systemd-machined[210048]: Machine qemu-83-instance-00000044 terminated.
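NOTE: systemd-machined tears down the per-guest scope once qemu exits; "Machine qemu-83-instance-00000044 terminated" is the registry side of the instance destroy in progress above. A sketch to verify from the host, assuming machinectl exits non-zero for machines it no longer knows:

    import subprocess

    res = subprocess.run(
        ['machinectl', 'show', 'qemu-83-instance-00000044'],
        capture_output=True, text=True)
    print('still registered' if res.returncode == 0 else 'terminated')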
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.206 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e8673af6-cca6-4816-813b-abe47268cbe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.210 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[64e97d70-7d12-4496-b9e2-ea121c677e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.241 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d89e94-8dcc-404e-a5ec-37e3d445b4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30c89533-d536-4e37-ada1-6e9d2cdf08e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304488, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30f2eec5-a34a-45dc-a2b1-a50d4776939e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304489, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304489, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
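NOTE: The two large replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta- namespace: the metadata tap carries both the subnet address 10.100.0.2/28 and the link-local metadata address 169.254.169.254/32. A sketch reproducing the address dump, assuming root and that the namespace still exists:

    from pyroute2 import NetNS

    NS = 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3'
    with NetNS(NS) as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'),
                  addr['prefixlen'])
    # Per the log: tap85b56c79-01 10.100.0.2 28
    #              tap85b56c79-01 169.254.169.254 32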
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.281 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
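NOTE: The metadata agent re-plumbs its tap with three idempotent ovsdbapp commands: DelPort from br-ex, AddPort on br-int, and a DbSet of the Interface's iface-id. "Transaction caused no change" means the desired state already held, so the commit was a no-op. A sketch of the same sequence with ovsdbapp's Open_vSwitch schema API; the connection details are assumptions, and the agent itself reuses a long-lived IDL connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap85b56c79-00', bridge='br-ex',
                             if_exists=True))
        txn.add(api.add_port('br-int', 'tap85b56c79-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap85b56c79-00',
            ('external_ids',
             {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'})))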
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.327 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.329 244018 DEBUG nova.objects.instance [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.365 244018 DEBUG nova.virt.libvirt.vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:57Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.365 244018 DEBUG nova.network.os_vif_util [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.367 244018 DEBUG nova.network.os_vif_util [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.367 244018 DEBUG os_vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.370 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27c957cd-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.379 244018 INFO os_vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6')
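NOTE: The unplug sequence above is nova translating its own VIF model into an os-vif object ("Converting VIF ... Converted object VIFOpenVSwitch") and handing it to the 'ovs' plugin, which removes the port from br-int. A reduced sketch of the os-vif side, with field values taken from the log; it assumes the vif_plug_ovs plugin is installed, and the InstanceInfo is trimmed to the minimum:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id='27c957cd-d68f-48d8-b2e1-170275200ed3',
        address='fa:16:3e:bf:b7:62',
        bridge_name='br-int',
        vif_name='tap27c957cd-d6',
        plugin='ovs',
        network=network.Network(id='85b56c79-01b6-47e7-ab3b-02e44acca3d3'))
    inst = instance_info.InstanceInfo(
        uuid='f7b18575-d1fc-423f-a596-8ca6d8ed08fa',
        name='instance-00000044')
    os_vif.unplug(my_vif, inst)   # -> "Successfully unplugged vif ..."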
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.552 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.552 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.554 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
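NOTE: The Acquiring/acquired/released triplet around "f7b18575-...-events" is oslo.concurrency's in-process lock guarding nova's per-instance event table, so an arriving network-vif-unplugged notification cannot race a waiter registering for it; "No waiting events found" is the miss branch. The primitive itself, sketched with an illustrative callback:

    from oslo_concurrency import lockutils

    def pop_event(events, instance_uuid, name):
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            # Return a registered waiter for this event, if any.
            return events.pop(name, None)
        return _pop_event()

    print(pop_event(
        {}, 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa',
        'network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3'))
    # -> None, the "No waiting events found" case in the log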
Feb 25 12:30:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187409761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.703 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.705 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 234 op/s
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.921 244018 INFO nova.virt.libvirt.driver [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting instance files /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_del
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.922 244018 INFO nova.virt.libvirt.driver [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deletion of /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_del complete
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.992 244018 INFO nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 0.90 seconds to destroy the instance on the hypervisor.
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.995 244018 DEBUG oslo.service.loopingcall [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.996 244018 DEBUG nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:30:07 compute-0 nova_compute[244014]: 2026-02-25 12:30:07.996 244018 DEBUG nova.network.neutron [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
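NOTE: Network deallocation is wrapped in an oslo.service looping call so transient neutron failures get retried; "Waiting for function ... _deallocate_network_with_retries to return" is that machinery starting up. A minimal sketch of the primitive, with a stand-in function rather than nova's:

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate_with_retries():
        attempts['n'] += 1
        if attempts['n'] < 3:        # pretend two transient failures
            return                   # keep looping
        raise loopingcall.LoopingCallDone(retvalue='deallocated')

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    print(timer.start(interval=0.1).wait())   # -> 'deallocated'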
Feb 25 12:30:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3187409761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834581766' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.226 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.227 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:08 compute-0 sudo[304566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:30:08 compute-0 sudo[304566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:08 compute-0 sudo[304566]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:08 compute-0 sudo[304594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:30:08 compute-0 sudo[304594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.389 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.457 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.458 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.651 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022593.6500142, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.652 244018 INFO nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Stopped (Lifecycle Event)
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.658 244018 DEBUG nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.659 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.661 244018 WARNING nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.675 244018 DEBUG nova.network.neutron [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.691 244018 DEBUG nova.compute.manager [None req-c69c3ff9-3a62-4f35-891e-8dd24a5b9402 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.704 244018 INFO nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 0.71 seconds to deallocate network for instance.
Feb 25 12:30:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246071265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.727 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.729 244018 DEBUG nova.virt.libvirt.vif [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:49Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.730 244018 DEBUG nova.network.os_vif_util [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.731 244018 DEBUG nova.network.os_vif_util [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.732 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.749 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <uuid>d945940d-a1b5-4a36-b980-efda3a9efda6</uuid>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <name>instance-00000045</name>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-579439343</nova:name>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:07</nova:creationTime>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:user uuid="74af2f394ab04b06b55e62150e81b6b1">tempest-ServerRescueTestJSONUnderV235-1059268888-project-member</nova:user>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:project uuid="61371c73c9fb4961886c5c22f8f871e1">tempest-ServerRescueTestJSONUnderV235-1059268888</nova:project>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <nova:port uuid="197929cb-aaa4-48f0-a831-7ac3f4ac5b37">
Feb 25 12:30:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="serial">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="uuid">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <target dev="vdb" bus="virtio"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3f:10:e8"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <target dev="tap197929cb-aa"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log" append="off"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:30:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:30:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:30:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:30:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:30:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
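NOTE: The rescue-domain XML above is the key artifact in this stretch of log: the .rescue image is attached as vda ahead of the original root disk (now vdb), and a fresh config drive rides on a SATA cdrom, all three backed by RBD with the same ceph auth secret. A sketch that recovers that ordering with the standard library, assuming the dump has been saved to rescue.xml:

    import xml.etree.ElementTree as ET

    tree = ET.parse('rescue.xml')      # the <domain> element dumped above
    for disk in tree.findall('./devices/disk'):
        tgt = disk.find('target')
        src = disk.find('source')
        print(tgt.get('dev'), disk.get('device'), src.get('name'))
    # vda disk vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue
    # vdb disk vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk
    # sda cdrom vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue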
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.751 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.752 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.759 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.833 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.834 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.834 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.835 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No VIF found with MAC fa:16:3e:3f:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.837 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Using config drive
Feb 25 12:30:08 compute-0 sudo[304594]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.873 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.900 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:30:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:30:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:30:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.932 244018 DEBUG oslo_concurrency.processutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:30:08 compute-0 nova_compute[244014]: 2026-02-25 12:30:08.966 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'keypairs' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:30:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:30:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:30:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:30:09 compute-0 sudo[304693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:30:09 compute-0 sudo[304693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:09 compute-0 sudo[304693]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:09 compute-0 sudo[304735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:30:09 compute-0 sudo[304735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:09 compute-0 ceph-mon[76335]: pgmap v1441: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 234 op/s
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3834581766' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3246071265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:30:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.340 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating config drive at /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.349 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkhvsrgt7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319225679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.485059663 +0000 UTC m=+0.079783543 container create 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.489 244018 DEBUG oslo_concurrency.processutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.494 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkhvsrgt7" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.529 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.438544214 +0000 UTC m=+0.033268154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.534 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:09 compute-0 systemd[1]: Started libpod-conmon-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope.
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.575 244018 DEBUG nova.compute.provider_tree [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.595 244018 DEBUG nova.scheduler.client.report [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.626 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.647900851 +0000 UTC m=+0.242624741 container init 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.651 244018 INFO nova.scheduler.client.report [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Deleted allocations for instance f7b18575-d1fc-423f-a596-8ca6d8ed08fa
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.656891896 +0000 UTC m=+0.251615786 container start 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:30:09 compute-0 relaxed_ptolemy[304814]: 167 167
Feb 25 12:30:09 compute-0 systemd[1]: libpod-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope: Deactivated successfully.
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.694 244018 DEBUG nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.696 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.696 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.697 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.697 244018 DEBUG nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.698 244018 WARNING nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state deleted and task_state None.
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.710500416 +0000 UTC m=+0.305224356 container attach 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 25 12:30:09 compute-0 podman[304775]: 2026-02-25 12:30:09.711370881 +0000 UTC m=+0.306094771 container died 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.716 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:09 compute-0 nova_compute[244014]: 2026-02-25 12:30:09.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 176 op/s
Feb 25 12:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-39ce674c49af47e01cf173d3e9ff2dc63879be75715e1bf81f3f8fee4d2429c9-merged.mount: Deactivated successfully.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.049 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.050 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting local config drive /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue because it was imported into RBD.
Feb 25 12:30:10 compute-0 kernel: tap197929cb-aa: entered promiscuous mode
Feb 25 12:30:10 compute-0 NetworkManager[49836]: <info>  [1772022610.1121] manager: (tap197929cb-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00656|binding|INFO|Claiming lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for this chassis.
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00657|binding|INFO|197929cb-aaa4-48f0-a831-7ac3f4ac5b37: Claiming fa:16:3e:3f:10:e8 10.100.0.4
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.120 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.123 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c bound to our chassis
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.127 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00658|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 ovn-installed in OVS
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00659|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 up in Southbound
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29edfadd-486c-4da2-84ae-7bee46631fcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 systemd-machined[210048]: New machine qemu-84-instance-00000045.
Feb 25 12:30:10 compute-0 systemd-udevd[304863]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:30:10 compute-0 NetworkManager[49836]: <info>  [1772022610.1665] device (tap197929cb-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:30:10 compute-0 NetworkManager[49836]: <info>  [1772022610.1672] device (tap197929cb-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:30:10 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000045.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.476 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.477 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.477 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.478 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.478 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.480 244018 INFO nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Terminating instance
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.481 244018 DEBUG nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:30:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3319225679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:10 compute-0 podman[304775]: 2026-02-25 12:30:10.601332218 +0000 UTC m=+1.196056098 container remove 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:30:10 compute-0 systemd[1]: libpod-conmon-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope: Deactivated successfully.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-deleted-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 WARNING nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 WARNING nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.
Feb 25 12:30:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:10 compute-0 kernel: tapc75a4bcb-92 (unregistering): left promiscuous mode
Feb 25 12:30:10 compute-0 NetworkManager[49836]: <info>  [1772022610.7996] device (tapc75a4bcb-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00660|binding|INFO|Releasing lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 from this chassis (sb_readonly=0)
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00661|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 down in Southbound
Feb 25 12:30:10 compute-0 ovn_controller[147040]: 2026-02-25T12:30:10Z|00662|binding|INFO|Removing iface tapc75a4bcb-92 ovn-installed in OVS
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.818 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:8c:25 10.100.0.12'], port_security=['fa:16:3e:03:8c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf261ccf-c216-4383-a22a-7f0553198152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=c75a4bcb-9292-41aa-b0c3-14b1433392e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.820 157129 INFO neutron.agent.ovn.metadata.agent [-] Port c75a4bcb-9292-41aa-b0c3-14b1433392e2 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.823 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aabe164c-44eb-4c66-9799-272e0e23f8cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.824 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 namespace which is not needed anymore
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000043.scope: Deactivated successfully.
Feb 25 12:30:10 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000043.scope: Consumed 12.463s CPU time.
Feb 25 12:30:10 compute-0 systemd-machined[210048]: Machine qemu-77-instance-00000043 terminated.
Feb 25 12:30:10 compute-0 podman[304904]: 2026-02-25 12:30:10.863810751 +0000 UTC m=+0.109691311 container create f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:10 compute-0 podman[304904]: 2026-02-25 12:30:10.79358154 +0000 UTC m=+0.039462150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.905 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 systemd[1]: Started libpod-conmon-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.918 244018 INFO nova.virt.libvirt.driver [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance destroyed successfully.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.918 244018 DEBUG nova.objects.instance [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.937 244018 DEBUG nova.virt.libvirt.vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:04Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.937 244018 DEBUG nova.network.os_vif_util [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.938 244018 DEBUG nova.network.os_vif_util [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.938 244018 DEBUG os_vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc75a4bcb-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.942 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:10 compute-0 nova_compute[244014]: 2026-02-25 12:30:10.959 244018 INFO os_vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92')
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.008 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for d945940d-a1b5-4a36-b980-efda3a9efda6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.008 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022611.008239, d945940d-a1b5-4a36-b980-efda3a9efda6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.009 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Resumed (Lifecycle Event)
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.012 244018 DEBUG nova.compute.manager [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.036 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.038 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:11 compute-0 podman[304904]: 2026-02-25 12:30:11.04892165 +0000 UTC m=+0.294802210 container init f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:30:11 compute-0 podman[304904]: 2026-02-25 12:30:11.059983693 +0000 UTC m=+0.305864253 container start f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:30:11 compute-0 podman[304904]: 2026-02-25 12:30:11.065797018 +0000 UTC m=+0.311677568 container attach f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.076 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022611.0104368, d945940d-a1b5-4a36-b980-efda3a9efda6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.077 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Started (Lifecycle Event)
Feb 25 12:30:11 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : haproxy version is 2.8.14-c23fe91
Feb 25 12:30:11 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : path to executable is /usr/sbin/haproxy
Feb 25 12:30:11 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [WARNING]  (302426) : Exiting Master process...
Feb 25 12:30:11 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [ALERT]    (302426) : Current worker (302428) exited with code 143 (Terminated)
Feb 25 12:30:11 compute-0 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [WARNING]  (302426) : All workers exited. Exiting... (0)
Feb 25 12:30:11 compute-0 systemd[1]: libpod-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope: Deactivated successfully.
Feb 25 12:30:11 compute-0 conmon[302401]: conmon e2300c096a2f5eef6b0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope/container/memory.events
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:11 compute-0 podman[304983]: 2026-02-25 12:30:11.100809611 +0000 UTC m=+0.171401220 container died e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.106 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e-userdata-shm.mount: Deactivated successfully.
Feb 25 12:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d4ba589ec8c9c7feb5a1c1945ac7be1440c19e55dcd78f4845f04574fea8475-merged.mount: Deactivated successfully.
Feb 25 12:30:11 compute-0 podman[304983]: 2026-02-25 12:30:11.151579401 +0000 UTC m=+0.222171000 container cleanup e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:30:11 compute-0 systemd[1]: libpod-conmon-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope: Deactivated successfully.
Feb 25 12:30:11 compute-0 podman[305034]: 2026-02-25 12:30:11.22912996 +0000 UTC m=+0.046897261 container remove e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.233 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f916d67-3fd2-4328-8c43-aadc19a76f3a]: (4, ('Wed Feb 25 12:30:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 (e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e)\ne2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e\nWed Feb 25 12:30:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 (e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e)\ne2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.235 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[912d6fdc-a1e0-4d2b-b435-f5dc66a88a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.236 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:11 compute-0 kernel: tap85b56c79-00: left promiscuous mode
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2411aeae-d619-4ed2-9729-035fceceebf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.265 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcca248-83d8-4c4a-9377-41be0dcf5b49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[871ac734-e478-44da-a289-bc1faefdb471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6414912b-993c-4fc9-8d56-4f38539d1b2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453785, 'reachable_time': 43175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305050, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d85b56c79\x2d01b6\x2d47e7\x2dab3b\x2d02e44acca3d3.mount: Deactivated successfully.
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.287 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:30:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.288 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[01015799-c10a-4e1d-b3f3-0e39502f07df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.338 244018 INFO nova.virt.libvirt.driver [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deleting instance files /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152_del
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.339 244018 INFO nova.virt.libvirt.driver [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deletion of /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152_del complete
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.402 244018 INFO nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 0.92 seconds to destroy the instance on the hypervisor.
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG oslo.service.loopingcall [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG nova.network.neutron [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:30:11 compute-0 ceph-mon[76335]: pgmap v1442: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 176 op/s
Feb 25 12:30:11 compute-0 thirsty_swanson[304976]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:30:11 compute-0 thirsty_swanson[304976]: --> All data devices are unavailable
Feb 25 12:30:11 compute-0 systemd[1]: libpod-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope: Deactivated successfully.
Feb 25 12:30:11 compute-0 podman[304904]: 2026-02-25 12:30:11.592367091 +0000 UTC m=+0.838247651 container died f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf-merged.mount: Deactivated successfully.
Feb 25 12:30:11 compute-0 nova_compute[244014]: 2026-02-25 12:30:11.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 408 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Feb 25 12:30:11 compute-0 podman[304904]: 2026-02-25 12:30:11.96298352 +0000 UTC m=+1.208864030 container remove f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:30:11 compute-0 systemd[1]: libpod-conmon-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope: Deactivated successfully.
Feb 25 12:30:12 compute-0 sudo[304735]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:12 compute-0 sudo[305081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:30:12 compute-0 sudo[305081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:12 compute-0 sudo[305081]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.099 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022597.098086, 8086400b-ac70-4c79-928b-4f1966084384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.099 244018 INFO nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Stopped (Lifecycle Event)
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.120 244018 DEBUG nova.compute.manager [None req-a6b66b3f-d954-4db5-9409-a940349a8506 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:12 compute-0 sudo[305106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:30:12 compute-0 sudo[305106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.203 244018 DEBUG nova.network.neutron [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.226 244018 INFO nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 0.82 seconds to deallocate network for instance.
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.281 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.281 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.301 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.319 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.319 244018 DEBUG nova.compute.provider_tree [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.340 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.363 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.420 244018 DEBUG oslo_concurrency.processutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.492399933 +0000 UTC m=+0.058233252 container create 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:30:12 compute-0 systemd[1]: Started libpod-conmon-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope.
Feb 25 12:30:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.466061706 +0000 UTC m=+0.031895116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.5779855 +0000 UTC m=+0.143818859 container init 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.58360632 +0000 UTC m=+0.149439629 container start 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.587214372 +0000 UTC m=+0.153047721 container attach 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:30:12 compute-0 nice_lovelace[305162]: 167 167
Feb 25 12:30:12 compute-0 systemd[1]: libpod-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope: Deactivated successfully.
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.588412586 +0000 UTC m=+0.154245895 container died 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3acee57314c787e20438ea5caca15a18f1dca625b53f986f8d8b1f400173a2-merged.mount: Deactivated successfully.
Feb 25 12:30:12 compute-0 podman[305144]: 2026-02-25 12:30:12.622649437 +0000 UTC m=+0.188482776 container remove 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:30:12 compute-0 systemd[1]: libpod-conmon-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope: Deactivated successfully.
Feb 25 12:30:12 compute-0 podman[305204]: 2026-02-25 12:30:12.788992724 +0000 UTC m=+0.058685745 container create 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:30:12 compute-0 systemd[1]: Started libpod-conmon-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope.
Feb 25 12:30:12 compute-0 podman[305204]: 2026-02-25 12:30:12.763334046 +0000 UTC m=+0.033027127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:12 compute-0 podman[305204]: 2026-02-25 12:30:12.884504003 +0000 UTC m=+0.154197064 container init 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:30:12 compute-0 podman[305204]: 2026-02-25 12:30:12.896778521 +0000 UTC m=+0.166471542 container start 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:30:12 compute-0 podman[305204]: 2026-02-25 12:30:12.900563928 +0000 UTC m=+0.170256949 container attach 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:30:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322769673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.969 244018 DEBUG oslo_concurrency.processutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:12 compute-0 nova_compute[244014]: 2026-02-25 12:30:12.976 244018 DEBUG nova.compute.provider_tree [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.001 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.052 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.054 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.056 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.057 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.057 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.058 244018 WARNING nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state deleted and task_state None.
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.058 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.059 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.059 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.060 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.060 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.061 244018 WARNING nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state deleted and task_state None.
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.061 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-deleted-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.073 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]: {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     "0": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "devices": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "/dev/loop3"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             ],
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_name": "ceph_lv0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_size": "21470642176",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "name": "ceph_lv0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "tags": {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_name": "ceph",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.crush_device_class": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.encrypted": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.objectstore": "bluestore",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_id": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.vdo": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.with_tpm": "0"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             },
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "vg_name": "ceph_vg0"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         }
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     ],
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     "1": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "devices": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "/dev/loop4"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             ],
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_name": "ceph_lv1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_size": "21470642176",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "name": "ceph_lv1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "tags": {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_name": "ceph",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.crush_device_class": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.encrypted": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.objectstore": "bluestore",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_id": "1",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.vdo": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.with_tpm": "0"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             },
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "vg_name": "ceph_vg1"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         }
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     ],
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     "2": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "devices": [
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "/dev/loop5"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             ],
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_name": "ceph_lv2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_size": "21470642176",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "name": "ceph_lv2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "tags": {
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.cluster_name": "ceph",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.crush_device_class": "",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.encrypted": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.objectstore": "bluestore",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osd_id": "2",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.vdo": "0",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:                 "ceph.with_tpm": "0"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             },
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "type": "block",
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:             "vg_name": "ceph_vg2"
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:         }
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]:     ]
Feb 25 12:30:13 compute-0 nifty_keldysh[305222]: }
Feb 25 12:30:13 compute-0 systemd[1]: libpod-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope: Deactivated successfully.
Feb 25 12:30:13 compute-0 podman[305204]: 2026-02-25 12:30:13.163870545 +0000 UTC m=+0.433563606 container died 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90-merged.mount: Deactivated successfully.
Feb 25 12:30:13 compute-0 podman[305204]: 2026-02-25 12:30:13.214233743 +0000 UTC m=+0.483926754 container remove 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:30:13 compute-0 systemd[1]: libpod-conmon-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope: Deactivated successfully.
Feb 25 12:30:13 compute-0 sudo[305106]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:13 compute-0 sudo[305245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:30:13 compute-0 sudo[305245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:13 compute-0 sudo[305245]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:13 compute-0 sudo[305270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:30:13 compute-0 sudo[305270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:13.458 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:13.459 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.464 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.467 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.468 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.468 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.469 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:13 compute-0 ceph-mon[76335]: pgmap v1443: 305 pgs: 305 active+clean; 408 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Feb 25 12:30:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2322769673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.589 244018 INFO nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Deleted allocations for instance bf261ccf-c216-4383-a22a-7f0553198152
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.676360808 +0000 UTC m=+0.046111589 container create a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:30:13 compute-0 systemd[1]: Started libpod-conmon-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope.
Feb 25 12:30:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.658010037 +0000 UTC m=+0.027760838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.763490749 +0000 UTC m=+0.133241570 container init a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:30:13 compute-0 nova_compute[244014]: 2026-02-25 12:30:13.767 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.771142405 +0000 UTC m=+0.140893176 container start a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.775068167 +0000 UTC m=+0.144818958 container attach a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:30:13 compute-0 bold_booth[305340]: 167 167
Feb 25 12:30:13 compute-0 systemd[1]: libpod-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope: Deactivated successfully.
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.777153936 +0000 UTC m=+0.146904687 container died a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-383781a54c6752887e248227683818866c6af9180adade514d8b66ed9dff04cd-merged.mount: Deactivated successfully.
Feb 25 12:30:13 compute-0 podman[305307]: 2026-02-25 12:30:13.828547793 +0000 UTC m=+0.198298554 container remove a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:30:13 compute-0 systemd[1]: libpod-conmon-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope: Deactivated successfully.
Feb 25 12:30:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 214 op/s
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.000904981 +0000 UTC m=+0.052375826 container create c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:30:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101531742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:14 compute-0 systemd[1]: Started libpod-conmon-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope.
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.071 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:13.978188217 +0000 UTC m=+0.029659082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:30:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.107622787 +0000 UTC m=+0.159093622 container init c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.114328137 +0000 UTC m=+0.165798972 container start c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.118968909 +0000 UTC m=+0.170439794 container attach c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.261 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.262 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.262 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.410 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.411 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3735MB free_disk=59.855410382151604GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.412 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.412 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:14.460 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.480 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d945940d-a1b5-4a36-b980-efda3a9efda6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.481 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.481 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.521 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4101531742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:14 compute-0 lvm[305479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:30:14 compute-0 lvm[305479]: VG ceph_vg0 finished
Feb 25 12:30:14 compute-0 lvm[305481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:30:14 compute-0 lvm[305481]: VG ceph_vg1 finished
Feb 25 12:30:14 compute-0 lvm[305483]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:30:14 compute-0 lvm[305483]: VG ceph_vg2 finished
Feb 25 12:30:14 compute-0 nova_compute[244014]: 2026-02-25 12:30:14.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:14 compute-0 nostalgic_taussig[305380]: {}
Feb 25 12:30:14 compute-0 systemd[1]: libpod-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Deactivated successfully.
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.852246042 +0000 UTC m=+0.903716907 container died c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:30:14 compute-0 systemd[1]: libpod-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Consumed 1.030s CPU time.
Feb 25 12:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b-merged.mount: Deactivated successfully.
Feb 25 12:30:14 compute-0 podman[305363]: 2026-02-25 12:30:14.911223825 +0000 UTC m=+0.962694690 container remove c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:30:14 compute-0 sudo[305270]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:14 compute-0 systemd[1]: libpod-conmon-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Deactivated successfully.
Feb 25 12:30:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:30:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:30:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:15 compute-0 sudo[305499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:30:15 compute-0 sudo[305499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:30:15 compute-0 sudo[305499]: pam_unix(sudo:session): session closed for user root
Feb 25 12:30:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633452403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.097 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.104 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.219 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.409 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.410 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:30:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 23K writes, 94K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 23K writes, 8073 syncs, 2.96 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.83 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4632 syncs, 2.56 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:30:15 compute-0 nova_compute[244014]: 2026-02-25 12:30:15.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:16 compute-0 nova_compute[244014]: 2026-02-25 12:30:16.093 244018 DEBUG nova.compute.manager [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:16 compute-0 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG nova.compute.manager [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:16 compute-0 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:16 compute-0 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:16 compute-0 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Feb 25 12:30:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:16 compute-0 ceph-mon[76335]: pgmap v1444: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 214 op/s
Feb 25 12:30:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:30:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1633452403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:17 compute-0 nova_compute[244014]: 2026-02-25 12:30:17.543 244018 DEBUG nova.compute.manager [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:17 compute-0 nova_compute[244014]: 2026-02-25 12:30:17.544 244018 DEBUG nova.compute.manager [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:17 compute-0 nova_compute[244014]: 2026-02-25 12:30:17.545 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:17 compute-0 ceph-mon[76335]: pgmap v1445: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Feb 25 12:30:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Feb 25 12:30:18 compute-0 nova_compute[244014]: 2026-02-25 12:30:18.409 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:18 compute-0 ceph-mon[76335]: pgmap v1446: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Feb 25 12:30:18 compute-0 nova_compute[244014]: 2026-02-25 12:30:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:30:18 compute-0 nova_compute[244014]: 2026-02-25 12:30:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:30:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:30:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 25K writes, 100K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 25K writes, 8835 syncs, 2.92 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.52 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 4809 syncs, 2.52 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:30:19 compute-0 nova_compute[244014]: 2026-02-25 12:30:19.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 160 op/s
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.373 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.373 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.412 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.413 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.413 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:20 compute-0 nova_compute[244014]: 2026-02-25 12:30:20.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:21 compute-0 nova_compute[244014]: 2026-02-25 12:30:21.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:21 compute-0 ceph-mon[76335]: pgmap v1447: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 160 op/s
Feb 25 12:30:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 490 KiB/s wr, 174 op/s
Feb 25 12:30:22 compute-0 nova_compute[244014]: 2026-02-25 12:30:22.324 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022607.3221087, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:22 compute-0 nova_compute[244014]: 2026-02-25 12:30:22.324 244018 INFO nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Stopped (Lifecycle Event)
Feb 25 12:30:22 compute-0 nova_compute[244014]: 2026-02-25 12:30:22.403 244018 DEBUG nova.compute.manager [None req-822b9d25-2dc2-4944-8032-363fdbdfb602 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:23 compute-0 ceph-mon[76335]: pgmap v1448: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 490 KiB/s wr, 174 op/s
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.304 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.305 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.321 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:23 compute-0 NetworkManager[49836]: <info>  [1772022623.8734] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Feb 25 12:30:23 compute-0 NetworkManager[49836]: <info>  [1772022623.8749] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Feb 25 12:30:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 158 op/s
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:23 compute-0 nova_compute[244014]: 2026-02-25 12:30:23.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.582 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.582 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.605 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.681 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.681 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.690 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.690 244018 INFO nova.compute.claims [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.751 244018 DEBUG nova.compute.manager [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.752 244018 DEBUG nova.compute.manager [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.752 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.753 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.753 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:24 compute-0 nova_compute[244014]: 2026-02-25 12:30:24.844 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:30:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.6 total, 600.0 interval
                                           Cumulative writes: 20K writes, 83K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 20K writes, 6665 syncs, 3.07 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9652 writes, 39K keys, 9652 commit groups, 1.0 writes per commit group, ingest: 43.03 MB, 0.07 MB/s
                                           Interval WAL: 9652 writes, 3681 syncs, 2.62 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:30:25 compute-0 ceph-mon[76335]: pgmap v1449: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 158 op/s
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245090991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.428 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.433 244018 DEBUG nova.compute.provider_tree [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.452 244018 DEBUG nova.scheduler.client.report [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.474 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.475 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.519 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.520 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.542 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.560 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.655 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.657 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.658 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating image(s)
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.689 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.719 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.751 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.756 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.823 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.824 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.825 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.826 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.860 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.866 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5732c5fb-59b3-4590-b65a-a696b9c90152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.913 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022610.9121666, bf261ccf-c216-4383-a22a-7f0553198152 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.914 244018 INFO nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Stopped (Lifecycle Event)
Feb 25 12:30:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 11 KiB/s wr, 65 op/s
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.938 244018 DEBUG nova.compute.manager [None req-222c7227-50a8-4c19-a01c-f6bb22bec95a - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:25 compute-0 nova_compute[244014]: 2026-02-25 12:30:25.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/245090991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.102 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5732c5fb-59b3-4590-b65a-a696b9c90152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.186 244018 DEBUG nova.policy [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63928451c6a4137bb65e25561326aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.196 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.285 244018 DEBUG nova.objects.instance [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.301 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Ensure instance console log exists: /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:26 compute-0 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:27 compute-0 ceph-mon[76335]: pgmap v1450: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 11 KiB/s wr, 65 op/s
Feb 25 12:30:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 12:30:27 compute-0 nova_compute[244014]: 2026-02-25 12:30:27.173 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:27 compute-0 nova_compute[244014]: 2026-02-25 12:30:27.174 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:27 compute-0 nova_compute[244014]: 2026-02-25 12:30:27.198 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 12:30:28 compute-0 nova_compute[244014]: 2026-02-25 12:30:28.306 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Successfully created port: d09a3a83-0b97-4635-9188-5ab61ccc4626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:30:29 compute-0 ceph-mon[76335]: pgmap v1451: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 12:30:29 compute-0 nova_compute[244014]: 2026-02-25 12:30:29.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:30:30 compute-0 ceph-mon[76335]: pgmap v1452: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:30:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:30:30
Feb 25 12:30:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:30:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:30:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Feb 25 12:30:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:30:30 compute-0 nova_compute[244014]: 2026-02-25 12:30:30.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.156 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Successfully updated port: d09a3a83-0b97-4635-9188-5ab61ccc4626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.176 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.177 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.177 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.368 244018 DEBUG nova.compute.manager [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.369 244018 DEBUG nova.compute.manager [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing instance network info cache due to event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.369 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.469 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:30:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:30:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:30:31 compute-0 nova_compute[244014]: 2026-02-25 12:30:31.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.918 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.967 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.968 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance network_info: |[{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.969 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.970 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.976 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start _get_guest_xml network_info=[{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:32 compute-0 ceph-mon[76335]: pgmap v1453: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.983 244018 WARNING nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.990 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:32 compute-0 nova_compute[244014]: 2026-02-25 12:30:32.991 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.004 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.005 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.006 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.006 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.007 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.007 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.009 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.009 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.015 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758504434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.528 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.559 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.564 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.601 244018 DEBUG nova.compute.manager [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.601 244018 DEBUG nova.compute.manager [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:33 compute-0 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 12:30:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2758504434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407926462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.124 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.125 244018 DEBUG nova.virt.libvirt.vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-187776660',display_name='tempest-ServerActionsTestOtherA-server-187776660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-187776660',id=70,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+oU121pR6KUJ0WVaybHO4eKD7+IjTA6CrXc/3HOS7yAeJNztQB4sxwZQLs0fjSQoqiFpts6gDtbDhfFq9vINorEApwAS8sG60gm5VPKd60x5tzDBbplVpTIhXJOJeGqw==',key_name='tempest-keypair-2086603261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-0x55w3gp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b63928451c6a4137bb65e25561326aff',uuid=5732c5fb-59b3-4590-b65a-a696b9c90152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.125 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.126 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
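
nova_to_osvif_vif converts nova's own VIF dict (the "Converting VIF ..." line) into an os-vif versioned object before handing it to the os-vif library. A hedged sketch of building the equivalent object by hand; the field values are copied from the logged repr, and constructing VIFOpenVSwitch directly like this is illustrative rather than nova's actual code path:

    # Hedged sketch: the object from the "Converted object" line, built
    # directly from the fields shown there.
    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='d09a3a83-0b97-4635-9188-5ab61ccc4626',
        address='fa:16:3e:23:63:29',
        bridge_name='br-int',
        vif_name='tapd09a3a83-0b',
        has_traffic_filtering=True,
        preserve_on_delete=False)
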
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.127 244018 DEBUG nova.objects.instance [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.142 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <uuid>5732c5fb-59b3-4590-b65a-a696b9c90152</uuid>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <name>instance-00000046</name>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherA-server-187776660</nova:name>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:32</nova:creationTime>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <nova:port uuid="d09a3a83-0b97-4635-9188-5ab61ccc4626">
Feb 25 12:30:34 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <system>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="serial">5732c5fb-59b3-4590-b65a-a696b9c90152</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="uuid">5732c5fb-59b3-4590-b65a-a696b9c90152</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </system>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <os>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </os>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <features>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </features>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5732c5fb-59b3-4590-b65a-a696b9c90152_disk">
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config">
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:23:63:29"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <target dev="tapd09a3a83-0b"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/console.log" append="off"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <video>
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </video>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:30:34 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:30:34 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:30:34 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:30:34 compute-0 nova_compute[244014]: </domain>
Feb 25 12:30:34 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Preparing to wait for external event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
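
The acquire/release pair above is oslo.concurrency's lockutils guarding nova's per-instance event table: the lock name is the instance UUID plus "-events", and _create_or_get_event runs entirely inside it. A minimal sketch of the same primitive (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('5732c5fb-59b3-4590-b65a-a696b9c90152-events')
    def _create_or_get_event():
        # runs with the named lock held, mirroring the
        # "acquired ... waited 0.000s" / "released ... held 0.000s" pair
        ...
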
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.145 244018 DEBUG nova.virt.libvirt.vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-187776660',display_name='tempest-ServerActionsTestOtherA-server-187776660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-187776660',id=70,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+oU121pR6KUJ0WVaybHO4eKD7+IjTA6CrXc/3HOS7yAeJNztQB4sxwZQLs0fjSQoqiFpts6gDtbDhfFq9vINorEApwAS8sG60gm5VPKd60x5tzDBbplVpTIhXJOJeGqw==',key_name='tempest-keypair-2086603261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-0x55w3gp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b63928451c6a4137bb65e25561326aff',uuid=5732c5fb-59b3-4590-b65a-a696b9c90152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.145 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.146 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.146 244018 DEBUG os_vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.153 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd09a3a83-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.153 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd09a3a83-0b, col_values=(('external_ids', {'iface-id': 'd09a3a83-0b97-4635-9188-5ab61ccc4626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:63:29', 'vm-uuid': '5732c5fb-59b3-4590-b65a-a696b9c90152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
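
The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) are the library form of a single ovs-vsctl operation: attach the tap device to br-int and stamp it with the external_ids that let ovn-controller match it to the Neutron port. A hedged CLI equivalent with values copied from the logged transaction; nova itself speaks OVSDB natively through ovsdbapp rather than shelling out:

    import subprocess

    # CLI equivalent of the logged OVSDB transaction.
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapd09a3a83-0b',
        '--', 'set', 'Interface', 'tapd09a3a83-0b',
        'external_ids:iface-id=d09a3a83-0b97-4635-9188-5ab61ccc4626',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:23:63:29',
        'external_ids:vm-uuid=5732c5fb-59b3-4590-b65a-a696b9c90152',
    ], check=True)
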
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:34 compute-0 NetworkManager[49836]: <info>  [1772022634.1562] manager: (tapd09a3a83-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.162 244018 INFO os_vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b')
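
The plug itself goes through os-vif's public entry points (the plug logged at os_vif/__init__.py:76). A minimal sketch; building InstanceInfo by hand like this is an assumption, since nova derives it from its Instance object:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()
    info = InstanceInfo(uuid='5732c5fb-59b3-4590-b65a-a696b9c90152',
                        name='instance-00000046')
    os_vif.plug(vif, info)  # vif = the VIFOpenVSwitch object logged above
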
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.206 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:23:63:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Using config drive
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.227 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:34 compute-0 nova_compute[244014]: 2026-02-25 12:30:34.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:34 compute-0 ceph-mon[76335]: pgmap v1454: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 12:30:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3407926462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.212 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating config drive at /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.217 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg5mi_yur execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.356 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg5mi_yur" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
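
The config drive is assembled locally: nova writes the metadata tree into a temp directory and packs it into an ISO9660 image labelled config-2. The invocation reconstructed as a subprocess call, with arguments from the logged command line (processutils logs argv joined by spaces, so the -publisher value is a single argument here):

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpg5mi_yur',  # temp dir holding the metadata tree
    ], check=True)
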
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.381 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.385 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.533 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.534 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Deleting local config drive /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config because it was imported into RBD.
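
With Ceph-backed storage the ISO does not stay on local disk: it is imported into the vms pool as <uuid>_disk.config, matching the rbd source in the guest XML, and the local copy is removed. A sketch mirroring the two preceding log lines:

    import os
    import subprocess

    base = '/var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152'
    subprocess.run([
        'rbd', 'import', '--pool', 'vms', base + '/disk.config',
        '5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(base + '/disk.config')  # "Deleting local config drive ..."
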
Feb 25 12:30:35 compute-0 kernel: tapd09a3a83-0b: entered promiscuous mode
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.5709] manager: (tapd09a3a83-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Feb 25 12:30:35 compute-0 ovn_controller[147040]: 2026-02-25T12:30:35Z|00663|binding|INFO|Claiming lport d09a3a83-0b97-4635-9188-5ab61ccc4626 for this chassis.
Feb 25 12:30:35 compute-0 ovn_controller[147040]: 2026-02-25T12:30:35Z|00664|binding|INFO|d09a3a83-0b97-4635-9188-5ab61ccc4626: Claiming fa:16:3e:23:63:29 10.100.0.11
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.581 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:63:29 10.100.0.11'], port_security=['fa:16:3e:23:63:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5732c5fb-59b3-4590-b65a-a696b9c90152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b1ce41f-d52c-4d7a-8899-239050236d80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d09a3a83-0b97-4635-9188-5ab61ccc4626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d09a3a83-0b97-4635-9188-5ab61ccc4626 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.583 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd2cf66-ab90-44bb-860d-f156de9f5c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.591 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd796561-b1 in ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
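
Provisioning metadata for the network means giving it an isolated namespace (ovnmeta-<network-uuid>) with a veth pair, one end kept on the host for br-int and the peer moved inside. A hedged CLI-level sketch of the plumbing implied by the log; the agent actually does this with pyroute2 behind oslo.privsep (hence the privsep replies around these lines), not by shelling out:

    import subprocess

    ns = 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f'
    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    subprocess.run(['ip', 'link', 'add', 'tapcd796561-b0', 'type', 'veth',
                    'peer', 'name', 'tapcd796561-b1'], check=True)
    subprocess.run(['ip', 'link', 'set', 'tapcd796561-b1', 'netns', ns],
                   check=True)
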
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.593 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd796561-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.593 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2ae3e-7960-4aa3-bde9-0f0379babb18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8212a547-132d-4bcc-a835-d4c60df58274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.601 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc9428f-08ff-43a3-a58e-66c8f4d11a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000046.
Feb 25 12:30:35 compute-0 systemd-machined[210048]: New machine qemu-85-instance-00000046.
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[070674f2-a82d-4e6e-8211-122f5a6e9b35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 systemd-udevd[305878]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ovn_controller[147040]: 2026-02-25T12:30:35Z|00665|binding|INFO|Setting lport d09a3a83-0b97-4635-9188-5ab61ccc4626 ovn-installed in OVS
Feb 25 12:30:35 compute-0 ovn_controller[147040]: 2026-02-25T12:30:35Z|00666|binding|INFO|Setting lport d09a3a83-0b97-4635-9188-5ab61ccc4626 up in Southbound
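
At this point ovn-controller has claimed the logical port for this chassis, set ovn-installed on the OVS interface, and flipped the Port_Binding row to up in the southbound database. A hedged way to confirm the binding from the node, assuming default ovn-sbctl connection settings:

    import subprocess

    # Inspect the Port_Binding row ovn-controller just updated; the
    # chassis and up columns should now reflect compute-0.
    subprocess.run([
        'ovn-sbctl', 'find', 'Port_Binding',
        'logical_port=d09a3a83-0b97-4635-9188-5ab61ccc4626'], check=True)
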
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.6418] device (tapd09a3a83-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.6429] device (tapd09a3a83-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.643 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updated VIF entry in instance network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.644 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.652 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[693c0485-3a6e-46b2-bfd5-3b4bcb7ddb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 systemd-udevd[305888]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.656 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b19b827-cee7-4e0f-b149-b913895ce259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.6583] manager: (tapcd796561-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.669 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.680 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45064a85-b332-4771-9bc0-e99f9ea285fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.683 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1c54e6-19c3-4111-930a-c70e6c28a038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 podman[305850]: 2026-02-25 12:30:35.686009464 +0000 UTC m=+0.087184754 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:30:35 compute-0 podman[305849]: 2026-02-25 12:30:35.686138487 +0000 UTC m=+0.088243483 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.6979] device (tapcd796561-b0): carrier: link connected
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.701 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ef721427-80f9-4986-a093-c9e958c972c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c7367-452f-4951-b573-2139b86ba278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305927, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76a247fd-c00a-4226-9854-494dbceacf0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:48dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460510, 'tstamp': 460510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305928, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8e090-b78c-4522-9f18-51f40f25a68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305929, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.750 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a00ef26e-d642-43f3-87cf-7d9dba788d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf961ab-91c2-4b98-918b-a4a4a3cbcaaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.780 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:35 compute-0 NetworkManager[49836]: <info>  [1772022635.7821] manager: (tapcd796561-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 kernel: tapcd796561-b0: entered promiscuous mode
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.784 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ovn_controller[147040]: 2026-02-25T12:30:35Z|00667|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.786 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37e46f7f-8e15-490b-af2c-a46e0e186bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.788 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:30:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.788 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'env', 'PROCESS_TAG=haproxy-cd796561-bd80-4610-8abc-655ee9e3676f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd796561-bd80-4610-8abc-655ee9e3676f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.929 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022635.9287539, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.930 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Started (Lifecycle Event)
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.953 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.957 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022635.9289703, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.957 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Paused (Lifecycle Event)
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.975 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.978 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:35 compute-0 nova_compute[244014]: 2026-02-25 12:30:35.999 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:30:36 compute-0 podman[306003]: 2026-02-25 12:30:36.113052743 +0000 UTC m=+0.057162101 container create 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:30:36 compute-0 systemd[1]: Started libpod-conmon-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0.scope.
Feb 25 12:30:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:36 compute-0 podman[306003]: 2026-02-25 12:30:36.078893445 +0000 UTC m=+0.023002843 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f940080901bec8896734a9c7645d302a54dac0d4387fd56144cab0605773a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:36 compute-0 podman[306003]: 2026-02-25 12:30:36.193505864 +0000 UTC m=+0.137615282 container init 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:30:36 compute-0 podman[306003]: 2026-02-25 12:30:36.20078854 +0000 UTC m=+0.144897928 container start 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:30:36 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : New worker (306025) forked
Feb 25 12:30:36 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : Loading success.
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.603 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.604 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.634 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.712 244018 DEBUG nova.compute.manager [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.713 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.713 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.714 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.714 244018 DEBUG nova.compute.manager [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Processing event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.715 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.720 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022636.720331, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.720 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Resumed (Lifecycle Event)
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.722 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.726 244018 INFO nova.virt.libvirt.driver [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance spawned successfully.
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.726 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.747 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.752 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.752 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.753 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.753 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.754 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.754 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.757 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.795 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.821 244018 INFO nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 11.16 seconds to spawn the instance on the hypervisor.
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.821 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.887 244018 INFO nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 12.24 seconds to build instance.
Feb 25 12:30:36 compute-0 nova_compute[244014]: 2026-02-25 12:30:36.907 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:37 compute-0 ceph-mon[76335]: pgmap v1455: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 25 12:30:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.272 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.273 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.273 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.274 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.274 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.276 244018 INFO nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Terminating instance
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.277 244018 DEBUG nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:30:38 compute-0 kernel: tap197929cb-aa (unregistering): left promiscuous mode
Feb 25 12:30:38 compute-0 NetworkManager[49836]: <info>  [1772022638.3460] device (tap197929cb-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:30:38 compute-0 ovn_controller[147040]: 2026-02-25T12:30:38Z|00668|binding|INFO|Releasing lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 from this chassis (sb_readonly=0)
Feb 25 12:30:38 compute-0 ovn_controller[147040]: 2026-02-25T12:30:38Z|00669|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 down in Southbound
Feb 25 12:30:38 compute-0 ovn_controller[147040]: 2026-02-25T12:30:38Z|00670|binding|INFO|Removing iface tap197929cb-aa ovn-installed in OVS
Feb 25 12:30:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.362 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.363 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis
Feb 25 12:30:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.365 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:30:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.366 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50c2b022-4cfd-4da1-9320-92e6364a6883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:38 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 25 12:30:38 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000045.scope: Consumed 12.983s CPU time.
Feb 25 12:30:38 compute-0 systemd-machined[210048]: Machine qemu-84-instance-00000045 terminated.
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.519 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.520 244018 DEBUG nova.objects.instance [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'resources' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.556 244018 DEBUG nova.virt.libvirt.vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:11Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.557 244018 DEBUG nova.network.os_vif_util [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.559 244018 DEBUG nova.network.os_vif_util [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.559 244018 DEBUG os_vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.562 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap197929cb-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.570 244018 INFO os_vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa')
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.934 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.935 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.935 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.936 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.936 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] No waiting events found dispatching network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 WARNING nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received unexpected event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 for instance with vm_state active and task_state None.
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:38 compute-0 nova_compute[244014]: 2026-02-25 12:30:38.939 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:30:39 compute-0 ceph-mon[76335]: pgmap v1456: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.098 244018 INFO nova.virt.libvirt.driver [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting instance files /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6_del
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.099 244018 INFO nova.virt.libvirt.driver [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deletion of /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6_del complete
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.155 244018 INFO nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 0.88 seconds to destroy the instance on the hypervisor.
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.155 244018 DEBUG oslo.service.loopingcall [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.156 244018 DEBUG nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.156 244018 DEBUG nova.network.neutron [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:30:39 compute-0 nova_compute[244014]: 2026-02-25 12:30:39.777 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 17 KiB/s wr, 16 op/s
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:40 compute-0 NetworkManager[49836]: <info>  [1772022640.1790] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Feb 25 12:30:40 compute-0 NetworkManager[49836]: <info>  [1772022640.1798] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.208 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:40 compute-0 ovn_controller[147040]: 2026-02-25T12:30:40Z|00671|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.501 244018 DEBUG nova.network.neutron [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.520 244018 INFO nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 1.36 seconds to deallocate network for instance.
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.561 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.561 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:40 compute-0 nova_compute[244014]: 2026-02-25 12:30:40.641 244018 DEBUG oslo_concurrency.processutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:41 compute-0 ceph-mon[76335]: pgmap v1457: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 17 KiB/s wr, 16 op/s
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.055 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.056 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.058 244018 WARNING nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state deleted and task_state None.
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.058 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-deleted-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.059 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.059 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing instance network info cache due to event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138094607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.231 244018 DEBUG oslo_concurrency.processutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.239 244018 DEBUG nova.compute.provider_tree [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.275 244018 DEBUG nova.scheduler.client.report [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.309 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.348 244018 INFO nova.scheduler.client.report [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Deleted allocations for instance d945940d-a1b5-4a36-b980-efda3a9efda6
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.420 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.659 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.660 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.678 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.760 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.768 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.769 244018 INFO nova.compute.claims [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:41 compute-0 nova_compute[244014]: 2026-02-25 12:30:41.915 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 271 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 83 op/s
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1138094607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009143007851876558 of space, bias 1.0, pg target 0.2742902355562967 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024925910720929524 of space, bias 1.0, pg target 0.7477773216278857 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.641567330827097e-07 of space, bias 4.0, pg target 0.0010369880796992515 quantized to 16 (current 16)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:30:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:30:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683198314' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.484 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.491 244018 DEBUG nova.compute.provider_tree [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.525 244018 DEBUG nova.scheduler.client.report [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.555 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.556 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.615 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.616 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.644 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.667 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.756 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updated VIF entry in instance network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.757 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.784 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.788 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.790 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.791 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating image(s)
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.823 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.859 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.890 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.894 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.926 244018 DEBUG nova.policy [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f34eea4e6284ff7af83727c45d504ae', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f3f7599abe54e879797365670ae88f0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.979 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.979 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.980 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:42 compute-0 nova_compute[244014]: 2026-02-25 12:30:42.980 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.003 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.006 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 25eece29-8689-47a8-b930-0492bf528de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:43 compute-0 ceph-mon[76335]: pgmap v1458: 305 pgs: 305 active+clean; 271 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 83 op/s
Feb 25 12:30:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/683198314' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.278 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 25eece29-8689-47a8-b930-0492bf528de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.350 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] resizing rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.448 244018 DEBUG nova.objects.instance [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.466 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.467 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Ensure instance console log exists: /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.467 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.468 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.468 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:43 compute-0 nova_compute[244014]: 2026-02-25 12:30:43.797 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Successfully created port: 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:30:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.696 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Successfully updated port: 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.712 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.713 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquired lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.713 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG nova.compute.manager [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-changed-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG nova.compute.manager [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Refreshing instance network info cache due to event network-changed-8c2528a5-82a5-4855-b5eb-dd3a5eba5030. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:44 compute-0 ovn_controller[147040]: 2026-02-25T12:30:44Z|00672|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.858 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:44 compute-0 nova_compute[244014]: 2026-02-25 12:30:44.892 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:30:45 compute-0 ceph-mon[76335]: pgmap v1459: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 12:30:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.187 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.216 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Releasing lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.217 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance network_info: |[{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.217 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.218 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Refreshing network info cache for port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.222 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start _get_guest_xml network_info=[{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.229 244018 WARNING nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.243 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.244 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.250 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.251 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.259 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563724747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.919 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.938 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:46 compute-0 nova_compute[244014]: 2026-02-25 12:30:46.941 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:47 compute-0 ceph-mon[76335]: pgmap v1460: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 12:30:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2563724747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:47 compute-0 ovn_controller[147040]: 2026-02-25T12:30:47Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:63:29 10.100.0.11
Feb 25 12:30:47 compute-0 ovn_controller[147040]: 2026-02-25T12:30:47Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:63:29 10.100.0.11
Feb 25 12:30:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035742841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.473 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.476 244018 DEBUG nova.virt.libvirt.vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:42Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.477 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.479 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.481 244018 DEBUG nova.objects.instance [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.498 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <uuid>25eece29-8689-47a8-b930-0492bf528de5</uuid>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <name>instance-00000047</name>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerTagsTestJSON-server-910589157</nova:name>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:46</nova:creationTime>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:user uuid="7f34eea4e6284ff7af83727c45d504ae">tempest-ServerTagsTestJSON-1961555021-project-member</nova:user>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:project uuid="1f3f7599abe54e879797365670ae88f0">tempest-ServerTagsTestJSON-1961555021</nova:project>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <nova:port uuid="8c2528a5-82a5-4855-b5eb-dd3a5eba5030">
Feb 25 12:30:47 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <system>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="serial">25eece29-8689-47a8-b930-0492bf528de5</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="uuid">25eece29-8689-47a8-b930-0492bf528de5</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </system>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <os>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </os>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <features>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </features>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/25eece29-8689-47a8-b930-0492bf528de5_disk">
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/25eece29-8689-47a8-b930-0492bf528de5_disk.config">
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a1:57:14"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <target dev="tap8c2528a5-82"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/console.log" append="off"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <video>
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </video>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:30:47 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:30:47 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:30:47 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:30:47 compute-0 nova_compute[244014]: </domain>
Feb 25 12:30:47 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.501 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Preparing to wait for external event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.503 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.503 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.504 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.505 244018 DEBUG nova.virt.libvirt.vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:42Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:30:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.505 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.506 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.507 244018 DEBUG os_vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.509 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.509 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c2528a5-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.515 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c2528a5-82, col_values=(('external_ids', {'iface-id': '8c2528a5-82a5-4855-b5eb-dd3a5eba5030', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:57:14', 'vm-uuid': '25eece29-8689-47a8-b930-0492bf528de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:47 compute-0 NetworkManager[49836]: <info>  [1772022647.5183] manager: (tap8c2528a5-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.521 244018 INFO os_vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82')
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.578 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.578 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.579 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No VIF found with MAC fa:16:3e:a1:57:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.579 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Using config drive
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.598 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.873 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updated VIF entry in instance network info cache for port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.874 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.895 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.941 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating config drive at /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config
Feb 25 12:30:47 compute-0 nova_compute[244014]: 2026-02-25 12:30:47.945 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmph0dwn_9c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.071 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmph0dwn_9c" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3035742841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:30:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.110 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.115 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config 25eece29-8689-47a8-b930-0492bf528de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.255 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config 25eece29-8689-47a8-b930-0492bf528de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.256 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deleting local config drive /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config because it was imported into RBD.
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.2984] manager: (tap8c2528a5-82): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Feb 25 12:30:48 compute-0 kernel: tap8c2528a5-82: entered promiscuous mode
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 ovn_controller[147040]: 2026-02-25T12:30:48Z|00673|binding|INFO|Claiming lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for this chassis.
Feb 25 12:30:48 compute-0 ovn_controller[147040]: 2026-02-25T12:30:48Z|00674|binding|INFO|8c2528a5-82a5-4855-b5eb-dd3a5eba5030: Claiming fa:16:3e:a1:57:14 10.100.0.7
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.311 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:57:14 10.100.0.7'], port_security=['fa:16:3e:a1:57:14 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25eece29-8689-47a8-b930-0492bf528de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f3f7599abe54e879797365670ae88f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8caf3e4-1d71-4f11-8550-c5b31d856ffc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94153ad-cd99-45e9-920b-47039b1f034f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8c2528a5-82a5-4855-b5eb-dd3a5eba5030) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:48 compute-0 ovn_controller[147040]: 2026-02-25T12:30:48Z|00675|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 ovn-installed in OVS
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.313 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 in datapath 748b52bb-559a-4d75-b4f7-a397fd5e7e77 bound to our chassis
Feb 25 12:30:48 compute-0 ovn_controller[147040]: 2026-02-25T12:30:48Z|00676|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 up in Southbound
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 748b52bb-559a-4d75-b4f7-a397fd5e7e77
Feb 25 12:30:48 compute-0 systemd-udevd[306411]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.325 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae097a23-e25f-42f5-95e5-99dd8eff81af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.326 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap748b52bb-51 in ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.328 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap748b52bb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9c9956-4f8d-46ba-8bae-2acdb193a9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.330 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b90feb4-37af-40c6-ad83-db28b042b26a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.3364] device (tap8c2528a5-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.3369] device (tap8c2528a5-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:30:48 compute-0 systemd-machined[210048]: New machine qemu-86-instance-00000047.
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.345 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[409d7142-a740-4b48-8350-b4b685f2e700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000047.
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.360 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a055118c-3ee4-40bd-9d81-70ccfe5183a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.391 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb4a9c5-d555-4d78-bb69-23ef691e2f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.399 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5edb7c5d-060c-4ac5-af79-a2b899cc5292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.4005] manager: (tap748b52bb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9f1c14-e4bc-4bb7-86fc-ece2c51ab008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.429 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4934d40f-92a6-4b0e-bd8a-c7e6c9d59659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.4509] device (tap748b52bb-50): carrier: link connected
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[30472656-2af6-4d7b-9cc4-21e3a5635bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9565e0a7-069f-4c58-a164-96dd3b2496e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap748b52bb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:f3:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461785, 'reachable_time': 25756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306447, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.490 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d725c026-f197-4590-aa08-36776ff8cb79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:f364'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461785, 'tstamp': 461785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306448, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07b0d5b6-983d-4551-9346-3c7ae3eecd73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap748b52bb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:f3:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461785, 'reachable_time': 25756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306449, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12dd21ba-b1b4-430b-b124-cc014d646c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.586 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88f01dde-c1b5-4917-a1f7-969dac8fa7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap748b52bb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap748b52bb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 NetworkManager[49836]: <info>  [1772022648.5923] manager: (tap748b52bb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Feb 25 12:30:48 compute-0 kernel: tap748b52bb-50: entered promiscuous mode
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.596 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap748b52bb-50, col_values=(('external_ids', {'iface-id': '8c2f9c93-8b5a-45ae-baad-19b412bcdcc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 ovn_controller[147040]: 2026-02-25T12:30:48Z|00677|binding|INFO|Releasing lport 8c2f9c93-8b5a-45ae-baad-19b412bcdcc8 from this chassis (sb_readonly=0)
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.599 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.600 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13e36a55-6d42-4f36-a3ca-fe3c9b292352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.601 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-748b52bb-559a-4d75-b4f7-a397fd5e7e77
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 748b52bb-559a-4d75-b4f7-a397fd5e7e77
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:30:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.602 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'env', 'PROCESS_TAG=haproxy-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/748b52bb-559a-4d75-b4f7-a397fd5e7e77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG nova.compute.manager [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.792 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.792 244018 DEBUG nova.compute.manager [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Processing event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.896 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.89564, 25eece29-8689-47a8-b930-0492bf528de5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.896 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Started (Lifecycle Event)
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.898 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.901 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.904 244018 INFO nova.virt.libvirt.driver [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance spawned successfully.
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.904 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.930 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.931 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.931 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.935 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:48 compute-0 nova_compute[244014]: 2026-02-25 12:30:48.938 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:48 compute-0 podman[306523]: 2026-02-25 12:30:48.972761221 +0000 UTC m=+0.053872529 container create 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:30:49 compute-0 systemd[1]: Started libpod-conmon-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope.
Feb 25 12:30:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb79d7d65570a7eb27f03f2a9bdb296e9b93b584c351901416e7c35c831aebbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:30:49 compute-0 podman[306523]: 2026-02-25 12:30:48.949762978 +0000 UTC m=+0.030874306 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:30:49 compute-0 podman[306523]: 2026-02-25 12:30:49.05629941 +0000 UTC m=+0.137410718 container init 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:30:49 compute-0 podman[306523]: 2026-02-25 12:30:49.063816663 +0000 UTC m=+0.144927971 container start 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.071 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.8959663, 25eece29-8689-47a8-b930-0492bf528de5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.071 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Paused (Lifecycle Event)
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.073 244018 INFO nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 6.28 seconds to spawn the instance on the hypervisor.
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.073 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:49 compute-0 ceph-mon[76335]: pgmap v1461: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Feb 25 12:30:49 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : New worker (306544) forked
Feb 25 12:30:49 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : Loading success.
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.341 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.347 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.9006157, 25eece29-8689-47a8-b930-0492bf528de5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.347 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Resumed (Lifecycle Event)
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.369 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.380 244018 INFO nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 7.64 seconds to build instance.
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.427 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:49 compute-0 nova_compute[244014]: 2026-02-25 12:30:49.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 162 op/s
Feb 25 12:30:50 compute-0 ceph-mon[76335]: pgmap v1462: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 162 op/s
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.216 244018 DEBUG nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.218 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.218 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 DEBUG nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:51 compute-0 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 WARNING nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received unexpected event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with vm_state active and task_state None.
Feb 25 12:30:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 278 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 25 12:30:52 compute-0 ceph-mon[76335]: pgmap v1463: 305 pgs: 305 active+clean; 278 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.252 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.254 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.283 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.378 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.379 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.387 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.387 244018 INFO nova.compute.claims [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:52 compute-0 nova_compute[244014]: 2026-02-25 12:30:52.543 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129785163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.069 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.076 244018 DEBUG nova.compute.provider_tree [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.103 244018 DEBUG nova.scheduler.client.report [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.140 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.141 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.210 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.211 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.237 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:30:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2129785163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.264 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.390 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.392 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.393 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating image(s)
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.419 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.445 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.477 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.481 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.516 244018 DEBUG nova.policy [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87b7b8b029dc48549d8d5982d7329f63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.522 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022638.5156546, d945940d-a1b5-4a36-b980-efda3a9efda6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.522 244018 INFO nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Stopped (Lifecycle Event)
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.524 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.524 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.551 244018 DEBUG nova.compute.manager [None req-def9d0e1-9cc0-45cd-a8f2-960ffcd6cd3a - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.560 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.566 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.566 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.567 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.567 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.590 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.592 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7715b665-70af-4b84-b8a3-85ccea6ab805_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.634 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.635 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.645 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.645 244018 INFO nova.compute.claims [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.689 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.689 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.709 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.776 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:53 compute-0 nova_compute[244014]: 2026-02-25 12:30:53.838 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.182 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.184 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.185 244018 INFO nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Terminating instance
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.186 244018 DEBUG nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:30:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602529104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:54 compute-0 kernel: tap8c2528a5-82 (unregistering): left promiscuous mode
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.469 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:54 compute-0 NetworkManager[49836]: <info>  [1772022654.4719] device (tap8c2528a5-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.477 244018 DEBUG nova.compute.provider_tree [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:54 compute-0 ovn_controller[147040]: 2026-02-25T12:30:54Z|00678|binding|INFO|Releasing lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 from this chassis (sb_readonly=0)
Feb 25 12:30:54 compute-0 ovn_controller[147040]: 2026-02-25T12:30:54Z|00679|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 down in Southbound
Feb 25 12:30:54 compute-0 ovn_controller[147040]: 2026-02-25T12:30:54Z|00680|binding|INFO|Removing iface tap8c2528a5-82 ovn-installed in OVS
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:54 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Deactivated successfully.
Feb 25 12:30:54 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Consumed 5.915s CPU time.
Feb 25 12:30:54 compute-0 systemd-machined[210048]: Machine qemu-86-instance-00000047 terminated.
Feb 25 12:30:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.636 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:57:14 10.100.0.7'], port_security=['fa:16:3e:a1:57:14 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25eece29-8689-47a8-b930-0492bf528de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f3f7599abe54e879797365670ae88f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8caf3e4-1d71-4f11-8550-c5b31d856ffc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94153ad-cd99-45e9-920b-47039b1f034f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8c2528a5-82a5-4855-b5eb-dd3a5eba5030) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.636 244018 INFO nova.virt.libvirt.driver [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance destroyed successfully.
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.637 244018 DEBUG nova.objects.instance [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'resources' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.639 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 in datapath 748b52bb-559a-4d75-b4f7-a397fd5e7e77 unbound from our chassis
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.639 244018 DEBUG nova.scheduler.client.report [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.641 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 748b52bb-559a-4d75-b4f7-a397fd5e7e77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:30:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.643 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd638720-4d89-4b34-a6ba-1acf1b3258ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.644 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 namespace which is not needed anymore
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.670 244018 DEBUG nova.virt.libvirt.vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:30:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:49Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.671 244018 DEBUG nova.network.os_vif_util [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.673 244018 DEBUG nova.network.os_vif_util [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.674 244018 DEBUG os_vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.679 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c2528a5-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.687 244018 INFO os_vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82')
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.772 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.773 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.776 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.785 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.786 244018 INFO nova.compute.claims [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.809 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Successfully created port: e61932f1-9a36-4c95-a52f-470c182ac70f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.845 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.850 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.850 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.862 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.880 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.962 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:54 compute-0 ceph-mon[76335]: pgmap v1464: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.987 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.989 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:30:54 compute-0 nova_compute[244014]: 2026-02-25 12:30:54.989 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating image(s)
Feb 25 12:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:55 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : haproxy version is 2.8.14-c23fe91
Feb 25 12:30:55 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : path to executable is /usr/sbin/haproxy
Feb 25 12:30:55 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [WARNING]  (306542) : Exiting Master process...
Feb 25 12:30:55 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [ALERT]    (306542) : Current worker (306544) exited with code 143 (Terminated)
Feb 25 12:30:55 compute-0 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [WARNING]  (306542) : All workers exited. Exiting... (0)
Feb 25 12:30:55 compute-0 systemd[1]: libpod-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope: Deactivated successfully.
Feb 25 12:30:55 compute-0 podman[306734]: 2026-02-25 12:30:55.126899796 +0000 UTC m=+0.393295604 container died 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.147 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.168 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.190 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.192 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.244 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.245 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.246 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.246 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.269 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.273 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e954c936-91fe-4aa5-8c91-78ec08c85221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:30:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909746229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.534 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.542 244018 DEBUG nova.compute.provider_tree [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.561 244018 DEBUG nova.scheduler.client.report [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.594 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.596 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.655 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.674 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.691 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Successfully updated port: e61932f1-9a36-4c95-a52f-470c182ac70f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.695 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.704 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.704 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.705 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.745 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7715b665-70af-4b84-b8a3-85ccea6ab805_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.781 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.782 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.783 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating image(s)
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.804 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.827 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.845 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.848 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:55 compute-0 nova_compute[244014]: 2026-02-25 12:30:55.906 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] resizing rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:30:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.011 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.012 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.013 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.013 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.090 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.094 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.124 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:30:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1602529104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2909746229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b-userdata-shm.mount: Deactivated successfully.
Feb 25 12:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb79d7d65570a7eb27f03f2a9bdb296e9b93b584c351901416e7c35c831aebbc-merged.mount: Deactivated successfully.
Feb 25 12:30:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:30:56 compute-0 podman[306734]: 2026-02-25 12:30:56.818191477 +0000 UTC m=+2.084587315 container cleanup 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:30:56 compute-0 systemd[1]: libpod-conmon-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope: Deactivated successfully.
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.841 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.864 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:56 compute-0 nova_compute[244014]: 2026-02-25 12:30:56.865 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance network_info: |[{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.199 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 WARNING nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received unexpected event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with vm_state active and task_state deleting.
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing instance network info cache due to event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.203 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:30:57 compute-0 ceph-mon[76335]: pgmap v1465: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.411 244018 DEBUG nova.objects.instance [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'migration_context' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.426 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.426 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Ensure instance console log exists: /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.430 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start _get_guest_xml network_info=[{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.433 244018 WARNING nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.437 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.438 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.447 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:57 compute-0 podman[307034]: 2026-02-25 12:30:57.516288523 +0000 UTC m=+0.672853142 container remove 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab71f7ed-2a17-4daa-807a-7f6a90cb2712]: (4, ('Wed Feb 25 12:30:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 (3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b)\n3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b\nWed Feb 25 12:30:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 (3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b)\n3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f3139c-ce1f-426d-99df-099c6d8249a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap748b52bb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:57 compute-0 kernel: tap748b52bb-50: left promiscuous mode
Feb 25 12:30:57 compute-0 nova_compute[244014]: 2026-02-25 12:30:57.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.545 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6392ca98-469b-4ca6-af25-a80fcc3b8ab5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07767f53-7d8a-4b69-843c-22d578fe2165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47324682-9a42-41cc-b97a-b09523b4c857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b309c1b0-22b7-4d4c-9400-fd923a6e354a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461779, 'reachable_time': 44390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307070, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.587 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:30:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.587 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e861710d-409e-4f73-a64e-bd2c99bed4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:30:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d748b52bb\x2d559a\x2d4d75\x2db4f7\x2da397fd5e7e77.mount: Deactivated successfully.
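The privsep replies above complete the teardown of the ovnmeta- namespace: the agent enumerates its links, deletes the namespace through neutron's privileged ip_lib, and systemd then reaps the netns bind mount. A minimal sketch of the underlying call, assuming the pyroute2 library that neutron's privileged ip_lib wraps:

    # Sketch only: neutron's remove_netns() adds privsep plumbing and
    # idempotency checks around this pyroute2 call.
    from pyroute2 import netns

    NS = 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77'
    if NS in netns.listnetns():
        netns.remove(NS)   # unlinks /var/run/netns/<NS>; systemd then logs
                           # the bind mount as 'Deactivated successfully'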
Feb 25 12:30:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Feb 25 12:30:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735797724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.041 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.063 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.065 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
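Each "Running cmd" / "CMD ... returned" pair above is oslo.concurrency's subprocess wrapper. A sketch of the call being made here, with arguments taken from the log:

    from oslo_concurrency import processutils

    # Raises ProcessExecutionError unless the exit code is 0, which is
    # why the DEBUG line reports "returned: 0".
    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')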
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.166 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e954c936-91fe-4aa5-8c91-78ec08c85221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.893s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.246 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
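After importing the cached base image, nova resizes the RBD image up to the flavor's 1 GiB root disk (1073741824 bytes). A sketch of that resize using the python-rbd bindings that nova.storage.rbd_utils wraps; the connection details are assumptions matching the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            image = rbd.Image(ioctx, 'e954c936-91fe-4aa5-8c91-78ec08c85221_disk')
            try:
                image.resize(1 * 1024 ** 3)   # 1073741824 bytes, root_gb=1
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()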
Feb 25 12:30:58 compute-0 ceph-mon[76335]: pgmap v1466: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Feb 25 12:30:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/735797724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674155135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.699 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.700 244018 DEBUG nova.virt.libvirt.vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": 
"e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.701 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.702 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
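nova_to_osvif_vif converts the Neutron VIF dict into a typed os-vif object before handing it to the 'ovs' plugin. A rough equivalent of the converted object, with the field values copied from the log record (constructing it by hand like this is illustrative, not nova's code path):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='e61932f1-9a36-4c95-a52f-470c182ac70f',
        address='fa:16:3e:38:25:49',
        bridge_name='br-int',
        vif_name='tape61932f1-9a',
        has_traffic_filtering=True,   # "port_filter": true in details
        preserve_on_delete=False,
        active=False)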
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.704 244018 DEBUG nova.objects.instance [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.712 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updated VIF entry in instance network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.713 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.729 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <uuid>7715b665-70af-4b84-b8a3-85ccea6ab805</uuid>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <name>instance-00000048</name>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestManualDisk-server-1810610267</nova:name>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:57</nova:creationTime>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:user uuid="87b7b8b029dc48549d8d5982d7329f63">tempest-ServersTestManualDisk-155925755-project-member</nova:user>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:project uuid="c7f30b1d5a1f4604bb44f655b6be0571">tempest-ServersTestManualDisk-155925755</nova:project>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <nova:port uuid="e61932f1-9a36-4c95-a52f-470c182ac70f">
Feb 25 12:30:58 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <system>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="serial">7715b665-70af-4b84-b8a3-85ccea6ab805</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="uuid">7715b665-70af-4b84-b8a3-85ccea6ab805</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </system>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <os>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </os>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <features>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </features>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7715b665-70af-4b84-b8a3-85ccea6ab805_disk">
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config">
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:30:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:38:25:49"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <target dev="tape61932f1-9a"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/console.log" append="off"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <video>
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </video>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:30:58 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:30:58 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:30:58 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:30:58 compute-0 nova_compute[244014]: </domain>
Feb 25 12:30:58 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
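With the domain XML rendered, the driver defines the guest in libvirt and later powers it on. A minimal sketch with the libvirt-python bindings; nova's real flow interleaves this with the VIF plugging and network-vif-plugged wait that follow below:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # xml = the <domain> document logged above
    dom.createWithFlags(0)      # power on the persistent domain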
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Preparing to wait for external event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
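The event is registered before the VIF is plugged so neutron's callback cannot be missed, which is what the short-lived "...-events" lock above protects. An illustrative sketch of the prepare-then-wait pattern (not nova's code, which builds on its own per-instance event table):

    import threading

    events, lock = {}, threading.Lock()

    def prepare(tag):
        with lock:                       # "Acquiring lock ...-events"
            return events.setdefault(tag, threading.Event())

    ev = prepare('network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f')
    # ... plug the VIF, define the domain ...
    ev.wait(timeout=300)                 # set when neutron reports the plug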
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.731 244018 DEBUG nova.virt.libvirt.vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": 
"e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.731 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG os_vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.734 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.736 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape61932f1-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.737 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape61932f1-9a, col_values=(('external_ids', {'iface-id': 'e61932f1-9a36-4c95-a52f-470c182ac70f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:25:49', 'vm-uuid': '7715b665-70af-4b84-b8a3-85ccea6ab805'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:58 compute-0 NetworkManager[49836]: <info>  [1772022658.7390] manager: (tape61932f1-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.744 244018 INFO os_vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a')
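The plug consists of two OVSDB transactions: an idempotent add_br that "caused no change", then add_port plus a db_set stamping the Interface row with the external_ids OVN uses to bind the port. A sketch with the ovsdbapp Open_vSwitch schema API, assuming the default local DB socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Txn 1: AddBridgeCommand (a no-op here, hence "caused no change")
    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)

    # Txn 2: AddPortCommand + DbSetCommand, values copied from the log
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tape61932f1-9a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape61932f1-9a',
            ('external_ids', {
                'iface-id': 'e61932f1-9a36-4c95-a52f-470c182ac70f',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:38:25:49',
                'vm-uuid': '7715b665-70af-4b84-b8a3-85ccea6ab805'})))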
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.914 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:58 compute-0 nova_compute[244014]: 2026-02-25 12:30:58.998 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.045 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No VIF found with MAC fa:16:3e:38:25:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Using config drive
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.065 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.212 244018 DEBUG nova.objects.instance [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Ensure instance console log exists: /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.244 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.244 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.245 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.254 244018 WARNING nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.262 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.262 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.265 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
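The topology search enumerates every sockets x cores x threads factorization of the vCPU count that fits the 65536-per-dimension limits, then sorts by the (here absent) preference; with one vCPU only 1:1:1 survives, matching "Got 1 possible topologies". A toy re-derivation, not nova's code:

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]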
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.271 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.319 244018 DEBUG nova.objects.instance [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.336 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ensure instance console log exists: /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.338 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.339 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.342 244018 WARNING nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.346 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.347 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.348 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.348 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.351 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.353 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.433 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating config drive at /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.438 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkgllwqmt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.584 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkgllwqmt" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.682 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/674155135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.687 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.729 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.730 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.744 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.811 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.816 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.817 244018 INFO nova.compute.claims [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:30:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225140866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:30:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232766859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.849 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.869 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.872 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.899 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.927 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:30:59 compute-0 nova_compute[244014]: 2026-02-25 12:30:59.930 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:30:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.162 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465789387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.434 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.437 244018 DEBUG nova.objects.instance [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574508948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.447 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.448 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deleting local config drive /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config because it was imported into RBD.
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.469 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <uuid>e954c936-91fe-4aa5-8c91-78ec08c85221</uuid>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <name>instance-00000049</name>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV247Test-server-978520516</nova:name>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:59</nova:creationTime>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="serial">e954c936-91fe-4aa5-8c91-78ec08c85221</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="uuid">e954c936-91fe-4aa5-8c91-78ec08c85221</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e954c936-91fe-4aa5-8c91-78ec08c85221_disk">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/console.log" append="off"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.471 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.472 244018 DEBUG nova.objects.instance [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.490 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <uuid>9b9beb6f-4642-40d1-b9e5-96345c682341</uuid>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <name>instance-0000004a</name>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV247Test-server-1350466245</nova:name>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:30:59</nova:creationTime>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="serial">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="uuid">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log" append="off"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.503 244018 INFO nova.virt.libvirt.driver [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deleting instance files /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5_del
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.504 244018 INFO nova.virt.libvirt.driver [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deletion of /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5_del complete
Feb 25 12:31:00 compute-0 kernel: tape61932f1-9a: entered promiscuous mode
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.5060] manager: (tape61932f1-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Feb 25 12:31:00 compute-0 systemd-udevd[307073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:00 compute-0 ovn_controller[147040]: 2026-02-25T12:31:00Z|00681|binding|INFO|Claiming lport e61932f1-9a36-4c95-a52f-470c182ac70f for this chassis.
Feb 25 12:31:00 compute-0 ovn_controller[147040]: 2026-02-25T12:31:00Z|00682|binding|INFO|e61932f1-9a36-4c95-a52f-470c182ac70f: Claiming fa:16:3e:38:25:49 10.100.0.5
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.5173] device (tape61932f1-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.5179] device (tape61932f1-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.525 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:25:49 10.100.0.5'], port_security=['fa:16:3e:38:25:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7715b665-70af-4b84-b8a3-85ccea6ab805', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f200cab-355f-4832-b410-50e7782a19b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbda8792-bcc5-48a3-8da7-ae12bedd4d63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba165cab-01bd-4b15-a1be-78df84ef667b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e61932f1-9a36-4c95-a52f-470c182ac70f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:00 compute-0 ovn_controller[147040]: 2026-02-25T12:31:00Z|00683|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f ovn-installed in OVS
Feb 25 12:31:00 compute-0 ovn_controller[147040]: 2026-02-25T12:31:00Z|00684|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f up in Southbound
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.527 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e61932f1-9a36-4c95-a52f-470c182ac70f in datapath 2f200cab-355f-4832-b410-50e7782a19b7 bound to our chassis
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.529 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f200cab-355f-4832-b410-50e7782a19b7
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a37775-1190-4ecb-ac32-0d7f804eeca9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.536 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f200cab-31 in ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.539 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f200cab-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.539 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0194b5c0-fa6b-47b7-a937-380be93cc96a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.540 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9851d08e-940d-4746-a6dd-831a51f867c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.553 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbff711-bfa1-4ee3-9a3a-e3aa292f63ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 systemd-machined[210048]: New machine qemu-87-instance-00000048.
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0628f-d279-41b5-a9cc-976ca15ea808]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-00000048.
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.579 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.583 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.584 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Using config drive
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.587 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c077243f-bc02-4fde-81f8-509e34d9aa9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.5919] manager: (tap2f200cab-30): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07528d38-0667-4db6-941f-4220c90c93f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 systemd-udevd[307528]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.617 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3d05860c-ba12-43fd-8b2f-6696c6c1c9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.624 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8f7896-35b9-46ef-bf08-fb34b7a83fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.6436] device (tap2f200cab-30): carrier: link connected
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.644 244018 INFO nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 6.46 seconds to destroy the instance on the hypervisor.
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG oslo.service.loopingcall [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG nova.network.neutron [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.648 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e7bf8d-0b39-48ac-8eb0-94053955ec96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.652 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.652 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.653 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Using config drive
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a043d328-618e-4ffa-bb67-967eb617e701]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f200cab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:99:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463005, 'reachable_time': 43854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307551, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.673 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4014b6b-c5e7-4fdd-9544-278f11312a10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:9962'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463005, 'tstamp': 463005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307566, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
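The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK record for tap2f200cab-31 and an RTM_NEWADDR record for its link-local address) fetched inside the ovnmeta- namespace on the agent's behalf. A minimal sketch of the same query done directly with pyroute2, assuming the namespace and interface names taken from the log and that pyroute2 is importable on the host:

    # Query the tap device state that the privsep replies above carry.
    # Namespace and interface names are copied verbatim from the log.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7')
    try:
        idx = ns.link_lookup(ifname='tap2f200cab-31')[0]
        link = ns.get_links(idx)[0]                 # ~ the RTM_NEWLINK reply
        print(link.get_attr('IFLA_OPERSTATE'))      # 'UP'
        print(link.get_attr('IFLA_ADDRESS'))        # 'fa:16:3e:55:99:62'
        for addr in ns.get_addr(index=idx):         # ~ the RTM_NEWADDR reply
            print(addr.get_attr('IFA_ADDRESS'))     # 'fe80::f816:3eff:fe55:9962'
    finally:
        ns.close()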
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.681 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.692 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[edad8808-2e7e-4c01-a47d-0766b120576a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f200cab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:99:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463005, 'reachable_time': 43854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307571, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2225140866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2232766859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 ceph-mon[76335]: pgmap v1467: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 25 12:31:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1465789387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3574508948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f15001b-f2c9-4d09-9aa0-7cddf29daa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442358198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.741 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
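The `ceph df --format=json` call above is how the resource tracker measures RBD-backed disk inventory, and it explains the ceph-mon "df" dispatch lines around it. A sketch of the same probe, assuming the usual `ceph df` JSON layout (a top-level 'stats' map with *_bytes totals); command arguments are copied from the log:

    # Re-run the capacity probe from the log and print cluster totals.
    # Field names assume the standard `ceph df --format=json` output.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('total=%.0f GiB used=%.1f GiB avail=%.0f GiB' % (
        stats['total_bytes'] / gib,
        stats['total_used_bytes'] / gib,
        stats['total_avail_bytes'] / gib))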
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.745 244018 DEBUG nova.compute.provider_tree [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.768 244018 DEBUG nova.scheduler.client.report [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
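Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the unchanged record above corresponds to 32 VCPUs, 7167 MB of RAM and about 52 GB of disk. A worked check with the numbers from the log line:

    # Capacity implied by the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2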
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2aa5b42-d7f6-436e-9a4f-7b4447a1eb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.789 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f200cab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.790 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.790 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f200cab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:00 compute-0 kernel: tap2f200cab-30: entered promiscuous mode
Feb 25 12:31:00 compute-0 NetworkManager[49836]: <info>  [1772022660.7936] manager: (tap2f200cab-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.796 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f200cab-30, col_values=(('external_ids', {'iface-id': '87b3311c-e91e-486e-ab14-ba8c3046ee03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
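The three ovsdbapp transactions above rewire the metadata tap: delete tap2f200cab-30 from br-ex if it is there, add it to br-int, and stamp the Interface row with the OVN iface-id so ovn-controller can bind the logical port. A sketch of the equivalent operations through ovs-vsctl, with port, bridge and iface-id values copied from the log:

    # Same three OVSDB operations as the DelPortCommand / AddPortCommand /
    # DbSetCommand above, expressed as ovs-vsctl calls.
    import subprocess

    port = 'tap2f200cab-30'
    iface_id = '87b3311c-e91e-486e-ab14-ba8c3046ee03'
    subprocess.check_call(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port])
    subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port])
    subprocess.check_call(['ovs-vsctl', 'set', 'Interface', port,
                           'external_ids:iface-id=%s' % iface_id])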
Feb 25 12:31:00 compute-0 ovn_controller[147040]: 2026-02-25T12:31:00Z|00685|binding|INFO|Releasing lport 87b3311c-e91e-486e-ab14-ba8c3046ee03 from this chassis (sb_readonly=0)
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.798 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.799 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.801 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73bf487b-2001-4b0e-a75b-4772d7ae1fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.803 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-2f200cab-355f-4832-b410-50e7782a19b7
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 2f200cab-355f-4832-b410-50e7782a19b7
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:31:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.804 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'env', 'PROCESS_TAG=haproxy-2f200cab-355f-4832-b410-50e7782a19b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f200cab-355f-4832-b410-50e7782a19b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
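The agent renders the haproxy configuration shown above and then launches haproxy inside the ovnmeta- namespace through rootwrap. A rendered file like this can be sanity-checked separately with haproxy's parse-only mode; a minimal sketch, assuming the config path from the log:

    # Parse-check the generated proxy config; `haproxy -c -f FILE` only
    # validates the configuration and exits non-zero on errors.
    import subprocess

    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '2f200cab-355f-4832-b410-50e7782a19b7.conf')
    rc = subprocess.call(['haproxy', '-c', '-f', cfg])
    print('config OK' if rc == 0 else 'config invalid (rc=%d)' % rc)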
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.848 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.849 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.869 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.890 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating config drive at /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.894 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_ucae5xe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.923 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating config drive at /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.926 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp25k7oviy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
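Both requests above build their config drive the same way: mkisofs packs a temporary metadata tree into an ISO9660 image with the volume label config-2, which is what cloud-init's config-drive datasource looks for. A sketch of that invocation with the flags from the log; SRC_DIR and OUT_ISO are hypothetical stand-ins for nova's tempdir and instance directory, and the publisher version string is omitted:

    # Build a config-2 ISO the way nova does above.
    import subprocess

    SRC_DIR = '/tmp/configdrive-src'   # hypothetical metadata tree
    OUT_ISO = '/tmp/disk.config'       # hypothetical output path
    subprocess.check_call(
        ['/usr/bin/mkisofs', '-o', OUT_ISO,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r', '-V', 'config-2', SRC_DIR])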
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.954 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.958 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022660.9372306, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.958 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Started (Lifecycle Event)
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.988 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.993 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022660.9373624, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:00 compute-0 nova_compute[244014]: 2026-02-25 12:31:00.993 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Paused (Lifecycle Event)
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.021 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.025 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
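The integers in the sync message above follow nova's power_state module: the database still records 0 (NOSTATE) while libvirt already reports 3 (PAUSED), which is expected mid-spawn. A small decoding table:

    # Decode 'current DB power_state: 0, VM power_state: 3' from the log.
    # Values follow nova.compute.power_state.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[3])   # NOSTATE -> PAUSED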
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.030 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_ucae5xe" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.053 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.056 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.078 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp25k7oviy" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.100 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.107 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.182 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.184 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deleting local config drive /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config because it was imported into RBD.
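Each locally built disk.config is pushed into the Ceph vms pool with `rbd import`, after which the local copy is deleted, as the lines above show. A sketch of the import plus a verification step, with pool, image and credential arguments copied from the log:

    # Import a config drive into RBD and confirm the image exists afterwards.
    import subprocess

    POOL = 'vms'
    IMAGE = 'e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config'
    SRC = ('/var/lib/nova/instances/'
           'e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config')
    subprocess.check_call(
        ['rbd', 'import', '--pool', POOL, SRC, IMAGE, '--image-format=2',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    subprocess.check_call(
        ['rbd', 'info', '%s/%s' % (POOL, IMAGE),
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])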
Feb 25 12:31:01 compute-0 podman[307706]: 2026-02-25 12:31:01.199969571 +0000 UTC m=+0.058468559 container create 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.207 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.211 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.212 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating image(s)
Feb 25 12:31:01 compute-0 systemd[1]: Started libpod-conmon-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.239 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562f19f351018e8168f67d2f0ec9c0e4adfbd26e742451e2aa28d80d269eda69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:01 compute-0 podman[307706]: 2026-02-25 12:31:01.167895242 +0000 UTC m=+0.026394310 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.264 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 podman[307706]: 2026-02-25 12:31:01.271339475 +0000 UTC m=+0.129838493 container init 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:31:01 compute-0 podman[307706]: 2026-02-25 12:31:01.276989685 +0000 UTC m=+0.135488673 container start 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.291 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.295 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
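When probing the cached base image, nova wraps qemu-img in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time (30 s) as a guard against malformed images; --force-share avoids taking the image lock on a file that may be in use. A sketch that re-runs the probe from the log line above and reads the JSON result:

    # Resource-limited image probe, as invoked in the log line above.
    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    out = subprocess.check_output(
        ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
         '--as=1073741824', '--cpu=30', '--',
         'env', 'LC_ALL=C', 'LANG=C',
         'qemu-img', 'info', path, '--force-share', '--output=json'])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])   # e.g. 'raw' and size in bytes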
Feb 25 12:31:01 compute-0 systemd-machined[210048]: New machine qemu-88-instance-00000049.
Feb 25 12:31:01 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : New worker (307813) forked
Feb 25 12:31:01 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : Loading success.
Feb 25 12:31:01 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-00000049.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.319 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.320 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.320 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting local config drive /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config because it was imported into RBD.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.363 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.363 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.364 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.364 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.382 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.385 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.492 244018 DEBUG nova.policy [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63928451c6a4137bb65e25561326aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:31:01 compute-0 systemd-machined[210048]: New machine qemu-89-instance-0000004a.
Feb 25 12:31:01 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004a.
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3442358198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.718 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.788 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.888 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.8566487, e954c936-91fe-4aa5-8c91-78ec08c85221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] VM Resumed (Lifecycle Event)
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.890 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.890 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.894 244018 DEBUG nova.objects.instance [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.897 244018 INFO nova.virt.libvirt.driver [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance spawned successfully.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.897 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 376 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.934 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ensure instance console log exists: /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.940 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.943 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.943 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.944 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.946 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.946 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.952 244018 DEBUG nova.compute.manager [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG nova.compute.manager [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Processing event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.954 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.954 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance spawned successfully.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.955 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.957 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.959 244018 INFO nova.virt.libvirt.driver [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance spawned successfully.
Feb 25 12:31:01 compute-0 nova_compute[244014]: 2026-02-25 12:31:01.959 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.015 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.016 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.018 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.021 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.022 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.8569694, e954c936-91fe-4aa5-8c91-78ec08c85221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.022 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] VM Started (Lifecycle Event)
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.036 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.036 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.037 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.037 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.038 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.039 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.087 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.096 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.118 244018 INFO nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 7.13 seconds to spawn the instance on the hypervisor.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.118 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.930519, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Resumed (Lifecycle Event)
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.148 244018 INFO nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 6.37 seconds to spawn the instance on the hypervisor.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.148 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.174 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.204 244018 INFO nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 8.81 seconds to spawn the instance on the hypervisor.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.204 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.210 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.210 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.9440706, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Started (Lifecycle Event)
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.213 244018 INFO nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 8.60 seconds to build instance.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.223 244018 INFO nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 8.47 seconds to build instance.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.540 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.544 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.550 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.559 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.583 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.9685843, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.583 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Resumed (Lifecycle Event)
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.603 244018 INFO nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 10.26 seconds to build instance.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.610 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.614 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.656 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:02 compute-0 ceph-mon[76335]: pgmap v1468: 305 pgs: 305 active+clean; 376 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.755 244018 DEBUG nova.network.neutron [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.773 244018 INFO nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 2.13 seconds to deallocate network for instance.
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.846 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.847 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.855 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Successfully created port: a308a2fe-7f22-4520-8be1-36876d0d5361 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:31:02 compute-0 nova_compute[244014]: 2026-02-25 12:31:02.997 244018 DEBUG oslo_concurrency.processutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580024248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.541 244018 DEBUG oslo_concurrency.processutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.548 244018 DEBUG nova.compute.provider_tree [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.575 244018 DEBUG nova.scheduler.client.report [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.602 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.656 244018 INFO nova.scheduler.client.report [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Deleted allocations for instance 25eece29-8689-47a8-b930-0492bf528de5
Feb 25 12:31:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3580024248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.726 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.831 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Successfully updated port: a308a2fe-7f22-4520-8be1-36876d0d5361 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.849 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.850 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquired lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:31:03 compute-0 nova_compute[244014]: 2026-02-25 12:31:03.850 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:31:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.046 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.261 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.262 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.262 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.263 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.264 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.264 244018 WARNING nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state active and task_state None.
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.265 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-deleted-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.265 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-changed-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.266 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Refreshing instance network info cache due to event network-changed-a308a2fe-7f22-4520-8be1-36876d0d5361. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.269 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.295 244018 INFO nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Rebuilding instance
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.562 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.585 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.641 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.655 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.667 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.682 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.700 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.704 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:31:04 compute-0 ceph-mon[76335]: pgmap v1469: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Feb 25 12:31:04 compute-0 nova_compute[244014]: 2026-02-25 12:31:04.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.141 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updating instance_info_cache with network_info: [{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.158 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Releasing lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.159 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance network_info: |[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.159 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.160 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Refreshing network info cache for port a308a2fe-7f22-4520-8be1-36876d0d5361 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.162 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start _get_guest_xml network_info=[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.166 244018 WARNING nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.171 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.172 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.175 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.175 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.176 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.176 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.183 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437135054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.818 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.841 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.846 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:31:05 compute-0 nova_compute[244014]: 2026-02-25 12:31:05.919 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:31:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 168 op/s
Feb 25 12:31:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2437135054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220219505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.363 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.364 244018 DEBUG nova.virt.libvirt.vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:01Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.364 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.365 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.367 244018 DEBUG nova.objects.instance [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <uuid>3f143481-f5f9-45d3-9d0b-b66e77ee0714</uuid>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <name>instance-0000004b</name>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-1795002660</nova:name>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:31:05</nova:creationTime>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <nova:port uuid="a308a2fe-7f22-4520-8be1-36876d0d5361">
Feb 25 12:31:06 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="serial">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="uuid">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk">
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config">
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:06 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:da:c8:fb"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <target dev="tapa308a2fe-7f"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log" append="off"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:06 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:06 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:06 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:06 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:06 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Preparing to wait for external event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG nova.virt.libvirt.vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:01Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.398 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.398 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG os_vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.400 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa308a2fe-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa308a2fe-7f, col_values=(('external_ids', {'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:c8:fb', 'vm-uuid': '3f143481-f5f9-45d3-9d0b-b66e77ee0714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:06 compute-0 NetworkManager[49836]: <info>  [1772022666.4062] manager: (tapa308a2fe-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.411 244018 INFO os_vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:da:c8:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.539 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Using config drive
Feb 25 12:31:06 compute-0 podman[308124]: 2026-02-25 12:31:06.550506688 +0000 UTC m=+0.109885617 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.564 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG nova.compute.manager [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG nova.compute.manager [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing instance network info cache due to event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.571 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.571 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:31:06 compute-0 podman[308125]: 2026-02-25 12:31:06.599867868 +0000 UTC m=+0.154281606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.729 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updated VIF entry in instance network info cache for port a308a2fe-7f22-4520-8be1-36876d0d5361. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.730 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updating instance_info_cache with network_info: [{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:06 compute-0 nova_compute[244014]: 2026-02-25 12:31:06.755 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:31:07 compute-0 ceph-mon[76335]: pgmap v1470: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 168 op/s
Feb 25 12:31:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/220219505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.259 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating config drive at /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.267 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv7o3qpwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.400 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv7o3qpwn" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.423 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.427 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.565 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.566 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting local config drive /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config because it was imported into RBD.
Feb 25 12:31:07 compute-0 kernel: tapa308a2fe-7f: entered promiscuous mode
Feb 25 12:31:07 compute-0 NetworkManager[49836]: <info>  [1772022667.6050] manager: (tapa308a2fe-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:07 compute-0 ovn_controller[147040]: 2026-02-25T12:31:07Z|00686|binding|INFO|Claiming lport a308a2fe-7f22-4520-8be1-36876d0d5361 for this chassis.
Feb 25 12:31:07 compute-0 ovn_controller[147040]: 2026-02-25T12:31:07Z|00687|binding|INFO|a308a2fe-7f22-4520-8be1-36876d0d5361: Claiming fa:16:3e:da:c8:fb 10.100.0.4
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.619 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '2', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.620 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.627 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:31:07 compute-0 ovn_controller[147040]: 2026-02-25T12:31:07Z|00688|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 ovn-installed in OVS
Feb 25 12:31:07 compute-0 ovn_controller[147040]: 2026-02-25T12:31:07Z|00689|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 up in Southbound
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:07 compute-0 systemd-machined[210048]: New machine qemu-90-instance-0000004b.
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[342eba33-5e6d-47ad-ab7f-3574f2117238]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004b.
Feb 25 12:31:07 compute-0 systemd-udevd[308243]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:31:07 compute-0 NetworkManager[49836]: <info>  [1772022667.6763] device (tapa308a2fe-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:31:07 compute-0 NetworkManager[49836]: <info>  [1772022667.6769] device (tapa308a2fe-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[35c38c72-d169-453c-ba31-73333cd87e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b51b41fa-0caf-45b6-b90b-cc73a3b2beff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.717 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[83e64448-1a38-4aab-9264-e3d90895ba67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd088751-3c3b-433e-acd5-68865542c7b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308254, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f61b81-4a80-4608-a814-1f39ce88fdc6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308255, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308255, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:07 compute-0 nova_compute[244014]: 2026-02-25 12:31:07.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 7.2 MiB/s wr, 361 op/s
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.140 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.1383505, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Started (Lifecycle Event)
Feb 25 12:31:08 compute-0 ceph-mon[76335]: pgmap v1471: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 7.2 MiB/s wr, 361 op/s
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.169 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.172 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.1397903, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.173 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Paused (Lifecycle Event)
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.204 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.208 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.232 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:08 compute-0 ovn_controller[147040]: 2026-02-25T12:31:08Z|00690|binding|INFO|Releasing lport 87b3311c-e91e-486e-ab14-ba8c3046ee03 from this chassis (sb_readonly=0)
Feb 25 12:31:08 compute-0 ovn_controller[147040]: 2026-02-25T12:31:08Z|00691|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.265 244018 DEBUG nova.compute.manager [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.267 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.268 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.268 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.269 244018 DEBUG nova.compute.manager [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Processing event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.284 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.288 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.288267, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.289 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Resumed (Lifecycle Event)
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.307 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.312 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance spawned successfully.
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.313 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.321 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.327 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.346 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.347 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.348 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.349 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.350 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.350 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.357 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.439 244018 INFO nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 7.23 seconds to spawn the instance on the hypervisor.
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.440 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.523 244018 INFO nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 8.74 seconds to build instance.
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.556 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.581 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updated VIF entry in instance network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.582 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.610 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:31:08 compute-0 nova_compute[244014]: 2026-02-25 12:31:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:09 compute-0 nova_compute[244014]: 2026-02-25 12:31:09.634 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022654.6325033, 25eece29-8689-47a8-b930-0492bf528de5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:09 compute-0 nova_compute[244014]: 2026-02-25 12:31:09.635 244018 INFO nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Stopped (Lifecycle Event)
Feb 25 12:31:09 compute-0 nova_compute[244014]: 2026-02-25 12:31:09.656 244018 DEBUG nova.compute.manager [None req-5ed2e0a9-5a44-4f30-a48a-5b33741ec828 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:09 compute-0 nova_compute[244014]: 2026-02-25 12:31:09.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.392 244018 DEBUG nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.394 244018 WARNING nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state active and task_state None.
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.609 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.610 244018 DEBUG nova.objects.instance [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'flavor' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.646 244018 DEBUG nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:31:10 compute-0 nova_compute[244014]: 2026-02-25 12:31:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:10 compute-0 ceph-mon[76335]: pgmap v1472: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Feb 25 12:31:11 compute-0 nova_compute[244014]: 2026-02-25 12:31:11.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.1 MiB/s wr, 367 op/s
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:31:12 compute-0 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:13 compute-0 ceph-mon[76335]: pgmap v1473: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.1 MiB/s wr, 367 op/s
Feb 25 12:31:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20037407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.569 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.646 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.646 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.651 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.651 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.660 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.660 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.664 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.664 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:31:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:13.805 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:13.806 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.854 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3005MB free_disk=59.85860504861921GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:13 compute-0 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 320 op/s
Feb 25 12:31:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/20037407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.100 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5732c5fb-59b3-4590-b65a-a696b9c90152 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.100 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7715b665-70af-4b84-b8a3-85ccea6ab805 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e954c936-91fe-4aa5-8c91-78ec08c85221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 9b9beb6f-4642-40d1-b9e5-96345c682341 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 3f143481-f5f9-45d3-9d0b-b66e77ee0714 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.207 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:14 compute-0 ovn_controller[147040]: 2026-02-25T12:31:14Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:25:49 10.100.0.5
Feb 25 12:31:14 compute-0 ovn_controller[147040]: 2026-02-25T12:31:14Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:25:49 10.100.0.5
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.751 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:14 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 25 12:31:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930314918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.949 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.955 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:14 compute-0 nova_compute[244014]: 2026-02-25 12:31:14.994 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:15 compute-0 nova_compute[244014]: 2026-02-25 12:31:15.022 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:31:15 compute-0 nova_compute[244014]: 2026-02-25 12:31:15.023 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:15 compute-0 ceph-mon[76335]: pgmap v1474: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 320 op/s
Feb 25 12:31:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1930314918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:15 compute-0 sudo[308345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:31:15 compute-0 sudo[308345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:15 compute-0 sudo[308345]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:15 compute-0 sudo[308370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 12:31:15 compute-0 sudo[308370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:15 compute-0 sudo[308370]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:31:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:31:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:15 compute-0 sudo[308414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:31:15 compute-0 sudo[308414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:15 compute-0 sudo[308414]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:15 compute-0 sudo[308439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:31:15 compute-0 sudo[308439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:15.808 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 1.0 MiB/s wr, 262 op/s
Feb 25 12:31:16 compute-0 sudo[308439]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:31:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:31:16 compute-0 sudo[308496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:31:16 compute-0 sudo[308496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:16 compute-0 sudo[308496]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:16 compute-0 sudo[308521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:31:16 compute-0 sudo[308521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:16 compute-0 nova_compute[244014]: 2026-02-25 12:31:16.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:16 compute-0 ceph-mon[76335]: pgmap v1475: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 1.0 MiB/s wr, 262 op/s
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:31:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.552111868 +0000 UTC m=+0.057740129 container create b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:31:16 compute-0 systemd[1]: Started libpod-conmon-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope.
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.523904398 +0000 UTC m=+0.029532659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.648254224 +0000 UTC m=+0.153882485 container init b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.658577047 +0000 UTC m=+0.164205318 container start b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.664488764 +0000 UTC m=+0.170117125 container attach b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:31:16 compute-0 hardcore_joliot[308575]: 167 167
Feb 25 12:31:16 compute-0 systemd[1]: libpod-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope: Deactivated successfully.
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.680046286 +0000 UTC m=+0.185674567 container died b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8725b5f832f0fd1045a12d874d634f58ce113e4306344a225cad4240be5678e4-merged.mount: Deactivated successfully.
Feb 25 12:31:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:16 compute-0 podman[308559]: 2026-02-25 12:31:16.725775562 +0000 UTC m=+0.231403833 container remove b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:31:16 compute-0 systemd[1]: libpod-conmon-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope: Deactivated successfully.
Feb 25 12:31:16 compute-0 podman[308600]: 2026-02-25 12:31:16.912279761 +0000 UTC m=+0.072474046 container create e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:16 compute-0 systemd[1]: Started libpod-conmon-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope.
Feb 25 12:31:16 compute-0 podman[308600]: 2026-02-25 12:31:16.884811742 +0000 UTC m=+0.045006097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:17 compute-0 podman[308600]: 2026-02-25 12:31:17.022995451 +0000 UTC m=+0.183189816 container init e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:31:17 compute-0 podman[308600]: 2026-02-25 12:31:17.040521578 +0000 UTC m=+0.200715843 container start e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:31:17 compute-0 podman[308600]: 2026-02-25 12:31:17.048115963 +0000 UTC m=+0.208310308 container attach e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:31:17 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 25 12:31:17 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Consumed 12.738s CPU time.
Feb 25 12:31:17 compute-0 systemd-machined[210048]: Machine qemu-89-instance-0000004a terminated.
Feb 25 12:31:17 compute-0 happy_hawking[308617]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:31:17 compute-0 happy_hawking[308617]: --> All data devices are unavailable
Feb 25 12:31:17 compute-0 systemd[1]: libpod-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope: Deactivated successfully.
Feb 25 12:31:17 compute-0 podman[308600]: 2026-02-25 12:31:17.613553688 +0000 UTC m=+0.773747963 container died e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4-merged.mount: Deactivated successfully.
Feb 25 12:31:17 compute-0 podman[308600]: 2026-02-25 12:31:17.657921796 +0000 UTC m=+0.818116071 container remove e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:17 compute-0 systemd[1]: libpod-conmon-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope: Deactivated successfully.
Feb 25 12:31:17 compute-0 sudo[308521]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:17 compute-0 sudo[308651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:31:17 compute-0 sudo[308651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:17 compute-0 sudo[308651]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:17 compute-0 nova_compute[244014]: 2026-02-25 12:31:17.768 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance shutdown successfully after 13 seconds.
Feb 25 12:31:17 compute-0 nova_compute[244014]: 2026-02-25 12:31:17.774 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.
Feb 25 12:31:17 compute-0 nova_compute[244014]: 2026-02-25 12:31:17.779 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.
Feb 25 12:31:17 compute-0 sudo[308678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:31:17 compute-0 sudo[308678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.5 MiB/s wr, 448 op/s
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.024 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.078088321 +0000 UTC m=+0.045890453 container create 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.098 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting instance files /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.099 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deletion of /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del complete
Feb 25 12:31:18 compute-0 systemd[1]: Started libpod-conmon-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope.
Feb 25 12:31:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.140189502 +0000 UTC m=+0.107991644 container init 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.147809278 +0000 UTC m=+0.115611420 container start 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.054359958 +0000 UTC m=+0.022162090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:18 compute-0 stupefied_einstein[308749]: 167 167
Feb 25 12:31:18 compute-0 systemd[1]: libpod-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope: Deactivated successfully.
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.153336375 +0000 UTC m=+0.121138517 container attach 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.157307827 +0000 UTC m=+0.125109949 container died 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6af21c1b6ef1e4e6e3f5b7af2aee9faed8870d8483ad65ba06337b7ddcee0d5-merged.mount: Deactivated successfully.
Feb 25 12:31:18 compute-0 podman[308733]: 2026-02-25 12:31:18.20360562 +0000 UTC m=+0.171407742 container remove 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:31:18 compute-0 systemd[1]: libpod-conmon-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope: Deactivated successfully.
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.246 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.246 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating image(s)
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.272 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.295 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.317 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.321 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.365418989 +0000 UTC m=+0.042773974 container create 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:31:18 compute-0 systemd[1]: Started libpod-conmon-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope.
Feb 25 12:31:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.437 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.438 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.439 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.439 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
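The qemu-img probe logged just above runs under oslo's prlimit wrapper, which caps the child's address space at 1 GiB and its CPU time at 30 s before qemu-img parses the (potentially untrusted) base image. A sketch reproducing that guarded call; the command line is taken verbatim from the log, the helper function itself is hypothetical, and running it assumes the oslo.concurrency package is installed as it is on this node:

import json
import subprocess

def qemu_img_info(path, as_bytes=1073741824, cpu_seconds=30):
    # Mirror nova's resource-limited image probe: prlimit caps the child,
    # then qemu-img reports image metadata as JSON.
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        f"--as={as_bytes}", f"--cpu={cpu_seconds}", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)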
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.341599103 +0000 UTC m=+0.018954128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.442229807 +0000 UTC m=+0.119584812 container init 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.448485454 +0000 UTC m=+0.125840439 container start 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.454575167 +0000 UTC m=+0.131930192 container attach 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.467 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.470 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.734 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
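Here nova imports the cached base image from /var/lib/nova/instances/_base into the vms pool as the instance's root disk. A sketch of the same step via subprocess; the CLI arguments mirror the logged command exactly, while the wrapper function is illustrative only:

import subprocess

def rbd_import(base_path, pool, image_name, ceph_id="openstack",
               conf="/etc/ceph/ceph.conf"):
    # Import a flat base image into RBD as a format-2 image,
    # matching the command nova_compute logs above.
    cmd = [
        "rbd", "import",
        "--pool", pool,
        base_path, image_name,
        "--image-format=2",
        "--id", ceph_id,
        "--conf", conf,
    ]
    subprocess.run(cmd, check=True)

# rbd_import("/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538",
#            "vms", "9b9beb6f-4642-40d1-b9e5-96345c682341_disk")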
Feb 25 12:31:18 compute-0 keen_mclaren[308846]: {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     "0": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "devices": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "/dev/loop3"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             ],
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_name": "ceph_lv0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_size": "21470642176",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "name": "ceph_lv0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "tags": {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_name": "ceph",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.crush_device_class": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.encrypted": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.objectstore": "bluestore",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_id": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.vdo": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.with_tpm": "0"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             },
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "vg_name": "ceph_vg0"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         }
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     ],
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     "1": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "devices": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "/dev/loop4"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             ],
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_name": "ceph_lv1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_size": "21470642176",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "name": "ceph_lv1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "tags": {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_name": "ceph",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.crush_device_class": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.encrypted": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.objectstore": "bluestore",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_id": "1",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.vdo": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.with_tpm": "0"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             },
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "vg_name": "ceph_vg1"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         }
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     ],
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     "2": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "devices": [
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "/dev/loop5"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             ],
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_name": "ceph_lv2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_size": "21470642176",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "name": "ceph_lv2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "tags": {
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.cluster_name": "ceph",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.crush_device_class": "",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.encrypted": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.objectstore": "bluestore",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osd_id": "2",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.vdo": "0",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:                 "ceph.with_tpm": "0"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             },
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "type": "block",
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:             "vg_name": "ceph_vg2"
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:         }
Feb 25 12:31:18 compute-0 keen_mclaren[308846]:     ]
Feb 25 12:31:18 compute-0 keen_mclaren[308846]: }
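The keen_mclaren lines above are the JSON payload of the cephadm-driven `ceph-volume lvm list --format json` call (the sudo invocation logged earlier), one JSON line per journal record: three bluestore OSDs (ids 0-2), each backed by a ~20 GiB LV on a loop device. A small sketch of how such a capture could be summarized, assuming the payload has been saved to a file; the file name and helper function are hypothetical:

import json

def summarize_lvm_list(path):
    # Map OSD id -> backing LV, physical devices, fsid, and size,
    # using the field names visible in the payload above.
    with open(path) as f:
        osds = json.load(f)
    summary = {}
    for osd_id, lvs in osds.items():
        for lv in lvs:
            summary[int(osd_id)] = {
                "lv_path": lv["lv_path"],
                "devices": lv["devices"],
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                "size_gib": int(lv["lv_size"]) / 2**30,
            }
    return summary

if __name__ == "__main__":
    for osd_id, info in sorted(summarize_lvm_list("lvm_list.json").items()):
        print(osd_id, info["lv_path"], info["devices"],
              round(info["size_gib"], 1), "GiB")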
Feb 25 12:31:18 compute-0 systemd[1]: libpod-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope: Deactivated successfully.
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.773218083 +0000 UTC m=+0.450573078 container died 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5-merged.mount: Deactivated successfully.
Feb 25 12:31:18 compute-0 podman[308825]: 2026-02-25 12:31:18.812792945 +0000 UTC m=+0.490147920 container remove 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:31:18 compute-0 systemd[1]: libpod-conmon-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope: Deactivated successfully.
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.834 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
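The resize target of 1073741824 bytes is consistent with the flavor used for this instance (m1.nano, root_gb=1, logged below in the _get_guest_xml step): 1 GiB is 1 x 1024^3 bytes. As a quick check:

# The rbd resize target matches the flavor's 1 GiB root disk.
root_gb = 1
assert root_gb * 1024**3 == 1073741824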
Feb 25 12:31:18 compute-0 sudo[308678]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:31:18 compute-0 sudo[308954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:31:18 compute-0 sudo[308954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:18 compute-0 sudo[308954]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.959 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.959 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ensure instance console log exists: /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.961 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:31:18 compute-0 sudo[308983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:31:18 compute-0 sudo[308983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.967 244018 WARNING nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.973 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.973 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.976 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.976 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.977 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.978 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.978 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.981 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.982 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:18 compute-0 nova_compute[244014]: 2026-02-25 12:31:18.999 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:19 compute-0 ceph-mon[76335]: pgmap v1476: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.5 MiB/s wr, 448 op/s
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.275770103 +0000 UTC m=+0.054633460 container create 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:31:19 compute-0 systemd[1]: Started libpod-conmon-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope.
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.257154605 +0000 UTC m=+0.036017982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.389606501 +0000 UTC m=+0.168469868 container init 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.399165762 +0000 UTC m=+0.178029129 container start 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.403130735 +0000 UTC m=+0.181994102 container attach 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:31:19 compute-0 goofy_feistel[309074]: 167 167
Feb 25 12:31:19 compute-0 systemd[1]: libpod-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope: Deactivated successfully.
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.40685742 +0000 UTC m=+0.185720817 container died 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0fea8d2f47546ab634dcdb10b00af7a9056721817a50344774451d54c15fe19-merged.mount: Deactivated successfully.
Feb 25 12:31:19 compute-0 podman[309058]: 2026-02-25 12:31:19.464136155 +0000 UTC m=+0.242999552 container remove 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:31:19 compute-0 systemd[1]: libpod-conmon-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope: Deactivated successfully.
Feb 25 12:31:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941473776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:19 compute-0 nova_compute[244014]: 2026-02-25 12:31:19.586 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
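nova's rbd_utils shells out to `ceph mon dump --format=json` (audited by ceph-mon a few lines up) to learn the monitor addresses it will use when wiring the RBD disk into the guest definition. A sketch of that discovery step; the addrvec parsing below assumes the modern msgr2 "public_addrs" layout and is illustrative rather than nova's exact code:

import json
import subprocess

def get_mon_addrs(ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
    # Run the same command nova logs above and pull every advertised
    # monitor address out of each mon's public_addrs addrvec.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", ceph_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    dump = json.loads(out)
    addrs = []
    for mon in dump.get("mons", []):
        for av in mon.get("public_addrs", {}).get("addrvec", []):
            addrs.append(av["addr"])
    return addrs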
Feb 25 12:31:19 compute-0 nova_compute[244014]: 2026-02-25 12:31:19.614 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:19 compute-0 nova_compute[244014]: 2026-02-25 12:31:19.618 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:19 compute-0 podman[309106]: 2026-02-25 12:31:19.661572034 +0000 UTC m=+0.048308711 container create 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:19 compute-0 systemd[1]: Started libpod-conmon-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope.
Feb 25 12:31:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:19 compute-0 podman[309106]: 2026-02-25 12:31:19.642529934 +0000 UTC m=+0.029266631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:31:19 compute-0 podman[309106]: 2026-02-25 12:31:19.754526149 +0000 UTC m=+0.141262936 container init 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:19 compute-0 podman[309106]: 2026-02-25 12:31:19.763644478 +0000 UTC m=+0.150381205 container start 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:19 compute-0 podman[309106]: 2026-02-25 12:31:19.777750568 +0000 UTC m=+0.164487285 container attach 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:19 compute-0 nova_compute[244014]: 2026-02-25 12:31:19.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.4 MiB/s wr, 256 op/s
Feb 25 12:31:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/941473776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661453527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.279 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.282 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <uuid>9b9beb6f-4642-40d1-b9e5-96345c682341</uuid>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <name>instance-0000004a</name>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV247Test-server-1350466245</nova:name>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:31:18</nova:creationTime>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="serial">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="uuid">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk">
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config">
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log" append="off"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:20 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:20 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:20 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:20 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:20 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Using config drive
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.348 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:20 compute-0 lvm[309251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:31:20 compute-0 lvm[309251]: VG ceph_vg0 finished
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.372 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:20 compute-0 lvm[309254]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:31:20 compute-0 lvm[309254]: VG ceph_vg1 finished
Feb 25 12:31:20 compute-0 lvm[309256]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:31:20 compute-0 lvm[309256]: VG ceph_vg2 finished
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.408 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'keypairs' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:20 compute-0 lucid_williams[309135]: {}
Feb 25 12:31:20 compute-0 systemd[1]: libpod-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Deactivated successfully.
Feb 25 12:31:20 compute-0 podman[309106]: 2026-02-25 12:31:20.519307037 +0000 UTC m=+0.906043714 container died 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:20 compute-0 systemd[1]: libpod-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Consumed 1.035s CPU time.
Feb 25 12:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350-merged.mount: Deactivated successfully.
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.564 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating config drive at /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.568 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g8sith0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:20 compute-0 podman[309106]: 2026-02-25 12:31:20.575429238 +0000 UTC m=+0.962165915 container remove 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:31:20 compute-0 systemd[1]: libpod-conmon-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Deactivated successfully.
Feb 25 12:31:20 compute-0 sudo[308983]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:31:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:31:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.697 244018 DEBUG nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:31:20 compute-0 sudo[309273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:31:20 compute-0 sudo[309273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.708 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g8sith0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:20 compute-0 sudo[309273]: pam_unix(sudo:session): session closed for user root
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.751 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.756 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.889 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:20 compute-0 nova_compute[244014]: 2026-02-25 12:31:20.890 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting local config drive /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config because it was imported into RBD.
Feb 25 12:31:20 compute-0 systemd-machined[210048]: New machine qemu-91-instance-0000004a.
Feb 25 12:31:20 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-0000004a.
Feb 25 12:31:21 compute-0 ceph-mon[76335]: pgmap v1477: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.4 MiB/s wr, 256 op/s
Feb 25 12:31:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/661453527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.876 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 9b9beb6f-4642-40d1-b9e5-96345c682341 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022681.8759117, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.878 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Resumed (Lifecycle Event)
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.882 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.883 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.887 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance spawned successfully.
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.888 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.921 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.925 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 507 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.8 MiB/s wr, 352 op/s
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.947 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022681.8811173, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.948 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Started (Lifecycle Event)
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.954 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.954 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.955 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.956 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.956 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.957 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.965 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.970 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:21 compute-0 nova_compute[244014]: 2026-02-25 12:31:21.999 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:31:22 compute-0 nova_compute[244014]: 2026-02-25 12:31:22.019 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:22 compute-0 nova_compute[244014]: 2026-02-25 12:31:22.086 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:22 compute-0 nova_compute[244014]: 2026-02-25 12:31:22.087 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:22 compute-0 nova_compute[244014]: 2026-02-25 12:31:22.087 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:31:22 compute-0 nova_compute[244014]: 2026-02-25 12:31:22.156 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:23 compute-0 ceph-mon[76335]: pgmap v1478: 305 pgs: 305 active+clean; 507 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.8 MiB/s wr, 352 op/s
Feb 25 12:31:23 compute-0 sshd-session[309392]: Invalid user lighthouse from 80.94.92.186 port 56796
Feb 25 12:31:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 10 MiB/s wr, 405 op/s
Feb 25 12:31:24 compute-0 sshd-session[309392]: Connection closed by invalid user lighthouse 80.94.92.186 port 56796 [preauth]
Feb 25 12:31:24 compute-0 kernel: tapa308a2fe-7f (unregistering): left promiscuous mode
Feb 25 12:31:24 compute-0 NetworkManager[49836]: <info>  [1772022684.5591] device (tapa308a2fe-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:31:24 compute-0 ovn_controller[147040]: 2026-02-25T12:31:24Z|00692|binding|INFO|Releasing lport a308a2fe-7f22-4520-8be1-36876d0d5361 from this chassis (sb_readonly=0)
Feb 25 12:31:24 compute-0 ovn_controller[147040]: 2026-02-25T12:31:24Z|00693|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 down in Southbound
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:24 compute-0 ovn_controller[147040]: 2026-02-25T12:31:24Z|00694|binding|INFO|Removing iface tapa308a2fe-7f ovn-installed in OVS
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:24 compute-0 ceph-mon[76335]: pgmap v1479: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 10 MiB/s wr, 405 op/s
Feb 25 12:31:24 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 25 12:31:24 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Consumed 13.090s CPU time.
Feb 25 12:31:24 compute-0 systemd-machined[210048]: Machine qemu-90-instance-0000004b terminated.
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.795 244018 INFO nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance shutdown successfully after 14 seconds.
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.804 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 12:31:24 compute-0 nova_compute[244014]: 2026-02-25 12:31:24.804 244018 DEBUG nova.objects.instance [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.520 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.521 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.521 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.522 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.522 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.523 244018 INFO nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Terminating instance
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.524 244018 DEBUG nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.521 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '4', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.523 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd91573-1bdf-4cd0-8e10-793a3ee70a5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.552 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.558 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.560 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.562 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.563 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.564 244018 INFO nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Terminating instance
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.565 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.565 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquired lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.566 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[743af188-b430-4105-a0c2-779add01fb88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 kernel: tape61932f1-9a (unregistering): left promiscuous mode
Feb 25 12:31:25 compute-0 NetworkManager[49836]: <info>  [1772022685.5875] device (tape61932f1-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.589 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52f018ff-fc75-4d46-a398-af09f1c6e62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_controller[147040]: 2026-02-25T12:31:25Z|00695|binding|INFO|Releasing lport e61932f1-9a36-4c95-a52f-470c182ac70f from this chassis (sb_readonly=0)
Feb 25 12:31:25 compute-0 ovn_controller[147040]: 2026-02-25T12:31:25Z|00696|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f down in Southbound
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 ovn_controller[147040]: 2026-02-25T12:31:25Z|00697|binding|INFO|Removing iface tape61932f1-9a ovn-installed in OVS
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:25:49 10.100.0.5'], port_security=['fa:16:3e:38:25:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7715b665-70af-4b84-b8a3-85ccea6ab805', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f200cab-355f-4832-b410-50e7782a19b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbda8792-bcc5-48a3-8da7-ae12bedd4d63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba165cab-01bd-4b15-a1be-78df84ef667b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e61932f1-9a36-4c95-a52f-470c182ac70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.630 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[22554f35-bf68-4162-91b4-8f743b5d01bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Deactivated successfully.
Feb 25 12:31:25 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Consumed 12.698s CPU time.
Feb 25 12:31:25 compute-0 systemd-machined[210048]: Machine qemu-87-instance-00000048 terminated.
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e34ddfdc-bc9c-4fec-8b67-ab150c50ad20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309422, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7319865c-6d1c-4dba-a941-701666102baf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309423, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309423, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
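The two privsep replies above are netlink dumps fetched inside the ovnmeta- namespace: one RTM_NEWLINK message for the tap device, then two RTM_NEWADDR records (10.100.0.2/28 and the 169.254.169.254/32 metadata address). The attribute layout matches pyroute2, which neutron's privileged ip_lib drives. A minimal sketch of the same queries done directly with pyroute2, illustrative only and not the agent's actual code path (namespace and interface names are copied from the records above):

    from pyroute2 import NetNS

    # Namespace and device names taken from the log records above.
    with NetNS('ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f') as ns:
        idx = ns.link_lookup(ifname='tapcd796561-b1')[0]
        link = ns.get_links(idx)[0]               # RTM_NEWLINK: IFLA_* attrs
        print(link.get_attr('IFLA_OPERSTATE'))    # 'UP' in the reply above
        for addr in ns.get_addr(index=idx):       # RTM_NEWADDR: IFA_* attrs
            # Prints 10.100.0.2 and 169.254.169.254, per the second reply.
            print(addr.get_attr('IFA_ADDRESS'))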
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.664 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.666 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
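The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) are idempotent: "Transaction caused no change" means the if_exists/may_exist flags found the desired state already present. A minimal sketch of issuing the same commands through ovsdbapp's public Open vSwitch API; the database socket path is an assumption, the port, bridge, and iface-id values come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; 'Open_vSwitch' is the standard schema.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    api.del_port('tapcd796561-b0', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tapcd796561-b0', may_exist=True).execute()
    api.db_set('Interface', 'tapcd796561-b0',
               ('external_ids',
                {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'})).execute()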
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.667 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e61932f1-9a36-4c95-a52f-470c182ac70f in datapath 2f200cab-355f-4832-b410-50e7782a19b7 unbound from our chassis
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.668 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f200cab-355f-4832-b410-50e7782a19b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d246b04-4b29-48ac-84ee-fd15d12f2206]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.669 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 namespace which is not needed anymore
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.715 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 15.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.752 244018 INFO nova.virt.libvirt.driver [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance destroyed successfully.
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.752 244018 DEBUG nova.objects.instance [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'resources' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.778 244018 DEBUG nova.virt.libvirt.vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:31:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:31:25 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : haproxy version is 2.8.14-c23fe91
Feb 25 12:31:25 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : path to executable is /usr/sbin/haproxy
Feb 25 12:31:25 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [WARNING]  (307803) : Exiting Master process...
Feb 25 12:31:25 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [ALERT]    (307803) : Current worker (307813) exited with code 143 (Terminated)
Feb 25 12:31:25 compute-0 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [WARNING]  (307803) : All workers exited. Exiting... (0)
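Exit code 143 in the ALERT line above is the conventional 128 + signal-number encoding, i.e. 128 + 15 (SIGTERM): the haproxy worker was terminated as part of the orderly master shutdown, not a crash. A one-line decode, for reference:

    import signal

    code = 143
    if code > 128:
        print(signal.Signals(code - 128).name)  # -> SIGTERM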
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.783 244018 DEBUG nova.network.os_vif_util [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:25 compute-0 systemd[1]: libpod-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope: Deactivated successfully.
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.784 244018 DEBUG nova.network.os_vif_util [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.784 244018 DEBUG os_vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.787 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61932f1-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:31:25 compute-0 podman[309445]: 2026-02-25 12:31:25.792042228 +0000 UTC m=+0.046640583 container died 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.795 244018 INFO os_vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a')
Feb 25 12:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795-userdata-shm.mount: Deactivated successfully.
Feb 25 12:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-562f19f351018e8168f67d2f0ec9c0e4adfbd26e742451e2aa28d80d269eda69-merged.mount: Deactivated successfully.
Feb 25 12:31:25 compute-0 podman[309445]: 2026-02-25 12:31:25.840628616 +0000 UTC m=+0.095226971 container cleanup 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:31:25 compute-0 systemd[1]: libpod-conmon-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope: Deactivated successfully.
Feb 25 12:31:25 compute-0 podman[309501]: 2026-02-25 12:31:25.91481026 +0000 UTC m=+0.056975397 container remove 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[251ff6c7-58d0-47a2-8023-824f46ef5640]: (4, ('Wed Feb 25 12:31:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 (1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795)\n1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795\nWed Feb 25 12:31:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 (1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795)\n1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
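The privsep reply above captures the wrapper's stdout as it stops and then deletes the per-network haproxy side-car container. A minimal illustrative equivalent driving the podman CLI from Python (container name copied from the log; the agent's actual helper may differ):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)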
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.924 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67e8b6aa-e0c1-4746-b39f-a626bf122888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.925 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f200cab-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:25 compute-0 kernel: tap2f200cab-30: left promiscuous mode
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 nova_compute[244014]: 2026-02-25 12:31:25.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bfb4a0-2d27-4017-ae3e-2b4e5c242444]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 376 op/s
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[819808a7-ff2c-4834-ad40-c5b27f0a6ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.953 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4715f6e-b79f-41cf-8cb7-9ead44aabb2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[60d70df9-9863-4d95-a4da-8c4f8097db35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462999, 'reachable_time': 22537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309520, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d2f200cab\x2d355f\x2d4832\x2db410\x2d50e7782a19b7.mount: Deactivated successfully.
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.974 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:31:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.974 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fd60e3-285e-426b-9a13-5d725a22df35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
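Unit names such as run-netns-ovnmeta\x2d2f200cab... above use systemd's name escaping, in which non-alphanumeric bytes become \xNN sequences (\x2d is '-'). A small decoder for the hex escapes seen in these mount-unit names; illustrative only, and it does not cover systemd's full escaping rules (e.g. a bare '-' standing for '/'):

    import re

    def systemd_unescape(name: str) -> str:
        # Turn each \xNN hex escape back into its character.
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(
        r'run-netns-ovnmeta\x2d2f200cab\x2d355f\x2d4832\x2db410\x2d50e7782a19b7'))
    # -> run-netns-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7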
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.128 244018 INFO nova.virt.libvirt.driver [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deleting instance files /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805_del
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.129 244018 INFO nova.virt.libvirt.driver [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deletion of /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805_del complete
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.140 244018 DEBUG nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.141 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.142 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.142 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.143 244018 DEBUG nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.143 244018 WARNING nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
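The Acquiring/acquired/released triples around each external event above come from oslo.concurrency: Nova serializes event handling per instance on a "<uuid>-events" lock, which is why the waited/held durations are logged. A minimal sketch of the same locking idiom (the function body is invented for illustration):

    from oslo_concurrency import lockutils

    # Emits the same DEBUG "Acquiring lock ... acquired ... released" lines.
    @lockutils.synchronized('3f143481-f5f9-45d3-9d0b-b66e77ee0714-events')
    def _pop_event():
        # Critical section: consult/mutate the per-instance event table.
        pass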
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.186 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.208 244018 INFO nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.208 244018 DEBUG oslo.service.loopingcall [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.209 244018 DEBUG nova.compute.manager [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.210 244018 DEBUG nova.network.neutron [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:31:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.976 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.995 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Releasing lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:31:26 compute-0 nova_compute[244014]: 2026-02-25 12:31:26.996 244018 DEBUG nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:31:27 compute-0 ceph-mon[76335]: pgmap v1480: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 376 op/s
Feb 25 12:31:27 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 25 12:31:27 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004a.scope: Consumed 5.981s CPU time.
Feb 25 12:31:27 compute-0 systemd-machined[210048]: Machine qemu-91-instance-0000004a terminated.
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.222 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.223 244018 DEBUG nova.objects.instance [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:27 compute-0 sshd-session[309522]: Received disconnect from 45.148.10.157 port 8850:11:  [preauth]
Feb 25 12:31:27 compute-0 sshd-session[309522]: Disconnected from authenticating user root 45.148.10.157 port 8850 [preauth]
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.622 244018 INFO nova.virt.libvirt.driver [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting instance files /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.624 244018 INFO nova.virt.libvirt.driver [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deletion of /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del complete
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.648 244018 DEBUG nova.network.neutron [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.696 244018 INFO nova.compute.manager [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 1.49 seconds to deallocate network for instance.
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.705 244018 INFO nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG oslo.service.loopingcall [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG nova.compute.manager [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.771 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.771 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.912 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.933 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 10 MiB/s wr, 481 op/s
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.953 244018 DEBUG oslo_concurrency.processutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:27 compute-0 nova_compute[244014]: 2026-02-25 12:31:27.983 244018 INFO nova.compute.manager [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 0.28 seconds to deallocate network for instance.
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.034 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.365 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.365 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state deleted and task_state None.
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.370 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state deleted and task_state None.
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.370 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-deleted-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279200398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.556 244018 DEBUG oslo_concurrency.processutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
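For Ceph-backed storage, the resource tracker measures capacity by shelling out to the ceph df command logged above (0.604s here). A minimal sketch of running and parsing the same command; the field names assume the standard ceph df JSON schema:

    import json
    import subprocess

    # Same invocation as the logged CMD, captured and decoded.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])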
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.563 244018 DEBUG nova.compute.provider_tree [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.578 244018 DEBUG nova.scheduler.client.report [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.604 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.606 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.646 244018 INFO nova.scheduler.client.report [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Deleted allocations for instance 7715b665-70af-4b84-b8a3-85ccea6ab805
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.744 244018 DEBUG oslo_concurrency.processutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:28 compute-0 nova_compute[244014]: 2026-02-25 12:31:28.793 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:29 compute-0 ceph-mon[76335]: pgmap v1481: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 10 MiB/s wr, 481 op/s
Feb 25 12:31:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4279200398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030116511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.346 244018 DEBUG oslo_concurrency.processutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.351 244018 DEBUG nova.compute.provider_tree [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.373 244018 DEBUG nova.scheduler.client.report [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.403 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.443 244018 INFO nova.scheduler.client.report [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Deleted allocations for instance 9b9beb6f-4642-40d1-b9e5-96345c682341
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.522 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:29 compute-0 sshd-session[308341]: Connection closed by 167.94.138.207 port 44152 [preauth]
Feb 25 12:31:29 compute-0 nova_compute[244014]: 2026-02-25 12:31:29.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 295 op/s
Feb 25 12:31:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2030116511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.137 244018 INFO nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Rebuilding instance
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.582 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.583 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.585 244018 INFO nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Terminating instance
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquired lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.587 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.664 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.686 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.745 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.756 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.766 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'resources' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.783 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.793 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.797 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance already shutdown.
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.803 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.809 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.810 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:29Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.810 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.811 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.811 244018 DEBUG os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.817 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa308a2fe-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.824 244018 INFO os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')
Feb 25 12:31:30 compute-0 nova_compute[244014]: 2026-02-25 12:31:30.865 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:31:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:31:30
Feb 25 12:31:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:31:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:31:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.meta']
Feb 25 12:31:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:31:31 compute-0 ceph-mon[76335]: pgmap v1482: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 295 op/s
Feb 25 12:31:31 compute-0 nova_compute[244014]: 2026-02-25 12:31:31.155 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:31 compute-0 nova_compute[244014]: 2026-02-25 12:31:31.172 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Releasing lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:31:31 compute-0 nova_compute[244014]: 2026-02-25 12:31:31.173 244018 DEBUG nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:31:31 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 25 12:31:31 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Consumed 12.876s CPU time.
Feb 25 12:31:31 compute-0 systemd-machined[210048]: Machine qemu-88-instance-00000049 terminated.
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:31 compute-0 nova_compute[244014]: 2026-02-25 12:31:31.595 244018 INFO nova.virt.libvirt.driver [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance destroyed successfully.
Feb 25 12:31:31 compute-0 nova_compute[244014]: 2026-02-25 12:31:31.595 244018 DEBUG nova.objects.instance [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:31:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:31:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 311 op/s
Feb 25 12:31:32 compute-0 ceph-mon[76335]: pgmap v1483: 305 pgs: 305 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 311 op/s
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.709 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting instance files /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.710 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deletion of /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del complete
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.857 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.858 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating image(s)
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.887 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.919 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.954 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:32 compute-0 nova_compute[244014]: 2026-02-25 12:31:32.959 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.090 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.092 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.093 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.094 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.124 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.128 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.868 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.912 244018 INFO nova.virt.libvirt.driver [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deleting instance files /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221_del
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.913 244018 INFO nova.virt.libvirt.driver [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deletion of /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221_del complete
Feb 25 12:31:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 237 op/s
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.953 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.988 244018 INFO nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 2.81 seconds to destroy the instance on the hypervisor.
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG oslo.service.loopingcall [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG nova.compute.manager [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:31:33 compute-0 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.038 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ensure instance console log exists: /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.040 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.041 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start _get_guest_xml network_info=[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.045 244018 WARNING nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.050 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.050 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.056 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.070 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:34 compute-0 ovn_controller[147040]: 2026-02-25T12:31:34Z|00698|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.271 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.291 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.311 244018 INFO nova.compute.manager [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 0.32 seconds to deallocate network for instance.
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.386 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.387 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.529 244018 DEBUG oslo_concurrency.processutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486925998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.617 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.648 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.654 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:34 compute-0 nova_compute[244014]: 2026-02-25 12:31:34.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:35 compute-0 ceph-mon[76335]: pgmap v1484: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 237 op/s
Feb 25 12:31:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3486925998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339143367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.109 244018 DEBUG oslo_concurrency.processutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.114 244018 DEBUG nova.compute.provider_tree [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.138 244018 DEBUG nova.scheduler.client.report [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.169 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.195 244018 INFO nova.scheduler.client.report [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Deleted allocations for instance e954c936-91fe-4aa5-8c91-78ec08c85221
Feb 25 12:31:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907256136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.237 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.239 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:32Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.240 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.241 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.245 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <uuid>3f143481-f5f9-45d3-9d0b-b66e77ee0714</uuid>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <name>instance-0000004b</name>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:name>tempest-tempest.common.compute-instance-1795002660</nova:name>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:31:34</nova:creationTime>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <nova:port uuid="a308a2fe-7f22-4520-8be1-36876d0d5361">
Feb 25 12:31:35 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="serial">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="uuid">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk">
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config">
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:da:c8:fb"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <target dev="tapa308a2fe-7f"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log" append="off"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:35 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:35 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:35 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:35 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:35 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
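The domain definition logged above by _get_guest_xml can be pulled back out of the hypervisor and inspected offline. A minimal sketch using the libvirt Python bindings (assumes the libvirt-python package, read access to qemu:///system, and the instance UUID from the log; this is not Nova's own code path):

    import libvirt
    import xml.etree.ElementTree as ET

    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

    # Connect to the local hypervisor and look the guest up by the UUID
    # seen in the log above.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('3f143481-f5f9-45d3-9d0b-b66e77ee0714')

    # XMLDesc() returns the live domain definition as a string.
    root = ET.fromstring(dom.XMLDesc(0))

    # The Nova driver records flavor/owner details under <metadata>.
    flavor = root.find('.//nova:flavor', NOVA_NS)
    print('flavor:', flavor.get('name'))                          # m1.nano
    print('memory MB:', flavor.find('nova:memory', NOVA_NS).text)  # 128
    conn.close()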
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.246 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Preparing to wait for external event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.247 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.247 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.248 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
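prepare_for_instance_event above registers the expected network-vif-plugged event under the per-instance events lock before the VIF is plugged, so a fast Neutron notification cannot race past the waiter. A toy analogue of that prepare/pop pattern, standard library only (class and method names are illustrative, not Nova's):

    import threading

    class InstanceEvents:
        """Tiny sketch of a per-event registry guarded by a lock."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            # Register the event *before* triggering the action that will
            # eventually fire it, so the callback cannot be lost.
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            # Called by the notification handler when the event arrives.
            with self._lock:
                ev = self._events.pop(name, None)
            if ev:
                ev.set()

    registry = InstanceEvents()
    waiter = registry.prepare(
        'network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361')
    # ... plug the VIF, define and start the guest ...
    # notification thread: registry.pop('network-vif-plugged-...')
    # waiter.wait(timeout=300)  # returns True once pop() fired the event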
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.249 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:32Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.249 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.250 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.251 244018 DEBUG os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.253 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.253 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.254 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.259 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa308a2fe-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.260 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa308a2fe-7f, col_values=(('external_ids', {'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:c8:fb', 'vm-uuid': '3f143481-f5f9-45d3-9d0b-b66e77ee0714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:35 compute-0 NetworkManager[49836]: <info>  [1772022695.2633] manager: (tapa308a2fe-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.268 244018 INFO os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')
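The os-vif plug above boils down to the two ovsdbapp transactions logged at 12:31:35.259-260: ensure the TAP port exists on br-int, then stamp the Interface row with the Neutron port ID and instance MAC so ovn-controller can claim it. Roughly the same wiring done directly with ovsdbapp (a sketch under assumptions: a local OVSDB unix socket at the default path, values copied from the log; the real plug goes through the os-vif ovs plugin):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction: add the port to br-int (idempotent), then attach
    # the external_ids that let ovn-controller bind the logical port.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa308a2fe-7f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa308a2fe-7f',
            ('external_ids', {
                'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361',
                'attached-mac': 'fa:16:3e:da:c8:fb',
            })))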
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.282 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.367 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.367 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.368 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:da:c8:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.370 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Using config drive
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.406 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.435 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:35 compute-0 nova_compute[244014]: 2026-02-25 12:31:35.497 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'keypairs' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 12:31:36 compute-0 nova_compute[244014]: 2026-02-25 12:31:36.027 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating config drive at /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config
Feb 25 12:31:36 compute-0 nova_compute[244014]: 2026-02-25 12:31:36.034 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7t6jeuso execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:36 compute-0 nova_compute[244014]: 2026-02-25 12:31:36.182 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7t6jeuso" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
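The config drive is a plain ISO 9660 image labelled config-2, built here with mkisofs through oslo.concurrency's processutils wrapper. The equivalent call from Python (a sketch: the staging directory stands in for the transient /tmp tree Nova builds, and mkisofs must be on the host; flags copied from the logged command):

    from oslo_concurrency import processutils

    # Build a config-drive ISO from a staged metadata tree. The volume
    # label "config-2" is what cloud-init probes for at boot.
    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              '3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute',
        '-quiet', '-J', '-r',
        '-V', 'config-2',
        '/tmp/staged-metadata')  # hypothetical staging dir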
Feb 25 12:31:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3339143367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/907256136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:36 compute-0 nova_compute[244014]: 2026-02-25 12:31:36.495 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:36 compute-0 nova_compute[244014]: 2026-02-25 12:31:36.500 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:36 compute-0 podman[309938]: 2026-02-25 12:31:36.803267027 +0000 UTC m=+0.144786166 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:31:36 compute-0 podman[309939]: 2026-02-25 12:31:36.901928115 +0000 UTC m=+0.243081224 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:31:37 compute-0 ceph-mon[76335]: pgmap v1485: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.393 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.893s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.394 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting local config drive /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config because it was imported into RBD.
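With RBD-backed instances the ISO is then imported into the vms pool and the local copy deleted, as the two lines above show. The same round trip scripted against the rbd CLI (a sketch mirroring the logged command; assumes the client.openstack cephx key and /etc/ceph/ceph.conf are in place):

    import os
    import subprocess

    local = ('/var/lib/nova/instances/'
             '3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config')
    image = '3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config'

    # Import the config drive into the "vms" pool as a format-2 RBD image,
    # authenticating as client.openstack.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, image,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # The RBD copy is now authoritative; drop the local staging file.
    os.unlink(local)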
Feb 25 12:31:37 compute-0 kernel: tapa308a2fe-7f: entered promiscuous mode
Feb 25 12:31:37 compute-0 ovn_controller[147040]: 2026-02-25T12:31:37Z|00699|binding|INFO|Claiming lport a308a2fe-7f22-4520-8be1-36876d0d5361 for this chassis.
Feb 25 12:31:37 compute-0 ovn_controller[147040]: 2026-02-25T12:31:37Z|00700|binding|INFO|a308a2fe-7f22-4520-8be1-36876d0d5361: Claiming fa:16:3e:da:c8:fb 10.100.0.4
Feb 25 12:31:37 compute-0 NetworkManager[49836]: <info>  [1772022697.4507] manager: (tapa308a2fe-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:37 compute-0 ovn_controller[147040]: 2026-02-25T12:31:37Z|00701|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 ovn-installed in OVS
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:37 compute-0 systemd-udevd[309995]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:31:37 compute-0 NetworkManager[49836]: <info>  [1772022697.4936] device (tapa308a2fe-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:31:37 compute-0 NetworkManager[49836]: <info>  [1772022697.4945] device (tapa308a2fe-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:31:37 compute-0 ovn_controller[147040]: 2026-02-25T12:31:37Z|00702|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 up in Southbound
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.535 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '5', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.537 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis
Feb 25 12:31:37 compute-0 rsyslogd[1020]: imjournal from <np0005629333:ovn_controller>: begin to drop messages due to rate-limiting
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.538 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
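The metadata agent reacted here because a registered ovsdbapp row event matched the Port_Binding update (the "Matched UPDATE" line at 12:31:37.535). A stripped-down event class in the same style (a sketch only; the real agent filters on several more columns and kicks off namespace provisioning from run()):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            # Fire on any update to Port_Binding rows; the interesting
            # filtering happens in run() below.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.chassis_name = chassis_name
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # Act only when the port has just been bound to our chassis
            # (old carries only the changed columns, hence the getattr).
            chassis = row.chassis[0].name if row.chassis else None
            was_unbound = not getattr(old, 'chassis', None)
            if chassis == self.chassis_name and was_unbound:
                print(f'Port {row.logical_port} bound to our chassis')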
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.555 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48a01a5d-f2a3-40d9-831d-b8e8556cea4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 systemd-machined[210048]: New machine qemu-92-instance-0000004b.
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.584 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34c4e556-321b-4b97-972d-144e96c42345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.587 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1b372ad2-930f-418c-a287-3daf4eb93615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004b.
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.614 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6921a1-4efc-4f52-9954-6701f681446a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba8480a-1e15-4603-b0a4-4c6cf5aca7e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310008, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d993a5dc-ee59-4e69-80ec-281c5402696d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310012, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310012, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.647 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:37 compute-0 nova_compute[244014]: 2026-02-25 12:31:37.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.651 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.652 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.653 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:37.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 279 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.019 244018 DEBUG nova.compute.manager [req-3b0b0e6c-6811-4351-a4d1-58061efbb0b2 req-5e5001c8-5c07-4a7b-b27a-9ca6269dc232 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.020 244018 DEBUG oslo_concurrency.lockutils [req-3b0b0e6c-6811-4351-a4d1-58061efbb0b2 req-5e5001c8-5c07-4a7b-b27a-9ca6269dc232 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.020 244018 DEBUG oslo_concurrency.lockutils [req-3b0b0e6c-6811-4351-a4d1-58061efbb0b2 req-5e5001c8-5c07-4a7b-b27a-9ca6269dc232 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.021 244018 DEBUG oslo_concurrency.lockutils [req-3b0b0e6c-6811-4351-a4d1-58061efbb0b2 req-5e5001c8-5c07-4a7b-b27a-9ca6269dc232 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.021 244018 DEBUG nova.compute.manager [req-3b0b0e6c-6811-4351-a4d1-58061efbb0b2 req-5e5001c8-5c07-4a7b-b27a-9ca6269dc232 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Processing event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:31:38 compute-0 ceph-mon[76335]: pgmap v1486: 305 pgs: 305 active+clean; 279 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.767 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 3f143481-f5f9-45d3-9d0b-b66e77ee0714 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.768 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022698.7669773, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.768 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Started (Lifecycle Event)
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.771 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.776 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.780 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance spawned successfully.
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.780 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.822 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.823 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.824 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.824 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.824 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.825 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.830 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.864 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.864 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022698.7673032, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.865 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Paused (Lifecycle Event)
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.901 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.906 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022698.774202, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.906 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Resumed (Lifecycle Event)
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.917 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.936 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:38 compute-0 nova_compute[244014]: 2026-02-25 12:31:38.970 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.017 244018 INFO nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] bringing vm to original state: 'stopped'
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.127 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.127 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.132 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.137 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:31:39 compute-0 kernel: tapa308a2fe-7f (unregistering): left promiscuous mode
Feb 25 12:31:39 compute-0 NetworkManager[49836]: <info>  [1772022699.3582] device (tapa308a2fe-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00703|binding|INFO|Releasing lport a308a2fe-7f22-4520-8be1-36876d0d5361 from this chassis (sb_readonly=0)
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00704|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 down in Southbound
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00705|binding|INFO|Removing iface tapa308a2fe-7f ovn-installed in OVS
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.378 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '6', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.380 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.383 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.400 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db2ba8ee-7e77-4aff-82b8-827cbb4e63f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 25 12:31:39 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004b.scope: Consumed 1.074s CPU time.
Feb 25 12:31:39 compute-0 systemd-machined[210048]: Machine qemu-92-instance-0000004b terminated.
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.433 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[79426871-ffd8-4876-883c-021b2e5c0838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.438 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf6af77-b678-4a66-a68a-ab5df2b2b1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.466 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c572cc3-3d7a-4461-a38a-15071af4109b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[914b38d2-ef96-4757-b9da-5c068d853845]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310065, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.500 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16571486-56a3-4d18-bc17-729a0f13bcc7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310066, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310066, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.502 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.508 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.509 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.510 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.511 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 kernel: tapa308a2fe-7f: entered promiscuous mode
Feb 25 12:31:39 compute-0 kernel: tapa308a2fe-7f (unregistering): left promiscuous mode
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00706|binding|INFO|Claiming lport a308a2fe-7f22-4520-8be1-36876d0d5361 for this chassis.
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00707|binding|INFO|a308a2fe-7f22-4520-8be1-36876d0d5361: Claiming fa:16:3e:da:c8:fb 10.100.0.4
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.560 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.567 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '6', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.569 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00708|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 ovn-installed in OVS
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00709|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 up in Southbound
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.570 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00710|binding|INFO|Releasing lport a308a2fe-7f22-4520-8be1-36876d0d5361 from this chassis (sb_readonly=1)
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00711|if_status|INFO|Dropped 2 log messages in last 253 seconds (most recently, 253 seconds ago) due to excessive rate
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00712|if_status|INFO|Not setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 down as sb is readonly
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00713|binding|INFO|Removing iface tapa308a2fe-7f ovn-installed in OVS
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00714|binding|INFO|Releasing lport a308a2fe-7f22-4520-8be1-36876d0d5361 from this chassis (sb_readonly=0)
Feb 25 12:31:39 compute-0 ovn_controller[147040]: 2026-02-25T12:31:39Z|00715|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 down in Southbound
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.577 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.577 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.584 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '6', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.585 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b5cb37-deee-4cfc-b32f-bd2e7250ee84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6a510911-a481-49ea-a80b-33cd4480fec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.622 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed7ea3c-de24-4e95-b8e2-4b3683103bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.644 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[84fad4ce-dde3-4ee6-bf9c-fefda1a589c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.655 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8c45cc33-a769-41ca-b64b-bf5def6bdc64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310083, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.676 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a592911-6a09-47ca-8b92-8f51240cc942]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310084, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310084, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.677 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.685 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.687 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.692 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.693 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.693 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.706 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df3a0948-ee0d-46ce-a5d3-3965d6be00fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.733 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[103e5c81-c421-4171-a0ed-27583ba55762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.740 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c453730a-cae2-48e3-b9d5-c1ee0bac1dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.755 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.774 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[373e6a99-3799-44d2-b42e-afb8fff9c077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.795 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02151b71-1d99-4f9b-889e-da736d755538]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310091, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.817 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8461392d-63f8-42c1-846c-88b5869e2757]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310092, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310092, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.819 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 nova_compute[244014]: 2026-02-25 12:31:39.825 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.827 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:39.828 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:31:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 279 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:40 compute-0 ceph-mon[76335]: pgmap v1487: 305 pgs: 305 active+clean; 279 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.469 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.469 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.470 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.470 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.470 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.470 244018 WARNING nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.471 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.471 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.471 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 WARNING nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.472 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.473 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.473 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.473 244018 WARNING nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.473 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.473 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.474 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.474 244018 DEBUG oslo_concurrency.lockutils [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.474 244018 DEBUG nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.474 244018 WARNING nova.compute.manager [req-68f9c2a3-ba67-4a01-8fdc-43f744c06f7e req-faa57ced-68cb-4d29-b924-f230b98bab6a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.749 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022685.7486222, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.750 244018 INFO nova.compute.manager [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Stopped (Lifecycle Event)
Feb 25 12:31:40 compute-0 nova_compute[244014]: 2026-02-25 12:31:40.949 244018 DEBUG nova.compute.manager [None req-202358ff-305b-4152-8b79-094c84225d0d - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.132 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.133 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.153 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.243 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.244 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.255 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.256 244018 INFO nova.compute.claims [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.415 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:41 compute-0 nova_compute[244014]: 2026-02-25 12:31:41.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 12:31:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2600348751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.063 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.069 244018 DEBUG nova.compute.provider_tree [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.097 244018 DEBUG nova.scheduler.client.report [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.126 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.127 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:31:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2600348751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.185 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.215 244018 INFO nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.220 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022687.219057, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.220 244018 INFO nova.compute.manager [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Stopped (Lifecycle Event)
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.253 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.258 244018 DEBUG nova.compute.manager [None req-087072eb-e83f-4729-967b-9547c4fe6ba3 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011162942311427955 of space, bias 1.0, pg target 0.33488826934283866 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024926574883644506 of space, bias 1.0, pg target 0.7477972465093352 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.511777702560569e-07 of space, bias 4.0, pg target 0.0010214133243072684 quantized to 16 (current 16)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:31:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.351 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.352 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.353 244018 INFO nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating image(s)
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.373 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.396 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.419 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.423 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.478 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.478 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.479 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.479 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.479 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.481 244018 INFO nova.compute.manager [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Terminating instance
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.482 244018 DEBUG nova.compute.manager [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.487 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.487 244018 DEBUG nova.objects.instance [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'resources' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.493 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.494 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.495 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.495 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.512 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.515 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.539 244018 DEBUG nova.virt.libvirt.vif [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:31:39Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.540 244018 DEBUG nova.network.os_vif_util [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.541 244018 DEBUG nova.network.os_vif_util [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.542 244018 DEBUG os_vif [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.544 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa308a2fe-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.549 244018 INFO os_vif [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.646 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.649 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.650 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.650 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.650 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.651 244018 WARNING nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state deleting.
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.651 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.651 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.652 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.652 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.652 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.653 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.653 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.653 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.654 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.654 244018 DEBUG oslo_concurrency.lockutils [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.654 244018 DEBUG nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:31:42 compute-0 nova_compute[244014]: 2026-02-25 12:31:42.655 244018 WARNING nova.compute.manager [req-63123385-f20b-401d-84f7-967439eaf7fe req-620e836e-6ea2-4a4d-b77f-64a86f5ee733 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state deleting.
Feb 25 12:31:43 compute-0 ceph-mon[76335]: pgmap v1488: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 12:31:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.006 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.083 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] resizing rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.168 244018 DEBUG nova.objects.instance [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.184 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.185 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Ensure instance console log exists: /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.185 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.185 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.186 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.188 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.192 244018 WARNING nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.197 244018 DEBUG nova.virt.libvirt.host [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.197 244018 DEBUG nova.virt.libvirt.host [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.200 244018 DEBUG nova.virt.libvirt.host [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.201 244018 DEBUG nova.virt.libvirt.host [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.201 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.202 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.202 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.202 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.203 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.203 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.203 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.203 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.204 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.204 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.204 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.205 244018 DEBUG nova.virt.hardware [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.208 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:44 compute-0 ceph-mon[76335]: pgmap v1489: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.241 244018 INFO nova.virt.libvirt.driver [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting instance files /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.243 244018 INFO nova.virt.libvirt.driver [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deletion of /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del complete
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.310 244018 INFO nova.compute.manager [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 1.83 seconds to destroy the instance on the hypervisor.
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.311 244018 DEBUG oslo.service.loopingcall [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.311 244018 DEBUG nova.compute.manager [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.311 244018 DEBUG nova.network.neutron [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
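[annotation] The "Waiting for function ... _deallocate_network_with_retries to return" line two entries up is the oslo.service looping-call retry pattern: the wrapped function is re-invoked on an interval until it raises LoopingCallDone, which unblocks the caller's wait(). A minimal sketch (not nova's exact code; try_deallocate is a hypothetical stand-in for one neutron deallocation attempt):

    from oslo_service import loopingcall

    def try_deallocate():
        """Hypothetical stand-in for one neutron deallocation attempt."""
        return True

    state = {"attempts": 0}

    def _deallocate_with_retries():
        state["attempts"] += 1
        if try_deallocate() or state["attempts"] >= 3:
            # Raising LoopingCallDone stops the loop and unblocks .wait().
            raise loopingcall.LoopingCallDone()

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    timer.start(interval=1).wait()   # the "Waiting for function ..." line is this wait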
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032123301' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.775 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
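[annotation] The `ceph mon dump --format=json` round trip above is how nova discovers the monitor endpoints that end up in the `<host name=... port=...>` elements of the guest XML further down (nova.storage.rbd_utils does the equivalent). A minimal sketch, assuming the dump carries a "mons" list whose "addr" fields look like "192.168.122.100:6789/0" as on this cluster; the exact key names and address format vary across Ceph releases:

    import json
    import subprocess

    def ceph_mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf]
        )
        hosts, ports = [], []
        for mon in json.loads(out).get("mons", []):
            addr = mon["addr"].split("/")[0]      # strip the trailing /nonce
            host, _, port = addr.rpartition(":")
            hosts.append(host)
            ports.append(port)
        return hosts, ports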
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.797 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.800 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:44 compute-0 nova_compute[244014]: 2026-02-25 12:31:44.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.200 244018 DEBUG nova.network.neutron [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.234 244018 INFO nova.compute.manager [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 0.92 seconds to deallocate network for instance.
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.293 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.293 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
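[annotation] The Acquiring/acquired pair above (and the matching "released ... held 0.712s" line later) is the standard oslo.concurrency pattern: a named semaphore serializes every resource-tracker mutation, and the lockutils wrapper itself emits these DEBUG timings. A minimal sketch, assuming plain lockutils rather than nova's own wrapper around it:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_usage():
        # Only one thread at a time mutates the tracked totals; the
        # decorator's inner function logs the acquire/wait/held lines
        # seen in this journal.
        pass

    update_usage()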
Feb 25 12:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342534473' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.325 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.326 244018 DEBUG nova.objects.instance [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1032123301' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.348 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <uuid>7f9688d5-08dc-4dca-8a01-f22b4efd73f7</uuid>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <name>instance-0000004c</name>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV257Test-server-878139332</nova:name>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:31:44</nova:creationTime>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:user uuid="0658aff9d14a4a779bd6eecd4839d072">tempest-ServerShowV257Test-939673239-project-member</nova:user>
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <nova:project uuid="10298efd53b440ebae3f696be5d24b76">tempest-ServerShowV257Test-939673239</nova:project>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="serial">7f9688d5-08dc-4dca-8a01-f22b4efd73f7</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="uuid">7f9688d5-08dc-4dca-8a01-f22b4efd73f7</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk">
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config">
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:31:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/console.log" append="off"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:31:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:31:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:31:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:31:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:31:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
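[annotation] The domain XML logged above is worth decoding: libvirt's <memory> is in KiB, so 131072 matches the m1.nano flavor's 128 MiB, and the nova metadata sits under its own XML namespace. A short standard-library sketch that pulls the key fields back out of exactly this document:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_domain(xml_text):
        root = ET.fromstring(xml_text)
        return {
            "uuid": root.findtext("uuid"),
            "name": root.findtext("name"),
            "memory_kib": int(root.findtext("memory")),   # 131072 KiB = 128 MiB
            "vcpus": int(root.findtext("vcpu")),
            "flavor": root.find(".//nova:flavor", NOVA_NS).get("name"),
            "disks": [d.get("device") for d in root.iter("disk")],
        }
    # For the XML above: flavor "m1.nano", disks ["disk", "cdrom"]
    # (the RBD root disk on vda plus the config-drive cdrom on sda).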
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.420 244018 DEBUG oslo_concurrency.processutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.451 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.452 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.452 244018 INFO nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Using config drive
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.474 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.514 244018 DEBUG nova.compute.manager [req-b162bab4-da6d-49dc-aae4-ca50ba300ec3 req-38ebe39d-f42f-4155-aed5-91d4ec9ac021 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-deleted-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110478245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.944 244018 DEBUG oslo_concurrency.processutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.951 244018 DEBUG nova.compute.provider_tree [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Feb 25 12:31:45 compute-0 nova_compute[244014]: 2026-02-25 12:31:45.972 244018 DEBUG nova.scheduler.client.report [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
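[annotation] The inventory dict above determines what placement will actually schedule: capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked directly from the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2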
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.005 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.050 244018 INFO nova.scheduler.client.report [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Deleted allocations for instance 3f143481-f5f9-45d3-9d0b-b66e77ee0714
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.070 244018 INFO nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating config drive at /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.076 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpku7hvtyb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.198 244018 DEBUG oslo_concurrency.lockutils [None req-bb630a6b-56e6-488c-8660-b0bd17e7d00f b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.217 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpku7hvtyb" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
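[annotation] The config drive built above is a plain ISO9660 image generated from a temp directory; the -V config-2 volume label is what cloud-init probes for inside the guest, and the publisher string appears unquoted in the logged command only because processutils logs argv joined with spaces. A minimal sketch of the same invocation, flag for flag:

    import subprocess

    def build_config_drive(iso_path, content_dir):
        # -J/-r add Joliet and Rock Ridge naming; -V config-2 is the
        # volume label cloud-init looks for when probing block devices.
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
            "-quiet", "-J", "-r", "-V", "config-2", content_dir,
        ])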
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.251 244018 DEBUG nova.storage.rbd_utils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.255 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/342534473' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:31:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2110478245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:46 compute-0 ceph-mon[76335]: pgmap v1490: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.593 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022691.5915518, e954c936-91fe-4aa5-8c91-78ec08c85221 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.594 244018 INFO nova.compute.manager [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] VM Stopped (Lifecycle Event)
Feb 25 12:31:46 compute-0 nova_compute[244014]: 2026-02-25 12:31:46.625 244018 DEBUG nova.compute.manager [None req-858f0773-648a-4b63-9464-c36de5eb9970 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:46 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb 25 12:31:46 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:46.826736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:31:46 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb 25 12:31:46 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022706826804, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1291, "num_deletes": 251, "total_data_size": 1819535, "memory_usage": 1853120, "flush_reason": "Manual Compaction"}
Feb 25 12:31:46 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022707019574, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1111701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30623, "largest_seqno": 31913, "table_properties": {"data_size": 1107032, "index_size": 2001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12935, "raw_average_key_size": 20, "raw_value_size": 1096670, "raw_average_value_size": 1771, "num_data_blocks": 90, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022595, "oldest_key_time": 1772022595, "file_creation_time": 1772022706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 192985 microseconds, and 5781 cpu microseconds.
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.019664) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1111701 bytes OK
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.019756) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.239862) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.239906) EVENT_LOG_v1 {"time_micros": 1772022707239896, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.239931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1813655, prev total WAL file size 1813655, number of live WAL files 2.
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.240605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1085KB)], [65(9678KB)]
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022707240710, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11022526, "oldest_snapshot_seqno": -1}
Feb 25 12:31:47 compute-0 nova_compute[244014]: 2026-02-25 12:31:47.258 244018 DEBUG oslo_concurrency.processutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:47 compute-0 nova_compute[244014]: 2026-02-25 12:31:47.259 244018 INFO nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deleting local config drive /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config because it was imported into RBD.
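[annotation] Nova shells out to the rbd CLI for the import above, but the existence probe logged repeatedly here ("rbd image ... does not exist", rbd_utils.py:80) can be reproduced with the python-rbd bindings. A minimal sketch, assuming python3-rados and python3-rbd are installed and the same client keyring is available:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            name = "7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config"
            try:
                image = rbd.Image(ioctx, name, read_only=True)
                image.close()
                print("image exists")
            except rbd.ImageNotFound:
                print("image does not exist")   # the rbd_utils.py:80 probe
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()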
Feb 25 12:31:47 compute-0 systemd-machined[210048]: New machine qemu-93-instance-0000004c.
Feb 25 12:31:47 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004c.
Feb 25 12:31:47 compute-0 nova_compute[244014]: 2026-02-25 12:31:47.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:31:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3419393208' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:31:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:31:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3419393208' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5797 keys, 8329386 bytes, temperature: kUnknown
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022707569529, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8329386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290642, "index_size": 23146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 144978, "raw_average_key_size": 25, "raw_value_size": 8186714, "raw_average_value_size": 1412, "num_data_blocks": 945, "num_entries": 5797, "num_filter_entries": 5797, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022707, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.569903) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8329386 bytes
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.608054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.5 rd, 25.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(17.4) write-amplify(7.5) OK, records in: 6261, records dropped: 464 output_compression: NoCompression
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.608104) EVENT_LOG_v1 {"time_micros": 1772022707608082, "job": 36, "event": "compaction_finished", "compaction_time_micros": 328921, "compaction_time_cpu_micros": 21441, "output_level": 6, "num_output_files": 1, "total_output_size": 8329386, "num_input_records": 6261, "num_output_records": 5797, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
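[annotation] The amplification figures in the JOB 36 compaction summary above check out against the byte counts logged earlier in the same job: write-amplify is output bytes over the freshly flushed L0 input, and read-write-amplify adds all bytes read and written over that same L0 input.

    l0_in = 1_111_701            # flushed L0 file #67, bytes
    l6_in = 11_022_526 - l0_in   # pre-existing L6 file #65 (input_data_size minus L0)
    out   = 8_329_386            # compacted L6 file #68, bytes

    write_amp      = out / l0_in                      # bytes written per new byte
    read_write_amp = (l0_in + l6_in + out) / l0_in    # bytes moved per new byte
    print(round(write_amp, 1), round(read_write_amp, 1))   # 7.5 17.4, as logged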
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022707608529, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022707610517, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.240489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.610634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.610639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.610641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.610642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:31:47.610644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:31:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 279 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Feb 25 12:31:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3419393208' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:31:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3419393208' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.957 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022708.957301, 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.958 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] VM Resumed (Lifecycle Event)
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.961 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.962 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.966 244018 INFO nova.virt.libvirt.driver [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance spawned successfully.
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.966 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.986 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:48 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.997 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
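[annotation] The "current DB power_state: 0, VM power_state: 1" comparison above decodes via nova.compute.power_state's integer constants: the database still holds NOSTATE because the instance is mid-spawn, while libvirt already reports the guest RUNNING.

    # Values as defined in nova.compute.power_state:
    NOSTATE   = 0x00   # DB side here: nothing recorded yet (still building)
    RUNNING   = 0x01   # hypervisor side here: guest is up
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07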
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:48.999 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.000 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.000 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.000 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.001 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.001 244018 DEBUG nova.virt.libvirt.driver [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.027 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.027 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022708.9582686, 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.028 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] VM Started (Lifecycle Event)
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.050 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.054 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.060 244018 INFO nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Took 6.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.061 244018 DEBUG nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.076 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.128 244018 INFO nova.compute.manager [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Took 7.93 seconds to build instance.
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.144 244018 DEBUG oslo_concurrency.lockutils [None req-50927508-d268-44b8-9ee2-6d716b889442 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:49 compute-0 ceph-mon[76335]: pgmap v1491: 305 pgs: 305 active+clean; 279 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 3.6 MiB/s wr, 124 op/s
Feb 25 12:31:49 compute-0 nova_compute[244014]: 2026-02-25 12:31:49.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 279 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Feb 25 12:31:50 compute-0 ceph-mon[76335]: pgmap v1492: 305 pgs: 305 active+clean; 279 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Feb 25 12:31:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 25 12:31:52 compute-0 nova_compute[244014]: 2026-02-25 12:31:52.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:52 compute-0 ceph-mon[76335]: pgmap v1493: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.086 244018 INFO nova.compute.manager [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Rebuilding instance
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.584 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.601 244018 DEBUG nova.compute.manager [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.656 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.672 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.708 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'resources' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.719 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.731 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:31:53 compute-0 nova_compute[244014]: 2026-02-25 12:31:53.734 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:31:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Feb 25 12:31:54 compute-0 nova_compute[244014]: 2026-02-25 12:31:54.575 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022699.5723782, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:31:54 compute-0 nova_compute[244014]: 2026-02-25 12:31:54.576 244018 INFO nova.compute.manager [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Stopped (Lifecycle Event)
Feb 25 12:31:54 compute-0 nova_compute[244014]: 2026-02-25 12:31:54.598 244018 DEBUG nova.compute.manager [None req-cc7dcc72-aec9-41e6-88d4-53f7af5383ae - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:31:54 compute-0 nova_compute[244014]: 2026-02-25 12:31:54.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:31:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:55 compute-0 ceph-mon[76335]: pgmap v1494: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Feb 25 12:31:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:31:56 compute-0 ceph-mon[76335]: pgmap v1495: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:31:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.315 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.316 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.343 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.434 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.434 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.442 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.443 244018 INFO nova.compute.claims [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:57 compute-0 nova_compute[244014]: 2026-02-25 12:31:57.595 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:31:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:31:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913285606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.142 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.147 244018 DEBUG nova.compute.provider_tree [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.175 244018 DEBUG nova.scheduler.client.report [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.213 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.214 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.270 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.271 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.299 244018 INFO nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.325 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.446 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.448 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.449 244018 INFO nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Creating image(s)
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.480 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.506 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.536 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.540 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.609 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.610 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.611 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.611 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.631 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:31:58 compute-0 nova_compute[244014]: 2026-02-25 12:31:58.636 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8decd922-10f2-4731-ac1e-36a062777059_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.004 244018 DEBUG nova.policy [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f6c1c9101c442839ec520dd809c1205', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff49334847464c879338fb12c3b27419', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:31:59 compute-0 ceph-mon[76335]: pgmap v1496: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:31:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/913285606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.029 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8decd922-10f2-4731-ac1e-36a062777059_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.087 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] resizing rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.159 244018 DEBUG nova.objects.instance [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'migration_context' on Instance uuid 8decd922-10f2-4731-ac1e-36a062777059 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.197 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.198 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Ensure instance console log exists: /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.199 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.199 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.199 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.479 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.479 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.494 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.549 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.549 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.555 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.555 244018 INFO nova.compute.claims [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.739 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:31:59 compute-0 nova_compute[244014]: 2026-02-25 12:31:59.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:31:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 72 op/s
Feb 25 12:32:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833428896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.298 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.304 244018 DEBUG nova.compute.provider_tree [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.381 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully created port: 325e7788-75cd-4305-b792-3102ddb850bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.395 244018 DEBUG nova.scheduler.client.report [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.425 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.426 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.506 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.506 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.541 244018 INFO nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.561 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.733 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.734 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.734 244018 INFO nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Creating image(s)
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.755 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.777 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.798 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.801 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.858 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.859 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.859 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:00 compute-0 nova_compute[244014]: 2026-02-25 12:32:00.860 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.361 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.365 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89230879-b88a-44d3-841e-fc664e122158_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.393 244018 DEBUG nova.policy [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63928451c6a4137bb65e25561326aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:32:01 compute-0 ceph-mon[76335]: pgmap v1497: 305 pgs: 305 active+clean; 279 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 72 op/s
Feb 25 12:32:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2833428896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.582 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully created port: df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.766 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89230879-b88a-44d3-841e-fc664e122158_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:01 compute-0 nova_compute[244014]: 2026-02-25 12:32:01.841 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 89230879-b88a-44d3-841e-fc664e122158_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:32:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 338 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:02 compute-0 ceph-mon[76335]: pgmap v1498: 305 pgs: 305 active+clean; 338 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.889 244018 DEBUG nova.objects.instance [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 89230879-b88a-44d3-841e-fc664e122158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.907 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.908 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Ensure instance console log exists: /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.908 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.909 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:02 compute-0 nova_compute[244014]: 2026-02-25 12:32:02.909 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:03 compute-0 nova_compute[244014]: 2026-02-25 12:32:03.030 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Successfully created port: 00b26e62-f20c-48bc-9701-d74dd9b15435 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:03 compute-0 nova_compute[244014]: 2026-02-25 12:32:03.061 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully created port: 97818aef-c1e4-4037-8d99-2ad0e69351c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:03 compute-0 nova_compute[244014]: 2026-02-25 12:32:03.860 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:32:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 391 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 5.1 MiB/s wr, 123 op/s
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.767 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully updated port: 325e7788-75cd-4305-b792-3102ddb850bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.830 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Successfully updated port: 00b26e62-f20c-48bc-9701-d74dd9b15435 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.852 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.853 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquired lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.853 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.971 244018 DEBUG nova.compute.manager [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-changed-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.972 244018 DEBUG nova.compute.manager [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing instance network info cache due to event network-changed-325e7788-75cd-4305-b792-3102ddb850bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.972 244018 DEBUG oslo_concurrency.lockutils [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.972 244018 DEBUG oslo_concurrency.lockutils [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:04 compute-0 nova_compute[244014]: 2026-02-25 12:32:04.973 244018 DEBUG nova.network.neutron [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing network info cache for port 325e7788-75cd-4305-b792-3102ddb850bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:05 compute-0 ceph-mon[76335]: pgmap v1499: 305 pgs: 305 active+clean; 391 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 5.1 MiB/s wr, 123 op/s
Feb 25 12:32:05 compute-0 nova_compute[244014]: 2026-02-25 12:32:05.094 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:05 compute-0 nova_compute[244014]: 2026-02-25 12:32:05.241 244018 DEBUG nova.network.neutron [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:05 compute-0 nova_compute[244014]: 2026-02-25 12:32:05.779 244018 DEBUG nova.network.neutron [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:05 compute-0 nova_compute[244014]: 2026-02-25 12:32:05.812 244018 DEBUG oslo_concurrency.lockutils [req-06f5f9d7-de57-4e94-8bf8-0dd102922b5c req-17a405c1-43cd-4d5e-8a0d-fcabe4315d44 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 391 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 5.1 MiB/s wr, 100 op/s
Feb 25 12:32:06 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 25 12:32:06 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Consumed 12.239s CPU time.
Feb 25 12:32:06 compute-0 systemd-machined[210048]: Machine qemu-93-instance-0000004c terminated.
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.116 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully updated port: df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.684 244018 DEBUG nova.network.neutron [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updating instance_info_cache with network_info: [{"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.725 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Releasing lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.725 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Instance network_info: |[{"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.728 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Start _get_guest_xml network_info=[{"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.731 244018 WARNING nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.738 244018 DEBUG nova.virt.libvirt.host [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.739 244018 DEBUG nova.virt.libvirt.host [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.742 244018 DEBUG nova.virt.libvirt.host [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.742 244018 DEBUG nova.virt.libvirt.host [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.743 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.743 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.743 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.744 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.744 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.744 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.744 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.745 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.745 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.745 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.745 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.745 244018 DEBUG nova.virt.hardware [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
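[annotation] The topology lines above walk a small search: with no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536), nova enumerates every sockets:cores:threads split of the flavor's vCPU count and sorts the candidates. A rough re-creation of that enumeration (illustrative only, not nova's exact algorithm):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the maxima from the log. Illustrative sketch.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single result logged above
    print(possible_topologies(4))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...

For the 1-vCPU m1.nano flavor the only candidate is 1:1:1, matching "Got 1 possible topologies".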
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.749 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.878 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance shutdown successfully after 13 seconds.
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.883 244018 INFO nova.virt.libvirt.driver [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance destroyed successfully.
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.887 244018 INFO nova.virt.libvirt.driver [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance destroyed successfully.
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:32:06 compute-0 nova_compute[244014]: 2026-02-25 12:32:06.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:32:07 compute-0 ceph-mon[76335]: pgmap v1500: 305 pgs: 305 active+clean; 391 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 5.1 MiB/s wr, 100 op/s
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.100 244018 DEBUG nova.compute.manager [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-changed-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.101 244018 DEBUG nova.compute.manager [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Refreshing instance network info cache due to event network-changed-00b26e62-f20c-48bc-9701-d74dd9b15435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.102 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.102 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.102 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Refreshing network info cache for port 00b26e62-f20c-48bc-9701-d74dd9b15435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.105 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.105 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.106 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.106 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.119 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Successfully updated port: 97818aef-c1e4-4037-8d99-2ad0e69351c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.170 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.170 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquired lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.170 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638271116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.278 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
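[annotation] The "ceph mon dump" round trip above (issued at 12:32:06.749, dispatched by the mon, returned in 0.529s) is how nova discovers monitor addresses to embed in the guest's rbd disk definition. A standalone sketch of the same call; the command and flags are exactly as logged, while the JSON field names are an assumption about the mon-dump schema:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))

The host/port pair that ends up in the domain XML further down (192.168.122.100:6789) comes from this map.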
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.313 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.318 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.358 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deleting instance files /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_del
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.360 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deletion of /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_del complete
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.401 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.559 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.560 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating image(s)
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.589 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.610 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.635 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.639 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.700 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
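[annotation] The qemu-img probe above is wrapped in oslo_concurrency.prlimit, which caps the child's address space (--as=1073741824) and CPU time (--cpu=30) so a malformed image cannot wedge the compute service. Stripped of the wrapper, the probe reduces to this sketch (path taken from the log; run it on a host that has the cached base image):

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(info["format"], info["virtual-size"])  # disk format and size in bytes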
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.701 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.702 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.702 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
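[annotation] The acquire/release pair above (held 0.000s, since the base image is already cached) is a named lock keyed by the image's hash: concurrent boots of the same image serialize on it so the fetch runs at most once. nova uses oslo.concurrency for this; a plain-threading sketch of the shape only:

    import threading

    _registry_lock = threading.Lock()
    _locks = {}

    def _lock_for(name):
        # Guard creation of the per-name locks themselves.
        with _registry_lock:
            return _locks.setdefault(name, threading.Lock())

    def cache_image(image_hash, fetch_func):
        # Mirrors the logged pattern: acquire the lock named by the image
        # hash, fetch (a no-op if the file already exists), release.
        with _lock_for(image_hash):
            fetch_func(image_hash)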
Feb 25 12:32:07 compute-0 podman[311011]: 2026-02-25 12:32:07.705339238 +0000 UTC m=+0.048723283 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.722 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:07 compute-0 podman[311014]: 2026-02-25 12:32:07.725496599 +0000 UTC m=+0.068248166 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.726 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416654670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.910 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.913 244018 DEBUG nova.virt.libvirt.vif [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-606599630',display_name='tempest-ServerActionsTestOtherA-server-606599630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-606599630',id=78,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-vhzo8rss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:00Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=89230879-b88a-44d3-841e-fc664e122158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.913 244018 DEBUG nova.network.os_vif_util [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.915 244018 DEBUG nova.network.os_vif_util [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.916 244018 DEBUG nova.objects.instance [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89230879-b88a-44d3-841e-fc664e122158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 373 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 5.7 MiB/s wr, 129 op/s
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.983 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <uuid>89230879-b88a-44d3-841e-fc664e122158</uuid>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <name>instance-0000004e</name>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerActionsTestOtherA-server-606599630</nova:name>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:32:06</nova:creationTime>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <nova:port uuid="00b26e62-f20c-48bc-9701-d74dd9b15435">
Feb 25 12:32:07 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="serial">89230879-b88a-44d3-841e-fc664e122158</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="uuid">89230879-b88a-44d3-841e-fc664e122158</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89230879-b88a-44d3-841e-fc664e122158_disk">
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89230879-b88a-44d3-841e-fc664e122158_disk.config">
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:61:98:17"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <target dev="tap00b26e62-f2"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/console.log" append="off"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:32:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:32:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:32:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:32:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:32:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
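[annotation] A dump like the <domain> document above is also convenient to query programmatically, for example to recover which rbd volume each disk points at. A small ElementTree sketch (paste the XML above into xml_text; element names match the dump):

    import xml.etree.ElementTree as ET

    xml_text = """<domain type="kvm"> ... </domain>"""  # the dump printed above
    root = ET.fromstring(xml_text)
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            print(tgt.get("dev"), "->", src.get("name"))

    # With the XML above this prints:
    #   vda -> vms/89230879-b88a-44d3-841e-fc664e122158_disk
    #   sda -> vms/89230879-b88a-44d3-841e-fc664e122158_disk.config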
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.984 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Preparing to wait for external event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.984 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.984 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.985 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
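[annotation] The "Preparing to wait for external event network-vif-plugged-..." lines above are the compute side of a handshake with neutron: nova registers a waiter before plugging the VIF, then blocks on it later, so the event cannot be missed in the gap. Reduced to a threading.Event, the shape is roughly this (names illustrative, not nova's):

    import threading

    _events = {}

    def prepare_for_event(tag):
        # Register the waiter *before* triggering the action that causes
        # the event, exactly as the log does before plugging the VIF.
        return _events.setdefault(tag, threading.Event())

    def deliver_event(tag):
        # Called when neutron reports the event back to the compute host.
        _events.setdefault(tag, threading.Event()).set()

    waiter = prepare_for_event("network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435")
    # ... plug the VIF, define and launch the domain ...
    deliver_event("network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435")  # simulate neutron
    if not waiter.wait(timeout=300):
        raise TimeoutError("neutron never confirmed the VIF plug")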
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.985 244018 DEBUG nova.virt.libvirt.vif [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-606599630',display_name='tempest-ServerActionsTestOtherA-server-606599630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-606599630',id=78,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-vhzo8rss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:00Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=89230879-b88a-44d3-841e-fc664e122158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.986 244018 DEBUG nova.network.os_vif_util [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.986 244018 DEBUG nova.network.os_vif_util [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.987 244018 DEBUG os_vif [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.988 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.988 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:07 compute-0 nova_compute[244014]: 2026-02-25 12:32:07.990 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.018 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00b26e62-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.019 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00b26e62-f2, col_values=(('external_ids', {'iface-id': '00b26e62-f20c-48bc-9701-d74dd9b15435', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:98:17', 'vm-uuid': '89230879-b88a-44d3-841e-fc664e122158'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
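[annotation] The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) form the transaction that actually wires the tap device into br-int with the neutron port ID in external_ids, which is what OVN later matches on. The same transaction expressed as one ovs-vsctl invocation (standard flags; values copied from the log):

    import subprocess

    port, bridge = "tap00b26e62-f2", "br-int"
    ext_ids = {
        "iface-id": "00b26e62-f20c-48bc-9701-d74dd9b15435",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:61:98:17",
        "vm-uuid": "89230879-b88a-44d3-841e-fc664e122158",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in ext_ids.items()]
    subprocess.run(cmd, check=True)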
Feb 25 12:32:08 compute-0 NetworkManager[49836]: <info>  [1772022728.0218] manager: (tap00b26e62-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Feb 25 12:32:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3638271116' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2416654670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.058 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.067 244018 INFO os_vif [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2')
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.073 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] resizing rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
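[annotation] Together with the import that returned at 12:32:07.990, the resize above completes the root-disk preparation for instance 7f9688d5: the cached cirros base image is pushed into the vms pool, then grown to the 1 GiB root disk requested by its flavor (1073741824 bytes, as logged). As a standalone sketch; the import command is as logged, while the resize is shown via the rbd CLI here, whereas nova calls librbd's resize directly (rbd_utils.py:288):

    import subprocess

    base = "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538"
    image = "7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", f"vms/{image}", "--size", "1G", *auth],
                   check=True)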
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.157 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.159 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Ensure instance console log exists: /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.160 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.161 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.161 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.162 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.166 244018 WARNING nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.170 244018 DEBUG nova.virt.libvirt.host [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.171 244018 DEBUG nova.virt.libvirt.host [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.174 244018 DEBUG nova.virt.libvirt.host [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.175 244018 DEBUG nova.virt.libvirt.host [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.175 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.175 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.176 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.176 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.176 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.176 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.176 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.virt.hardware [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.177 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.197 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.226 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:61:98:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.226 244018 INFO nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Using config drive
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.244 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348560454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.691 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.722 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:08 compute-0 nova_compute[244014]: 2026-02-25 12:32:08.727 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:09 compute-0 ceph-mon[76335]: pgmap v1501: 305 pgs: 305 active+clean; 373 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 5.7 MiB/s wr, 129 op/s
Feb 25 12:32:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/348560454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.204 244018 INFO nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Creating config drive at /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.208 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknoo4tge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.288 244018 DEBUG nova.compute.manager [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-changed-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.289 244018 DEBUG nova.compute.manager [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing instance network info cache due to event network-changed-97818aef-c1e4-4037-8d99-2ad0e69351c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.289 244018 DEBUG oslo_concurrency.lockutils [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432351701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.322 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.324 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <uuid>7f9688d5-08dc-4dca-8a01-f22b4efd73f7</uuid>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <name>instance-0000004c</name>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV257Test-server-878139332</nova:name>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:32:08</nova:creationTime>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:user uuid="0658aff9d14a4a779bd6eecd4839d072">tempest-ServerShowV257Test-939673239-project-member</nova:user>
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <nova:project uuid="10298efd53b440ebae3f696be5d24b76">tempest-ServerShowV257Test-939673239</nova:project>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <system>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="serial">7f9688d5-08dc-4dca-8a01-f22b4efd73f7</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="uuid">7f9688d5-08dc-4dca-8a01-f22b4efd73f7</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </system>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <os>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </os>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <features>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </features>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk">
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config">
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/console.log" append="off"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <video>
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </video>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:32:09 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:32:09 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:32:09 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:32:09 compute-0 nova_compute[244014]: </domain>
Feb 25 12:32:09 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.345 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknoo4tge" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.364 244018 DEBUG nova.storage.rbd_utils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 89230879-b88a-44d3-841e-fc664e122158_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.367 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config 89230879-b88a-44d3-841e-fc664e122158_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.411 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.411 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.412 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Using config drive
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.435 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.454 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.472 244018 DEBUG oslo_concurrency.processutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config 89230879-b88a-44d3-841e-fc664e122158_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.473 244018 INFO nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Deleting local config drive /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158/disk.config because it was imported into RBD.
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.493 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'keypairs' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:09 compute-0 kernel: tap00b26e62-f2: entered promiscuous mode
Feb 25 12:32:09 compute-0 NetworkManager[49836]: <info>  [1772022729.5151] manager: (tap00b26e62-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Feb 25 12:32:09 compute-0 ovn_controller[147040]: 2026-02-25T12:32:09Z|00716|binding|INFO|Claiming lport 00b26e62-f20c-48bc-9701-d74dd9b15435 for this chassis.
Feb 25 12:32:09 compute-0 ovn_controller[147040]: 2026-02-25T12:32:09Z|00717|binding|INFO|00b26e62-f20c-48bc-9701-d74dd9b15435: Claiming fa:16:3e:61:98:17 10.100.0.8
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.522 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:98:17 10.100.0.8'], port_security=['fa:16:3e:61:98:17 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '89230879-b88a-44d3-841e-fc664e122158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '2', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=00b26e62-f20c-48bc-9701-d74dd9b15435) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.523 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 00b26e62-f20c-48bc-9701-d74dd9b15435 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis
Feb 25 12:32:09 compute-0 ovn_controller[147040]: 2026-02-25T12:32:09Z|00718|binding|INFO|Setting lport 00b26e62-f20c-48bc-9701-d74dd9b15435 ovn-installed in OVS
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:32:09 compute-0 ovn_controller[147040]: 2026-02-25T12:32:09Z|00719|binding|INFO|Setting lport 00b26e62-f20c-48bc-9701-d74dd9b15435 up in Southbound
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.538 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[852705ec-8955-4a3a-bac6-0520499c8a8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 systemd-udevd[311327]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:09 compute-0 systemd-machined[210048]: New machine qemu-94-instance-0000004e.
Feb 25 12:32:09 compute-0 NetworkManager[49836]: <info>  [1772022729.5492] device (tap00b26e62-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:32:09 compute-0 NetworkManager[49836]: <info>  [1772022729.5498] device (tap00b26e62-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:32:09 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004e.
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.559 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[97b63590-49b5-4abe-8e53-7be1cf4ace35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.562 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[de9e36fd-e3ae-4011-85af-e6d9bbca180b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26383588-2a52-472b-bb59-14a34121f121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2ffd4b53-d99f-4fc1-911d-7dc077899aa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 39372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311335, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3df26e59-448b-4d73-845e-ebaa4cae8664]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311340, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311340, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.610 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.614 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:09.614 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.629 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updated VIF entry in instance network info cache for port 00b26e62-f20c-48bc-9701-d74dd9b15435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.630 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updating instance_info_cache with network_info: [{"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.649 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.650 244018 DEBUG nova.compute.manager [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-changed-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.650 244018 DEBUG nova.compute.manager [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing instance network info cache due to event network-changed-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.650 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.660 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Creating config drive at /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.664 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoaoq5y3l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.795 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoaoq5y3l" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.819 244018 DEBUG nova.storage.rbd_utils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] rbd image 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.826 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.854 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.862 244018 DEBUG nova.compute.manager [req-5b7cb26f-64c3-4ce8-93c1-6dce6110e6df req-5bfdbd8f-82fc-4160-ba15-ccf5b94eb07f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.863 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb26f-64c3-4ce8-93c1-6dce6110e6df req-5bfdbd8f-82fc-4160-ba15-ccf5b94eb07f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.863 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb26f-64c3-4ce8-93c1-6dce6110e6df req-5bfdbd8f-82fc-4160-ba15-ccf5b94eb07f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.863 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb26f-64c3-4ce8-93c1-6dce6110e6df req-5bfdbd8f-82fc-4160-ba15-ccf5b94eb07f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.864 244018 DEBUG nova.compute.manager [req-5b7cb26f-64c3-4ce8-93c1-6dce6110e6df req-5bfdbd8f-82fc-4160-ba15-ccf5b94eb07f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Processing event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.953 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 373 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 5.7 MiB/s wr, 129 op/s
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.971 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.971 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.972 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.976 244018 DEBUG oslo_concurrency.processutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config 7f9688d5-08dc-4dca-8a01-f22b4efd73f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:09 compute-0 nova_compute[244014]: 2026-02-25 12:32:09.976 244018 INFO nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deleting local config drive /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7/disk.config because it was imported into RBD.
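
The two driver lines above capture Nova's config-drive handling on an RBD-backed host: the locally built disk.config is imported into the "vms" pool and the local copy is then deleted. A minimal sketch of that flow, assuming the pool, client id, and paths shown in the CMD line (the helper name is invented; this is not Nova's actual implementation):

    # Sketch only: mirrors the import-then-delete sequence logged above.
    import os

    from oslo_concurrency import processutils


    def import_config_drive_to_rbd(instance_uuid,
                                   base='/var/lib/nova/instances',
                                   pool='vms', client='openstack',
                                   conf='/etc/ceph/ceph.conf'):
        local = os.path.join(base, instance_uuid, 'disk.config')
        target = '%s_disk.config' % instance_uuid
        # Same CLI invocation as the 'CMD "rbd import ..." returned: 0' line.
        processutils.execute('rbd', 'import', '--pool', pool, local, target,
                             '--image-format=2', '--id', client,
                             '--conf', conf)
        # Once imported into RBD, the local file is redundant.
        os.unlink(local)
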
Feb 25 12:32:10 compute-0 systemd-machined[210048]: New machine qemu-95-instance-0000004c.
Feb 25 12:32:10 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004c.
Feb 25 12:32:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1432351701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.181 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.182 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022730.1807785, 89230879-b88a-44d3-841e-fc664e122158 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.182 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] VM Started (Lifecycle Event)
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.184 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.186 244018 INFO nova.virt.libvirt.driver [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] Instance spawned successfully.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.187 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.217 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.224 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.224 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.225 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.226 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.227 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.228 244018 DEBUG nova.virt.libvirt.driver [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
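
The run of "Found default for hw_* ..." lines records the driver back-filling image properties the image left unset with the defaults it actually chose at spawn, so the guest ABI stays stable across later rebuilds and migrations. A sketch of that pattern; the property names and values come from the log, while the helper and dict are hypothetical stand-ins for Nova's _register_undefined_instance_details():

    # Property names/values are taken from the log lines above.
    SPAWN_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }


    def register_undefined_details(system_metadata):
        # Persist a default only where the image did not set the property.
        for prop, default in SPAWN_DEFAULTS.items():
            system_metadata.setdefault('image_' + prop, default)
        return system_metadata
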
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.234 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.288 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] During sync_power_state the instance has a pending task (spawning). Skip.
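
The "Synchronizing instance power state ... / ... Skip." pair shows the lifecycle handler refusing to act while a task owns the instance: the DB still says power_state 0 while the hypervisor already reports 1, but the pending 'spawning' task will set the final state itself. A sketch of that decision, with names invented for illustration (Nova's real logic lives in ComputeManager.handle_lifecycle_event):

    def sync_power_state(task_state, db_power_state, vm_power_state):
        # Illustrative decision table only, not Nova's code.
        if task_state is not None:
            # e.g. 'spawning': racing the task from the event handler
            # could undo its work, so just log and return.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'


    assert sync_power_state('spawning', 0, 1) == 'skip'
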
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.289 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022730.1810002, 89230879-b88a-44d3-841e-fc664e122158 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.289 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] VM Paused (Lifecycle Event)
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.313 244018 INFO nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Took 9.58 seconds to spawn the instance on the hypervisor.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.313 244018 DEBUG nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.315 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.324 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022730.1839159, 89230879-b88a-44d3-841e-fc664e122158 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.324 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] VM Resumed (Lifecycle Event)
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.349 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.352 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.379 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.399 244018 INFO nova.compute.manager [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Took 10.86 seconds to build instance.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.429 244018 DEBUG oslo_concurrency.lockutils [None req-8df48893-2c78-430f-8e67-f25a9b45f49d b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
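
The lock bookkeeping around the build ("acquired ... released ... held 10.949s") comes from serializing all build work on a per-instance-UUID lock. oslo.concurrency's synchronized decorator emits exactly these acquire/release DEBUG lines from lockutils; a minimal sketch, with the function body as a placeholder:

    from oslo_concurrency import lockutils


    def build_and_run_instance(instance_uuid):
        # lockutils.synchronized is the real oslo.concurrency decorator;
        # "held N.NNNs" in the log is time spent inside the inner function.
        @lockutils.synchronized(instance_uuid)
        def _locked_do_build_and_run_instance():
            pass  # spawn on the hypervisor, wire networking, etc.
        _locked_do_build_and_run_instance()
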
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.558 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.558 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022730.5579555, 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.559 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] VM Resumed (Lifecycle Event)
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.562 244018 DEBUG nova.compute.manager [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.563 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.567 244018 INFO nova.virt.libvirt.driver [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance spawned successfully.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.567 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.593 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.601 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.607 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.607 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.608 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.609 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.610 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.611 244018 DEBUG nova.virt.libvirt.driver [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.620 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.621 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022730.5622358, 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.621 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] VM Started (Lifecycle Event)
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.646 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.651 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.677 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.683 244018 DEBUG nova.compute.manager [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.806 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.806 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.806 244018 DEBUG nova.objects.instance [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:32:10 compute-0 nova_compute[244014]: 2026-02-25 12:32:10.949 244018 DEBUG oslo_concurrency.lockutils [None req-728c1360-6f11-4fbb-854b-54d6a5fd5f0b 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:11 compute-0 ceph-mon[76335]: pgmap v1502: 305 pgs: 305 active+clean; 373 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 5.7 MiB/s wr, 129 op/s
Feb 25 12:32:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 361 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.2 MiB/s wr, 271 op/s
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.080 244018 DEBUG nova.compute.manager [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.080 244018 DEBUG oslo_concurrency.lockutils [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.081 244018 DEBUG oslo_concurrency.lockutils [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.081 244018 DEBUG oslo_concurrency.lockutils [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.081 244018 DEBUG nova.compute.manager [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] No waiting events found dispatching network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.081 244018 WARNING nova.compute.manager [req-b6a2bc25-fb70-4b6b-9f2c-b85468a7813b req-08c7157b-e725-4e47-a469-5cd0158ead94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received unexpected event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 for instance with vm_state active and task_state None.
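
The WARNING above is the tail of the event handshake: the spawn already consumed one network-vif-plugged notification (the "Instance event wait completed in 0 seconds" line earlier), so when Neutron re-sends it for the now-active instance there is no registered waiter left to pop. A toy waiter registry, assuming threading.Event semantics, shows the shape of that pop (Nova's real version is nova.compute.manager.InstanceEvents):

    import threading


    class InstanceEventsSketch(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # instance uuid -> {event name: Event}

        def prepare(self, uuid, name):
            # Registered before the operation that triggers the event.
            with self._lock:
                ev = threading.Event()
                self._waiters.setdefault(uuid, {})[name] = ev
            return ev

        def pop(self, uuid, name):
            # A None result here is what produces the "Received
            # unexpected event" warning seen above.
            with self._lock:
                return self._waiters.get(uuid, {}).pop(name, None)
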
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.562 244018 DEBUG nova.network.neutron [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [{"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.594 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Releasing lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.595 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance network_info: |[{"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.595 244018 DEBUG oslo_concurrency.lockutils [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.595 244018 DEBUG nova.network.neutron [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing network info cache for port 97818aef-c1e4-4037-8d99-2ad0e69351c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.598 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Start _get_guest_xml network_info=[{"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.603 244018 WARNING nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.607 244018 DEBUG nova.virt.libvirt.host [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.608 244018 DEBUG nova.virt.libvirt.host [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.611 244018 DEBUG nova.virt.libvirt.host [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.612 244018 DEBUG nova.virt.libvirt.host [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
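
The two "Searching host ... CPU controller" probes above run in order: cgroup v1 first (missing on this host), then the unified v2 hierarchy (found). A rough sketch of such a probe against the standard kernel mount points; the exact checks Nova performs may differ:

    import os


    def cpu_controller_version():
        # cgroup v1 exposes a dedicated 'cpu' hierarchy.
        if os.path.isdir('/sys/fs/cgroup/cpu'):
            return 'v1'
        # cgroup v2 lists enabled controllers in a single file.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                if 'cpu' in f.read().split():
                    return 'v2'
        except OSError:
            pass
        return None
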
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.614 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.614 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.614 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.614 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.614 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.615 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.616 244018 DEBUG nova.virt.hardware [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
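
The topology lines above show the unconstrained case: a preferred 0:0:0 means "unset", the 65536 ceilings are effectively no limit, and the driver enumerates (sockets, cores, threads) factorizations of the vCPU count. For 1 vCPU the only factorization is 1:1:1, as logged. A sketch of that enumeration (not Nova's code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product
        # equals the vCPU count, within the given ceilings.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)


    # Matches the "Possible topologies [VirtCPUTopology(cores=1,sockets=1,
    # threads=1)]" line above.
    assert list(possible_topologies(1)) == [(1, 1, 1)]
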
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.623 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:12 compute-0 nova_compute[244014]: 2026-02-25 12:32:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3303932655' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.221 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
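
The ceph mon dump round-trips above (visible on both the Nova and ceph-mon sides) are how the RBD backend discovers monitors: shell out to the ceph CLI with the 'openstack' client id and parse the JSON. A minimal sketch using oslo.concurrency's processutils; the 'mons' key layout is an assumption about the command's JSON output:

    import json

    from oslo_concurrency import processutils


    def monitor_names(client='openstack', conf='/etc/ceph/ceph.conf'):
        # Same CLI as the "Running cmd (subprocess)" line above.
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', client, '--conf', conf)
        return [m.get('name') for m in json.loads(out).get('mons', [])]
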
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.246 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.250 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:13 compute-0 ceph-mon[76335]: pgmap v1503: 305 pgs: 305 active+clean; 361 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.2 MiB/s wr, 271 op/s
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.557 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.558 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.559 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.559 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.559 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.562 244018 INFO nova.compute.manager [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Terminating instance
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.563 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "refresh_cache-7f9688d5-08dc-4dca-8a01-f22b4efd73f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.564 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquired lock "refresh_cache-7f9688d5-08dc-4dca-8a01-f22b4efd73f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.565 244018 DEBUG nova.network.neutron [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:13 compute-0 nova_compute[244014]: 2026-02-25 12:32:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 372 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.6 MiB/s wr, 263 op/s
Feb 25 12:32:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022611664' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.050 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.052 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.052 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.054 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.055 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.055 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.056 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.057 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.058 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.058 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
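[editor's note] The three get_config/nova_to_osvif_vif cycles above each turn Nova's JSON-style VIF dict into a typed os-vif object. A minimal sketch of building the converted object directly, with field values copied from the first VIF in this log; the constructor keyword style follows the os_vif object model and should be read as an assumption, not nova's internal code.

    # Sketch only: the os-vif object that the "Converted object" line shows.
    # Field names are taken from the logged repr; exact constructor usage
    # is an assumption based on the os_vif.objects API.
    from os_vif.objects import vif as vif_obj

    port_profile = vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='325e7788-75cd-4305-b792-3102ddb850bf')  # ovs_interfaceid
    vif = vif_obj.VIFOpenVSwitch(
        id='325e7788-75cd-4305-b792-3102ddb850bf',
        address='fa:16:3e:26:5c:07',      # MAC of the Neutron port
        bridge_name='br-int',             # details.bridge_name from the dict
        has_traffic_filtering=True,       # details.port_filter
        vif_name='tap325e7788-75',        # devname: "tap" + truncated port UUID
        port_profile=port_profile)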
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.060 244018 DEBUG nova.objects.instance [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8decd922-10f2-4731-ac1e-36a062777059 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
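[editor's note] The Acquiring/acquired/released triple above is the resource tracker serializing itself on a named "compute_resources" lock via oslo.concurrency. An illustrative use of the same primitive; only the lock name is real, the decorated function below is hypothetical.

    # Illustrative oslo.concurrency usage matching the lock lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_cache_example():
        # Body runs with the "compute_resources" lock held; oslo emits the
        # same Acquiring/acquired/released DEBUG lines recorded in this log.
        pass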
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.071 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.072 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
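[editor's note] With RBD-backed instances, the resource audit sizes storage by shelling out to `ceph df --format=json`, exactly the command processutils logs above. A rough standalone equivalent; the JSON keys ("stats", "total_bytes", ...) are the usual `ceph df` layout, assumed rather than taken from nova's source.

    # Standalone approximation of the command the resource audit runs.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'])
    print('avail bytes:', stats['total_avail_bytes'])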
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.127 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <uuid>8decd922-10f2-4731-ac1e-36a062777059</uuid>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <name>instance-0000004d</name>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestMultiNic-server-187634331</nova:name>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:32:12</nova:creationTime>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:user uuid="8f6c1c9101c442839ec520dd809c1205">tempest-ServersTestMultiNic-1041606516-project-member</nova:user>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:project uuid="ff49334847464c879338fb12c3b27419">tempest-ServersTestMultiNic-1041606516</nova:project>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:port uuid="325e7788-75cd-4305-b792-3102ddb850bf">
Feb 25 12:32:14 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.166" ipVersion="4"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:port uuid="df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a">
Feb 25 12:32:14 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.1.4" ipVersion="4"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <nova:port uuid="97818aef-c1e4-4037-8d99-2ad0e69351c4">
Feb 25 12:32:14 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.201" ipVersion="4"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <system>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="serial">8decd922-10f2-4731-ac1e-36a062777059</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="uuid">8decd922-10f2-4731-ac1e-36a062777059</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </system>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <os>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </os>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <features>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </features>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8decd922-10f2-4731-ac1e-36a062777059_disk">
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8decd922-10f2-4731-ac1e-36a062777059_disk.config">
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:26:5c:07"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <target dev="tap325e7788-75"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a2:98:fe"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <target dev="tapdf7ea72b-ee"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b0:a3:e8"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <target dev="tap97818aef-c1"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/console.log" append="off"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <video>
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </video>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:32:14 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:32:14 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:32:14 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:32:14 compute-0 nova_compute[244014]: </domain>
Feb 25 12:32:14 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
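[editor's note] The domain XML above carries instance bookkeeping in a nova metadata namespace; the three nova:port entries map each tap device back to its Neutron port UUID and fixed IP. A small self-contained sketch that pulls those back out with the standard library, using a trimmed copy of the metadata block from the dump above.

    # Sketch: extract the nova metadata from the guest XML dumped above.
    import xml.etree.ElementTree as ET

    NOVA = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}
    # Trimmed copy of the <metadata> block from the logged domain XML.
    domain_xml = '''<domain type="kvm">
      <metadata>
        <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
          <nova:ports>
            <nova:port uuid="325e7788-75cd-4305-b792-3102ddb850bf">
              <nova:ip type="fixed" address="10.100.0.166" ipVersion="4"/>
            </nova:port>
          </nova:ports>
        </nova:instance>
      </metadata>
    </domain>'''
    root = ET.fromstring(domain_xml)
    for port in root.findall('.//nova:port', NOVA):
        ip = port.find('nova:ip', NOVA)
        print(port.get('uuid'), '->', ip.get('address'))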
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.128 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Preparing to wait for external event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.128 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.129 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.129 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.129 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Preparing to wait for external event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.129 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.130 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.130 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.130 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Preparing to wait for external event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.131 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.131 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.131 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
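[editor's note] Before defining the guest, the compute manager registers one waiter per expected network-vif-plugged event (one per port, hence the three prepare/lock cycles above); spawn only proceeds once Neutron/OVN delivers them. Nova does this with eventlet under the instance's "-events" lock; the threading-based sketch below is a simplified, hypothetical stand-in for that prepare/deliver handshake, not the real implementation.

    # Simplified stand-in (threading, not eventlet) for the external-event
    # handshake logged above: one Event per network-vif-plugged-<port-uuid>.
    import threading

    waiters = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Registered before the work that triggers the event, as above.
        key = f'{instance_uuid}:{event_name}'
        return waiters.setdefault(key, threading.Event())

    def deliver_event(instance_uuid, event_name):
        # Called when Neutron posts the event back to nova-compute.
        waiters[f'{instance_uuid}:{event_name}'].set()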
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.132 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.132 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.133 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.133 244018 DEBUG os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap325e7788-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap325e7788-75, col_values=(('external_ids', {'iface-id': '325e7788-75cd-4305-b792-3102ddb850bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:5c:07', 'vm-uuid': '8decd922-10f2-4731-ac1e-36a062777059'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
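[editor's note] Each plug runs two ovsdbapp transactions against the local OVS DB: an idempotent AddBridgeCommand ("Transaction caused no change" here, since br-int exists) and an AddPortCommand plus DbSetCommand that stamps the Neutron port UUID and MAC into the interface's external_ids so ovn-controller can bind the port. A rough equivalent using ovsdbapp's Open_vSwitch schema API; the command arguments mirror the logged commands, while the IDL bootstrap is the customary pattern and should be read as an assumption.

    # Approximate equivalent of the two transactions logged above.
    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    db = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb socket
    helper = idlutils.get_schema_helper(db, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(
        connection.Connection(ovs.db.idl.Idl(db, helper), timeout=10))

    with api.transaction(check_error=True) as txn:   # AddBridgeCommand
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    with api.transaction(check_error=True) as txn:   # AddPort + DbSet
        txn.add(api.add_port('br-int', 'tap325e7788-75', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap325e7788-75',
            ('external_ids',
             {'iface-id': '325e7788-75cd-4305-b792-3102ddb850bf',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:26:5c:07',
              'vm-uuid': '8decd922-10f2-4731-ac1e-36a062777059'})))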
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 NetworkManager[49836]: <info>  [1772022734.1405] manager: (tap325e7788-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.149 244018 INFO os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75')
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.150 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.150 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.151 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.151 244018 DEBUG os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.152 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.152 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.154 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf7ea72b-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.155 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf7ea72b-ee, col_values=(('external_ids', {'iface-id': 'df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:98:fe', 'vm-uuid': '8decd922-10f2-4731-ac1e-36a062777059'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 NetworkManager[49836]: <info>  [1772022734.1569] manager: (tapdf7ea72b-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.159 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.162 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.163 244018 INFO os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee')
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.163 244018 DEBUG nova.virt.libvirt.vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:58Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.164 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.164 244018 DEBUG nova.network.os_vif_util [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.165 244018 DEBUG os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.166 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.167 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.167 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97818aef-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.168 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97818aef-c1, col_values=(('external_ids', {'iface-id': '97818aef-c1e4-4037-8d99-2ad0e69351c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:a3:e8', 'vm-uuid': '8decd922-10f2-4731-ac1e-36a062777059'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:14 compute-0 NetworkManager[49836]: <info>  [1772022734.1695] manager: (tap97818aef-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.169 244018 DEBUG nova.network.neutron [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.175 244018 INFO os_vif [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1')
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.265 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.266 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.266 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No VIF found with MAC fa:16:3e:26:5c:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.267 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No VIF found with MAC fa:16:3e:a2:98:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.267 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No VIF found with MAC fa:16:3e:b0:a3:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.267 244018 INFO nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Using config drive
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.285 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3303932655' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:14 compute-0 ceph-mon[76335]: pgmap v1504: 305 pgs: 305 active+clean; 372 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.6 MiB/s wr, 263 op/s
Feb 25 12:32:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3022611664' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.730 244018 DEBUG nova.network.neutron [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.749 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Releasing lock "refresh_cache-7f9688d5-08dc-4dca-8a01-f22b4efd73f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.750 244018 DEBUG nova.compute.manager [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:32:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704404108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.814 244018 DEBUG nova.network.neutron [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updated VIF entry in instance network info cache for port 97818aef-c1e4-4037-8d99-2ad0e69351c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.814 244018 DEBUG nova.network.neutron [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [{"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.835 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.841 244018 DEBUG oslo_concurrency.lockutils [req-dd33ef68-9eb4-4f83-a81c-40706307ebed req-763f18c6-9ba2-404b-89fe-072560f64ab2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.841 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.844 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Refreshing network info cache for port df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:14 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 25 12:32:14 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Consumed 4.740s CPU time.
Feb 25 12:32:14 compute-0 systemd-machined[210048]: Machine qemu-95-instance-0000004c terminated.
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.971 244018 INFO nova.virt.libvirt.driver [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance destroyed successfully.
Feb 25 12:32:14 compute-0 nova_compute[244014]: 2026-02-25 12:32:14.972 244018 DEBUG nova.objects.instance [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lazy-loading 'resources' on Instance uuid 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.069 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.070 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.074 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.074 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.078 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.078 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.082 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.082 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.126 244018 DEBUG nova.compute.manager [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-changed-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.127 244018 DEBUG nova.compute.manager [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Refreshing instance network info cache due to event network-changed-00b26e62-f20c-48bc-9701-d74dd9b15435. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.128 244018 DEBUG oslo_concurrency.lockutils [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.128 244018 DEBUG oslo_concurrency.lockutils [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.128 244018 DEBUG nova.network.neutron [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Refreshing network info cache for port 00b26e62-f20c-48bc-9701-d74dd9b15435 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.143 244018 INFO nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Creating config drive at /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.146 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxzxw8sum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.271 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.272 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.87964235711843GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.273 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.273 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.274 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxzxw8sum" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.295 244018 DEBUG nova.storage.rbd_utils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 8decd922-10f2-4731-ac1e-36a062777059_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.298 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config 8decd922-10f2-4731-ac1e-36a062777059_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3704404108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5732c5fb-59b3-4590-b65a-a696b9c90152 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8decd922-10f2-4731-ac1e-36a062777059 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 89230879-b88a-44d3-841e-fc664e122158 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.863 244018 DEBUG oslo_concurrency.processutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config 8decd922-10f2-4731-ac1e-36a062777059_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.865 244018 INFO nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Deleting local config drive /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059/disk.config because it was imported into RBD.
Feb 25 12:32:15 compute-0 kernel: tap325e7788-75: entered promiscuous mode
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9193] manager: (tap325e7788-75): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Feb 25 12:32:15 compute-0 systemd-udevd[311590]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00720|binding|INFO|Claiming lport 325e7788-75cd-4305-b792-3102ddb850bf for this chassis.
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00721|binding|INFO|325e7788-75cd-4305-b792-3102ddb850bf: Claiming fa:16:3e:26:5c:07 10.100.0.166
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9348] manager: (tapdf7ea72b-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.935 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:07 10.100.0.166'], port_security=['fa:16:3e:26:5c:07 10.100.0.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.166/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db9cae67-3184-4925-839a-8bcfc572b146', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d903016f-980a-4074-be2b-57de069ed00a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=325e7788-75cd-4305-b792-3102ddb850bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9377] device (tap325e7788-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9388] device (tap325e7788-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.937 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 325e7788-75cd-4305-b792-3102ddb850bf in datapath db9cae67-3184-4925-839a-8bcfc572b146 bound to our chassis
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.939 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db9cae67-3184-4925-839a-8bcfc572b146
Feb 25 12:32:15 compute-0 systemd-udevd[311670]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9455] manager: (tap97818aef-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Feb 25 12:32:15 compute-0 kernel: tapdf7ea72b-ee: entered promiscuous mode
Feb 25 12:32:15 compute-0 kernel: tap97818aef-c1: entered promiscuous mode
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.950 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a214484a-ef94-40e5-9349-958375eb0aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.951 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb9cae67-31 in ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00722|binding|INFO|Claiming lport df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a for this chassis.
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00723|binding|INFO|df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a: Claiming fa:16:3e:a2:98:fe 10.100.1.4
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00724|binding|INFO|Claiming lport 97818aef-c1e4-4037-8d99-2ad0e69351c4 for this chassis.
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00725|binding|INFO|97818aef-c1e4-4037-8d99-2ad0e69351c4: Claiming fa:16:3e:b0:a3:e8 10.100.0.201
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9558] device (tapdf7ea72b-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.954 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb9cae67-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.954 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8295ee3-1803-4176-8eae-9d30a946c315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.955 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42954e30-37f4-4041-bd02-2be81f7774e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9570] device (tapdf7ea72b-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00726|binding|INFO|Setting lport 325e7788-75cd-4305-b792-3102ddb850bf ovn-installed in OVS
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00727|binding|INFO|Setting lport 325e7788-75cd-4305-b792-3102ddb850bf up in Southbound
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.964 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:98:fe 10.100.1.4'], port_security=['fa:16:3e:a2:98:fe 10.100.1.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.4/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ecd8942-957f-4772-ab44-ac4bae35909b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec01ef9a-fa88-4695-a73a-5b42fae615b8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9661] device (tap97818aef-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.966 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:a3:e8 10.100.0.201'], port_security=['fa:16:3e:b0:a3:e8 10.100.0.201'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.201/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db9cae67-3184-4925-839a-8bcfc572b146', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d903016f-980a-4074-be2b-57de069ed00a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97818aef-c1e4-4037-8d99-2ad0e69351c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:15 compute-0 NetworkManager[49836]: <info>  [1772022735.9671] device (tap97818aef-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:32:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 372 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.973 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[487f9a9a-d876-4866-b064-903342acb1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:15.983 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51ee3824-4ea3-4b5e-ab01-093115038c19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:15 compute-0 systemd-machined[210048]: New machine qemu-96-instance-0000004d.
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00728|binding|INFO|Setting lport df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a ovn-installed in OVS
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00729|binding|INFO|Setting lport df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a up in Southbound
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00730|binding|INFO|Setting lport 97818aef-c1e4-4037-8d99-2ad0e69351c4 ovn-installed in OVS
Feb 25 12:32:15 compute-0 ovn_controller[147040]: 2026-02-25T12:32:15Z|00731|binding|INFO|Setting lport 97818aef-c1e4-4037-8d99-2ad0e69351c4 up in Southbound
Feb 25 12:32:15 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Feb 25 12:32:15 compute-0 nova_compute[244014]: 2026-02-25 12:32:15.993 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.005 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[685bbfb7-794b-424c-a2a0-8b34b73ba96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 systemd-udevd[311672]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:16 compute-0 NetworkManager[49836]: <info>  [1772022736.0117] manager: (tapdb9cae67-30): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.010 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3833040-582c-489b-9646-a51574681307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.040 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[62ba5f80-9ce9-4118-a4d3-00ddbdcbe023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.042 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.043 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1eafd808-ef5d-4442-848d-d2ef4c280dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 NetworkManager[49836]: <info>  [1772022736.0623] device (tapdb9cae67-30): carrier: link connected
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.064 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34e60c79-d38b-4479-a87a-d3d06863b656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[175edd8f-8d3c-4a75-aafa-e2a91507a737]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb9cae67-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470547, 'reachable_time': 37613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311711, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.099 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1a5a32-b327-4fc3-9ea6-7592d56468a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c174'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470547, 'tstamp': 470547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311712, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.114 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bebec126-d411-4026-ade3-a8f282ba3631]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb9cae67-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470547, 'reachable_time': 37613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311713, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.135 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db08ca3e-23e6-4509-b4cc-074fbd8d22df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a22126c-5c82-4ed7-ae97-c83a5f6a0097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.178 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb9cae67-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb9cae67-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:16 compute-0 NetworkManager[49836]: <info>  [1772022736.1814] manager: (tapdb9cae67-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Feb 25 12:32:16 compute-0 kernel: tapdb9cae67-30: entered promiscuous mode
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.180 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.186 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb9cae67-30, col_values=(('external_ids', {'iface-id': 'c6e8f7c2-37ff-440e-9be5-40d3dd5e3130'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:16 compute-0 ovn_controller[147040]: 2026-02-25T12:32:16Z|00732|binding|INFO|Releasing lport c6e8f7c2-37ff-440e-9be5-40d3dd5e3130 from this chassis (sb_readonly=0)
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.189 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db9cae67-3184-4925-839a-8bcfc572b146.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db9cae67-3184-4925-839a-8bcfc572b146.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.190 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[193430e3-ffaf-4a2b-882c-63beadb4ffdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.191 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-db9cae67-3184-4925-839a-8bcfc572b146
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/db9cae67-3184-4925-839a-8bcfc572b146.pid.haproxy
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID db9cae67-3184-4925-839a-8bcfc572b146
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.191 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'env', 'PROCESS_TAG=haproxy-db9cae67-3184-4925-839a-8bcfc572b146', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db9cae67-3184-4925-839a-8bcfc572b146.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041766995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.603 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.609 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.628 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.650 244018 INFO nova.virt.libvirt.driver [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deleting instance files /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_del
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.650 244018 INFO nova.virt.libvirt.driver [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deletion of /var/lib/nova/instances/7f9688d5-08dc-4dca-8a01-f22b4efd73f7_del complete
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.669 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.669 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:16 compute-0 ceph-mon[76335]: pgmap v1505: 305 pgs: 305 active+clean; 372 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Feb 25 12:32:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2041766995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.715 244018 INFO nova.compute.manager [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Took 1.96 seconds to destroy the instance on the hypervisor.
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.716 244018 DEBUG oslo.service.loopingcall [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.717 244018 DEBUG nova.compute.manager [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.717 244018 DEBUG nova.network.neutron [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.808 244018 DEBUG nova.compute.manager [req-cef55c50-6345-45b8-84f2-f21518c8341e req-8b0dd10b-4557-4044-ab1e-cc4101a8b1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.809 244018 DEBUG oslo_concurrency.lockutils [req-cef55c50-6345-45b8-84f2-f21518c8341e req-8b0dd10b-4557-4044-ab1e-cc4101a8b1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.809 244018 DEBUG oslo_concurrency.lockutils [req-cef55c50-6345-45b8-84f2-f21518c8341e req-8b0dd10b-4557-4044-ab1e-cc4101a8b1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.809 244018 DEBUG oslo_concurrency.lockutils [req-cef55c50-6345-45b8-84f2-f21518c8341e req-8b0dd10b-4557-4044-ab1e-cc4101a8b1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.809 244018 DEBUG nova.compute.manager [req-cef55c50-6345-45b8-84f2-f21518c8341e req-8b0dd10b-4557-4044-ab1e-cc4101a8b1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Processing event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:32:16 compute-0 nova_compute[244014]: 2026-02-25 12:32:16.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:16.817 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:16 compute-0 podman[311765]: 2026-02-25 12:32:16.75952631 +0000 UTC m=+0.022193640 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:32:17 compute-0 podman[311765]: 2026-02-25 12:32:16.999991539 +0000 UTC m=+0.262658849 container create f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.058 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.058 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.058 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.058 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.058 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.059 244018 INFO nova.compute.manager [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Terminating instance
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.060 244018 DEBUG nova.compute.manager [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.117 244018 DEBUG nova.network.neutron [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.134 244018 DEBUG nova.network.neutron [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.147 244018 INFO nova.compute.manager [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Took 0.43 seconds to deallocate network for instance.
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.208 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.209 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:17 compute-0 systemd[1]: Started libpod-conmon-f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59.scope.
Feb 25 12:32:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:17 compute-0 kernel: tap00b26e62-f2 (unregistering): left promiscuous mode
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.318 244018 DEBUG nova.compute.manager [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.318 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.319 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.319 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.319 244018 DEBUG nova.compute.manager [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Processing event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.320 244018 DEBUG nova.compute.manager [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.320 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.321 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.321 244018 DEBUG oslo_concurrency.lockutils [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.321 244018 DEBUG nova.compute.manager [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No event matching network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 in dict_keys([('network-vif-plugged', 'df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.321 244018 WARNING nova.compute.manager [req-ef3466b3-701e-40a2-908f-0f46cd23e3f5 req-970cdc66-1ebc-44a1-b9de-857ab8d0128f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 for instance with vm_state building and task_state spawning.
Feb 25 12:32:17 compute-0 NetworkManager[49836]: <info>  [1772022737.3269] device (tap00b26e62-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04faa2824e328b26f8d153be12606e6dd098c41f569d9e4535e38a535276409/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.345 244018 DEBUG oslo_concurrency.processutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:17 compute-0 ovn_controller[147040]: 2026-02-25T12:32:17Z|00733|binding|INFO|Releasing lport 00b26e62-f20c-48bc-9701-d74dd9b15435 from this chassis (sb_readonly=0)
Feb 25 12:32:17 compute-0 ovn_controller[147040]: 2026-02-25T12:32:17Z|00734|binding|INFO|Setting lport 00b26e62-f20c-48bc-9701-d74dd9b15435 down in Southbound
Feb 25 12:32:17 compute-0 ovn_controller[147040]: 2026-02-25T12:32:17Z|00735|binding|INFO|Removing iface tap00b26e62-f2 ovn-installed in OVS
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.362 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:98:17 10.100.0.8'], port_security=['fa:16:3e:61:98:17 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '89230879-b88a-44d3-841e-fc664e122158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=00b26e62-f20c-48bc-9701-d74dd9b15435) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Feb 25 12:32:17 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004e.scope: Consumed 7.633s CPU time.
Feb 25 12:32:17 compute-0 systemd-machined[210048]: Machine qemu-94-instance-0000004e terminated.
Feb 25 12:32:17 compute-0 NetworkManager[49836]: <info>  [1772022737.4788] manager: (tap00b26e62-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.512 244018 INFO nova.virt.libvirt.driver [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] Instance destroyed successfully.
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.513 244018 DEBUG nova.objects.instance [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'resources' on Instance uuid 89230879-b88a-44d3-841e-fc664e122158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.554 244018 DEBUG nova.virt.libvirt.vif [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:31:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-606599630',display_name='tempest-ServerActionsTestOtherA-server-606599630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-606599630',id=78,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-vhzo8rss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:32:10Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=89230879-b88a-44d3-841e-fc664e122158,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.555 244018 DEBUG nova.network.os_vif_util [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.556 244018 DEBUG nova.network.os_vif_util [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.556 244018 DEBUG os_vif [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.559 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00b26e62-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.566 244018 INFO os_vif [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:98:17,bridge_name='br-int',has_traffic_filtering=True,id=00b26e62-f20c-48bc-9701-d74dd9b15435,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b26e62-f2')
Feb 25 12:32:17 compute-0 podman[311765]: 2026-02-25 12:32:17.574877812 +0000 UTC m=+0.837545152 container init f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:32:17 compute-0 podman[311765]: 2026-02-25 12:32:17.587219002 +0000 UTC m=+0.849886322 container start f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:32:17 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [NOTICE]   (311865) : New worker (311878) forked
Feb 25 12:32:17 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [NOTICE]   (311865) : Loading success.
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.670 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.670 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.671 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.682 157129 INFO neutron.agent.ovn.metadata.agent [-] Port df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a in datapath 9ecd8942-957f-4772-ab44-ac4bae35909b unbound from our chassis
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.684 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ecd8942-957f-4772-ab44-ac4bae35909b
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f556a2a-de28-4362-940f-ad0363c94fd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.697 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ecd8942-91 in ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.699 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ecd8942-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5934b5dc-238a-4071-a87b-1d43e0456b5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.700 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca7a2c4-22d4-4849-b459-39108aa68475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.709 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3e69ed-a578-482e-ab08-b03249809978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.712 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022737.7092025, 8decd922-10f2-4731-ac1e-36a062777059 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.713 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] VM Started (Lifecycle Event)
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.729 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e22614a-728b-4071-9d89-b4c9b8a1088a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2389dbf3-594e-4c97-bb6f-f4a81f64c04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.756 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e48d121f-00bf-4143-bf75-6aa787c6599e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 NetworkManager[49836]: <info>  [1772022737.7578] manager: (tap9ecd8942-90): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.758 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:17 compute-0 systemd-udevd[311695]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.777 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022737.7092872, 8decd922-10f2-4731-ac1e-36a062777059 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.777 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] VM Paused (Lifecycle Event)
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.786 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3133e47b-1cb2-4685-8bd1-ebabb188ab54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.794 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8c2b11-3a81-419f-b0e7-5ad790fd97b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.812 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:17 compute-0 NetworkManager[49836]: <info>  [1772022737.8133] device (tap9ecd8942-90): carrier: link connected
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.816 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[93b9b328-81d9-477e-83a9-812b809947bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.822 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.830 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35ffc92d-95d2-49c4-9258-06ccece2c733]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ecd8942-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:52:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470722, 'reachable_time': 43477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311901, 'error': None, 'target': 'ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8bcf4d-1c47-485a-bf51-7c2c918ceefb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:5247'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470722, 'tstamp': 470722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311902, 'error': None, 'target': 'ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.858 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[654ac23d-cedc-4be2-8a54-a3ca23109f97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ecd8942-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:52:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470722, 'reachable_time': 43477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311903, 'error': None, 'target': 'ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.881 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4852d8d5-533e-4eb3-a3af-db2aa4b10e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186340603' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[87b0bd0b-f89f-4ac7-8626-d7cd06e104c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.932 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ecd8942-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.932 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.932 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ecd8942-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:17 compute-0 NetworkManager[49836]: <info>  [1772022737.9346] manager: (tap9ecd8942-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 kernel: tap9ecd8942-90: entered promiscuous mode
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ecd8942-90, col_values=(('external_ids', {'iface-id': 'c3677baa-8ee7-492f-93ca-6321f52ac12c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:17 compute-0 ovn_controller[147040]: 2026-02-25T12:32:17Z|00736|binding|INFO|Releasing lport c3677baa-8ee7-492f-93ca-6321f52ac12c from this chassis (sb_readonly=0)
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.946 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ecd8942-957f-4772-ab44-ac4bae35909b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ecd8942-957f-4772-ab44-ac4bae35909b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.946 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[006c537c-d10c-4e9e-9437-be687af3d451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.947 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9ecd8942-957f-4772-ab44-ac4bae35909b
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9ecd8942-957f-4772-ab44-ac4bae35909b.pid.haproxy
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9ecd8942-957f-4772-ab44-ac4bae35909b
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:32:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:17.947 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b', 'env', 'PROCESS_TAG=haproxy-9ecd8942-957f-4772-ab44-ac4bae35909b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ecd8942-957f-4772-ab44-ac4bae35909b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.950 244018 DEBUG oslo_concurrency.processutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.955 244018 DEBUG nova.compute.provider_tree [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 326 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 254 op/s
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.969 244018 DEBUG nova.scheduler.client.report [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:17 compute-0 nova_compute[244014]: 2026-02-25 12:32:17.992 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.024 244018 INFO nova.scheduler.client.report [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Deleted allocations for instance 7f9688d5-08dc-4dca-8a01-f22b4efd73f7
Feb 25 12:32:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2186340603' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.098 244018 DEBUG oslo_concurrency.lockutils [None req-fef940dc-2ceb-4be0-b15e-b3d83e2c0d42 0658aff9d14a4a779bd6eecd4839d072 10298efd53b440ebae3f696be5d24b76 - - default default] Lock "7f9688d5-08dc-4dca-8a01-f22b4efd73f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.113 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updated VIF entry in instance network info cache for port df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.113 244018 DEBUG nova.network.neutron [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [{"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.127 244018 DEBUG oslo_concurrency.lockutils [req-81194d80-9e0d-4beb-9012-f23491ac0ebc req-13f2e234-1818-4309-89e6-3072c74ab240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8decd922-10f2-4731-ac1e-36a062777059" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:18 compute-0 podman[311936]: 2026-02-25 12:32:18.226675155 +0000 UTC m=+0.027301215 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.400 244018 DEBUG nova.network.neutron [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updated VIF entry in instance network info cache for port 00b26e62-f20c-48bc-9701-d74dd9b15435. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.401 244018 DEBUG nova.network.neutron [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updating instance_info_cache with network_info: [{"id": "00b26e62-f20c-48bc-9701-d74dd9b15435", "address": "fa:16:3e:61:98:17", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b26e62-f2", "ovs_interfaceid": "00b26e62-f20c-48bc-9701-d74dd9b15435", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:18 compute-0 nova_compute[244014]: 2026-02-25 12:32:18.440 244018 DEBUG oslo_concurrency.lockutils [req-aac299b2-f840-4083-888a-bbe86741f3d1 req-2b4cfcad-2a37-4354-940f-85284601afaf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89230879-b88a-44d3-841e-fc664e122158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:18 compute-0 podman[311936]: 2026-02-25 12:32:18.47934566 +0000 UTC m=+0.279971700 container create 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:32:18 compute-0 systemd[1]: Started libpod-conmon-8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91.scope.
Feb 25 12:32:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a12ab3cd4690a0f4bae928e47fb3035e220fe1f5a6a31459a0d77f5fc55d0a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.053 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.053 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.054 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.054 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.054 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No event matching network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf in dict_keys([('network-vif-plugged', 'df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.054 244018 WARNING nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf for instance with vm_state building and task_state spawning.
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.055 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.055 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.055 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.055 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.055 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Processing event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.056 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.056 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.056 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.056 244018 DEBUG oslo_concurrency.lockutils [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.057 244018 DEBUG nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.057 244018 WARNING nova.compute.manager [req-ac437d99-973c-495f-a4ea-3a131868c1e5 req-25936e26-f8ec-48a7-af10-9c7686ae24e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a for instance with vm_state building and task_state spawning.
Feb 25 12:32:19 compute-0 podman[311936]: 2026-02-25 12:32:19.057363981 +0000 UTC m=+0.857990041 container init 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.057 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.061 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022739.0607178, 8decd922-10f2-4731-ac1e-36a062777059 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.062 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] VM Resumed (Lifecycle Event)
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.063 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:32:19 compute-0 podman[311936]: 2026-02-25 12:32:19.065187533 +0000 UTC m=+0.865813573 container start 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.069 244018 INFO nova.virt.libvirt.driver [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance spawned successfully.
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.070 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:32:19 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [NOTICE]   (311957) : New worker (311959) forked
Feb 25 12:32:19 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [NOTICE]   (311957) : Loading success.
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.097 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.106 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.112 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.113 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.114 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.115 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.115 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.116 244018 DEBUG nova.virt.libvirt.driver [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.133 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97818aef-c1e4-4037-8d99-2ad0e69351c4 in datapath db9cae67-3184-4925-839a-8bcfc572b146 unbound from our chassis
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.137 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db9cae67-3184-4925-839a-8bcfc572b146
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.148 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[429a8576-5dff-4860-92af-61d6470e14b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.184 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1832a93a-9352-4fc9-ba2e-4f4d724e81c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.187 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c469541-9cd9-437d-b748-81ca9a5cf1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.192 244018 INFO nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Took 20.75 seconds to spawn the instance on the hypervisor.
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.193 244018 DEBUG nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:19 compute-0 ceph-mon[76335]: pgmap v1506: 305 pgs: 305 active+clean; 326 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 254 op/s
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.212 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e985c671-54a7-4f9c-b992-f909764ad945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f22ff3d-eb64-4970-922d-7cd075dcd898]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb9cae67-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470547, 'reachable_time': 37613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311973, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67585565-de02-4fa7-ba39-864dbfb30be9]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapdb9cae67-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470556, 'tstamp': 470556}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311974, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb9cae67-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470558, 'tstamp': 470558}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311974, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.242 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb9cae67-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.243 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.245 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb9cae67-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.245 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.245 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb9cae67-30, col_values=(('external_ids', {'iface-id': 'c6e8f7c2-37ff-440e-9be5-40d3dd5e3130'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.246 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.248 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.249 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 00b26e62-f20c-48bc-9701-d74dd9b15435 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.252 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad49847-be76-4827-a9b4-36415cd0a42e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.271 244018 INFO nova.compute.manager [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Took 21.87 seconds to build instance.
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.294 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcea7ad-3d3d-49aa-9fd4-2b16eb257675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.297 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[42b39974-2745-4f0d-bfb5-3d7748c52a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.319 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[08e2a998-e120-4811-a737-fee81678e794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.331 244018 DEBUG oslo_concurrency.lockutils [None req-12a8ae1d-439d-40f9-aaa1-fbfddd1a20a4 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.335 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2c2f60-428d-4f65-87e4-847b70f6f016]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 616, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 616, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 39372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311980, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.351 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a7d2eb-83a8-4d53-8b17-aa54c8b27f5b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311981, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311981, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.353 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.356 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.357 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.357 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:19.358 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.480 244018 DEBUG nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-unplugged-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.480 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.481 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.481 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.481 244018 DEBUG nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] No waiting events found dispatching network-vif-unplugged-00b26e62-f20c-48bc-9701-d74dd9b15435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.482 244018 DEBUG nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-unplugged-00b26e62-f20c-48bc-9701-d74dd9b15435 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.482 244018 DEBUG nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.483 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89230879-b88a-44d3-841e-fc664e122158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.483 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.483 244018 DEBUG oslo_concurrency.lockutils [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.484 244018 DEBUG nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] No waiting events found dispatching network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.484 244018 WARNING nova.compute.manager [req-df0ae271-e7d4-4276-bd52-db6069ef4183 req-a2ae10a1-7ccf-471d-8947-09be7beb5b13 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received unexpected event network-vif-plugged-00b26e62-f20c-48bc-9701-d74dd9b15435 for instance with vm_state active and task_state deleting.
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:19 compute-0 nova_compute[244014]: 2026-02-25 12:32:19.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:32:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 326 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Feb 25 12:32:20 compute-0 ceph-mon[76335]: pgmap v1507: 305 pgs: 305 active+clean; 326 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Feb 25 12:32:20 compute-0 sudo[311982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:32:20 compute-0 sudo[311982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:20 compute-0 sudo[311982]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:20 compute-0 sudo[312007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:32:20 compute-0 sudo[312007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:20.998 244018 INFO nova.virt.libvirt.driver [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Deleting instance files /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158_del
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:20.998 244018 INFO nova.virt.libvirt.driver [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Deletion of /var/lib/nova/instances/89230879-b88a-44d3-841e-fc664e122158_del complete
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.057 244018 INFO nova.compute.manager [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Took 4.00 seconds to destroy the instance on the hypervisor.
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.058 244018 DEBUG oslo.service.loopingcall [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.060 244018 DEBUG nova.compute.manager [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.060 244018 DEBUG nova.network.neutron [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.212 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.213 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.214 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.214 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.214 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.216 244018 INFO nova.compute.manager [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Terminating instance
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.217 244018 DEBUG nova.compute.manager [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:32:21 compute-0 kernel: tap325e7788-75 (unregistering): left promiscuous mode
Feb 25 12:32:21 compute-0 NetworkManager[49836]: <info>  [1772022741.4836] device (tap325e7788-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00737|binding|INFO|Releasing lport 325e7788-75cd-4305-b792-3102ddb850bf from this chassis (sb_readonly=0)
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00738|binding|INFO|Setting lport 325e7788-75cd-4305-b792-3102ddb850bf down in Southbound
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00739|binding|INFO|Removing iface tap325e7788-75 ovn-installed in OVS
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 kernel: tapdf7ea72b-ee (unregistering): left promiscuous mode
Feb 25 12:32:21 compute-0 NetworkManager[49836]: <info>  [1772022741.4996] device (tapdf7ea72b-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:32:21 compute-0 sudo[312007]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.505 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:5c:07 10.100.0.166'], port_security=['fa:16:3e:26:5c:07 10.100.0.166'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.166/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db9cae67-3184-4925-839a-8bcfc572b146', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d903016f-980a-4074-be2b-57de069ed00a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=325e7788-75cd-4305-b792-3102ddb850bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.507 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 325e7788-75cd-4305-b792-3102ddb850bf in datapath db9cae67-3184-4925-839a-8bcfc572b146 unbound from our chassis
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.511 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db9cae67-3184-4925-839a-8bcfc572b146
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00740|binding|INFO|Releasing lport df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a from this chassis (sb_readonly=0)
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00741|binding|INFO|Setting lport df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a down in Southbound
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00742|binding|INFO|Removing iface tapdf7ea72b-ee ovn-installed in OVS
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 kernel: tap97818aef-c1 (unregistering): left promiscuous mode
Feb 25 12:32:21 compute-0 NetworkManager[49836]: <info>  [1772022741.5256] device (tap97818aef-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5af16361-04d4-4ec7-a5ce-8d841f395133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.537 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:98:fe 10.100.1.4'], port_security=['fa:16:3e:a2:98:fe 10.100.1.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.4/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ecd8942-957f-4772-ab44-ac4bae35909b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec01ef9a-fa88-4695-a73a-5b42fae615b8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00743|binding|INFO|Releasing lport 97818aef-c1e4-4037-8d99-2ad0e69351c4 from this chassis (sb_readonly=0)
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00744|binding|INFO|Setting lport 97818aef-c1e4-4037-8d99-2ad0e69351c4 down in Southbound
Feb 25 12:32:21 compute-0 ovn_controller[147040]: 2026-02-25T12:32:21Z|00745|binding|INFO|Removing iface tap97818aef-c1 ovn-installed in OVS
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.559 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:a3:e8 10.100.0.201'], port_security=['fa:16:3e:b0:a3:e8 10.100.0.201'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.201/24', 'neutron:device_id': '8decd922-10f2-4731-ac1e-36a062777059', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db9cae67-3184-4925-839a-8bcfc572b146', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d903016f-980a-4074-be2b-57de069ed00a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97818aef-c1e4-4037-8d99-2ad0e69351c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:21 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Feb 25 12:32:21 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d0000004d.scope: Consumed 3.674s CPU time.
Feb 25 12:32:21 compute-0 systemd-machined[210048]: Machine qemu-96-instance-0000004d terminated.
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.566 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[959e4180-0e8f-4bff-bd67-67d99433abd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.569 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[48fc5799-6e94-416b-b11f-fb1537446e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.599 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[946d7d5e-c438-4acf-adb7-400a46b9d4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c26c64a2-cae4-4e10-8793-bda15ff6af94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb9cae67-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470547, 'reachable_time': 37613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312088, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[efafc6d8-51a7-4b81-b0c7-ea9e820d2e1f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapdb9cae67-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470556, 'tstamp': 470556}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312089, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdb9cae67-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470558, 'tstamp': 470558}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312089, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.632 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb9cae67-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.647 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb9cae67-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.647 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.648 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb9cae67-30, col_values=(('external_ids', {'iface-id': 'c6e8f7c2-37ff-440e-9be5-40d3dd5e3130'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.648 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.650 157129 INFO neutron.agent.ovn.metadata.agent [-] Port df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a in datapath 9ecd8942-957f-4772-ab44-ac4bae35909b unbound from our chassis
Feb 25 12:32:21 compute-0 NetworkManager[49836]: <info>  [1772022741.6516] manager: (tapdf7ea72b-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.651 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ecd8942-957f-4772-ab44-ac4bae35909b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6572aa63-3363-4494-ba5e-464c961b4d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:21.652 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b namespace which is not needed anymore
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.676 244018 INFO nova.virt.libvirt.driver [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Instance destroyed successfully.
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.677 244018 DEBUG nova.objects.instance [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'resources' on Instance uuid 8decd922-10f2-4731-ac1e-36a062777059 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.752 244018 DEBUG nova.virt.libvirt.vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:32:19Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.753 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "325e7788-75cd-4305-b792-3102ddb850bf", "address": "fa:16:3e:26:5c:07", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.166", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap325e7788-75", "ovs_interfaceid": "325e7788-75cd-4305-b792-3102ddb850bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.753 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.754 244018 DEBUG os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.756 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap325e7788-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.764 244018 INFO os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:5c:07,bridge_name='br-int',has_traffic_filtering=True,id=325e7788-75cd-4305-b792-3102ddb850bf,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap325e7788-75')
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.765 244018 DEBUG nova.virt.libvirt.vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:32:19Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.766 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.766 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.767 244018 DEBUG os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.769 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf7ea72b-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.776 244018 INFO os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:98:fe,bridge_name='br-int',has_traffic_filtering=True,id=df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a,network=Network(9ecd8942-957f-4772-ab44-ac4bae35909b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf7ea72b-ee')
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.777 244018 DEBUG nova.virt.libvirt.vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-187634331',display_name='tempest-ServersTestMultiNic-server-187634331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-187634331',id=77,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-vuuv8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:32:19Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=8decd922-10f2-4731-ac1e-36a062777059,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.777 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.778 244018 DEBUG nova.network.os_vif_util [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.779 244018 DEBUG os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.780 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97818aef-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.786 244018 INFO os_vif [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a3:e8,bridge_name='br-int',has_traffic_filtering=True,id=97818aef-c1e4-4037-8d99-2ad0e69351c4,network=Network(db9cae67-3184-4925-839a-8bcfc572b146),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97818aef-c1')
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:32:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:32:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
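Annotation: the handle_command/dispatch pairs above are the mgr's orchestration module polling the monitor with structured commands. A sketch of sending the same "osd tree" command through librados; the conffile path is an assumption, the command JSON is copied from the log.

    # Sketch only: issue the logged mon command via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                      "format": "json"})
    ret, out, status = cluster.mon_command(cmd, b'')  # (rc, outbuf, status)
    destroyed = json.loads(out) if ret == 0 else None
    cluster.shutdown()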
Feb 25 12:32:21 compute-0 sudo[312159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:32:21 compute-0 sudo[312159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:21 compute-0 sudo[312159]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:21 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [NOTICE]   (311957) : haproxy version is 2.8.14-c23fe91
Feb 25 12:32:21 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [NOTICE]   (311957) : path to executable is /usr/sbin/haproxy
Feb 25 12:32:21 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [WARNING]  (311957) : Exiting Master process...
Feb 25 12:32:21 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [ALERT]    (311957) : Current worker (311959) exited with code 143 (Terminated)
Feb 25 12:32:21 compute-0 neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b[311953]: [WARNING]  (311957) : All workers exited. Exiting... (0)
Feb 25 12:32:21 compute-0 systemd[1]: libpod-8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91.scope: Deactivated successfully.
Feb 25 12:32:21 compute-0 podman[312145]: 2026-02-25 12:32:21.894543936 +0000 UTC m=+0.158664630 container died 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.894 244018 DEBUG nova.compute.manager [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.894 244018 DEBUG oslo_concurrency.lockutils [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.895 244018 DEBUG oslo_concurrency.lockutils [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.895 244018 DEBUG oslo_concurrency.lockutils [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.895 244018 DEBUG nova.compute.manager [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-unplugged-325e7788-75cd-4305-b792-3102ddb850bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.895 244018 DEBUG nova.compute.manager [req-60c30f9d-e3a2-4cbe-9b39-92226494947a req-6773e23d-9218-42d3-9542-ea1a0513d0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-325e7788-75cd-4305-b792-3102ddb850bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
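Annotation: the Acquiring/acquired/released triplets above come from oslo.concurrency's synchronized wrapper; nova serializes event pop/clear per instance on a "<uuid>-events" lock. A sketch of the pattern, with the lock name copied from the log and an illustrative body that is not nova's code:

    # Sketch only: the lockutils.synchronized pattern behind the
    # "waited/held" DEBUG lines above; the decorator's inner() wrapper is
    # what logs the timings at lockutils.py:404/409/423.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('8decd922-10f2-4731-ac1e-36a062777059-events')
    def _pop_event():
        # fetch-and-remove the pending external event for this instance
        pass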
Feb 25 12:32:21 compute-0 sudo[312202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:32:21 compute-0 sudo[312202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
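Annotation: the sudo line above is cephadm's wrapper invoking ceph-volume inside the ceph container to build three bluestore OSDs on pre-created LVs. A sketch of how that argv could be replayed with subprocess, for illustration only (running it would re-batch the OSDs); the config/keyring payload fed to --config-json on stdin is a placeholder, and the --env/--image flags from the log are elided.

    # Sketch only: re-issue the logged cephadm ceph-volume call.
    import json
    import subprocess

    fsid = '8ac33163-6221-5d58-9a39-8b6933fe7762'
    lvs = ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
           '/dev/ceph_vg2/ceph_lv2']
    cmd = ['python3',
           f'/var/lib/ceph/{fsid}/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b',
           '--timeout', '895', 'ceph-volume', '--fsid', fsid,
           '--config-json', '-', '--', 'lvm', 'batch', '--no-auto', *lvs,
           '--objectstore', 'bluestore', '--yes', '--no-systemd']
    payload = {'config': '# ceph.conf contents', 'keyring': '...'}  # placeholder
    subprocess.run(cmd, input=json.dumps(payload).encode(), check=True)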
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.966 244018 DEBUG nova.compute.manager [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.967 244018 DEBUG oslo_concurrency.lockutils [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.967 244018 DEBUG oslo_concurrency.lockutils [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.968 244018 DEBUG oslo_concurrency.lockutils [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.968 244018 DEBUG nova.compute.manager [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-unplugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:21 compute-0 nova_compute[244014]: 2026-02-25 12:32:21.969 244018 DEBUG nova.compute.manager [req-68d88d76-6bf4-4d35-8775-10872d4912cc req-094b8e75-82c5-41be-87da-4049a7bf4505 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:32:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 304 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.211 244018 DEBUG nova.network.neutron [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.253 244018 INFO nova.compute.manager [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] Took 1.19 seconds to deallocate network for instance.
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.321 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.322 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91-userdata-shm.mount: Deactivated successfully.
Feb 25 12:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a12ab3cd4690a0f4bae928e47fb3035e220fe1f5a6a31459a0d77f5fc55d0a2-merged.mount: Deactivated successfully.
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.425 244018 DEBUG oslo_concurrency.processutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:22 compute-0 podman[312253]: 2026-02-25 12:32:22.470223021 +0000 UTC m=+0.242256941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:22 compute-0 podman[312145]: 2026-02-25 12:32:22.587826176 +0000 UTC m=+0.851946880 container cleanup 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:32:22 compute-0 systemd[1]: libpod-conmon-8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91.scope: Deactivated successfully.
Feb 25 12:32:22 compute-0 podman[312253]: 2026-02-25 12:32:22.713824479 +0000 UTC m=+0.485858389 container create e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:32:22 compute-0 systemd[1]: Started libpod-conmon-e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f.scope.
Feb 25 12:32:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390473692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:22 compute-0 nova_compute[244014]: 2026-02-25 12:32:22.992 244018 DEBUG oslo_concurrency.processutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
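Annotation: nova's RBD image backend polls cluster capacity by shelling out to ceph df, which is the 0.566s round trip bracketed by the two processutils lines above. A sketch of the same probe; the JSON keys are assumed to follow the standard ceph df schema.

    # Sketch only: run the logged "ceph df" probe and read cluster totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 1024 ** 3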
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.001 244018 DEBUG nova.compute.provider_tree [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.034 244018 DEBUG nova.scheduler.client.report [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
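Annotation: placement's effective capacity for each resource class above is (total - reserved) * allocation_ratio, which is why the scheduler sees four times more VCPU than the host exposes. Worked out from the logged inventory:

    # Sketch only: the capacity formula applied to the logged inventory.
    inventory = {
        'VCPU': dict(total=8, reserved=0, allocation_ratio=4.0),
        'MEMORY_MB': dict(total=7679, reserved=512, allocation_ratio=1.0),
        'DISK_GB': dict(total=59, reserved=1, allocation_ratio=0.9),
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2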
Feb 25 12:32:23 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:23 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:32:23 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:32:23 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:32:23 compute-0 ceph-mon[76335]: pgmap v1508: 305 pgs: 305 active+clean; 304 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Feb 25 12:32:23 compute-0 podman[312253]: 2026-02-25 12:32:23.164744906 +0000 UTC m=+0.936778876 container init e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.174 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:23 compute-0 podman[312253]: 2026-02-25 12:32:23.175428799 +0000 UTC m=+0.947462729 container start e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:32:23 compute-0 upbeat_saha[312303]: 167 167
Feb 25 12:32:23 compute-0 systemd[1]: libpod-e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f.scope: Deactivated successfully.
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.239 244018 INFO nova.scheduler.client.report [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Deleted allocations for instance 89230879-b88a-44d3-841e-fc664e122158
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.320 244018 DEBUG oslo_concurrency.lockutils [None req-1e037604-870d-45a5-9d14-732a428e19f4 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "89230879-b88a-44d3-841e-fc664e122158" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:23 compute-0 podman[312253]: 2026-02-25 12:32:23.332420361 +0000 UTC m=+1.104454271 container attach e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:32:23 compute-0 podman[312253]: 2026-02-25 12:32:23.332978917 +0000 UTC m=+1.105012827 container died e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:23 compute-0 nova_compute[244014]: 2026-02-25 12:32:23.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
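Annotation: _cleanup_incomplete_migrations is one of the ComputeManager periodic tasks that oslo.service fires on a timer in between the request-driven lines. A sketch of the mechanism; the 300s spacing is an assumption for illustration, since nova's real intervals are config-driven.

    # Sketch only: how oslo.service wires a periodic task like the one above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=300)  # assumed interval
        def _cleanup_incomplete_migrations(self, context):
            # scan deleted instances whose migrations never completed
            pass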
Feb 25 12:32:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 279 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 307 KiB/s wr, 177 op/s
Feb 25 12:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6dc3f0dd88246d597fe0692d3dcca0915dc18e00495a5a8ac188e7dd700f293-merged.mount: Deactivated successfully.
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.077 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.078 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.078 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.079 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.079 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.080 244018 WARNING nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-325e7788-75cd-4305-b792-3102ddb850bf for instance with vm_state active and task_state deleting.
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.080 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.080 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.081 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.081 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.081 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-unplugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.082 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-unplugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.082 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.083 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.083 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.083 244018 DEBUG oslo_concurrency.lockutils [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.084 244018 DEBUG nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.084 244018 WARNING nova.compute.manager [req-4129f4f4-6b3e-443a-8cea-4accd3d10e97 req-8ecaebb4-6999-491b-acba-28716ce84d82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a for instance with vm_state active and task_state deleting.
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.088 244018 DEBUG nova.compute.manager [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.089 244018 DEBUG oslo_concurrency.lockutils [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8decd922-10f2-4731-ac1e-36a062777059-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.089 244018 DEBUG oslo_concurrency.lockutils [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.089 244018 DEBUG oslo_concurrency.lockutils [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.090 244018 DEBUG nova.compute.manager [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] No waiting events found dispatching network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.090 244018 WARNING nova.compute.manager [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received unexpected event network-vif-plugged-97818aef-c1e4-4037-8d99-2ad0e69351c4 for instance with vm_state active and task_state deleting.
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.090 244018 DEBUG nova.compute.manager [req-d67c41f5-49d8-4c9e-b3b3-adf4284f5bbb req-8b495489-f1ff-4447-8e2b-82e94fd70f45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89230879-b88a-44d3-841e-fc664e122158] Received event network-vif-deleted-00b26e62-f20c-48bc-9701-d74dd9b15435 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1390473692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:24 compute-0 podman[312253]: 2026-02-25 12:32:24.310588528 +0000 UTC m=+2.082622468 container remove e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:32:24 compute-0 podman[312290]: 2026-02-25 12:32:24.425644171 +0000 UTC m=+1.814302459 container remove 8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.451 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00018fa4-8eb3-40a8-9182-9837269545ab]: (4, ('Wed Feb 25 12:32:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b (8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91)\n8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91\nWed Feb 25 12:32:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b (8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91)\n8d2bed4370125b67f02b724e99aed036251d7fad250af811c0b36c8ede1a9d91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6610110e-0aaa-433e-8661-c50cf8d5a174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.456 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ecd8942-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:24 compute-0 systemd[1]: libpod-conmon-e44a5a82e397815ca8f904c3f0efba7097240f8b030d81468cef7ce6e088448f.scope: Deactivated successfully.
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:24 compute-0 kernel: tap9ecd8942-90: left promiscuous mode
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.471 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e6fc69-0777-4600-8a87-7e092eeb54fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b0f8f6-bbbb-482f-bdc6-a27d9922a680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72dbfde5-45ab-4243-8b49-a3f641593e36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.497 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcd84a5-d7f1-4fef-ae52-f91d1d2cf28c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470715, 'reachable_time': 37615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312344, 'error': None, 'target': 'ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.499 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ecd8942-957f-4772-ab44-ac4bae35909b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.499 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[98c3a7b4-4384-4984-883c-d5e5e5111291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.500 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97818aef-c1e4-4037-8d99-2ad0e69351c4 in datapath db9cae67-3184-4925-839a-8bcfc572b146 unbound from our chassis
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.501 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db9cae67-3184-4925-839a-8bcfc572b146, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:32:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ecd8942\x2d957f\x2d4772\x2dab44\x2dac4bae35909b.mount: Deactivated successfully.
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.502 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f30c2be7-b04e-4978-8f20-bc905d8a890f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.503 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146 namespace which is not needed anymore
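Annotation: with the last VIF in each datapath unbound, the metadata agent tears down its proxy: stop/remove the haproxy container (the privsep reply at 12:32:24.451 carries that script's output) and delete the ovnmeta- namespace via the privileged remove_netns call. A sketch of the namespace removal with pyroute2, which neutron's privileged ip_lib wraps; the namespace name is copied from the log.

    # Sketch only: remove a metadata namespace like remove_netns above.
    from pyroute2 import netns

    ns = 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146'
    if ns in netns.listnetns():
        netns.remove(ns)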
Feb 25 12:32:24 compute-0 podman[312332]: 2026-02-25 12:32:24.504366843 +0000 UTC m=+0.037811213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:24 compute-0 podman[312332]: 2026-02-25 12:32:24.604874413 +0000 UTC m=+0.138318743 container create 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.673 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.674 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.675 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.676 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.677 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.679 244018 INFO nova.compute.manager [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Terminating instance
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.680 244018 DEBUG nova.compute.manager [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:24 compute-0 systemd[1]: Started libpod-conmon-3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27.scope.
Feb 25 12:32:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:24 compute-0 kernel: tapd09a3a83-0b (unregistering): left promiscuous mode
Feb 25 12:32:24 compute-0 NetworkManager[49836]: <info>  [1772022744.9721] device (tapd09a3a83-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:24 compute-0 ovn_controller[147040]: 2026-02-25T12:32:24Z|00746|binding|INFO|Releasing lport d09a3a83-0b97-4635-9188-5ab61ccc4626 from this chassis (sb_readonly=0)
Feb 25 12:32:24 compute-0 ovn_controller[147040]: 2026-02-25T12:32:24Z|00747|binding|INFO|Setting lport d09a3a83-0b97-4635-9188-5ab61ccc4626 down in Southbound
Feb 25 12:32:24 compute-0 ovn_controller[147040]: 2026-02-25T12:32:24Z|00748|binding|INFO|Removing iface tapd09a3a83-0b ovn-installed in OVS
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:24.988 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:63:29 10.100.0.11'], port_security=['fa:16:3e:23:63:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5732c5fb-59b3-4590-b65a-a696b9c90152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b1ce41f-d52c-4d7a-8899-239050236d80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d09a3a83-0b97-4635-9188-5ab61ccc4626) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
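Annotation: the matched UPDATE above is an ovsdbapp row event; the agent watches Port_Binding for rows whose chassis/up columns change so it can unbind ports like d09a3a83 when the VM is destroyed. A sketch of such an event class; the class name mirrors the log, and the body is illustrative rather than neutron's implementation.

    # Sketch only: an ovsdbapp row-update watcher for Port_Binding.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', no extra conditions,
            # matching the repr in the logged "Matched UPDATE" line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old carries only the changed columns, hence the getattr default
            if getattr(old, 'chassis', None) and not row.chassis:
                print('lport %s released from this chassis' % row.logical_port)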
Feb 25 12:32:24 compute-0 nova_compute[244014]: 2026-02-25 12:32:24.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:25 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [NOTICE]   (311865) : haproxy version is 2.8.14-c23fe91
Feb 25 12:32:25 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [NOTICE]   (311865) : path to executable is /usr/sbin/haproxy
Feb 25 12:32:25 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [WARNING]  (311865) : Exiting Master process...
Feb 25 12:32:25 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [ALERT]    (311865) : Current worker (311878) exited with code 143 (Terminated)
Feb 25 12:32:25 compute-0 neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146[311780]: [WARNING]  (311865) : All workers exited. Exiting... (0)
Feb 25 12:32:25 compute-0 systemd[1]: libpod-f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59.scope: Deactivated successfully.
Feb 25 12:32:25 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000046.scope: Deactivated successfully.
Feb 25 12:32:25 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000046.scope: Consumed 15.163s CPU time.
Feb 25 12:32:25 compute-0 systemd-machined[210048]: Machine qemu-85-instance-00000046 terminated.
Feb 25 12:32:25 compute-0 podman[312332]: 2026-02-25 12:32:25.07940472 +0000 UTC m=+0.612849070 container init 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:32:25 compute-0 podman[312332]: 2026-02-25 12:32:25.089000582 +0000 UTC m=+0.622444942 container start 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:32:25 compute-0 NetworkManager[49836]: <info>  [1772022745.0999] manager: (tapd09a3a83-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.114 244018 INFO nova.virt.libvirt.driver [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance destroyed successfully.
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.116 244018 DEBUG nova.objects.instance [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'resources' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.129 244018 DEBUG nova.virt.libvirt.vif [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-187776660',display_name='tempest-ServerActionsTestOtherA-server-187776660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-187776660',id=70,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+oU121pR6KUJ0WVaybHO4eKD7+IjTA6CrXc/3HOS7yAeJNztQB4sxwZQLs0fjSQoqiFpts6gDtbDhfFq9vINorEApwAS8sG60gm5VPKd60x5tzDBbplVpTIhXJOJeGqw==',key_name='tempest-keypair-2086603261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-0x55w3gp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b63928451c6a4137bb65e25561326aff',uuid=5732c5fb-59b3-4590-b65a-a696b9c90152,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.130 244018 DEBUG nova.network.os_vif_util [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.131 244018 DEBUG nova.network.os_vif_util [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.131 244018 DEBUG os_vif [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd09a3a83-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.140 244018 INFO os_vif [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b')
Feb 25 12:32:25 compute-0 podman[312332]: 2026-02-25 12:32:25.148070317 +0000 UTC m=+0.681514667 container attach 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:32:25 compute-0 podman[312369]: 2026-02-25 12:32:25.154287134 +0000 UTC m=+0.283994635 container died f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59-userdata-shm.mount: Deactivated successfully.
Feb 25 12:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a04faa2824e328b26f8d153be12606e6dd098c41f569d9e4535e38a535276409-merged.mount: Deactivated successfully.
Feb 25 12:32:25 compute-0 ceph-mon[76335]: pgmap v1509: 305 pgs: 305 active+clean; 279 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 307 KiB/s wr, 177 op/s
Feb 25 12:32:25 compute-0 nifty_newton[312370]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:32:25 compute-0 nifty_newton[312370]: --> All data devices are unavailable
Feb 25 12:32:25 compute-0 systemd[1]: libpod-3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27.scope: Deactivated successfully.
Feb 25 12:32:25 compute-0 podman[312369]: 2026-02-25 12:32:25.526951712 +0000 UTC m=+0.656659183 container cleanup f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:32:25 compute-0 podman[312332]: 2026-02-25 12:32:25.536676697 +0000 UTC m=+1.070121027 container died 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:32:25 compute-0 systemd[1]: libpod-conmon-f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59.scope: Deactivated successfully.
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.643 244018 INFO nova.virt.libvirt.driver [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Deleting instance files /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059_del
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.644 244018 INFO nova.virt.libvirt.driver [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Deletion of /var/lib/nova/instances/8decd922-10f2-4731-ac1e-36a062777059_del complete
Feb 25 12:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e1f08e77134d8d4966ed82b7c66aa8795cf36d0fa4c366fc6b3b5e36b2bac7-merged.mount: Deactivated successfully.
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.936 244018 INFO nova.compute.manager [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Took 4.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.937 244018 DEBUG oslo.service.loopingcall [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.937 244018 DEBUG nova.compute.manager [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:32:25 compute-0 nova_compute[244014]: 2026-02-25 12:32:25.937 244018 DEBUG nova.network.neutron [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:32:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 279 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 130 op/s
Feb 25 12:32:26 compute-0 podman[312332]: 2026-02-25 12:32:26.20465561 +0000 UTC m=+1.738099940 container remove 3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:26 compute-0 sudo[312202]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:26 compute-0 systemd[1]: libpod-conmon-3f6dce4490adab4b58f228c5c9627157b765a75609632a5c103f5615ca940c27.scope: Deactivated successfully.
Feb 25 12:32:26 compute-0 sudo[312477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:32:26 compute-0 sudo[312477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:26 compute-0 sudo[312477]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.321 244018 DEBUG nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-unplugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.323 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.324 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.324 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.324 244018 DEBUG nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] No waiting events found dispatching network-vif-unplugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.324 244018 DEBUG nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-unplugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.325 244018 DEBUG nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.325 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.325 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.325 244018 DEBUG oslo_concurrency.lockutils [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.325 244018 DEBUG nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] No waiting events found dispatching network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.326 244018 WARNING nova.compute.manager [req-56268a6a-a48f-4185-94a6-cf24d6f29f38 req-680480c7-97a9-4ed9-bc46-af80adb3426c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received unexpected event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 for instance with vm_state active and task_state deleting.
Feb 25 12:32:26 compute-0 sudo[312504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:32:26 compute-0 sudo[312504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:26 compute-0 ceph-mon[76335]: pgmap v1510: 305 pgs: 305 active+clean; 279 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 130 op/s
Feb 25 12:32:26 compute-0 podman[312465]: 2026-02-25 12:32:26.694635234 +0000 UTC m=+1.138619499 container remove f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40d08a17-3dca-4861-9f0e-bd9ea498b6d0]: (4, ('Wed Feb 25 12:32:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146 (f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59)\nf0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59\nWed Feb 25 12:32:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146 (f0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59)\nf0542ab8d7bada0ab0f7a7f593300b5c0c427630094547e877642c904db42f59\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.701 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35b5c1ce-8e83-4ef8-b939-ca575e397a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.701 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb9cae67-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:26 compute-0 kernel: tapdb9cae67-30: left promiscuous mode
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[447dde4d-12c4-4390-b4ca-6dc8d1efceac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 nova_compute[244014]: 2026-02-25 12:32:26.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a198ae-ea85-4bee-87bf-5b249880667f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[006fc263-f7ba-4342-a941-a9a99dff0b96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.756 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7eea7271-2f13-48f9-8897-abd8c60aa968]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470541, 'reachable_time': 27098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312556, 'error': None, 'target': 'ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb9cae67\x2d3184\x2d4925\x2d839a\x2d8bcfc572b146.mount: Deactivated successfully.
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.759 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db9cae67-3184-4925-839a-8bcfc572b146 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.759 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[147a3274-fcca-49e6-ba7a-77ca55446fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.760 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d09a3a83-0b97-4635-9188-5ab61ccc4626 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.761 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd796561-bd80-4610-8abc-655ee9e3676f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19d01a96-ab8a-4ae3-9a1d-190b9be3aec2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:26.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f namespace which is not needed anymore
Feb 25 12:32:26 compute-0 podman[312541]: 2026-02-25 12:32:26.713636853 +0000 UTC m=+0.065458837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:26 compute-0 podman[312541]: 2026-02-25 12:32:26.977173136 +0000 UTC m=+0.328995140 container create 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:32:27 compute-0 systemd[1]: Started libpod-conmon-79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab.scope.
Feb 25 12:32:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:27 compute-0 podman[312541]: 2026-02-25 12:32:27.150882491 +0000 UTC m=+0.502704555 container init 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:32:27 compute-0 podman[312541]: 2026-02-25 12:32:27.160384731 +0000 UTC m=+0.512206715 container start 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:32:27 compute-0 elastic_proskuriakova[312573]: 167 167
Feb 25 12:32:27 compute-0 systemd[1]: libpod-79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab.scope: Deactivated successfully.
Feb 25 12:32:27 compute-0 podman[312541]: 2026-02-25 12:32:27.249387265 +0000 UTC m=+0.601209239 container attach 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:32:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:27.250 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:27 compute-0 podman[312541]: 2026-02-25 12:32:27.251206486 +0000 UTC m=+0.603028490 container died 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1c912046002fe6bfe6dccbb28960e3ebc9f8c71b6ffa457db37a090b537d21-merged.mount: Deactivated successfully.
Feb 25 12:32:27 compute-0 podman[312541]: 2026-02-25 12:32:27.897638697 +0000 UTC m=+1.249460701 container remove 79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:32:27 compute-0 systemd[1]: libpod-conmon-79de469541d0e985d1ebbe5414b0dd2d4853f9548e91615d177ee3e80efcb9ab.scope: Deactivated successfully.
Feb 25 12:32:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 192 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 173 op/s
Feb 25 12:32:28 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : haproxy version is 2.8.14-c23fe91
Feb 25 12:32:28 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : path to executable is /usr/sbin/haproxy
Feb 25 12:32:28 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [WARNING]  (306023) : Exiting Master process...
Feb 25 12:32:28 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [ALERT]    (306023) : Current worker (306025) exited with code 143 (Terminated)
Feb 25 12:32:28 compute-0 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [WARNING]  (306023) : All workers exited. Exiting... (0)
Feb 25 12:32:28 compute-0 systemd[1]: libpod-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0.scope: Deactivated successfully.
Feb 25 12:32:28 compute-0 conmon[306019]: conmon 91b65c1d93e47db67b1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0.scope/container/memory.events
Feb 25 12:32:28 compute-0 podman[312601]: 2026-02-25 12:32:28.210824599 +0000 UTC m=+0.221833782 container died 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:32:28 compute-0 ceph-mon[76335]: pgmap v1511: 305 pgs: 305 active+clean; 192 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 173 op/s
Feb 25 12:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0-userdata-shm.mount: Deactivated successfully.
Feb 25 12:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f940080901bec8896734a9c7645d302a54dac0d4387fd56144cab0605773a5-merged.mount: Deactivated successfully.
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.627 244018 DEBUG nova.compute.manager [req-2fc1d3c1-5c1e-4a8b-8d6f-1c1bf08eca54 req-b584693b-ff16-45e0-8e7c-5e15a1dcf615 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-deleted-325e7788-75cd-4305-b792-3102ddb850bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.627 244018 INFO nova.compute.manager [req-2fc1d3c1-5c1e-4a8b-8d6f-1c1bf08eca54 req-b584693b-ff16-45e0-8e7c-5e15a1dcf615 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Neutron deleted interface 325e7788-75cd-4305-b792-3102ddb850bf; detaching it from the instance and deleting it from the info cache
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.628 244018 DEBUG nova.network.neutron [req-2fc1d3c1-5c1e-4a8b-8d6f-1c1bf08eca54 req-b584693b-ff16-45e0-8e7c-5e15a1dcf615 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [{"id": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "address": "fa:16:3e:a2:98:fe", "network": {"id": "9ecd8942-957f-4772-ab44-ac4bae35909b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-724851667", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf7ea72b-ee", "ovs_interfaceid": "df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "address": "fa:16:3e:b0:a3:e8", "network": {"id": "db9cae67-3184-4925-839a-8bcfc572b146", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2067811822", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97818aef-c1", "ovs_interfaceid": "97818aef-c1e4-4037-8d99-2ad0e69351c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.655 244018 DEBUG nova.compute.manager [req-2fc1d3c1-5c1e-4a8b-8d6f-1c1bf08eca54 req-b584693b-ff16-45e0-8e7c-5e15a1dcf615 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Detach interface failed, port_id=325e7788-75cd-4305-b792-3102ddb850bf, reason: Instance 8decd922-10f2-4731-ac1e-36a062777059 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:32:28 compute-0 podman[312617]: 2026-02-25 12:32:28.666990974 +0000 UTC m=+0.631560240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:28 compute-0 podman[312601]: 2026-02-25 12:32:28.699063414 +0000 UTC m=+0.710072567 container cleanup 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 12:32:28 compute-0 systemd[1]: libpod-conmon-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0.scope: Deactivated successfully.
Feb 25 12:32:28 compute-0 podman[312617]: 2026-02-25 12:32:28.839125896 +0000 UTC m=+0.803695142 container create 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.908 244018 INFO nova.virt.libvirt.driver [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Deleting instance files /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152_del
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.909 244018 INFO nova.virt.libvirt.driver [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Deletion of /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152_del complete
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.956 244018 INFO nova.compute.manager [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 4.27 seconds to destroy the instance on the hypervisor.
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.958 244018 DEBUG oslo.service.loopingcall [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.959 244018 DEBUG nova.compute.manager [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:32:28 compute-0 nova_compute[244014]: 2026-02-25 12:32:28.959 244018 DEBUG nova.network.neutron [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:32:28 compute-0 systemd[1]: Started libpod-conmon-513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08.scope.
Feb 25 12:32:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e295346c297d2c40adbe955fe1f00820538e493c246228201e8f89e7852ce79b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e295346c297d2c40adbe955fe1f00820538e493c246228201e8f89e7852ce79b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e295346c297d2c40adbe955fe1f00820538e493c246228201e8f89e7852ce79b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e295346c297d2c40adbe955fe1f00820538e493c246228201e8f89e7852ce79b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:29 compute-0 podman[312617]: 2026-02-25 12:32:29.306923451 +0000 UTC m=+1.271492787 container init 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:32:29 compute-0 podman[312617]: 2026-02-25 12:32:29.314556248 +0000 UTC m=+1.279125524 container start 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.370 244018 DEBUG nova.network.neutron [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.392 244018 INFO nova.compute.manager [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Took 3.45 seconds to deallocate network for instance.
Feb 25 12:32:29 compute-0 podman[312617]: 2026-02-25 12:32:29.437090442 +0000 UTC m=+1.401659768 container attach 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.444 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.445 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.539 244018 DEBUG oslo_concurrency.processutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:29 compute-0 podman[312650]: 2026-02-25 12:32:29.602967556 +0000 UTC m=+0.874926561 container remove 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:32:29 compute-0 laughing_bouman[312661]: {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     "0": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "devices": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "/dev/loop3"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             ],
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_name": "ceph_lv0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_size": "21470642176",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "name": "ceph_lv0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "tags": {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_name": "ceph",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.crush_device_class": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.encrypted": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.objectstore": "bluestore",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_id": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.vdo": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.with_tpm": "0"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             },
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "vg_name": "ceph_vg0"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         }
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     ],
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     "1": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "devices": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "/dev/loop4"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             ],
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_name": "ceph_lv1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_size": "21470642176",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "name": "ceph_lv1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "tags": {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_name": "ceph",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.crush_device_class": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.encrypted": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.objectstore": "bluestore",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_id": "1",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.vdo": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.with_tpm": "0"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             },
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "vg_name": "ceph_vg1"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         }
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     ],
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     "2": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "devices": [
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "/dev/loop5"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             ],
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_name": "ceph_lv2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_size": "21470642176",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "name": "ceph_lv2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "tags": {
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.cluster_name": "ceph",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.crush_device_class": "",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.encrypted": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.objectstore": "bluestore",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osd_id": "2",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.vdo": "0",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:                 "ceph.with_tpm": "0"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             },
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "type": "block",
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:             "vg_name": "ceph_vg2"
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:         }
Feb 25 12:32:29 compute-0 laughing_bouman[312661]:     ]
Feb 25 12:32:29 compute-0 laughing_bouman[312661]: }
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b16d0b66-53e2-4108-b694-dbbb4720bab3]: (4, ('Wed Feb 25 12:32:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f (91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0)\n91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0\nWed Feb 25 12:32:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f (91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0)\n91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[776150ee-94c1-41e9-8743-86982217969b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.615 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:29 compute-0 kernel: tapcd796561-b0: left promiscuous mode
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.635 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1da53132-5a0f-4e92-b820-9af54654eccd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:29 compute-0 systemd[1]: libpod-513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08.scope: Deactivated successfully.
Feb 25 12:32:29 compute-0 podman[312617]: 2026-02-25 12:32:29.648258341 +0000 UTC m=+1.612827587 container died 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6950cf-9cf1-4fd1-bff2-3ef10c8467db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a10c6d46-0dd3-4f31-bf78-0c9e7d179ad0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.688 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b65c66dd-4492-4015-bd6c-e187ba743dcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460505, 'reachable_time': 32089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312682, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd796561\x2dbd80\x2d4610\x2d8abc\x2d655ee9e3676f.mount: Deactivated successfully.
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.692 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:32:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:29.692 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e3de5644-7712-4e99-a7f0-dcf730b339b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.969 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022734.9676082, 7f9688d5-08dc-4dca-8a01-f22b4efd73f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.970 244018 INFO nova.compute.manager [-] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] VM Stopped (Lifecycle Event)
Feb 25 12:32:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 192 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 137 op/s
Feb 25 12:32:29 compute-0 nova_compute[244014]: 2026-02-25 12:32:29.994 244018 DEBUG nova.compute.manager [None req-9c2b5b49-f7c8-4cd0-907e-13d197fb600b - - - - - -] [instance: 7f9688d5-08dc-4dca-8a01-f22b4efd73f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e295346c297d2c40adbe955fe1f00820538e493c246228201e8f89e7852ce79b-merged.mount: Deactivated successfully.
Feb 25 12:32:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133245859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.148 244018 DEBUG oslo_concurrency.processutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.156 244018 DEBUG nova.compute.provider_tree [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.186 244018 DEBUG nova.scheduler.client.report [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.219 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.281 244018 INFO nova.scheduler.client.report [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Deleted allocations for instance 8decd922-10f2-4731-ac1e-36a062777059
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.314 244018 DEBUG nova.network.neutron [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.357 244018 INFO nova.compute.manager [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 1.40 seconds to deallocate network for instance.
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.384 244018 DEBUG oslo_concurrency.lockutils [None req-4e0e3443-133e-4040-bdfa-de071f4d7dd9 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "8decd922-10f2-4731-ac1e-36a062777059" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.414 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.414 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.471 244018 DEBUG oslo_concurrency.processutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.523 244018 DEBUG nova.compute.manager [req-6c9f38e1-881d-475a-a40f-8e3b6daa6848 req-cedf07b9-06b8-49aa-a7f4-7337351cc34e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-deleted-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:30 compute-0 ceph-mon[76335]: pgmap v1512: 305 pgs: 305 active+clean; 192 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 137 op/s
Feb 25 12:32:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/133245859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:30 compute-0 podman[312617]: 2026-02-25 12:32:30.800291139 +0000 UTC m=+2.764860415 container remove 513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.816 244018 DEBUG nova.compute.manager [req-c71e84a9-c500-4d0e-b985-c7977670b0da req-a4ec81b8-9b3c-40b3-a3cb-e07b29b8dd24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-deleted-df7ea72b-ee2b-4e72-9e8e-5dc6f56f7d5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.816 244018 DEBUG nova.compute.manager [req-c71e84a9-c500-4d0e-b985-c7977670b0da req-a4ec81b8-9b3c-40b3-a3cb-e07b29b8dd24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Received event network-vif-deleted-97818aef-c1e4-4037-8d99-2ad0e69351c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:30 compute-0 sudo[312504]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:30 compute-0 systemd[1]: libpod-conmon-513f0ffa146ac4aa77e32e61a17708a4cf62602b60e4abbd27d0290ef2d13d08.scope: Deactivated successfully.
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.912 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:32:30 compute-0 sudo[312732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:32:30 compute-0 sudo[312732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:30 compute-0 sudo[312732]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:30 compute-0 nova_compute[244014]: 2026-02-25 12:32:30.933 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:32:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:32:30
Feb 25 12:32:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:32:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:32:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'backups', 'default.rgw.control', 'volumes']
Feb 25 12:32:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:32:30 compute-0 sudo[312757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:32:30 compute-0 sudo[312757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625395388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.197 244018 DEBUG oslo_concurrency.processutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.206 244018 DEBUG nova.compute.provider_tree [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.228 244018 DEBUG nova.scheduler.client.report [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.255 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.293 244018 INFO nova.scheduler.client.report [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Deleted allocations for instance 5732c5fb-59b3-4590-b65a-a696b9c90152
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.259547742 +0000 UTC m=+0.019629078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.376293153 +0000 UTC m=+0.136374439 container create 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:32:31 compute-0 nova_compute[244014]: 2026-02-25 12:32:31.376 244018 DEBUG oslo_concurrency.lockutils [None req-613ae569-24ca-4d41-b3f1-ec1b2c71a760 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:31 compute-0 systemd[1]: Started libpod-conmon-402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e.scope.
Feb 25 12:32:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.721249104 +0000 UTC m=+0.481330440 container init 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.729033675 +0000 UTC m=+0.489114941 container start 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:32:31 compute-0 sweet_clarke[312812]: 167 167
Feb 25 12:32:31 compute-0 systemd[1]: libpod-402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e.scope: Deactivated successfully.
Feb 25 12:32:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2625395388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.944086154 +0000 UTC m=+0.704167490 container attach 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:32:31 compute-0 podman[312796]: 2026-02-25 12:32:31.945243237 +0000 UTC m=+0.705324523 container died 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:32:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 154 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 KiB/s wr, 141 op/s
Feb 25 12:32:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-35febc8fb2eaeeccb18030a483b4e5a99b70e6c1164c9c58b8501b6a4c5e66f4-merged.mount: Deactivated successfully.
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.508 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022737.506906, 89230879-b88a-44d3-841e-fc664e122158 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.509 244018 INFO nova.compute.manager [-] [instance: 89230879-b88a-44d3-841e-fc664e122158] VM Stopped (Lifecycle Event)
Feb 25 12:32:32 compute-0 podman[312796]: 2026-02-25 12:32:32.516253189 +0000 UTC m=+1.276334475 container remove 402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.527 244018 DEBUG nova.compute.manager [None req-fc8efd0b-35b4-4394-ba7c-5cd4f731654b - - - - - -] [instance: 89230879-b88a-44d3-841e-fc664e122158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:32 compute-0 systemd[1]: libpod-conmon-402571aeebc4f32d7f5c4fbee0c3074099ccb347f6fd4d3d5aebb40e3f3bf53e.scope: Deactivated successfully.
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:32 compute-0 podman[312837]: 2026-02-25 12:32:32.678259254 +0000 UTC m=+0.032173924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:32:32 compute-0 podman[312837]: 2026-02-25 12:32:32.810870834 +0000 UTC m=+0.164785424 container create cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:32 compute-0 nova_compute[244014]: 2026-02-25 12:32:32.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:32:32 compute-0 ceph-mon[76335]: pgmap v1513: 305 pgs: 305 active+clean; 154 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 KiB/s wr, 141 op/s
Feb 25 12:32:32 compute-0 systemd[1]: Started libpod-conmon-cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354.scope.
Feb 25 12:32:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8529b6d244fc3dfbe210b48329068f34de34aa5303c067899a22f3fd907a4ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8529b6d244fc3dfbe210b48329068f34de34aa5303c067899a22f3fd907a4ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8529b6d244fc3dfbe210b48329068f34de34aa5303c067899a22f3fd907a4ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8529b6d244fc3dfbe210b48329068f34de34aa5303c067899a22f3fd907a4ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:33 compute-0 podman[312837]: 2026-02-25 12:32:33.083098163 +0000 UTC m=+0.437012823 container init cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:32:33 compute-0 podman[312837]: 2026-02-25 12:32:33.093641593 +0000 UTC m=+0.447556223 container start cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:32:33 compute-0 podman[312837]: 2026-02-25 12:32:33.221483858 +0000 UTC m=+0.575398488 container attach cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:32:33 compute-0 lvm[312935]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:32:33 compute-0 lvm[312936]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:32:33 compute-0 lvm[312935]: VG ceph_vg1 finished
Feb 25 12:32:33 compute-0 lvm[312936]: VG ceph_vg2 finished
Feb 25 12:32:33 compute-0 lvm[312933]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:32:33 compute-0 lvm[312933]: VG ceph_vg0 finished
Feb 25 12:32:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 153 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 2.7 KiB/s wr, 84 op/s
Feb 25 12:32:34 compute-0 sleepy_lovelace[312855]: {}
Feb 25 12:32:34 compute-0 systemd[1]: libpod-cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354.scope: Deactivated successfully.
Feb 25 12:32:34 compute-0 podman[312837]: 2026-02-25 12:32:34.082426732 +0000 UTC m=+1.436341322 container died cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:32:34 compute-0 systemd[1]: libpod-cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354.scope: Consumed 1.276s CPU time.
Feb 25 12:32:34 compute-0 ceph-mon[76335]: pgmap v1514: 305 pgs: 305 active+clean; 153 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 2.7 KiB/s wr, 84 op/s
Feb 25 12:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8529b6d244fc3dfbe210b48329068f34de34aa5303c067899a22f3fd907a4ee-merged.mount: Deactivated successfully.
Feb 25 12:32:34 compute-0 podman[312837]: 2026-02-25 12:32:34.712787207 +0000 UTC m=+2.066701837 container remove cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:32:34 compute-0 systemd[1]: libpod-conmon-cf24267353aba45a74692bf6dc3879e6a05002fe4089e8832eab1959b434d354.scope: Deactivated successfully.
Feb 25 12:32:34 compute-0 sudo[312757]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:32:34 compute-0 nova_compute[244014]: 2026-02-25 12:32:34.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:32:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:34 compute-0 sudo[312953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:32:35 compute-0 sudo[312953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:32:35 compute-0 sudo[312953]: pam_unix(sudo:session): session closed for user root
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.699 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.700 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.720 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.809 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.810 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.820 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.820 244018 INFO nova.compute.claims [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:32:35 compute-0 nova_compute[244014]: 2026-02-25 12:32:35.930 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:32:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 153 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Feb 25 12:32:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565225563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.545 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.551 244018 DEBUG nova.compute.provider_tree [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.584 244018 DEBUG nova.scheduler.client.report [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.613 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.614 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.676 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022741.6749365, 8decd922-10f2-4731-ac1e-36a062777059 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.677 244018 INFO nova.compute.manager [-] [instance: 8decd922-10f2-4731-ac1e-36a062777059] VM Stopped (Lifecycle Event)
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.693 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.694 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.711 244018 DEBUG nova.compute.manager [None req-f37a9037-9b9b-4af0-a70c-16bbc60473f2 - - - - - -] [instance: 8decd922-10f2-4731-ac1e-36a062777059] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.714 244018 INFO nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.734 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.836 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.837 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.838 244018 INFO nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating image(s)
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.860 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.886 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.908 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.911 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.966 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.968 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.969 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.969 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.992 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:36 compute-0 nova_compute[244014]: 2026-02-25 12:32:36.997 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:37 compute-0 ceph-mon[76335]: pgmap v1515: 305 pgs: 305 active+clean; 153 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Feb 25 12:32:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2565225563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:37 compute-0 nova_compute[244014]: 2026-02-25 12:32:37.041 244018 DEBUG nova.policy [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84ba7d5e80a44535b25853f3b18e352d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d34ca23436b401fbaeb0b01190a440a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:32:37 compute-0 nova_compute[244014]: 2026-02-25 12:32:37.773 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Successfully created port: d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 166 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 297 KiB/s wr, 63 op/s
Feb 25 12:32:38 compute-0 nova_compute[244014]: 2026-02-25 12:32:38.328 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:38 compute-0 ceph-mon[76335]: pgmap v1516: 305 pgs: 305 active+clean; 166 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 297 KiB/s wr, 63 op/s
Feb 25 12:32:38 compute-0 nova_compute[244014]: 2026-02-25 12:32:38.494 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] resizing rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:32:38 compute-0 podman[313148]: 2026-02-25 12:32:38.748266303 +0000 UTC m=+0.088535731 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Feb 25 12:32:38 compute-0 podman[313149]: 2026-02-25 12:32:38.802491151 +0000 UTC m=+0.137028757 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller)
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.027 244018 DEBUG nova.objects.instance [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'migration_context' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.092 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.092 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Ensure instance console log exists: /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.093 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.093 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.094 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.458 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Successfully updated port: d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.535 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.536 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.536 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.730 244018 DEBUG nova.compute.manager [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-changed-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.731 244018 DEBUG nova.compute.manager [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Refreshing instance network info cache due to event network-changed-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.731 244018 DEBUG oslo_concurrency.lockutils [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:39 compute-0 nova_compute[244014]: 2026-02-25 12:32:39.902 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 166 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 295 KiB/s wr, 20 op/s
Feb 25 12:32:40 compute-0 nova_compute[244014]: 2026-02-25 12:32:40.113 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022745.1123466, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:40 compute-0 nova_compute[244014]: 2026-02-25 12:32:40.113 244018 INFO nova.compute.manager [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Stopped (Lifecycle Event)
Feb 25 12:32:40 compute-0 nova_compute[244014]: 2026-02-25 12:32:40.134 244018 DEBUG nova.compute.manager [None req-b23c0ded-e17c-4276-98d3-e254adfa344c - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:40 compute-0 nova_compute[244014]: 2026-02-25 12:32:40.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:41 compute-0 ceph-mon[76335]: pgmap v1517: 305 pgs: 305 active+clean; 166 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 295 KiB/s wr, 20 op/s
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.310 244018 DEBUG nova.network.neutron [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.336 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.337 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance network_info: |[{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.338 244018 DEBUG oslo_concurrency.lockutils [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.339 244018 DEBUG nova.network.neutron [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Refreshing network info cache for port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.344 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start _get_guest_xml network_info=[{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.351 244018 WARNING nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.357 244018 DEBUG nova.virt.libvirt.host [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.358 244018 DEBUG nova.virt.libvirt.host [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.362 244018 DEBUG nova.virt.libvirt.host [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.363 244018 DEBUG nova.virt.libvirt.host [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.364 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.364 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.365 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.366 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.366 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.367 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.367 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.368 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.368 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.369 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.369 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.370 244018 DEBUG nova.virt.hardware [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.375 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046368810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.915 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.938 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:41 compute-0 nova_compute[244014]: 2026-02-25 12:32:41.943 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 200 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 25 12:32:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3046368810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003546732139748863 of space, bias 1.0, pg target 0.10640196419246589 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002492722957606364 of space, bias 1.0, pg target 0.7478168872819092 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.476535781966668e-07 of space, bias 4.0, pg target 0.0010171842938360002 quantized to 16 (current 16)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:32:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:32:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:32:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975701572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.518 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.520 244018 DEBUG nova.virt.libvirt.vif [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:36Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.521 244018 DEBUG nova.network.os_vif_util [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.522 244018 DEBUG nova.network.os_vif_util [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.524 244018 DEBUG nova.objects.instance [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_devices' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.541 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <uuid>19abddab-88d5-48b8-b98e-1dedccbb8b7f</uuid>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <name>instance-0000004f</name>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersNegativeTestJSON-server-293517057</nova:name>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:32:41</nova:creationTime>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:user uuid="84ba7d5e80a44535b25853f3b18e352d">tempest-ServersNegativeTestJSON-1613719120-project-member</nova:user>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:project uuid="0d34ca23436b401fbaeb0b01190a440a">tempest-ServersNegativeTestJSON-1613719120</nova:project>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <nova:port uuid="d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536">
Feb 25 12:32:42 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <system>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="serial">19abddab-88d5-48b8-b98e-1dedccbb8b7f</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="uuid">19abddab-88d5-48b8-b98e-1dedccbb8b7f</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </system>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <os>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </os>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <features>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </features>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk">
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config">
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:32:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:af:66:6e"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <target dev="tapd6f9abb7-ac"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/console.log" append="off"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <video>
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </video>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:32:42 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:32:42 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:32:42 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:32:42 compute-0 nova_compute[244014]: </domain>
Feb 25 12:32:42 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.543 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Preparing to wait for external event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.543 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.544 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.544 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.546 244018 DEBUG nova.virt.libvirt.vif [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:36Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.546 244018 DEBUG nova.network.os_vif_util [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.547 244018 DEBUG nova.network.os_vif_util [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.548 244018 DEBUG os_vif [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.549 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.550 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.554 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6f9abb7-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.555 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6f9abb7-ac, col_values=(('external_ids', {'iface-id': 'd6f9abb7-ac51-44f8-88ae-b5a8ef3b6536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:66:6e', 'vm-uuid': '19abddab-88d5-48b8-b98e-1dedccbb8b7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:42 compute-0 NetworkManager[49836]: <info>  [1772022762.5593] manager: (tapd6f9abb7-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.567 244018 INFO os_vif [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.650 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.650 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.651 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No VIF found with MAC fa:16:3e:af:66:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.651 244018 INFO nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Using config drive
Feb 25 12:32:42 compute-0 nova_compute[244014]: 2026-02-25 12:32:42.683 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.184 244018 INFO nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating config drive at /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.191 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrokglnb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:43 compute-0 ceph-mon[76335]: pgmap v1518: 305 pgs: 305 active+clean; 200 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 25 12:32:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1975701572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.240 244018 DEBUG nova.network.neutron [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updated VIF entry in instance network info cache for port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.241 244018 DEBUG nova.network.neutron [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.260 244018 DEBUG oslo_concurrency.lockutils [req-fa6ef5dc-d210-4e07-b858-73ede4696444 req-a306e598-c739-4f27-9018-e0b051528c02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.326 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrokglnb" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.358 244018 DEBUG nova.storage.rbd_utils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.362 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.420 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.421 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.441 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.531 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.532 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.540 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.540 244018 INFO nova.compute.claims [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.650 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.681 244018 DEBUG oslo_concurrency.processutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.682 244018 INFO nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deleting local config drive /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config because it was imported into RBD.
Feb 25 12:32:43 compute-0 kernel: tapd6f9abb7-ac: entered promiscuous mode
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.7402] manager: (tapd6f9abb7-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Feb 25 12:32:43 compute-0 ovn_controller[147040]: 2026-02-25T12:32:43Z|00749|binding|INFO|Claiming lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for this chassis.
Feb 25 12:32:43 compute-0 ovn_controller[147040]: 2026-02-25T12:32:43Z|00750|binding|INFO|d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536: Claiming fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.762 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 bound to our chassis
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.766 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:32:43 compute-0 systemd-machined[210048]: New machine qemu-97-instance-0000004f.
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a333678-8076-4d5f-b0f5-8e7c1aec777f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.779 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0cf2281-b1 in ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.781 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0cf2281-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.781 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[527e1a74-50a4-4f4d-8635-d0adf1c0ee42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.781 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e677cc9e-f893-470a-9adb-1acb34dd1861]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.788 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a60a50-4b09-4217-91c8-3cf537101465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.798 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f77d70e2-f07a-446a-a470-3bc128b2dc7c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-0000004f.
Feb 25 12:32:43 compute-0 ovn_controller[147040]: 2026-02-25T12:32:43Z|00751|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 ovn-installed in OVS
Feb 25 12:32:43 compute-0 ovn_controller[147040]: 2026-02-25T12:32:43Z|00752|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 up in Southbound
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 systemd-udevd[313351]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.8176] device (tapd6f9abb7-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.8187] device (tapd6f9abb7-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.825 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[812ecad6-eb60-4a8b-9f86-aff881e7af6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.8306] manager: (tapa0cf2281-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.834 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59c2ec50-97ee-408e-a04f-40c85245fe02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.862 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5e515a-de86-479b-9484-086e655ed26c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.865 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d409051b-baa6-4e95-ac7a-fd5862cc6332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.8831] device (tapa0cf2281-b0): carrier: link connected
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.885 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c27c6067-6a96-4a54-8b5e-d98451259394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.897 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05bd91a4-b528-4691-9662-d3deceb0645b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473329, 'reachable_time': 21010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313400, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.907 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b99c709b-cece-4f45-948f-30f213c533c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:1087'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473329, 'tstamp': 473329}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313401, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bde696-9729-4bac-b7b6-a2aba71b0d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473329, 'reachable_time': 21010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313402, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.932 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3e8905-f96b-4fa2-a3ce-1d0aa4cca5a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.978 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c780c7c9-0f37-4527-ae66-4cee6e470ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.979 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.980 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.981 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0cf2281-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 200 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 25 12:32:43 compute-0 NetworkManager[49836]: <info>  [1772022763.9835] manager: (tapa0cf2281-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 kernel: tapa0cf2281-b0: entered promiscuous mode
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.986 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0cf2281-b0, col_values=(('external_ids', {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:32:43 compute-0 ovn_controller[147040]: 2026-02-25T12:32:43Z|00753|binding|INFO|Releasing lport 134cce92-aa02-44fa-b97c-8b46159f1d29 from this chassis (sb_readonly=0)
Feb 25 12:32:43 compute-0 nova_compute[244014]: 2026-02-25 12:32:43.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.991 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.992 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a26b5907-e9cf-48b1-a56a-d1cf6432ed43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.993 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:32:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:43.994 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'env', 'PROCESS_TAG=haproxy-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0cf2281-bf49-498f-8de5-70cdba33cd62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
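The agent renders the haproxy configuration above (bound to 169.254.169.254:80, forwarding to the /var/lib/neutron/metadata_proxy unix socket) and launches it inside the ovnmeta- namespace via rootwrap. A hedged troubleshooting sketch for checking the result — requires root; the ss/curl tools and the request path are assumptions, and the request may return 400/404 when issued from the namespace itself, because the agent identifies instances by the request's source IP:

    import subprocess

    NETNS = 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62'  # from the log line

    # Is haproxy bound on the metadata address inside the namespace?
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'ss', '-ltnp'], check=True)

    # Ask the proxy the way a guest would.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'curl', '-s',
                    'http://169.254.169.254/openstack/latest/meta_data.json'])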
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187761525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:44 compute-0 ceph-mon[76335]: pgmap v1519: 305 pgs: 305 active+clean; 200 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.270 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
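The `ceph df --format=json` call above is how the RBD image backend sizes the shared pool before reporting inventory. A minimal parse sketch using the same invocation as the log line ('stats' keys follow the stable ceph df JSON layout):

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(raw)['stats']
    print(stats['total_bytes'], stats['total_used_bytes'], stats['total_avail_bytes'])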
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.276 244018 DEBUG nova.compute.provider_tree [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.314 244018 DEBUG nova.scheduler.client.report [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
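The inventory dict above is what placement uses to size this node; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Reproducing the arithmetic with the values copied from the log line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        capacity = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2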
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.343 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.344 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.402 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.404 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.409 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022764.4026725, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.409 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Started (Lifecycle Event)
Feb 25 12:32:44 compute-0 podman[313465]: 2026-02-25 12:32:44.325537208 +0000 UTC m=+0.022972201 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.437 244018 INFO nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.442 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.446 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022764.4086227, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.446 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Paused (Lifecycle Event)
Feb 25 12:32:44 compute-0 podman[313465]: 2026-02-25 12:32:44.447248007 +0000 UTC m=+0.144682970 container create 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.463 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.469 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.473 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
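The numeric states in the sync message decode via nova's power_state constants (nova.compute.power_state): the DB still has NOSTATE (0) while libvirt reports PAUSED (3), which is normal mid-spawn.

    # Decodes "current DB power_state: 0, VM power_state: 3" above.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    db_state, vm_state = 0, 3  # values from the log line
    print(f"DB: {POWER_STATES[db_state]}, hypervisor: {POWER_STATES[vm_state]}")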
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.509 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:32:44 compute-0 systemd[1]: Started libpod-conmon-69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191.scope.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.530 244018 DEBUG nova.compute.manager [req-25d4192c-46ed-45fa-b129-087355750748 req-0843ff64-dcce-4fc1-91ab-fd492b8f5423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.531 244018 DEBUG oslo_concurrency.lockutils [req-25d4192c-46ed-45fa-b129-087355750748 req-0843ff64-dcce-4fc1-91ab-fd492b8f5423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.531 244018 DEBUG oslo_concurrency.lockutils [req-25d4192c-46ed-45fa-b129-087355750748 req-0843ff64-dcce-4fc1-91ab-fd492b8f5423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.531 244018 DEBUG oslo_concurrency.lockutils [req-25d4192c-46ed-45fa-b129-087355750748 req-0843ff64-dcce-4fc1-91ab-fd492b8f5423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.532 244018 DEBUG nova.compute.manager [req-25d4192c-46ed-45fa-b129-087355750748 req-0843ff64-dcce-4fc1-91ab-fd492b8f5423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Processing event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.533 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.536 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022764.5362213, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.537 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Resumed (Lifecycle Event)
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.539 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.542 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance spawned successfully.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.543 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:32:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6294ed470bb7fef415eacb48a4705631d3f8b94ba50302f69a55420830910089/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.568 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.574 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.578 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.578 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.578 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.579 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.579 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.580 244018 DEBUG nova.virt.libvirt.driver [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.584 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.585 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.586 244018 INFO nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Creating image(s)
Feb 25 12:32:44 compute-0 podman[313465]: 2026-02-25 12:32:44.602490765 +0000 UTC m=+0.299925768 container init 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.608 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:44 compute-0 podman[313465]: 2026-02-25 12:32:44.61186678 +0000 UTC m=+0.309301783 container start 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.628 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:44 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [NOTICE]   (313512) : New worker (313546) forked
Feb 25 12:32:44 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [NOTICE]   (313512) : Loading success.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.647 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.650 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.672 244018 DEBUG nova.policy [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f6c1c9101c442839ec520dd809c1205', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff49334847464c879338fb12c3b27419', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
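The failed check is expected for a plain member token: network:attach_external_network defaults to admin-only, so nova simply won't let this project attach directly to an external network. A sketch of the same evaluation with oslo.policy, with 'is_admin:True' standing in for the admin-only default (confirm the actual rule in your deployment's policy files):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumption: admin-only default, written here as a simple is_admin check.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))
    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False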
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.676 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.677 244018 INFO nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 7.84 seconds to spawn the instance on the hypervisor.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.677 244018 DEBUG nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.702 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
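The qemu-img probe runs under oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image can't wedge the compute host. A rough stdlib equivalent of that wrapper — a sketch, Linux-specific, with the limits applied only to the child process:

    import json
    import resource
    import subprocess

    def limited_qemu_img_info(path):
        def cap():
            # Same ceilings as the prlimit wrapper above.
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            preexec_fn=cap, capture_output=True, check=True, text=True)
        return json.loads(out.stdout)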
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.702 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.703 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.703 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.723 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.727 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 59845b53-2d16-4e64-9550-ed88157328c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.756 244018 INFO nova.compute.manager [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 8.97 seconds to build instance.
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.774 244018 DEBUG oslo_concurrency.lockutils [None req-e5b47984-244c-40ca-9981-dda6bcffa605 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:44 compute-0 nova_compute[244014]: 2026-02-25 12:32:44.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/187761525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.320 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 59845b53-2d16-4e64-9550-ed88157328c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.390 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] resizing rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
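The import/resize pair copies the cached base image into the vms pool and grows it to the flavor's root-disk size; 1073741824 bytes is exactly 1 GiB. A hedged spot-check of the result, reusing the same cephx identity as the log lines:

    import subprocess

    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB  # the resize target is exactly 1 GiB

    subprocess.run(['rbd', 'info', '--pool', 'vms',
                    '59845b53-2d16-4e64-9550-ed88157328c2_disk',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)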
Feb 25 12:32:45 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.498 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Successfully created port: 79d3b96a-fd0f-4a46-86e7-65281666b00f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
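Port creation here is nova calling neutron on the instance's behalf; an equivalent minimal port can be created directly with openstacksdk. A sketch, assuming a clouds.yaml entry named 'overcloud' and a network named 'private' (both placeholders):

    import openstack

    conn = openstack.connect(cloud='overcloud')      # assumed clouds.yaml entry
    net = conn.network.find_network('private')       # placeholder network name
    port = conn.network.create_port(network_id=net.id)
    print(port.id, port.fixed_ips)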
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.573 244018 DEBUG nova.objects.instance [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'migration_context' on Instance uuid 59845b53-2d16-4e64-9550-ed88157328c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.648 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.649 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Ensure instance console log exists: /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.649 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.649 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:45 compute-0 nova_compute[244014]: 2026-02-25 12:32:45.650 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 200 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:32:46 compute-0 ceph-mon[76335]: pgmap v1520: 305 pgs: 305 active+clean; 200 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.717 244018 DEBUG nova.compute.manager [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.718 244018 DEBUG oslo_concurrency.lockutils [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.718 244018 DEBUG oslo_concurrency.lockutils [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.718 244018 DEBUG oslo_concurrency.lockutils [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.718 244018 DEBUG nova.compute.manager [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:32:46 compute-0 nova_compute[244014]: 2026-02-25 12:32:46.718 244018 WARNING nova.compute.manager [req-b1a32809-2282-4ff0-aebb-3125fca3a92f req-8724e801-ce8e-4024-ae9a-9a9f044075f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state None.
Feb 25 12:32:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:47 compute-0 nova_compute[244014]: 2026-02-25 12:32:47.032 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Successfully created port: f0c3ab6d-106c-4ce9-9e41-200ba59de578 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:32:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2275440743' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:32:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:32:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2275440743' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:32:47 compute-0 nova_compute[244014]: 2026-02-25 12:32:47.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2275440743' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:32:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2275440743' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:32:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:32:48 compute-0 ceph-mon[76335]: pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:32:49 compute-0 nova_compute[244014]: 2026-02-25 12:32:49.851 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:32:51 compute-0 ceph-mon[76335]: pgmap v1522: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.201 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Successfully updated port: 79d3b96a-fd0f-4a46-86e7-65281666b00f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.313 244018 DEBUG nova.compute.manager [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-changed-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.314 244018 DEBUG nova.compute.manager [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Refreshing instance network info cache due to event network-changed-79d3b96a-fd0f-4a46-86e7-65281666b00f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.314 244018 DEBUG oslo_concurrency.lockutils [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.314 244018 DEBUG oslo_concurrency.lockutils [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:51 compute-0 nova_compute[244014]: 2026-02-25 12:32:51.314 244018 DEBUG nova.network.neutron [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Refreshing network info cache for port 79d3b96a-fd0f-4a46-86e7-65281666b00f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:32:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.124 244018 DEBUG nova.network.neutron [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.481 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.481 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.506 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.627 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.628 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.636 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.637 244018 INFO nova.compute.claims [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:32:52 compute-0 nova_compute[244014]: 2026-02-25 12:32:52.841 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.030 244018 DEBUG nova.network.neutron [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.073 244018 DEBUG oslo_concurrency.lockutils [req-2dde4768-effc-4e29-94ab-ff9735e7c5bf req-7a58b51a-0252-48f1-81d4-647d94618024 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:53 compute-0 ceph-mon[76335]: pgmap v1523: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:32:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:32:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991979561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.418 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.424 244018 DEBUG nova.compute.provider_tree [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.584 244018 DEBUG nova.scheduler.client.report [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.758 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.759 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.851 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.852 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.883 244018 INFO nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:32:53 compute-0 nova_compute[244014]: 2026-02-25 12:32:53.920 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:32:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.046 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.047 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.047 244018 INFO nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Creating image(s)
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.066 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.089 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.107 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.111 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.185 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.186 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.187 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.187 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
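The three lockutils lines above are the image-cache critical section: fetch_func_sync serializes concurrent builds of the same base image on a lock named after the image hash (here the fetch is a fast no-op because the image is already cached, hence "held 0.000s"). A sketch of the pattern; external=True and the lock_path are assumptions, not read from this log:

    from oslo_concurrency import lockutils

    IMAGE_HASH = 'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'

    # Assumption: a file-based (external) lock so the critical section
    # holds across worker processes; the lock_path here is illustrative.
    @lockutils.synchronized(IMAGE_HASH, external=True,
                            lock_path='/var/lib/nova/locks')
    def fetch_func_sync():
        pass  # fetch/convert the base image; a no-op in this sketch

    fetch_func_sync()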
Feb 25 12:32:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3991979561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:32:54 compute-0 ceph-mon[76335]: pgmap v1524: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.205 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.208 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.259 244018 DEBUG nova.policy [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84ba7d5e80a44535b25853f3b18e352d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d34ca23436b401fbaeb0b01190a440a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.849 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Successfully updated port: f0c3ab6d-106c-4ce9-9e41-200ba59de578 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.883 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.884 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquired lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.884 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:54 compute-0 nova_compute[244014]: 2026-02-25 12:32:54.968 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:55.014 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:55.015 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:32:55.015 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.068 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] resizing rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
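The rbd import above pushes the cached base image into the vms pool, and the rbd_utils line that follows grows the new image to the flavor's 1 GiB root disk. The same two steps could be done with the Ceph Python bindings (python3-rados / python3-rbd) instead of the rbd CLI; a sketch with values copied from the log and all error handling omitted:

    import rados
    import rbd

    SRC = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    NAME = '2403cab9-191d-42ad-9d3c-3938a6644ae7_disk'

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            data = open(SRC, 'rb').read()  # acceptable for a small base image
            rbd.RBD().create(ioctx, NAME, len(data))  # format 2 is the default
            with rbd.Image(ioctx, NAME) as image:
                image.write(data, 0)
                image.resize(1073741824)  # matches the logged 1 GiB resize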
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.184 244018 DEBUG nova.compute.manager [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-changed-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.185 244018 DEBUG nova.compute.manager [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Refreshing instance network info cache due to event network-changed-f0c3ab6d-106c-4ce9-9e41-200ba59de578. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.185 244018 DEBUG oslo_concurrency.lockutils [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.325 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.412 244018 DEBUG nova.objects.instance [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'migration_context' on Instance uuid 2403cab9-191d-42ad-9d3c-3938a6644ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.460 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.460 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Ensure instance console log exists: /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.461 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.462 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:32:55 compute-0 nova_compute[244014]: 2026-02-25 12:32:55.462 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:32:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:32:56 compute-0 nova_compute[244014]: 2026-02-25 12:32:56.116 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Successfully created port: 47863a41-5044-4493-9137-885a7693c6e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:32:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:32:57 compute-0 ceph-mon[76335]: pgmap v1525: 305 pgs: 305 active+clean; 246 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:32:57 compute-0 ovn_controller[147040]: 2026-02-25T12:32:57Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:32:57 compute-0 ovn_controller[147040]: 2026-02-25T12:32:57Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:32:57 compute-0 nova_compute[244014]: 2026-02-25 12:32:57.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 324 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.280 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Successfully updated port: 47863a41-5044-4493-9137-885a7693c6e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.311 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.311 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquired lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.312 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.447 244018 DEBUG nova.compute.manager [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-changed-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.448 244018 DEBUG nova.compute.manager [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Refreshing instance network info cache due to event network-changed-47863a41-5044-4493-9137-885a7693c6e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.449 244018 DEBUG oslo_concurrency.lockutils [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:32:58 compute-0 nova_compute[244014]: 2026-02-25 12:32:58.683 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:32:59 compute-0 ceph-mon[76335]: pgmap v1526: 305 pgs: 305 active+clean; 324 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.628 244018 DEBUG nova.network.neutron [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Updating instance_info_cache with network_info: [{"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
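The instance_info_cache update above logs the full network_info payload as JSON, which makes it straightforward to post-process when debugging multi-NIC builds like this one. A sketch that summarizes such a blob; the embedded excerpt keeps only the keys the code reads:

    import json

    # A trimmed excerpt of the network_info JSON logged above; only the
    # keys this sketch reads are kept.
    blob = '''[{"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f",
                "address": "fa:16:3e:95:95:65", "devname": "tap79d3b96a-fd",
                "active": false,
                "network": {"subnets": [{"ips": [{"address": "10.100.0.46"}]}]}}]'''

    for vif in json.loads(blob):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips,
              'active' if vif['active'] else 'inactive')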
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.662 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Releasing lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.663 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance network_info: |[{"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.664 244018 DEBUG oslo_concurrency.lockutils [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.665 244018 DEBUG nova.network.neutron [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Refreshing network info cache for port f0c3ab6d-106c-4ce9-9e41-200ba59de578 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.672 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Start _get_guest_xml network_info=[{"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.684 244018 WARNING nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.693 244018 DEBUG nova.virt.libvirt.host [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.694 244018 DEBUG nova.virt.libvirt.host [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.698 244018 DEBUG nova.virt.libvirt.host [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.698 244018 DEBUG nova.virt.libvirt.host [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
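The four host.py lines above show the probe order: nova looks for a CPU controller under cgroups v1, finds none, then finds one under cgroups v2. On a unified (v2) host that second check amounts to reading cgroup.controllers; a sketch assuming the conventional /sys/fs/cgroup mount point, not nova's exact code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On cgroup v2 the available controllers are listed, space-separated,
        # in cgroup.controllers at the hierarchy root.
        controllers = Path(root, 'cgroup.controllers')
        try:
            return 'cpu' in controllers.read_text().split()
        except OSError:
            return False  # not a cgroup-v2 host

    print(has_cgroupsv2_cpu_controller())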
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.699 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.699 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.700 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.700 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.700 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.700 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.701 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.701 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.701 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.701 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.702 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.702 244018 DEBUG nova.virt.hardware [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
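The hardware.py lines from "Flavor limits 0:0:0" down to "Sorted desired topologies" narrow a preferred-0:0:0 / maximum-65536:65536:65536 constraint set to the single 1:1:1 topology that fits one vCPU. A sketch of that enumeration step, mirroring the idea rather than nova's exact code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product equals
        # the vCPU count, within the logged maxima.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log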
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.705 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:32:59 compute-0 nova_compute[244014]: 2026-02-25 12:32:59.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:32:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 324 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 25 12:33:00 compute-0 ceph-mon[76335]: pgmap v1527: 305 pgs: 305 active+clean; 324 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 25 12:33:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463999745' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.350 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
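Both build requests shell out to ceph mon dump to learn the monitor addresses for the librbd connection, and the ceph-mon audit lines above show the command being dispatched. A sketch of the same call and the fields worth pulling out of its JSON; the parsing here is simplified relative to nova's:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

    # The monitor map lists each mon with its name and public address.
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        print(mon['name'], mon.get('public_addr'))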
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.379 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.383 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.570 244018 DEBUG nova.network.neutron [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Updating instance_info_cache with network_info: [{"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.608 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Releasing lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.609 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Instance network_info: |[{"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.609 244018 DEBUG oslo_concurrency.lockutils [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.610 244018 DEBUG nova.network.neutron [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Refreshing network info cache for port 47863a41-5044-4493-9137-885a7693c6e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.616 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Start _get_guest_xml network_info=[{"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.623 244018 WARNING nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.629 244018 DEBUG nova.virt.libvirt.host [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.630 244018 DEBUG nova.virt.libvirt.host [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.634 244018 DEBUG nova.virt.libvirt.host [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.634 244018 DEBUG nova.virt.libvirt.host [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.635 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.635 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.636 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.636 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.637 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.637 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.638 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.638 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.639 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.639 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.639 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.640 244018 DEBUG nova.virt.hardware [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.645 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837012583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.942 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.945 244018 DEBUG nova.virt.libvirt.vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:44Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.945 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.947 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
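The Converting/Converted pair above is nova translating its legacy VIF dict into an os-vif VIFOpenVSwitch object before building the libvirt interface config. A sketch constructing that object directly with the os_vif library; the field values are copied from the "Converted object" line, and exactly which fields nova sets is an assumption:

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...)

    vif = vif_obj.VIFOpenVSwitch(
        id='79d3b96a-fd0f-4a46-86e7-65281666b00f',
        address='fa:16:3e:95:95:65',
        bridge_name='br-int',
        vif_name='tap79d3b96a-fd',
        has_traffic_filtering=True,
        preserve_on_delete=False)
    print(vif)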
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.948 244018 DEBUG nova.virt.libvirt.vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:44Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.949 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.950 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
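The two "Converting VIF" / "Converted object" pairs above are nova_to_osvif_vif translating nova's Neutron-derived network-info dicts into typed os-vif objects before plugging, while get_config (vif.py:563) separately derives the libvirt <interface> config from the same data. A minimal sketch of the resulting VIFOpenVSwitch, built directly with the os_vif object classes; field values are copied from the log lines above, and the Subnet/IP wiring inside Network is elided, so this is illustrative rather than a complete reconstruction:

    # Sketch: the os-vif object that nova_to_osvif_vif produces for the
    # first port. Subnet/FixedIP subobjects are omitted for brevity.
    from os_vif.objects import network as net_obj
    from os_vif.objects import vif as vif_obj

    network = net_obj.Network(
        id='da0791cc-53d2-49de-bb3a-c242872ad639',
        bridge='br-int',
        label='tempest-ServersTestMultiNic-1068033415',
        mtu=1442)

    vif = vif_obj.VIFOpenVSwitch(
        id='79d3b96a-fd0f-4a46-86e7-65281666b00f',
        address='fa:16:3e:95:95:65',
        network=network,
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tap79d3b96a-fd',
        has_traffic_filtering=True,   # "port_filter": true in the details
        preserve_on_delete=False,
        active=False,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='79d3b96a-fd0f-4a46-86e7-65281666b00f'))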
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.952 244018 DEBUG nova.objects.instance [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59845b53-2d16-4e64-9550-ed88157328c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.987 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <uuid>59845b53-2d16-4e64-9550-ed88157328c2</uuid>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <name>instance-00000050</name>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestMultiNic-server-376651105</nova:name>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:32:59</nova:creationTime>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:user uuid="8f6c1c9101c442839ec520dd809c1205">tempest-ServersTestMultiNic-1041606516-project-member</nova:user>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:project uuid="ff49334847464c879338fb12c3b27419">tempest-ServersTestMultiNic-1041606516</nova:project>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:port uuid="79d3b96a-fd0f-4a46-86e7-65281666b00f">
Feb 25 12:33:00 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.46" ipVersion="4"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <nova:port uuid="f0c3ab6d-106c-4ce9-9e41-200ba59de578">
Feb 25 12:33:00 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.1.135" ipVersion="4"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <system>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="serial">59845b53-2d16-4e64-9550-ed88157328c2</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="uuid">59845b53-2d16-4e64-9550-ed88157328c2</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </system>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <os>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </os>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <features>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </features>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/59845b53-2d16-4e64-9550-ed88157328c2_disk">
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/59845b53-2d16-4e64-9550-ed88157328c2_disk.config">
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:95:95:65"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <target dev="tap79d3b96a-fd"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a3:f4:ce"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <target dev="tapf0c3ab6d-10"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/console.log" append="off"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <video>
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </video>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:33:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:33:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:33:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:33:00 compute-0 nova_compute[244014]: </domain>
Feb 25 12:33:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
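The XML block above is the complete domain definition _get_guest_xml hands to libvirt: a q35 machine (from image_hw_machine_type), the m1.nano flavor's 128 MiB / 1 vCPU, both the root disk and the config-drive cdrom served over RBD from the vms pool, and one ethernet <interface> per converted VIF with MTU 1442 and a fixed tap device name. Nova's driver wraps the libvirt calls itself, but as a sketch of the two underlying operations this XML feeds into, using libvirt-python (the filename here is a hypothetical place the XML was saved):

    # Sketch: defining and starting a domain from the XML above.
    import libvirt

    xml = open('instance-00000050.xml').read()  # the <domain> document above
    conn = libvirt.open('qemu:///system')       # local system hypervisor
    dom = conn.defineXML(xml)                   # persist the definition
    dom.create()                                # boot instance-00000050
    print(dom.name(), dom.UUIDString())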
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.989 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Preparing to wait for external event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.989 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.990 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.990 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.990 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Preparing to wait for external event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.991 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.991 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.992 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
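Before touching OVS, the compute manager registers a waiter for each expected network-vif-plugged-<port-id> external event, serialized under the per-instance "<uuid>-events" lock seen in the lockutils lines above. Neutron delivers those events through the nova API once OVN reports the ports up, and nova blocks on them before finishing the boot; registering first closes the race where the event arrives before anyone is waiting. A rough analogue of that register-then-wait pattern, using threading.Event as a stand-in for nova's internal event objects (names here are illustrative, not nova's API):

    # Sketch: register-before-act pattern behind prepare_for_instance_event.
    import threading

    waiters = {}            # (instance_uuid, event_name) -> threading.Event
    lock = threading.Lock() # mirrors the "<uuid>-events" lock in the log

    def prepare(instance_uuid, event_name):
        with lock:
            return waiters.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver(instance_uuid, event_name):
        with lock:
            ev = waiters.get((instance_uuid, event_name))
        if ev:
            ev.set()        # e.g. neutron sends network-vif-plugged-<port>

    w = prepare('59845b53-2d16-4e64-9550-ed88157328c2',
                'network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f')
    # ... plug the VIF, then block until the event arrives:
    w.wait(timeout=300)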
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.993 244018 DEBUG nova.virt.libvirt.vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:44Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.993 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.994 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.995 244018 DEBUG os_vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:00 compute-0 nova_compute[244014]: 2026-02-25 12:33:00.998 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.002 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79d3b96a-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.003 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79d3b96a-fd, col_values=(('external_ids', {'iface-id': '79d3b96a-fd0f-4a46-86e7-65281666b00f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:95:65', 'vm-uuid': '59845b53-2d16-4e64-9550-ed88157328c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 NetworkManager[49836]: <info>  [1772022781.0057] manager: (tap79d3b96a-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.004 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.014 244018 INFO os_vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd')
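The plug itself is a short, idempotent OVSDB transaction sequence: AddBridgeCommand with may_exist=True is a no-op here ("Transaction caused no change") because br-int already exists, then AddPortCommand creates tap79d3b96a-fd on br-int and DbSetCommand writes the external_ids that OVN binds on (iface-id set to the Neutron port UUID, plus attached-mac and vm-uuid). NetworkManager merely observes the new OVS port appearing. A rough ovs-vsctl equivalent of the logged transaction, run through subprocess purely for illustration (os-vif speaks OVSDB directly via ovsdbapp rather than shelling out):

    # Sketch: command-line equivalent of the AddPort + DbSet transaction.
    import subprocess

    port = 'tap79d3b96a-fd'
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
         'set', 'Interface', port,
         'external_ids:iface-id=79d3b96a-fd0f-4a46-86e7-65281666b00f',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:95:95:65',
         'external_ids:vm-uuid=59845b53-2d16-4e64-9550-ed88157328c2'],
        check=True)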
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.015 244018 DEBUG nova.virt.libvirt.vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:44Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.016 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.017 244018 DEBUG nova.network.os_vif_util [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.017 244018 DEBUG os_vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.023 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.023 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.027 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0c3ab6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.028 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0c3ab6d-10, col_values=(('external_ids', {'iface-id': 'f0c3ab6d-106c-4ce9-9e41-200ba59de578', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:f4:ce', 'vm-uuid': '59845b53-2d16-4e64-9550-ed88157328c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 NetworkManager[49836]: <info>  [1772022781.0302] manager: (tapf0c3ab6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.029 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.037 244018 INFO os_vif [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10')
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.107 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.109 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.109 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No VIF found with MAC fa:16:3e:95:95:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.109 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] No VIF found with MAC fa:16:3e:a3:f4:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.110 244018 INFO nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Using config drive
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.138 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/463999745' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2837012583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527830221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.312 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.342 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.348 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.606 244018 INFO nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Creating config drive at /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.613 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0ncpu63v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.715 244018 DEBUG nova.network.neutron [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Updated VIF entry in instance network info cache for port f0c3ab6d-106c-4ce9-9e41-200ba59de578. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.716 244018 DEBUG nova.network.neutron [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Updating instance_info_cache with network_info: [{"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.738 244018 DEBUG oslo_concurrency.lockutils [req-b0422562-7957-4ba5-8144-1334989924b0 req-6e9c3285-0600-4dab-b8c5-c1c0385391e6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-59845b53-2d16-4e64-9550-ed88157328c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.754 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0ncpu63v" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
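With both VIFs plugged, the driver builds the config drive ("Using config drive" above): mkisofs packs the metadata tree staged under /tmp/tmp0ncpu63v into an ISO labelled config-2, the volume label that cloud-init and similar guest agents probe to detect an OpenStack config drive. The publisher string is almost certainly a single argv element; the process-utils logger prints the argument list joined by spaces, which is why it appears unquoted in the log. The same invocation reconstructed as an argv list:

    # Sketch: the logged mkisofs call with arguments split as likely passed.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/'
               '59845b53-2d16-4e64-9550-ed88157328c2/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r',
         '-V', 'config-2',        # volume label guests probe for
         '/tmp/tmp0ncpu63v'],     # staging dir holding the metadata tree
        check=True)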
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.787 244018 DEBUG nova.storage.rbd_utils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] rbd image 59845b53-2d16-4e64-9550-ed88157328c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.791 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config 59845b53-2d16-4e64-9550-ed88157328c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
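Because the instance's storage is RBD-backed (consistent with the rbd <source> elements in the domain XML), the freshly built ISO is then imported into the Ceph vms pool as <uuid>_disk.config so the SATA cdrom defined earlier can attach it over the network. The preceding "rbd image ... does not exist" check and the "ceph mon dump --format=json" calls are rbd_utils resolving monitor addresses and confirming the image is absent before importing. The import as a plain command, mirroring the logged subprocess:

    # Sketch: the logged "rbd import" reconstructed as an argv list.
    import subprocess

    subprocess.run(
        ['rbd', 'import',
         '--pool', 'vms',
         '/var/lib/nova/instances/'
         '59845b53-2d16-4e64-9550-ed88157328c2/disk.config',
         '59845b53-2d16-4e64-9550-ed88157328c2_disk.config',
         '--image-format=2',      # format 2, the layering-capable RBD format
         '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)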
Feb 25 12:33:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387359851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.899 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.902 244018 DEBUG nova.virt.libvirt.vif [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1403180466',display_name='tempest-ServersNegativeTestJSON-server-1403180466',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1403180466',id=81,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-d0jjddz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:53Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=2403cab9-191d-42ad-9d3c-3938a6644ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.903 244018 DEBUG nova.network.os_vif_util [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.905 244018 DEBUG nova.network.os_vif_util [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.907 244018 DEBUG nova.objects.instance [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2403cab9-191d-42ad-9d3c-3938a6644ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.933 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <uuid>2403cab9-191d-42ad-9d3c-3938a6644ae7</uuid>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <name>instance-00000051</name>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersNegativeTestJSON-server-1403180466</nova:name>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:33:00</nova:creationTime>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:user uuid="84ba7d5e80a44535b25853f3b18e352d">tempest-ServersNegativeTestJSON-1613719120-project-member</nova:user>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:project uuid="0d34ca23436b401fbaeb0b01190a440a">tempest-ServersNegativeTestJSON-1613719120</nova:project>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <nova:port uuid="47863a41-5044-4493-9137-885a7693c6e9">
Feb 25 12:33:01 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <system>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="serial">2403cab9-191d-42ad-9d3c-3938a6644ae7</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="uuid">2403cab9-191d-42ad-9d3c-3938a6644ae7</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </system>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <os>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </os>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <features>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </features>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2403cab9-191d-42ad-9d3c-3938a6644ae7_disk">
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config">
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:01 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:d5:4f:cc"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <target dev="tap47863a41-50"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/console.log" append="off"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <video>
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </video>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:33:01 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:33:01 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:33:01 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:33:01 compute-0 nova_compute[244014]: </domain>
Feb 25 12:33:01 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.937 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Preparing to wait for external event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.938 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.938 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.938 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.939 244018 DEBUG nova.virt.libvirt.vif [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1403180466',display_name='tempest-ServersNegativeTestJSON-server-1403180466',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1403180466',id=81,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-d0jjddz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:32:53Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=2403cab9-191d-42ad-9d3c-3938a6644ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.940 244018 DEBUG nova.network.os_vif_util [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.941 244018 DEBUG nova.network.os_vif_util [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.941 244018 DEBUG os_vif [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.942 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.943 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.943 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.948 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47863a41-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.948 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47863a41-50, col_values=(('external_ids', {'iface-id': '47863a41-5044-4493-9137-885a7693c6e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:4f:cc', 'vm-uuid': '2403cab9-191d-42ad-9d3c-3938a6644ae7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 NetworkManager[49836]: <info>  [1772022781.9518] manager: (tap47863a41-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:01 compute-0 nova_compute[244014]: 2026-02-25 12:33:01.960 244018 INFO os_vif [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50')
Feb 25 12:33:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:33:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.100 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.101 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.101 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No VIF found with MAC fa:16:3e:d5:4f:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.103 244018 INFO nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Using config drive
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.129 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.255 244018 DEBUG nova.network.neutron [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Updated VIF entry in instance network info cache for port 47863a41-5044-4493-9137-885a7693c6e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.255 244018 DEBUG nova.network.neutron [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Updating instance_info_cache with network_info: [{"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3527830221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1387359851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:02 compute-0 ceph-mon[76335]: pgmap v1528: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.268 244018 DEBUG oslo_concurrency.processutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config 59845b53-2d16-4e64-9550-ed88157328c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.269 244018 INFO nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Deleting local config drive /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2/disk.config because it was imported into RBD.
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.275 244018 DEBUG oslo_concurrency.lockutils [req-6fe30d20-1250-4f1f-be77-66ba09a9751b req-fe88dd22-390a-437f-9bdc-830ee7c4f0e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2403cab9-191d-42ad-9d3c-3938a6644ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3269] manager: (tap79d3b96a-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Feb 25 12:33:02 compute-0 kernel: tap79d3b96a-fd: entered promiscuous mode
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00754|binding|INFO|Claiming lport 79d3b96a-fd0f-4a46-86e7-65281666b00f for this chassis.
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00755|binding|INFO|79d3b96a-fd0f-4a46-86e7-65281666b00f: Claiming fa:16:3e:95:95:65 10.100.0.46
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.343 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:95:65 10.100.0.46'], port_security=['fa:16:3e:95:95:65 10.100.0.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.46/24', 'neutron:device_id': '59845b53-2d16-4e64-9550-ed88157328c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da0791cc-53d2-49de-bb3a-c242872ad639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6ac2194-6a7c-4681-8fb8-ecf102671790, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=79d3b96a-fd0f-4a46-86e7-65281666b00f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3461] manager: (tapf0c3ab6d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.346 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 79d3b96a-fd0f-4a46-86e7-65281666b00f in datapath da0791cc-53d2-49de-bb3a-c242872ad639 bound to our chassis
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.348 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da0791cc-53d2-49de-bb3a-c242872ad639
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.359 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.359 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.359 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e00af40-ab90-4cc9-b7e6-15198d045ce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.360 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda0791cc-51 in ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.363 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda0791cc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1d7408-3850-4210-b23d-77b48d799b97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf54f4f-9b09-4525-b99f-e527c2b517e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 kernel: tapf0c3ab6d-10: entered promiscuous mode
Feb 25 12:33:02 compute-0 systemd-udevd[314088]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:02 compute-0 systemd-udevd[314087]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.378 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[552b744a-e725-4286-9e2e-5e4335151879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00756|binding|INFO|Claiming lport f0c3ab6d-106c-4ce9-9e41-200ba59de578 for this chassis.
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00757|binding|INFO|f0c3ab6d-106c-4ce9-9e41-200ba59de578: Claiming fa:16:3e:a3:f4:ce 10.100.1.135
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00758|binding|INFO|Setting lport 79d3b96a-fd0f-4a46-86e7-65281666b00f ovn-installed in OVS
Feb 25 12:33:02 compute-0 systemd-machined[210048]: New machine qemu-98-instance-00000050.
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00759|binding|INFO|Setting lport 79d3b96a-fd0f-4a46-86e7-65281666b00f up in Southbound
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.390 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3928] device (tap79d3b96a-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3936] device (tap79d3b96a-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:02 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000050.
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.392 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:f4:ce 10.100.1.135'], port_security=['fa:16:3e:a3:f4:ce 10.100.1.135'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.135/24', 'neutron:device_id': '59845b53-2d16-4e64-9550-ed88157328c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b7a7eb3-b799-4043-93af-927c57bf0214', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1c875bc-5dee-4820-a499-dbcdbc6e8e2b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f0c3ab6d-106c-4ce9-9e41-200ba59de578) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3990] device (tapf0c3ab6d-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.3999] device (tapf0c3ab6d-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00760|binding|INFO|Setting lport f0c3ab6d-106c-4ce9-9e41-200ba59de578 ovn-installed in OVS
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00761|binding|INFO|Setting lport f0c3ab6d-106c-4ce9-9e41-200ba59de578 up in Southbound
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.406 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7b727b-0318-4881-9685-f283109a33e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.438 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a4921d-7626-4ac8-89f0-f7ec69c8e218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.443 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b96b46-c445-4615-9971-830f8e02a0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 systemd-udevd[314095]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.4445] manager: (tapda0791cc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.468 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b69ef698-5307-4568-ade7-d30042fbc665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[29157bcc-743e-47cb-8a4d-9bcf8c4bc189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.487 244018 INFO nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Creating config drive at /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.492 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp5ptgw9c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.4949] device (tapda0791cc-50): carrier: link connected
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.499 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cbce06-8138-41ab-a5ce-e3bec5a744e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.544 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.545 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.560 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.560 244018 INFO nova.compute.claims [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06aae009-050d-4db5-8f29-ca1952fbbdfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda0791cc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0e:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475190, 'reachable_time': 40155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314126, 'error': None, 'target': 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.624 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp5ptgw9c" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.636 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0122add1-781c-4c7c-a718-1d1c536b4949]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:e08'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475190, 'tstamp': 475190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314129, 'error': None, 'target': 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc7d033-03ea-4423-ab02-e05a3c90be59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda0791cc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0e:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475190, 'reachable_time': 40155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314137, 'error': None, 'target': 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.656 244018 DEBUG nova.storage.rbd_utils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.665 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75941d38-b070-4d15-b4f0-b44544906659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8109e14f-a511-422c-adfa-3886293f5740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda0791cc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda0791cc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:02 compute-0 kernel: tapda0791cc-50: entered promiscuous mode
Feb 25 12:33:02 compute-0 NetworkManager[49836]: <info>  [1772022782.7218] manager: (tapda0791cc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.728 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda0791cc-50, col_values=(('external_ids', {'iface-id': 'ab3b0de1-d7b9-41dd-ba1d-5881af3dd783'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
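The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map onto the public ovsdbapp API roughly as below; the database endpoint is an assumption, while the bridge, port and iface-id values are taken from the log:

    # Illustrative ovsdbapp usage; 'unix:/run/openvswitch/db.sock' is an
    # assumed endpoint.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with ovs.transaction(check_error=True) as txn:
        # Same sequence as the logged commands: move the tap device from
        # br-ex (if present) to br-int, then tag it with the Neutron port
        # id so ovn-controller can bind the logical port.
        txn.add(ovs.del_port('tapda0791cc-50', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapda0791cc-50', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapda0791cc-50',
            ('external_ids', {'iface-id': 'ab3b0de1-d7b9-41dd-ba1d-5881af3dd783'})))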
Feb 25 12:33:02 compute-0 ovn_controller[147040]: 2026-02-25T12:33:02Z|00762|binding|INFO|Releasing lport ab3b0de1-d7b9-41dd-ba1d-5881af3dd783 from this chassis (sb_readonly=0)
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.740 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da0791cc-53d2-49de-bb3a-c242872ad639.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da0791cc-53d2-49de-bb3a-c242872ad639.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
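The "[Errno 2] No such file or directory" above is expected rather than a fault: the agent probes for an existing haproxy pid file and treats a missing one as "no proxy running for this network yet". A hypothetical re-creation of that tolerant read:

    # Assumed behaviour, paraphrasing neutron.agent.linux.utils.
    def get_value_from_file(path, converter=int):
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            # ENOENT lands here: the caller will spawn a fresh haproxy.
            return None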
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.753 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1e3829-511b-4fa2-b0a3-9850f95b2a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.754 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-da0791cc-53d2-49de-bb3a-c242872ad639
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/da0791cc-53d2-49de-bb3a-c242872ad639.pid.haproxy
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID da0791cc-53d2-49de-bb3a-c242872ad639
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:33:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:02.754 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639', 'env', 'PROCESS_TAG=haproxy-da0791cc-53d2-49de-bb3a-c242872ad639', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da0791cc-53d2-49de-bb3a-c242872ad639.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
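Stripped of neutron-rootwrap and the PROCESS_TAG marker, the command above amounts to launching haproxy inside the ovnmeta- namespace with the config just rendered. An equivalent sketch (must run as root; real deployments keep the rootwrap indirection):

    import subprocess

    ns = 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'da0791cc-53d2-49de-bb3a-c242872ad639.conf')
    # "daemon" in the config makes haproxy fork and detach on its own,
    # so run() returns as soon as the master process has loaded.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg], check=True)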
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.790 244018 DEBUG nova.compute.manager [req-ced48323-5b23-4967-887b-8f8964594523 req-9c434a5a-b10f-4f54-a5c0-45cdaa697b0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.791 244018 DEBUG oslo_concurrency.lockutils [req-ced48323-5b23-4967-887b-8f8964594523 req-9c434a5a-b10f-4f54-a5c0-45cdaa697b0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.791 244018 DEBUG oslo_concurrency.lockutils [req-ced48323-5b23-4967-887b-8f8964594523 req-9c434a5a-b10f-4f54-a5c0-45cdaa697b0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.791 244018 DEBUG oslo_concurrency.lockutils [req-ced48323-5b23-4967-887b-8f8964594523 req-9c434a5a-b10f-4f54-a5c0-45cdaa697b0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.791 244018 DEBUG nova.compute.manager [req-ced48323-5b23-4967-887b-8f8964594523 req-9c434a5a-b10f-4f54-a5c0-45cdaa697b0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Processing event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:02 compute-0 nova_compute[244014]: 2026-02-25 12:33:02.820 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.121 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022783.1212523, 59845b53-2d16-4e64-9550-ed88157328c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] VM Started (Lifecycle Event)
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.145 244018 DEBUG oslo_concurrency.processutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config 2403cab9-191d-42ad-9d3c-3938a6644ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.146 244018 INFO nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Deleting local config drive /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7/disk.config because it was imported into RBD.
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.185 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:03 compute-0 kernel: tap47863a41-50: entered promiscuous mode
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.1924] manager: (tap47863a41-50): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Feb 25 12:33:03 compute-0 ovn_controller[147040]: 2026-02-25T12:33:03Z|00763|binding|INFO|Claiming lport 47863a41-5044-4493-9137-885a7693c6e9 for this chassis.
Feb 25 12:33:03 compute-0 ovn_controller[147040]: 2026-02-25T12:33:03Z|00764|binding|INFO|47863a41-5044-4493-9137-885a7693c6e9: Claiming fa:16:3e:d5:4f:cc 10.100.0.13
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:03 compute-0 systemd-udevd[314112]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:03 compute-0 podman[314265]: 2026-02-25 12:33:03.110187804 +0000 UTC m=+0.016113227 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:33:03 compute-0 ovn_controller[147040]: 2026-02-25T12:33:03Z|00765|binding|INFO|Setting lport 47863a41-5044-4493-9137-885a7693c6e9 ovn-installed in OVS
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.214 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.216 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022783.1224127, 59845b53-2d16-4e64-9550-ed88157328c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.216 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] VM Paused (Lifecycle Event)
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.2184] device (tap47863a41-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.2192] device (tap47863a41-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:03 compute-0 systemd-machined[210048]: New machine qemu-99-instance-00000051.
Feb 25 12:33:03 compute-0 ovn_controller[147040]: 2026-02-25T12:33:03Z|00766|binding|INFO|Setting lport 47863a41-5044-4493-9137-885a7693c6e9 up in Southbound
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.240 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:4f:cc 10.100.0.13'], port_security=['fa:16:3e:d5:4f:cc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2403cab9-191d-42ad-9d3c-3938a6644ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47863a41-5044-4493-9137-885a7693c6e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:03 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000051.
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.289 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.292 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.328 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] During sync_power_state the instance has a pending task (spawning). Skip.
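The two lines above show why the Paused event is dropped: power_state 0 is NOSTATE and 3 is PAUSED in nova's encoding, but a pending task takes precedence. A simplified paraphrase of the decision (assumed logic, not nova's literal code):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # nova.compute.power_state values

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An in-flight task (here 'spawning') owns the instance;
            # acting on transient libvirt events would race with it.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))   # -> skip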
Feb 25 12:33:03 compute-0 podman[314265]: 2026-02-25 12:33:03.338787115 +0000 UTC m=+0.244712498 container create 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:33:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935166498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.405 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
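Because --format=json was requested, the output of that ceph df run parses directly; a minimal consumer, assuming the standard "ceph df" JSON schema with a top-level "stats" object:

    import json, subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)['stats']
    # Cluster-wide total vs. available bytes, the figures nova's RBD
    # driver feeds into capacity reporting.
    print(stats['total_bytes'], stats['total_avail_bytes'])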
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.414 244018 DEBUG nova.compute.provider_tree [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.434 244018 DEBUG nova.scheduler.client.report [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
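Those inventory numbers determine the capacity placement exposes via the standard formula (total - reserved) * allocation_ratio:

    # Usable capacity per resource class, from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2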
Feb 25 12:33:03 compute-0 systemd[1]: Started libpod-conmon-7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e.scope.
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.465 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.467 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:33:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2935166498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c02ea3f32024fbbaae96565996febea7e015299a6f678f50145fcf01963756f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:03 compute-0 podman[314265]: 2026-02-25 12:33:03.540972979 +0000 UTC m=+0.446898432 container init 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.540 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.541 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:33:03 compute-0 podman[314265]: 2026-02-25 12:33:03.54986078 +0000 UTC m=+0.455786183 container start 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.567 244018 INFO nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:33:03 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [NOTICE]   (314306) : New worker (314308) forked
Feb 25 12:33:03 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [NOTICE]   (314306) : Loading success.
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.592 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.645 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f0c3ab6d-106c-4ce9-9e41-200ba59de578 in datapath 5b7a7eb3-b799-4043-93af-927c57bf0214 unbound from our chassis
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.648 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b7a7eb3-b799-4043-93af-927c57bf0214
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.658 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae74614f-c925-4c71-8cf0-48a490aba5b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.659 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b7a7eb3-b1 in ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.661 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b7a7eb3-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
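neutron's privileged ip_lib sits on top of pyroute2, and the "not found" probe above corresponds to an empty link_lookup() result. A minimal sketch in the root namespace (namespace switching omitted):

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # link_lookup returns a list of interface indexes; an empty list
        # means the device does not exist yet and the veth pair is created.
        if not ipr.link_lookup(ifname='tap5b7a7eb3-b0'):
            print('interface absent; agent will create it')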
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01cad561-70ac-4e05-ab94-6dcb79b02c67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.662 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7503337b-5a48-473e-9e4e-49f31da7f1c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.672 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2f638181-e033-461f-a84f-6affd9df761a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.693 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.694 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.695 244018 INFO nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Creating image(s)
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.701 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77e9df62-0bf3-4b1d-88ea-d1d6d3c39a2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.722 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.734 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e55ac499-9593-4a87-ba5e-6ca8a45b4f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.739 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0318c3-3b72-4503-b17c-dfe960ca146c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.7401] manager: (tap5b7a7eb3-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.753 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.769 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cf98a52b-f6ff-49c0-aee4-0bfcea449f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.772 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d1711d05-d8c2-4a44-8f66-7f5c69e8f8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.7940] device (tap5b7a7eb3-b0): carrier: link connected
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.795 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.797 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4c069e1a-fd73-450f-b0e7-830e00c84e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.800 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.813 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc53996-52f6-47e6-92ed-a88f09b95a3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b7a7eb3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475320, 'reachable_time': 36827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314381, 'error': None, 'target': 'ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.829 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75a3f2-5496-4d7b-8a39-4c2d7d7852c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:f92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475320, 'tstamp': 475320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314383, 'error': None, 'target': 'ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fa4d0c-2786-4e04-ad60-008550707b5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b7a7eb3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:0f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475320, 'reachable_time': 36827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314384, 'error': None, 'target': 'ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5663245b-bb34-471b-a79a-71557eb79b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.880 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
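The prlimit wrapper in that command caps the helper's address space at 1 GiB and its CPU time at 30 s, so a hostile image cannot exhaust the host during inspection. oslo.concurrency builds that "-m oslo_concurrency.prlimit" prefix itself when handed a ProcessLimits object; a sketch of the same call:

    from oslo_concurrency import processutils

    # env_variables here replaces the environment wholesale; the logged
    # command reaches the same effect via "env LC_ALL=C LANG=C".
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30),
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})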
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.880 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.881 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.881 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.902 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.908 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6e9252ef-013b-4247-8b3c-494150e0785d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f43dd3-f12d-4550-b11f-840129a98702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b7a7eb3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.920 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.921 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b7a7eb3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:03 compute-0 kernel: tap5b7a7eb3-b0: entered promiscuous mode
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.925 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b7a7eb3-b0, col_values=(('external_ids', {'iface-id': '5255d64e-2248-4ac6-b19a-51f0b2828673'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:03 compute-0 NetworkManager[49836]: <info>  [1772022783.9258] manager: (tap5b7a7eb3-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Feb 25 12:33:03 compute-0 ovn_controller[147040]: 2026-02-25T12:33:03Z|00767|binding|INFO|Releasing lport 5255d64e-2248-4ac6-b19a-51f0b2828673 from this chassis (sb_readonly=0)
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.929 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b7a7eb3-b799-4043-93af-927c57bf0214.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b7a7eb3-b799-4043-93af-927c57bf0214.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.932 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99c6b81c-8505-4679-9ba0-05237399acb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.933 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-5b7a7eb3-b799-4043-93af-927c57bf0214
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/5b7a7eb3-b799-4043-93af-927c57bf0214.pid.haproxy
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 5b7a7eb3-b799-4043-93af-927c57bf0214
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
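Per the config above, the proxy binds 169.254.169.254:80 inside the namespace and relays to the metadata agent over the unix socket /var/lib/neutron/metadata_proxy, stamping each request with X-OVN-Network-ID so the agent can resolve the caller's network. An illustrative client speaking to such a socket (not neutron code; header value from the log):

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self.path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        'X-OVN-Network-ID': '5b7a7eb3-b799-4043-93af-927c57bf0214'})
    print(conn.getresponse().status)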
Feb 25 12:33:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:03.933 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214', 'env', 'PROCESS_TAG=haproxy-5b7a7eb3-b799-4043-93af-927c57bf0214', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b7a7eb3-b799-4043-93af-927c57bf0214.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:33:03 compute-0 nova_compute[244014]: 2026-02-25 12:33:03.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.101 244018 DEBUG nova.policy [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '20b3cb5baaa440b08f43f40d533e26a7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf74cfdf1cf045d59e889e7d53288e55', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
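That DEBUG line records an oslo.policy denial: the caller holds only the reader and member roles, so an admin-gated rule fails and nova proceeds without attaching external networks. A toy reproduction with an illustrative admin-only default, not nova's registered rule string:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))
    creds = {'roles': ['reader', 'member'],
             'project_id': 'cf74cfdf1cf045d59e889e7d53288e55'}
    # Returns False: no 'admin' role in creds, mirroring the log above.
    print(enforcer.enforce('network:attach_external_network', {}, creds))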
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.200 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022784.20015, 2403cab9-191d-42ad-9d3c-3938a6644ae7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.201 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] VM Started (Lifecycle Event)
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.225 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.229 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022784.2008035, 2403cab9-191d-42ad-9d3c-3938a6644ae7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.229 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] VM Paused (Lifecycle Event)
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.268 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.271 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.296 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:04 compute-0 podman[314496]: 2026-02-25 12:33:04.240674054 +0000 UTC m=+0.016907479 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:33:04 compute-0 podman[314496]: 2026-02-25 12:33:04.531075852 +0000 UTC m=+0.307309307 container create 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:33:04 compute-0 ceph-mon[76335]: pgmap v1529: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:33:04 compute-0 systemd[1]: Started libpod-conmon-654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d.scope.
Feb 25 12:33:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ccf2b4c5617da289aa9e01a09063588f6a81864eadee02aeeb108bb2057e33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.720 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6e9252ef-013b-4247-8b3c-494150e0785d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:04 compute-0 podman[314496]: 2026-02-25 12:33:04.72633861 +0000 UTC m=+0.502572035 container init 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:33:04 compute-0 podman[314496]: 2026-02-25 12:33:04.732192636 +0000 UTC m=+0.508426051 container start 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.745 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Successfully created port: 69680ef1-2d32-45a1-8134-6c610a5b7217 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:33:04 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [NOTICE]   (314530) : New worker (314535) forked
Feb 25 12:33:04 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [NOTICE]   (314530) : Loading success.
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.788 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] resizing rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.802 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47863a41-5044-4493-9137-885a7693c6e9 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.804 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.816 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f8680ef0-5f61-4bc7-90e5-d32886f63d2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.840 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5606e3e0-b86b-474f-9cae-0a5e93e51498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.842 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[476b3666-d7a3-4c66-abed-bd241ab54154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.867 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce4120a-9e90-4ff3-b9ed-ccd82779b080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.884 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7278f5-6a05-4d82-88c0-23e883727943]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473329, 'reachable_time': 21010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314585, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.897 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc1a084-1dff-4a5a-8957-ed51ab0eb0d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0cf2281-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473335, 'tstamp': 473335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314586, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0cf2281-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473338, 'tstamp': 473338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314586, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.899 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.901 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.902 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0cf2281-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.903 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.903 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0cf2281-b0, col_values=(('external_ids', {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:04.904 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.970 244018 DEBUG nova.objects.instance [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e9252ef-013b-4247-8b3c-494150e0785d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.975 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.976 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.976 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.977 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.977 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No event matching network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f in dict_keys([('network-vif-plugged', 'f0c3ab6d-106c-4ce9-9e41-200ba59de578')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.977 244018 WARNING nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received unexpected event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f for instance with vm_state building and task_state spawning.
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.977 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.978 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.978 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.978 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.978 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Processing event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.979 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.979 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.979 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.979 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.980 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No waiting events found dispatching network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.980 244018 WARNING nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received unexpected event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 for instance with vm_state building and task_state spawning.
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.980 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.981 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.981 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.981 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.981 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Processing event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.982 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.982 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.982 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.982 244018 DEBUG oslo_concurrency.lockutils [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.983 244018 DEBUG nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] No waiting events found dispatching network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.983 244018 WARNING nova.compute.manager [req-213b16fd-da47-4b32-92e8-87e5dd840001 req-67ca7dac-9c11-4828-922c-89d695a896f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received unexpected event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 for instance with vm_state building and task_state spawning.
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.984 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.985 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.989 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022784.9891522, 2403cab9-191d-42ad-9d3c-3938a6644ae7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.990 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] VM Resumed (Lifecycle Event)
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.992 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.992 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.997 244018 INFO nova.virt.libvirt.driver [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance spawned successfully.
Feb 25 12:33:04 compute-0 nova_compute[244014]: 2026-02-25 12:33:04.998 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.000 244018 INFO nova.virt.libvirt.driver [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Instance spawned successfully.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.001 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.023 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.023 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Ensure instance console log exists: /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.024 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.024 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.025 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.029 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.035 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.038 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.039 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.039 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.040 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.040 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.041 244018 DEBUG nova.virt.libvirt.driver [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.045 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.046 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.046 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.047 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.047 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.048 244018 DEBUG nova.virt.libvirt.driver [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.113 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.113 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022784.9893098, 59845b53-2d16-4e64-9550-ed88157328c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.114 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] VM Resumed (Lifecycle Event)
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.145 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.148 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.154 244018 INFO nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Took 20.57 seconds to spawn the instance on the hypervisor.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.154 244018 DEBUG nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.157 244018 INFO nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Took 11.11 seconds to spawn the instance on the hypervisor.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.158 244018 DEBUG nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.168 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.230 244018 INFO nova.compute.manager [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Took 12.65 seconds to build instance.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.236 244018 INFO nova.compute.manager [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Took 21.73 seconds to build instance.
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.264 244018 DEBUG oslo_concurrency.lockutils [None req-3b2a562f-ca06-4580-bbaf-7a047139c138 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.271 244018 DEBUG oslo_concurrency.lockutils [None req-431537ea-0eeb-4bbc-9eb0-75cb72c2491a 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.624 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Successfully updated port: 69680ef1-2d32-45a1-8134-6c610a5b7217 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.650 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.650 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquired lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.651 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.738 244018 DEBUG nova.compute.manager [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-changed-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.738 244018 DEBUG nova.compute.manager [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Refreshing instance network info cache due to event network-changed-69680ef1-2d32-45a1-8134-6c610a5b7217. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.738 244018 DEBUG oslo_concurrency.lockutils [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:05 compute-0 nova_compute[244014]: 2026-02-25 12:33:05.805 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:33:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.444 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.445 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.445 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.445 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.446 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.447 244018 INFO nova.compute.manager [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Terminating instance
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.449 244018 DEBUG nova.compute.manager [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:33:06 compute-0 kernel: tap47863a41-50 (unregistering): left promiscuous mode
Feb 25 12:33:06 compute-0 NetworkManager[49836]: <info>  [1772022786.5125] device (tap47863a41-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:33:06 compute-0 ovn_controller[147040]: 2026-02-25T12:33:06Z|00768|binding|INFO|Releasing lport 47863a41-5044-4493-9137-885a7693c6e9 from this chassis (sb_readonly=0)
Feb 25 12:33:06 compute-0 ovn_controller[147040]: 2026-02-25T12:33:06Z|00769|binding|INFO|Setting lport 47863a41-5044-4493-9137-885a7693c6e9 down in Southbound
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.533 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 ovn_controller[147040]: 2026-02-25T12:33:06Z|00770|binding|INFO|Removing iface tap47863a41-50 ovn-installed in OVS
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.535 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.540 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:4f:cc 10.100.0.13'], port_security=['fa:16:3e:d5:4f:cc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2403cab9-191d-42ad-9d3c-3938a6644ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47863a41-5044-4493-9137-885a7693c6e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.542 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47863a41-5044-4493-9137-885a7693c6e9 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.544 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Deactivated successfully.
Feb 25 12:33:06 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Consumed 2.362s CPU time.
Feb 25 12:33:06 compute-0 systemd-machined[210048]: Machine qemu-99-instance-00000051 terminated.
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.565 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a244b54a-ed67-4003-b104-a7735758fc8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.591 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdf3f8e-cba3-4e4b-a895-205f319f90cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.594 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[db901d71-59a6-4b6f-9e73-e9f2b373f07f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.615 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2699e2f9-0243-4952-ac4b-f2b8644812ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22c1ab8d-6c90-4471-9b6c-b6eed89d5c7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473329, 'reachable_time': 21010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314617, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.642 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8060aba4-b945-46cc-bc24-009c9197b1a5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa0cf2281-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473335, 'tstamp': 473335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314618, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa0cf2281-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473338, 'tstamp': 473338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314618, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.644 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0cf2281-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0cf2281-b0, col_values=(('external_ids', {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:06.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.668 244018 DEBUG nova.network.neutron [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Updating instance_info_cache with network_info: [{"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.673 244018 INFO nova.virt.libvirt.driver [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Instance destroyed successfully.
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.673 244018 DEBUG nova.objects.instance [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'resources' on Instance uuid 2403cab9-191d-42ad-9d3c-3938a6644ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.706 244018 DEBUG nova.virt.libvirt.vif [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1403180466',display_name='tempest-ServersNegativeTestJSON-server-1403180466',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1403180466',id=81,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-d0jjddz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:33:05Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=2403cab9-191d-42ad-9d3c-3938a6644ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.711 244018 DEBUG nova.network.os_vif_util [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "47863a41-5044-4493-9137-885a7693c6e9", "address": "fa:16:3e:d5:4f:cc", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47863a41-50", "ovs_interfaceid": "47863a41-5044-4493-9137-885a7693c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.712 244018 DEBUG nova.network.os_vif_util [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.712 244018 DEBUG os_vif [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.714 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47863a41-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.718 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Releasing lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.718 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Instance network_info: |[{"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.720 244018 DEBUG oslo_concurrency.lockutils [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.720 244018 DEBUG nova.network.neutron [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Refreshing network info cache for port 69680ef1-2d32-45a1-8134-6c610a5b7217 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.724 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Start _get_guest_xml network_info=[{"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.728 244018 INFO os_vif [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:4f:cc,bridge_name='br-int',has_traffic_filtering=True,id=47863a41-5044-4493-9137-885a7693c6e9,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47863a41-50')
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.751 244018 WARNING nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.759 244018 DEBUG nova.virt.libvirt.host [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.759 244018 DEBUG nova.virt.libvirt.host [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.762 244018 DEBUG nova.virt.libvirt.host [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.763 244018 DEBUG nova.virt.libvirt.host [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.763 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.763 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.764 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.764 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.764 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.765 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.765 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.765 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.765 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.765 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.766 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.766 244018 DEBUG nova.virt.hardware [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:33:06 compute-0 nova_compute[244014]: 2026-02-25 12:33:06.768 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:07 compute-0 ceph-mon[76335]: pgmap v1530: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.170 244018 DEBUG nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-unplugged-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.170 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.171 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.171 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.171 244018 DEBUG nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] No waiting events found dispatching network-vif-unplugged-47863a41-5044-4493-9137-885a7693c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.172 244018 DEBUG nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-unplugged-47863a41-5044-4493-9137-885a7693c6e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.172 244018 DEBUG nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.172 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.173 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.173 244018 DEBUG oslo_concurrency.lockutils [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.173 244018 DEBUG nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] No waiting events found dispatching network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.174 244018 WARNING nova.compute.manager [req-1217199e-c5b5-4dbe-bc15-64f81c05a5b1 req-69718f7c-c61e-4d18-8b18-70f69672c0d0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received unexpected event network-vif-plugged-47863a41-5044-4493-9137-885a7693c6e9 for instance with vm_state active and task_state deleting.
Feb 25 12:33:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163910652' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.317 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.345 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.349 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.655 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.656 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.656 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.656 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.657 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.658 244018 INFO nova.compute.manager [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Terminating instance
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.659 244018 DEBUG nova.compute.manager [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:33:07 compute-0 kernel: tap79d3b96a-fd (unregistering): left promiscuous mode
Feb 25 12:33:07 compute-0 NetworkManager[49836]: <info>  [1772022787.7697] device (tap79d3b96a-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00771|binding|INFO|Releasing lport 79d3b96a-fd0f-4a46-86e7-65281666b00f from this chassis (sb_readonly=0)
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00772|binding|INFO|Setting lport 79d3b96a-fd0f-4a46-86e7-65281666b00f down in Southbound
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00773|binding|INFO|Removing iface tap79d3b96a-fd ovn-installed in OVS
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.783 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:95:65 10.100.0.46'], port_security=['fa:16:3e:95:95:65 10.100.0.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.46/24', 'neutron:device_id': '59845b53-2d16-4e64-9550-ed88157328c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da0791cc-53d2-49de-bb3a-c242872ad639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6ac2194-6a7c-4681-8fb8-ecf102671790, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=79d3b96a-fd0f-4a46-86e7-65281666b00f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 79d3b96a-fd0f-4a46-86e7-65281666b00f in datapath da0791cc-53d2-49de-bb3a-c242872ad639 unbound from our chassis
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.786 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da0791cc-53d2-49de-bb3a-c242872ad639, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b3b59f-8103-401d-81b7-940c5c2d371c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.788 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639 namespace which is not needed anymore
Feb 25 12:33:07 compute-0 kernel: tapf0c3ab6d-10 (unregistering): left promiscuous mode
Feb 25 12:33:07 compute-0 NetworkManager[49836]: <info>  [1772022787.7940] device (tapf0c3ab6d-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00774|binding|INFO|Releasing lport f0c3ab6d-106c-4ce9-9e41-200ba59de578 from this chassis (sb_readonly=0)
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00775|binding|INFO|Setting lport f0c3ab6d-106c-4ce9-9e41-200ba59de578 down in Southbound
Feb 25 12:33:07 compute-0 ovn_controller[147040]: 2026-02-25T12:33:07Z|00776|binding|INFO|Removing iface tapf0c3ab6d-10 ovn-installed in OVS
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:07.811 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:f4:ce 10.100.1.135'], port_security=['fa:16:3e:a3:f4:ce 10.100.1.135'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.135/24', 'neutron:device_id': '59845b53-2d16-4e64-9550-ed88157328c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b7a7eb3-b799-4043-93af-927c57bf0214', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff49334847464c879338fb12c3b27419', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0602774e-0849-430a-95de-27662d70ef20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1c875bc-5dee-4820-a499-dbcdbc6e8e2b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f0c3ab6d-106c-4ce9-9e41-200ba59de578) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:07 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Deactivated successfully.
Feb 25 12:33:07 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000050.scope: Consumed 3.353s CPU time.
Feb 25 12:33:07 compute-0 systemd-machined[210048]: Machine qemu-98-instance-00000050 terminated.
Feb 25 12:33:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944716295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.882 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.884 244018 DEBUG nova.virt.libvirt.vif [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-506231471',display_name='tempest-ServerMetadataTestJSON-server-506231471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-506231471',id=82,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf74cfdf1cf045d59e889e7d53288e55',ramdisk_id='',reservation_id='r-0otuxpvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1903723033',owner_user_name='tempest-ServerMetadataTestJSON-1903723033-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:03Z,user_data=None,user_id='20b3cb5baaa440b08f43f40d533e26a7',uuid=6e9252ef-013b-4247-8b3c-494150e0785d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.884 244018 DEBUG nova.network.os_vif_util [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converting VIF {"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:07 compute-0 NetworkManager[49836]: <info>  [1772022787.8855] manager: (tapf0c3ab6d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.885 244018 DEBUG nova.network.os_vif_util [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.888 244018 DEBUG nova.objects.instance [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e9252ef-013b-4247-8b3c-494150e0785d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.900 244018 INFO nova.virt.libvirt.driver [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Instance destroyed successfully.
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.900 244018 DEBUG nova.objects.instance [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lazy-loading 'resources' on Instance uuid 59845b53-2d16-4e64-9550-ed88157328c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.906 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <uuid>6e9252ef-013b-4247-8b3c-494150e0785d</uuid>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <name>instance-00000052</name>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerMetadataTestJSON-server-506231471</nova:name>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:33:06</nova:creationTime>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:user uuid="20b3cb5baaa440b08f43f40d533e26a7">tempest-ServerMetadataTestJSON-1903723033-project-member</nova:user>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:project uuid="cf74cfdf1cf045d59e889e7d53288e55">tempest-ServerMetadataTestJSON-1903723033</nova:project>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <nova:port uuid="69680ef1-2d32-45a1-8134-6c610a5b7217">
Feb 25 12:33:07 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="serial">6e9252ef-013b-4247-8b3c-494150e0785d</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="uuid">6e9252ef-013b-4247-8b3c-494150e0785d</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6e9252ef-013b-4247-8b3c-494150e0785d_disk">
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6e9252ef-013b-4247-8b3c-494150e0785d_disk.config">
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a5:11:99"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <target dev="tap69680ef1-2d"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/console.log" append="off"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:33:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:33:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:33:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:33:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:33:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
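The <domain> document in the record above is the complete guest definition Nova's libvirt driver generates before spawning instance-00000052. As a rough sketch of what happens next, XML like this can be handed to libvirt through the libvirt-python bindings; the connection URI and file name below are assumptions for illustration, since Nova drives libvirt internally rather than via a standalone script.

    import libvirt

    # Read a domain definition like the one logged above (file name assumed).
    with open("domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # local system hypervisor, as Nova uses
    try:
        dom = conn.defineXML(xml)   # persist the definition with libvirtd
        dom.create()                # start the guest (like 'virsh start')
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()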
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.906 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Preparing to wait for external event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.906 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.907 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.907 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
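The three lockutils records above are the standard oslo.concurrency pattern: a named per-instance lock is acquired around _create_or_get_event and released microseconds later. A minimal sketch of the same primitive, reusing the lock name from the records; the function body is a hypothetical stand-in, not Nova's code.

    from oslo_concurrency import lockutils

    # Nova wraps its helper with lockutils.synchronized; the decorator's
    # wrapper ("inner") emits the "Acquiring"/"acquired"/"released" records.
    @lockutils.synchronized("6e9252ef-013b-4247-8b3c-494150e0785d-events")
    def create_or_get_event():
        # hypothetical stand-in for InstanceEvents._create_or_get_event
        return object()

    create_or_get_event()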
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.908 244018 DEBUG nova.virt.libvirt.vif [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-506231471',display_name='tempest-ServerMetadataTestJSON-server-506231471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-506231471',id=82,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf74cfdf1cf045d59e889e7d53288e55',ramdisk_id='',reservation_id='r-0otuxpvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1903723033',owner_user_name='tempest-ServerMetadataTestJSON-1903723033-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:03Z,user_data=None,user_id='20b3cb5baaa440b08f43f40d533e26a7',uuid=6e9252ef-013b-4247-8b3c-494150e0785d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.908 244018 DEBUG nova.network.os_vif_util [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converting VIF {"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.908 244018 DEBUG nova.network.os_vif_util [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.909 244018 DEBUG os_vif [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.909 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.909 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69680ef1-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69680ef1-2d, col_values=(('external_ids', {'iface-id': '69680ef1-2d32-45a1-8134-6c610a5b7217', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:11:99', 'vm-uuid': '6e9252ef-013b-4247-8b3c-494150e0785d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.913 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
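The AddBridgeCommand, AddPortCommand, and DbSetCommand transactions above are ovsdbapp's Open_vSwitch IDL API at work: os-vif idempotently ensures br-int exists (hence "Transaction caused no change"), adds the tap port, and stamps the Interface row with the Neutron port's external_ids. A sketch of the same calls, with an assumed TCP connection string (os-vif normally reaches ovsdb-server over its local socket):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction mirroring the three logged commands.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap69680ef1-2d", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap69680ef1-2d",
            ("external_ids", {
                "iface-id": "69680ef1-2d32-45a1-8134-6c610a5b7217",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:a5:11:99",
                "vm-uuid": "6e9252ef-013b-4247-8b3c-494150e0785d",
            })))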
Feb 25 12:33:07 compute-0 NetworkManager[49836]: <info>  [1772022787.9142] manager: (tap69680ef1-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.917 244018 DEBUG nova.virt.libvirt.vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:33:05Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.917 244018 DEBUG nova.network.os_vif_util [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "address": "fa:16:3e:95:95:65", "network": {"id": "da0791cc-53d2-49de-bb3a-c242872ad639", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1068033415", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79d3b96a-fd", "ovs_interfaceid": "79d3b96a-fd0f-4a46-86e7-65281666b00f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.918 244018 DEBUG nova.network.os_vif_util [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.918 244018 DEBUG os_vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.923 244018 INFO os_vif [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d')
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.925 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79d3b96a-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.933 244018 INFO os_vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:95:65,bridge_name='br-int',has_traffic_filtering=True,id=79d3b96a-fd0f-4a46-86e7-65281666b00f,network=Network(da0791cc-53d2-49de-bb3a-c242872ad639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79d3b96a-fd')
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.933 244018 DEBUG nova.virt.libvirt.vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-376651105',display_name='tempest-ServersTestMultiNic-server-376651105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-376651105',id=80,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff49334847464c879338fb12c3b27419',ramdisk_id='',reservation_id='r-h883ai3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1041606516',owner_user_name='tempest-ServersTestMultiNic-1041606516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:33:05Z,user_data=None,user_id='8f6c1c9101c442839ec520dd809c1205',uuid=59845b53-2d16-4e64-9550-ed88157328c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.934 244018 DEBUG nova.network.os_vif_util [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converting VIF {"id": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "address": "fa:16:3e:a3:f4:ce", "network": {"id": "5b7a7eb3-b799-4043-93af-927c57bf0214", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-300246542", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.135", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff49334847464c879338fb12c3b27419", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0c3ab6d-10", "ovs_interfaceid": "f0c3ab6d-106c-4ce9-9e41-200ba59de578", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.934 244018 DEBUG nova.network.os_vif_util [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.935 244018 DEBUG os_vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.936 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0c3ab6d-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.946 244018 INFO os_vif [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f4:ce,bridge_name='br-int',has_traffic_filtering=True,id=f0c3ab6d-106c-4ce9-9e41-200ba59de578,network=Network(5b7a7eb3-b799-4043-93af-927c57bf0214),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0c3ab6d-10')
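Both the plug for port 69680ef1 and the two unplugs for the ServersTestMultiNic ports funnel through os-vif's module-level entry points, which hand each VIFOpenVSwitch to the loaded 'ovs' plugin. A rough sketch of that surface, reusing field values from the plug record; the InstanceInfo and port profile here are abbreviated illustrations, not the full objects Nova builds.

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch, VIFPortProfileOpenVSwitch

    os_vif.initialize()  # load the 'ovs' plugin (and others) once per process

    vif = VIFOpenVSwitch(
        id="69680ef1-2d32-45a1-8134-6c610a5b7217",
        address="fa:16:3e:a5:11:99",
        plugin="ovs",
        bridge_name="br-int",
        vif_name="tap69680ef1-2d",
        port_profile=VIFPortProfileOpenVSwitch(
            interface_id="69680ef1-2d32-45a1-8134-6c610a5b7217"),
    )
    info = InstanceInfo(uuid="6e9252ef-013b-4247-8b3c-494150e0785d",
                        name="instance-00000052")

    os_vif.plug(vif, info)    # -> "Successfully plugged vif ..."
    os_vif.unplug(vif, info)  # -> "Successfully unplugged vif ..."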
Feb 25 12:33:07 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [NOTICE]   (314306) : haproxy version is 2.8.14-c23fe91
Feb 25 12:33:07 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [NOTICE]   (314306) : path to executable is /usr/sbin/haproxy
Feb 25 12:33:07 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [WARNING]  (314306) : Exiting Master process...
Feb 25 12:33:07 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [ALERT]    (314306) : Current worker (314308) exited with code 143 (Terminated)
Feb 25 12:33:07 compute-0 neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639[314302]: [WARNING]  (314306) : All workers exited. Exiting... (0)
Feb 25 12:33:07 compute-0 systemd[1]: libpod-7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e.scope: Deactivated successfully.
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.977 244018 INFO nova.virt.libvirt.driver [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Deleting instance files /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7_del
Feb 25 12:33:07 compute-0 nova_compute[244014]: 2026-02-25 12:33:07.978 244018 INFO nova.virt.libvirt.driver [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Deletion of /var/lib/nova/instances/2403cab9-191d-42ad-9d3c-3938a6644ae7_del complete
Feb 25 12:33:07 compute-0 podman[314741]: 2026-02-25 12:33:07.978607597 +0000 UTC m=+0.090642473 container died 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:33:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 281 op/s
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.028 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.029 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.029 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] No VIF found with MAC fa:16:3e:a5:11:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.030 244018 INFO nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Using config drive
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.058 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.071 244018 INFO nova.compute.manager [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Took 1.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.072 244018 DEBUG oslo.service.loopingcall [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.072 244018 DEBUG nova.compute.manager [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.072 244018 DEBUG nova.network.neutron [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e-userdata-shm.mount: Deactivated successfully.
Feb 25 12:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c02ea3f32024fbbaae96565996febea7e015299a6f678f50145fcf01963756f-merged.mount: Deactivated successfully.
Feb 25 12:33:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/163910652' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2944716295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
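The two ceph-mon dispatch records show the compute host's client.openstack user issuing "mon dump" over librados, which is how an RBD client discovers the monitor map. The same request can be reproduced with the python-rados bindings, using the conf file and client id from the logs:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        # Same request the mon logged: cmd={"prefix": "mon dump", "format": "json"}
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "mon dump", "format": "json"}), b"")
        print(ret, outbuf[:120])
    finally:
        cluster.shutdown()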
Feb 25 12:33:08 compute-0 podman[314741]: 2026-02-25 12:33:08.325659705 +0000 UTC m=+0.437694611 container cleanup 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:33:08 compute-0 systemd[1]: libpod-conmon-7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e.scope: Deactivated successfully.
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.443 244018 DEBUG nova.network.neutron [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Updated VIF entry in instance network info cache for port 69680ef1-2d32-45a1-8134-6c610a5b7217. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.444 244018 DEBUG nova.network.neutron [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Updating instance_info_cache with network_info: [{"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.477 244018 DEBUG oslo_concurrency.lockutils [req-a58f2b1b-c14e-4618-99c4-d8348bec0805 req-9816111e-85ba-40c6-86e2-4da278eda9ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6e9252ef-013b-4247-8b3c-494150e0785d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:08 compute-0 podman[314830]: 2026-02-25 12:33:08.499346024 +0000 UTC m=+0.152005797 container remove 7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68bf5d65-b379-4409-9abd-0126f62b8b11]: (4, ('Wed Feb 25 12:33:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639 (7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e)\n7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e\nWed Feb 25 12:33:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639 (7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e)\n7cfbbdb4b3abd8913d1c4644c9054665f2a96110c343b942b011cbdd783d3f6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[60ce6c6d-8913-4133-9b4e-f6dad0b1a761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.509 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda0791cc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.510 244018 INFO nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Creating config drive at /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config
Feb 25 12:33:08 compute-0 kernel: tapda0791cc-50: left promiscuous mode
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.517 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6d3kfatq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6630440f-10d4-41a2-b8b1-0ad69a31fe2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.548 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[180de095-ca11-4468-b972-0f4229d0f0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.549 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e978854f-67a6-4157-91dd-d14ed69edc79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.563 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4bf18c-2ebe-402d-b5f9-8f1c2d474eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475184, 'reachable_time': 28430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314850, 'error': None, 'target': 'ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dda0791cc\x2d53d2\x2d49de\x2dbb3a\x2dc242872ad639.mount: Deactivated successfully.
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.566 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.567 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a69a2049-b309-4799-99b6-5ca419d8b6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.567 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f0c3ab6d-106c-4ce9-9e41-200ba59de578 in datapath 5b7a7eb3-b799-4043-93af-927c57bf0214 unbound from our chassis
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.569 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b7a7eb3-b799-4043-93af-927c57bf0214, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.569 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[884110bf-1bcb-418e-9bff-aa9400657048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:08.570 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214 namespace which is not needed anymore
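The teardown sequence above (privsep replies, "left promiscuous mode", the run-netns mount deactivation, and remove_netns) is the metadata agent deleting a per-network namespace once its last VIF is unbound. Neutron's privileged ip_lib sits on pyroute2, so the core operation reduces to roughly this sketch:

    from pyroute2 import netns

    # Remove a metadata namespace like the one logged above, if it still
    # exists; equivalent to 'ip netns delete <ns>'.
    ns = "ovnmeta-da0791cc-53d2-49de-bb3a-c242872ad639"
    if ns in netns.listnetns():  # enumerate /var/run/netns entries
        netns.remove(ns)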
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.660 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6d3kfatq" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
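The pair of processutils records bracketing the mkisofs run shows how the config drive is produced: Nova stages the metadata in a throwaway temp directory and shells out to mkisofs to build an ISO 9660 volume labelled config-2. A sketch of the same call through oslo.concurrency; the staging directory below is an assumption standing in for the logged /tmp/tmp6d3kfatq.

    from oslo_concurrency import processutils

    # Rebuild the logged mkisofs invocation; stdout/stderr come back as a
    # tuple, and a non-zero exit raises ProcessExecutionError.
    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/metadata-staging",  # assumed staging dir
    )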
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.680 244018 DEBUG nova.storage.rbd_utils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] rbd image 6e9252ef-013b-4247-8b3c-494150e0785d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.684 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config 6e9252ef-013b-4247-8b3c-494150e0785d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
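Before the import above, rbd_utils probes whether the .config image already exists in the vms pool (the two "does not exist" DEBUG records), then pushes the freshly built ISO in via the rbd CLI. The existence probe maps onto the python-rados/python-rbd bindings roughly like this, with pool, image name, and conf taken from the logs:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                with rbd.Image(ioctx, "6e9252ef-013b-4247-8b3c-494150e0785d_disk.config"):
                    print("image exists")
            except rbd.ImageNotFound:
                print("image does not exist")  # matches the rbd_utils record
    finally:
        cluster.shutdown()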
Feb 25 12:33:08 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [NOTICE]   (314530) : haproxy version is 2.8.14-c23fe91
Feb 25 12:33:08 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [NOTICE]   (314530) : path to executable is /usr/sbin/haproxy
Feb 25 12:33:08 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [WARNING]  (314530) : Exiting Master process...
Feb 25 12:33:08 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [ALERT]    (314530) : Current worker (314535) exited with code 143 (Terminated)
Feb 25 12:33:08 compute-0 neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214[314511]: [WARNING]  (314530) : All workers exited. Exiting... (0)
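Exit code 143 in the ALERT above is not an haproxy-specific error: it is the conventional 128 + signal-number encoding for a process killed by SIGTERM (15), which matches the graceful stop the agent requested. In Python terms:

    import signal

    # 128 + SIGTERM(15) == 143, the code reported for the stopped worker.
    assert 128 + int(signal.SIGTERM) == 143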
Feb 25 12:33:08 compute-0 systemd[1]: libpod-654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d.scope: Deactivated successfully.
Feb 25 12:33:08 compute-0 podman[314871]: 2026-02-25 12:33:08.792020926 +0000 UTC m=+0.138803074 container died 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.923 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.924 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:33:08 compute-0 nova_compute[244014]: 2026-02-25 12:33:08.948 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-47ccf2b4c5617da289aa9e01a09063588f6a81864eadee02aeeb108bb2057e33-merged.mount: Deactivated successfully.
Feb 25 12:33:09 compute-0 podman[314871]: 2026-02-25 12:33:09.116752594 +0000 UTC m=+0.463534712 container cleanup 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:33:09 compute-0 systemd[1]: libpod-conmon-654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d.scope: Deactivated successfully.
Feb 25 12:33:09 compute-0 podman[314943]: 2026-02-25 12:33:09.12654175 +0000 UTC m=+0.218521257 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 25 12:33:09 compute-0 podman[314921]: 2026-02-25 12:33:09.131458179 +0000 UTC m=+0.317741851 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
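The two health_status=healthy records come from podman's periodic healthcheck running the mounted /openstack/healthcheck test declared in config_data. The same check can be triggered on demand; a sketch using subprocess, with the container name taken from the log:

    import subprocess

    # Run the container's configured healthcheck once; exit code 0 means
    # the check passed. 'ovn_controller' is the container name above.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if result.returncode == 0 else 'unhealthy')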
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.182 244018 DEBUG oslo_concurrency.processutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config 6e9252ef-013b-4247-8b3c-494150e0785d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.183 244018 INFO nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Deleting local config drive /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config because it was imported into RBD.
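The mkisofs run at 12:33:08.660, the rbd import, and the deletion of the local copy above complete a config-drive round trip into the vms pool. A reduced sketch of the import step using oslo's processutils, the same helper the log shows executing the command (arguments copied from the log lines; this is an illustration, not the driver's actual code):

    from oslo_concurrency import processutils

    # Import the local config drive into RBD exactly as logged; execute()
    # raises ProcessExecutionError on a non-zero exit.
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d/disk.config',
        '6e9252ef-013b-4247-8b3c-494150e0785d_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')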
Feb 25 12:33:09 compute-0 kernel: tap69680ef1-2d: entered promiscuous mode
Feb 25 12:33:09 compute-0 systemd-udevd[314609]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.2272] manager: (tap69680ef1-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Feb 25 12:33:09 compute-0 ovn_controller[147040]: 2026-02-25T12:33:09Z|00777|binding|INFO|Claiming lport 69680ef1-2d32-45a1-8134-6c610a5b7217 for this chassis.
Feb 25 12:33:09 compute-0 ovn_controller[147040]: 2026-02-25T12:33:09Z|00778|binding|INFO|69680ef1-2d32-45a1-8134-6c610a5b7217: Claiming fa:16:3e:a5:11:99 10.100.0.14
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.241 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:11:99 10.100.0.14'], port_security=['fa:16:3e:a5:11:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6e9252ef-013b-4247-8b3c-494150e0785d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d337bae8-3653-4026-9587-ee479cf3e099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf74cfdf1cf045d59e889e7d53288e55', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bb977d49-55e7-47ae-9d82-2cab57c33e00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d07df630-921c-4d94-9fcb-27b3fec5b574, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69680ef1-2d32-45a1-8134-6c610a5b7217) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.2444] device (tap69680ef1-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.2450] device (tap69680ef1-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:09 compute-0 systemd-machined[210048]: New machine qemu-100-instance-00000052.
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 ovn_controller[147040]: 2026-02-25T12:33:09Z|00779|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 ovn-installed in OVS
Feb 25 12:33:09 compute-0 ovn_controller[147040]: 2026-02-25T12:33:09Z|00780|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 up in Southbound
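The claim/installed/up sequence above (00777 through 00780) can be cross-checked against the Southbound database. A hedged sketch querying the Port_Binding row for the same lport with ovn-sbctl, assumed to be available on the compute node:

    import json
    import subprocess

    # 'find' returns the matching Port_Binding rows; the chassis column
    # should point at this node once the claim at 00777 has settled.
    out = subprocess.check_output(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=69680ef1-2d32-45a1-8134-6c610a5b7217'])
    rows = json.loads(out)
    print(rows['headings'])
    print(rows['data'])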
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000052.
Feb 25 12:33:09 compute-0 podman[314982]: 2026-02-25 12:33:09.295620669 +0000 UTC m=+0.155769354 container remove 654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.300 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0095941a-52e6-432a-9b94-fd376de42685]: (4, ('Wed Feb 25 12:33:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214 (654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d)\n654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d\nWed Feb 25 12:33:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214 (654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d)\n654c8aca8391f70fdd1ec6882514d42822cf091cc9281b30692a04d835dfe14d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acbd7d62-84ad-4e8f-996f-51e41bc674f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.303 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b7a7eb3-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 kernel: tap5b7a7eb3-b0: left promiscuous mode
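The DelPortCommand at 12:33:09.303 is ovsdbapp's programmatic counterpart of an ovs-vsctl del-port; the kernel message confirms the tap left promiscuous mode once the port was removed. The equivalent one-off, sketched via subprocess:

    import subprocess

    # if_exists=True in the log maps to --if-exists here: deleting an
    # already-removed port is not an error.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'tap5b7a7eb3-b0'],
                   check=True)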
Feb 25 12:33:09 compute-0 ceph-mon[76335]: pgmap v1531: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 281 op/s
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.321 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00d0f425-dc84-492b-8ea6-0ffaab9f22c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.332 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d31b6d-1e68-4434-8b7a-9b31844de368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.333 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0f927a-1beb-4873-b539-274719c83aea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dd9139-9e1e-4d89-831b-4904e0ef901a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475313, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315019, 'error': None, 'target': 'ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
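The privsep reply above is a raw pyroute2 RTM_NEWLINK dump for 'lo', taken inside the namespace named in its 'target' field. Outside the agent, the same structures can be read with pyroute2 directly; a sketch (the namespace must still exist when this runs, which is briefly not the case here since it is being torn down):

    from pyroute2 import NetNS

    # NetNS behaves like IPRoute but bound to the named namespace;
    # get_links() returns the same ifinfmsg structures seen in the reply.
    with NetNS('ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])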
Feb 25 12:33:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d5b7a7eb3\x2db799\x2d4043\x2d93af\x2d927c57bf0214.mount: Deactivated successfully.
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.351 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b7a7eb3-b799-4043-93af-927c57bf0214 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.352 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e7412c32-41f9-4a16-a0c2-b54a7c89d99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.353 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69680ef1-2d32-45a1-8134-6c610a5b7217 in datapath d337bae8-3653-4026-9587-ee479cf3e099 unbound from our chassis
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.355 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d337bae8-3653-4026-9587-ee479cf3e099
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c03bb912-e9c1-4bda-b8ea-a38f69c61284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.365 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd337bae8-31 in ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
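Provisioning creates a veth pair with one end (tapd337bae8-30) left in the root namespace for OVS and the peer (tapd337bae8-31) moved into the ovnmeta namespace. A minimal pyroute2 sketch of that pattern, run as root; the interface names here are illustrative, not the agent's exact calls:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    # Create the pair, then move the peer end into the metadata namespace
    # (net_ns_fd accepts a named namespace under /run/netns).
    ipr.link('add', ifname='veth-host0', kind='veth',
             peer={'ifname': 'veth-meta0'})
    idx = ipr.link_lookup(ifname='veth-meta0')[0]
    ipr.link('set', index=idx,
             net_ns_fd='ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099')
    ipr.close()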
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.366 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd337bae8-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.366 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69ed2fad-206f-4d9d-9815-566717289720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.367 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b8c30e-daf5-4a70-b7f4-ea455740de8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.377 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6bd661-0366-4f11-9968-0a39020f6723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3958a20-5bca-4fb1-8fef-897baab25ba6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.408 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0163dd88-2807-4c2f-8e05-c17c39a358af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.4146] manager: (tapd337bae8-30): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.412 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e49f5ae2-7077-4e63-933b-2310ec765810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.424 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-unplugged-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.425 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.425 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.425 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No waiting events found dispatching network-vif-unplugged-79d3b96a-fd0f-4a46-86e7-65281666b00f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-unplugged-79d3b96a-fd0f-4a46-86e7-65281666b00f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.426 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No waiting events found dispatching network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 WARNING nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received unexpected event network-vif-plugged-79d3b96a-fd0f-4a46-86e7-65281666b00f for instance with vm_state active and task_state deleting.
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-unplugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG oslo_concurrency.lockutils [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No waiting events found dispatching network-vif-unplugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.427 244018 DEBUG nova.compute.manager [req-6d3621ce-fc7a-4d94-9229-48b50229f058 req-52dd6344-10be-49fc-9278-758b87959795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-unplugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
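The Acquiring/acquired/released triplets above are oslo.concurrency's standard lock logging around the per-instance '<uuid>-events' lock, which serializes event dispatch for one instance. The same pattern in miniature (lock name copied from the log, body illustrative):

    from oslo_concurrency import lockutils

    # Everything inside runs under the named lock; the debug lines in the
    # log are emitted by this same helper.
    @lockutils.synchronized('59845b53-2d16-4e64-9550-ed88157328c2-events')
    def pop_event():
        pass  # critical section: pop the instance's pending event

    pop_event()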
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.443 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1df0d5a3-f7ad-41e6-b0da-19b7c1823dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[43fcd016-8cec-4bc1-9785-bb0ca828ca04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.4637] device (tapd337bae8-30): carrier: link connected
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.470 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1605a0f5-5890-4283-9717-62b474e8cdba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1957ec27-5cb2-45f2-b52a-b7a9969d867c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd337bae8-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ff:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475887, 'reachable_time': 23652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315045, 'error': None, 'target': 'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.501 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc2dc92-f2f5-4900-8906-bc06b4bad328]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:ff4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475887, 'tstamp': 475887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315046, 'error': None, 'target': 'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.515 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[52bfb24d-1165-4270-a944-8e1172996577]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd337bae8-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ff:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475887, 'reachable_time': 23652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315047, 'error': None, 'target': 'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.546 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c10272-35ae-4fa4-ab08-b852478e8b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.597 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f98caac4-1f8d-4cd5-b746-a6afee0b09e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.598 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd337bae8-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.599 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.599 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd337bae8-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 NetworkManager[49836]: <info>  [1772022789.6022] manager: (tapd337bae8-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Feb 25 12:33:09 compute-0 kernel: tapd337bae8-30: entered promiscuous mode
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.608 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd337bae8-30, col_values=(('external_ids', {'iface-id': 'b5e396bd-3c11-4c20-b7f2-3e2fe4421f8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 ovn_controller[147040]: 2026-02-25T12:33:09Z|00781|binding|INFO|Releasing lport b5e396bd-3c11-4c20-b7f2-3e2fe4421f8a from this chassis (sb_readonly=0)
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.610 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.613 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d337bae8-3653-4026-9587-ee479cf3e099.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d337bae8-3653-4026-9587-ee479cf3e099.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c67675-6c12-4e73-a17c-e61d9cd3ae7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.614 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-d337bae8-3653-4026-9587-ee479cf3e099
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/d337bae8-3653-4026-9587-ee479cf3e099.pid.haproxy
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID d337bae8-3653-4026-9587-ee479cf3e099
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:33:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:09.615 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099', 'env', 'PROCESS_TAG=haproxy-d337bae8-3653-4026-9587-ee479cf3e099', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d337bae8-3653-4026-9587-ee479cf3e099.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
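Before (or after) the rootwrap launch above, the rendered proxy config can be syntax-checked without starting a daemon; haproxy's -c flag parses the file and exits. A sketch running the check inside the same namespace, with the paths taken from the log:

    import subprocess

    # '-c' makes haproxy validate the config and exit 0 if it is sound.
    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099',
         'haproxy', '-c', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/'
         'd337bae8-3653-4026-9587-ee479cf3e099.conf'],
        check=True)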
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.763 244018 INFO nova.virt.libvirt.driver [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Deleting instance files /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2_del
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.765 244018 INFO nova.virt.libvirt.driver [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Deletion of /var/lib/nova/instances/59845b53-2d16-4e64-9550-ed88157328c2_del complete
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:09 compute-0 nova_compute[244014]: 2026-02-25 12:33:09.967 244018 DEBUG nova.network.neutron [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.007 244018 INFO nova.compute.manager [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Took 1.93 seconds to deallocate network for instance.
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.016 244018 INFO nova.compute.manager [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Took 2.36 seconds to destroy the instance on the hypervisor.
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.017 244018 DEBUG oslo.service.loopingcall [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.018 244018 DEBUG nova.compute.manager [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.018 244018 DEBUG nova.network.neutron [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:33:10 compute-0 podman[315097]: 2026-02-25 12:33:09.95944073 +0000 UTC m=+0.033167598 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.066 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.066 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:10 compute-0 podman[315097]: 2026-02-25 12:33:10.110019735 +0000 UTC m=+0.183746513 container create c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.135 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022790.1345305, 6e9252ef-013b-4247-8b3c-494150e0785d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.136 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] VM Started (Lifecycle Event)
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.154 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.160 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022790.1347282, 6e9252ef-013b-4247-8b3c-494150e0785d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.160 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] VM Paused (Lifecycle Event)
Feb 25 12:33:10 compute-0 systemd[1]: Started libpod-conmon-c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16.scope.
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.195 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.202 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946ad2d41d02615aeb78f0a721c3084b9a33633777a2cb1e301773a90500f5ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.226 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.228 244018 DEBUG oslo_concurrency.processutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:10 compute-0 podman[315097]: 2026-02-25 12:33:10.264389308 +0000 UTC m=+0.338116176 container init c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:33:10 compute-0 podman[315097]: 2026-02-25 12:33:10.269656706 +0000 UTC m=+0.343383524 container start c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:10 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [NOTICE]   (315142) : New worker (315144) forked
Feb 25 12:33:10 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [NOTICE]   (315142) : Loading success.
Feb 25 12:33:10 compute-0 ceph-mon[76335]: pgmap v1532: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Feb 25 12:33:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880819950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.825 244018 DEBUG oslo_concurrency.processutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.833 244018 DEBUG nova.compute.provider_tree [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.852 244018 DEBUG nova.scheduler.client.report [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.875 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:10 compute-0 nova_compute[244014]: 2026-02-25 12:33:10.904 244018 INFO nova.scheduler.client.report [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Deleted allocations for instance 2403cab9-191d-42ad-9d3c-3938a6644ae7
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.007 244018 DEBUG oslo_concurrency.lockutils [None req-a2ad5048-59e8-4d7a-9a3e-b4a2897dbaf5 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "2403cab9-191d-42ad-9d3c-3938a6644ae7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1880819950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.454 244018 DEBUG nova.network.neutron [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.474 244018 INFO nova.compute.manager [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Took 1.46 seconds to deallocate network for instance.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.522 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.524 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.600 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.600 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "59845b53-2d16-4e64-9550-ed88157328c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.601 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.601 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.601 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] No waiting events found dispatching network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.602 244018 WARNING nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received unexpected event network-vif-plugged-f0c3ab6d-106c-4ce9-9e41-200ba59de578 for instance with vm_state deleted and task_state None.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.602 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.602 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.602 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.603 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.603 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Processing event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.604 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Received event network-vif-deleted-47863a41-5044-4493-9137-885a7693c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.604 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.605 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.605 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.605 244018 DEBUG oslo_concurrency.lockutils [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.605 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] No waiting events found dispatching network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.606 244018 WARNING nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received unexpected event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 for instance with vm_state building and task_state spawning.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.606 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-deleted-79d3b96a-fd0f-4a46-86e7-65281666b00f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.607 244018 DEBUG nova.compute.manager [req-6144999d-de96-453f-a1ee-428058c4e486 req-12979b4a-a03f-45ca-9661-9d87e2fcb0af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Received event network-vif-deleted-f0c3ab6d-106c-4ce9-9e41-200ba59de578 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.607 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.612 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022791.6124513, 6e9252ef-013b-4247-8b3c-494150e0785d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.613 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] VM Resumed (Lifecycle Event)
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.615 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.619 244018 INFO nova.virt.libvirt.driver [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Instance spawned successfully.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.619 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.631 244018 DEBUG oslo_concurrency.processutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.660 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.664 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.685 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.685 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.687 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.688 244018 DEBUG nova.virt.libvirt.driver [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.739 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.795 244018 INFO nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Took 8.10 seconds to spawn the instance on the hypervisor.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.795 244018 DEBUG nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.849 244018 INFO nova.compute.manager [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Took 9.37 seconds to build instance.
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.867 244018 DEBUG oslo_concurrency.lockutils [None req-3880591c-25f4-436e-be0c-1f899e219729 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:11 compute-0 nova_compute[244014]: 2026-02-25 12:33:11.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 301 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Feb 25 12:33:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216933715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.166 244018 DEBUG oslo_concurrency.processutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.171 244018 DEBUG nova.compute.provider_tree [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.192 244018 DEBUG nova.scheduler.client.report [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.220 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.247 244018 INFO nova.scheduler.client.report [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Deleted allocations for instance 59845b53-2d16-4e64-9550-ed88157328c2
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.328 244018 DEBUG oslo_concurrency.lockutils [None req-bca0125d-97d3-4f99-b107-2024cc394504 8f6c1c9101c442839ec520dd809c1205 ff49334847464c879338fb12c3b27419 - - default default] Lock "59845b53-2d16-4e64-9550-ed88157328c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:12 compute-0 ceph-mon[76335]: pgmap v1533: 305 pgs: 305 active+clean; 301 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Feb 25 12:33:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4216933715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:12 compute-0 nova_compute[244014]: 2026-02-25 12:33:12.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:33:13 compute-0 nova_compute[244014]: 2026-02-25 12:33:13.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 279 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Feb 25 12:33:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526932702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.508 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.602 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.602 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.606 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.606 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:33:14 compute-0 ovn_controller[147040]: 2026-02-25T12:33:14Z|00782|binding|INFO|Releasing lport 134cce92-aa02-44fa-b97c-8b46159f1d29 from this chassis (sb_readonly=0)
Feb 25 12:33:14 compute-0 ovn_controller[147040]: 2026-02-25T12:33:14Z|00783|binding|INFO|Releasing lport b5e396bd-3c11-4c20-b7f2-3e2fe4421f8a from this chassis (sb_readonly=0)
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.824 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.825 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.92134573776275GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.826 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.826 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.860 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6e9252ef-013b-4247-8b3c-494150e0785d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:33:14 compute-0 nova_compute[244014]: 2026-02-25 12:33:14.941 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.011 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb 25 12:33:15 compute-0 ceph-mon[76335]: pgmap v1534: 305 pgs: 305 active+clean; 279 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Feb 25 12:33:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2526932702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb 25 12:33:15 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.129423) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795129452, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1043, "num_deletes": 251, "total_data_size": 1372418, "memory_usage": 1394352, "flush_reason": "Manual Compaction"}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795145251, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1358087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31914, "largest_seqno": 32956, "table_properties": {"data_size": 1353069, "index_size": 2478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11394, "raw_average_key_size": 19, "raw_value_size": 1342798, "raw_average_value_size": 2347, "num_data_blocks": 110, "num_entries": 572, "num_filter_entries": 572, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022708, "oldest_key_time": 1772022708, "file_creation_time": 1772022795, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 15861 microseconds, and 3516 cpu microseconds.
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.145281) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1358087 bytes OK
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.145309) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.147192) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.147203) EVENT_LOG_v1 {"time_micros": 1772022795147200, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.147220) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1367494, prev total WAL file size 1367494, number of live WAL files 2.
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.147601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1326KB)], [68(8134KB)]
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795147630, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9687473, "oldest_snapshot_seqno": -1}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5851 keys, 8051387 bytes, temperature: kUnknown
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795203312, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8051387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8012747, "index_size": 22914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 146837, "raw_average_key_size": 25, "raw_value_size": 7908314, "raw_average_value_size": 1351, "num_data_blocks": 929, "num_entries": 5851, "num_filter_entries": 5851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022795, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.203613) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8051387 bytes
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.206480) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 144.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.1) write-amplify(5.9) OK, records in: 6369, records dropped: 518 output_compression: NoCompression
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.206537) EVENT_LOG_v1 {"time_micros": 1772022795206513, "job": 38, "event": "compaction_finished", "compaction_time_micros": 55800, "compaction_time_cpu_micros": 13531, "output_level": 6, "num_output_files": 1, "total_output_size": 8051387, "num_input_records": 6369, "num_output_records": 5851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795206981, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022795208257, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.147526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.208388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.208397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.208400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.208404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:15.208408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155612400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.554 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.559 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.589 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.621 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:33:15 compute-0 nova_compute[244014]: 2026-02-25 12:33:15.622 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 279 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.2 MiB/s wr, 304 op/s
Feb 25 12:33:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb 25 12:33:16 compute-0 ceph-mon[76335]: osdmap e233: 3 total, 3 up, 3 in
Feb 25 12:33:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4155612400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb 25 12:33:16 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb 25 12:33:16 compute-0 nova_compute[244014]: 2026-02-25 12:33:16.616 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:16 compute-0 nova_compute[244014]: 2026-02-25 12:33:16.617 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:16 compute-0 nova_compute[244014]: 2026-02-25 12:33:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.035 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.036 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 ceph-mon[76335]: pgmap v1536: 305 pgs: 305 active+clean; 279 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.2 MiB/s wr, 304 op/s
Feb 25 12:33:17 compute-0 ceph-mon[76335]: osdmap e234: 3 total, 3 up, 3 in
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.666 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.668 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.668 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.669 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.670 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
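
The lockutils lines above are oslo.concurrency's standard named-lock pattern: one lock keyed on the instance UUID around do_terminate_instance, plus a second "-events" lock guarding the per-instance event dict. A minimal sketch of both forms, with lock names copied from the log and placeholder bodies:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "6e9252ef-013b-4247-8b3c-494150e0785d"

    # Context-manager form; emits Acquiring/acquired/released DEBUG lines
    # like the ones above.
    with lockutils.lock(INSTANCE_UUID):
        pass  # critical section, e.g. the terminate path

    # Decorator form, as used for the per-instance event lock.
    @lockutils.synchronized(INSTANCE_UUID + "-events")
    def _clear_events():
        pass
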
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.672 244018 INFO nova.compute.manager [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Terminating instance
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.674 244018 DEBUG nova.compute.manager [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:33:17 compute-0 kernel: tap69680ef1-2d (unregistering): left promiscuous mode
Feb 25 12:33:17 compute-0 NetworkManager[49836]: <info>  [1772022797.7615] device (tap69680ef1-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00784|binding|INFO|Releasing lport 69680ef1-2d32-45a1-8134-6c610a5b7217 from this chassis (sb_readonly=0)
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00785|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 down in Southbound
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00786|binding|INFO|Removing iface tap69680ef1-2d ovn-installed in OVS
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.777 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:11:99 10.100.0.14'], port_security=['fa:16:3e:a5:11:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6e9252ef-013b-4247-8b3c-494150e0785d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d337bae8-3653-4026-9587-ee479cf3e099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf74cfdf1cf045d59e889e7d53288e55', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb977d49-55e7-47ae-9d82-2cab57c33e00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d07df630-921c-4d94-9fcb-27b3fec5b574, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69680ef1-2d32-45a1-8134-6c610a5b7217) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
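
The "Matched UPDATE: PortBindingUpdatedEvent(...)" lines come from ovsdbapp's IDL event dispatch: the metadata agent registers RowEvent subclasses against OVN southbound tables and is called back when a row change matches. A sketch of the shape of such an event, assuming ovsdbapp's RowEvent API as neutron uses it; the attributes on row mirror the Port_Binding columns in the log:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on updates to OVN SB Port_Binding rows (sketch)."""

        def __init__(self):
            # (events, table, conditions), matching the logged
            # events=('update',), table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortBindingUpdatedEvent"

        def run(self, event, row, old):
            # 'old' carries only the columns that changed, e.g. old.chassis.
            print("port %s changed" % row.logical_port)
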
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.777 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.778 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69680ef1-2d32-45a1-8134-6c610a5b7217 in datapath d337bae8-3653-4026-9587-ee479cf3e099 unbound from our chassis
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.779 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d337bae8-3653-4026-9587-ee479cf3e099, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.780 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e05028-32d0-4eba-b295-336c6350794d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.780 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099 namespace which is not needed anymore
Feb 25 12:33:17 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Deactivated successfully.
Feb 25 12:33:17 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Consumed 7.013s CPU time.
Feb 25 12:33:17 compute-0 systemd-machined[210048]: Machine qemu-100-instance-00000052 terminated.
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:17 compute-0 kernel: tap69680ef1-2d: entered promiscuous mode
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00787|binding|INFO|Claiming lport 69680ef1-2d32-45a1-8134-6c610a5b7217 for this chassis.
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00788|binding|INFO|69680ef1-2d32-45a1-8134-6c610a5b7217: Claiming fa:16:3e:a5:11:99 10.100.0.14
Feb 25 12:33:17 compute-0 NetworkManager[49836]: <info>  [1772022797.8963] manager: (tap69680ef1-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 kernel: tap69680ef1-2d (unregistering): left promiscuous mode
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.904 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:11:99 10.100.0.14'], port_security=['fa:16:3e:a5:11:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6e9252ef-013b-4247-8b3c-494150e0785d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d337bae8-3653-4026-9587-ee479cf3e099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf74cfdf1cf045d59e889e7d53288e55', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb977d49-55e7-47ae-9d82-2cab57c33e00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d07df630-921c-4d94-9fcb-27b3fec5b574, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69680ef1-2d32-45a1-8134-6c610a5b7217) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00789|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 ovn-installed in OVS
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00790|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 up in Southbound
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00791|binding|INFO|Releasing lport 69680ef1-2d32-45a1-8134-6c610a5b7217 from this chassis (sb_readonly=1)
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00792|if_status|INFO|Dropped 2 log messages in last 98 seconds (most recently, 98 seconds ago) due to excessive rate
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00793|if_status|INFO|Not setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 down as sb is readonly
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00794|binding|INFO|Removing iface tap69680ef1-2d ovn-installed in OVS
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00795|binding|INFO|Releasing lport 69680ef1-2d32-45a1-8134-6c610a5b7217 from this chassis (sb_readonly=0)
Feb 25 12:33:17 compute-0 ovn_controller[147040]: 2026-02-25T12:33:17Z|00796|binding|INFO|Setting lport 69680ef1-2d32-45a1-8134-6c610a5b7217 down in Southbound
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.917 244018 INFO nova.virt.libvirt.driver [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Instance destroyed successfully.
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.918 244018 DEBUG nova.objects.instance [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lazy-loading 'resources' on Instance uuid 6e9252ef-013b-4247-8b3c-494150e0785d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:17.918 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:11:99 10.100.0.14'], port_security=['fa:16:3e:a5:11:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6e9252ef-013b-4247-8b3c-494150e0785d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d337bae8-3653-4026-9587-ee479cf3e099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf74cfdf1cf045d59e889e7d53288e55', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb977d49-55e7-47ae-9d82-2cab57c33e00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d07df630-921c-4d94-9fcb-27b3fec5b574, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69680ef1-2d32-45a1-8134-6c610a5b7217) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [NOTICE]   (315142) : haproxy version is 2.8.14-c23fe91
Feb 25 12:33:17 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [NOTICE]   (315142) : path to executable is /usr/sbin/haproxy
Feb 25 12:33:17 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [WARNING]  (315142) : Exiting Master process...
Feb 25 12:33:17 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [ALERT]    (315142) : Current worker (315144) exited with code 143 (Terminated)
Feb 25 12:33:17 compute-0 neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099[315137]: [WARNING]  (315142) : All workers exited. Exiting... (0)
Feb 25 12:33:17 compute-0 systemd[1]: libpod-c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16.scope: Deactivated successfully.
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.938 244018 DEBUG nova.virt.libvirt.vif [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-506231471',display_name='tempest-ServerMetadataTestJSON-server-506231471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-506231471',id=82,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf74cfdf1cf045d59e889e7d53288e55',ramdisk_id='',reservation_id='r-0otuxpvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1903723033',owner_user_name='tempest-ServerMetadataTestJSON-1903723033-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:33:17Z,user_data=None,user_id='20b3cb5baaa440b08f43f40d533e26a7',uuid=6e9252ef-013b-4247-8b3c-494150e0785d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.939 244018 DEBUG nova.network.os_vif_util [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converting VIF {"id": "69680ef1-2d32-45a1-8134-6c610a5b7217", "address": "fa:16:3e:a5:11:99", "network": {"id": "d337bae8-3653-4026-9587-ee479cf3e099", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-233008396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf74cfdf1cf045d59e889e7d53288e55", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69680ef1-2d", "ovs_interfaceid": "69680ef1-2d32-45a1-8134-6c610a5b7217", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.940 244018 DEBUG nova.network.os_vif_util [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.941 244018 DEBUG os_vif [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:33:17 compute-0 podman[315266]: 2026-02-25 12:33:17.942821877 +0000 UTC m=+0.072271513 container died c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.945 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69680ef1-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:17 compute-0 nova_compute[244014]: 2026-02-25 12:33:17.951 244018 INFO os_vif [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:11:99,bridge_name='br-int',has_traffic_filtering=True,id=69680ef1-2d32-45a1-8134-6c610a5b7217,network=Network(d337bae8-3653-4026-9587-ee479cf3e099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69680ef1-2d')
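
The unplug above is a single OVSDB transaction: DelPortCommand with if_exists=True against br-int. A hedged sketch of the command-line equivalent (same --if-exists semantics, so a repeat run is a no-op):

    import subprocess

    def del_port(bridge="br-int", port="tap69680ef1-2d"):
        # Equivalent of the logged DelPortCommand(if_exists=True): remove the
        # port if present, succeed silently if it is already gone.
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", bridge, port],
                       check=True)

    del_port()
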
Feb 25 12:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16-userdata-shm.mount: Deactivated successfully.
Feb 25 12:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-946ad2d41d02615aeb78f0a721c3084b9a33633777a2cb1e301773a90500f5ba-merged.mount: Deactivated successfully.
Feb 25 12:33:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 214 op/s
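
The pgmap debug lines are fixed-format, which makes the per-state pg counts easy to extract; a small sketch parsing the line above (format assumed stable for this Ceph version):

    import re

    line = ("pgmap v1538: 305 pgs: 7 active+clean+snaptrim_wait, "
            "2 active+clean+snaptrim, 296 active+clean; 279 MiB data, "
            "802 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 214 op/s")

    # "<count> <state>" pairs sit between "pgs:" and the first ";"
    states = {
        state: int(count)
        for count, state in re.findall(
            r"(\d+) ([a-z+_]+)", line.split("pgs:")[1].split(";")[0])
    }
    print(states)
    # {'active+clean+snaptrim_wait': 7, 'active+clean+snaptrim': 2, 'active+clean': 296}
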
Feb 25 12:33:18 compute-0 podman[315266]: 2026-02-25 12:33:18.012827936 +0000 UTC m=+0.142277572 container cleanup c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:33:18 compute-0 systemd[1]: libpod-conmon-c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16.scope: Deactivated successfully.
Feb 25 12:33:18 compute-0 podman[315318]: 2026-02-25 12:33:18.117252797 +0000 UTC m=+0.081893035 container remove c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c62bd76d-6c98-4b58-9a75-94673ecf8acc]: (4, ('Wed Feb 25 12:33:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099 (c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16)\nc380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16\nWed Feb 25 12:33:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099 (c380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16)\nc380e754a9e95a83700d03961492534a520cb7ca9eef647825809441738f0f16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.130 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94cc0f59-0249-49ac-9830-0236395a8c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.131 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd337bae8-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:18 compute-0 kernel: tapd337bae8-30: left promiscuous mode
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.143 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5c5100-766e-447d-8e20-f2f9a15777fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d38fb17c-5793-452f-bfb4-39cdbe7d9f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e88348-e8e2-4569-bf4f-ebf351e943ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b2ba0f-4e70-4c87-b5ca-de7e581cc4a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475881, 'reachable_time': 37799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315333, 'error': None, 'target': 'ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ceph-mon[76335]: pgmap v1538: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 214 op/s
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.170 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.170 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[74d778ce-6004-47e7-b207-800358606813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 systemd[1]: run-netns-ovnmeta\x2dd337bae8\x2d3653\x2d4026\x2d9587\x2dee479cf3e099.mount: Deactivated successfully.
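
remove_netns above is neutron's privsep helper deleting the now-empty ovnmeta- namespace; the run-netns mount that systemd reports deactivating is that namespace's bind mount. A minimal sketch of the same teardown using the ip CLI (neutron itself goes through pyroute2 under privsep):

    import subprocess

    NS = "ovnmeta-d337bae8-3653-4026-9587-ee479cf3e099"

    # Deleting the namespace also removes its /run/netns/<name> bind mount,
    # which is the "run-netns-...mount: Deactivated" systemd line above.
    subprocess.run(["ip", "netns", "delete", NS], check=True)
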
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.171 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69680ef1-2d32-45a1-8134-6c610a5b7217 in datapath d337bae8-3653-4026-9587-ee479cf3e099 unbound from our chassis
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.172 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d337bae8-3653-4026-9587-ee479cf3e099, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.173 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70ef5f43-df03-40a2-a36f-c87a1a9d0f57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.173 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69680ef1-2d32-45a1-8134-6c610a5b7217 in datapath d337bae8-3653-4026-9587-ee479cf3e099 unbound from our chassis
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.175 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d337bae8-3653-4026-9587-ee479cf3e099, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:33:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:18.175 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c067dda7-d481-4850-ba21-aeb091273d23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.395 244018 INFO nova.virt.libvirt.driver [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Deleting instance files /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d_del
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.396 244018 INFO nova.virt.libvirt.driver [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Deletion of /var/lib/nova/instances/6e9252ef-013b-4247-8b3c-494150e0785d_del complete
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.542 244018 INFO nova.compute.manager [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Took 0.87 seconds to destroy the instance on the hypervisor.
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.542 244018 DEBUG oslo.service.loopingcall [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.542 244018 DEBUG nova.compute.manager [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.543 244018 DEBUG nova.network.neutron [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.973 244018 DEBUG nova.compute.manager [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-unplugged-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.974 244018 DEBUG oslo_concurrency.lockutils [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.975 244018 DEBUG oslo_concurrency.lockutils [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.975 244018 DEBUG oslo_concurrency.lockutils [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.976 244018 DEBUG nova.compute.manager [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] No waiting events found dispatching network-vif-unplugged-69680ef1-2d32-45a1-8134-6c610a5b7217 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:18 compute-0 nova_compute[244014]: 2026-02-25 12:33:18.976 244018 DEBUG nova.compute.manager [req-e37a2d9d-4661-45af-9a77-46fa10cca0f8 req-461ad307-69d3-49ef-95e4-4655b34a273c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-unplugged-69680ef1-2d32-45a1-8134-6c610a5b7217 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:33:19 compute-0 nova_compute[244014]: 2026-02-25 12:33:19.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 12 KiB/s wr, 170 op/s
Feb 25 12:33:20 compute-0 nova_compute[244014]: 2026-02-25 12:33:20.662 244018 DEBUG nova.network.neutron [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.109 244018 INFO nova.compute.manager [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Took 2.57 seconds to deallocate network for instance.
Feb 25 12:33:21 compute-0 ceph-mon[76335]: pgmap v1539: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 12 KiB/s wr, 170 op/s
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.250 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.251 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.287 244018 DEBUG nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.288 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.288 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.288 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.288 244018 DEBUG nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] No waiting events found dispatching network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.289 244018 WARNING nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received unexpected event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 for instance with vm_state deleted and task_state None.
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.289 244018 DEBUG nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.289 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.289 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.290 244018 DEBUG oslo_concurrency.lockutils [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.290 244018 DEBUG nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] No waiting events found dispatching network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.290 244018 WARNING nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received unexpected event network-vif-plugged-69680ef1-2d32-45a1-8134-6c610a5b7217 for instance with vm_state deleted and task_state None.
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.290 244018 DEBUG nova.compute.manager [req-eef508c1-3632-4838-8783-06ad6f8f2aca req-a151c8ba-e432-4a2b-a165-fb1813e22746 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Received event network-vif-deleted-69680ef1-2d32-45a1-8134-6c610a5b7217 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.324 244018 DEBUG oslo_concurrency.processutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.672 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022786.6712372, 2403cab9-191d-42ad-9d3c-3938a6644ae7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.674 244018 INFO nova.compute.manager [-] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] VM Stopped (Lifecycle Event)
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.716 244018 DEBUG nova.compute.manager [None req-c0c56eba-8aaa-4258-93a0-6e16a15b8949 - - - - - -] [instance: 2403cab9-191d-42ad-9d3c-3938a6644ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873916780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.876 244018 DEBUG oslo_concurrency.processutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.882 244018 DEBUG nova.compute.provider_tree [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.949 244018 DEBUG nova.scheduler.client.report [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:33:21 compute-0 nova_compute[244014]: 2026-02-25 12:33:21.985 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 247 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 KiB/s wr, 123 op/s
Feb 25 12:33:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.039 244018 INFO nova.scheduler.client.report [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Deleted allocations for instance 6e9252ef-013b-4247-8b3c-494150e0785d
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.106 244018 DEBUG oslo_concurrency.lockutils [None req-4dd00315-9eb3-452a-8c6e-42da0ab47040 20b3cb5baaa440b08f43f40d533e26a7 cf74cfdf1cf045d59e889e7d53288e55 - - default default] Lock "6e9252ef-013b-4247-8b3c-494150e0785d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.154059) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802154094, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 318, "num_deletes": 256, "total_data_size": 116182, "memory_usage": 122616, "flush_reason": "Manual Compaction"}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802168866, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 115627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32957, "largest_seqno": 33274, "table_properties": {"data_size": 113568, "index_size": 223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5152, "raw_average_key_size": 17, "raw_value_size": 109432, "raw_average_value_size": 377, "num_data_blocks": 10, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022796, "oldest_key_time": 1772022796, "file_creation_time": 1772022802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 14875 microseconds, and 1272 cpu microseconds.
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:33:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3873916780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.168929) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 115627 bytes OK
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.168952) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.185649) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.185670) EVENT_LOG_v1 {"time_micros": 1772022802185663, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.185720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 113904, prev total WAL file size 141991, number of live WAL files 2.
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.186075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303038' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(112KB)], [71(7862KB)]
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802186153, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 8167014, "oldest_snapshot_seqno": -1}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5618 keys, 8050875 bytes, temperature: kUnknown
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802272374, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8050875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8013137, "index_size": 22588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 142945, "raw_average_key_size": 25, "raw_value_size": 7912138, "raw_average_value_size": 1408, "num_data_blocks": 911, "num_entries": 5618, "num_filter_entries": 5618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.272838) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8050875 bytes
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.278327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.6 rd, 93.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.7 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(140.3) write-amplify(69.6) OK, records in: 6141, records dropped: 523 output_compression: NoCompression
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.278376) EVENT_LOG_v1 {"time_micros": 1772022802278355, "job": 40, "event": "compaction_finished", "compaction_time_micros": 86342, "compaction_time_cpu_micros": 22331, "output_level": 6, "num_output_files": 1, "total_output_size": 8050875, "num_input_records": 6141, "num_output_records": 5618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802278811, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022802280226, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.185943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.280342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.280349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.280352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.280355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:33:22.280358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.898 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022787.8976448, 59845b53-2d16-4e64-9550-ed88157328c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.899 244018 INFO nova.compute.manager [-] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] VM Stopped (Lifecycle Event)
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.920 244018 DEBUG nova.compute.manager [None req-fbdd8174-ff9c-4a39-9145-b492b020600d - - - - - -] [instance: 59845b53-2d16-4e64-9550-ed88157328c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:22 compute-0 nova_compute[244014]: 2026-02-25 12:33:22.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:23 compute-0 ceph-mon[76335]: pgmap v1540: 305 pgs: 305 active+clean; 247 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 KiB/s wr, 123 op/s
Feb 25 12:33:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.4 KiB/s wr, 142 op/s
Feb 25 12:33:24 compute-0 ceph-mon[76335]: pgmap v1541: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.4 KiB/s wr, 142 op/s
Feb 25 12:33:24 compute-0 nova_compute[244014]: 2026-02-25 12:33:24.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 KiB/s wr, 126 op/s
Feb 25 12:33:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:26.037 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb 25 12:33:27 compute-0 ovn_controller[147040]: 2026-02-25T12:33:27Z|00797|binding|INFO|Releasing lport 134cce92-aa02-44fa-b97c-8b46159f1d29 from this chassis (sb_readonly=0)
Feb 25 12:33:27 compute-0 ceph-mon[76335]: pgmap v1542: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 KiB/s wr, 126 op/s
Feb 25 12:33:27 compute-0 nova_compute[244014]: 2026-02-25 12:33:27.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb 25 12:33:27 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb 25 12:33:27 compute-0 nova_compute[244014]: 2026-02-25 12:33:27.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Feb 25 12:33:28 compute-0 ceph-mon[76335]: osdmap e235: 3 total, 3 up, 3 in
Feb 25 12:33:28 compute-0 ceph-mon[76335]: pgmap v1544: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Feb 25 12:33:29 compute-0 nova_compute[244014]: 2026-02-25 12:33:29.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Feb 25 12:33:30 compute-0 nova_compute[244014]: 2026-02-25 12:33:30.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:33:30
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta']
Feb 25 12:33:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:33:31 compute-0 ceph-mon[76335]: pgmap v1545: 305 pgs: 305 active+clean; 233 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:33:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:33:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 233 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 27 op/s
Feb 25 12:33:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:32 compute-0 nova_compute[244014]: 2026-02-25 12:33:32.915 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022797.914643, 6e9252ef-013b-4247-8b3c-494150e0785d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:32 compute-0 nova_compute[244014]: 2026-02-25 12:33:32.916 244018 INFO nova.compute.manager [-] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] VM Stopped (Lifecycle Event)
Feb 25 12:33:32 compute-0 nova_compute[244014]: 2026-02-25 12:33:32.951 244018 DEBUG nova.compute.manager [None req-2e0bd62e-9e36-41a8-85d2-c1918edd03fd - - - - - -] [instance: 6e9252ef-013b-4247-8b3c-494150e0785d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:32 compute-0 nova_compute[244014]: 2026-02-25 12:33:32.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:33 compute-0 ceph-mon[76335]: pgmap v1546: 305 pgs: 305 active+clean; 233 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 27 op/s
Feb 25 12:33:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s wr, 0 op/s
Feb 25 12:33:34 compute-0 nova_compute[244014]: 2026-02-25 12:33:34.867 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:35 compute-0 sudo[315359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:33:35 compute-0 sudo[315359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:35 compute-0 sudo[315359]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:35 compute-0 ceph-mon[76335]: pgmap v1547: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s wr, 0 op/s
Feb 25 12:33:35 compute-0 sudo[315384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 12:33:35 compute-0 sudo[315384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:35 compute-0 podman[315454]: 2026-02-25 12:33:35.564938248 +0000 UTC m=+0.067216681 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:33:35 compute-0 podman[315454]: 2026-02-25 12:33:35.655677272 +0000 UTC m=+0.157955705 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:33:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s wr, 0 op/s
Feb 25 12:33:36 compute-0 sudo[315384]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:33:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:33:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:36 compute-0 sudo[315644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:33:36 compute-0 sudo[315644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:36 compute-0 sudo[315644]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:36 compute-0 sudo[315669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:33:36 compute-0 sudo[315669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: pgmap v1548: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s wr, 0 op/s
Feb 25 12:33:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:37 compute-0 sudo[315669]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:33:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:33:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:33:37 compute-0 sudo[315726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:33:37 compute-0 sudo[315726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:37 compute-0 sudo[315726]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:37 compute-0 sudo[315751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:33:37 compute-0 sudo[315751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:37 compute-0 podman[315787]: 2026-02-25 12:33:37.695007649 +0000 UTC m=+0.031186773 container create 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:33:37 compute-0 systemd[1]: Started libpod-conmon-34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e.scope.
Feb 25 12:33:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.757 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.759 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:37 compute-0 podman[315787]: 2026-02-25 12:33:37.775340109 +0000 UTC m=+0.111519253 container init 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:33:37 compute-0 podman[315787]: 2026-02-25 12:33:37.681640321 +0000 UTC m=+0.017819465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.779 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:33:37 compute-0 podman[315787]: 2026-02-25 12:33:37.78174749 +0000 UTC m=+0.117926614 container start 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:33:37 compute-0 podman[315787]: 2026-02-25 12:33:37.785479646 +0000 UTC m=+0.121658770 container attach 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:33:37 compute-0 systemd[1]: libpod-34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e.scope: Deactivated successfully.
Feb 25 12:33:37 compute-0 nervous_shamir[315805]: 167 167
Feb 25 12:33:37 compute-0 conmon[315805]: conmon 34046af39fa110f48c49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e.scope/container/memory.events
Feb 25 12:33:37 compute-0 podman[315810]: 2026-02-25 12:33:37.829422108 +0000 UTC m=+0.030909275 container died 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c1db18b301ec796e7baaa6a621d2afe5a71537094fbf50db39658b23ac421b2-merged.mount: Deactivated successfully.
Feb 25 12:33:37 compute-0 podman[315810]: 2026-02-25 12:33:37.870656823 +0000 UTC m=+0.072143920 container remove 34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_shamir, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 12:33:37 compute-0 systemd[1]: libpod-conmon-34046af39fa110f48c49c1a4e5280d9fb1827d6eb9e2afc0e80da03e3788890e.scope: Deactivated successfully.
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.892 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.893 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.904 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.905 244018 INFO nova.compute.claims [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:33:37 compute-0 nova_compute[244014]: 2026-02-25 12:33:37.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.039964938 +0000 UTC m=+0.059126872 container create 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:33:38 compute-0 systemd[1]: Started libpod-conmon-6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c.scope.
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.089 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:33:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.009715453 +0000 UTC m=+0.028877187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.143438003 +0000 UTC m=+0.162599737 container init 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.154839195 +0000 UTC m=+0.174000879 container start 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.159655081 +0000 UTC m=+0.178816755 container attach 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:33:38 compute-0 zealous_bhaskara[315849]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:33:38 compute-0 zealous_bhaskara[315849]: --> All data devices are unavailable
Feb 25 12:33:38 compute-0 systemd[1]: libpod-6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c.scope: Deactivated successfully.
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.635874499 +0000 UTC m=+0.655036143 container died 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:33:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332079526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd79701c78a536b5624c5c223d6a3159f8cbc7d3acca4e7bcd46b5598e23e973-merged.mount: Deactivated successfully.
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.665 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.671 244018 DEBUG nova.compute.provider_tree [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:38 compute-0 podman[315832]: 2026-02-25 12:33:38.672490484 +0000 UTC m=+0.691652148 container remove 6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:33:38 compute-0 systemd[1]: libpod-conmon-6b9c3e36280bfbf3cb668cbec669ea956d5b8798fe1aa14b288546af9e2c887c.scope: Deactivated successfully.
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.690 244018 DEBUG nova.scheduler.client.report [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:33:38 compute-0 sudo[315751]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.740 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.741 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:33:38 compute-0 sudo[315905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:33:38 compute-0 sudo[315905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:38 compute-0 sudo[315905]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.799 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.801 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:33:38 compute-0 sudo[315930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:33:38 compute-0 sudo[315930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.822 244018 INFO nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.845 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.959 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.961 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.961 244018 INFO nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating image(s)
Feb 25 12:33:38 compute-0 nova_compute[244014]: 2026-02-25 12:33:38.986 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.017 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.046 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.049 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.093935405 +0000 UTC m=+0.049205332 container create 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:33:39 compute-0 ceph-mon[76335]: pgmap v1549: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Feb 25 12:33:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3332079526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.120 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.120 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.121 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.121 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:39 compute-0 systemd[1]: Started libpod-conmon-22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d.scope.
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.144 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.156 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.066717756 +0000 UTC m=+0.021987773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.172449344 +0000 UTC m=+0.127719311 container init 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.180525962 +0000 UTC m=+0.135795899 container start 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:33:39 compute-0 systemd[1]: libpod-22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d.scope: Deactivated successfully.
Feb 25 12:33:39 compute-0 quizzical_buck[316059]: 167 167
Feb 25 12:33:39 compute-0 conmon[316059]: conmon 22d536b5afb62cc2b3c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d.scope/container/memory.events
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.185799451 +0000 UTC m=+0.141069408 container attach 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.186194443 +0000 UTC m=+0.141464400 container died 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacd8d6a9052c812a8146fb990fd09dbd7a68527bd76964a193457e429d80a9a-merged.mount: Deactivated successfully.
Feb 25 12:33:39 compute-0 podman[316020]: 2026-02-25 12:33:39.228430416 +0000 UTC m=+0.183700333 container remove 22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:33:39 compute-0 systemd[1]: libpod-conmon-22d536b5afb62cc2b3c447d4f4e3e64bed311734b3d6e4be31421176d20b4e3d.scope: Deactivated successfully.
Feb 25 12:33:39 compute-0 podman[316064]: 2026-02-25 12:33:39.249385219 +0000 UTC m=+0.093345740 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:33:39 compute-0 podman[316060]: 2026-02-25 12:33:39.266881253 +0000 UTC m=+0.109572128 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.290 244018 DEBUG nova.policy [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8636d59ca0d49698907e2edb5dc4967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e80b416cdd774e9483545c9e08abf805', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:33:39 compute-0 podman[316149]: 2026-02-25 12:33:39.39627299 +0000 UTC m=+0.077409909 container create 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:33:39 compute-0 podman[316149]: 2026-02-25 12:33:39.344671322 +0000 UTC m=+0.025808241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.459 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:39 compute-0 systemd[1]: Started libpod-conmon-39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d.scope.
Feb 25 12:33:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4339fa6a8aac7dbb02b845d9c67d8893515e14ddc4f502502cc400c1a371fc13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4339fa6a8aac7dbb02b845d9c67d8893515e14ddc4f502502cc400c1a371fc13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4339fa6a8aac7dbb02b845d9c67d8893515e14ddc4f502502cc400c1a371fc13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4339fa6a8aac7dbb02b845d9c67d8893515e14ddc4f502502cc400c1a371fc13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:39 compute-0 podman[316149]: 2026-02-25 12:33:39.516786226 +0000 UTC m=+0.197923145 container init 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.520 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] resizing rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:33:39 compute-0 podman[316149]: 2026-02-25 12:33:39.531916304 +0000 UTC m=+0.213053213 container start 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:33:39 compute-0 podman[316149]: 2026-02-25 12:33:39.535487194 +0000 UTC m=+0.216624103 container attach 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.597 244018 DEBUG nova.objects.instance [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.619 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.620 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Ensure instance console log exists: /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.620 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.621 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.621 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]: {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     "0": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "devices": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "/dev/loop3"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             ],
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_name": "ceph_lv0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_size": "21470642176",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "name": "ceph_lv0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "tags": {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_name": "ceph",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.crush_device_class": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.encrypted": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.objectstore": "bluestore",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_id": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.vdo": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.with_tpm": "0"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             },
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "vg_name": "ceph_vg0"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         }
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     ],
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     "1": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "devices": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "/dev/loop4"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             ],
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_name": "ceph_lv1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_size": "21470642176",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "name": "ceph_lv1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "tags": {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_name": "ceph",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.crush_device_class": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.encrypted": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.objectstore": "bluestore",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_id": "1",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.vdo": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.with_tpm": "0"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             },
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "vg_name": "ceph_vg1"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         }
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     ],
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     "2": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "devices": [
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "/dev/loop5"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             ],
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_name": "ceph_lv2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_size": "21470642176",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "name": "ceph_lv2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "tags": {
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.cluster_name": "ceph",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.crush_device_class": "",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.encrypted": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.objectstore": "bluestore",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osd_id": "2",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.vdo": "0",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:                 "ceph.with_tpm": "0"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             },
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "type": "block",
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:             "vg_name": "ceph_vg2"
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:         }
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]:     ]
Feb 25 12:33:39 compute-0 compassionate_chatelet[316173]: }
Feb 25 12:33:39 compute-0 systemd[1]: libpod-39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d.scope: Deactivated successfully.
Feb 25 12:33:39 compute-0 podman[316247]: 2026-02-25 12:33:39.839679222 +0000 UTC m=+0.027028275 container died 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4339fa6a8aac7dbb02b845d9c67d8893515e14ddc4f502502cc400c1a371fc13-merged.mount: Deactivated successfully.
Feb 25 12:33:39 compute-0 nova_compute[244014]: 2026-02-25 12:33:39.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:39 compute-0 podman[316247]: 2026-02-25 12:33:39.878997713 +0000 UTC m=+0.066346756 container remove 39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_chatelet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:33:39 compute-0 systemd[1]: libpod-conmon-39feb9acdd90ea882bc63ad173ca81478da93c9c7b0d65c0192039267e79a07d.scope: Deactivated successfully.
Feb 25 12:33:39 compute-0 sudo[315930]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:39 compute-0 sudo[316262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:33:39 compute-0 sudo[316262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:39 compute-0 sudo[316262]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Feb 25 12:33:40 compute-0 sudo[316287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:33:40 compute-0 sudo[316287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.351833496 +0000 UTC m=+0.052101313 container create 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:40 compute-0 systemd[1]: Started libpod-conmon-1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78.scope.
Feb 25 12:33:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.33178815 +0000 UTC m=+0.032055987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.437315762 +0000 UTC m=+0.137583619 container init 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.443522768 +0000 UTC m=+0.143790595 container start 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.447198372 +0000 UTC m=+0.147466259 container attach 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:33:40 compute-0 bold_haibt[316342]: 167 167
Feb 25 12:33:40 compute-0 systemd[1]: libpod-1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78.scope: Deactivated successfully.
Feb 25 12:33:40 compute-0 conmon[316342]: conmon 1ae83188c1e7f58ee425 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78.scope/container/memory.events
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.450799613 +0000 UTC m=+0.151067490 container died 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-92fde9cfb8a83cd33e9bbc04dd5b22bd8d8e7b947ca9b028ded40a1660a89cab-merged.mount: Deactivated successfully.
Feb 25 12:33:40 compute-0 podman[316326]: 2026-02-25 12:33:40.490634499 +0000 UTC m=+0.190902316 container remove 1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:33:40 compute-0 systemd[1]: libpod-conmon-1ae83188c1e7f58ee425c69ccd454e1defeb8cc647e613f6da82910468e02c78.scope: Deactivated successfully.
Feb 25 12:33:40 compute-0 podman[316366]: 2026-02-25 12:33:40.691810025 +0000 UTC m=+0.059222135 container create b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:33:40 compute-0 systemd[1]: Started libpod-conmon-b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e.scope.
Feb 25 12:33:40 compute-0 podman[316366]: 2026-02-25 12:33:40.667903489 +0000 UTC m=+0.035315649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:33:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c65d27f0a5d0e059b3b423bc8f87a193effbaf2d2855c10ee208fca234dc2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c65d27f0a5d0e059b3b423bc8f87a193effbaf2d2855c10ee208fca234dc2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c65d27f0a5d0e059b3b423bc8f87a193effbaf2d2855c10ee208fca234dc2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c65d27f0a5d0e059b3b423bc8f87a193effbaf2d2855c10ee208fca234dc2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:40 compute-0 podman[316366]: 2026-02-25 12:33:40.814154623 +0000 UTC m=+0.181566723 container init b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:33:40 compute-0 podman[316366]: 2026-02-25 12:33:40.82256261 +0000 UTC m=+0.189974680 container start b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:33:40 compute-0 podman[316366]: 2026-02-25 12:33:40.825834413 +0000 UTC m=+0.193246523 container attach b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:33:40 compute-0 nova_compute[244014]: 2026-02-25 12:33:40.830 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Successfully created port: fcd3999e-6238-4484-971c-1fdb438ff4ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:33:41 compute-0 ceph-mon[76335]: pgmap v1550: 305 pgs: 305 active+clean; 233 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Feb 25 12:33:41 compute-0 lvm[316462]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:33:41 compute-0 lvm[316460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:33:41 compute-0 lvm[316460]: VG ceph_vg0 finished
Feb 25 12:33:41 compute-0 lvm[316462]: VG ceph_vg1 finished
Feb 25 12:33:41 compute-0 lvm[316464]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:33:41 compute-0 lvm[316464]: VG ceph_vg2 finished
Feb 25 12:33:41 compute-0 loving_brahmagupta[316383]: {}
Feb 25 12:33:41 compute-0 systemd[1]: libpod-b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e.scope: Deactivated successfully.
Feb 25 12:33:41 compute-0 systemd[1]: libpod-b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e.scope: Consumed 1.084s CPU time.
Feb 25 12:33:41 compute-0 podman[316366]: 2026-02-25 12:33:41.631011129 +0000 UTC m=+0.998423239 container died b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c65d27f0a5d0e059b3b423bc8f87a193effbaf2d2855c10ee208fca234dc2a-merged.mount: Deactivated successfully.
Feb 25 12:33:41 compute-0 podman[316366]: 2026-02-25 12:33:41.679972553 +0000 UTC m=+1.047384623 container remove b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:33:41 compute-0 systemd[1]: libpod-conmon-b48b245314143e11d8d36139ddcfb2b92789103e7a1c0eac39432683387f525e.scope: Deactivated successfully.
Feb 25 12:33:41 compute-0 sudo[316287]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:33:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:33:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:41 compute-0 sudo[316480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:33:41 compute-0 sudo[316480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:33:41 compute-0 sudo[316480]: pam_unix(sudo:session): session closed for user root
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 253 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 854 KiB/s wr, 25 op/s
Feb 25 12:33:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009302331917359042 of space, bias 1.0, pg target 0.27906995752077124 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024927422707998526 of space, bias 1.0, pg target 0.7478226812399558 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.362581730354668e-07 of space, bias 4.0, pg target 0.0010035098076425601 quantized to 16 (current 16)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:33:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
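[editor's note] The pg_autoscaler lines above all follow one formula: a pool's raw PG target is its share of cluster capacity, times its bias, times a cluster-wide PG budget (mon_target_pg_per_osd x OSD count; 100 x 3 = 300 reproduces every logged value, e.g. 0.0009302331917359042 * 300 = 0.27906995752077124 for 'vms'). The "quantized" figure is that raw target rounded up to a power of two and floored at the pool's pg_num_min. A minimal sketch under those assumptions; the per-pool minimums of 1, 16, and 32 are inferred from the "quantized to" values, not read from any config:

    import math

    # Sketch of the autoscaler arithmetic seen above. Assumes
    # mon_target_pg_per_osd=100 and 3 OSDs (budget 300), and infers the
    # per-pool pg_num_min floors (1 for .mgr, 16 for cephfs metadata,
    # 32 otherwise) from the logged "quantized to" values.

    def raw_pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    def quantized(raw, pg_num_min):
        # Round up to a power of two, floored at the pool's minimum.
        pow2 = 2 ** math.ceil(math.log2(raw)) if raw > 1 else 1
        return max(pg_num_min, pow2)

    print(raw_pg_target(0.0009302331917359042, 1.0))  # 0.27906995752077124 ('vms')
    print(quantized(0.27906995752077124, 32))         # 32, as logged
    print(quantized(0.0021557249951162337, 1))        # 1 ('.mgr'), as logged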
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.539 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Successfully updated port: fcd3999e-6238-4484-971c-1fdb438ff4ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.561 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.561 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquired lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.562 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.649 244018 DEBUG nova.compute.manager [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-changed-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.649 244018 DEBUG nova.compute.manager [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Refreshing instance network info cache due to event network-changed-fcd3999e-6238-4484-971c-1fdb438ff4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.650 244018 DEBUG oslo_concurrency.lockutils [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:33:42 compute-0 ceph-mon[76335]: pgmap v1551: 305 pgs: 305 active+clean; 253 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 854 KiB/s wr, 25 op/s
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.783 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:33:42 compute-0 nova_compute[244014]: 2026-02-25 12:33:42.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.956 244018 DEBUG nova.network.neutron [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Updating instance_info_cache with network_info: [{"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
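[editor's note] The instance_info_cache payload in the line above is ordinary JSON, so extracting the details that matter for the guest (device name, MAC, fixed IPs, MTU) is straightforward. A short sketch against an abbreviated copy of that payload:

    import json

    # Abbreviated copy of the network_info payload logged above,
    # trimmed to the fields this sketch reads.
    network_info = json.loads('''[{
      "id": "fcd3999e-6238-4484-971c-1fdb438ff4ee",
      "address": "fa:16:3e:f0:8e:f8",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.14", "type": "fixed"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tapfcd3999e-62"}]''')

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips, vif["network"]["meta"]["mtu"])
        # -> tapfcd3999e-62 fa:16:3e:f0:8e:f8 ['10.100.0.14'] 1442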
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.981 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Releasing lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.982 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance network_info: |[{"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.982 244018 DEBUG oslo_concurrency.lockutils [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.983 244018 DEBUG nova.network.neutron [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Refreshing network info cache for port fcd3999e-6238-4484-971c-1fdb438ff4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
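[editor's note] The two request contexts serialize on the same named lock: req-456a... starts Acquiring "refresh_cache-0f9c..." at 12:33:42.650 but only logs Acquired at 12:33:43.982, immediately after req-85e4... releases it at 12:33:43.981. oslo.concurrency exposes that lock as a context manager; a minimal sketch of the pattern, with the body standing in for the real cache refresh:

    from oslo_concurrency import lockutils

    instance_uuid = "0f9c0573-54eb-4c29-82f2-f79ac673120f"

    def refresh_network_info_cache(uuid):
        print("refreshing cache for", uuid)  # stand-in for the real work

    # Same in-process named lock the log shows being acquired,
    # held, and released around the cache update.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        refresh_network_info_cache(instance_uuid)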
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.988 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start _get_guest_xml network_info=[{"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:33:43 compute-0 nova_compute[244014]: 2026-02-25 12:33:43.995 244018 WARNING nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.002 244018 DEBUG nova.virt.libvirt.host [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.003 244018 DEBUG nova.virt.libvirt.host [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.007 244018 DEBUG nova.virt.libvirt.host [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:33:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.008 244018 DEBUG nova.virt.libvirt.host [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.009 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.010 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.011 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.011 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.012 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.012 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.012 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.013 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.013 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.013 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.013 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.014 244018 DEBUG nova.virt.hardware [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
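[editor's note] The topology walk above starts from no constraints (flavor and image limits and preferences all 0:0:0, caps of 65536 each) and then enumerates factorizations of the vCPU count; with one vCPU the only candidate is sockets=1, cores=1, threads=1. A toy enumeration in the same spirit, not nova's actual code:

    # Enumerate (sockets, cores, threads) splits of a vCPU count, capped
    # like the logged limits of 65536 each. With vcpus=1 the only result
    # is (1, 1, 1), matching "Possible topologies [VirtCPUTopology(...)]".
    def possible_topologies(vcpus, cap=65536):
        out = []
        for sockets in range(1, min(vcpus, cap) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, cap) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= cap:
                    out.append((sockets, cores, threads))
        return out

    print(possible_topologies(1))  # [(1, 1, 1)]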
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.018 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:44 compute-0 ceph-mon[76335]: pgmap v1552: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:33:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398086101' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.571 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.589 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.592 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:44 compute-0 nova_compute[244014]: 2026-02-25 12:33:44.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254845911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.075 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
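[editor's note] The paired "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.483s" lines come from oslo.concurrency's processutils, which nova uses to shell out to the ceph CLI. The same call, sketched; it needs a reachable cluster and the openstack keyring, exactly as in this log:

    import json
    from oslo_concurrency import processutils

    # Same command the log shows; returns (stdout, stderr) and raises
    # ProcessExecutionError on a nonzero exit code.
    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    mons = json.loads(out)["mons"]
    print([(m["name"], m["addr"]) for m in mons])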
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.076 244018 DEBUG nova.virt.libvirt.vif [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:38Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.077 244018 DEBUG nova.network.os_vif_util [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.077 244018 DEBUG nova.network.os_vif_util [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.078 244018 DEBUG nova.objects.instance [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.094 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <uuid>0f9c0573-54eb-4c29-82f2-f79ac673120f</uuid>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <name>instance-00000053</name>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-588501724</nova:name>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:33:43</nova:creationTime>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <nova:port uuid="fcd3999e-6238-4484-971c-1fdb438ff4ee">
Feb 25 12:33:45 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <system>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="serial">0f9c0573-54eb-4c29-82f2-f79ac673120f</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="uuid">0f9c0573-54eb-4c29-82f2-f79ac673120f</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </system>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <os>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </os>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <features>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </features>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0f9c0573-54eb-4c29-82f2-f79ac673120f_disk">
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config">
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:45 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f0:8e:f8"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <target dev="tapfcd3999e-62"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/console.log" append="off"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <video>
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </video>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:33:45 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:33:45 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:33:45 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:33:45 compute-0 nova_compute[244014]: </domain>
Feb 25 12:33:45 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
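[editor's note] The XML dump ending here is the full guest definition nova hands to libvirt. For reference, defining and booting a domain from such a document with libvirt-python takes two calls; this assumes a local system libvirtd and that domain_xml holds the <domain> text above:

    import libvirt

    # Define and boot a guest from XML like the dump above.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(domain_xml)  # persistent definition
        dom.create()                      # power it on
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()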
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.095 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Preparing to wait for external event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.095 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.095 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.095 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.096 244018 DEBUG nova.virt.libvirt.vif [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:38Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.096 244018 DEBUG nova.network.os_vif_util [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.097 244018 DEBUG nova.network.os_vif_util [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.097 244018 DEBUG os_vif [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.098 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.098 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.101 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcd3999e-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.102 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcd3999e-62, col_values=(('external_ids', {'iface-id': 'fcd3999e-6238-4484-971c-1fdb438ff4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:8e:f8', 'vm-uuid': '0f9c0573-54eb-4c29-82f2-f79ac673120f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
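[editor's note] These two ovsdbapp commands run as one transaction: add tapfcd3999e-62 to br-int, then stamp the Interface row's external_ids with the Neutron port ID and MAC so ovn-controller can claim the port (which it does at 12:33:46 below). The same transaction expressed through ovs-vsctl, for comparison:

    import subprocess

    port, iface_id, mac, vm = ("tapfcd3999e-62",
                               "fcd3999e-6238-4484-971c-1fdb438ff4ee",
                               "fa:16:3e:f0:8e:f8",
                               "0f9c0573-54eb-4c29-82f2-f79ac673120f")

    # One atomic ovs-vsctl invocation mirroring the logged AddPortCommand
    # + DbSetCommand pair (--may-exist matches may_exist=True).
    subprocess.run(
        ["ovs-vsctl",
         "--", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm}"],
        check=True)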
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:45 compute-0 NetworkManager[49836]: <info>  [1772022825.1040] manager: (tapfcd3999e-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.112 244018 INFO os_vif [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62')
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.165 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.165 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.166 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:f0:8e:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.166 244018 INFO nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Using config drive
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.193 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1398086101' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3254845911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.940 244018 INFO nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating config drive at /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config
Feb 25 12:33:45 compute-0 nova_compute[244014]: 2026-02-25 12:33:45.943 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfcsma7gx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.070 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfcsma7gx" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.093 244018 DEBUG nova.storage.rbd_utils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.097 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.214 244018 DEBUG oslo_concurrency.processutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.215 244018 INFO nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deleting local config drive /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config because it was imported into RBD.
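(As the surrounding rbd_utils lines suggest, this deployment backs instance disks with RBD, so the freshly built ISO is pushed into the vms pool and the local copy deleted, exactly as the INFO line states. An equivalent sketch of that import-then-delete step, reusing the flags from the logged command; this is illustrative, not nova's code.)

    # Sketch: import the config-drive ISO into RBD, then drop the local file.
    import os
    import subprocess

    def import_then_delete(local_iso, rbd_name):
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms', local_iso, rbd_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        os.unlink(local_iso)  # redundant once the image lives in the pool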
Feb 25 12:33:46 compute-0 kernel: tapfcd3999e-62: entered promiscuous mode
Feb 25 12:33:46 compute-0 ovn_controller[147040]: 2026-02-25T12:33:46Z|00798|binding|INFO|Claiming lport fcd3999e-6238-4484-971c-1fdb438ff4ee for this chassis.
Feb 25 12:33:46 compute-0 ovn_controller[147040]: 2026-02-25T12:33:46Z|00799|binding|INFO|fcd3999e-6238-4484-971c-1fdb438ff4ee: Claiming fa:16:3e:f0:8e:f8 10.100.0.14
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.2576] manager: (tapfcd3999e-62): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Feb 25 12:33:46 compute-0 ceph-mon[76335]: pgmap v1553: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 systemd-udevd[316638]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.2884] device (tapfcd3999e-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.2888] device (tapfcd3999e-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:46 compute-0 systemd-machined[210048]: New machine qemu-101-instance-00000053.
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.296 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8e:f8 10.100.0.14'], port_security=['fa:16:3e:f0:8e:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0f9c0573-54eb-4c29-82f2-f79ac673120f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcd3999e-6238-4484-971c-1fdb438ff4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.298 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcd3999e-6238-4484-971c-1fdb438ff4ee in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.299 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:33:46 compute-0 ovn_controller[147040]: 2026-02-25T12:33:46Z|00800|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee ovn-installed in OVS
Feb 25 12:33:46 compute-0 ovn_controller[147040]: 2026-02-25T12:33:46Z|00801|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee up in Southbound
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000053.
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.308 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f49204b5-abce-4b25-b154-1ae7691a288d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.309 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.313 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40b846fd-4996-4350-b731-68ca9781ff82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.314 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a88083e1-fa0d-4439-84cd-3a45de8629d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.326 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f44930-44c9-410f-b541-e35ba08eb00e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd379a-01b3-49b2-9bba-125b4976251b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.363 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d89724bb-0739-4baf-9851-40fe1b4566af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 systemd-udevd[316643]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.3713] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/339)
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42f79887-1bad-4123-a8e5-c6fc754fc210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.396 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a24f79ba-917b-4a65-8bd1-e040e16d9c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.401 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[42d4359a-f32d-441e-8d1b-46fbdbe14e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.4232] device (tap9d1639de-d0): carrier: link connected
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[134a226c-18cf-4f36-9803-cd7b7ed5f2e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bd48356f-da6e-4459-9664-deb394190f16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479583, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316674, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a59f58-30f8-4a28-944e-d1d93c5aee6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479583, 'tstamp': 479583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316675, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8c06d4-25b5-4ce9-9432-d2e64ac31077]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479583, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316676, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb1862c-ee77-4669-820f-52c5e5cfc31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8499c496-cd42-4e20-aeb3-be3e0035dd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.536 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.537 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:46 compute-0 NetworkManager[49836]: <info>  [1772022826.5400] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Feb 25 12:33:46 compute-0 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.544 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
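(The three ovsdbapp transactions above, DelPortCommand, AddPortCommand, and DbSetCommand, move the metadata VETH leg onto br-int and stamp the interface with the iface-id that ovn-controller matches against its Port_Binding rows, which is what drives the claim/release binding messages around them. The same plumbing expressed with ovs-vsctl from Python; an illustrative equivalent, not the agent's code path.)

    # Illustrative equivalent of the logged OVSDB transactions;
    # bridge/port/iface_id values come from the log lines above.
    import subprocess

    def plug_metadata_port(bridge, port, iface_id):
        subprocess.run(
            ['ovs-vsctl',
             '--', '--if-exists', 'del-port', 'br-ex', port,
             '--', '--may-exist', 'add-port', bridge, port,
             '--', 'set', 'Interface', port,
             'external_ids:iface-id=%s' % iface_id],
            check=True)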
Feb 25 12:33:46 compute-0 ovn_controller[147040]: 2026-02-25T12:33:46Z|00802|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.556 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e94def-8a66-4a7e-908f-374fa856f376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.558 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:33:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:46.559 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
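(The dumped haproxy_cfg is the entire metadata proxy: it binds 169.254.169.254:80 inside the ovnmeta namespace, tags each request with the X-OVN-Network-ID header, and forwards it to the agent's /var/lib/neutron/metadata_proxy unix socket, which haproxy selects because the server address starts with a slash. The rootwrap command above then launches haproxy inside that namespace. A quick diagnostic sketch for probing the listener from the hypervisor; the namespace name is copied from the log, the /openstack endpoint is just an illustrative metadata path.)

    # Diagnostic sketch: confirm the proxy answers inside the namespace.
    import subprocess

    ns = 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21'
    subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', 'http://169.254.169.254/openstack'],
        check=True)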
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.685 244018 DEBUG nova.network.neutron [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Updated VIF entry in instance network info cache for port fcd3999e-6238-4484-971c-1fdb438ff4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.685 244018 DEBUG nova.network.neutron [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Updating instance_info_cache with network_info: [{"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.690 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022826.6896057, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.690 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Started (Lifecycle Event)
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.776 244018 DEBUG oslo_concurrency.lockutils [req-456a569e-870a-4153-a754-adf12c540b73 req-663cd205-fc12-435d-8753-dc8b122b2ce9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0f9c0573-54eb-4c29-82f2-f79ac673120f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.778 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.783 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022826.6897244, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.783 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Paused (Lifecycle Event)
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.915 244018 DEBUG nova.compute.manager [req-d1042d3d-02a6-469e-8743-19bea9894332 req-f36bda4c-a6de-4057-97fa-57b46d4d0b5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.915 244018 DEBUG oslo_concurrency.lockutils [req-d1042d3d-02a6-469e-8743-19bea9894332 req-f36bda4c-a6de-4057-97fa-57b46d4d0b5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.916 244018 DEBUG oslo_concurrency.lockutils [req-d1042d3d-02a6-469e-8743-19bea9894332 req-f36bda4c-a6de-4057-97fa-57b46d4d0b5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.916 244018 DEBUG oslo_concurrency.lockutils [req-d1042d3d-02a6-469e-8743-19bea9894332 req-f36bda4c-a6de-4057-97fa-57b46d4d0b5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.917 244018 DEBUG nova.compute.manager [req-d1042d3d-02a6-469e-8743-19bea9894332 req-f36bda4c-a6de-4057-97fa-57b46d4d0b5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Processing event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.918 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.924 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.929 244018 INFO nova.virt.libvirt.driver [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance spawned successfully.
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.929 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.938 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022826.9228854, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.938 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Resumed (Lifecycle Event)
Feb 25 12:33:46 compute-0 podman[316750]: 2026-02-25 12:33:46.954640056 +0000 UTC m=+0.076864643 container create 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:33:46 compute-0 systemd[1]: Started libpod-conmon-8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b.scope.
Feb 25 12:33:46 compute-0 nova_compute[244014]: 2026-02-25 12:33:46.997 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.005 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:47 compute-0 podman[316750]: 2026-02-25 12:33:46.917067894 +0000 UTC m=+0.039292541 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.008 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.009 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.009 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.009 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.010 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.010 244018 DEBUG nova.virt.libvirt.driver [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1541cfdcc527c3aae642c99cef6b14fc2ac3bbcb0b8c5ce35f768d47b5c1aeb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:33:47 compute-0 podman[316750]: 2026-02-25 12:33:47.044432954 +0000 UTC m=+0.166657551 container init 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:33:47 compute-0 podman[316750]: 2026-02-25 12:33:47.055570759 +0000 UTC m=+0.177795326 container start 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.078 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:47 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [NOTICE]   (316769) : New worker (316771) forked
Feb 25 12:33:47 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [NOTICE]   (316769) : Loading success.
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.167 244018 INFO nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Took 8.21 seconds to spawn the instance on the hypervisor.
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.168 244018 DEBUG nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.388 244018 INFO nova.compute.manager [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Took 9.53 seconds to build instance.
Feb 25 12:33:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:33:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819306250' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:33:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:33:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819306250' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.497 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.498 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2819306250' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:33:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2819306250' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.636 244018 DEBUG oslo_concurrency.lockutils [None req-85e441e8-9390-48c9-bd96-e66853a6ea17 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.638 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.919 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.920 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.927 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:33:47 compute-0 nova_compute[244014]: 2026-02-25 12:33:47.928 244018 INFO nova.compute.claims [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:33:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Feb 25 12:33:48 compute-0 nova_compute[244014]: 2026-02-25 12:33:48.291 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:48 compute-0 ceph-mon[76335]: pgmap v1554: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Feb 25 12:33:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:33:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689860374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:48 compute-0 nova_compute[244014]: 2026-02-25 12:33:48.862 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
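(The 0.571s CMD above is nova's RBD backend sizing its storage by shelling out to ceph df, which is also what produced the audit-channel dispatch lines on the monitor. A sketch of reading the same numbers, with top-level key names per ceph's JSON output and error handling omitted.)

    # Sketch: parse `ceph df --format=json` the way a capacity check might.
    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])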
Feb 25 12:33:48 compute-0 nova_compute[244014]: 2026-02-25 12:33:48.868 244018 DEBUG nova.compute.provider_tree [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.060 244018 DEBUG nova.scheduler.client.report [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
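(The inventory dict in that line is what bounds scheduling on this node: for each resource class, placement allows capacity = (total - reserved) * allocation_ratio. Worked through with the logged values as a small sketch.)

    # Placement's capacity formula applied to the inventory logged above.
    def capacity(total, reserved, allocation_ratio):
        return int((total - reserved) * allocation_ratio)

    print(capacity(8, 0, 4.0))       # VCPU      -> 32
    print(capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167
    print(capacity(59, 1, 0.9))      # DISK_GB   -> 52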
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.177 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.179 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:33:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2689860374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.754 244018 DEBUG nova.compute.manager [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.756 244018 DEBUG oslo_concurrency.lockutils [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.756 244018 DEBUG oslo_concurrency.lockutils [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.756 244018 DEBUG oslo_concurrency.lockutils [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.757 244018 DEBUG nova.compute.manager [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.757 244018 WARNING nova.compute.manager [req-b2b921b1-8b0d-48dc-b37a-64c81cff355b req-643558f4-08ce-4e70-b78b-ae92afaaec5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received unexpected event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with vm_state active and task_state None.
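(This WARNING is benign: the spawn path's waiter for network-vif-plugged was already satisfied at 12:33:46, the "Instance event wait completed in 0 seconds" line, so when Neutron re-sent the event at 12:33:49 there was no waiter left to pop, hence "No waiting events found" followed by the warning. The pattern in miniature, as an illustrative analogue rather than nova's implementation.)

    # Analogue of the pop-a-waiter-or-warn event handling seen above.
    import threading

    waiters = {'network-vif-plugged-fcd3999e': threading.Event()}

    def external_instance_event(name):
        event = waiters.pop(name, None)
        if event is None:
            print('Received unexpected event %s' % name)  # the WARNING path
        else:
            event.set()  # wakes the thread blocked on this event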
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.973 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:33:49 compute-0 nova_compute[244014]: 2026-02-25 12:33:49.974 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:33:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.271 244018 INFO nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.284 244018 DEBUG nova.policy [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cb2c815ef3214a7b897f911b4f53a146', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '367d43ab207546c3900a8414f0713ef4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.350 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.519 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.520 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.521 244018 INFO nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Creating image(s)
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.539 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.560 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.582 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.586 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.644 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.645 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.646 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.646 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.670 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.674 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:50 compute-0 ceph-mon[76335]: pgmap v1555: 305 pgs: 305 active+clean; 279 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Feb 25 12:33:50 compute-0 nova_compute[244014]: 2026-02-25 12:33:50.994 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.046 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] resizing rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.115 244018 DEBUG nova.objects.instance [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.427 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.428 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Ensure instance console log exists: /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.429 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.430 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.430 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.490 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Successfully created port: e7482fdc-9792-451e-adf6-a816dd7113c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.516 244018 INFO nova.compute.manager [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Rebuilding instance
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.806 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.828 244018 DEBUG nova.compute.manager [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.882 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.902 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.917 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.931 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.942 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:33:51 compute-0 nova_compute[244014]: 2026-02-25 12:33:51.944 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:33:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 304 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 82 op/s
Feb 25 12:33:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.396 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Successfully updated port: e7482fdc-9792-451e-adf6-a816dd7113c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.412 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.413 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquired lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.413 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.511 244018 INFO nova.compute.manager [None req-81298521-8931-48c4-88b9-e2851ba35ac6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Pausing
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.513 244018 DEBUG nova.objects.instance [None req-81298521-8931-48c4-88b9-e2851ba35ac6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'flavor' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.560 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022832.554953, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.560 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Paused (Lifecycle Event)
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.562 244018 DEBUG nova.compute.manager [None req-81298521-8931-48c4-88b9-e2851ba35ac6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.602 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.605 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.671 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.804 244018 DEBUG nova.compute.manager [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-changed-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.804 244018 DEBUG nova.compute.manager [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Refreshing instance network info cache due to event network-changed-e7482fdc-9792-451e-adf6-a816dd7113c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:33:52 compute-0 nova_compute[244014]: 2026-02-25 12:33:52.804 244018 DEBUG oslo_concurrency.lockutils [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:33:53 compute-0 ceph-mon[76335]: pgmap v1556: 305 pgs: 305 active+clean; 304 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 82 op/s
Feb 25 12:33:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 103 op/s
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.748 244018 INFO nova.compute.manager [None req-249085f4-a459-4589-bbd9-9274173dc9f9 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Unpausing
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.750 244018 DEBUG nova.objects.instance [None req-249085f4-a459-4589-bbd9-9274173dc9f9 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'flavor' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.777 244018 DEBUG nova.network.neutron [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.781 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022834.781257, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.781 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Resumed (Lifecycle Event)
Feb 25 12:33:54 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.786 244018 DEBUG nova.virt.libvirt.guest [None req-249085f4-a459-4589-bbd9-9274173dc9f9 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.786 244018 DEBUG nova.compute.manager [None req-249085f4-a459-4589-bbd9-9274173dc9f9 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.808 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.809 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Releasing lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.809 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance network_info: |[{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.809 244018 DEBUG oslo_concurrency.lockutils [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.810 244018 DEBUG nova.network.neutron [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Refreshing network info cache for port e7482fdc-9792-451e-adf6-a816dd7113c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.812 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start _get_guest_xml network_info=[{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.818 244018 WARNING nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.822 244018 DEBUG nova.virt.libvirt.host [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.823 244018 DEBUG nova.virt.libvirt.host [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.827 244018 DEBUG nova.virt.libvirt.host [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.827 244018 DEBUG nova.virt.libvirt.host [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.828 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.828 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.828 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.828 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.829 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.830 244018 DEBUG nova.virt.hardware [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.832 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.867 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 12:33:54 compute-0 nova_compute[244014]: 2026-02-25 12:33:54.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:55.014 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:55.014 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:55.015 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:55 compute-0 ceph-mon[76335]: pgmap v1557: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 103 op/s
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715789687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.387 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.420 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.424 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:33:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643917434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.933 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.935 244018 DEBUG nova.virt.libvirt.vif [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-842091875',display_name='tempest-ServerRescueTestJSON-server-842091875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-842091875',id=84,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-aun4358p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:50Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=aeaad9e2-4ad0-46fb-b619-7ca2c78443a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.936 244018 DEBUG nova.network.os_vif_util [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.937 244018 DEBUG nova.network.os_vif_util [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.939 244018 DEBUG nova.objects.instance [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.960 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <uuid>aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</uuid>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <name>instance-00000054</name>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSON-server-842091875</nova:name>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:33:54</nova:creationTime>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:user uuid="cb2c815ef3214a7b897f911b4f53a146">tempest-ServerRescueTestJSON-930018924-project-member</nova:user>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:project uuid="367d43ab207546c3900a8414f0713ef4">tempest-ServerRescueTestJSON-930018924</nova:project>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <nova:port uuid="e7482fdc-9792-451e-adf6-a816dd7113c8">
Feb 25 12:33:55 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <system>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="serial">aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="uuid">aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </system>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <os>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </os>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <features>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </features>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk">
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config">
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </source>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:33:55 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:56:aa:11"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <target dev="tape7482fdc-97"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/console.log" append="off"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <video>
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </video>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:33:55 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:33:55 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:33:55 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:33:55 compute-0 nova_compute[244014]: </domain>
Feb 25 12:33:55 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.961 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Preparing to wait for external event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.961 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.962 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.962 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.963 244018 DEBUG nova.virt.libvirt.vif [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-842091875',display_name='tempest-ServerRescueTestJSON-server-842091875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-842091875',id=84,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-aun4358p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:50Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=aeaad9e2-4ad0-46fb-b619-7ca2c78443a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.964 244018 DEBUG nova.network.os_vif_util [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.965 244018 DEBUG nova.network.os_vif_util [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.965 244018 DEBUG os_vif [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.966 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.967 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.968 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.972 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7482fdc-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.974 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7482fdc-97, col_values=(('external_ids', {'iface-id': 'e7482fdc-9792-451e-adf6-a816dd7113c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:aa:11', 'vm-uuid': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 NetworkManager[49836]: <info>  [1772022835.9772] manager: (tape7482fdc-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.981 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:55 compute-0 nova_compute[244014]: 2026-02-25 12:33:55.982 244018 INFO os_vif [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97')
Feb 25 12:33:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.043 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.043 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.044 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No VIF found with MAC fa:16:3e:56:aa:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.044 244018 INFO nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Using config drive
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.079 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1715789687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3643917434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.467 244018 DEBUG nova.network.neutron [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updated VIF entry in instance network info cache for port e7482fdc-9792-451e-adf6-a816dd7113c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.468 244018 DEBUG nova.network.neutron [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.493 244018 DEBUG oslo_concurrency.lockutils [req-acd2678f-5ce3-4b3f-88d3-4c8c7af85611 req-b1ecc328-a685-4e6d-aafe-331be3950f89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.609 244018 INFO nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Creating config drive at /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.615 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpypf09g_a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.755 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpypf09g_a" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.780 244018 DEBUG nova.storage.rbd_utils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.785 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.933 244018 DEBUG oslo_concurrency.processutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.934 244018 INFO nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deleting local config drive /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config because it was imported into RBD.
Feb 25 12:33:56 compute-0 NetworkManager[49836]: <info>  [1772022836.9831] manager: (tape7482fdc-97): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Feb 25 12:33:56 compute-0 kernel: tape7482fdc-97: entered promiscuous mode
Feb 25 12:33:56 compute-0 ovn_controller[147040]: 2026-02-25T12:33:56Z|00803|binding|INFO|Claiming lport e7482fdc-9792-451e-adf6-a816dd7113c8 for this chassis.
Feb 25 12:33:56 compute-0 ovn_controller[147040]: 2026-02-25T12:33:56Z|00804|binding|INFO|e7482fdc-9792-451e-adf6-a816dd7113c8: Claiming fa:16:3e:56:aa:11 10.100.0.12
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:56 compute-0 nova_compute[244014]: 2026-02-25 12:33:56.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:57 compute-0 systemd-udevd[317104]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:33:57 compute-0 systemd-machined[210048]: New machine qemu-102-instance-00000054.
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:57 compute-0 NetworkManager[49836]: <info>  [1772022837.0244] device (tape7482fdc-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:33:57 compute-0 NetworkManager[49836]: <info>  [1772022837.0248] device (tape7482fdc-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:33:57 compute-0 ovn_controller[147040]: 2026-02-25T12:33:57Z|00805|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 ovn-installed in OVS
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:33:57 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000054.
Feb 25 12:33:57 compute-0 ceph-mon[76335]: pgmap v1558: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:33:57 compute-0 ovn_controller[147040]: 2026-02-25T12:33:57Z|00806|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 up in Southbound
Feb 25 12:33:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:57.112 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:aa:11 10.100.0.12'], port_security=['fa:16:3e:56:aa:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:33:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:57.114 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e7482fdc-9792-451e-adf6-a816dd7113c8 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 bound to our chassis
Feb 25 12:33:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:57.114 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:33:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:33:57.115 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae064901-cf3c-41ea-8e40-3a20bf89ebd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:33:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.555 244018 DEBUG nova.compute.manager [req-5f0d24b0-bda3-417d-a49c-c4ac6bfcbdfb req-0817585f-11a0-4884-b151-526c43c8ff2a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.555 244018 DEBUG oslo_concurrency.lockutils [req-5f0d24b0-bda3-417d-a49c-c4ac6bfcbdfb req-0817585f-11a0-4884-b151-526c43c8ff2a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.556 244018 DEBUG oslo_concurrency.lockutils [req-5f0d24b0-bda3-417d-a49c-c4ac6bfcbdfb req-0817585f-11a0-4884-b151-526c43c8ff2a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.556 244018 DEBUG oslo_concurrency.lockutils [req-5f0d24b0-bda3-417d-a49c-c4ac6bfcbdfb req-0817585f-11a0-4884-b151-526c43c8ff2a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:57 compute-0 nova_compute[244014]: 2026-02-25 12:33:57.556 244018 DEBUG nova.compute.manager [req-5f0d24b0-bda3-417d-a49c-c4ac6bfcbdfb req-0817585f-11a0-4884-b151-526c43c8ff2a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Processing event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:33:57 compute-0 ovn_controller[147040]: 2026-02-25T12:33:57Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:8e:f8 10.100.0.14
Feb 25 12:33:57 compute-0 ovn_controller[147040]: 2026-02-25T12:33:57Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:8e:f8 10.100.0.14
Feb 25 12:33:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 335 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 119 op/s
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.228 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022838.22764, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.228 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Started (Lifecycle Event)
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.232 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.237 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.239 244018 INFO nova.virt.libvirt.driver [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance spawned successfully.
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.239 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.314 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.319 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.322 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.323 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.323 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.324 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.324 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.324 244018 DEBUG nova.virt.libvirt.driver [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.586 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.586 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022838.2277765, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.586 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Paused (Lifecycle Event)
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.792 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.795 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022838.2356598, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.795 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Resumed (Lifecycle Event)
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.798 244018 INFO nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 8.28 seconds to spawn the instance on the hypervisor.
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.799 244018 DEBUG nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.829 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.835 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.887 244018 INFO nova.compute.manager [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 11.00 seconds to build instance.
Feb 25 12:33:58 compute-0 nova_compute[244014]: 2026-02-25 12:33:58.907 244018 DEBUG oslo_concurrency.lockutils [None req-eecdf9e9-a29d-40f8-b52b-ecdf9b64232e cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:59 compute-0 ceph-mon[76335]: pgmap v1559: 305 pgs: 305 active+clean; 335 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 119 op/s
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.817 244018 DEBUG nova.compute.manager [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.817 244018 DEBUG oslo_concurrency.lockutils [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.818 244018 DEBUG oslo_concurrency.lockutils [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.818 244018 DEBUG oslo_concurrency.lockutils [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.818 244018 DEBUG nova.compute.manager [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.818 244018 WARNING nova.compute.manager [req-9cf94a03-7830-47c4-bb0b-5548010773cd req-1666bf93-2e29-4c1d-8a74-f9f77a259f04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state active and task_state None.
Feb 25 12:33:59 compute-0 nova_compute[244014]: 2026-02-25 12:33:59.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 335 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 101 op/s
Feb 25 12:34:00 compute-0 nova_compute[244014]: 2026-02-25 12:34:00.820 244018 INFO nova.compute.manager [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Rescuing
Feb 25 12:34:00 compute-0 nova_compute[244014]: 2026-02-25 12:34:00.821 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:00 compute-0 nova_compute[244014]: 2026-02-25 12:34:00.821 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquired lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:00 compute-0 nova_compute[244014]: 2026-02-25 12:34:00.821 244018 DEBUG nova.network.neutron [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:00 compute-0 nova_compute[244014]: 2026-02-25 12:34:00.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:01 compute-0 ceph-mon[76335]: pgmap v1560: 305 pgs: 305 active+clean; 335 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 101 op/s
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:02 compute-0 nova_compute[244014]: 2026-02-25 12:34:02.009 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:34:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 353 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Feb 25 12:34:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:03 compute-0 ceph-mon[76335]: pgmap v1561: 305 pgs: 305 active+clean; 353 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.008 244018 DEBUG nova.network.neutron [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 185 op/s
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.030 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Releasing lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:04 compute-0 kernel: tapfcd3999e-62 (unregistering): left promiscuous mode
Feb 25 12:34:04 compute-0 NetworkManager[49836]: <info>  [1772022844.3867] device (tapfcd3999e-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:04 compute-0 ovn_controller[147040]: 2026-02-25T12:34:04Z|00807|binding|INFO|Releasing lport fcd3999e-6238-4484-971c-1fdb438ff4ee from this chassis (sb_readonly=0)
Feb 25 12:34:04 compute-0 ovn_controller[147040]: 2026-02-25T12:34:04Z|00808|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee down in Southbound
Feb 25 12:34:04 compute-0 ovn_controller[147040]: 2026-02-25T12:34:04Z|00809|binding|INFO|Removing iface tapfcd3999e-62 ovn-installed in OVS
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:04 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Deactivated successfully.
Feb 25 12:34:04 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Consumed 12.042s CPU time.
Feb 25 12:34:04 compute-0 systemd-machined[210048]: Machine qemu-101-instance-00000053 terminated.
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.464 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8e:f8 10.100.0.14'], port_security=['fa:16:3e:f0:8e:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0f9c0573-54eb-4c29-82f2-f79ac673120f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcd3999e-6238-4484-971c-1fdb438ff4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.466 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcd3999e-6238-4484-971c-1fdb438ff4ee in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.468 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df7e5e8f-4649-4bce-ae28-bccbea00a693]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.470 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.505 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:34:04 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [NOTICE]   (316769) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:04 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [NOTICE]   (316769) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:04 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [WARNING]  (316769) : Exiting Master process...
Feb 25 12:34:04 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [ALERT]    (316769) : Current worker (316771) exited with code 143 (Terminated)
Feb 25 12:34:04 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[316765]: [WARNING]  (316769) : All workers exited. Exiting... (0)
Feb 25 12:34:04 compute-0 systemd[1]: libpod-8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b.scope: Deactivated successfully.
Feb 25 12:34:04 compute-0 podman[317180]: 2026-02-25 12:34:04.584405843 +0000 UTC m=+0.040676711 container died 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1541cfdcc527c3aae642c99cef6b14fc2ac3bbcb0b8c5ce35f768d47b5c1aeb7-merged.mount: Deactivated successfully.
Feb 25 12:34:04 compute-0 podman[317180]: 2026-02-25 12:34:04.644068549 +0000 UTC m=+0.100339417 container cleanup 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:34:04 compute-0 systemd[1]: libpod-conmon-8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b.scope: Deactivated successfully.
Feb 25 12:34:04 compute-0 podman[317220]: 2026-02-25 12:34:04.703861589 +0000 UTC m=+0.037482470 container remove 8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.708 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b18c662-297e-4072-a6c9-6ef839c6b266]: (4, ('Wed Feb 25 12:34:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b)\n8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b\nWed Feb 25 12:34:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b)\n8ca21e10c903687ee683e44c23f2705387177ae6a979e6b5e9f40d62534ca20b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd657bcf-bab6-4ff0-b1d9-a12ac9c5cd40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.711 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:04 compute-0 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.727 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5703bc-6bfc-444f-9e78-6841ccd63112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7885b8-4582-4d9a-8e78-45c43ec4ae99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7eadaf72-a4c4-4e37-b8c7-f845da004e8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.752 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[378ec9f4-91b5-41fc-a846-f104d23cdd48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479576, 'reachable_time': 22781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317240, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.756 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:04.756 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[839b7b09-f31d-4aa7-8fc1-c54f7d07aba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:04 compute-0 nova_compute[244014]: 2026-02-25 12:34:04.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.005 244018 DEBUG nova.compute.manager [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.005 244018 DEBUG oslo_concurrency.lockutils [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.006 244018 DEBUG oslo_concurrency.lockutils [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.006 244018 DEBUG oslo_concurrency.lockutils [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.006 244018 DEBUG nova.compute.manager [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.007 244018 WARNING nova.compute.manager [req-6d664494-80ba-410f-bd82-489e2d4ab217 req-be1b9cbc-e3a7-496d-b837-f0e412ca0760 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received unexpected event network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with vm_state active and task_state rebuilding.
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.025 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance shutdown successfully after 13 seconds.
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.030 244018 INFO nova.virt.libvirt.driver [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance destroyed successfully.
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.036 244018 INFO nova.virt.libvirt.driver [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance destroyed successfully.
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.038 244018 DEBUG nova.virt.libvirt.vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:50Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.038 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.039 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.040 244018 DEBUG os_vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.043 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd3999e-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.052 244018 INFO os_vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62')
Feb 25 12:34:05 compute-0 ceph-mon[76335]: pgmap v1562: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 185 op/s
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.535 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deleting instance files /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f_del
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.536 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deletion of /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f_del complete
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.711 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.712 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating image(s)
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.740 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.768 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.790 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.794 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.869 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.871 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.871 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.894 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:05 compute-0 nova_compute[244014]: 2026-02-25 12:34:05.898 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:34:06 compute-0 ceph-mon[76335]: pgmap v1563: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.174 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.220 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] resizing rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.290 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.290 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Ensure instance console log exists: /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.291 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.291 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.291 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.293 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start _get_guest_xml network_info=[{"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.296 244018 WARNING nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.303 244018 DEBUG nova.virt.libvirt.host [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.303 244018 DEBUG nova.virt.libvirt.host [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.307 244018 DEBUG nova.virt.libvirt.host [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.307 244018 DEBUG nova.virt.libvirt.host [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.307 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.307 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.308 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.309 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.309 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.309 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.309 244018 DEBUG nova.virt.hardware [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.309 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.341 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233878449' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.877 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.898 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:06 compute-0 nova_compute[244014]: 2026-02-25 12:34:06.903 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2233878449' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.370 244018 DEBUG nova.compute.manager [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.370 244018 DEBUG oslo_concurrency.lockutils [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.370 244018 DEBUG oslo_concurrency.lockutils [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.371 244018 DEBUG oslo_concurrency.lockutils [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.371 244018 DEBUG nova.compute.manager [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.371 244018 WARNING nova.compute.manager [req-8c43f3f4-2a0e-4b5f-90ae-d4052d69f2f7 req-69decf0f-efe4-451f-8efa-5e5eef036f40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received unexpected event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with vm_state active and task_state rebuild_spawning.
Feb 25 12:34:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238792750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.436 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.437 244018 DEBUG nova.virt.libvirt.vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:05Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.437 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.438 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.440 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <uuid>0f9c0573-54eb-4c29-82f2-f79ac673120f</uuid>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <name>instance-00000053</name>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-588501724</nova:name>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:06</nova:creationTime>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <nova:port uuid="fcd3999e-6238-4484-971c-1fdb438ff4ee">
Feb 25 12:34:07 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="serial">0f9c0573-54eb-4c29-82f2-f79ac673120f</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="uuid">0f9c0573-54eb-4c29-82f2-f79ac673120f</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0f9c0573-54eb-4c29-82f2-f79ac673120f_disk">
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config">
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:07 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f0:8e:f8"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <target dev="tapfcd3999e-62"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/console.log" append="off"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:07 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:07 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:07 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:07 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:07 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
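The block above is the complete libvirt domain XML that nova's libvirt driver generated for the rebuilt guest, logged from _get_guest_xml. Once the domain is defined, the same definition can be read back from libvirtd for comparison; a minimal sketch with the libvirt-python bindings (the read-only URI and the domain name, inferred from the qemu-103-instance-00000053 machine registered further down, are assumptions):

    import libvirt

    # Read-only connection to the local system libvirtd (assumed URI).
    conn = libvirt.openReadOnly("qemu:///system")

    # systemd-machined later registers "qemu-103-instance-00000053",
    # which implies the libvirt domain name "instance-00000053".
    dom = conn.lookupByName("instance-00000053")

    # XMLDesc() returns the live definition, directly comparable to the
    # XML nova logged above.
    print(dom.XMLDesc())
    conn.close()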
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.441 244018 DEBUG nova.compute.manager [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Preparing to wait for external event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.441 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.442 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.442 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.442 244018 DEBUG nova.virt.libvirt.vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:05Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.443 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.443 244018 DEBUG nova.network.os_vif_util [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.444 244018 DEBUG os_vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
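The three lines above show the hand-off from nova to the os-vif library: nova's VIF dict is converted to a VIFOpenVSwitch object (nova_to_osvif_vif) and passed to os_vif.plug(). A minimal sketch of that public API using the same values, assuming the standard os-vif ovs plugin is installed:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the plug/unplug plugins via stevedore

    # Field values copied from the VIFOpenVSwitch object logged above.
    port = vif.VIFOpenVSwitch(
        id="fcd3999e-6238-4484-971c-1fdb438ff4ee",
        address="fa:16:3e:f0:8e:f8",
        vif_name="tapfcd3999e-62",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="fcd3999e-6238-4484-971c-1fdb438ff4ee"))

    info = instance_info.InstanceInfo(
        uuid="0f9c0573-54eb-4c29-82f2-f79ac673120f",
        name="instance-00000053")

    # The ovs plugin translates this into the ovsdbapp transactions
    # that follow in the log.
    os_vif.plug(port, info)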
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.445 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.445 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.447 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcd3999e-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.448 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcd3999e-62, col_values=(('external_ids', {'iface-id': 'fcd3999e-6238-4484-971c-1fdb438ff4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:8e:f8', 'vm-uuid': '0f9c0573-54eb-4c29-82f2-f79ac673120f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:07 compute-0 NetworkManager[49836]: <info>  [1772022847.4505] manager: (tapfcd3999e-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.455 244018 INFO os_vif [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62')
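The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands executed against the local Open vSwitch database; the external_ids written on the Interface row (iface-id, attached-mac, vm-uuid) are what ovn-controller matches against when it claims the logical port a moment later. A rough standalone equivalent of the logged transaction, assuming the default ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same two commands as the logged txn: create the port on br-int,
    # then stamp the Interface row so ovn-controller can bind the lport.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapfcd3999e-62", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapfcd3999e-62",
            ("external_ids", {
                "iface-id": "fcd3999e-6238-4484-971c-1fdb438ff4ee",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:f0:8e:f8",
                "vm-uuid": "0f9c0573-54eb-4c29-82f2-f79ac673120f"})))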
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.531 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.531 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.531 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:f0:8e:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.532 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Using config drive
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.548 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.570 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:07 compute-0 nova_compute[244014]: 2026-02-25 12:34:07.615 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'keypairs' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.091 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Creating config drive at /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.098 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9c31y9ro execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2238792750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:08 compute-0 ceph-mon[76335]: pgmap v1564: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.235 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9c31y9ro" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.256 244018 DEBUG nova.storage.rbd_utils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.259 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.461 244018 DEBUG oslo_concurrency.processutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config 0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.462 244018 INFO nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deleting local config drive /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config because it was imported into RBD.
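Config-drive sequence above: nova packs the staged metadata tree into an ISO9660 image with mkisofs, imports it into the Ceph vms pool (this deployment stores disks in RBD), then deletes the local copy; the guest sees it as the sata cdrom (target dev sda) declared in the domain XML. The same two steps driven from Python, with arguments copied from the logged commands (the /tmp/tmp9c31y9ro staging directory is the transient path from this log):

    import subprocess

    iso = ("/var/lib/nova/instances/"
           "0f9c0573-54eb-4c29-82f2-f79ac673120f/disk.config")

    # 1. Build the config-2 ISO from the staged metadata tree.
    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp9c31y9ro"],
        check=True)

    # 2. Import it as an RBD image, exactly as nova does above.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "0f9c0573-54eb-4c29-82f2-f79ac673120f_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)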
Feb 25 12:34:08 compute-0 kernel: tapfcd3999e-62: entered promiscuous mode
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.4950] manager: (tapfcd3999e-62): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 ovn_controller[147040]: 2026-02-25T12:34:08Z|00810|binding|INFO|Claiming lport fcd3999e-6238-4484-971c-1fdb438ff4ee for this chassis.
Feb 25 12:34:08 compute-0 ovn_controller[147040]: 2026-02-25T12:34:08Z|00811|binding|INFO|fcd3999e-6238-4484-971c-1fdb438ff4ee: Claiming fa:16:3e:f0:8e:f8 10.100.0.14
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 ovn_controller[147040]: 2026-02-25T12:34:08Z|00812|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee ovn-installed in OVS
Feb 25 12:34:08 compute-0 ovn_controller[147040]: 2026-02-25T12:34:08Z|00813|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee up in Southbound
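ovn-controller has now claimed the lport for this chassis, written ovn-installed on the OVS interface, and flipped the port up in the Southbound database. To verify the binding out-of-band one can query Port_Binding in the SB DB; a sketch using ovsdbapp's OVN_Southbound schema (the tcp endpoint and the OvnSbApiIdlImpl class name are assumptions about this deployment and the installed ovsdbapp version):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "tcp:192.168.122.100:6642", "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # 'chassis' should reference compute-0 and 'up' should read [True]
    # once the three binding messages above have been processed.
    rows = api.db_find(
        "Port_Binding",
        ("logical_port", "=", "fcd3999e-6238-4484-971c-1fdb438ff4ee"),
    ).execute(check_error=True)
    print(rows[0]["chassis"], rows[0]["up"])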
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.506 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8e:f8 10.100.0.14'], port_security=['fa:16:3e:f0:8e:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0f9c0573-54eb-4c29-82f2-f79ac673120f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '5', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcd3999e-6238-4484-971c-1fdb438ff4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.508 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcd3999e-6238-4484-971c-1fdb438ff4ee in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.509 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:08 compute-0 systemd-udevd[317562]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.517 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45d896cc-ec68-4c96-8ad1-519892a85aa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.518 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
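The metadata agent is creating a veth pair whose -d1 end lives inside the ovnmeta- namespace (where haproxy will bind 169.254.169.254 below) while the -d0 end stays in the root namespace and gets plugged into br-int. Neutron performs this through privsep-wrapped pyroute2 calls (the replies that follow); a condensed sketch of the same operations with plain pyroute2, names copied from the log and error handling omitted:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    netns.create(ns)  # raises OSError if the namespace already exists

    ipr = IPRoute()
    # Create the pair in the root namespace...
    ipr.link("add", ifname="tap9d1639de-d0", kind="veth",
             peer="tap9d1639de-d1")
    # ...then move the -d1 end into the ovnmeta- namespace.
    idx = ipr.link_lookup(ifname="tap9d1639de-d1")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)
    ipr.close()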
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.521 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a488de51-19d0-4615-8d4a-3c3125859d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0818508f-07d9-4ab1-bbeb-72bc1fbad62b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.5268] device (tapfcd3999e-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.5272] device (tapfcd3999e-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:08 compute-0 systemd-machined[210048]: New machine qemu-103-instance-00000053.
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.532 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[932cf2de-3ade-44c1-bade-dc598a4ea448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000053.
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50eb3e2a-46e8-4e34-ba0d-5b9487b3e37a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.571 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[27c24b23-c7a6-4137-9fc0-c89a5c3a1218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12879efa-624b-4c1a-87d3-904d65f437b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.5758] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.599 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0c709600-c637-4242-a5dd-53ecf53c8986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.604 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2bdc52-16ab-47a5-8f68-e6ef3f4fc585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.6212] device (tap9d1639de-d0): carrier: link connected
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.627 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[50663ac7-67ac-46ae-b4c3-f1c0e2a325fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99de9bfe-f44e-4cf5-918b-4e5c377b1dd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481802, 'reachable_time': 43896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317596, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.671 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[151e32de-0e88-473b-b736-445e0c905ddb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481802, 'tstamp': 481802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317597, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.683 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32b0cc7b-d04b-46ab-a3df-14d67939e390]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481802, 'reachable_time': 43896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317598, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a941c5b6-a60b-4038-8955-41871a28e575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.749 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[667e97b8-d86a-405e-9cd8-89beaf5cf998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.752 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 12:34:08 compute-0 NetworkManager[49836]: <info>  [1772022848.7545] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 ovn_controller[147040]: 2026-02-25T12:34:08Z|00814|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.761 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d763520e-d1e6-4e94-9a61-3f4f53afcf6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.762 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:34:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:08.762 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
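With the config written, the agent launches haproxy inside the namespace via rootwrap (the command above): it binds 169.254.169.254:80, proxies to the UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with the X-OVN-Network-ID header so the metadata service can resolve the requesting network. A quick liveness probe from the hypervisor, run as root (a debugging sketch; assumes curl is available on the host):

    import subprocess

    ns = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    probe = subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
         "http://169.254.169.254/"],
        capture_output=True, text=True)
    # Any HTTP status (even 404) proves haproxy answered in the namespace.
    print(probe.stdout)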
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.998 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 0f9c0573-54eb-4c29-82f2-f79ac673120f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:34:08 compute-0 nova_compute[244014]: 2026-02-25 12:34:08.999 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022848.9972625, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.000 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Started (Lifecycle Event)
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.065 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.069 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022848.9986534, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.069 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Paused (Lifecycle Event)
Feb 25 12:34:09 compute-0 podman[317671]: 2026-02-25 12:34:09.084387573 +0000 UTC m=+0.041554406 container create 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.106 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.109 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
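The numeric codes in the power-state sync above come from nova.compute.power_state: the database still records 1 (RUNNING) while libvirt momentarily reports 3 (PAUSED), because nova creates the guest paused and only resumes it once VIF plugging completes; the matching Resumed event appears at 12:34:09. For reference, the constants as defined in nova/compute/power_state.py:

    # Mirrors nova/compute/power_state.py
    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

    STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
                 SHUTDOWN: "shutdown", CRASHED: "crashed",
                 SUSPENDED: "suspended"}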
Feb 25 12:34:09 compute-0 systemd[1]: Started libpod-conmon-484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f.scope.
Feb 25 12:34:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9daf8144d2c61332227af26fd59002ada6a4990f87a50d6473671e1be55a74/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:09 compute-0 podman[317671]: 2026-02-25 12:34:09.157281643 +0000 UTC m=+0.114448516 container init 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:34:09 compute-0 podman[317671]: 2026-02-25 12:34:09.060005254 +0000 UTC m=+0.017172127 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:34:09 compute-0 podman[317671]: 2026-02-25 12:34:09.162616784 +0000 UTC m=+0.119783627 container start 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:34:09 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [NOTICE]   (317690) : New worker (317692) forked
Feb 25 12:34:09 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [NOTICE]   (317690) : Loading success.
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.189 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.595 244018 DEBUG nova.compute.manager [req-c059e1a8-3d0c-43d8-b88b-d8ec1086d71b req-ef3663f2-c690-49f2-9661-95da9be330a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.596 244018 DEBUG oslo_concurrency.lockutils [req-c059e1a8-3d0c-43d8-b88b-d8ec1086d71b req-ef3663f2-c690-49f2-9661-95da9be330a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.597 244018 DEBUG oslo_concurrency.lockutils [req-c059e1a8-3d0c-43d8-b88b-d8ec1086d71b req-ef3663f2-c690-49f2-9661-95da9be330a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.597 244018 DEBUG oslo_concurrency.lockutils [req-c059e1a8-3d0c-43d8-b88b-d8ec1086d71b req-ef3663f2-c690-49f2-9661-95da9be330a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.597 244018 DEBUG nova.compute.manager [req-c059e1a8-3d0c-43d8-b88b-d8ec1086d71b req-ef3663f2-c690-49f2-9661-95da9be330a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Processing event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.598 244018 DEBUG nova.compute.manager [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
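Nova holds the spawn until Neutron's network-vif-plugged callback arrives; "wait completed in 0 seconds" means the event landed before the waiter blocked. A schematic, self-contained sketch of that handshake pattern (a threading-based stand-in, not nova's internal wait_for_instance_event API):

    # Schematic "wait for network-vif-plugged" handshake: register the
    # expected event, do the work, then block until the external notifier
    # (a stand-in for Neutron's os-server-external-events callback) fires.
    import threading

    class EventWaiter:
        def __init__(self):
            self._events = {}  # event name -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, name):
            with self._lock:
                self._events[name] = threading.Event()

        def notify(self, name):
            with self._lock:
                ev = self._events.get(name)
            if ev:
                ev.set()

        def wait(self, name, timeout=300):
            with self._lock:
                ev = self._events[name]
            if not ev.wait(timeout):
                raise TimeoutError("timed out waiting for " + name)

    waiter = EventWaiter()
    waiter.prepare("network-vif-plugged-fcd3999e")
    # ... plug the VIF / define the domain here ...
    waiter.notify("network-vif-plugged-fcd3999e")  # normally the event handler
    waiter.wait("network-vif-plugged-fcd3999e", timeout=5)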
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.601 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.602 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022849.6021438, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.602 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Resumed (Lifecycle Event)
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.607 244018 INFO nova.virt.libvirt.driver [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance spawned successfully.
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.608 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.644 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.651 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.656 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.657 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.658 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.659 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.659 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.660 244018 DEBUG nova.virt.libvirt.driver [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
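The "Found default for ..." lines show the libvirt driver recording the buses it actually chose back into the instance's image metadata, so a later rebuild keeps the same virtual hardware (the image_hw_* keys appear in the instance's system_metadata further down in this log). A toy, dict-based sketch of that defaulting step (not nova's object model):

    # Toy model of _register_undefined_instance_details: record the values
    # the driver picked as image_* keys, filling only what the image left
    # undefined. Values copied from the log lines above.
    chosen = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_defaults(system_metadata, defaults):
        for prop, value in defaults.items():
            # setdefault: never overwrite a property the image pinned.
            system_metadata.setdefault("image_" + prop, value)
        return system_metadata

    md = register_defaults({"image_hw_machine_type": "q35"}, chosen)
    assert md["image_hw_disk_bus"] == "virtio"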
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.683 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.720 244018 DEBUG nova.compute.manager [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:09 compute-0 podman[317701]: 2026-02-25 12:34:09.743748408 +0000 UTC m=+0.076129543 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
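The health_status=healthy record is podman running the container's configured check (test: /openstack/healthcheck, mounted read-only into the container). The same check can be triggered by hand; a small sketch, with the container name taken from the log:

    # Run the ovn_metadata_agent healthcheck manually; "podman healthcheck
    # run" exits 0 when healthy and non-zero when the check fails.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else "unhealthy: " + result.stdout)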
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.771 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.772 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.772 244018 DEBUG nova.objects.instance [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:34:09 compute-0 podman[317702]: 2026-02-25 12:34:09.773023785 +0000 UTC m=+0.094802110 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.826 244018 DEBUG oslo_concurrency.lockutils [None req-dabec343-fbd9-442f-a1b2-e3bba584fb2f d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:09 compute-0 nova_compute[244014]: 2026-02-25 12:34:09.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Feb 25 12:34:10 compute-0 nova_compute[244014]: 2026-02-25 12:34:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:10 compute-0 nova_compute[244014]: 2026-02-25 12:34:10.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:34:10 compute-0 nova_compute[244014]: 2026-02-25 12:34:10.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
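_heal_instance_info_cache is driven by oslo.service's periodic task machinery, like the other "Running periodic task" lines in this log. A minimal sketch of declaring and running such a task (class name and the 60-second spacing are illustrative, not nova's configuration):

    # Minimal oslo.service periodic task, the mechanism behind
    # ComputeManager._heal_instance_info_cache.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF([])  # initialize config with defaults

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # One instance's network info cache would be refreshed here.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)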
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.119 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.119 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.120 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.121 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:11 compute-0 ceph-mon[76335]: pgmap v1565: 305 pgs: 305 active+clean; 325 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.988 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.988 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.989 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.989 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.989 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.991 244018 INFO nova.compute.manager [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Terminating instance
Feb 25 12:34:11 compute-0 nova_compute[244014]: 2026-02-25 12:34:11.992 244018 DEBUG nova.compute.manager [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:34:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 351 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Feb 25 12:34:12 compute-0 kernel: tapfcd3999e-62 (unregistering): left promiscuous mode
Feb 25 12:34:12 compute-0 NetworkManager[49836]: <info>  [1772022852.0315] device (tapfcd3999e-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:12 compute-0 ovn_controller[147040]: 2026-02-25T12:34:12Z|00815|binding|INFO|Releasing lport fcd3999e-6238-4484-971c-1fdb438ff4ee from this chassis (sb_readonly=0)
Feb 25 12:34:12 compute-0 ovn_controller[147040]: 2026-02-25T12:34:12Z|00816|binding|INFO|Setting lport fcd3999e-6238-4484-971c-1fdb438ff4ee down in Southbound
Feb 25 12:34:12 compute-0 ovn_controller[147040]: 2026-02-25T12:34:12Z|00817|binding|INFO|Removing iface tapfcd3999e-62 ovn-installed in OVS
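ovn-controller releases the logical port, flips it down in the Southbound DB, and strips the ovn-installed marker from the OVS interface. The resulting binding state can be verified with ovn-sbctl; a sketch (the --db socket path is an assumption — on many compute nodes the Southbound DB is remote, so adjust it for your deployment):

    # Query the Southbound DB for the released logical port's state.
    import subprocess

    LPORT = "fcd3999e-6238-4484-971c-1fdb438ff4ee"
    out = subprocess.check_output(
        ["ovn-sbctl", "--db=unix:/run/ovn/ovnsb_db.sock",
         "--bare", "--columns=up,chassis",
         "find", "Port_Binding", "logical_port=" + LPORT],
        text=True,
    )
    print(out)  # expect "false" and an empty chassis after the release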
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.047 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:8e:f8 10.100.0.14'], port_security=['fa:16:3e:f0:8e:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0f9c0573-54eb-4c29-82f2-f79ac673120f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '6', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcd3999e-6238-4484-971c-1fdb438ff4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.049 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcd3999e-6238-4484-971c-1fdb438ff4ee in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.051 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.052 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[791c88ec-7d5b-4dec-b4b1-738966a78d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.053 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
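With the last VIF gone from the datapath, the agent tears down the ovnmeta- namespace; the privsep replies that follow are the rootwrap-less helper doing that work. A sketch of the final removal step with pyroute2, which neutron invokes behind privsep (requires CAP_SYS_ADMIN; the namespace name is taken from the log):

    # Remove the metadata namespace, tolerating "already gone".
    import errno
    from pyroute2 import netns

    NS = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    try:
        netns.remove(NS)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise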
Feb 25 12:34:12 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Feb 25 12:34:12 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 2.855s CPU time.
Feb 25 12:34:12 compute-0 systemd-machined[210048]: Machine qemu-103-instance-00000053 terminated.
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.091 244018 DEBUG nova.compute.manager [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.091 244018 DEBUG oslo_concurrency.lockutils [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.091 244018 DEBUG oslo_concurrency.lockutils [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.092 244018 DEBUG oslo_concurrency.lockutils [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.092 244018 DEBUG nova.compute.manager [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.092 244018 WARNING nova.compute.manager [req-bdba1df8-e6ce-4a5f-bcff-7b9f2081c542 req-1eacd2ca-9f88-49a8-aa99-895f581ba31d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received unexpected event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with vm_state active and task_state deleting.
Feb 25 12:34:12 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [NOTICE]   (317690) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:12 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [NOTICE]   (317690) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:12 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [WARNING]  (317690) : Exiting Master process...
Feb 25 12:34:12 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [ALERT]    (317690) : Current worker (317692) exited with code 143 (Terminated)
Feb 25 12:34:12 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[317686]: [WARNING]  (317690) : All workers exited. Exiting... (0)
Feb 25 12:34:12 compute-0 systemd[1]: libpod-484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f.scope: Deactivated successfully.
Feb 25 12:34:12 compute-0 podman[317768]: 2026-02-25 12:34:12.176216294 +0000 UTC m=+0.045084695 container died 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9daf8144d2c61332227af26fd59002ada6a4990f87a50d6473671e1be55a74-merged.mount: Deactivated successfully.
Feb 25 12:34:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 podman[317768]: 2026-02-25 12:34:12.621020086 +0000 UTC m=+0.489888477 container cleanup 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.627 244018 INFO nova.virt.libvirt.driver [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Instance destroyed successfully.
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.627 244018 DEBUG nova.objects.instance [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid 0f9c0573-54eb-4c29-82f2-f79ac673120f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:12 compute-0 systemd[1]: libpod-conmon-484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f.scope: Deactivated successfully.
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.650 244018 DEBUG nova.virt.libvirt.vif [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:33:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-588501724',display_name='tempest-ServerDiskConfigTestJSON-server-588501724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-588501724',id=83,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-xb709nop',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:09Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=0f9c0573-54eb-4c29-82f2-f79ac673120f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.651 244018 DEBUG nova.network.os_vif_util [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "address": "fa:16:3e:f0:8e:f8", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd3999e-62", "ovs_interfaceid": "fcd3999e-6238-4484-971c-1fdb438ff4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.652 244018 DEBUG nova.network.os_vif_util [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.653 244018 DEBUG os_vif [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.656 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd3999e-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.663 244018 INFO os_vif [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:8e:f8,bridge_name='br-int',has_traffic_filtering=True,id=fcd3999e-6238-4484-971c-1fdb438ff4ee,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd3999e-62')
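The unplug is committed as the DelPortCommand transaction shown a few lines up. A sketch of the same idempotent delete issued directly through ovsdbapp (the OVSDB socket path is an assumption for a typical host):

    # Delete the tap port from br-int via an ovsdbapp transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # if_exists=True mirrors the log's DelPortCommand and makes the delete
    # a no-op when the port is already gone.
    api.del_port("tapfcd3999e-62", bridge="br-int", if_exists=True).execute(
        check_error=True)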
Feb 25 12:34:12 compute-0 podman[317808]: 2026-02-25 12:34:12.681863215 +0000 UTC m=+0.042470431 container remove 484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.687 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33b5dd42-65d2-4d0e-b051-0c0bd940e819]: (4, ('Wed Feb 25 12:34:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f)\n484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f\nWed Feb 25 12:34:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f)\n484e9b2740b6cba87c8952845edcf2ea71b83a64f5eb5f1997afecab0db43c1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.688 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d367f2-eaba-45b1-b82d-71506c8e2176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.689 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50fa2c95-1bbd-47d6-9354-70d16d70757e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.719 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb022f8-7530-4ce1-aad5-ca0d34fb165a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.721 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fcd074-ecca-49ef-9acd-40bc0d55ffc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.735 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed3b983-5ec3-4d01-90c0-bbba43944062]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481797, 'reachable_time': 32391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317841, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.737 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:12.737 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6a1a8a-0387-463a-8746-546204e739d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.923 244018 INFO nova.virt.libvirt.driver [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deleting instance files /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f_del
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.924 244018 INFO nova.virt.libvirt.driver [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deletion of /var/lib/nova/instances/0f9c0573-54eb-4c29-82f2-f79ac673120f_del complete
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.990 244018 INFO nova.compute.manager [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Took 1.00 seconds to destroy the instance on the hypervisor.
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.991 244018 DEBUG oslo.service.loopingcall [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.991 244018 DEBUG nova.compute.manager [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:34:12 compute-0 nova_compute[244014]: 2026-02-25 12:34:12.992 244018 DEBUG nova.network.neutron [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:34:13 compute-0 ceph-mon[76335]: pgmap v1566: 305 pgs: 305 active+clean; 351 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.772 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.798 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.798 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:34:13 compute-0 nova_compute[244014]: 2026-02-25 12:34:13.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 232 op/s
Feb 25 12:34:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035425214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.488 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
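The resource audit shells out to ceph df (0.578 s above) rather than binding librados. A sketch of the same call with oslo.concurrency's subprocess wrapper, reading the capacity fields the tracker needs:

    # Same command and credentials as the log; parse the JSON stats block.
    import json
    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(stdout)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])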
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.593 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.594 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.601 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.602 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.621 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
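For instance aeaad9e2-..., _clean_shutdown found the guest still running (power state 1) after 10 seconds, so it resends the ACPI shutdown request. A schematic sketch of that poll-and-retry loop with libvirt-python (the domain name and the 60-second budget are illustrative; only the 10-second retry interval matches the log):

    # Poll-and-retry clean shutdown: ask the guest to power off via ACPI,
    # check the domain state, and resend while it is still running.
    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000055")  # hypothetical domain name
    deadline = time.monotonic() + 60              # overall shutdown budget
    while time.monotonic() < deadline:
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            break
        dom.shutdown()  # (re)send the ACPI shutdown request
        time.sleep(10)  # matches the 10 s retry interval in the log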
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.676 244018 DEBUG nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.677 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.677 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.678 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
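Annotation: the Acquiring / "acquired :: waited" / "released :: held" triple is oslo.concurrency tracing its own @synchronized wrapper (the inner() frames at lockutils.py:404/409/423). A minimal sketch of both spellings, with a hypothetical lock name in the decorator form:

    from oslo_concurrency import lockutils

    # The decorator form is what writes the DEBUG triple traced above.
    @lockutils.synchronized('demo-events')   # hypothetical lock name
    def pop_event():
        pass

    # lockutils.lock() is the equivalent context-manager form.
    def update_usage():
        with lockutils.lock('compute_resources'):   # name from the log
            pass   # critical section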
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.678 244018 DEBUG nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.678 244018 DEBUG nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-unplugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.678 244018 DEBUG nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.678 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.679 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.679 244018 DEBUG oslo_concurrency.lockutils [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.679 244018 DEBUG nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] No waiting events found dispatching network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.680 244018 WARNING nova.compute.manager [req-bb4608e5-660e-4748-8292-5dc1bbc8d6fd req-af76875a-75ad-41fc-9184-5f652184614c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received unexpected event network-vif-plugged-fcd3999e-6238-4484-971c-1fdb438ff4ee for instance with vm_state active and task_state deleting.
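Annotation: the "No waiting events found" / "Received unexpected event" pair records Nova's external-event dispatch: Neutron reports a VIF change, and if no in-flight task registered a waiter for that (instance, event) pair, the event is only logged. A toy model of such a registry (not Nova's implementation):

    # Toy waiting-event registry: tasks register a waiter per
    # (instance, event); an arriving event pops its waiter, or is
    # handled as "unexpected" when nobody was waiting.
    import threading

    class EventRegistry:
        def __init__(self):
            self._waiters = {}              # (instance, event) -> Event
            self._lock = threading.Lock()   # the "-events" lock in the log

        def prepare(self, instance, event):
            waiter = threading.Event()
            with self._lock:
                self._waiters[(instance, event)] = waiter
            return waiter

        def pop(self, instance, event):
            with self._lock:
                return self._waiters.pop((instance, event), None)

    registry = EventRegistry()
    if registry.pop('0f9c0573', 'network-vif-plugged') is None:
        print('No waiting events found dispatching network-vif-plugged')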
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.681 244018 DEBUG nova.network.neutron [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.763 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.875962880440056GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
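Annotation: the pci_devices payload in the resource view is ordinary JSON, one object per PCI function the hypervisor exposes (vendor 1af4 is virtio, 8086 is Intel). A small sketch to tabulate it; the two entries below are copied verbatim from the line above:

    import json

    blob = '''[
     {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
      "product_id": "1237", "vendor_id": "8086", "numa_node": null,
      "label": "label_8086_1237", "dev_type": "type-PCI"},
     {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0",
      "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
      "label": "label_1af4_1000", "dev_type": "type-PCI"}
    ]'''
    for dev in json.loads(blob):
        # address plus vendor:product, the usual lspci-style summary
        print(dev['address'], '%s:%s' % (dev['vendor_id'], dev['product_id']))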
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.765 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.765 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.771 244018 INFO nova.compute.manager [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Took 1.78 seconds to deallocate network for instance.
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.846 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.866 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.867 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0f9c0573-54eb-4c29-82f2-f79ac673120f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.867 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.868 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.868 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:14 compute-0 nova_compute[244014]: 2026-02-25 12:34:14.974 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.137 244018 DEBUG nova.compute.manager [req-5d6ef372-b31d-4a36-ad12-8747918e33e9 req-127addd4-ffc1-498d-924c-e5c4e5ec416d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Received event network-vif-deleted-fcd3999e-6238-4484-971c-1fdb438ff4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:15 compute-0 ceph-mon[76335]: pgmap v1567: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 232 op/s
Feb 25 12:34:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4035425214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174410860' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.496 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.500 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.539 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
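Annotation: the inventory dict in this line is what Placement uses to size the provider; the consumable capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the numbers above:

    # Consumable capacity Placement derives from the logged inventory:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2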
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.566 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.566 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.566 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.691 244018 DEBUG oslo_concurrency.processutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.893 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.894 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.912 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:34:15 compute-0 nova_compute[244014]: 2026-02-25 12:34:15.967 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb 25 12:34:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1174410860' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507793681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.210 244018 DEBUG oslo_concurrency.processutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.217 244018 DEBUG nova.compute.provider_tree [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.236 244018 DEBUG nova.scheduler.client.report [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.256 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.258 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.265 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.266 244018 INFO nova.compute.claims [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.279 244018 INFO nova.scheduler.client.report [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Deleted allocations for instance 0f9c0573-54eb-4c29-82f2-f79ac673120f
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.345 244018 DEBUG oslo_concurrency.lockutils [None req-26e4ce9e-4c2c-41f2-9b7a-fe7a4b9df252 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "0f9c0573-54eb-4c29-82f2-f79ac673120f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.427 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.513 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.513 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.513 244018 INFO nova.compute.manager [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Shelving
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.535 244018 DEBUG nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.566 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.566 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:16 compute-0 kernel: tape7482fdc-97 (unregistering): left promiscuous mode
Feb 25 12:34:16 compute-0 NetworkManager[49836]: <info>  [1772022856.8864] device (tape7482fdc-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:16 compute-0 ovn_controller[147040]: 2026-02-25T12:34:16Z|00818|binding|INFO|Releasing lport e7482fdc-9792-451e-adf6-a816dd7113c8 from this chassis (sb_readonly=0)
Feb 25 12:34:16 compute-0 ovn_controller[147040]: 2026-02-25T12:34:16Z|00819|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 down in Southbound
Feb 25 12:34:16 compute-0 ovn_controller[147040]: 2026-02-25T12:34:16Z|00820|binding|INFO|Removing iface tape7482fdc-97 ovn-installed in OVS
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:16.899 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:aa:11 10.100.0.12'], port_security=['fa:16:3e:56:aa:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:16.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e7482fdc-9792-451e-adf6-a816dd7113c8 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis
Feb 25 12:34:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:16.904 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.905 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:16.905 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd9201c-5632-4c1c-bc29-e6ea163c9dff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:16 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Deactivated successfully.
Feb 25 12:34:16 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Consumed 12.625s CPU time.
Feb 25 12:34:16 compute-0 systemd-machined[210048]: Machine qemu-102-instance-00000054 terminated.
Feb 25 12:34:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883354114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.984 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:16 compute-0 nova_compute[244014]: 2026-02-25 12:34:16.989 244018 DEBUG nova.compute.provider_tree [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.007 244018 DEBUG nova.scheduler.client.report [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.033 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.034 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.088 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.089 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.109 244018 INFO nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.133 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:34:17 compute-0 ceph-mon[76335]: pgmap v1568: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb 25 12:34:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2507793681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/883354114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.258 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.260 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.260 244018 INFO nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating image(s)
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.286 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.312 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.333 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.338 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
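Annotation: the qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, which caps the child at 1 GiB of address space and 30 s of CPU so a corrupt or hostile image cannot wedge the host. A sketch of the same call plus parsing of its JSON output, reusing the path from the log:

    # Guarded image probe: prlimit re-execs qemu-img under RLIMIT_AS
    # and RLIMIT_CPU caps; --output=json gives the standard
    # "format" / "virtual-size" keys parsed here.
    import json

    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    out, _err = processutils.execute(
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=%d' % (1 << 30), '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', BASE, '--force-share', '--output=json')
    info = json.loads(out)
    print(info['format'], info['virtual-size'])

oslo.concurrency can also build this wrapper itself via execute(..., prlimit=processutils.ProcessLimits(address_space=..., cpu_time=...)), which is presumably how the command line above was assembled.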
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.373 244018 DEBUG nova.policy [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8636d59ca0d49698907e2edb5dc4967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e80b416cdd774e9483545c9e08abf805', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
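Annotation: this policy failure is expected for a non-admin token; network:attach_external_network is an admin-only rule by default, so the request simply proceeds without external-network attach rights. A hedged oslo.policy sketch of such a check (the 'is_admin:True' rule string is an illustrative admin-only default, an assumption rather than Nova's exact rule):

    # Hedged oslo.policy sketch: register an admin-only default and
    # evaluate it against credentials like those in the log
    # (roles reader/member, is_admin False) -> enforce() returns False.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))
    creds = {'is_admin': False, 'roles': ['reader', 'member'],
             'project_id': 'e80b416cdd774e9483545c9e08abf805'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))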
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.377 244018 DEBUG nova.compute.manager [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.377 244018 DEBUG oslo_concurrency.lockutils [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.378 244018 DEBUG oslo_concurrency.lockutils [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.379 244018 DEBUG oslo_concurrency.lockutils [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.379 244018 DEBUG nova.compute.manager [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.380 244018 WARNING nova.compute.manager [req-b8c965d7-1367-4296-8d58-c0cc161ba7aa req-47576bc2-50bd-4624-88a8-916c5766be89 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:17.418 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.419 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.419 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:17.420 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.420 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.421 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.444 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.447 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.635 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance shutdown successfully after 13 seconds.
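Annotation: "Instance in state 1 after 10 seconds - resending shutdown" at 12:34:14 and "shutdown successfully after 13 seconds" here bracket a cooperative shutdown; state 1 is libvirt's VIR_DOMAIN_RUNNING. A toy version of that poll-and-resend loop with libvirt-python (not Nova's driver):

    # Toy clean-shutdown loop: request ACPI shutdown, poll the domain
    # state (1 == VIR_DOMAIN_RUNNING, as in the log), and resend the
    # request every retry_interval seconds until shutoff or timeout.
    import time

    import libvirt

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()                        # first ACPI request
        for waited in range(1, timeout + 1):
            state, _reason = dom.state()
            if state == libvirt.VIR_DOMAIN_SHUTOFF:
                return True                   # guest powered off
            if waited % retry_interval == 0:
                dom.shutdown()                # still running: resend
            time.sleep(1)
        return False                          # caller falls back to destroy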
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.642 244018 INFO nova.virt.libvirt.driver [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance destroyed successfully.
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.642 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.663 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Attempting rescue
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.664 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
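Annotation: in the disk_info mapping above, the rescue image takes over the boot slot while the original disk stays attached, which is the point of rescue: boot a fresh OS with the broken root still reachable. Reduced to device order:

    # The mapping above, boiled down: rescue image boots as vda
    # (boot_index 1), the instance's own disk is re-attached as vdb,
    # and the rescue config-drive rides on sda (sata cdrom).
    mapping = {
        'disk.rescue':        {'dev': 'vda', 'boot_index': '1'},
        'disk':               {'dev': 'vdb'},
        'disk.config.rescue': {'dev': 'sda'},
    }
    for name, info in sorted(mapping.items(), key=lambda kv: kv[1]['dev']):
        print(info['dev'], '<-', name)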
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.667 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.668 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Creating image(s)
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.686 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.690 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.706 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.750 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.768 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.771 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.820 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] resizing rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
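Annotation: the import at 12:34:17.447 and the resize here upload the cached base image into the vms pool and grow it to the instance's 1 GiB root size. A sketch of the resize step with the librbd Python bindings rather than the CLI; pool, client id, image name and size are taken from the log lines:

    # Resize an RBD image via librbd, mirroring the resize logged above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           'ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk') as image:
                image.resize(1073741824)   # 1 GiB root disk
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()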
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.842 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.843 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.843 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.844 244018 DEBUG oslo_concurrency.lockutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.859 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.861 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.919 244018 DEBUG nova.objects.instance [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'migration_context' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.938 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.938 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Ensure instance console log exists: /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.939 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.939 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:17 compute-0 nova_compute[244014]: 2026-02-25 12:34:17.939 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 327 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 229 op/s
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.096 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.097 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.116 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.117 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start _get_guest_xml network_info=[{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:56:aa:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.117 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.133 244018 WARNING nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.139 244018 DEBUG nova.virt.libvirt.host [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.140 244018 DEBUG nova.virt.libvirt.host [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.145 244018 DEBUG nova.virt.libvirt.host [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.145 244018 DEBUG nova.virt.libvirt.host [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
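[annotation] The two probes above first look for a cgroup v1 cpu controller (missing on this host) and then fall back to cgroup v2, where the controller is found. On a unified (v2) hierarchy the equivalent check is a one-line read of the kernel interface file; a sketch, assuming the usual mount point at /sys/fs/cgroup:

# cgroup v2 exposes the enabled controllers as a space-separated list.
def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
    with open(path) as f:
        return 'cpu' in f.read().split()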
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.146 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.146 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.147 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.147 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.148 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.148 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.148 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.149 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.149 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.150 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.150 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.150 244018 DEBUG nova.virt.hardware [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
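[annotation] With no flavor or image constraints (all limits and preferences above log as 0, i.e. unset), Nova enumerates every sockets x cores x threads factorization of the vCPU count under the 65536 caps and sorts by preference; for one vCPU the only candidate is 1:1:1, as logged. A simplified sketch of that enumeration (not Nova's actual code, which also honors preferred values and NUMA constraints):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product is vcpus."""
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log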
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.151 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.175 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:18 compute-0 ceph-mon[76335]: pgmap v1569: 305 pgs: 305 active+clean; 327 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 229 op/s
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.739 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Successfully created port: 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:34:18 compute-0 kernel: tapd6f9abb7-ac (unregistering): left promiscuous mode
Feb 25 12:34:18 compute-0 NetworkManager[49836]: <info>  [1772022858.7792] device (tapd6f9abb7-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:18 compute-0 ovn_controller[147040]: 2026-02-25T12:34:18Z|00821|binding|INFO|Releasing lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 from this chassis (sb_readonly=0)
Feb 25 12:34:18 compute-0 ovn_controller[147040]: 2026-02-25T12:34:18Z|00822|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 down in Southbound
Feb 25 12:34:18 compute-0 ovn_controller[147040]: 2026-02-25T12:34:18Z|00823|binding|INFO|Removing iface tapd6f9abb7-ac ovn-installed in OVS
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561064673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:18.796 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:18.799 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis
Feb 25 12:34:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:18.801 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0cf2281-bf49-498f-8de5-70cdba33cd62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:18.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66310c63-6f22-40c1-b194-fc80d72f037a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:18.803 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace which is not needed anymore
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.815 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
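[annotation] Nova shells out to `ceph mon dump --format=json` (again via processutils) to discover monitor addresses before building the RBD disk XML that appears further down. A hedged sketch of parsing that output; the exact JSON layout is an assumption based on common Ceph releases, where each entry in "mons" carries an "addr" of the form "192.168.122.100:6789/0":

import json
from oslo_concurrency import processutils

out, _ = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
# Assumption: "addr" is host:port/nonce; strip the nonce to get the
# host:port pairs used for the libvirt <host name=... port=.../> elements.
addrs = [m['addr'].rsplit('/', 1)[0] for m in mon_map['mons']]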
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.816 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:18 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 25 12:34:18 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d0000004f.scope: Consumed 15.273s CPU time.
Feb 25 12:34:18 compute-0 systemd-machined[210048]: Machine qemu-97-instance-0000004f terminated.
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:18 compute-0 nova_compute[244014]: 2026-02-25 12:34:18.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:18 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [NOTICE]   (313512) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:18 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [NOTICE]   (313512) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:18 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [WARNING]  (313512) : Exiting Master process...
Feb 25 12:34:18 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [ALERT]    (313512) : Current worker (313546) exited with code 143 (Terminated)
Feb 25 12:34:18 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[313490]: [WARNING]  (313512) : All workers exited. Exiting... (0)
Feb 25 12:34:18 compute-0 systemd[1]: libpod-69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191.scope: Deactivated successfully.
Feb 25 12:34:18 compute-0 podman[318256]: 2026-02-25 12:34:18.919801583 +0000 UTC m=+0.044697434 container died 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6294ed470bb7fef415eacb48a4705631d3f8b94ba50302f69a55420830910089-merged.mount: Deactivated successfully.
Feb 25 12:34:18 compute-0 podman[318256]: 2026-02-25 12:34:18.976222197 +0000 UTC m=+0.101118018 container cleanup 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:34:18 compute-0 systemd[1]: libpod-conmon-69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191.scope: Deactivated successfully.
Feb 25 12:34:19 compute-0 podman[318306]: 2026-02-25 12:34:19.02903343 +0000 UTC m=+0.039093266 container remove 69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9284eb3-2bd3-4429-be37-fc38c82b0cac]: (4, ('Wed Feb 25 12:34:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191)\n69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191\nWed Feb 25 12:34:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191)\n69873d09f8dcf573e93d62281929afd41a796019b9339d4cd0fd6efeff022191\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.032 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[300e4b01-8a3d-44b3-b9ec-0231582315a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:19 compute-0 kernel: tapa0cf2281-b0: left promiscuous mode
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.046 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97d96987-757a-41b3-9738-542f3dee8578]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.057 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e71996-2332-49fa-a1d5-e194a54663e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.058 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1631b105-2ee8-47a5-b599-7094070b4148]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.070 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc8d662-2161-455c-9ddc-9dcf7c80a71f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473323, 'reachable_time': 30744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318338, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.072 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:19 compute-0 systemd[1]: run-netns-ovnmeta\x2da0cf2281\x2dbf49\x2d498f\x2d8de5\x2d70cdba33cd62.mount: Deactivated successfully.
Feb 25 12:34:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:19.072 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[556efb02-6fb6-495f-90cd-ffe6df22fdf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
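[annotation] The teardown above runs through neutron's privsep daemon: the agent asks the privileged helper to dump the namespace's remaining links (the long netlink reply), then remove_netns deletes ovnmeta-a0cf2281-... once no VIF ports remain. The underlying operation is pyroute2's namespace removal; a minimal sketch, with the namespace name taken from the log and the privilege handling omitted:

# Sketch of what neutron.privileged.agent.linux.ip_lib.remove_netns
# ultimately performs via pyroute2 (needs CAP_SYS_ADMIN to succeed).
from pyroute2 import netns

name = 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62'
if name in netns.listnetns():
    netns.remove(name)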
Feb 25 12:34:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1561064673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908359529' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.385 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.387 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.554 244018 INFO nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance shutdown successfully after 3 seconds.
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.560 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance destroyed successfully.
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.560 244018 DEBUG nova.objects.instance [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'numa_topology' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.661 244018 DEBUG nova.compute.manager [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.662 244018 DEBUG oslo_concurrency.lockutils [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.662 244018 DEBUG oslo_concurrency.lockutils [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.663 244018 DEBUG oslo_concurrency.lockutils [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.663 244018 DEBUG nova.compute.manager [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.664 244018 WARNING nova.compute.manager [req-a154bc5a-2bf2-4f81-a29d-8c511a85aa58 req-1b8e6f29-5a4c-43c1-9894-d1852d646e90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.818 244018 INFO nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Beginning cold snapshot process
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026768230' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.960 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.962 244018 DEBUG nova.virt.libvirt.vif [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-842091875',display_name='tempest-ServerRescueTestJSON-server-842091875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-842091875',id=84,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:33:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-aun4358p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:33:58Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=aeaad9e2-4ad0-46fb-b619-7ca2c78443a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:56:aa:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.962 244018 DEBUG nova.network.os_vif_util [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:56:aa:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.963 244018 DEBUG nova.network.os_vif_util [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:19 compute-0 nova_compute[244014]: 2026-02-25 12:34:19.964 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 327 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.074 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <uuid>aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</uuid>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <name>instance-00000054</name>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSON-server-842091875</nova:name>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:18</nova:creationTime>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:user uuid="cb2c815ef3214a7b897f911b4f53a146">tempest-ServerRescueTestJSON-930018924-project-member</nova:user>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:project uuid="367d43ab207546c3900a8414f0713ef4">tempest-ServerRescueTestJSON-930018924</nova:project>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <nova:port uuid="e7482fdc-9792-451e-adf6-a816dd7113c8">
Feb 25 12:34:20 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="serial">aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="uuid">aeaad9e2-4ad0-46fb-b619-7ca2c78443a8</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.rescue">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <target dev="vdb" bus="virtio"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:56:aa:11"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <target dev="tape7482fdc-97"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/console.log" append="off"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:20 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:20 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:20 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:20 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:20 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
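[annotation] Once _get_guest_xml returns the domain definition above, the driver hands it to libvirt to define and launch the rescue domain. A minimal sketch of that handoff using the libvirt Python bindings; the connection URI and the define-then-create sequence are illustrative of the usual libvirt flow, not lifted from Nova's code:

import libvirt

xml = "<domain type='kvm'>...</domain>"  # the XML emitted above

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)  # make the domain persistent
    dom.create()               # boot it (equivalent to `virsh start`)
finally:
    conn.close()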
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.082 244018 INFO nova.virt.libvirt.driver [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance destroyed successfully.
Feb 25 12:34:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2908359529' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3026768230' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:20 compute-0 ceph-mon[76335]: pgmap v1570: 305 pgs: 305 active+clean; 327 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.205 244018 DEBUG nova.virt.libvirt.imagebackend [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.221 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.222 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.222 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.222 244018 DEBUG nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No VIF found with MAC fa:16:3e:56:aa:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.222 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Using config drive
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.238 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.261 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.330 244018 DEBUG nova.objects.instance [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'keypairs' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.428 244018 DEBUG nova.storage.rbd_utils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] creating snapshot(d1972e2bf12c4e70ad7fa4fe27e94af1) on rbd image(19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.613 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Successfully updated port: 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.631 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.631 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquired lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.631 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.734 244018 DEBUG nova.compute.manager [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-changed-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.735 244018 DEBUG nova.compute.manager [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Refreshing instance network info cache due to event network-changed-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.735 244018 DEBUG oslo_concurrency.lockutils [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.860 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Creating config drive at /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.863 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptshvtrp3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:20 compute-0 nova_compute[244014]: 2026-02-25 12:34:20.893 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.001 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmptshvtrp3" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.034 244018 DEBUG nova.storage.rbd_utils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.038 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.198 244018 DEBUG oslo_concurrency.processutils [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.200 244018 INFO nova.virt.libvirt.driver [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deleting local config drive /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/disk.config.rescue because it was imported into RBD.
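The two commands above are the complete rescue config-drive flow: mkisofs packs the staged metadata directory into an ISO 9660 image labelled config-2, "rbd import" copies that image into the vms pool, and the local file is deleted once the RBD copy exists. A minimal sketch of the same sequence, assuming mkisofs and the rbd CLI are on PATH and reusing the paths and names from the log (the -publisher and -quiet flags are trimmed for brevity):

    # Build a config-drive ISO and import it into RBD, mirroring the
    # mkisofs + "rbd import" commands logged above.
    import os
    import subprocess

    iso = ("/var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8/"
           "disk.config.rescue")
    staging = "/tmp/tmptshvtrp3"  # temp dir holding the metadata files

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", staging],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_disk.config.rescue",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    os.unlink(iso)  # the RBD copy is now the authoritative one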
Feb 25 12:34:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb 25 12:34:21 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.253 244018 DEBUG nova.storage.rbd_utils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] cloning vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk@d1972e2bf12c4e70ad7fa4fe27e94af1 to images/fc0b665d-d9e3-4786-aa12-40ae928c6e08 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:34:21 compute-0 NetworkManager[49836]: <info>  [1772022861.2562] manager: (tape7482fdc-97): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Feb 25 12:34:21 compute-0 kernel: tape7482fdc-97: entered promiscuous mode
Feb 25 12:34:21 compute-0 ovn_controller[147040]: 2026-02-25T12:34:21Z|00824|binding|INFO|Claiming lport e7482fdc-9792-451e-adf6-a816dd7113c8 for this chassis.
Feb 25 12:34:21 compute-0 ovn_controller[147040]: 2026-02-25T12:34:21Z|00825|binding|INFO|e7482fdc-9792-451e-adf6-a816dd7113c8: Claiming fa:16:3e:56:aa:11 10.100.0.12
Feb 25 12:34:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:21.267 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:aa:11 10.100.0.12'], port_security=['fa:16:3e:56:aa:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:21.269 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e7482fdc-9792-451e-adf6-a816dd7113c8 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 bound to our chassis
Feb 25 12:34:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:21.270 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:34:21 compute-0 ovn_controller[147040]: 2026-02-25T12:34:21Z|00826|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 ovn-installed in OVS
Feb 25 12:34:21 compute-0 ovn_controller[147040]: 2026-02-25T12:34:21Z|00827|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 up in Southbound
Feb 25 12:34:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:21.272 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4efeb7d4-29bf-4cab-96e1-073e185e4f2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:21 compute-0 systemd-machined[210048]: New machine qemu-104-instance-00000054.
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:21 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000054.
Feb 25 12:34:21 compute-0 systemd-udevd[318520]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:21 compute-0 NetworkManager[49836]: <info>  [1772022861.3345] device (tape7482fdc-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:21 compute-0 NetworkManager[49836]: <info>  [1772022861.3358] device (tape7482fdc-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
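At this point ovn-controller has claimed the logical port for this chassis, set ovn-installed in OVS, and marked the port up in the Southbound DB, while the tap device came up on the host. One hedged way to verify a binding like this by hand, assuming ovn-sbctl is installed on the node and can reach the SB DB:

    # Query the OVN Southbound DB for the port binding seen in the log.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8"],
        capture_output=True, text=True, check=True)
    # Expect a non-empty chassis UUID and up=true once the plug completes.
    print(out.stdout)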
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.392 244018 DEBUG nova.storage.rbd_utils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] flattening images/fc0b665d-d9e3-4786-aa12-40ae928c6e08 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.834 244018 DEBUG nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.834 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.834 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.834 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.835 244018 DEBUG nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.835 244018 WARNING nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.835 244018 DEBUG nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.835 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.835 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.836 244018 DEBUG oslo_concurrency.lockutils [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.836 244018 DEBUG nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.836 244018 WARNING nova.compute.manager [req-c9485b81-0876-4fbe-9e2f-3c28975605d6 req-611d89dc-4186-4b26-a426-726b9da60a8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.849 244018 DEBUG nova.storage.rbd_utils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] removing snapshot(d1972e2bf12c4e70ad7fa4fe27e94af1) on rbd image(19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.882 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.883 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022861.8824766, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.883 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Resumed (Lifecycle Event)
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.890 244018 DEBUG nova.compute.manager [None req-0633821d-d139-4585-ab79-33e5ffbbaed4 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.918 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.922 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.949 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022861.8831975, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.949 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Started (Lifecycle Event)
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.969 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:21 compute-0 nova_compute[244014]: 2026-02-25 12:34:21.974 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 396 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 MiB/s wr, 206 op/s
Feb 25 12:34:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb 25 12:34:22 compute-0 ceph-mon[76335]: osdmap e236: 3 total, 3 up, 3 in
Feb 25 12:34:22 compute-0 ceph-mon[76335]: pgmap v1572: 305 pgs: 305 active+clean; 396 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 MiB/s wr, 206 op/s
Feb 25 12:34:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb 25 12:34:22 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.267 244018 DEBUG nova.storage.rbd_utils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] creating snapshot(snap) on rbd image(fc0b665d-d9e3-4786-aa12-40ae928c6e08) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
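The create_snap/clone/flatten/remove_snap sequence threaded through the lines above (req-f5d07bbb) is nova's RBD direct-snapshot path: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it no longer depends on its parent, drop the temporary snapshot, and finally snapshot the result as "snap" for Glance. A rough sketch of the same sequence using the python-rados/python-rbd bindings, reusing the pool and image names from the log (protect/unprotect shown for the classic clone-v1 flow; clusters with clone v2 can skip them; error handling omitted):

    # RBD direct-snapshot flow: snap -> clone -> flatten -> cleanup.
    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        vms = cluster.open_ioctx("vms")
        images = cluster.open_ioctx("images")

        src_name = "19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk"
        tmp_snap = "d1972e2bf12c4e70ad7fa4fe27e94af1"
        dst_name = "fc0b665d-d9e3-4786-aa12-40ae928c6e08"

        with rbd.Image(vms, src_name) as src:
            src.create_snap(tmp_snap)
            src.protect_snap(tmp_snap)   # required for clone v1

        rbd.RBD().clone(vms, src_name, tmp_snap, images, dst_name)

        with rbd.Image(images, dst_name) as dst:
            dst.flatten()                # break the dependency on the parent
            dst.create_snap("snap")      # the snapshot Glance expects

        with rbd.Image(vms, src_name) as src:
            src.unprotect_snap(tmp_snap)
            src.remove_snap(tmp_snap)

        vms.close()
        images.close()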
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.303 244018 DEBUG nova.network.neutron [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Updating instance_info_cache with network_info: [{"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.331 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Releasing lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.332 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance network_info: |[{"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.333 244018 DEBUG oslo_concurrency.lockutils [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.333 244018 DEBUG nova.network.neutron [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Refreshing network info cache for port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.339 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start _get_guest_xml network_info=[{"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.346 244018 WARNING nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.351 244018 DEBUG nova.virt.libvirt.host [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.352 244018 DEBUG nova.virt.libvirt.host [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.369 244018 DEBUG nova.virt.libvirt.host [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.370 244018 DEBUG nova.virt.libvirt.host [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.370 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.371 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.371 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.372 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.372 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.372 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.372 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.373 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.373 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.374 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.374 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.374 244018 DEBUG nova.virt.hardware [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
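With no flavor or image constraints (limits 65536:65536:65536, preferences 0:0:0) and a single vCPU, the topology walk above collapses to one candidate, 1 socket x 1 core x 1 thread. A simplified sketch of the enumeration nova.virt.hardware is logging here, under the assumption that no NUMA or pinning constraints apply:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, within per-dimension limits -- a simplified version of
    # the _get_possible_cpu_topologies walk shown in the log.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], matching the log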
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.378 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197756852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.943 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
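nova shells out to "ceph mon dump --format=json" here to learn the monitor addresses it will embed as <host> elements in the guest's RBD disk definitions (visible in the generated domain XML further below). A hedged sketch of that lookup, assuming the same CLI and client credentials as the log; the exact address keys in the JSON vary slightly across Ceph releases:

    # Fetch monitor addresses the way the log shows nova doing it.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)

    dump = json.loads(out.stdout)
    for mon in dump["mons"]:
        # e.g. "compute-0 192.168.122.100:6789/0" on this deployment
        print(mon["name"], mon.get("addr") or mon.get("public_addr"))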
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.960 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:22 compute-0 nova_compute[244014]: 2026-02-25 12:34:22.964 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Feb 25 12:34:23 compute-0 ceph-mon[76335]: osdmap e237: 3 total, 3 up, 3 in
Feb 25 12:34:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2197756852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Feb 25 12:34:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Feb 25 12:34:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378961753' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.488 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.490 244018 DEBUG nova.virt.libvirt.vif [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:17Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.490 244018 DEBUG nova.network.os_vif_util [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.491 244018 DEBUG nova.network.os_vif_util [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.492 244018 DEBUG nova.objects.instance [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.511 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <uuid>ef34c4d4-57e0-4af1-af7a-b8ef35d09862</uuid>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <name>instance-00000055</name>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1237181777</nova:name>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:22</nova:creationTime>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <nova:port uuid="9064f9e6-dd74-4a19-9efb-6abc4c4e27c5">
Feb 25 12:34:23 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="serial">ef34c4d4-57e0-4af1-af7a-b8ef35d09862</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="uuid">ef34c4d4-57e0-4af1-af7a-b8ef35d09862</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk">
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config">
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:46:18:49"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <target dev="tap9064f9e6-dd"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/console.log" append="off"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:23 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:23 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.513 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Preparing to wait for external event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.514 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.514 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.514 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.514 244018 DEBUG nova.virt.libvirt.vif [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:17Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.515 244018 DEBUG nova.network.os_vif_util [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.515 244018 DEBUG nova.network.os_vif_util [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.516 244018 DEBUG os_vif [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.516 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.519 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9064f9e6-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.519 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9064f9e6-dd, col_values=(('external_ids', {'iface-id': '9064f9e6-dd74-4a19-9efb-6abc4c4e27c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:18:49', 'vm-uuid': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:23 compute-0 NetworkManager[49836]: <info>  [1772022863.5225] manager: (tap9064f9e6-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.530 244018 INFO os_vif [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd')
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.584 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.585 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.585 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:46:18:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.586 244018 INFO nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Using config drive
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.606 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.950 244018 DEBUG nova.network.neutron [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Updated VIF entry in instance network info cache for port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.951 244018 DEBUG nova.network.neutron [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Updating instance_info_cache with network_info: [{"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:23 compute-0 nova_compute[244014]: 2026-02-25 12:34:23.967 244018 DEBUG oslo_concurrency.lockutils [req-29f4a24b-40f0-46f1-9d67-f771536a5bae req-37273c1d-f106-43c7-b17b-7adec2159723 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ef34c4d4-57e0-4af1-af7a-b8ef35d09862" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 444 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 217 op/s
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.021 244018 INFO nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating config drive at /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.029 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8yt5erl7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.167 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8yt5erl7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.193 244018 DEBUG nova.storage.rbd_utils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.198 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:24 compute-0 ceph-mon[76335]: osdmap e238: 3 total, 3 up, 3 in
Feb 25 12:34:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/378961753' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:24 compute-0 ceph-mon[76335]: pgmap v1575: 305 pgs: 305 active+clean; 444 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 217 op/s
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.341 244018 DEBUG oslo_concurrency.processutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.342 244018 INFO nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deleting local config drive /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config because it was imported into RBD.
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.3744] manager: (tap9064f9e6-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Feb 25 12:34:24 compute-0 kernel: tap9064f9e6-dd: entered promiscuous mode
Feb 25 12:34:24 compute-0 ovn_controller[147040]: 2026-02-25T12:34:24Z|00828|binding|INFO|Claiming lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for this chassis.
Feb 25 12:34:24 compute-0 ovn_controller[147040]: 2026-02-25T12:34:24Z|00829|binding|INFO|9064f9e6-dd74-4a19-9efb-6abc4c4e27c5: Claiming fa:16:3e:46:18:49 10.100.0.3
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 ovn_controller[147040]: 2026-02-25T12:34:24Z|00830|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 ovn-installed in OVS
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 ovn_controller[147040]: 2026-02-25T12:34:24Z|00831|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 up in Southbound
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.391 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:18:49 10.100.0.3'], port_security=['fa:16:3e:46:18:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.394 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.395 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:24 compute-0 systemd-udevd[318790]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:24 compute-0 systemd-machined[210048]: New machine qemu-105-instance-00000055.
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4da00786-60eb-4532-8dc3-fe37c9c8adfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.405 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.4074] device (tap9064f9e6-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.4080] device (tap9064f9e6-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.409 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.409 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e69cd332-bc07-4133-af79-d97c9287b64b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.410 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67d6a5d2-1270-425c-b872-3b3c16e1af26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000055.
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.420 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[baf7c26f-e7c8-4326-bf15-99a63bf51ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.431 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8986bbf8-1376-4a6f-b35a-f22bac2c6e70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[99f1a940-6212-4590-8baf-6c587bfef3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 systemd-udevd[318793]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26533393-a7f6-472b-aa1b-d42929831c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.4531] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[22872a86-df1a-49e8-91bb-ac81b935e5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.474 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[13cff917-5a21-492d-a9e1-494d1e68d438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.4888] device (tap9d1639de-d0): carrier: link connected
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.491 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a97d4e11-2357-4180-97ee-4c8582e4f39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.503 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15c9f8f4-733e-4eb3-b1f6-3ac511cee88c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483389, 'reachable_time': 25337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318826, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.512 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36e2f77a-552b-480e-b4b3-5f5ac375b549]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483389, 'tstamp': 483389}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318827, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d366d77f-be57-4697-98d9-c3407d75c3f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483389, 'reachable_time': 25337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318828, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.538 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b10cc45-e6e1-4617-9097-5c2939da4fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.570 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9a15ed-7642-4a41-8ba3-c817bfb77708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 NetworkManager[49836]: <info>  [1772022864.5749] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Feb 25 12:34:24 compute-0 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.580 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:24 compute-0 ovn_controller[147040]: 2026-02-25T12:34:24Z|00832|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.590 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[17eefe5b-d9ca-4c75-9e44-4b0847af331c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.591 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:34:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:24.593 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.611 244018 INFO nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Snapshot image upload complete
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.611 244018 DEBUG nova.compute.manager [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.671 244018 INFO nova.compute.manager [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Shelve offloading
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.679 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance destroyed successfully.
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.679 244018 DEBUG nova.compute.manager [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.683 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.684 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.684 244018 DEBUG nova.network.neutron [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.789 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022864.7877698, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.790 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Started (Lifecycle Event)
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.812 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.815 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022864.7898355, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.816 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Paused (Lifecycle Event)
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.836 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.839 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.860 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:34:24 compute-0 nova_compute[244014]: 2026-02-25 12:34:24.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:24 compute-0 podman[318906]: 2026-02-25 12:34:24.893962736 +0000 UTC m=+0.042727698 container create c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:34:24 compute-0 systemd[1]: Started libpod-conmon-c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86.scope.
Feb 25 12:34:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b22cfb4da164b32e6ac6fc2c87415aed62df482066031284028ba2a091612c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:24 compute-0 podman[318906]: 2026-02-25 12:34:24.869519255 +0000 UTC m=+0.018284237 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:34:24 compute-0 podman[318906]: 2026-02-25 12:34:24.968501523 +0000 UTC m=+0.117266515 container init c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:34:24 compute-0 podman[318906]: 2026-02-25 12:34:24.972113095 +0000 UTC m=+0.120878057 container start c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:34:24 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [NOTICE]   (318923) : New worker (318925) forked
Feb 25 12:34:24 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [NOTICE]   (318923) : Loading success.
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.244 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.245 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.261 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.328 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.329 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.338 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.340 244018 INFO nova.compute.claims [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.504 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.878 244018 DEBUG nova.network.neutron [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:25 compute-0 nova_compute[244014]: 2026-02-25 12:34:25.893 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575727215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 444 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 217 op/s
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.023 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.028 244018 DEBUG nova.compute.provider_tree [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.046 244018 DEBUG nova.scheduler.client.report [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:34:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1575727215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.066 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.066 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.107 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.107 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.128 244018 INFO nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.146 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.249 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.251 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.251 244018 INFO nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Creating image(s)
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.268 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.288 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.304 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.307 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.333 244018 DEBUG nova.policy [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cb2c815ef3214a7b897f911b4f53a146', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '367d43ab207546c3900a8414f0713ef4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.369 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.371 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.371 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.372 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.391 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.394 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e3178d86-5c76-4393-9327-2aac2cb8d81d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:26.423 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.732 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e3178d86-5c76-4393-9327-2aac2cb8d81d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.800 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] resizing rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.894 244018 DEBUG nova.objects.instance [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.911 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.912 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Ensure instance console log exists: /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.912 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:26 compute-0 nova_compute[244014]: 2026-02-25 12:34:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:27 compute-0 ceph-mon[76335]: pgmap v1576: 305 pgs: 305 active+clean; 444 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 217 op/s
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.134 244018 DEBUG nova.compute.manager [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.135 244018 DEBUG oslo_concurrency.lockutils [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.135 244018 DEBUG oslo_concurrency.lockutils [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.136 244018 DEBUG oslo_concurrency.lockutils [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.136 244018 DEBUG nova.compute.manager [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.137 244018 WARNING nova.compute.manager [req-cc3d294c-3e92-4624-899c-48e562076b2a req-2d149a16-09d2-4e38-a684-d84097b131c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state shelved and task_state shelving_offloading.
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.175 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance destroyed successfully.
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.176 244018 DEBUG nova.objects.instance [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'resources' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.190 244018 DEBUG nova.virt.libvirt.vif [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member',shelved_at='2026-02-25T12:34:24.611832',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fc0b665d-d9e3-4786-aa12-40ae928c6e08'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:20Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.190 244018 DEBUG nova.network.os_vif_util [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.191 244018 DEBUG nova.network.os_vif_util [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.192 244018 DEBUG os_vif [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.194 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f9abb7-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.196 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.200 244018 INFO os_vif [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')
Feb 25 12:34:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.452 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Successfully created port: 26c6cf46-a52b-4476-9112-4047f420e492 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.570 244018 INFO nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deleting instance files /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.571 244018 INFO nova.virt.libvirt.driver [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deletion of /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del complete
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.625 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022852.6240342, 0f9c0573-54eb-4c29-82f2-f79ac673120f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.626 244018 INFO nova.compute.manager [-] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] VM Stopped (Lifecycle Event)
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.646 244018 DEBUG nova.compute.manager [None req-fb3ffdd0-fda1-42af-867a-5ff5cfe1d18c - - - - - -] [instance: 0f9c0573-54eb-4c29-82f2-f79ac673120f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.693 244018 INFO nova.scheduler.client.report [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Deleted allocations for instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.754 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.755 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.855 244018 DEBUG oslo_concurrency.processutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.881 244018 DEBUG nova.compute.manager [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.881 244018 DEBUG oslo_concurrency.lockutils [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.882 244018 DEBUG oslo_concurrency.lockutils [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.882 244018 DEBUG oslo_concurrency.lockutils [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.882 244018 DEBUG nova.compute.manager [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:27 compute-0 nova_compute[244014]: 2026-02-25 12:34:27.883 244018 WARNING nova.compute.manager [req-e7d220a9-d03e-4cef-a9c5-711997bb0e38 req-9f0925da-de26-4afa-b4ea-3b2e6718ee08 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state shelved_offloaded and task_state None.
Feb 25 12:34:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 492 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 14 MiB/s wr, 410 op/s
Feb 25 12:34:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641417559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.400 244018 DEBUG oslo_concurrency.processutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.405 244018 DEBUG nova.compute.provider_tree [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.424 244018 DEBUG nova.scheduler.client.report [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.456 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.546 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Successfully updated port: 26c6cf46-a52b-4476-9112-4047f420e492 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.558 244018 DEBUG oslo_concurrency.lockutils [None req-f5d07bbb-a166-47b5-a34d-62376dbd73fd 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.570 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.571 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquired lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.571 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:28 compute-0 nova_compute[244014]: 2026-02-25 12:34:28.709 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:34:29 compute-0 ceph-mon[76335]: pgmap v1577: 305 pgs: 305 active+clean; 492 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 14 MiB/s wr, 410 op/s
Feb 25 12:34:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3641417559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.572 244018 DEBUG nova.network.neutron [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.596 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Releasing lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.597 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance network_info: |[{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.600 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start _get_guest_xml network_info=[{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.604 244018 WARNING nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.610 244018 DEBUG nova.virt.libvirt.host [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.610 244018 DEBUG nova.virt.libvirt.host [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.614 244018 DEBUG nova.virt.libvirt.host [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.614 244018 DEBUG nova.virt.libvirt.host [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.615 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.615 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.616 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.616 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.617 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.617 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.617 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.618 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.618 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.618 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.619 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.619 244018 DEBUG nova.virt.hardware [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
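
[Note] The hardware.py lines above show the topology negotiation for this m1.nano guest: flavor and image supply no constraints (limits and preferences all 0:0:0), so the caps fall back to 65536 per dimension and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1 — the same <topology> that appears in the guest XML further down. A minimal sketch of that enumeration, simplified from what nova/virt/hardware.py actually does:

    # Simplified sketch (not nova's actual code): every sockets*cores*threads
    # factorization of the vCPU count that fits the per-dimension limits is a
    # candidate; with no constraints, 1 vCPU yields only 1:1:1.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    print(list(possible_topologies(1)))
    # [VirtCPUTopology(sockets=1, cores=1, threads=1)]
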
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.622 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:29 compute-0 nova_compute[244014]: 2026-02-25 12:34:29.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 492 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.3 MiB/s wr, 246 op/s
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.067 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.068 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.068 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.068 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.069 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Processing event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.069 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.070 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.070 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.070 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.071 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No waiting events found dispatching network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.071 244018 WARNING nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received unexpected event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with vm_state building and task_state spawning.
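
[Note] The WARNING above is the flip side of the prepare/pop protocol visible in the surrounding lockutils lines: the spawning thread registers a waiter per (instance, event) under the "<uuid>-events" lock, and the external-event handler pops it when Neutron reports network-vif-plugged. Here the event arrived a second time after its waiter was already consumed (the corresponding wait completes at 12:34:30.076 just below), so pop found nothing; while vm_state is building this duplicate is harmless. A toy, threading-based sketch of the registry (the real one in nova/compute/manager.py is eventlet-based):

    import threading

    class InstanceEvents:
        """Toy version (assumption: much simplified) of nova's event registry."""
        def __init__(self):
            self._lock = threading.Lock()   # plays the "<uuid>-events" lock
            self._waiters = {}              # (uuid, event) -> threading.Event

        def prepare_for_instance_event(self, uuid, event):
            with self._lock:
                return self._waiters.setdefault((uuid, event), threading.Event())

        def pop_instance_event(self, uuid, event):
            with self._lock:
                waiter = self._waiters.pop((uuid, event), None)
            if waiter is None:
                print(f"Received unexpected event {event} for instance {uuid}")
            else:
                waiter.set()                # wakes wait_for_instance_event
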
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.071 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-changed-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.072 244018 DEBUG nova.compute.manager [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Refreshing instance network info cache due to event network-changed-26c6cf46-a52b-4476-9112-4047f420e492. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.072 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.073 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.073 244018 DEBUG nova.network.neutron [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Refreshing network info cache for port 26c6cf46-a52b-4476-9112-4047f420e492 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.076 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.087 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022870.0817316, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.103 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Resumed (Lifecycle Event)
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.106 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.121 244018 INFO nova.virt.libvirt.driver [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance spawned successfully.
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.122 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:34:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495813043' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.142 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.147 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.151 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.152 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.152 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.153 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.153 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.154 244018 DEBUG nova.virt.libvirt.driver [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
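
[Note] The six "Found default for ..." lines record the virtual-hardware defaults the driver just used (SATA cdrom, virtio disk/video/vif, USB input, usbtablet pointer) so the same buses survive rebuilds and migrations even if the host-side defaults change later. A hedged sketch of the effect, with a plain dict standing in for the instance's system_metadata (exact key naming is an assumption):

    # Hedged sketch of _register_undefined_instance_details' effect: persist
    # the chosen defaults as image properties unless the image already set them.
    system_metadata = {"image_hw_rng_model": "virtio"}  # from the image, per the log

    chosen_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
    for prop, value in chosen_defaults.items():
        system_metadata.setdefault(f"image_{prop}", value)
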
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.162 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.183 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.188 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] During sync_power_state the instance has a pending task (spawning). Skip.
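
[Note] The power-state sync above compares the DB's view (power_state 0, NOSTATE) with the hypervisor's (1, RUNNING) after the "Resumed" lifecycle event, but bails out because task_state is still spawning — the spawn path owns that state transition. A condensed sketch of the decision, using the power-state codes from nova.compute.power_state:

    NOSTATE, RUNNING = 0, 1   # nova.compute.power_state values seen in the log

    def handle_lifecycle_sync(db_power_state, vm_power_state, task_state):
        # Condensed sketch of the first checks in nova's power-state sync.
        if task_state is not None:
            return "skip"            # "During sync_power_state ... Skip."
        if db_power_state != vm_power_state:
            return "update-db-and-reconcile"
        return "in-sync"

    print(handle_lifecycle_sync(NOSTATE, RUNNING, "spawning"))   # skip
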
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.244 244018 INFO nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Took 12.99 seconds to spawn the instance on the hypervisor.
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.245 244018 DEBUG nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.323 244018 INFO nova.compute.manager [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Took 14.37 seconds to build instance.
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.350 244018 DEBUG oslo_concurrency.lockutils [None req-f7b8e08b-4c5f-4bb0-8514-563729d6737a d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
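
[Note] The 14.456 s hold time matches the reported build time: _locked_do_build_and_run_instance keeps a per-instance-UUID lock for the whole build so concurrent API operations on the same server serialize behind it. Roughly this pattern, via oslo.concurrency (a sketch, not the actual decorator stack):

    from oslo_concurrency import lockutils

    # Rough sketch: serialize all build/delete work on one instance UUID.
    @lockutils.synchronized("ef34c4d4-57e0-4af1-af7a-b8ef35d09862")
    def _locked_do_build_and_run_instance():
        ...  # spawn guest, wait for network-vif-plugged, flip vm_state to active

    _locked_do_build_and_run_instance()
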
Feb 25 12:34:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404903201' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.777 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
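
[Note] Both ceph mon dump round trips (0.539 s and 0.589 s, each audited by ceph-mon as client.openstack) exist so the RBD image backend can learn the monitor addresses that end up as <host name=... port=.../> elements in the disk XML below. A hedged sketch of extracting them from the monmap JSON (field names per current Ceph releases; adjust if the schema differs):

    import json
    import subprocess

    # Same CLI invocation as logged; parse v1 (port 6789) monitor addresses.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    hosts = []
    for mon in json.loads(out)["mons"]:
        for addr in mon["public_addrs"]["addrvec"]:
            if addr["type"] == "v1":
                host, _, port = addr["addr"].rpartition(":")
                hosts.append((host, port))
    print(hosts)   # e.g. [("192.168.122.100", "6789")]
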
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.778 244018 DEBUG nova.virt.libvirt.vif [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1397178537',display_name='tempest-ServerRescueTestJSON-server-1397178537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1397178537',id=86,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-j34kam6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:26Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=e3178d86-5c76-4393-9327-2aac2cb8d81d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.779 244018 DEBUG nova.network.os_vif_util [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.779 244018 DEBUG nova.network.os_vif_util [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.780 244018 DEBUG nova.objects.instance [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.794 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <uuid>e3178d86-5c76-4393-9327-2aac2cb8d81d</uuid>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <name>instance-00000056</name>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSON-server-1397178537</nova:name>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:29</nova:creationTime>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:user uuid="cb2c815ef3214a7b897f911b4f53a146">tempest-ServerRescueTestJSON-930018924-project-member</nova:user>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:project uuid="367d43ab207546c3900a8414f0713ef4">tempest-ServerRescueTestJSON-930018924</nova:project>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <nova:port uuid="26c6cf46-a52b-4476-9112-4047f420e492">
Feb 25 12:34:30 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="serial">e3178d86-5c76-4393-9327-2aac2cb8d81d</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="uuid">e3178d86-5c76-4393-9327-2aac2cb8d81d</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk">
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config">
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:30 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a5:a3:ed"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <target dev="tap26c6cf46-a5"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/console.log" append="off"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:30 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:30 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:30 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:30 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:30 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
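
[Note] Worth noting in the XML above: the guest is a q35 machine with a host-model CPU; both the root disk and the config-drive cdrom are network disks served by RBD with cephx auth (no local image files at all); the tap device uses vhost with an MTU of 1442 to absorb the Geneve overlay overhead; and ~24 pcie-root-port controllers are pre-created for device hotplug. Since the domain is fully described by this XML, it can be inspected mechanically — a small self-contained example pulling the RBD sources back out (document trimmed here to the elements touched):

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="sda" bus="sata"/>
        </disk>
      </devices>
    </domain>
    """

    for disk in ET.fromstring(domain_xml).iter("disk"):
        src = disk.find("source")
        mon = src.find("host")
        print(disk.find("target").get("dev"), "->", src.get("name"),
              "via", f"{mon.get('name')}:{mon.get('port')}")
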
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.798 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Preparing to wait for external event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.798 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.799 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.799 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.800 244018 DEBUG nova.virt.libvirt.vif [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1397178537',display_name='tempest-ServerRescueTestJSON-server-1397178537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1397178537',id=86,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-j34kam6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:26Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=e3178d86-5c76-4393-9327-2aac2cb8d81d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.800 244018 DEBUG nova.network.os_vif_util [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.801 244018 DEBUG nova.network.os_vif_util [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.801 244018 DEBUG os_vif [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.802 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.802 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c6cf46-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.805 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26c6cf46-a5, col_values=(('external_ids', {'iface-id': '26c6cf46-a52b-4476-9112-4047f420e492', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:a3:ed', 'vm-uuid': 'e3178d86-5c76-4393-9327-2aac2cb8d81d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:30 compute-0 NetworkManager[49836]: <info>  [1772022870.8074] manager: (tap26c6cf46-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.813 244018 INFO os_vif [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5')
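
[Note] The successful plug is just the two OVSDB transactions above: an idempotent add-bridge (a no-op here, "Transaction caused no change"), then add-port plus setting external_ids on the Interface row. ovn-controller matches external_ids["iface-id"] against the logical switch port (the Neutron port UUID), binds the port to this chassis, and that binding is what eventually produces the network-vif-plugged event nova waits for. Roughly the same thing done directly with ovsdbapp — a sketch that assumes a local ovsdb-server socket at the usual path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Sketch of the logged AddPortCommand + DbSetCommand transaction.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap26c6cf46-a5", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap26c6cf46-a5",
            ("external_ids", {
                "iface-id": "26c6cf46-a52b-4476-9112-4047f420e492",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:a5:a3:ed",
                "vm-uuid": "e3178d86-5c76-4393-9327-2aac2cb8d81d",
            })))
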
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.858 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.859 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.859 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No VIF found with MAC fa:16:3e:a5:a3:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.860 244018 INFO nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Using config drive
Feb 25 12:34:30 compute-0 nova_compute[244014]: 2026-02-25 12:34:30.883 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:34:30
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', '.mgr']
Feb 25 12:34:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:34:31 compute-0 ceph-mon[76335]: pgmap v1578: 305 pgs: 305 active+clean; 492 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.3 MiB/s wr, 246 op/s
Feb 25 12:34:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3495813043' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1404903201' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:31 compute-0 nova_compute[244014]: 2026-02-25 12:34:31.289 244018 INFO nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Creating config drive at /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config
Feb 25 12:34:31 compute-0 nova_compute[244014]: 2026-02-25 12:34:31.293 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5yh1u5zd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:31 compute-0 nova_compute[244014]: 2026-02-25 12:34:31.431 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5yh1u5zd" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:31 compute-0 nova_compute[244014]: 2026-02-25 12:34:31.454 244018 DEBUG nova.storage.rbd_utils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:31 compute-0 nova_compute[244014]: 2026-02-25 12:34:31.458 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
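
[Note] Config-drive flow: the metadata tree is rendered into a temp dir, mkisofs packs it into an ISO9660 image with volume label config-2 (which is how cloud-init inside the guest locates it), and rbd import uploads it to the vms pool as <uuid>_disk.config — the exact image name the SATA cdrom in the domain XML reads. The equivalent two steps from Python; the metadata path is a stand-in for the tmp5yh1u5zd directory logged above:

    import subprocess

    uuid = "e3178d86-5c76-4393-9327-2aac2cb8d81d"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    metadata_dir = "/tmp/metadata"   # stand-in for the logged temp dir

    # Build the ISO9660 config drive (flags copied from the logged command).
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2", metadata_dir,
    ])
    # Push it into the 'vms' pool for the network cdrom to read.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
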
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:34:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:34:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 451 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 6.9 MiB/s wr, 303 op/s
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.257 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.259 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.259 244018 INFO nova.compute.manager [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Unshelving
Feb 25 12:34:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.388 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.388 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.394 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_requests' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:32 compute-0 ceph-mon[76335]: pgmap v1579: 305 pgs: 305 active+clean; 451 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 6.9 MiB/s wr, 303 op/s
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.412 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'numa_topology' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.425 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.425 244018 INFO nova.compute.claims [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.540 244018 DEBUG oslo_concurrency.processutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.542 244018 INFO nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deleting local config drive /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config because it was imported into RBD.
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.571 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:32 compute-0 kernel: tap26c6cf46-a5: entered promiscuous mode
Feb 25 12:34:32 compute-0 NetworkManager[49836]: <info>  [1772022872.5952] manager: (tap26c6cf46-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Feb 25 12:34:32 compute-0 ovn_controller[147040]: 2026-02-25T12:34:32Z|00833|binding|INFO|Claiming lport 26c6cf46-a52b-4476-9112-4047f420e492 for this chassis.
Feb 25 12:34:32 compute-0 ovn_controller[147040]: 2026-02-25T12:34:32Z|00834|binding|INFO|26c6cf46-a52b-4476-9112-4047f420e492: Claiming fa:16:3e:a5:a3:ed 10.100.0.5
Feb 25 12:34:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:32.604 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:32 compute-0 ovn_controller[147040]: 2026-02-25T12:34:32Z|00835|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 ovn-installed in OVS
Feb 25 12:34:32 compute-0 ovn_controller[147040]: 2026-02-25T12:34:32Z|00836|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 up in Southbound
Feb 25 12:34:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:32.607 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 bound to our chassis
Feb 25 12:34:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:32.612 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:34:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:32.615 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5028bfe-6372-4761-9a5b-d9c690cbad7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:32 compute-0 nova_compute[244014]: 2026-02-25 12:34:32.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:32 compute-0 systemd-udevd[319300]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:32 compute-0 systemd-machined[210048]: New machine qemu-106-instance-00000056.
Feb 25 12:34:32 compute-0 NetworkManager[49836]: <info>  [1772022872.6450] device (tap26c6cf46-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:32 compute-0 NetworkManager[49836]: <info>  [1772022872.6460] device (tap26c6cf46-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:32 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000056.
Feb 25 12:34:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Feb 25 12:34:32 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Feb 25 12:34:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:34:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011846653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.160 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.167 244018 DEBUG nova.compute.provider_tree [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.186 244018 DEBUG nova.scheduler.client.report [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.214 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.332 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022873.332289, e3178d86-5c76-4393-9327-2aac2cb8d81d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.333 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Started (Lifecycle Event)
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.337 244018 DEBUG nova.network.neutron [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updated VIF entry in instance network info cache for port 26c6cf46-a52b-4476-9112-4047f420e492. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.338 244018 DEBUG nova.network.neutron [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.358 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.358 244018 DEBUG oslo_concurrency.lockutils [req-c6ede333-cb50-46da-b626-b7827a435c60 req-d7d1f8aa-eac2-4fa2-ba8d-817871d68be8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.362 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022873.3323832, e3178d86-5c76-4393-9327-2aac2cb8d81d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.362 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Paused (Lifecycle Event)
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.382 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.385 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:33 compute-0 nova_compute[244014]: 2026-02-25 12:34:33.404 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:34:33 compute-0 ceph-mon[76335]: osdmap e239: 3 total, 3 up, 3 in
Feb 25 12:34:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2011846653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.013 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022859.0111198, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.013 244018 INFO nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Stopped (Lifecycle Event)
Feb 25 12:34:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 451 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 278 op/s
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.040 244018 DEBUG nova.compute.manager [None req-c8937622-ec8c-49ad-be56-44e16b66fc57 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.254 244018 INFO nova.network.neutron [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.481 244018 DEBUG nova.compute.manager [req-89c55afc-8305-4e18-a73f-cfea3498c330 req-dbebbd66-35d3-4030-9817-80445e8e6c01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.482 244018 DEBUG oslo_concurrency.lockutils [req-89c55afc-8305-4e18-a73f-cfea3498c330 req-dbebbd66-35d3-4030-9817-80445e8e6c01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.482 244018 DEBUG oslo_concurrency.lockutils [req-89c55afc-8305-4e18-a73f-cfea3498c330 req-dbebbd66-35d3-4030-9817-80445e8e6c01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.482 244018 DEBUG oslo_concurrency.lockutils [req-89c55afc-8305-4e18-a73f-cfea3498c330 req-dbebbd66-35d3-4030-9817-80445e8e6c01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.483 244018 DEBUG nova.compute.manager [req-89c55afc-8305-4e18-a73f-cfea3498c330 req-dbebbd66-35d3-4030-9817-80445e8e6c01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Processing event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.483 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.486 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022874.4860802, e3178d86-5c76-4393-9327-2aac2cb8d81d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.486 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Resumed (Lifecycle Event)
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.488 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.491 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance spawned successfully.
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.491 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.509 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.514 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.517 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.518 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.518 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.519 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.519 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.519 244018 DEBUG nova.virt.libvirt.driver [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.543 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.584 244018 INFO nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 8.33 seconds to spawn the instance on the hypervisor.
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.585 244018 DEBUG nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.638 244018 INFO nova.compute.manager [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 9.33 seconds to build instance.
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.655 244018 DEBUG oslo_concurrency.lockutils [None req-39fbdc83-b421-4dfc-80d4-26fb41170434 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:34 compute-0 ceph-mon[76335]: pgmap v1581: 305 pgs: 305 active+clean; 451 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 278 op/s
Feb 25 12:34:34 compute-0 nova_compute[244014]: 2026-02-25 12:34:34.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.174 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.175 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.175 244018 DEBUG nova.network.neutron [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.277 244018 DEBUG nova.compute.manager [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-changed-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.278 244018 DEBUG nova.compute.manager [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Refreshing instance network info cache due to event network-changed-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.278 244018 DEBUG oslo_concurrency.lockutils [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:35 compute-0 nova_compute[244014]: 2026-02-25 12:34:35.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 451 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 278 op/s
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.057 244018 DEBUG nova.compute.manager [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.058 244018 DEBUG oslo_concurrency.lockutils [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.059 244018 DEBUG oslo_concurrency.lockutils [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.059 244018 DEBUG oslo_concurrency.lockutils [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.060 244018 DEBUG nova.compute.manager [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.060 244018 WARNING nova.compute.manager [req-56dbe35a-cd0e-4957-b442-1358066d6e0a req-e9fbe332-795a-4343-a57f-ff918a6133bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state active and task_state None.
Feb 25 12:34:37 compute-0 ceph-mon[76335]: pgmap v1582: 305 pgs: 305 active+clean; 451 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 278 op/s
Feb 25 12:34:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.311 244018 INFO nova.compute.manager [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Rescuing
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.312 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.312 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquired lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.312 244018 DEBUG nova.network.neutron [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.808 244018 INFO nova.compute.manager [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Rebuilding instance
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.895 244018 DEBUG nova.network.neutron [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.917 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.918 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.919 244018 INFO nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating image(s)
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.941 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.945 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.950 244018 DEBUG oslo_concurrency.lockutils [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.951 244018 DEBUG nova.network.neutron [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Refreshing network info cache for port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:34:37 compute-0 nova_compute[244014]: 2026-02-25 12:34:37.986 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.020 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 451 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1017 KiB/s wr, 279 op/s
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.026 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "6252a025717471779da668bc27b6de736cbe90a7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.028 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "6252a025717471779da668bc27b6de736cbe90a7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.290 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.324 244018 DEBUG nova.compute.manager [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.379 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_requests' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.392 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.408 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.421 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'migration_context' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.445 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.450 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.594 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/fc0b665d-d9e3-4786-aa12-40ae928c6e08/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/fc0b665d-d9e3-4786-aa12-40ae928c6e08/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.646 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/fc0b665d-d9e3-4786-aa12-40ae928c6e08/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.647 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] cloning images/fc0b665d-d9e3-4786-aa12-40ae928c6e08@snap to None/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.728 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "6252a025717471779da668bc27b6de736cbe90a7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.868 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'migration_context' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:38 compute-0 nova_compute[244014]: 2026-02-25 12:34:38.950 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] flattening vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:34:39 compute-0 ceph-mon[76335]: pgmap v1583: 305 pgs: 305 active+clean; 451 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1017 KiB/s wr, 279 op/s
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.586 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Image rbd:vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.586 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.586 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Ensure instance console log exists: /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.587 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.587 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.587 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.589 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start _get_guest_xml network_info=[{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:34:16Z,direct_url=<?>,disk_format='raw',id=fc0b665d-d9e3-4786-aa12-40ae928c6e08,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-293517057-shelved',owner='0d34ca23436b401fbaeb0b01190a440a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:34:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.592 244018 WARNING nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.598 244018 DEBUG nova.virt.libvirt.host [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.599 244018 DEBUG nova.virt.libvirt.host [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.601 244018 DEBUG nova.virt.libvirt.host [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.601 244018 DEBUG nova.virt.libvirt.host [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.601 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.602 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:34:16Z,direct_url=<?>,disk_format='raw',id=fc0b665d-d9e3-4786-aa12-40ae928c6e08,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-293517057-shelved',owner='0d34ca23436b401fbaeb0b01190a440a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:34:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.602 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.602 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.602 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.603 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.603 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.603 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.603 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.603 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.604 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.604 244018 DEBUG nova.virt.hardware [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
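
The topology lines above walk from flavor and image constraints (all unset, hence 0:0:0) to the single viable 1:1:1 layout for one vCPU. An illustrative enumeration of valid (sockets, cores, threads) triples under the logged 65536 caps; it mirrors the idea of _get_possible_cpu_topologies but is not nova's exact algorithm or ordering:

    # Sketch: every factorization of the vCPU count into
    # sockets * cores * threads, subject to per-dimension caps.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topologies.append((s, c, t))
        return topologies

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single option logged
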
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.604 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.621 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.771 244018 DEBUG nova.network.neutron [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.799 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Releasing lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.979 244018 DEBUG nova.network.neutron [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updated VIF entry in instance network info cache for port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.979 244018 DEBUG nova.network.neutron [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.995 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:34:39 compute-0 nova_compute[244014]: 2026-02-25 12:34:39.997 244018 DEBUG oslo_concurrency.lockutils [req-a4592b1c-8b36-41b4-92b4-25e5619e7b7a req-290ae705-e62e-4412-aee5-7df5bc245ff1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 451 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1017 KiB/s wr, 279 op/s
Feb 25 12:34:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209770718' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.195 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
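
The driver shells out to `ceph mon dump` here to learn the monitor addresses that later appear as <host> elements in the disk XML. A sketch of the same call with the logged --id/--conf arguments, assuming the usual monmap JSON layout with a top-level "mons" list:

    import json
    import subprocess

    # Sketch: dump the monmap as JSON and map monitor names to addresses.
    def ceph_mon_addresses(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return {m["name"]: m["addr"] for m in json.loads(out).get("mons", [])}
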
Feb 25 12:34:40 compute-0 ceph-mon[76335]: pgmap v1584: 305 pgs: 305 active+clean; 451 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1017 KiB/s wr, 279 op/s
Feb 25 12:34:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1209770718' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.229 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.238 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:40 compute-0 podman[319646]: 2026-02-25 12:34:40.717568206 +0000 UTC m=+0.057083044 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:34:40 compute-0 podman[319647]: 2026-02-25 12:34:40.736179312 +0000 UTC m=+0.073590500 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
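
The two health_status entries come from podman's scheduled healthchecks (the mounted /openstack/healthcheck test visible in config_data). The same probe can be triggered on demand; container name taken from the log line above:

    import subprocess

    # Sketch: run the container's configured healthcheck once;
    # exit status 0 means healthy.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if res.returncode == 0 else "unhealthy")
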
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109869383' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.925 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.927 244018 DEBUG nova.virt.libvirt.vif [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='fc0b665d-d9e3-4786-aa12-40ae928c6e08',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member',shelved_at='2026-02-25T12:34:24.611832',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fc0b665d-d9e3-4786-aa12-40ae928c6e08'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:32Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.927 244018 DEBUG nova.network.os_vif_util [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.928 244018 DEBUG nova.network.os_vif_util [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.930 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_devices' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.957 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <uuid>19abddab-88d5-48b8-b98e-1dedccbb8b7f</uuid>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <name>instance-0000004f</name>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersNegativeTestJSON-server-293517057</nova:name>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:39</nova:creationTime>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:user uuid="84ba7d5e80a44535b25853f3b18e352d">tempest-ServersNegativeTestJSON-1613719120-project-member</nova:user>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:project uuid="0d34ca23436b401fbaeb0b01190a440a">tempest-ServersNegativeTestJSON-1613719120</nova:project>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="fc0b665d-d9e3-4786-aa12-40ae928c6e08"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <nova:port uuid="d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536">
Feb 25 12:34:40 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="serial">19abddab-88d5-48b8-b98e-1dedccbb8b7f</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="uuid">19abddab-88d5-48b8-b98e-1dedccbb8b7f</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk">
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config">
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:af:66:6e"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <target dev="tapd6f9abb7-ac"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/console.log" append="off"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:40 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:40 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:40 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:40 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:40 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
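
The domain XML dumped above is ordinary libvirt XML, so the interesting parts can be recovered with the standard library. A sketch that extracts the RBD-backed disks; xml_text should hold the <domain>...</domain> text pasted from the log:

    import xml.etree.ElementTree as ET

    # Sketch: list (target device, RBD image name) pairs from a domain XML.
    def rbd_disks(xml_text):
        root = ET.fromstring(xml_text)
        disks = []
        for disk in root.findall("./devices/disk"):
            source, target = disk.find("source"), disk.find("target")
            if source is not None and source.get("protocol") == "rbd":
                disks.append((target.get("dev"), source.get("name")))
        # e.g. [("vda", "vms/..._disk"), ("sda", "vms/..._disk.config")]
        return disks
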
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.963 244018 DEBUG nova.compute.manager [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Preparing to wait for external event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.963 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.963 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.964 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
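
The acquire/release pair around the "-events" lock is oslo.concurrency's in-process lock guarding the instance event table. In application code the usual entry point is the synchronized decorator; a sketch only, using the lock name from the log:

    from oslo_concurrency import lockutils

    # Sketch: serialize event bookkeeping on the per-instance lock name.
    # lockutils.synchronized uses an in-process lock by default, matching
    # the acquire/release pair logged above.
    @lockutils.synchronized("19abddab-88d5-48b8-b98e-1dedccbb8b7f-events")
    def create_or_get_event():
        # critical section: look up or register the expected external event
        pass
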
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.964 244018 DEBUG nova.virt.libvirt.vif [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='fc0b665d-d9e3-4786-aa12-40ae928c6e08',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:32:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member',shelved_at='2026-02-25T12:34:24.611832',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fc0b665d-d9e3-4786-aa12-40ae928c6e08'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:32Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.965 244018 DEBUG nova.network.os_vif_util [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.966 244018 DEBUG nova.network.os_vif_util [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.966 244018 DEBUG os_vif [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.967 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.968 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.970 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.970 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6f9abb7-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.971 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6f9abb7-ac, col_values=(('external_ids', {'iface-id': 'd6f9abb7-ac51-44f8-88ae-b5a8ef3b6536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:66:6e', 'vm-uuid': '19abddab-88d5-48b8-b98e-1dedccbb8b7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
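
The ovsdbapp transaction above (AddPortCommand plus DbSetCommand) is equivalent to a single ovs-vsctl transaction. A sketch issuing it via subprocess, with the bridge, port name, MAC, and iface-id copied from the logged commands:

    import subprocess

    # Sketch: add the tap port to br-int and set its external_ids in one
    # ovs-vsctl transaction ("--" separates the two sub-commands).
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapd6f9abb7-ac",
        "--", "set", "Interface", "tapd6f9abb7-ac",
        "external_ids:iface-id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:af:66:6e",
        "external_ids:vm-uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f",
    ], check=True)
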
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:40 compute-0 NetworkManager[49836]: <info>  [1772022880.9762] manager: (tapd6f9abb7-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:40 compute-0 nova_compute[244014]: 2026-02-25 12:34:40.978 244018 INFO os_vif [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.021 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.022 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.023 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] No VIF found with MAC fa:16:3e:af:66:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.023 244018 INFO nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Using config drive
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.044 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.065 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.099 244018 DEBUG nova.objects.instance [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'keypairs' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2109869383' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.454 244018 INFO nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Creating config drive at /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.459 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy3uo_01m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.589 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy3uo_01m" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
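
The config drive is just an ISO9660 image with the volume label config-2, built with the mkisofs flags shown above. A sketch under assumptions: it stages a placeholder openstack/latest/meta_data.json (nova writes a full metadata tree) and the /tmp output path is illustrative:

    import pathlib
    import subprocess
    import tempfile

    # Sketch: stage a minimal metadata tree and build a config-2 ISO with
    # the same flags as the logged command.
    with tempfile.TemporaryDirectory() as staging:
        latest = pathlib.Path(staging, "openstack", "latest")
        latest.mkdir(parents=True)
        (latest / "meta_data.json").write_text("{}")
        subprocess.run([
            "/usr/bin/mkisofs", "-o", "/tmp/disk.config",
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
            "-V", "config-2", staging,
        ], check=True)
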
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.614 244018 DEBUG nova.storage.rbd_utils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] rbd image 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:41 compute-0 nova_compute[244014]: 2026-02-25 12:34:41.617 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:41 compute-0 sudo[319748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:34:41 compute-0 sudo[319748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:41 compute-0 sudo[319748]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:41 compute-0 sudo[319773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:34:41 compute-0 sudo[319773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 505 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 3.0 MiB/s wr, 276 op/s
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.056 244018 DEBUG oslo_concurrency.processutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config 19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
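
Around this point the driver probes for the RBD image and imports the freshly built config drive because it is absent. The equivalent CLI sequence as a sketch, with pool, image name, and credentials from the logged commands; rbd info's exit status doubles as the existence check:

    import subprocess

    # Sketch: probe-then-import, mirroring the logged rbd_utils behaviour.
    def rbd_exists(pool, image, client="openstack", conf="/etc/ceph/ceph.conf"):
        res = subprocess.run(
            ["rbd", "info", "%s/%s" % (pool, image),
             "--id", client, "--conf", conf],
            capture_output=True,
        )
        return res.returncode == 0  # rbd info fails when the image is absent

    if not rbd_exists("vms", "19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config"):
        subprocess.run([
            "rbd", "import", "--pool", "vms",
            "/var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config",
            "19abddab-88d5-48b8-b98e-1dedccbb8b7f_disk.config",
            "--image-format=2", "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf",
        ], check=True)
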
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.060 244018 INFO nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deleting local config drive /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f/disk.config because it was imported into RBD.
Feb 25 12:34:42 compute-0 kernel: tapd6f9abb7-ac: entered promiscuous mode
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.1202] manager: (tapd6f9abb7-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00837|binding|INFO|Claiming lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for this chassis.
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00838|binding|INFO|d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536: Claiming fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.130 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.131 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 bound to our chassis
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.133 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00839|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 ovn-installed in OVS
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00840|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 up in Southbound
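
ovn-controller has now claimed the lport and flipped it up in the southbound DB; the network-vif-plugged event nova is waiting for follows from this binding. A sketch that inspects the same Port_Binding row with ovn-sbctl (logical_port value from the log; assumes southbound DB access from this host):

    import subprocess

    # Sketch: read back the Port_Binding row ovn-controller just updated.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)  # chassis set and up true once the claim completes
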
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.143 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[473ccdc3-df51-44e6-827c-6ac902a12c6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.143 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0cf2281-b1 in ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.146 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0cf2281-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.146 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa4720b-abd5-4c32-8c44-1a039d5cf3a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.147 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ecad3625-4505-4d59-9fad-43ded514fb3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
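The privsep replies above correspond to building the veth pair for the metadata namespace: host end tapa0cf2281-b0, namespace end tapa0cf2281-b1 inside ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (the get_link_id miss for tapa0cf2281-b0 just confirms no stale device exists). Neutron drives this through pyroute2 behind privsep; a standalone sketch of the same plumbing, assuming root and that the namespace already exists (interface and namespace names copied from the log):

    from pyroute2 import IPRoute

    NS = 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62'

    with IPRoute() as ipr:
        # Create the pair in the root namespace...
        ipr.link('add', ifname='tapa0cf2281-b0', kind='veth',
                 peer={'ifname': 'tapa0cf2281-b1'})
        # ...then move the -b1 end into the metadata namespace.
        idx = ipr.link_lookup(ifname='tapa0cf2281-b1')[0]
        ipr.link('set', index=idx, net_ns_fd=NS)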
Feb 25 12:34:42 compute-0 systemd-udevd[319826]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.162 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b587ce68-4526-4836-b34b-8a4e2fe0657c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 systemd-machined[210048]: New machine qemu-107-instance-0000004f.
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.1670] device (tapd6f9abb7-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.1679] device (tapd6f9abb7-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:42 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-0000004f.
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.186 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20f71f29-a249-48b4-869b-f3606b567a7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.209 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f14e5-8b1d-4b7a-b896-9afc93a004f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 systemd-udevd[319831]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.2156] manager: (tapa0cf2281-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/356)
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c0dce2-46b7-4116-91f3-7b5dc6e15e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.242 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3aefe0-dd2a-4299-adea-99f45e714a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.248 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9ed2a5-2943-4b21-b21f-c428a0b57a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.2621] device (tapa0cf2281-b0): carrier: link connected
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.265 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[59f86848-4d0d-42dc-b44f-8dfafba27a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[372cc8bb-c169-4689-9450-e4fbd10b38d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485167, 'reachable_time': 41549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319863, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.287 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1357bf13-4460-4a2a-8f80-470de04eda01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:1087'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485167, 'tstamp': 485167}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319866, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07e5c90c-0ab7-454f-82a4-3c92641fdfc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485167, 'reachable_time': 41549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319867, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
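The two large replies above are netlink RTM_NEWLINK dumps of tapa0cf2281-b1 taken inside the metadata namespace (note the 'target' field in each header). The same dump can be reproduced with pyroute2; the IFLA_* attribute names are exactly the ones in the log, and the namespace name is copied from it:

    from pyroute2 import NetNS

    # Open a netlink socket bound to the ovnmeta namespace and list links.
    with NetNS('ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))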
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bca33f1e-4f18-4c26-8aaf-32dbacf2e935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002295558151574994 of space, bias 1.0, pg target 0.6886674454724981 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0032502682544201266 of space, bias 1.0, pg target 0.975080476326038 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.334791845833485e-07 of space, bias 4.0, pg target 0.0010001750215000182 quantized to 16 (current 16)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:34:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
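The pg targets above follow directly from the logged ratios: target = space ratio x bias x (OSD count x target PGs per OSD). With the cluster's 3 OSDs (osdmap below reports 3 total, 3 up, 3 in) and the default mon_target_pg_per_osd of 100, the budget is 300 PGs; the capacity figure 64411926528 bytes is the 60 GiB shown in the pgmap line. A quick check against the logged values (ratios copied from the lines above; the formula is inferred from these numbers, not lifted from the mgr module's code):

    # Reproduce the logged pg_autoscaler targets from the logged ratios.
    POOLS = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.002295558151574994,  1.0),
        'images':             (0.0032502682544201266, 1.0),
        'cephfs.cephfs.meta': (8.334791845833485e-07, 4.0),
    }
    BUDGET = 3 * 100  # OSDs x mon_target_pg_per_osd (assumed default)

    for name, (ratio, bias) in POOLS.items():
        print(f'{name}: pg target {ratio * bias * BUDGET}')
        # e.g. vms -> 0.6886674454724981, matching the log line above;
        # the autoscaler then quantizes to a power of two (32 here).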
Feb 25 12:34:42 compute-0 sudo[319773]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:42 compute-0 ceph-mon[76335]: pgmap v1585: 305 pgs: 305 active+clean; 505 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 3.0 MiB/s wr, 276 op/s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b34b513-3634-4603-966a-493ae3f93ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.398 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.398 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.399 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0cf2281-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:42 compute-0 NetworkManager[49836]: <info>  [1772022882.4010] manager: (tapa0cf2281-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Feb 25 12:34:42 compute-0 kernel: tapa0cf2281-b0: entered promiscuous mode
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.405 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0cf2281-b0, col_values=(('external_ids', {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
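The three ovsdbapp commands above detach tapa0cf2281-b0 from br-ex (a no-op here), attach it to br-int, and set external_ids:iface-id so ovn-controller can bind the OVS interface to the OVN logical port for the metadata interface. The same transaction can be written directly against ovsdbapp; a sketch, with the connection string illustrative and the port/bridge/iface-id values copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapa0cf2281-b0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapa0cf2281-b0', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapa0cf2281-b0',
            ('external_ids',
             {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'})))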
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00841|binding|INFO|Releasing lport 134cce92-aa02-44fa-b97c-8b46159f1d29 from this chassis (sb_readonly=0)
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.409 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd344990-77c8-44ed-b794-7135c816a4aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.416 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:34:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:42.417 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'env', 'PROCESS_TAG=haproxy-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0cf2281-bf49-498f-8de5-70cdba33cd62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
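The haproxy instance spawned above listens on 169.254.169.254:80 inside the ovnmeta namespace and forwards to the metadata agent's unix socket, stamping X-OVN-Network-ID on each request (and X-Forwarded-For, via 'option forwardfor'). The agent side can be exercised against the socket directly; a sketch that assumes root on the compute node, where the request path is a common metadata URL (not taken from this log) and the headers mimic what haproxy adds for the instance at 10.100.0.8:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.putrequest('GET', '/openstack/latest/meta_data.json')
    # Headers haproxy would add per the config above (illustrative call,
    # not a supported interface).
    conn.putheader('X-OVN-Network-ID', 'a0cf2281-bf49-498f-8de5-70cdba33cd62')
    conn.putheader('X-Forwarded-For', '10.100.0.8')
    conn.endheaders()
    print(conn.getresponse().status)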
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:18:49 10.100.0.3
Feb 25 12:34:42 compute-0 ovn_controller[147040]: 2026-02-25T12:34:42Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:18:49 10.100.0.3
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:34:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:34:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:34:42 compute-0 sudo[319930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:34:42 compute-0 sudo[319930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:42 compute-0 sudo[319930]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.581 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022882.57837, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.581 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Started (Lifecycle Event)
Feb 25 12:34:42 compute-0 sudo[319956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:34:42 compute-0 sudo[319956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.724 244018 DEBUG nova.compute.manager [req-df3d0c59-7c4a-4108-a265-6e13b5ebaa94 req-9bdd211c-5afe-4a35-814e-edffd43adbd4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.724 244018 DEBUG oslo_concurrency.lockutils [req-df3d0c59-7c4a-4108-a265-6e13b5ebaa94 req-9bdd211c-5afe-4a35-814e-edffd43adbd4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.725 244018 DEBUG oslo_concurrency.lockutils [req-df3d0c59-7c4a-4108-a265-6e13b5ebaa94 req-9bdd211c-5afe-4a35-814e-edffd43adbd4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.725 244018 DEBUG oslo_concurrency.lockutils [req-df3d0c59-7c4a-4108-a265-6e13b5ebaa94 req-9bdd211c-5afe-4a35-814e-edffd43adbd4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.725 244018 DEBUG nova.compute.manager [req-df3d0c59-7c4a-4108-a265-6e13b5ebaa94 req-9bdd211c-5afe-4a35-814e-edffd43adbd4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Processing event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.726 244018 DEBUG nova.compute.manager [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.729 244018 DEBUG nova.virt.libvirt.driver [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.737 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance spawned successfully.
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.756 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.758 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.792 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022882.5785837, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.792 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Paused (Lifecycle Event)
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.817 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.821 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022882.7290814, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.821 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Resumed (Lifecycle Event)
Feb 25 12:34:42 compute-0 podman[320003]: 2026-02-25 12:34:42.830897964 +0000 UTC m=+0.100731448 container create 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.839 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.843 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:42 compute-0 podman[320003]: 2026-02-25 12:34:42.766301909 +0000 UTC m=+0.036135383 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:34:42 compute-0 nova_compute[244014]: 2026-02-25 12:34:42.865 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (spawning). Skip.
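In the two sync messages above the database still records power_state 4 while the hypervisor reports 1; because the task_state is still 'spawning', nova skips the sync rather than correcting the record mid-unshelve. The numeric codes are the nova.compute.power_state constants:

    # Mapping reproduced from nova.compute.power_state for reference.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[4], '(DB) vs', POWER_STATE[1], '(hypervisor)')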
Feb 25 12:34:42 compute-0 systemd[1]: Started libpod-conmon-8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4.scope.
Feb 25 12:34:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210403d49e93acea9e0e205810dacd74afebcd09870e76456568ef2a5afec89b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:42 compute-0 podman[320027]: 2026-02-25 12:34:42.911848932 +0000 UTC m=+0.043913212 container create af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:34:42 compute-0 podman[320003]: 2026-02-25 12:34:42.923274595 +0000 UTC m=+0.193108069 container init 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:42 compute-0 podman[320003]: 2026-02-25 12:34:42.927154985 +0000 UTC m=+0.196988439 container start 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:34:42 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [NOTICE]   (320047) : New worker (320051) forked
Feb 25 12:34:42 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [NOTICE]   (320047) : Loading success.
Feb 25 12:34:42 compute-0 systemd[1]: Started libpod-conmon-af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8.scope.
Feb 25 12:34:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:42 compute-0 podman[320027]: 2026-02-25 12:34:42.892419423 +0000 UTC m=+0.024483723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:42 compute-0 podman[320027]: 2026-02-25 12:34:42.996224087 +0000 UTC m=+0.128288387 container init af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:34:43 compute-0 podman[320027]: 2026-02-25 12:34:43.00200322 +0000 UTC m=+0.134067510 container start af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:34:43 compute-0 systemd[1]: libpod-af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8.scope: Deactivated successfully.
Feb 25 12:34:43 compute-0 zealous_poitras[320060]: 167 167
Feb 25 12:34:43 compute-0 conmon[320060]: conmon af27def7c5c1498f082c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8.scope/container/memory.events
Feb 25 12:34:43 compute-0 podman[320027]: 2026-02-25 12:34:43.006285741 +0000 UTC m=+0.138350021 container attach af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:34:43 compute-0 podman[320027]: 2026-02-25 12:34:43.009995946 +0000 UTC m=+0.142060226 container died af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3936f9e45caf81ed5d9eccaef5c5d9aaec8ee20d85bcb79beb7cebf6543477c3-merged.mount: Deactivated successfully.
Feb 25 12:34:43 compute-0 podman[320027]: 2026-02-25 12:34:43.058104765 +0000 UTC m=+0.190169035 container remove af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:34:43 compute-0 systemd[1]: libpod-conmon-af27def7c5c1498f082c468b66139ccab899790cdfbbc155d838622e3f20d3a8.scope: Deactivated successfully.
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.237487875 +0000 UTC m=+0.055338105 container create 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:34:43 compute-0 systemd[1]: Started libpod-conmon-273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5.scope.
Feb 25 12:34:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.208420483 +0000 UTC m=+0.026270763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.32082715 +0000 UTC m=+0.138677360 container init 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.330876624 +0000 UTC m=+0.148726814 container start 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.339161578 +0000 UTC m=+0.157011798 container attach 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:34:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:34:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Feb 25 12:34:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Feb 25 12:34:43 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Feb 25 12:34:43 compute-0 flamboyant_wiles[320099]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:34:43 compute-0 flamboyant_wiles[320099]: --> All data devices are unavailable
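ceph-volume reporting '0 physical, 3 LVM' data devices that are all unavailable usually means the logical volumes already carry prepared OSDs, so the batch run has nothing to create; consistent with that, cephadm follows up with 'ceph-volume lvm list' (the sudo at 12:34:43 below) to read back the existing OSD metadata. A sketch that runs the same report and parses it; the field names are the usual ceph-volume JSON output, so verify against your release:

    import json
    import subprocess

    # Same call as the cephadm run logged at 12:34:43; assumes
    # ceph-volume is on PATH (e.g. inside the ceph container).
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout

    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev.get('tags', {})
            print(osd_id, dev.get('lv_path'), tags.get('ceph.osd_fsid'))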
Feb 25 12:34:43 compute-0 systemd[1]: libpod-273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5.scope: Deactivated successfully.
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.785556034 +0000 UTC m=+0.603406224 container died 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-92995a9c34e6e21387a4b859beb0dd33e12b0753ddb40fd83bca4d4bbdf48bdd-merged.mount: Deactivated successfully.
Feb 25 12:34:43 compute-0 podman[320084]: 2026-02-25 12:34:43.830498995 +0000 UTC m=+0.648349195 container remove 273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:34:43 compute-0 systemd[1]: libpod-conmon-273be977e0281f8bfe243c2ae786c32d495b8e28491a82c843597b07a4ff28c5.scope: Deactivated successfully.
Feb 25 12:34:43 compute-0 sudo[319956]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:43 compute-0 nova_compute[244014]: 2026-02-25 12:34:43.890 244018 DEBUG nova.compute.manager [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:43 compute-0 sudo[320132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:34:43 compute-0 sudo[320132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:43 compute-0 sudo[320132]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:43 compute-0 sudo[320157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:34:43 compute-0 sudo[320157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:43 compute-0 nova_compute[244014]: 2026-02-25 12:34:43.986 244018 DEBUG oslo_concurrency.lockutils [None req-2f64a3e6-424d-40d5-9a74-e91d2b0e5da3 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
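The lockutils chatter around the unshelve shows Nova serializing all work on one instance behind a named lock; here it was held for 11.727 s while do_unshelve_instance ran. A sketch of the underlying oslo.concurrency pattern, with an illustrative body rather than Nova's real unshelve code:

    from oslo_concurrency import lockutils

    # The lock name mirrors the instance UUID in the log lines above.
    @lockutils.synchronized("19abddab-88d5-48b8-b98e-1dedccbb8b7f")
    def do_unshelve_instance():
        ...  # spawn the guest, plug its VIFs, flip the record back to ACTIVE

Any other thread entering a function guarded by the same lock name blocks until the holder returns, which is exactly the acquire/release bookkeeping the DEBUG lines trace.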
Feb 25 12:34:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 561 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.2 MiB/s wr, 309 op/s
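These recurring pgmap digests from the mgr are regular enough to scrape when trending cluster load from the journal. A parsing sketch over the exact format above (the group names are ours):

    import re

    PGMAP = re.compile(r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
                       r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
                       r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail")

    line = ("pgmap v1587: 305 pgs: 305 active+clean; 561 MiB data, "
            "962 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.2 MiB/s wr, 309 op/s")
    m = PGMAP.search(line)
    assert m and m.group("used") == "962 MiB" and m.group("total") == "60 GiB"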
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.275628075 +0000 UTC m=+0.052760842 container create 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 12:34:44 compute-0 systemd[1]: Started libpod-conmon-9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7.scope.
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.250935207 +0000 UTC m=+0.028067974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.366625977 +0000 UTC m=+0.143758804 container init 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.372862253 +0000 UTC m=+0.149995020 container start 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.3759333 +0000 UTC m=+0.153066107 container attach 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:44 compute-0 sad_jennings[320210]: 167 167
Feb 25 12:34:44 compute-0 systemd[1]: libpod-9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7.scope: Deactivated successfully.
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.380074797 +0000 UTC m=+0.157207574 container died 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0646bb5b65c3da44fd0f3d28ebe2c4cfcf8d8311e5ca73b2c258b62ef282a7-merged.mount: Deactivated successfully.
Feb 25 12:34:44 compute-0 podman[320194]: 2026-02-25 12:34:44.415635272 +0000 UTC m=+0.192768049 container remove 9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_jennings, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:34:44 compute-0 systemd[1]: libpod-conmon-9b0475eff2192df2f989124a396fae30a0c8bffd9dfe43c6f826e7eaeb8a48f7.scope: Deactivated successfully.
Feb 25 12:34:44 compute-0 ceph-mon[76335]: osdmap e240: 3 total, 3 up, 3 in
Feb 25 12:34:44 compute-0 ceph-mon[76335]: pgmap v1587: 305 pgs: 305 active+clean; 561 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.2 MiB/s wr, 309 op/s
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.590605037 +0000 UTC m=+0.052532516 container create 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:34:44 compute-0 systemd[1]: Started libpod-conmon-1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2.scope.
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.565863438 +0000 UTC m=+0.027790947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.672 244018 DEBUG nova.compute.manager [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.674 244018 DEBUG oslo_concurrency.lockutils [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c116dbffbbeecc8e371774455a9d42d7b0f0cbaa02e8659bea5b27f42c9ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.674 244018 DEBUG oslo_concurrency.lockutils [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.674 244018 DEBUG oslo_concurrency.lockutils [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.675 244018 DEBUG nova.compute.manager [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c116dbffbbeecc8e371774455a9d42d7b0f0cbaa02e8659bea5b27f42c9ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c116dbffbbeecc8e371774455a9d42d7b0f0cbaa02e8659bea5b27f42c9ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.675 244018 WARNING nova.compute.manager [req-20cacb8e-1f0d-4b87-8ca1-1df86c441010 req-f56e9f5b-40a4-423d-8a92-8bdc468fc259 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state None.
Feb 25 12:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03c116dbffbbeecc8e371774455a9d42d7b0f0cbaa02e8659bea5b27f42c9ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.689287736 +0000 UTC m=+0.151215235 container init 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.696369446 +0000 UTC m=+0.158296935 container start 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.700712409 +0000 UTC m=+0.162639918 container attach 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:34:44 compute-0 nova_compute[244014]: 2026-02-25 12:34:44.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:44 compute-0 elated_mclaren[320251]: {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     "0": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "devices": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "/dev/loop3"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             ],
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_name": "ceph_lv0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_size": "21470642176",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "name": "ceph_lv0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "tags": {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_name": "ceph",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.crush_device_class": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.encrypted": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.objectstore": "bluestore",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_id": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.vdo": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.with_tpm": "0"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             },
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "vg_name": "ceph_vg0"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         }
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     ],
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     "1": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "devices": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "/dev/loop4"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             ],
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_name": "ceph_lv1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_size": "21470642176",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "name": "ceph_lv1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "tags": {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_name": "ceph",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.crush_device_class": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.encrypted": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.objectstore": "bluestore",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_id": "1",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.vdo": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.with_tpm": "0"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             },
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "vg_name": "ceph_vg1"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         }
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     ],
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     "2": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "devices": [
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "/dev/loop5"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             ],
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_name": "ceph_lv2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_size": "21470642176",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "name": "ceph_lv2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "tags": {
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.cluster_name": "ceph",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.crush_device_class": "",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.encrypted": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.objectstore": "bluestore",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osd_id": "2",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.vdo": "0",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:                 "ceph.with_tpm": "0"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             },
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "type": "block",
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:             "vg_name": "ceph_vg2"
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:         }
Feb 25 12:34:44 compute-0 elated_mclaren[320251]:     ]
Feb 25 12:34:44 compute-0 elated_mclaren[320251]: }
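The JSON block printed by elated_mclaren is the complete ceph-volume lvm list --format json report requested by the sudo line at 12:34:43: a map of OSD id to its backing logical volume, with the authoritative metadata duplicated into LV tags. A small reduction to the facts an operator usually wants (the key names on the left are ours):

    import json

    def osd_layout(lvm_list_output: str) -> dict[str, dict[str, str]]:
        layout = {}
        for osd_id, lvs in json.loads(lvm_list_output).items():
            lv = lvs[0]  # exactly one block LV per OSD in this deployment
            layout[osd_id] = {
                "device": lv["devices"][0],   # e.g. /dev/loop3
                "lv_path": lv["lv_path"],     # e.g. /dev/ceph_vg0/ceph_lv0
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
        return layout

For the report above this yields OSDs 0, 1 and 2 on /dev/loop3, /dev/loop4 and /dev/loop5 respectively, all bluestore and unencrypted per their ceph.objectstore and ceph.encrypted tags.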
Feb 25 12:34:44 compute-0 systemd[1]: libpod-1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2.scope: Deactivated successfully.
Feb 25 12:34:44 compute-0 podman[320234]: 2026-02-25 12:34:44.975265079 +0000 UTC m=+0.437192558 container died 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-03c116dbffbbeecc8e371774455a9d42d7b0f0cbaa02e8659bea5b27f42c9ca9-merged.mount: Deactivated successfully.
Feb 25 12:34:45 compute-0 podman[320234]: 2026-02-25 12:34:45.016999128 +0000 UTC m=+0.478926607 container remove 1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:34:45 compute-0 systemd[1]: libpod-conmon-1fc1d6916a412f367cf31f5769e501b4aebaa1aecd8223558241d7fa0fafe1b2.scope: Deactivated successfully.
Feb 25 12:34:45 compute-0 sudo[320157]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:45 compute-0 sudo[320270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:34:45 compute-0 sudo[320270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:45 compute-0 sudo[320270]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:45 compute-0 sudo[320295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:34:45 compute-0 sudo[320295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.439655263 +0000 UTC m=+0.053660347 container create c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:34:45 compute-0 systemd[1]: Started libpod-conmon-c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00.scope.
Feb 25 12:34:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.499404502 +0000 UTC m=+0.113409606 container init c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.503449686 +0000 UTC m=+0.117454770 container start c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.409236694 +0000 UTC m=+0.023241808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:45 compute-0 nostalgic_hugle[320347]: 167 167
Feb 25 12:34:45 compute-0 systemd[1]: libpod-c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00.scope: Deactivated successfully.
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.514666313 +0000 UTC m=+0.128671417 container attach c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.514956022 +0000 UTC m=+0.128961096 container died c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-278456337b3e46ca102708fb5574c43a6abc1a51ca993222e41305e4a01dd446-merged.mount: Deactivated successfully.
Feb 25 12:34:45 compute-0 podman[320331]: 2026-02-25 12:34:45.566867479 +0000 UTC m=+0.180872563 container remove c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:45 compute-0 systemd[1]: libpod-conmon-c1ef866d6f9fc4a3c81fae66d70253ebb2c76bcf7d52360e00c61864b09eec00.scope: Deactivated successfully.
Feb 25 12:34:45 compute-0 podman[320372]: 2026-02-25 12:34:45.710301513 +0000 UTC m=+0.037400689 container create 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:34:45 compute-0 systemd[1]: Started libpod-conmon-1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff.scope.
Feb 25 12:34:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bce519b99d54dd0e5c0df90ba76d8d7e1312b2f716869ec0acb96f7d4e1a82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bce519b99d54dd0e5c0df90ba76d8d7e1312b2f716869ec0acb96f7d4e1a82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bce519b99d54dd0e5c0df90ba76d8d7e1312b2f716869ec0acb96f7d4e1a82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bce519b99d54dd0e5c0df90ba76d8d7e1312b2f716869ec0acb96f7d4e1a82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:45 compute-0 podman[320372]: 2026-02-25 12:34:45.692751447 +0000 UTC m=+0.019850653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:34:45 compute-0 podman[320372]: 2026-02-25 12:34:45.803138896 +0000 UTC m=+0.130238092 container init 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:34:45 compute-0 podman[320372]: 2026-02-25 12:34:45.809417674 +0000 UTC m=+0.136516840 container start 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:34:45 compute-0 podman[320372]: 2026-02-25 12:34:45.815755033 +0000 UTC m=+0.142854249 container attach 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:34:45 compute-0 nova_compute[244014]: 2026-02-25 12:34:45.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 561 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.2 MiB/s wr, 309 op/s
Feb 25 12:34:46 compute-0 lvm[320467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:34:46 compute-0 lvm[320468]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:34:46 compute-0 lvm[320467]: VG ceph_vg0 finished
Feb 25 12:34:46 compute-0 lvm[320468]: VG ceph_vg1 finished
Feb 25 12:34:46 compute-0 lvm[320470]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:34:46 compute-0 lvm[320470]: VG ceph_vg2 finished
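The lvm[] messages are event-driven autoactivation: as each loop-device PV comes online, LVM declares the owning VG complete and finishes activating it. The same completeness check can be made by hand; a sketch assuming lvm2's JSON report format and its vg_missing_pv_count column:

    import json
    import subprocess

    def vg_is_complete(vg: str) -> bool:
        out = subprocess.run(
            ["vgs", "--reportformat", "json",
             "-o", "vg_name,vg_missing_pv_count", vg],
            check=True, capture_output=True, text=True).stdout
        row = json.loads(out)["report"][0]["vg"][0]
        return row["vg_missing_pv_count"] == "0"   # no PVs missing -> complete

    # Should hold for all three VGs named in the log:
    assert all(vg_is_complete(v) for v in ("ceph_vg0", "ceph_vg1", "ceph_vg2"))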
Feb 25 12:34:46 compute-0 busy_sinoussi[320389]: {}
Feb 25 12:34:46 compute-0 systemd[1]: libpod-1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff.scope: Deactivated successfully.
Feb 25 12:34:46 compute-0 conmon[320389]: conmon 1289876b7d6729439f27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff.scope/container/memory.events
Feb 25 12:34:46 compute-0 podman[320372]: 2026-02-25 12:34:46.603145707 +0000 UTC m=+0.930244893 container died 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-51bce519b99d54dd0e5c0df90ba76d8d7e1312b2f716869ec0acb96f7d4e1a82-merged.mount: Deactivated successfully.
Feb 25 12:34:46 compute-0 podman[320372]: 2026-02-25 12:34:46.659489488 +0000 UTC m=+0.986588664 container remove 1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_sinoussi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:34:46 compute-0 systemd[1]: libpod-conmon-1289876b7d6729439f27b30b7eebee74ef5ced9fa41e097cead4837229143cff.scope: Deactivated successfully.
Feb 25 12:34:46 compute-0 sudo[320295]: pam_unix(sudo:session): session closed for user root
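This second probe (ceph-volume raw list --format json, run in busy_sinoussi) printed {} at 12:34:46: every OSD on this host is LVM-backed, so the raw listing is empty and cephadm's device picture comes entirely from lvm list. A sketch of driving both probes the way the sudo lines record them; the wrapper is ours, and the --image argument from the log is dropped for brevity:

    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    def ceph_volume(*args: str) -> dict:
        # Mirrors the sudo COMMAND lines in this log: cephadm wraps
        # ceph-volume in a short-lived container and relays its JSON.
        cmd = ["sudo", "/bin/python3", CEPHADM, "--timeout", "895",
               "ceph-volume", "--fsid", FSID, "--", *args]
        return json.loads(subprocess.run(
            cmd, check=True, capture_output=True, text=True).stdout)

    lvm_osds = ceph_volume("lvm", "list", "--format", "json")  # three OSDs here
    raw_osds = ceph_volume("raw", "list", "--format", "json")  # {} on this host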
Feb 25 12:34:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:34:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:34:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:34:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
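Having refreshed the host's device inventory, the mgr persists it in the monitors' config-key store; the two config-key set commands above write the per-host cache. The blob can be read back with the stock CLI, assuming (the log does not show the value) that cephadm stores JSON there:

    import json
    import subprocess

    def cached_host_devices(host: str) -> dict:
        key = f"mgr/cephadm/host.{host}.devices.0"  # key taken from the log
        out = subprocess.run(["ceph", "config-key", "get", key],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)  # assumption: the stored value is JSON

    devices = cached_host_devices("compute-0")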
Feb 25 12:34:46 compute-0 sudo[320487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:34:46 compute-0 sudo[320487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:34:46 compute-0 sudo[320487]: pam_unix(sudo:session): session closed for user root
Feb 25 12:34:47 compute-0 ceph-mon[76335]: pgmap v1588: 305 pgs: 305 active+clean; 561 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.2 MiB/s wr, 309 op/s
Feb 25 12:34:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:34:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:34:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:34:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828970002' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:34:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:34:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828970002' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:34:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 519 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 9.8 MiB/s wr, 353 op/s
Feb 25 12:34:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1828970002' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:34:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1828970002' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:34:48 compute-0 nova_compute[244014]: 2026-02-25 12:34:48.514 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
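Nova's _clean_shutdown gives a guest a graceful window before resorting to a hard stop: it polls the power state and re-issues the shutdown request each interval, which is what the "Instance in state 1 after 10 seconds - resending shutdown" line records (state 1 is RUNNING in nova.compute.power_state). An illustrative retry loop, not Nova's actual code; the guest object and its two methods are stand-ins:

    import time

    RUNNING = 1  # power-state value seen in the log line above

    def clean_shutdown(guest, timeout: int = 60, retry_interval: int = 10) -> bool:
        guest.shutdown()
        for _ in range(timeout // retry_interval):
            time.sleep(retry_interval)
            if guest.get_power_state() != RUNNING:
                return True       # guest powered off within the window
            guest.shutdown()      # still RUNNING - resend, as logged
        return False              # caller may now fall back to a hard destroy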
Feb 25 12:34:49 compute-0 ceph-mon[76335]: pgmap v1589: 305 pgs: 305 active+clean; 519 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 9.8 MiB/s wr, 353 op/s
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.635 244018 DEBUG nova.objects.instance [None req-31258b4a-3dd3-4720-a338-61a69e2129ba 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_devices' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.655 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022889.655467, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.656 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Paused (Lifecycle Event)
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.673 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.675 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.694 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (suspending). Skip.
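The three lines above are the lifecycle handler reconciling state: the database still says RUNNING (1) while libvirt reports PAUSED (3), but task_state is "suspending", so the mismatch is an expected transition and the sync is skipped rather than corrected. The gate reduced to a sketch, with constants following nova.compute.power_state:

    RUNNING, PAUSED = 1, 3

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:     # e.g. "suspending": transition in flight
            return "skip"
        return "sync" if db_power_state != vm_power_state else "noop"

    assert sync_power_state(RUNNING, PAUSED, "suspending") == "skip"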
Feb 25 12:34:49 compute-0 nova_compute[244014]: 2026-02-25 12:34:49.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 519 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 9.8 MiB/s wr, 353 op/s
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.038 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:34:50 compute-0 ceph-mon[76335]: pgmap v1590: 305 pgs: 305 active+clean; 519 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 9.8 MiB/s wr, 353 op/s
Feb 25 12:34:50 compute-0 kernel: tapd6f9abb7-ac (unregistering): left promiscuous mode
Feb 25 12:34:50 compute-0 NetworkManager[49836]: <info>  [1772022890.4866] device (tapd6f9abb7-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:50 compute-0 ovn_controller[147040]: 2026-02-25T12:34:50Z|00842|binding|INFO|Releasing lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 from this chassis (sb_readonly=0)
Feb 25 12:34:50 compute-0 ovn_controller[147040]: 2026-02-25T12:34:50Z|00843|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 down in Southbound
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:50 compute-0 ovn_controller[147040]: 2026-02-25T12:34:50Z|00844|binding|INFO|Removing iface tapd6f9abb7-ac ovn-installed in OVS
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:50.501 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:50.502 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis
Feb 25 12:34:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:50.503 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0cf2281-bf49-498f-8de5-70cdba33cd62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:50.504 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15fa15a7-0baf-44c9-9bfd-4c369ce5cbb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:50.505 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace which is not needed anymore
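With the last VIF on network a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from this chassis, the metadata agent tears down its per-network namespace; the name is simply ovnmeta- plus the Neutron network UUID. A hedged stand-in for the agent's privsep-backed cleanup, using plain iproute2 instead:

    import subprocess

    def teardown_metadata_namespace(network_id: str) -> None:
        namespace = f"ovnmeta-{network_id}"  # naming scheme visible in the log
        subprocess.run(["ip", "netns", "delete", namespace], check=True)

    teardown_metadata_namespace("a0cf2281-bf49-498f-8de5-70cdba33cd62")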
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:50 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 25 12:34:50 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000004f.scope: Consumed 7.417s CPU time.
Feb 25 12:34:50 compute-0 systemd-machined[210048]: Machine qemu-107-instance-0000004f terminated.
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.696 244018 DEBUG nova.compute.manager [None req-31258b4a-3dd3-4720-a338-61a69e2129ba 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:50 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [NOTICE]   (320047) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:50 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [NOTICE]   (320047) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:50 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [WARNING]  (320047) : Exiting Master process...
Feb 25 12:34:50 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [ALERT]    (320047) : Current worker (320051) exited with code 143 (Terminated)
Feb 25 12:34:50 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[320036]: [WARNING]  (320047) : All workers exited. Exiting... (0)
Feb 25 12:34:50 compute-0 systemd[1]: libpod-8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4.scope: Deactivated successfully.
Feb 25 12:34:50 compute-0 podman[320539]: 2026-02-25 12:34:50.872518768 +0000 UTC m=+0.298418855 container died 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:50 compute-0 nova_compute[244014]: 2026-02-25 12:34:50.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 kernel: tap9064f9e6-dd (unregistering): left promiscuous mode
Feb 25 12:34:51 compute-0 NetworkManager[49836]: <info>  [1772022891.0425] device (tap9064f9e6-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 ovn_controller[147040]: 2026-02-25T12:34:51Z|00845|binding|INFO|Releasing lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 from this chassis (sb_readonly=0)
Feb 25 12:34:51 compute-0 ovn_controller[147040]: 2026-02-25T12:34:51Z|00846|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 down in Southbound
Feb 25 12:34:51 compute-0 ovn_controller[147040]: 2026-02-25T12:34:51Z|00847|binding|INFO|Removing iface tap9064f9e6-dd ovn-installed in OVS
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.058 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:18:49 10.100.0.3'], port_security=['fa:16:3e:46:18:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.058 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Deactivated successfully.
Feb 25 12:34:51 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Consumed 11.840s CPU time.
Feb 25 12:34:51 compute-0 systemd-machined[210048]: Machine qemu-105-instance-00000055 terminated.
Feb 25 12:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-210403d49e93acea9e0e205810dacd74afebcd09870e76456568ef2a5afec89b-merged.mount: Deactivated successfully.
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.174 244018 DEBUG nova.compute.manager [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.174 244018 DEBUG oslo_concurrency.lockutils [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.175 244018 DEBUG oslo_concurrency.lockutils [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.175 244018 DEBUG oslo_concurrency.lockutils [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.175 244018 DEBUG nova.compute.manager [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.175 244018 WARNING nova.compute.manager [req-3d851e9c-80fd-4ce2-93fe-b5a0f9118247 req-a2080b2b-3c12-49ba-9054-68050a599c33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state suspended and task_state None.
Feb 25 12:34:51 compute-0 podman[320539]: 2026-02-25 12:34:51.231408751 +0000 UTC m=+0.657308838 container cleanup 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:34:51 compute-0 podman[320584]: 2026-02-25 12:34:51.339150256 +0000 UTC m=+0.086491476 container remove 8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5d0737-aba7-4dee-a711-7ae146f628e4]: (4, ('Wed Feb 25 12:34:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4)\n8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4\nWed Feb 25 12:34:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4)\n8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.346 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0784381-b8ee-4833-b3f2-f5218580093b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.347 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 kernel: tapa0cf2281-b0: left promiscuous mode
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d67c2c44-29e5-4f2f-8829-908641fe9c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1741bfb-e98f-4a76-9fb9-98693fc27861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d67e73-6ee3-483f-910b-a55f88df27fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.387 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1dad183f-3d37-4b42-a859-d0a602be374d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485161, 'reachable_time': 17706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320618, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.389 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.389 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[75169b42-2562-4b82-8d4b-3f384cb9d111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.390 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 12:34:51 compute-0 systemd[1]: run-netns-ovnmeta\x2da0cf2281\x2dbf49\x2d498f\x2d8de5\x2d70cdba33cd62.mount: Deactivated successfully.
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.391 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.392 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf8af86-6c5d-442b-bbc6-a1f29a292f59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.393 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
Feb 25 12:34:51 compute-0 systemd[1]: libpod-conmon-8ec1fcd898a600c9087a72e51c33c40beac5a3f243d1267132b1d666aee35ad4.scope: Deactivated successfully.
Feb 25 12:34:51 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [NOTICE]   (318923) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:51 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [NOTICE]   (318923) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:51 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [WARNING]  (318923) : Exiting Master process...
Feb 25 12:34:51 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [ALERT]    (318923) : Current worker (318925) exited with code 143 (Terminated)
Feb 25 12:34:51 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[318919]: [WARNING]  (318923) : All workers exited. Exiting... (0)
Feb 25 12:34:51 compute-0 systemd[1]: libpod-c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86.scope: Deactivated successfully.
Feb 25 12:34:51 compute-0 podman[320636]: 2026-02-25 12:34:51.502235765 +0000 UTC m=+0.042085711 container died c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.529 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance shutdown successfully after 13 seconds.
Feb 25 12:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.533 244018 INFO nova.virt.libvirt.driver [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance destroyed successfully.
Feb 25 12:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b22cfb4da164b32e6ac6fc2c87415aed62df482066031284028ba2a091612c-merged.mount: Deactivated successfully.
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.539 244018 INFO nova.virt.libvirt.driver [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance destroyed successfully.
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.540 244018 DEBUG nova.virt.libvirt.vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:37Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.540 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.541 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.542 244018 DEBUG os_vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.544 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9064f9e6-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 podman[320636]: 2026-02-25 12:34:51.550028556 +0000 UTC m=+0.089878502 container cleanup c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.550 244018 INFO os_vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd')
Feb 25 12:34:51 compute-0 systemd[1]: libpod-conmon-c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86.scope: Deactivated successfully.
Feb 25 12:34:51 compute-0 podman[320672]: 2026-02-25 12:34:51.625090417 +0000 UTC m=+0.059285076 container remove c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c92c59c-ff69-4ef2-a58b-1c4d2ad94887]: (4, ('Wed Feb 25 12:34:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86)\nc926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86\nWed Feb 25 12:34:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (c926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86)\nc926b0b80a04f142d44a378e60cc1ad8c1ca53b24a577810a55d85be871e5a86\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.631 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a48ec1a-06bd-4ea9-b4de-9f2feed6b9bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.632 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9ad2b9-6673-4436-b35a-6ae64e5d91f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.655 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78901142-ea81-49c5-b411-584261e1fefd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.656 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d4fdd1-bb87-48fb-a3e9-e574f54a713d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.671 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16c86581-68a2-4637-826e-f2dc012b9af6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483385, 'reachable_time': 40199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320704, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.674 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:51.674 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[54989221-fb73-4b78-afb8-c2a4c7f4802d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.895 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deleting instance files /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_del
Feb 25 12:34:51 compute-0 nova_compute[244014]: 2026-02-25 12:34:51.896 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deletion of /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_del complete
Feb 25 12:34:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 494 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.8 MiB/s wr, 262 op/s
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.045 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.046 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating image(s)
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.064 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.083 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.121 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.125 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.184 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.185 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.186 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.186 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.204 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.206 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Feb 25 12:34:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Feb 25 12:34:52 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.457 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.509 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] resizing rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.579 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.580 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Ensure instance console log exists: /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.580 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.581 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.581 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.583 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start _get_guest_xml network_info=[{"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.587 244018 WARNING nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.598 244018 DEBUG nova.virt.libvirt.host [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.600 244018 DEBUG nova.virt.libvirt.host [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.603 244018 DEBUG nova.virt.libvirt.host [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.605 244018 DEBUG nova.virt.libvirt.host [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.605 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.605 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.606 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.606 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.606 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.606 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.606 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.607 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.607 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.607 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.607 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.607 244018 DEBUG nova.virt.hardware [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.608 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:52 compute-0 nova_compute[244014]: 2026-02-25 12:34:52.624 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:53 compute-0 ceph-mon[76335]: pgmap v1591: 305 pgs: 305 active+clean; 494 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.8 MiB/s wr, 262 op/s
Feb 25 12:34:53 compute-0 ceph-mon[76335]: osdmap e241: 3 total, 3 up, 3 in
Feb 25 12:34:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042486196' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.157 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.180 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.185 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:53 compute-0 kernel: tap26c6cf46-a5 (unregistering): left promiscuous mode
Feb 25 12:34:53 compute-0 NetworkManager[49836]: <info>  [1772022893.2982] device (tap26c6cf46-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:53 compute-0 ovn_controller[147040]: 2026-02-25T12:34:53Z|00848|binding|INFO|Releasing lport 26c6cf46-a52b-4476-9112-4047f420e492 from this chassis (sb_readonly=0)
Feb 25 12:34:53 compute-0 ovn_controller[147040]: 2026-02-25T12:34:53Z|00849|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 down in Southbound
Feb 25 12:34:53 compute-0 ovn_controller[147040]: 2026-02-25T12:34:53Z|00850|binding|INFO|Removing iface tap26c6cf46-a5 ovn-installed in OVS
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:53.311 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:53.312 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis
Feb 25 12:34:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:53.313 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:34:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:53.314 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[995e19f6-435a-40b9-ae87-624e3b6026f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:53 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Deactivated successfully.
Feb 25 12:34:53 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Consumed 11.634s CPU time.
Feb 25 12:34:53 compute-0 systemd-machined[210048]: Machine qemu-106-instance-00000056 terminated.
Feb 25 12:34:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595538986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.750 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.751 244018 DEBUG nova.virt.libvirt.vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:51Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.751 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.752 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.754 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <uuid>ef34c4d4-57e0-4af1-af7a-b8ef35d09862</uuid>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <name>instance-00000055</name>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1237181777</nova:name>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:52</nova:creationTime>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <nova:port uuid="9064f9e6-dd74-4a19-9efb-6abc4c4e27c5">
Feb 25 12:34:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="serial">ef34c4d4-57e0-4af1-af7a-b8ef35d09862</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="uuid">ef34c4d4-57e0-4af1-af7a-b8ef35d09862</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk">
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config">
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:46:18:49"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <target dev="tap9064f9e6-dd"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/console.log" append="off"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:53 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:53 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:53 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:53 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:53 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.755 244018 DEBUG nova.compute.manager [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Preparing to wait for external event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.755 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.755 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.755 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.756 244018 DEBUG nova.virt.libvirt.vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:51Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.756 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.756 244018 DEBUG nova.network.os_vif_util [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.757 244018 DEBUG os_vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.757 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.758 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.760 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9064f9e6-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.760 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9064f9e6-dd, col_values=(('external_ids', {'iface-id': '9064f9e6-dd74-4a19-9efb-6abc4c4e27c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:18:49', 'vm-uuid': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 NetworkManager[49836]: <info>  [1772022893.7623] manager: (tap9064f9e6-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.768 244018 INFO os_vif [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd')
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.818 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.818 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.818 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:46:18:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.819 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Using config drive
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.836 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.855 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:53 compute-0 nova_compute[244014]: 2026-02-25 12:34:53.879 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'keypairs' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 471 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 232 op/s
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.159 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance shutdown successfully after 14 seconds.
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.166 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.167 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.167 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.168 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.168 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.168 244018 WARNING nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state suspended and task_state None.
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.169 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.169 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.169 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.170 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.170 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No event matching network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 in dict_keys([('network-vif-plugged', '9064f9e6-dd74-4a19-9efb-6abc4c4e27c5')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.170 244018 WARNING nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received unexpected event network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with vm_state active and task_state rebuild_spawning.
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.171 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.171 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.171 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.172 244018 DEBUG oslo_concurrency.lockutils [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.172 244018 DEBUG nova.compute.manager [req-1c3edc90-615a-4b16-9e1d-718884beea39 req-86058b66-f0a2-4f58-b7b9-e6ca3c184298 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Processing event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.176 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance destroyed successfully.
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.176 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4042486196' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3595538986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.213 244018 INFO nova.compute.manager [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Resuming
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.216 244018 DEBUG nova.objects.instance [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'flavor' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.221 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Attempting rescue
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.222 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.227 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.228 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Creating image(s)
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.256 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.260 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.313 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.343 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.347 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.380 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Creating config drive at /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.383 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6mu_fe6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.413 244018 DEBUG oslo_concurrency.lockutils [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.413 244018 DEBUG oslo_concurrency.lockutils [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquired lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.414 244018 DEBUG nova.network.neutron [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.431 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
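[annotation] The qemu-img probe above is wrapped in oslo_concurrency.prlimit so that parsing a malformed base image cannot exhaust memory or CPU. A minimal sketch of the same pattern, reusing the image path and the --as/--cpu caps from the logged command (the path is deployment-specific and may not exist on another host):

    # Sketch: run qemu-img info under resource limits, as the logged command does.
    import json
    import subprocess

    IMAGE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", IMAGE, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info.get("format"), info.get("virtual-size"))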
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.431 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.432 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.432 244018 DEBUG oslo_concurrency.lockutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
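[annotation] The acquire/release pair above is oslo.concurrency's lockutils serializing image-cache fetches per base-image hash; here the lock is held for 0.000s, apparently because the base file is already cached. A hedged sketch of the same idiom (the lock path and fetch body are illustrative placeholders, not nova's code):

    # Sketch: one external file lock per base-image hash, so concurrent boots
    # fetch the image only once; a second caller blocks until the first is done.
    from oslo_concurrency import lockutils

    BASE_HASH = "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    @lockutils.synchronized(BASE_HASH, external=True, lock_path="/var/lib/nova/locks")
    def fetch_image():
        # Download/convert the base image here, under the file lock.
        pass

    fetch_image()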
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.452 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.457 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.516 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6mu_fe6y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
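[annotation] The config drive is just an ISO 9660 image with volume label config-2, built from a temporary staging directory. A sketch mirroring the logged mkisofs invocation (output path and staging directory are the instance-specific values from the log):

    # Sketch: build a config-2 config drive the way the mkisofs call above does.
    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmp6mu_fe6y"],  # staging dir holding openstack/latest/... metadata files
        check=True,
    )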
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.539 244018 DEBUG nova.storage.rbd_utils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.543 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.901 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
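[annotation] With the rbd import returning 0, the rescue disk now exists in the vms pool. One way to confirm it, reusing the same --id/--conf pair as the logged command:

    # Sketch: inspect the imported rescue image with the rbd CLI.
    import json
    import subprocess

    IMG = "e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue"

    out = subprocess.run(
        ["rbd", "info", "--pool", "vms", IMG,
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout)["size"])  # image size in bytes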
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.901 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.921 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.922 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start _get_guest_xml network_info=[{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:a5:a3:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.922 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.936 244018 WARNING nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.942 244018 DEBUG nova.virt.libvirt.host [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.943 244018 DEBUG nova.virt.libvirt.host [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.948 244018 DEBUG nova.virt.libvirt.host [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.949 244018 DEBUG nova.virt.libvirt.host [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.949 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.950 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.951 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.951 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.952 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.952 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.953 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.954 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.954 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.955 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.955 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.956 244018 DEBUG nova.virt.hardware [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
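[annotation] The topology walk above reduces to: with no flavor or image constraints (0 means "no preference"), enumerate every sockets x cores x threads factorization of the vCPU count, then sort against the preferred topology. A simplified re-implementation of that search (the same arithmetic, not nova's actual code):

    # Sketch: enumerate all sockets*cores*threads factorizations of vcpus.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the single topology in the log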
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.956 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:54 compute-0 nova_compute[244014]: 2026-02-25 12:34:54.976 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.014 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.015 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.014 244018 DEBUG oslo_concurrency.processutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config ef34c4d4-57e0-4af1-af7a-b8ef35d09862_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.015 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.015 244018 INFO nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deleting local config drive /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862/disk.config because it was imported into RBD.
Feb 25 12:34:55 compute-0 systemd-udevd[320952]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.0678] manager: (tap9064f9e6-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Feb 25 12:34:55 compute-0 kernel: tap9064f9e6-dd: entered promiscuous mode
Feb 25 12:34:55 compute-0 ovn_controller[147040]: 2026-02-25T12:34:55Z|00851|binding|INFO|Claiming lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for this chassis.
Feb 25 12:34:55 compute-0 ovn_controller[147040]: 2026-02-25T12:34:55Z|00852|binding|INFO|9064f9e6-dd74-4a19-9efb-6abc4c4e27c5: Claiming fa:16:3e:46:18:49 10.100.0.3
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.0841] device (tap9064f9e6-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.0852] device (tap9064f9e6-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.088 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:18:49 10.100.0.3'], port_security=['fa:16:3e:46:18:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '5', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.090 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.093 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:55 compute-0 ovn_controller[147040]: 2026-02-25T12:34:55Z|00853|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 ovn-installed in OVS
Feb 25 12:34:55 compute-0 ovn_controller[147040]: 2026-02-25T12:34:55Z|00854|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 up in Southbound
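[annotation] Once ovn-controller has claimed the lport and marked it up in the Southbound DB, the binding can be inspected from the chassis. A hedged example (assumes ovn-sbctl can reach the local SB database; some deployments need an explicit --db= argument):

    # Sketch: read the Port_Binding row for the lport claimed above.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)  # row includes chassis, mac, tunnel_key and up state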
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.108 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1ffb80-7a85-445e-adb6-85c6cf8df94c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.110 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:34:55 compute-0 systemd-machined[210048]: New machine qemu-108-instance-00000055.
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.116 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.116 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ce8654-ceb0-4352-a743-022575e45784]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.118 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee748e2f-6d1c-4109-8b32-030b85547435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000055.
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.133 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e07670df-4b1a-45e6-af53-770adc855762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[574c4a2f-7798-494a-b170-410254914f88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.190 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbd0536-82ae-4031-bb7a-e8c673efa770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.196 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3647cb-76eb-4069-ba56-23a9b6ca6ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.1981] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/360)
Feb 25 12:34:55 compute-0 ceph-mon[76335]: pgmap v1593: 305 pgs: 305 active+clean; 471 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 232 op/s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.224 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[57d2d2d3-7fe9-4f01-8f41-77d45202d0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.227 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5b48d146-8e89-4c54-aa5b-7f7c5668a71d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.2481] device (tap9d1639de-d0): carrier: link connected
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.254 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[39c60db2-31d2-4140-99ab-1d264e14c46b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.268 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe44436-bafb-499d-acfe-0382b2f2cb3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486465, 'reachable_time': 20019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321190, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.282 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f31a8704-0f9e-446c-949b-abc69cddd93d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486465, 'tstamp': 486465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321191, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.297 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e1983e-4af0-44ef-9d74-382a985a5048]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486465, 'reachable_time': 20019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321192, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da877e8d-74d8-4662-b327-23d03480d050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.358 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a81555e9-93ee-47a7-993d-1c1edbee6f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.359 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.360 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.360 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:55 compute-0 NetworkManager[49836]: <info>  [1772022895.3624] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.368 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
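[annotation] The three ovsdbapp transactions above (idempotent delete from br-ex, add to br-int, set the iface-id) collapse into a single CLI transaction; a sketch of the equivalent ovs-vsctl call, assuming ovs-vsctl can reach the local switch:

    # Sketch: the agent's del-port/add-port/set sequence as one ovs-vsctl call.
    import subprocess

    PORT = "tap9d1639de-d0"
    IFACE_ID = "2a0bb56b-974f-4df3-ab65-5a5521eee6ab"

    subprocess.run(
        ["ovs-vsctl",
         "--if-exists", "del-port", "br-ex", PORT,
         "--", "--may-exist", "add-port", "br-int", PORT,
         "--", "set", "Interface", PORT, f"external_ids:iface-id={IFACE_ID}"],
        check=True,
    )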
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 ovn_controller[147040]: 2026-02-25T12:34:55Z|00855|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.385 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.388 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d27664bf-1f0c-4cfe-8e51-28733595fa31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.389 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:55.390 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
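[annotation] With haproxy bound to 169.254.169.254:80 inside the ovnmeta namespace (per the generated config above), the proxy can be smoke-tested from the host. A hedged check, run as root; the namespace name is derived from the datapath UUID in the log, and a request from the host may be rejected since the proxy keys off the instance's source address:

    # Sketch: probe the metadata proxy from inside the ovnmeta namespace.
    import subprocess

    NS = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    subprocess.run(
        ["ip", "netns", "exec", NS,
         "curl", "-s", "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=False,
    )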
Feb 25 12:34:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719107378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.579 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
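[annotation] nova runs ceph mon dump to learn the monitor addresses it places in the RBD disk definition handed to libvirt. A sketch of parsing that JSON (key names per the standard mon dump output):

    # Sketch: extract monitor addresses from `ceph mon dump --format=json`.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    mons = json.loads(out.stdout)["mons"]
    print([m.get("public_addr") for m in mons])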
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.580 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.632 244018 DEBUG nova.compute.manager [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.633 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for ef34c4d4-57e0-4af1-af7a-b8ef35d09862 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.634 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022895.6312387, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.635 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Started (Lifecycle Event)
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.637 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.641 244018 INFO nova.virt.libvirt.driver [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance spawned successfully.
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.641 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.687 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.692 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.692 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.693 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.693 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.694 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.694 244018 DEBUG nova.virt.libvirt.driver [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.699 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:55 compute-0 sshd-session[321181]: Invalid user lighthouse from 80.94.92.186 port 59980
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.729 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.729 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022895.6320057, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.729 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Paused (Lifecycle Event)
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.761 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.765 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022895.6387448, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.765 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Resumed (Lifecycle Event)
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.777 244018 DEBUG nova.compute.manager [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:55 compute-0 podman[321277]: 2026-02-25 12:34:55.778295426 +0000 UTC m=+0.115669410 container create 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:34:55 compute-0 podman[321277]: 2026-02-25 12:34:55.684465314 +0000 UTC m=+0.021839298 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.787 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.799 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.847 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:34:55 compute-0 systemd[1]: Started libpod-conmon-770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7.scope.
Feb 25 12:34:55 compute-0 sshd-session[321181]: Connection closed by invalid user lighthouse 80.94.92.186 port 59980 [preauth]
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.869 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.870 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.870 244018 DEBUG nova.objects.instance [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:34:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e630577dbbc390890fba0963c51a270419e40ac7b2e84c354e8621f1152a742/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:55 compute-0 podman[321277]: 2026-02-25 12:34:55.919045984 +0000 UTC m=+0.256419998 container init 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 12:34:55 compute-0 podman[321277]: 2026-02-25 12:34:55.923089208 +0000 UTC m=+0.260463202 container start 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:34:55 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [NOTICE]   (321317) : New worker (321319) forked
Feb 25 12:34:55 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [NOTICE]   (321317) : Loading success.
Feb 25 12:34:55 compute-0 nova_compute[244014]: 2026-02-25 12:34:55.975 244018 DEBUG oslo_concurrency.lockutils [None req-272f337f-d9ee-40eb-bb8f-162d88059aa1 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 471 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 232 op/s
Feb 25 12:34:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243203152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.103 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.117 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.150 244018 DEBUG nova.network.neutron [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [{"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.175 244018 DEBUG oslo_concurrency.lockutils [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Releasing lock "refresh_cache-19abddab-88d5-48b8-b98e-1dedccbb8b7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.184 244018 DEBUG nova.virt.libvirt.vif [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:50Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.186 244018 DEBUG nova.network.os_vif_util [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.188 244018 DEBUG nova.network.os_vif_util [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.189 244018 DEBUG os_vif [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.192 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.194 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.199 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6f9abb7-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.199 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6f9abb7-ac, col_values=(('external_ids', {'iface-id': 'd6f9abb7-ac51-44f8-88ae-b5a8ef3b6536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:66:6e', 'vm-uuid': '19abddab-88d5-48b8-b98e-1dedccbb8b7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.201 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.202 244018 INFO os_vif [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.285 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.286 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.286 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.286 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.287 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.287 244018 WARNING nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.287 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.288 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.288 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.288 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.289 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.289 244018 WARNING nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state active and task_state rescuing.
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.289 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.290 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.290 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.290 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.291 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No waiting events found dispatching network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.291 244018 WARNING nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received unexpected event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with vm_state active and task_state None.
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.291 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.291 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.292 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.292 244018 DEBUG oslo_concurrency.lockutils [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.292 244018 DEBUG nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No waiting events found dispatching network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.293 244018 WARNING nova.compute.manager [req-61c5b808-709b-40da-ab18-879c4e38c9ce req-8951a475-b715-454c-b451-5fa44b441d8f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received unexpected event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with vm_state active and task_state None.
Feb 25 12:34:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/719107378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:56 compute-0 ceph-mon[76335]: pgmap v1594: 305 pgs: 305 active+clean; 471 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 232 op/s
Feb 25 12:34:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3243203152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.300 244018 DEBUG nova.objects.instance [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'numa_topology' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.3492] manager: (tapd6f9abb7-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Feb 25 12:34:56 compute-0 kernel: tapd6f9abb7-ac: entered promiscuous mode
Feb 25 12:34:56 compute-0 systemd-udevd[321177]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:56 compute-0 ovn_controller[147040]: 2026-02-25T12:34:56Z|00856|binding|INFO|Claiming lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for this chassis.
Feb 25 12:34:56 compute-0 ovn_controller[147040]: 2026-02-25T12:34:56Z|00857|binding|INFO|d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536: Claiming fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.360 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '9', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 bound to our chassis
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.362 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.3652] device (tapd6f9abb7-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.3659] device (tapd6f9abb7-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:56 compute-0 ovn_controller[147040]: 2026-02-25T12:34:56Z|00858|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 ovn-installed in OVS
Feb 25 12:34:56 compute-0 ovn_controller[147040]: 2026-02-25T12:34:56Z|00859|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 up in Southbound
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.374 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ae994b-ac1a-469d-bcbc-9a752407011c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.375 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0cf2281-b1 in ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.376 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0cf2281-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e79466f1-0259-4fbb-84ff-b4e5932fad62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9346ad-6067-484d-b896-297acde8081e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 systemd-machined[210048]: New machine qemu-109-instance-0000004f.
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.389 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8393f9de-b065-44cc-8dab-77ac3c405b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-0000004f.
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[012ca04e-9a54-43ba-911a-eef51a307237]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.443 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f13b40cc-95a0-4611-bb0c-0516ec2f4183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.4502] manager: (tapa0cf2281-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.453 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6b3861-2b5e-488c-bf5e-894852b8d7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.475 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e49c905a-96c8-4197-9e25-c9e5ee793840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.478 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf64b0c-9bc3-48e3-94b7-ea63dfa23f00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.4970] device (tapa0cf2281-b0): carrier: link connected
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.502 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5770c609-7b2b-4808-b7cd-11075e22e6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f6196ecb-e772-4675-adae-3c107d8b1bde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486590, 'reachable_time': 24888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321386, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.527 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0198fd7-673e-4995-a657-20093577d03d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:1087'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486590, 'tstamp': 486590}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321387, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[404230a2-b42b-45b2-811a-0d2005d6dafe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0cf2281-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:10:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486590, 'reachable_time': 24888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321388, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.559 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16d99d45-6ba2-404a-b13e-fcbea53e6b49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.607 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a36f23c-0c51-4049-8440-df28fe1de0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.609 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.609 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.610 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0cf2281-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 NetworkManager[49836]: <info>  [1772022896.6120] manager: (tapa0cf2281-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Feb 25 12:34:56 compute-0 kernel: tapa0cf2281-b0: entered promiscuous mode
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.615 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0cf2281-b0, col_values=(('external_ids', {'iface-id': '134cce92-aa02-44fa-b97c-8b46159f1d29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_controller[147040]: 2026-02-25T12:34:56Z|00860|binding|INFO|Releasing lport 134cce92-aa02-44fa-b97c-8b46159f1d29 from this chassis (sb_readonly=0)
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.627 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76eeb70b-3aa2-45fe-aa8e-b31a713a210e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.628 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a0cf2281-bf49-498f-8de5-70cdba33cd62.pid.haproxy
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a0cf2281-bf49-498f-8de5-70cdba33cd62
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:34:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:56.630 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'env', 'PROCESS_TAG=haproxy-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0cf2281-bf49-498f-8de5-70cdba33cd62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:34:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:34:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271797764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.745 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.746 244018 DEBUG nova.virt.libvirt.vif [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1397178537',display_name='tempest-ServerRescueTestJSON-server-1397178537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1397178537',id=86,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-j34kam6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:34:34Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=e3178d86-5c76-4393-9327-2aac2cb8d81d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:a5:a3:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.746 244018 DEBUG nova.network.os_vif_util [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1416447925-network", "vif_mac": "fa:16:3e:a5:a3:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.747 244018 DEBUG nova.network.os_vif_util [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.748 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.763 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <uuid>e3178d86-5c76-4393-9327-2aac2cb8d81d</uuid>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <name>instance-00000056</name>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerRescueTestJSON-server-1397178537</nova:name>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:34:54</nova:creationTime>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:user uuid="cb2c815ef3214a7b897f911b4f53a146">tempest-ServerRescueTestJSON-930018924-project-member</nova:user>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:project uuid="367d43ab207546c3900a8414f0713ef4">tempest-ServerRescueTestJSON-930018924</nova:project>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <nova:port uuid="26c6cf46-a52b-4476-9112-4047f420e492">
Feb 25 12:34:56 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <system>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="serial">e3178d86-5c76-4393-9327-2aac2cb8d81d</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="uuid">e3178d86-5c76-4393-9327-2aac2cb8d81d</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </system>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <os>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </os>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <features>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </features>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.rescue">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <target dev="vdb" bus="virtio"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config.rescue">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:34:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:a5:a3:ed"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <target dev="tap26c6cf46-a5"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/console.log" append="off"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <video>
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </video>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:34:56 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:34:56 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:34:56 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:34:56 compute-0 nova_compute[244014]: </domain>
Feb 25 12:34:56 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
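The domain XML dumped above is what nova hands to libvirt for the rescue boot: the .rescue image is attached first as vda so the guest boots from it, the original disk stays reachable as vdb, and the rebuilt config drive rides on a SATA cdrom. A short standard-library sketch for picking those disks back out of a saved copy of the XML (element and attribute names as they appear in the dump):

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        # Return (target device, RBD image name) for every disk whose source
        # protocol is rbd, in document order.
        root = ET.fromstring(domain_xml)
        disks = []
        for disk in root.findall('./devices/disk'):
            source = disk.find('source')
            target = disk.find('target')
            if source is not None and source.get('protocol') == 'rbd':
                disks.append((target.get('dev'), source.get('name')))
        return disks

    # For the dump above this yields the rescue image on vda, the original
    # disk on vdb, and the config drive image on sda.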
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.772 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance destroyed successfully.
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.831 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.831 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.832 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.832 244018 DEBUG nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] No VIF found with MAC fa:16:3e:a5:a3:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.832 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Using config drive
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.860 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.894 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:56 compute-0 nova_compute[244014]: 2026-02-25 12:34:56.934 244018 DEBUG nova.objects.instance [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'keypairs' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.022 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 19abddab-88d5-48b8-b98e-1dedccbb8b7f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.032 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022897.0221717, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.032 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Started (Lifecycle Event)
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.045 244018 DEBUG nova.compute.manager [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.045 244018 DEBUG nova.objects.instance [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'pci_devices' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:57 compute-0 podman[321488]: 2026-02-25 12:34:56.974657888 +0000 UTC m=+0.023671790 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:34:57 compute-0 podman[321488]: 2026-02-25 12:34:57.068204282 +0000 UTC m=+0.117218194 container create 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.068 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.075 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance running successfully.
Feb 25 12:34:57 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.077 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.080 244018 DEBUG nova.virt.libvirt.guest [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
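The virtqemud error and the sync_guest_time DEBUG line are the same failure seen from both sides: libvirt's setTime() needs a qemu-guest-agent channel that this domain does not define, so the call fails and nova logs it and moves on. A hedged sketch of the equivalent libvirt-python call, assuming a local system connection:

    import time

    import libvirt

    def sync_guest_time(domain_name):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.lookupByName(domain_name)
            # Requires a running qemu-guest-agent in the guest; without one
            # libvirt raises libvirtError ("QEMU guest agent is not configured").
            dom.setTime({'seconds': int(time.time()), 'nseconds': 0})
        except libvirt.libvirtError as exc:
            print('Failed to set time: %s' % exc)  # nova downgrades this to DEBUG
        finally:
            conn.close()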
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.080 244018 DEBUG nova.compute.manager [None req-9fb1dfb1-f2fc-4b32-9701-84e69055d7c6 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.109 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.110 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022897.0406423, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.110 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Resumed (Lifecycle Event)
Feb 25 12:34:57 compute-0 systemd[1]: Started libpod-conmon-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf.scope.
Feb 25 12:34:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.140 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1286b91a07c941dc6ee3488e98e5fa87f3ac6bdce355293186c3d25055400acc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.144 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
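The integers in the two "Synchronizing instance power state" messages decode via nova's power-state constants; a sketch of the lookup, assuming the STATE_MAP table as defined in nova.compute.power_state:

    from nova.compute import power_state

    # VM power_state 1 is RUNNING while the DB still records 4 (SHUTDOWN); the
    # sync logic sees the mismatch but defers because task_state is 'resuming'.
    print(power_state.STATE_MAP[1])  # 'running'  (VM power_state above)
    print(power_state.STATE_MAP[4])  # 'shutdown' (DB power_state above)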
Feb 25 12:34:57 compute-0 podman[321488]: 2026-02-25 12:34:57.174489346 +0000 UTC m=+0.223503218 container init 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:34:57 compute-0 podman[321488]: 2026-02-25 12:34:57.178854149 +0000 UTC m=+0.227868021 container start 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:34:57 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : New worker (321509) forked
Feb 25 12:34:57 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : Loading success.
Feb 25 12:34:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:34:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/271797764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.592 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Creating config drive at /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.600 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpru7xoden execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.736 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpru7xoden" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
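The mkisofs invocation above packs nova's staged metadata directory into an ISO9660 image with the volume label config-2, which is how the guest locates the config drive. A reduced sketch of the same call (flags copied from the logged CMD; the staging path is the temporary directory nova populated beforehand):

    import subprocess

    def build_config_drive(iso_path, staging_dir, publisher):
        # Joliet (-J) + Rock Ridge (-r) extensions, permissive file names,
        # volume label config-2.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', iso_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', publisher, '-quiet', '-J', '-r',
             '-V', 'config-2', staging_dir],
            check=True)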
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.761 244018 DEBUG nova.storage.rbd_utils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] rbd image e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:34:57 compute-0 nova_compute[244014]: 2026-02-25 12:34:57.764 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:34:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 532 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 160 op/s
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.324 244018 DEBUG oslo_concurrency.processutils [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue e3178d86-5c76-4393-9327-2aac2cb8d81d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.326 244018 INFO nova.virt.libvirt.driver [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deleting local config drive /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d/disk.config.rescue because it was imported into RBD.
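The two lines above are the import-then-delete pair: the local ISO goes into the vms pool as a format-2 RBD image, and once the import returns 0 the on-disk copy is removed. A sketch with the flags taken from the logged CMD:

    import os
    import subprocess

    def import_config_drive(local_path, image_name):
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms', local_path, image_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        # Matches "Deleting local config drive ... imported into RBD."
        os.unlink(local_path)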
Feb 25 12:34:58 compute-0 kernel: tap26c6cf46-a5: entered promiscuous mode
Feb 25 12:34:58 compute-0 NetworkManager[49836]: <info>  [1772022898.3759] manager: (tap26c6cf46-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Feb 25 12:34:58 compute-0 systemd-udevd[321464]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00861|binding|INFO|Claiming lport 26c6cf46-a52b-4476-9112-4047f420e492 for this chassis.
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00862|binding|INFO|26c6cf46-a52b-4476-9112-4047f420e492: Claiming fa:16:3e:a5:a3:ed 10.100.0.5
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00863|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 ovn-installed in OVS
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00864|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 up in Southbound
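The four ovn-controller lines are the normal claim sequence for a freshly plugged port: claim the lport for this chassis, bind its MAC/IP, mark the OVS interface ovn-installed, and set the port up in the Southbound DB. One hypothetical way to observe the ovn-installed marker from the host, using standard ovs-vsctl and the external_ids key ovn-controller writes:

    import subprocess

    def ovn_installed(ifname):
        # external_ids:ovn-installed="true" appears once binding completes.
        out = subprocess.run(
            ['ovs-vsctl', '--timeout=5', 'get', 'Interface', ifname,
             'external_ids'],
            capture_output=True, text=True, check=True).stdout
        return 'ovn-installed' in out

    # ovn_installed('tap26c6cf46-a5') -> True after the lines above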
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.384 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:58 compute-0 NetworkManager[49836]: <info>  [1772022898.3852] device (tap26c6cf46-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:34:58 compute-0 NetworkManager[49836]: <info>  [1772022898.3856] device (tap26c6cf46-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.391 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 bound to our chassis
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.396 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[599463f0-1a20-491a-88bf-5ca127590cc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:58 compute-0 systemd-machined[210048]: New machine qemu-110-instance-00000056.
Feb 25 12:34:58 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000056.
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.435 244018 DEBUG nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.435 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.437 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.437 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.438 244018 DEBUG nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.438 244018 WARNING nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state None.
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.438 244018 DEBUG nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.439 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.439 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.439 244018 DEBUG oslo_concurrency.lockutils [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.440 244018 DEBUG nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.440 244018 WARNING nova.compute.manager [req-e2e60003-23f7-4dfa-9c63-e05a391cfc19 req-704f14ac-dcf4-4887-b716-1c00f31d8176 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state None.
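The Acquiring/acquired/released triplets above are oslo.concurrency's lock around the per-instance event table: dispatch for "<uuid>-events" is serialized so a pop never races a concurrent clear. A minimal sketch of the same pattern (not nova's actual code), using the real lockutils context manager:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, events):
        # Same lock-name scheme as the log lines: "<instance uuid>-events".
        with lockutils.lock('%s-events' % instance_uuid):
            # nova looks up a waiting threading.Event here and signals it;
            # "No waiting events found" above is the miss path of this lookup.
            return events.pop(instance_uuid, None)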
Feb 25 12:34:58 compute-0 ceph-mon[76335]: pgmap v1595: 305 pgs: 305 active+clean; 532 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 160 op/s
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.588 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.588 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.589 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.589 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.589 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.591 244018 INFO nova.compute.manager [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Terminating instance
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.592 244018 DEBUG nova.compute.manager [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:34:58 compute-0 kernel: tap9064f9e6-dd (unregistering): left promiscuous mode
Feb 25 12:34:58 compute-0 NetworkManager[49836]: <info>  [1772022898.6797] device (tap9064f9e6-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00865|binding|INFO|Releasing lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 from this chassis (sb_readonly=0)
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00866|binding|INFO|Setting lport 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 down in Southbound
Feb 25 12:34:58 compute-0 ovn_controller[147040]: 2026-02-25T12:34:58Z|00867|binding|INFO|Removing iface tap9064f9e6-dd ovn-installed in OVS
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.712 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:18:49 10.100.0.3'], port_security=['fa:16:3e:46:18:49 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef34c4d4-57e0-4af1-af7a-b8ef35d09862', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '6', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.714 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.716 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[412fa409-a32a-4e97-ba4b-8642d7c534e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:58.717 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
Feb 25 12:34:58 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Deactivated successfully.
Feb 25 12:34:58 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Consumed 3.386s CPU time.
Feb 25 12:34:58 compute-0 systemd-machined[210048]: Machine qemu-108-instance-00000055 terminated.
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.816 244018 INFO nova.virt.libvirt.driver [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Instance destroyed successfully.
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.817 244018 DEBUG nova.objects.instance [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid ef34c4d4-57e0-4af1-af7a-b8ef35d09862 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.834 244018 DEBUG nova.virt.libvirt.vif [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:34:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237181777',display_name='tempest-ServerDiskConfigTestJSON-server-1237181777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237181777',id=85,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-74vbp2dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:55Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=ef34c4d4-57e0-4af1-af7a-b8ef35d09862,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.834 244018 DEBUG nova.network.os_vif_util [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "address": "fa:16:3e:46:18:49", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9064f9e6-dd", "ovs_interfaceid": "9064f9e6-dd74-4a19-9efb-6abc4c4e27c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.835 244018 DEBUG nova.network.os_vif_util [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.835 244018 DEBUG os_vif [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.837 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9064f9e6-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:34:58 compute-0 nova_compute[244014]: 2026-02-25 12:34:58.843 244018 INFO os_vif [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:18:49,bridge_name='br-int',has_traffic_filtering=True,id=9064f9e6-dd74-4a19-9efb-6abc4c4e27c5,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9064f9e6-dd')
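The unplug sequence above runs through the os-vif library: nova converts its VIF dict to a VIFOpenVSwitch object (nova_to_osvif_vif), then os_vif's ovs plugin issues the DelPortCommand that removes tap9064f9e6-dd from br-int. A hedged sketch of the public os-vif entry points involved:

    import os_vif

    def unplug_port(vif, instance_info):
        # vif: an os_vif.objects.vif.VIFOpenVSwitch, as logged above;
        # instance_info: an os_vif.objects.instance_info.InstanceInfo.
        os_vif.initialize()                # load the registered plugins (ovs, ...)
        os_vif.unplug(vif, instance_info)  # delegates to the plugin named in vif.plugin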
Feb 25 12:34:58 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [NOTICE]   (321317) : haproxy version is 2.8.14-c23fe91
Feb 25 12:34:58 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [NOTICE]   (321317) : path to executable is /usr/sbin/haproxy
Feb 25 12:34:58 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [WARNING]  (321317) : Exiting Master process...
Feb 25 12:34:58 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [ALERT]    (321317) : Current worker (321319) exited with code 143 (Terminated)
Feb 25 12:34:58 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[321313]: [WARNING]  (321317) : All workers exited. Exiting... (0)
Feb 25 12:34:58 compute-0 systemd[1]: libpod-770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7.scope: Deactivated successfully.
Feb 25 12:34:58 compute-0 podman[321602]: 2026-02-25 12:34:58.882819807 +0000 UTC m=+0.089526632 container died 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.028 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for e3178d86-5c76-4393-9327-2aac2cb8d81d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.029 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022899.0283682, e3178d86-5c76-4393-9327-2aac2cb8d81d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.029 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Resumed (Lifecycle Event)
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.033 244018 DEBUG nova.compute.manager [None req-4fbe82d1-ac83-4f40-b76f-fec1c3a1386b cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7-userdata-shm.mount: Deactivated successfully.
Feb 25 12:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e630577dbbc390890fba0963c51a270419e40ac7b2e84c354e8621f1152a742-merged.mount: Deactivated successfully.
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.054 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.058 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.086 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.086 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022899.0313785, e3178d86-5c76-4393-9327-2aac2cb8d81d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.087 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Started (Lifecycle Event)
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.115 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.126 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:34:59 compute-0 podman[321602]: 2026-02-25 12:34:59.149786812 +0000 UTC m=+0.356493637 container cleanup 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:34:59 compute-0 systemd[1]: libpod-conmon-770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7.scope: Deactivated successfully.
Feb 25 12:34:59 compute-0 podman[321715]: 2026-02-25 12:34:59.368267457 +0000 UTC m=+0.198286036 container remove 770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[068f0d32-9834-41b2-9271-5eff16ecbe56]: (4, ('Wed Feb 25 12:34:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7)\n770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7\nWed Feb 25 12:34:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7)\n770f2f184cb8c2319328b96c56c3f8c0f15e271a635da763c504d32e527889e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.375 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d51791a-e91f-42eb-aba9-db24c367e1dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.376 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:34:59 compute-0 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.394 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42a17a71-41ce-45e7-8977-bf4f3507c86c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.407 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03279581-aa20-4cf7-a654-c1b840681fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.408 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[955e9efa-3070-4f79-ba4c-dd8d91b3b505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.422 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78279ce4-e6f3-4e32-800f-254fd89b0e2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486459, 'reachable_time': 16112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321728, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.436 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:34:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:34:59.436 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ad37393f-ee18-4a7e-bff6-ffe0c517f727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:34:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
Feb 25 12:34:59 compute-0 nova_compute[244014]: 2026-02-25 12:34:59.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 532 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 160 op/s
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.232 244018 INFO nova.virt.libvirt.driver [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deleting instance files /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_del
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.233 244018 INFO nova.virt.libvirt.driver [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deletion of /var/lib/nova/instances/ef34c4d4-57e0-4af1-af7a-b8ef35d09862_del complete
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.337 244018 INFO nova.compute.manager [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Took 1.74 seconds to destroy the instance on the hypervisor.
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.338 244018 DEBUG oslo.service.loopingcall [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.338 244018 DEBUG nova.compute.manager [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:35:00 compute-0 nova_compute[244014]: 2026-02-25 12:35:00.338 244018 DEBUG nova.network.neutron [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:35:01 compute-0 ceph-mon[76335]: pgmap v1596: 305 pgs: 305 active+clean; 532 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 160 op/s
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.230 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.230 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.231 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.231 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.231 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.231 244018 WARNING nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state None.
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.232 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.232 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.232 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.232 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.232 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.233 244018 WARNING nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state None.
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.233 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.233 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.233 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.234 244018 DEBUG oslo_concurrency.lockutils [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.234 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No waiting events found dispatching network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.234 244018 DEBUG nova.compute.manager [req-90479064-f7df-42e1-90c0-470a063e2060 req-381a8e74-2529-43bb-a5c4-f473a3d664b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-unplugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.444 244018 DEBUG nova.network.neutron [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.465 244018 INFO nova.compute.manager [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Took 1.13 seconds to deallocate network for instance.
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.522 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.522 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:01 compute-0 nova_compute[244014]: 2026-02-25 12:35:01.649 244018 DEBUG oslo_concurrency.processutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 494 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 251 op/s
Feb 25 12:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548803296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.355 244018 DEBUG oslo_concurrency.processutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.362 244018 DEBUG nova.compute.provider_tree [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.385 244018 DEBUG nova.scheduler.client.report [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.418 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:02 compute-0 ceph-mon[76335]: pgmap v1597: 305 pgs: 305 active+clean; 494 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 251 op/s
Feb 25 12:35:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1548803296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.458 244018 INFO nova.scheduler.client.report [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Deleted allocations for instance ef34c4d4-57e0-4af1-af7a-b8ef35d09862
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.529 244018 DEBUG oslo_concurrency.lockutils [None req-fc42aa00-4328-4876-bb2b-04bdcd7c0454 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.584 244018 INFO nova.compute.manager [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Unrescuing
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.585 244018 DEBUG oslo_concurrency.lockutils [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.585 244018 DEBUG oslo_concurrency.lockutils [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquired lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.585 244018 DEBUG nova.network.neutron [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.842 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.843 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.888 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.970 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.971 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.977 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:35:02 compute-0 nova_compute[244014]: 2026-02-25 12:35:02.977 244018 INFO nova.compute.claims [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.122 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.344 244018 DEBUG nova.compute.manager [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.345 244018 DEBUG oslo_concurrency.lockutils [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.345 244018 DEBUG oslo_concurrency.lockutils [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.345 244018 DEBUG oslo_concurrency.lockutils [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ef34c4d4-57e0-4af1-af7a-b8ef35d09862-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.346 244018 DEBUG nova.compute.manager [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] No waiting events found dispatching network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.346 244018 WARNING nova.compute.manager [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received unexpected event network-vif-plugged-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 for instance with vm_state deleted and task_state None.
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.346 244018 DEBUG nova.compute.manager [req-702b1f66-3d64-4bae-9947-f861dfb683b3 req-6b69ac81-fd45-4e8a-8364-4292dff8f394 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Received event network-vif-deleted-9064f9e6-dd74-4a19-9efb-6abc4c4e27c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:03 compute-0 ovn_controller[147040]: 2026-02-25T12:35:03Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:66:6e 10.100.0.8
Feb 25 12:35:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421150742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.638 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.643 244018 DEBUG nova.compute.provider_tree [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.662 244018 DEBUG nova.scheduler.client.report [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.695 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.696 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:35:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2421150742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.780 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.780 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.806 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.829 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.940 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.941 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.942 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Creating image(s)
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.966 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:03 compute-0 nova_compute[244014]: 2026-02-25 12:35:03.988 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.016 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.020 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 486 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 276 op/s
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.095 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.096 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.096 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.096 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.119 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.122 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77c38424-b0a2-4d31-975a-16f265ff93fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.324 244018 DEBUG nova.policy [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8636d59ca0d49698907e2edb5dc4967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e80b416cdd774e9483545c9e08abf805', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.328 244018 DEBUG nova.network.neutron [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [{"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.362 244018 DEBUG oslo_concurrency.lockutils [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Releasing lock "refresh_cache-e3178d86-5c76-4393-9327-2aac2cb8d81d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.363 244018 DEBUG nova.objects.instance [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'flavor' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:04 compute-0 nova_compute[244014]: 2026-02-25 12:35:04.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 kernel: tap26c6cf46-a5 (unregistering): left promiscuous mode
Feb 25 12:35:05 compute-0 NetworkManager[49836]: <info>  [1772022905.0067] device (tap26c6cf46-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00868|binding|INFO|Releasing lport 26c6cf46-a52b-4476-9112-4047f420e492 from this chassis (sb_readonly=0)
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00869|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 down in Southbound
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00870|binding|INFO|Removing iface tap26c6cf46-a5 ovn-installed in OVS
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.018 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.019 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.020 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50a0c40d-0b0c-43d5-aadf-2f5b376a0366]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:05 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000056.scope: Deactivated successfully.
Feb 25 12:35:05 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000056.scope: Consumed 5.988s CPU time.
Feb 25 12:35:05 compute-0 systemd-machined[210048]: Machine qemu-110-instance-00000056 terminated.
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.241 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance destroyed successfully.
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.242 244018 DEBUG nova.objects.instance [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.359 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Successfully created port: db6ecea4-c353-4b6f-860e-95c410e7ec39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:35:05 compute-0 ceph-mon[76335]: pgmap v1598: 305 pgs: 305 active+clean; 486 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 276 op/s
Feb 25 12:35:05 compute-0 kernel: tap26c6cf46-a5: entered promiscuous mode
Feb 25 12:35:05 compute-0 systemd-udevd[321870]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:35:05 compute-0 NetworkManager[49836]: <info>  [1772022905.6961] manager: (tap26c6cf46-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00871|binding|INFO|Claiming lport 26c6cf46-a52b-4476-9112-4047f420e492 for this chassis.
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00872|binding|INFO|26c6cf46-a52b-4476-9112-4047f420e492: Claiming fa:16:3e:a5:a3:ed 10.100.0.5
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00873|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 ovn-installed in OVS
Feb 25 12:35:05 compute-0 ovn_controller[147040]: 2026-02-25T12:35:05Z|00874|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 up in Southbound
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.704 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.706 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 bound to our chassis
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.707 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:35:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:05.707 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a47c5c2e-e4df-4cf7-a2a2-262133f49157]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:05 compute-0 NetworkManager[49836]: <info>  [1772022905.7095] device (tap26c6cf46-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:35:05 compute-0 NetworkManager[49836]: <info>  [1772022905.7103] device (tap26c6cf46-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:35:05 compute-0 systemd-machined[210048]: New machine qemu-111-instance-00000056.
Feb 25 12:35:05 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-00000056.
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.991 244018 DEBUG nova.compute.manager [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.992 244018 DEBUG oslo_concurrency.lockutils [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.992 244018 DEBUG oslo_concurrency.lockutils [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.993 244018 DEBUG oslo_concurrency.lockutils [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.993 244018 DEBUG nova.compute.manager [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:05 compute-0 nova_compute[244014]: 2026-02-25 12:35:05.993 244018 WARNING nova.compute.manager [req-58a84731-a989-467d-b193-c79898dd60f5 req-e316a78c-1d60-4be2-bc37-6c31cdd00053 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state unrescuing.
Feb 25 12:35:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 486 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Feb 25 12:35:06 compute-0 nova_compute[244014]: 2026-02-25 12:35:06.748 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Successfully updated port: db6ecea4-c353-4b6f-860e-95c410e7ec39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:35:06 compute-0 nova_compute[244014]: 2026-02-25 12:35:06.770 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:35:06 compute-0 nova_compute[244014]: 2026-02-25 12:35:06.770 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquired lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:35:06 compute-0 nova_compute[244014]: 2026-02-25 12:35:06.770 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:35:06 compute-0 nova_compute[244014]: 2026-02-25 12:35:06.959 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:35:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:07 compute-0 ceph-mon[76335]: pgmap v1599: 305 pgs: 305 active+clean; 486 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.022 244018 DEBUG nova.network.neutron [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Updating instance_info_cache with network_info: [{"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.039 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for e3178d86-5c76-4393-9327-2aac2cb8d81d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:35:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.0 MiB/s wr, 291 op/s
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.040 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022908.0394485, e3178d86-5c76-4393-9327-2aac2cb8d81d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.040 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Resumed (Lifecycle Event)
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.063 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.064 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Releasing lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.065 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance network_info: |[{"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.069 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.075 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.076 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.076 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.076 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.077 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.077 244018 WARNING nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state unrescuing.
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.077 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.077 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.078 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.078 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.078 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.078 244018 WARNING nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state unrescuing.
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.079 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.079 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.079 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.080 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.080 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.080 244018 WARNING nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state rescued and task_state unrescuing.
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.080 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-changed-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.081 244018 DEBUG nova.compute.manager [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Refreshing instance network info cache due to event network-changed-db6ecea4-c353-4b6f-860e-95c410e7ec39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.081 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.081 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.081 244018 DEBUG nova.network.neutron [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Refreshing network info cache for port db6ecea4-c353-4b6f-860e-95c410e7ec39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.088 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] During sync_power_state the instance has a pending task (unrescuing). Skip.
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.088 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022908.0427713, e3178d86-5c76-4393-9327-2aac2cb8d81d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.089 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Started (Lifecycle Event)
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.105 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.111 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.139 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] During sync_power_state the instance has a pending task (unrescuing). Skip.
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.340 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77c38424-b0a2-4d31-975a-16f265ff93fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.406 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] resizing rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:35:08 compute-0 nova_compute[244014]: 2026-02-25 12:35:08.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:09 compute-0 ceph-mon[76335]: pgmap v1600: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.0 MiB/s wr, 291 op/s
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.644 244018 DEBUG nova.objects.instance [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'migration_context' on Instance uuid 77c38424-b0a2-4d31-975a-16f265ff93fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.662 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.662 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Ensure instance console log exists: /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.663 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.663 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.664 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.668 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start _get_guest_xml network_info=[{"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.673 244018 WARNING nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.677 244018 DEBUG nova.virt.libvirt.host [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.678 244018 DEBUG nova.virt.libvirt.host [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.682 244018 DEBUG nova.virt.libvirt.host [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.683 244018 DEBUG nova.virt.libvirt.host [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.683 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.684 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.684 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.685 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.685 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.686 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.686 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.686 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.687 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.687 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.688 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.688 244018 DEBUG nova.virt.hardware [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.692 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.727 244018 DEBUG nova.network.neutron [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Updated VIF entry in instance network info cache for port db6ecea4-c353-4b6f-860e-95c410e7ec39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.728 244018 DEBUG nova.network.neutron [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Updating instance_info_cache with network_info: [{"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.746 244018 DEBUG oslo_concurrency.lockutils [req-5a8f715f-f0bd-4919-80f2-de8d92ad0986 req-be7b5062-6027-4ebe-a444-797fbfff2bcb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-77c38424-b0a2-4d31-975a-16f265ff93fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:35:09 compute-0 nova_compute[244014]: 2026-02-25 12:35:09.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.132 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.135 244018 INFO nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Terminating instance
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.135 244018 DEBUG nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:35:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:35:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218163997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:35:10 compute-0 rsyslogd[1020]: imjournal: 9423 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.255 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.287 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.293 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:10 compute-0 ceph-mon[76335]: pgmap v1601: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 12:35:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2218163997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:35:10 compute-0 kernel: tapd6f9abb7-ac (unregistering): left promiscuous mode
Feb 25 12:35:10 compute-0 NetworkManager[49836]: <info>  [1772022910.9639] device (tapd6f9abb7-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.966 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:10 compute-0 ovn_controller[147040]: 2026-02-25T12:35:10Z|00875|binding|INFO|Releasing lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 from this chassis (sb_readonly=0)
Feb 25 12:35:10 compute-0 ovn_controller[147040]: 2026-02-25T12:35:10Z|00876|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 down in Southbound
Feb 25 12:35:10 compute-0 ovn_controller[147040]: 2026-02-25T12:35:10Z|00877|binding|INFO|Removing iface tapd6f9abb7-ac ovn-installed in OVS
Feb 25 12:35:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.981 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.982 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis
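The PortBindingUpdatedEvent match above is ovsdbapp's row-event machinery at work: the metadata agent keeps a watcher on the Southbound Port_Binding table and reacts when a row's `up`/`chassis` columns change. A minimal sketch of such a watcher, assuming ovsdbapp's RowEvent base class; the handler body is illustrative, not Neutron's actual implementation:

```python
# Sketch of an ovsdbapp row-event watcher like the one matched in the log.
# The print body is an illustration; Neutron's agent tears down metadata
# resources here instead.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Match UPDATE events on the Southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # `old` holds only the columns that changed, mirroring the
        # old=Port_Binding(up=[True], chassis=[...]) fragment logged above.
        print('Port %s is now up=%s' % (row.logical_port, row.up))
```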
Feb 25 12:35:10 compute-0 nova_compute[244014]: 2026-02-25 12:35:10.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.983 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0cf2281-bf49-498f-8de5-70cdba33cd62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:35:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fbf84b-aee9-44de-831b-938c63a92aa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.986 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace which is not needed anymore
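Once no VIF on this chassis needs network a0cf2281, the agent deletes the per-network ovnmeta- namespace. A hedged sketch of that cleanup using pyroute2, which the agent drives through oslo.privsep (root is required; the namespace name comes from the log):

```python
# Minimal sketch: delete an unused OVN metadata namespace.
# Equivalent to `ip netns delete ovnmeta-<network-uuid>`; needs root.
from pyroute2 import netns

NS = 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62'

if NS in netns.listnetns():
    netns.remove(NS)
```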
Feb 25 12:35:11 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 25 12:35:11 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000004f.scope: Consumed 5.862s CPU time.
Feb 25 12:35:11 compute-0 systemd-machined[210048]: Machine qemu-109-instance-0000004f terminated.
Feb 25 12:35:11 compute-0 podman[322099]: 2026-02-25 12:35:11.0808088 +0000 UTC m=+0.091473467 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.172 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance destroyed successfully.
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.172 244018 DEBUG nova.objects.instance [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'resources' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.188 244018 DEBUG nova.virt.libvirt.vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:57Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
Feb 25 12:35:11 compute-0 nova_compute[244014]:  /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.188 244018 DEBUG nova.network.os_vif_util [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.189 244018 DEBUG nova.network.os_vif_util [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.189 244018 DEBUG os_vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.191 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f9abb7-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.196 244018 INFO os_vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')
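The unplug sequence from 12:35:11.189 to .196 runs through the os-vif library: Nova converts its VIF dict into a VIFOpenVSwitch object and hands it to the 'ovs' plugin, which issues the DelPortCommand seen above. A simplified sketch of that call path, with field values copied from the log (object construction is reduced to the essentials and may omit fields the plugin expects):

```python
# Sketch of the os-vif unplug path; values are from the log, construction
# is simplified for illustration.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin, among others

net = network.Network(id='a0cf2281-bf49-498f-8de5-70cdba33cd62',
                      bridge='br-int')
vif_obj = vif.VIFOpenVSwitch(
    id='d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536',
    address='fa:16:3e:af:66:6e',
    network=net,
    vif_name='tapd6f9abb7-ac',
    bridge_name='br-int')
inst = instance_info.InstanceInfo(
    uuid='19abddab-88d5-48b8-b98e-1dedccbb8b7f',
    name='instance-0000004f')

os_vif.unplug(vif_obj, inst)  # removes tapd6f9abb7-ac from br-int
```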
Feb 25 12:35:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:35:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038076065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.256 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
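The mon dump round trip above (spawned at 12:35:10.293, dispatched by the mon, returned 0 in 0.963s) shells out to the ceph CLI, but the same monitor command can be issued in-process through librados. A hedged sketch using the client.openstack credentials referenced in the log:

```python
# Sketch: run "mon dump" via librados instead of spawning the ceph CLI.
# Assumes /etc/ceph/ceph.conf and the client.openstack keyring are readable.
import json
import rados

with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 rados_id='openstack') as cluster:
    cmd = json.dumps({'prefix': 'mon dump', 'format': 'json'})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    if ret == 0:
        print(json.loads(outbuf)['epoch'])
```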
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.257 244018 DEBUG nova.virt.libvirt.vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:03Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.257 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.258 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.259 244018 DEBUG nova.objects.instance [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77c38424-b0a2-4d31-975a-16f265ff93fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.289 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <uuid>77c38424-b0a2-4d31-975a-16f265ff93fb</uuid>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <name>instance-00000057</name>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-354613820</nova:name>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:35:09</nova:creationTime>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <nova:port uuid="db6ecea4-c353-4b6f-860e-95c410e7ec39">
Feb 25 12:35:11 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <system>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="serial">77c38424-b0a2-4d31-975a-16f265ff93fb</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="uuid">77c38424-b0a2-4d31-975a-16f265ff93fb</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </system>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <os>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </os>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <features>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </features>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/77c38424-b0a2-4d31-975a-16f265ff93fb_disk">
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config">
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:35:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:db:ce:56"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <target dev="tapdb6ecea4-c3"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/console.log" append="off"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <video>
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </video>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:35:11 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:35:11 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:35:11 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:35:11 compute-0 nova_compute[244014]: </domain>
Feb 25 12:35:11 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
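The XML above is what _get_guest_xml returns and what the driver subsequently hands to libvirt. Given such a document, the libvirt Python binding defines and boots the guest in two calls; a minimal sketch (the URI and the file holding the XML are assumptions):

```python
# Sketch: define and start a domain from Nova-generated XML with python-libvirt.
import libvirt

with open('/tmp/instance-00000057.xml') as f:  # hypothetical dump of the XML above
    xml = f.read()

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it, like `virsh start instance-00000057`
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```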
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.289 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Preparing to wait for external event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
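The acquire/release pair around _create_or_get_event is oslo.concurrency's named-lock idiom: every change to the per-instance event registry is serialized on a "<uuid>-events" lock. The same pattern reduced to its core:

```python
# Sketch of the named-lock idiom from the lockutils lines above.
from oslo_concurrency import lockutils

instance_uuid = '77c38424-b0a2-4d31-975a-16f265ff93fb'

with lockutils.lock(f'{instance_uuid}-events'):
    # critical section: mutate this instance's pending-event registry
    pass
```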
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.291 244018 DEBUG nova.virt.libvirt.vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:03Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.291 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.292 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.292 244018 DEBUG os_vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.297 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb6ecea4-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.297 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb6ecea4-c3, col_values=(('external_ids', {'iface-id': 'db6ecea4-c353-4b6f-860e-95c410e7ec39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:ce:56', 'vm-uuid': '77c38424-b0a2-4d31-975a-16f265ff93fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
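The plug is one OVSDB transaction with two commands: add the tap port to br-int, then set the Interface external_ids that ovn-controller matches against the logical port. A hedged sketch of the same transaction through ovsdbapp (the database socket path is an assumption):

```python
# Sketch: the AddPortCommand + DbSetCommand transaction from the log,
# expressed via ovsdbapp. The OVSDB socket path is an assumption.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {
    'iface-id': 'db6ecea4-c353-4b6f-860e-95c410e7ec39',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:db:ce:56',
    'vm-uuid': '77c38424-b0a2-4d31-975a-16f265ff93fb',
}

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapdb6ecea4-c3', may_exist=True))
    txn.add(api.db_set('Interface', 'tapdb6ecea4-c3',
                       ('external_ids', external_ids)))
```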
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 NetworkManager[49836]: <info>  [1772022911.2992] manager: (tapdb6ecea4-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.302 244018 INFO os_vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3')
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.353 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.355 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.355 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
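"No waiting events found" is the dispatch half of prepare_for_instance_event: Neutron's notifications are matched against per-instance waiters, and events nobody registered for are logged and dropped rather than raising. The shape of the mechanism, reduced to a threading sketch (Nova's real implementation is eventlet-based and keyed by (name, tag)):

```python
# Reduced sketch of the prepare/wait/dispatch pattern in the log lines above.
import threading

waiters = {}  # (instance_uuid, event_name) -> threading.Event

def prepare(instance_uuid, event_name):
    ev = threading.Event()
    waiters[(instance_uuid, event_name)] = ev
    return ev  # caller later blocks in ev.wait(timeout)

def dispatch(instance_uuid, event_name):
    ev = waiters.pop((instance_uuid, event_name), None)
    if ev is None:
        print('No waiting events found dispatching', event_name)
    else:
        ev.set()
```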
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:db:ce:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.687 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Using config drive
Feb 25 12:35:11 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : haproxy version is 2.8.14-c23fe91
Feb 25 12:35:11 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : path to executable is /usr/sbin/haproxy
Feb 25 12:35:11 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [WARNING]  (321507) : Exiting Master process...
Feb 25 12:35:11 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [ALERT]    (321507) : Current worker (321509) exited with code 143 (Terminated)
Feb 25 12:35:11 compute-0 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [WARNING]  (321507) : All workers exited. Exiting... (0)
Feb 25 12:35:11 compute-0 systemd[1]: libpod-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf.scope: Deactivated successfully.
Feb 25 12:35:11 compute-0 podman[322161]: 2026-02-25 12:35:11.713035067 +0000 UTC m=+0.622455742 container died 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:35:11 compute-0 nova_compute[244014]: 2026-02-25 12:35:11.713 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 12:35:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2038076065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.218 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Creating config drive at /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.222 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpizamvkbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.355 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpizamvkbe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
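A config drive is a plain ISO9660 image built over a staging directory of metadata files; the 0.133s mkisofs run above is executed through oslo.concurrency's processutils wrapper. A minimal sketch of the same invocation (paths mirror the log; the staging directory contents are assumed):

```python
# Sketch: build the config-drive ISO the way the log shows, via processutils.
from oslo_concurrency import processutils

out, err = processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',
    '/tmp/tmpizamvkbe')
```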
Feb 25 12:35:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf-userdata-shm.mount: Deactivated successfully.
Feb 25 12:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1286b91a07c941dc6ee3488e98e5fa87f3ac6bdce355293186c3d25055400acc-merged.mount: Deactivated successfully.
Feb 25 12:35:12 compute-0 podman[322101]: 2026-02-25 12:35:12.752102024 +0000 UTC m=+1.761340810 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.901 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.907 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
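The "rbd image … does not exist" probe at 12:35:12.901 is how rbd_utils decides to import: it tries to open the image and treats ImageNotFound as absent, after which the `rbd import` above pushes the local ISO into the vms pool. A hedged sketch of the existence check through the librbd bindings:

```python
# Sketch: the existence probe behind "rbd image ... does not exist".
import rados
import rbd

NAME = '77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config'

with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 rados_id='openstack') as cluster:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, NAME):
            exists = True
    except rbd.ImageNotFound:
        exists = False  # triggers the `rbd import` seen above
    finally:
        ioctx.close()
```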
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.942 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.971 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:35:12 compute-0 nova_compute[244014]: 2026-02-25 12:35:12.972 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 DEBUG nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 WARNING nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state deleting.
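[editor's note] The five lines above trace nova's pop_instance_event path: take the per-instance "<uuid>-events" lock, look for a registered waiter for the network-vif-plugged event, and warn when none exists (here because the instance is already deleting). A toy sketch of that dispatch, with plain threading standing in for oslo's locks:

    import threading

    _events_lock = threading.Lock()   # stands in for the "<uuid>-events" lock
    _waiters = {}                     # event name -> threading.Event

    def pop_instance_event(name):
        with _events_lock:
            waiter = _waiters.pop(name, None)
        if waiter is None:
            # matches the WARNING above: nobody registered for this event
            print(f"Received unexpected event {name}")
        else:
            waiter.set()              # unblocks wait_for_instance_event()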
Feb 25 12:35:13 compute-0 ceph-mon[76335]: pgmap v1602: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.813 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022898.8124812, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.814 244018 INFO nova.compute.manager [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Stopped (Lifecycle Event)
Feb 25 12:35:13 compute-0 nova_compute[244014]: 2026-02-25 12:35:13.844 244018 DEBUG nova.compute.manager [None req-5b6b4607-a514-4480-bb63-ad6f2e1efaaf - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:13 compute-0 podman[322161]: 2026-02-25 12:35:13.929294464 +0000 UTC m=+2.838715179 container cleanup 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:35:13 compute-0 systemd[1]: libpod-conmon-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf.scope: Deactivated successfully.
Feb 25 12:35:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Feb 25 12:35:14 compute-0 nova_compute[244014]: 2026-02-25 12:35:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:14 compute-0 nova_compute[244014]: 2026-02-25 12:35:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:14 compute-0 nova_compute[244014]: 2026-02-25 12:35:14.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 podman[322282]: 2026-02-25 12:35:15.161200941 +0000 UTC m=+1.208539768 container remove 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d89b378c-7b05-49f3-b463-95ae260848ce]: (4, ('Wed Feb 25 12:35:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf)\n16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf\nWed Feb 25 12:35:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf)\n16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5778b6ee-7e4a-46bb-82ba-dc4c0dd810b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:15 compute-0 kernel: tapa0cf2281-b0: left promiscuous mode
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.176 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be471435-036a-434a-903f-181bcd713490]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[295ee3e5-9c17-4bc7-be52-7d82f1eba8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c60fac3a-93d8-46af-ac21-84fd03c6e213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ffb47f-380d-49c1-8593-85a16d0e2a50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486584, 'reachable_time': 33997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322304, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
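[editor's note] The large privsep replies above and below are pyroute2 netlink messages (RTM_NEWLINK) serialized into the debug log. Reading the same attributes outside the agent looks roughly like this, using pyroute2's public API; printing the name and state is purely illustrative:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            # each message carries the same 'attrs' list seen in the log
            print(link.get_attr('IFLA_IFNAME'), link['state'])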
Feb 25 12:35:15 compute-0 systemd[1]: run-netns-ovnmeta\x2da0cf2281\x2dbf49\x2d498f\x2d8de5\x2d70cdba33cd62.mount: Deactivated successfully.
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.232 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.233 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e24b7749-6e34-4ba6-a2bd-0c13198acb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ceph-mon[76335]: pgmap v1603: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.586 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.587 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deleting local config drive /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config because it was imported into RBD.
Feb 25 12:35:15 compute-0 kernel: tapdb6ecea4-c3: entered promiscuous mode
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.6220] manager: (tapdb6ecea4-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 ovn_controller[147040]: 2026-02-25T12:35:15Z|00878|binding|INFO|Claiming lport db6ecea4-c353-4b6f-860e-95c410e7ec39 for this chassis.
Feb 25 12:35:15 compute-0 ovn_controller[147040]: 2026-02-25T12:35:15Z|00879|binding|INFO|db6ecea4-c353-4b6f-860e-95c410e7ec39: Claiming fa:16:3e:db:ce:56 10.100.0.6
Feb 25 12:35:15 compute-0 systemd-udevd[322306]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:35:15 compute-0 ovn_controller[147040]: 2026-02-25T12:35:15Z|00880|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 ovn-installed in OVS
Feb 25 12:35:15 compute-0 ovn_controller[147040]: 2026-02-25T12:35:15Z|00881|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 up in Southbound
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.630 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:ce:56 10.100.0.6'], port_security=['fa:16:3e:db:ce:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '77c38424-b0a2-4d31-975a-16f265ff93fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=db6ecea4-c353-4b6f-860e-95c410e7ec39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
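[editor's note] The matched event in the line above carries events=('update',), table='Port_Binding', conditions=None, which is the shape of an ovsdbapp RowEvent subclass. A sketch inferred from that repr (an assumption about the class layout, not the agent's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # fire on updates to Port_Binding rows, as in the logged repr
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # the agent then checks whether row.chassis is now ours and
            # provisions metadata for the row's datapath (next two INFO lines)
            print('lport %s updated' % row.logical_port)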
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.631 157129 INFO neutron.agent.ovn.metadata.agent [-] Port db6ecea4-c353-4b6f-860e-95c410e7ec39 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.632 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21
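[editor's note] "Provisioning metadata" expands into the steps logged below: create the ovnmeta-<network> namespace, build a veth pair (tap...-d0 outside, tap...-d1 inside), plug the outer end into br-int with the Neutron port id as iface-id, and start haproxy inside the namespace. An illustrative reconstruction with plain ip/ovs-vsctl commands; the agent actually performs this through privsep and ovsdbapp, and the names below are the ones logged:

    import subprocess

    ns = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    outer, inner = "tap9d1639de-d0", "tap9d1639de-d1"
    port_id = "2a0bb56b-974f-4df3-ab65-5a5521eee6ab"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", ns)
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)
    sh("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
    # tag the OVS end with the port id so ovn-controller can bind it
    # (the DbSetCommand below does the same through ovsdbapp)
    sh("ovs-vsctl", "add-port", "br-int", outer)
    sh("ovs-vsctl", "set", "Interface", outer,
       f"external_ids:iface-id={port_id}")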
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.6396] device (tapdb6ecea4-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.6405] device (tapdb6ecea4-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c95371-ade4-4066-b3b3-aea3de13277f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.647 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.648 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13d87f7f-6dc6-4463-8a4d-6df5d590f463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec6cdfa-f4a6-43bd-b342-8cf55243d33e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 systemd-machined[210048]: New machine qemu-112-instance-00000057.
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.657 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4375e6d1-9e56-4209-8d28-89096bcfac6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-00000057.
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.667 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b635210d-2696-4229-b338-d5d1d09379f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.690 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[32952eab-0977-4ab9-9c69-cea9bd47a0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.694 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc96e4a4-9bef-444f-98c7-e85fc91f9156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.6959] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52e2d979-dbfb-46ff-b47a-e69ec82775a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.717 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3b4c37-2058-4f98-bf70-825683bdc6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.7330] device (tap9d1639de-d0): carrier: link connected
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.738 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[741e0c41-db50-436d-8eeb-21d1d740737f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.751 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[292105a5-41aa-42a6-bea8-fc8195bfeeb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488514, 'reachable_time': 43171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322352, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98f7dc95-66b0-4bfa-90d2-072f1b8f275a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488514, 'tstamp': 488514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322353, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7908d736-a063-4098-b6b6-57623d622830]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488514, 'reachable_time': 43171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322354, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61458f9e-af6c-4525-9749-e6a84fae58e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.850 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2038962-435e-42d8-8eb6-a7cc1268e01a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.851 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.852 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.853 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:15 compute-0 NetworkManager[49836]: <info>  [1772022915.8565] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Feb 25 12:35:15 compute-0 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.859 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.862 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
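[editor's note] The "Unable to access ... .pid.haproxy" line above is the expected first-run case: get_value_from_file treats a missing pidfile as "no proxy running yet" rather than an error. The gist, as a minimal sketch:

    def get_value_from_file(path):
        # a missing file just means "no haproxy yet", so swallow the error
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError as e:
            print(f"Unable to access {path}; Error: {e}")
            return None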
Feb 25 12:35:15 compute-0 ovn_controller[147040]: 2026-02-25T12:35:15Z|00882|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f7c2e-2872-4e02-a0ef-525aa72c0b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.865 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
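[editor's note] The agent renders the configuration above from a template, writes it under /var/lib/neutron/ovn-metadata-proxy/<network>.conf, and then launches haproxy with it (next line). A minimal sketch that produces an equivalent file; the template mirrors the logged output but is abbreviated to the global and listen sections, and the helper is illustrative, not Neutron's actual template code:

    _TEMPLATE = """\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{network_id}
        user        root
        group       root
        maxconn     1024
        pidfile     {pidfile}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata {socket_path}
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def create_config_file(path, network_id):
        base = "/var/lib/neutron"
        with open(path, "w") as f:
            f.write(_TEMPLATE.format(
                network_id=network_id,
                pidfile=f"{base}/external/pids/{network_id}.pid.haproxy",
                socket_path=f"{base}/metadata_proxy"))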
Feb 25 12:35:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.867 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
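[editor's note] The Acquiring/acquired/"released" triplets above are what oslo.concurrency emits around a synchronized section; the waited/held timings come from the same wrapper. An equivalent gist (Nova layers its own helper on top of this decorator):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # runs with the "compute_resources" lock held; the waited/held
        # durations in the log are measured around this wrapper
        pass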
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:35:15 compute-0 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
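[editor's note] For RBD-backed storage the resource audit learns capacity by shelling out to ceph df, as logged above (the monitor's dispatch of that command appears a few lines below). A sketch of running and parsing it; the stats/total_avail_bytes key names are assumed from typical ceph df JSON output and may differ by release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]          # key names assumed
    print("free: %.1f GiB" % (stats["total_avail_bytes"] / 1024**3))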
Feb 25 12:35:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.181 244018 DEBUG nova.compute.manager [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.181 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG nova.compute.manager [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Processing event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:35:16 compute-0 podman[322423]: 2026-02-25 12:35:16.177898854 +0000 UTC m=+0.019983206 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075301731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.432 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.502 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.502 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.505 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.505 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.510 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.510 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.669 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3318MB free_disk=59.788564148359GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.689 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.6893885, 77c38424-b0a2-4d31-975a-16f265ff93fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.690 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Started (Lifecycle Event)
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.691 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.694 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.705 244018 INFO nova.virt.libvirt.driver [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance spawned successfully.
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.705 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.827 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.830 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.836 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.836 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.839 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
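[annotation] The six "Found default" lines above show the libvirt driver persisting image-property defaults (_register_undefined_instance_details) so the instance keeps the same bus and device models even if the driver's defaults change later. A minimal sketch of the idea in Python — the helper name and table are hypothetical, with values copied from the log lines (typical of a q35 guest), not Nova's implementation:

    # Illustrative only: record a default for each image property the image
    # itself left undefined, as the driver does for hw_cdrom_bus etc. above.
    KNOWN_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props):
        """Return a default for every property the image did not pin."""
        return {k: v for k, v in KNOWN_DEFAULTS.items() if k not in image_props}

    # An image that only pins the disk bus picks up the other five defaults:
    print(register_undefined_details({"hw_disk_bus": "scsi"}))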
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] During sync_power_state the instance has a pending task (spawning). Skip.
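[annotation] At 12:35:16.830 and .877 the periodic power-state sync sees DB power_state 0 (NOSTATE) against hypervisor power_state 1 (RUNNING), but backs off because task_state is still 'spawning'. A toy version of that guard — the dict-based instance and return values are illustrative, only the 0/1 constants mirror Nova's power-state enum:

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(instance, vm_power_state):
        if instance["task_state"] is not None:
            # Another code path (here: the spawn) owns the instance right now,
            # matching "the instance has a pending task (spawning). Skip."
            return "skipped"
        if instance["db_power_state"] != vm_power_state:
            instance["db_power_state"] = vm_power_state  # adopt hypervisor truth
            return "updated"
        return "in-sync"

    print(sync_power_state({"task_state": "spawning",
                            "db_power_state": NOSTATE}, RUNNING))  # -> skipped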
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.691264, 77c38424-b0a2-4d31-975a-16f265ff93fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Paused (Lifecycle Event)
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.913 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e3178d86-5c76-4393-9327-2aac2cb8d81d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 77c38424-b0a2-4d31-975a-16f265ff93fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
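[annotation] The "Final resource view" arithmetic is recoverable from the surrounding lines: four instances at 1 VCPU / 128 MB / 1 GB each, plus the 512 MB host reservation that the MEMORY_MB inventory below also carries. A quick check — this is a reading of the logged numbers, not Nova's exact code:

    # used_vcpus = 4; used_ram = 512 reserved + 4*128 = 1024 MB; used_disk = 4 GB
    instances = [{"VCPU": 1, "MEMORY_MB": 128, "DISK_GB": 1}] * 4
    reserved_ram_mb = 512
    used_vcpus = sum(i["VCPU"] for i in instances)
    used_ram = reserved_ram_mb + sum(i["MEMORY_MB"] for i in instances)
    used_disk = sum(i["DISK_GB"] for i in instances)
    assert (used_vcpus, used_ram, used_disk) == (4, 1024, 4)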
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.923 244018 INFO nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 12.98 seconds to spawn the instance on the hypervisor.
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.925 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.926 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.6938012, 77c38424-b0a2-4d31-975a-16f265ff93fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.927 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Resumed (Lifecycle Event)
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.944 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.959 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.962 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.965 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
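[annotation] Placement turns the inventory payload above into schedulable capacity as (total - reserved) * allocation_ratio, which is why this 8-vCPU host can accept far more than eight guest vCPUs. Computed directly from the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2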
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.993 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:35:16 compute-0 nova_compute[244014]: 2026-02-25 12:35:16.997 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.012 244018 INFO nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 14.06 seconds to build instance.
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.023 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.028 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.127 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:17 compute-0 ceph-mon[76335]: pgmap v1604: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 12:35:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4075301731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:17 compute-0 podman[322423]: 2026-02-25 12:35:17.228499377 +0000 UTC m=+1.070583689 container create 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:35:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:17 compute-0 systemd[1]: Started libpod-conmon-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope.
Feb 25 12:35:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd22ca02a4a701e91c040bc18e8d941f5c4b35115a09169483e243124fa15c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:17 compute-0 podman[322423]: 2026-02-25 12:35:17.888795118 +0000 UTC m=+1.730879440 container init 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:35:17 compute-0 podman[322423]: 2026-02-25 12:35:17.895820727 +0000 UTC m=+1.737905039 container start 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:35:17 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : New worker (322489) forked
Feb 25 12:35:17 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : Loading success.
Feb 25 12:35:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255929073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.968 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.842s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
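[annotation] The 0.842 s spent in "ceph df" is the resource-update periodic task refreshing pool usage for the RBD-backed disk stats. A stand-alone equivalent of the logged command — the --id/--conf values come straight from the log line and will differ on other deployments:

    import json, subprocess, time

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    start = time.monotonic()
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    stats = json.loads(out.stdout)
    print(f"returned 0 in {time.monotonic() - start:.3f}s; "
          f"pools: {len(stats.get('pools', []))}")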
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.973 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:17 compute-0 nova_compute[244014]: 2026-02-25 12:35:17.996 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.017 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.018 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.834 244018 DEBUG nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.836 244018 DEBUG nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:18 compute-0 nova_compute[244014]: 2026-02-25 12:35:18.836 244018 WARNING nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received unexpected event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with vm_state active and task_state None.
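[annotation] The WARNING above is the tail end of Nova's external-event plumbing: the spawn already consumed its network-vif-plugged waiter at 12:35:16.691, so this second delivery finds nothing registered and is flagged as unexpected. A toy model of the prepare/pop registry, all names hypothetical:

    import threading

    _events = {}               # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()   # stands in for the "<uuid>-events" lock above

    def prepare(instance, name):
        with _lock:
            _events[(instance, name)] = threading.Event()

    def pop(instance, name):
        with _lock:
            ev = _events.pop((instance, name), None)
        if ev is None:
            return "unexpected"   # cf. the WARNING line above
        ev.set()                  # wake the registered waiter
        return "delivered"

    prepare("77c38424", "network-vif-plugged-db6ecea4")
    print(pop("77c38424", "network-vif-plugged-db6ecea4"))  # delivered
    print(pop("77c38424", "network-vif-plugged-db6ecea4"))  # unexpected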
Feb 25 12:35:19 compute-0 nova_compute[244014]: 2026-02-25 12:35:19.013 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:19 compute-0 nova_compute[244014]: 2026-02-25 12:35:19.014 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:19 compute-0 nova_compute[244014]: 2026-02-25 12:35:19.015 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4255929073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:19 compute-0 ceph-mon[76335]: pgmap v1605: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 12:35:19 compute-0 nova_compute[244014]: 2026-02-25 12:35:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:19 compute-0 nova_compute[244014]: 2026-02-25 12:35:19.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 323 KiB/s wr, 125 op/s
Feb 25 12:35:20 compute-0 ceph-mon[76335]: pgmap v1606: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 323 KiB/s wr, 125 op/s
Feb 25 12:35:21 compute-0 nova_compute[244014]: 2026-02-25 12:35:21.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:21 compute-0 nova_compute[244014]: 2026-02-25 12:35:21.758 244018 DEBUG nova.compute.manager [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 322 KiB/s wr, 125 op/s
Feb 25 12:35:22 compute-0 ceph-mon[76335]: pgmap v1607: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 322 KiB/s wr, 125 op/s
Feb 25 12:35:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.710 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.712 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.712 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.713 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.714 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.716 244018 INFO nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Terminating instance
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.719 244018 DEBUG nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
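[annotation] terminate_instance serializes on the instance UUID, which is what the Acquiring/acquired/released lockutils lines above report, together with their waited/held timings. A minimal sketch of the same pattern with oslo.concurrency — the lock name is the instance UUID from the log, the body is elided:

    import time
    from oslo_concurrency import lockutils

    uuid = "e3178d86-5c76-4393-9327-2aac2cb8d81d"

    # Process-local lock by default; external=True would use a file lock.
    with lockutils.lock(uuid):
        start = time.monotonic()
        # ... destroy the domain, unplug VIFs, delete instance files ...
        print(f"held {time.monotonic() - start:.3f}s")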
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:22.782 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:22.783 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.902 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.903 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.903 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.904 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.904 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.905 244018 INFO nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Terminating instance
Feb 25 12:35:22 compute-0 nova_compute[244014]: 2026-02-25 12:35:22.906 244018 DEBUG nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:35:23 compute-0 kernel: tapdb6ecea4-c3 (unregistering): left promiscuous mode
Feb 25 12:35:23 compute-0 NetworkManager[49836]: <info>  [1772022923.2937] device (tapdb6ecea4-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00883|binding|INFO|Releasing lport db6ecea4-c353-4b6f-860e-95c410e7ec39 from this chassis (sb_readonly=0)
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00884|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 down in Southbound
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00885|binding|INFO|Removing iface tapdb6ecea4-c3 ovn-installed in OVS
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.311 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:ce:56 10.100.0.6'], port_security=['fa:16:3e:db:ce:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '77c38424-b0a2-4d31-975a-16f265ff93fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=db6ecea4-c353-4b6f-860e-95c410e7ec39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.312 157129 INFO neutron.agent.ovn.metadata.agent [-] Port db6ecea4-c353-4b6f-860e-95c410e7ec39 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.314 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9b265c-b045-4633-bcdc-0e7068fddcf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
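[annotation] With the last VIF gone from datapath 9d1639de-d0ac-47b6-9707-253523b26c21, the agent tears down the per-network metadata namespace. Operationally this amounts to removing the netns (the agent does it through pyroute2 under oslo.privsep rather than a shell); a simplified equivalent, namespace name taken from the log line:

    import subprocess

    ns = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    # check=False: an already-removed namespace should not raise here.
    subprocess.run(["ip", "netns", "delete", ns], check=False)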
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000057.scope: Deactivated successfully.
Feb 25 12:35:23 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000057.scope: Consumed 6.934s CPU time.
Feb 25 12:35:23 compute-0 systemd-machined[210048]: Machine qemu-112-instance-00000057 terminated.
Feb 25 12:35:23 compute-0 kernel: tap26c6cf46-a5 (unregistering): left promiscuous mode
Feb 25 12:35:23 compute-0 NetworkManager[49836]: <info>  [1772022923.3910] device (tap26c6cf46-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00886|binding|INFO|Releasing lport 26c6cf46-a52b-4476-9112-4047f420e492 from this chassis (sb_readonly=0)
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00887|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 down in Southbound
Feb 25 12:35:23 compute-0 ovn_controller[147040]: 2026-02-25T12:35:23Z|00888|binding|INFO|Removing iface tap26c6cf46-a5 ovn-installed in OVS
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.400 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Deactivated successfully.
Feb 25 12:35:23 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Consumed 11.458s CPU time.
Feb 25 12:35:23 compute-0 systemd-machined[210048]: Machine qemu-111-instance-00000056 terminated.
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.491 244018 INFO nova.virt.libvirt.driver [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deleting instance files /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.491 244018 INFO nova.virt.libvirt.driver [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deletion of /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del complete
Feb 25 12:35:23 compute-0 NetworkManager[49836]: <info>  [1772022923.5342] manager: (tap26c6cf46-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.539 244018 INFO nova.virt.libvirt.driver [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance destroyed successfully.
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.540 244018 DEBUG nova.objects.instance [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid 77c38424-b0a2-4d31-975a-16f265ff93fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.547 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance destroyed successfully.
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.548 244018 DEBUG nova.objects.instance [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.549 244018 INFO nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 13.41 seconds to destroy the instance on the hypervisor.
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG oslo.service.loopingcall [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG nova.network.neutron [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.559 244018 DEBUG nova.virt.libvirt.vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:35:21Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG nova.network.os_vif_util [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG nova.network.os_vif_util [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG os_vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.563 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb6ecea4-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.567 244018 DEBUG nova.virt.libvirt.vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1397178537',display_name='tempest-ServerRescueTestJSON-server-1397178537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1397178537',id=86,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-j34kam6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:35:21Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=e3178d86-5c76-4393-9327-2aac2cb8d81d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.567 244018 DEBUG nova.network.os_vif_util [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.568 244018 DEBUG nova.network.os_vif_util [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.568 244018 DEBUG os_vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.574 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c6cf46-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.575 244018 INFO os_vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3')
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.591 244018 INFO os_vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5')
Feb 25 12:35:23 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : haproxy version is 2.8.14-c23fe91
Feb 25 12:35:23 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : path to executable is /usr/sbin/haproxy
Feb 25 12:35:23 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [WARNING]  (322487) : Exiting Master process...
Feb 25 12:35:23 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [ALERT]    (322487) : Current worker (322489) exited with code 143 (Terminated)
Feb 25 12:35:23 compute-0 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [WARNING]  (322487) : All workers exited. Exiting... (0)
Feb 25 12:35:23 compute-0 systemd[1]: libpod-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope: Deactivated successfully.
Feb 25 12:35:23 compute-0 podman[322528]: 2026-02-25 12:35:23.603836378 +0000 UTC m=+0.193831209 container died 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.628 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.630 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:23 compute-0 nova_compute[244014]: 2026-02-25 12:35:23.630 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:35:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 334 KiB/s wr, 246 op/s
Feb 25 12:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4-userdata-shm.mount: Deactivated successfully.
Feb 25 12:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fd22ca02a4a701e91c040bc18e8d941f5c4b35115a09169483e243124fa15c6-merged.mount: Deactivated successfully.
Feb 25 12:35:24 compute-0 ceph-mon[76335]: pgmap v1608: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 334 KiB/s wr, 246 op/s
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.675 244018 DEBUG nova.network.neutron [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:24 compute-0 podman[322528]: 2026-02-25 12:35:24.693148294 +0000 UTC m=+1.283143125 container cleanup 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.701 244018 INFO nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 1.15 seconds to deallocate network for instance.
Feb 25 12:35:24 compute-0 systemd[1]: libpod-conmon-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope: Deactivated successfully.
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.745 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.746 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.878 244018 DEBUG oslo_concurrency.processutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:24 compute-0 nova_compute[244014]: 2026-02-25 12:35:24.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576377988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.414 244018 DEBUG oslo_concurrency.processutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.420 244018 DEBUG nova.compute.provider_tree [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.445 244018 DEBUG nova.scheduler.client.report [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.475 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.500 244018 INFO nova.scheduler.client.report [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Deleted allocations for instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.583 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.737 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.738 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.738 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 WARNING nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received unexpected event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with vm_state active and task_state deleting.
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-deleted-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.742 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:25 compute-0 nova_compute[244014]: 2026-02-25 12:35:25.742 244018 WARNING nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state active and task_state deleting.
Feb 25 12:35:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1576377988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:26 compute-0 podman[322623]: 2026-02-25 12:35:26.028059961 +0000 UTC m=+1.312255648 container remove 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41c72b26-6659-4ee6-85ed-556337bce0ef]: (4, ('Wed Feb 25 12:35:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4)\n16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4\nWed Feb 25 12:35:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4)\n16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27828008-f07d-4a5e-a499-b103b24583a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:26 compute-0 nova_compute[244014]: 2026-02-25 12:35:26.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:26 compute-0 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 12:35:26 compute-0 nova_compute[244014]: 2026-02-25 12:35:26.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.049 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54d5be6a-2e85-4c14-bc30-9d331f7791b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 152 op/s
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.060 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4cab8623-5c25-4105-9e39-46b79a2ecc3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.062 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[913aedd8-c900-4eb0-a81c-8d10a81df0ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d10f4d-e4d3-4bcc-a4e8-7e81937ebeda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488509, 'reachable_time': 35625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322661, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.083 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.083 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3783fe-9054-4ef1-a556-ef1dd17483c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.085 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.086 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:35:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.087 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09bd3a41-3da8-411a-9d09-74ea07c14995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:26 compute-0 nova_compute[244014]: 2026-02-25 12:35:26.168 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022911.1674278, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:26 compute-0 nova_compute[244014]: 2026-02-25 12:35:26.169 244018 INFO nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Stopped (Lifecycle Event)
Feb 25 12:35:26 compute-0 nova_compute[244014]: 2026-02-25 12:35:26.214 244018 DEBUG nova.compute.manager [None req-0533052a-d532-40ae-a847-c9f23d06a869 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:27 compute-0 ceph-mon[76335]: pgmap v1609: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 152 op/s
Feb 25 12:35:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 26 KiB/s wr, 176 op/s
Feb 25 12:35:28 compute-0 nova_compute[244014]: 2026-02-25 12:35:28.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:29 compute-0 ceph-mon[76335]: pgmap v1610: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 26 KiB/s wr, 176 op/s
Feb 25 12:35:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:29.785 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:29 compute-0 nova_compute[244014]: 2026-02-25 12:35:29.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 144 op/s
Feb 25 12:35:30 compute-0 nova_compute[244014]: 2026-02-25 12:35:30.904 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:35:30
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'default.rgw.log']
Feb 25 12:35:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:35:31 compute-0 nova_compute[244014]: 2026-02-25 12:35:31.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:31 compute-0 ceph-mon[76335]: pgmap v1611: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 144 op/s
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:35:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:35:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 302 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 169 op/s
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.497 244018 INFO nova.virt.libvirt.driver [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deleting instance files /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d_del
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.498 244018 INFO nova.virt.libvirt.driver [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deletion of /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d_del complete
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.505 244018 INFO nova.virt.libvirt.driver [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deleting instance files /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb_del
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.506 244018 INFO nova.virt.libvirt.driver [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deletion of /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb_del complete
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.573 244018 INFO nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 9.85 seconds to destroy the instance on the hypervisor.
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.574 244018 DEBUG oslo.service.loopingcall [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.574 244018 DEBUG nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.575 244018 DEBUG nova.network.neutron [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.579 244018 INFO nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 9.67 seconds to destroy the instance on the hypervisor.
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.580 244018 DEBUG oslo.service.loopingcall [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.581 244018 DEBUG nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:35:32 compute-0 nova_compute[244014]: 2026-02-25 12:35:32.581 244018 DEBUG nova.network.neutron [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:35:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:33 compute-0 ceph-mon[76335]: pgmap v1612: 305 pgs: 305 active+clean; 302 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 169 op/s
Feb 25 12:35:33 compute-0 nova_compute[244014]: 2026-02-25 12:35:33.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:33 compute-0 nova_compute[244014]: 2026-02-25 12:35:33.901 244018 DEBUG nova.network.neutron [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:33 compute-0 nova_compute[244014]: 2026-02-25 12:35:33.920 244018 INFO nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 1.35 seconds to deallocate network for instance.
Feb 25 12:35:33 compute-0 nova_compute[244014]: 2026-02-25 12:35:33.969 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:33 compute-0 nova_compute[244014]: 2026-02-25 12:35:33.969 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.029 244018 DEBUG nova.network.neutron [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.050 244018 DEBUG oslo_concurrency.processutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 174 op/s
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.078 244018 INFO nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 1.50 seconds to deallocate network for instance.
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.181 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.253 244018 DEBUG nova.compute.manager [req-078bf21a-77d8-49dd-bb6f-1e65fd13c4e8 req-06a4fb17-b06c-4ade-bbca-8892cd520480 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-deleted-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806862022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.619 244018 DEBUG oslo_concurrency.processutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.625 244018 DEBUG nova.compute.provider_tree [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:34 compute-0 ceph-mon[76335]: pgmap v1613: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 174 op/s
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.668 244018 DEBUG nova.scheduler.client.report [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.774 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.776 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.863 244018 DEBUG oslo_concurrency.processutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.898 244018 INFO nova.scheduler.client.report [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Deleted allocations for instance e3178d86-5c76-4393-9327-2aac2cb8d81d
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:34 compute-0 nova_compute[244014]: 2026-02-25 12:35:34.965 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46961917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.481 244018 DEBUG oslo_concurrency.processutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.486 244018 DEBUG nova.compute.provider_tree [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.501 244018 DEBUG nova.scheduler.client.report [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.526 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.570 244018 INFO nova.scheduler.client.report [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Deleted allocations for instance 77c38424-b0a2-4d31-975a-16f265ff93fb
Feb 25 12:35:35 compute-0 nova_compute[244014]: 2026-02-25 12:35:35.657 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2806862022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/46961917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.127 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.128 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.130 244018 INFO nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Terminating instance
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.131 244018 DEBUG nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.438 244018 DEBUG nova.compute.manager [req-647815c2-c9a8-4c8c-80d8-53fb3adc66cd req-f92d57aa-1fb2-410a-bb5c-a5cd0c9f476f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-deleted-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:36 compute-0 kernel: tape7482fdc-97 (unregistering): left promiscuous mode
Feb 25 12:35:36 compute-0 NetworkManager[49836]: <info>  [1772022936.4752] device (tape7482fdc-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 ovn_controller[147040]: 2026-02-25T12:35:36Z|00889|binding|INFO|Releasing lport e7482fdc-9792-451e-adf6-a816dd7113c8 from this chassis (sb_readonly=0)
Feb 25 12:35:36 compute-0 ovn_controller[147040]: 2026-02-25T12:35:36Z|00890|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 down in Southbound
Feb 25 12:35:36 compute-0 ovn_controller[147040]: 2026-02-25T12:35:36Z|00891|binding|INFO|Removing iface tape7482fdc-97 ovn-installed in OVS
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.483 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.500 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:aa:11 10.100.0.12'], port_security=['fa:16:3e:56:aa:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:35:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.501 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e7482fdc-9792-451e-adf6-a816dd7113c8 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis
Feb 25 12:35:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.502 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:35:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.504 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbe935e-ccef-4c31-b218-dda2c2c1edb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:35:36 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Feb 25 12:35:36 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 13.887s CPU time.
Feb 25 12:35:36 compute-0 systemd-machined[210048]: Machine qemu-104-instance-00000054 terminated.
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.576 244018 INFO nova.virt.libvirt.driver [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance destroyed successfully.
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.576 244018 DEBUG nova.objects.instance [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.591 244018 DEBUG nova.virt.libvirt.vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-842091875',display_name='tempest-ServerRescueTestJSON-server-842091875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-842091875',id=84,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-aun4358p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:21Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=aeaad9e2-4ad0-46fb-b619-7ca2c78443a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.591 244018 DEBUG nova.network.os_vif_util [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.592 244018 DEBUG nova.network.os_vif_util [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.592 244018 DEBUG os_vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.593 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7482fdc-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:36 compute-0 nova_compute[244014]: 2026-02-25 12:35:36.598 244018 INFO os_vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97')
Feb 25 12:35:37 compute-0 ceph-mon[76335]: pgmap v1614: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 25 12:35:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.0 KiB/s wr, 61 op/s
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.538 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022923.53617, 77c38424-b0a2-4d31-975a-16f265ff93fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.538 244018 INFO nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Stopped (Lifecycle Event)
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.545 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022923.5447984, e3178d86-5c76-4393-9327-2aac2cb8d81d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.546 244018 INFO nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Stopped (Lifecycle Event)
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.573 244018 DEBUG nova.compute.manager [None req-83513475-46d9-4e17-8a52-63011aaca117 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.575 244018 DEBUG nova.compute.manager [None req-cba4b95b-37d9-4184-b7fc-d7c21dc205bb - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.613 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.616 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.616 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:35:38 compute-0 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 WARNING nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state rescued and task_state deleting.
Feb 25 12:35:38 compute-0 ceph-mon[76335]: pgmap v1615: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.0 KiB/s wr, 61 op/s
Feb 25 12:35:39 compute-0 nova_compute[244014]: 2026-02-25 12:35:39.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 7.0 KiB/s wr, 37 op/s
Feb 25 12:35:40 compute-0 ceph-mon[76335]: pgmap v1616: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 7.0 KiB/s wr, 37 op/s
Feb 25 12:35:41 compute-0 nova_compute[244014]: 2026-02-25 12:35:41.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:41 compute-0 podman[322746]: 2026-02-25 12:35:41.766496094 +0000 UTC m=+0.065526763 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 228 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.7 KiB/s wr, 48 op/s
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005990409087240522 of space, bias 1.0, pg target 0.17971227261721565 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928472979332526 of space, bias 1.0, pg target 0.7478541893799757 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.930829390391495e-07 of space, bias 4.0, pg target 0.0009516995268469793 quantized to 16 (current 16)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:35:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:35:42 compute-0 ceph-mon[76335]: pgmap v1617: 305 pgs: 305 active+clean; 228 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.7 KiB/s wr, 48 op/s
Feb 25 12:35:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:43 compute-0 podman[322767]: 2026-02-25 12:35:43.72654674 +0000 UTC m=+0.071376548 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:35:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 33 op/s
Feb 25 12:35:44 compute-0 ceph-mon[76335]: pgmap v1618: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 33 op/s
Feb 25 12:35:44 compute-0 nova_compute[244014]: 2026-02-25 12:35:44.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 6.7 KiB/s wr, 28 op/s
Feb 25 12:35:46 compute-0 ceph-mon[76335]: pgmap v1619: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 6.7 KiB/s wr, 28 op/s
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.633 244018 INFO nova.virt.libvirt.driver [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deleting instance files /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_del
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.634 244018 INFO nova.virt.libvirt.driver [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deletion of /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_del complete
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.684 244018 INFO nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 10.55 seconds to destroy the instance on the hypervisor.
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.684 244018 DEBUG oslo.service.loopingcall [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.685 244018 DEBUG nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:35:46 compute-0 nova_compute[244014]: 2026-02-25 12:35:46.685 244018 DEBUG nova.network.neutron [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:35:46 compute-0 sudo[322790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:35:46 compute-0 sudo[322790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:46 compute-0 sudo[322790]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:46 compute-0 sudo[322815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:35:46 compute-0 sudo[322815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:47 compute-0 sudo[322815]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:35:47 compute-0 sudo[322871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:35:47 compute-0 sudo[322871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:47 compute-0 sudo[322871]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:47 compute-0 sudo[322896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:35:47 compute-0 sudo[322896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:47 compute-0 podman[322933]: 2026-02-25 12:35:47.847316522 +0000 UTC m=+0.105469862 container create e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:35:47 compute-0 podman[322933]: 2026-02-25 12:35:47.763153543 +0000 UTC m=+0.021306913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:47 compute-0 systemd[1]: Started libpod-conmon-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope.
Feb 25 12:35:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:47 compute-0 podman[322933]: 2026-02-25 12:35:47.987422332 +0000 UTC m=+0.245575702 container init e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:35:47 compute-0 podman[322933]: 2026-02-25 12:35:47.994254255 +0000 UTC m=+0.252407595 container start e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:35:47 compute-0 tender_keldysh[322948]: 167 167
Feb 25 12:35:47 compute-0 systemd[1]: libpod-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope: Deactivated successfully.
Feb 25 12:35:48 compute-0 podman[322933]: 2026-02-25 12:35:48.025057056 +0000 UTC m=+0.283210426 container attach e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:35:48 compute-0 podman[322933]: 2026-02-25 12:35:48.026859497 +0000 UTC m=+0.285012837 container died e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:35:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 7.9 KiB/s wr, 57 op/s
Feb 25 12:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-66f2c4fcd165cf095617d9912ddaab2230cf583c7b47dd64c592b1c36006a7d7-merged.mount: Deactivated successfully.
Feb 25 12:35:48 compute-0 podman[322953]: 2026-02-25 12:35:48.15684664 +0000 UTC m=+0.143953099 container remove e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:35:48 compute-0 systemd[1]: libpod-conmon-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope: Deactivated successfully.
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.202 244018 DEBUG nova.network.neutron [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.225 244018 INFO nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 1.54 seconds to deallocate network for instance.
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.267 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.268 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.324 244018 DEBUG oslo_concurrency.processutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:48 compute-0 podman[322976]: 2026-02-25 12:35:48.343574008 +0000 UTC m=+0.097514577 container create aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:35:48 compute-0 podman[322976]: 2026-02-25 12:35:48.265728758 +0000 UTC m=+0.019669387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.372 244018 DEBUG nova.compute.manager [req-1917102e-ddb4-4446-bffb-726c4c557e10 req-a2824ee0-ceb8-427a-b9cb-8ab8e942f3f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-deleted-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:35:48 compute-0 systemd[1]: Started libpod-conmon-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope.
Feb 25 12:35:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:48 compute-0 podman[322976]: 2026-02-25 12:35:48.628676506 +0000 UTC m=+0.382617135 container init aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:35:48 compute-0 podman[322976]: 2026-02-25 12:35:48.634872981 +0000 UTC m=+0.388813570 container start aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:35:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:35:48 compute-0 ceph-mon[76335]: pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 7.9 KiB/s wr, 57 op/s
Feb 25 12:35:48 compute-0 podman[322976]: 2026-02-25 12:35:48.812235353 +0000 UTC m=+0.566175942 container attach aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:35:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863393853' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.947 244018 DEBUG oslo_concurrency.processutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.952 244018 DEBUG nova.compute.provider_tree [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.966 244018 DEBUG nova.scheduler.client.report [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:35:48 compute-0 nova_compute[244014]: 2026-02-25 12:35:48.988 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:49 compute-0 nova_compute[244014]: 2026-02-25 12:35:49.017 244018 INFO nova.scheduler.client.report [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Deleted allocations for instance aeaad9e2-4ad0-46fb-b619-7ca2c78443a8
Feb 25 12:35:49 compute-0 nova_compute[244014]: 2026-02-25 12:35:49.093 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:49 compute-0 happy_booth[322994]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:35:49 compute-0 happy_booth[322994]: --> All data devices are unavailable
Feb 25 12:35:49 compute-0 systemd[1]: libpod-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope: Deactivated successfully.
Feb 25 12:35:49 compute-0 podman[323035]: 2026-02-25 12:35:49.159190089 +0000 UTC m=+0.026462799 container died aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3-merged.mount: Deactivated successfully.
Feb 25 12:35:49 compute-0 podman[323035]: 2026-02-25 12:35:49.53957632 +0000 UTC m=+0.406848970 container remove aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:35:49 compute-0 systemd[1]: libpod-conmon-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope: Deactivated successfully.
Feb 25 12:35:49 compute-0 sudo[322896]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:49 compute-0 sudo[323050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:35:49 compute-0 sudo[323050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:49 compute-0 sudo[323050]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:49 compute-0 sudo[323075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:35:49 compute-0 sudo[323075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2863393853' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:49 compute-0 nova_compute[244014]: 2026-02-25 12:35:49.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.026334907 +0000 UTC m=+0.077919303 container create 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:49.972821764 +0000 UTC m=+0.024406160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Feb 25 12:35:50 compute-0 systemd[1]: Started libpod-conmon-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope.
Feb 25 12:35:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.170543213 +0000 UTC m=+0.222127639 container init 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.176399258 +0000 UTC m=+0.227983644 container start 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:35:50 compute-0 hardcore_bassi[323127]: 167 167
Feb 25 12:35:50 compute-0 systemd[1]: libpod-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope: Deactivated successfully.
Feb 25 12:35:50 compute-0 conmon[323127]: conmon 950065b982a67a7dec3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope/container/memory.events
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.20300179 +0000 UTC m=+0.254586216 container attach 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.203518415 +0000 UTC m=+0.255102811 container died 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6fdf26c9bae01860ca844447f16f72b3122f6afb3b909d91ca18a5d896e7871-merged.mount: Deactivated successfully.
Feb 25 12:35:50 compute-0 podman[323111]: 2026-02-25 12:35:50.39086937 +0000 UTC m=+0.442453766 container remove 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:35:50 compute-0 systemd[1]: libpod-conmon-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope: Deactivated successfully.
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.542929577 +0000 UTC m=+0.065931414 container create 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.496656359 +0000 UTC m=+0.019658216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:50 compute-0 systemd[1]: Started libpod-conmon-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope.
Feb 25 12:35:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.68843379 +0000 UTC m=+0.211435647 container init 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.694934823 +0000 UTC m=+0.217936670 container start 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.719300702 +0000 UTC m=+0.242302559 container attach 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]: {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     "0": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "devices": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "/dev/loop3"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             ],
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_name": "ceph_lv0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_size": "21470642176",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "name": "ceph_lv0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "tags": {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_name": "ceph",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.crush_device_class": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.encrypted": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.objectstore": "bluestore",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_id": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.vdo": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.with_tpm": "0"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             },
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "vg_name": "ceph_vg0"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         }
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     ],
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     "1": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "devices": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "/dev/loop4"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             ],
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_name": "ceph_lv1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_size": "21470642176",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "name": "ceph_lv1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "tags": {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_name": "ceph",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.crush_device_class": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.encrypted": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.objectstore": "bluestore",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_id": "1",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.vdo": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.with_tpm": "0"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             },
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "vg_name": "ceph_vg1"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         }
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     ],
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     "2": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "devices": [
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "/dev/loop5"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             ],
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_name": "ceph_lv2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_size": "21470642176",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "name": "ceph_lv2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "tags": {
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.cluster_name": "ceph",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.crush_device_class": "",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.encrypted": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.objectstore": "bluestore",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osd_id": "2",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.vdo": "0",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:                 "ceph.with_tpm": "0"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             },
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "type": "block",
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:             "vg_name": "ceph_vg2"
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:         }
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]:     ]
Feb 25 12:35:50 compute-0 vigilant_jackson[323167]: }
Feb 25 12:35:50 compute-0 systemd[1]: libpod-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope: Deactivated successfully.
Feb 25 12:35:50 compute-0 podman[323151]: 2026-02-25 12:35:50.981845172 +0000 UTC m=+0.504847019 container died 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:35:51 compute-0 ceph-mon[76335]: pgmap v1621: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Feb 25 12:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897-merged.mount: Deactivated successfully.
Feb 25 12:35:51 compute-0 podman[323151]: 2026-02-25 12:35:51.424680377 +0000 UTC m=+0.947682244 container remove 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 12:35:51 compute-0 systemd[1]: libpod-conmon-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope: Deactivated successfully.
Feb 25 12:35:51 compute-0 sudo[323075]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:51 compute-0 sudo[323190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:35:51 compute-0 sudo[323190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:51 compute-0 sudo[323190]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:51 compute-0 sudo[323215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:35:51 compute-0 sudo[323215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:51 compute-0 nova_compute[244014]: 2026-02-25 12:35:51.574 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022936.5731418, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:35:51 compute-0 nova_compute[244014]: 2026-02-25 12:35:51.575 244018 INFO nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Stopped (Lifecycle Event)
Feb 25 12:35:51 compute-0 nova_compute[244014]: 2026-02-25 12:35:51.598 244018 DEBUG nova.compute.manager [None req-b30990db-df00-4615-ba8f-cd2e9d9ea234 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:35:51 compute-0 nova_compute[244014]: 2026-02-25 12:35:51.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:51 compute-0 podman[323252]: 2026-02-25 12:35:51.817587371 +0000 UTC m=+0.018406861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:52 compute-0 podman[323252]: 2026-02-25 12:35:52.016462212 +0000 UTC m=+0.217281692 container create 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:35:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 153 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Feb 25 12:35:52 compute-0 systemd[1]: Started libpod-conmon-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope.
Feb 25 12:35:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:52 compute-0 ceph-mon[76335]: pgmap v1622: 305 pgs: 305 active+clean; 153 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Feb 25 12:35:52 compute-0 podman[323252]: 2026-02-25 12:35:52.702184703 +0000 UTC m=+0.903004273 container init 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:35:52 compute-0 podman[323252]: 2026-02-25 12:35:52.710401165 +0000 UTC m=+0.911220675 container start 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:35:52 compute-0 thirsty_herschel[323268]: 167 167
Feb 25 12:35:52 compute-0 systemd[1]: libpod-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope: Deactivated successfully.
Feb 25 12:35:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:53 compute-0 podman[323252]: 2026-02-25 12:35:53.13553957 +0000 UTC m=+1.336359150 container attach 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:35:53 compute-0 podman[323252]: 2026-02-25 12:35:53.136032454 +0000 UTC m=+1.336851964 container died 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-749c2b5acc00ca6f5db3a98132a0451fae225059b634177f9882234c7883b41f-merged.mount: Deactivated successfully.
Feb 25 12:35:53 compute-0 nova_compute[244014]: 2026-02-25 12:35:53.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Feb 25 12:35:54 compute-0 podman[323252]: 2026-02-25 12:35:54.232904955 +0000 UTC m=+2.433724445 container remove 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:35:54 compute-0 systemd[1]: libpod-conmon-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope: Deactivated successfully.
Feb 25 12:35:54 compute-0 podman[323293]: 2026-02-25 12:35:54.369905347 +0000 UTC m=+0.023675351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:35:54 compute-0 podman[323293]: 2026-02-25 12:35:54.63507725 +0000 UTC m=+0.288847194 container create 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:35:54 compute-0 ceph-mon[76335]: pgmap v1623: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Feb 25 12:35:54 compute-0 systemd[1]: Started libpod-conmon-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope.
Feb 25 12:35:54 compute-0 nova_compute[244014]: 2026-02-25 12:35:54.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:55 compute-0 podman[323293]: 2026-02-25 12:35:55.150346953 +0000 UTC m=+0.804116947 container init 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:35:55 compute-0 podman[323293]: 2026-02-25 12:35:55.156937399 +0000 UTC m=+0.810707353 container start 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:35:55 compute-0 podman[323293]: 2026-02-25 12:35:55.288104946 +0000 UTC m=+0.941874890 container attach 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:35:55 compute-0 lvm[323391]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:35:55 compute-0 lvm[323388]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:35:55 compute-0 lvm[323388]: VG ceph_vg0 finished
Feb 25 12:35:55 compute-0 lvm[323391]: VG ceph_vg2 finished
Feb 25 12:35:55 compute-0 lvm[323390]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:35:55 compute-0 lvm[323390]: VG ceph_vg1 finished
Feb 25 12:35:55 compute-0 quirky_chaplygin[323310]: {}
Feb 25 12:35:55 compute-0 systemd[1]: libpod-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Deactivated successfully.
Feb 25 12:35:55 compute-0 podman[323293]: 2026-02-25 12:35:55.914073138 +0000 UTC m=+1.567843102 container died 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:35:55 compute-0 systemd[1]: libpod-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Consumed 1.044s CPU time.
Feb 25 12:35:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 12:35:56 compute-0 nova_compute[244014]: 2026-02-25 12:35:56.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:35:56 compute-0 ceph-mon[76335]: pgmap v1624: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 12:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8-merged.mount: Deactivated successfully.
Feb 25 12:35:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:35:57 compute-0 podman[323293]: 2026-02-25 12:35:57.801052879 +0000 UTC m=+3.454822833 container remove 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:35:57 compute-0 sudo[323215]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:35:57 compute-0 systemd[1]: libpod-conmon-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Deactivated successfully.
Feb 25 12:35:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:35:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.105 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.106 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.124 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.210 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.211 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.219 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.220 244018 INFO nova.compute.claims [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:35:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:58 compute-0 sudo[323405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.369 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:58 compute-0 sudo[323405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:35:58 compute-0 sudo[323405]: pam_unix(sudo:session): session closed for user root
Feb 25 12:35:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:35:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47908807' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.892 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.900 244018 DEBUG nova.compute.provider_tree [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.924 244018 DEBUG nova.scheduler.client.report [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
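
The inventory dict above is what placement uses to size this resource provider; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A tiny worked check against the exact values in the log:

    # Capacity check for the inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
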
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.951 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:58 compute-0 nova_compute[244014]: 2026-02-25 12:35:58.952 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.018 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.018 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.053 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.073 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:35:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:59 compute-0 ceph-mon[76335]: pgmap v1625: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 12:35:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:35:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/47908807' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.165 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.166 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.167 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating image(s)
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.493 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.521 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.545 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.549 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.589 244018 DEBUG nova.policy [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '546ffcfc9e27442d89b984a2d0e30dda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
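
The failed policy check above is an oslo.policy enforcement: the credentials carry only the reader and member roles, so an admin-scoped rule denies it. A minimal sketch of that check; the rule default ("role:admin") is an assumption about why it failed, not something the log states:

    # Minimal oslo.policy check mirroring the failed authorization above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed default

    creds = {'roles': ['reader', 'member'],
             'project_id': '6c30db4c9b5f480dbb12cc23f1604de1'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
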
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.660 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
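
The prlimit wrapper in the command above caps address space (1 GiB) and CPU time (30 s) before running qemu-img, so a malformed image cannot exhaust the compute host. A sketch re-issuing the same guarded call; the limit values are copied from the log line, and the parsed keys are standard qemu-img JSON output:

    # qemu-img info under oslo's prlimit wrapper, as in the log above.
    import json, subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    out = subprocess.run(
        ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
         '--as=1073741824', '--cpu=30', '--',
         'env', 'LC_ALL=C', 'LANG=C',
         'qemu-img', 'info', base, '--force-share', '--output=json'],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. qcow2 and size in bytes
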
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.662 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.685 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.688 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 be11f836-327c-447c-865a-0088e0554c0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:35:59 compute-0 nova_compute[244014]: 2026-02-25 12:35:59.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 25 12:36:00 compute-0 ceph-mon[76335]: pgmap v1626: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 25 12:36:01 compute-0 nova_compute[244014]: 2026-02-25 12:36:01.077 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Successfully created port: 3fb1749a-7239-482e-939c-ec165690b798 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:01 compute-0 nova_compute[244014]: 2026-02-25 12:36:01.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 161 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 266 KiB/s wr, 4 op/s
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.191 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Successfully updated port: 3fb1749a-7239-482e-939c-ec165690b798 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquired lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.409 244018 DEBUG nova.compute.manager [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-changed-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.410 244018 DEBUG nova.compute.manager [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Refreshing instance network info cache due to event network-changed-3fb1749a-7239-482e-939c-ec165690b798. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.410 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.654 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.678 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 be11f836-327c-447c-865a-0088e0554c0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:02 compute-0 nova_compute[244014]: 2026-02-25 12:36:02.737 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] resizing rbd image be11f836-327c-447c-865a-0088e0554c0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
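
After the "rbd import" above completes, the image is grown to the flavor's 1 GiB root disk, which is the resize logged here. A sketch of the same resize through the python librbd bindings; pool, client id, image name, and conf path are taken from the log, while the binding call names are standard librbd and assumed correct for this release:

    # Resize the freshly imported RBD image to 1073741824 bytes,
    # mirroring the "resizing rbd image" line above.
    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'be11f836-327c-447c-865a-0088e0554c0d_disk') as image:
            image.resize(1 * 1024**3)  # bytes
    finally:
        ioctx.close()
        cluster.shutdown()
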
Feb 25 12:36:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:03 compute-0 ceph-mon[76335]: pgmap v1627: 305 pgs: 305 active+clean; 161 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 266 KiB/s wr, 4 op/s
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.514 244018 DEBUG nova.objects.instance [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'migration_context' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.532 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.533 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Ensure instance console log exists: /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:03 compute-0 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.304 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.343 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Releasing lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.343 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance network_info: |[{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
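
The network_info blob above is the per-instance cache structure nova carries around. A sketch walking exactly that shape to pull out the fixed IPs and MTU; the inline JSON is a trimmed copy of the logged structure, with field names unchanged:

    # Walk a trimmed copy of the network_info structure shown above.
    import json

    network_info = json.loads('''[{"id": "3fb1749a-7239-482e-939c-ec165690b798",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"ips": [{"address": "10.100.0.14",
                                        "type": "fixed"}]}]}}]''')
    for vif in network_info:
        mtu = vif['network']['meta']['mtu']
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    print(vif['id'], ip['address'], 'mtu', mtu)
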
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.344 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.345 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Refreshing network info cache for port 3fb1749a-7239-482e-939c-ec165690b798 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start _get_guest_xml network_info=[{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.355 244018 WARNING nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.364 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.365 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.374 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.374 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
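
The two probes above first look for a cgroup v1 cpu controller, then fall back to v2; on a unified-hierarchy host like this one, the v2 check amounts to reading the controller list from a single file. A sketch of that check, assuming the standard v2 mount point:

    # cgroup v2 probe akin to the host.py checks logged above: the
    # unified hierarchy lists available controllers in one flat file.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        controllers = Path(root, 'cgroup.controllers')
        if not controllers.exists():
            return False  # not a cgroup v2 (unified) mount
        return 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
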
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.378 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
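
The topology walk above starts from an unconstrained preference (0:0:0), caps each axis at 65536, and enumerates factorizations of the flavor's vCPU count; for 1 vCPU the only candidate is sockets=1, cores=1, threads=1. A sketch reproducing just the feasibility filter (the real nova code also orders candidates by preference):

    # Enumerate (sockets, cores, threads) triples that multiply to
    # vcpus, within the limits from the log (65536 each). For vcpus=1
    # this yields [(1, 1, 1)], matching "Possible topologies" above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        out = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    out.append((s, c, t))
        return out

    print(possible_topologies(1))  # [(1, 1, 1)]
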
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.381 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:04 compute-0 ceph-mon[76335]: pgmap v1628: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425467780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.965 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
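
The "ceph mon dump --format=json" call above is how the libvirt driver learns the monitor addresses that later appear as <host> elements in the disk XML. A sketch parsing the monitor list; the "mons"/"public_addr" keys follow current Ceph JSON output and are an assumption here:

    # Parse the monitor map the driver fetched above.
    import json, subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    monmap = json.loads(out)
    for mon in monmap['mons']:
        print(mon['name'], mon.get('public_addr'))
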
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.985 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:04 compute-0 nova_compute[244014]: 2026-02-25 12:36:04.989 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.331 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.332 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.368 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.474 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.475 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.483 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.484 244018 INFO nova.compute.claims [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:36:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133009143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.534 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.535 244018 DEBUG nova.virt.libvirt.vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:59Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.537 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.539 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
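
The conversion above turns nova's VIF dict into an os-vif VIFOpenVSwitch object so the ovs plugin can plug it. A sketch constructing the same object directly with os-vif's model classes; the field names and values are copied from the logged repr, and building it by hand (rather than via nova's converter) is an illustration only:

    # Build the VIFOpenVSwitch object shown above and hand it to os-vif.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_model

    vif = vif_model.VIFOpenVSwitch(
        id='3fb1749a-7239-482e-939c-ec165690b798',
        address='fa:16:3e:9d:e4:ea',
        bridge_name='br-int',
        vif_name='tap3fb1749a-72',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    info = instance_info.InstanceInfo(
        uuid='be11f836-327c-447c-865a-0088e0554c0d',
        name='tempest-InstanceActionsV221TestJSON-server-1424139914')
    os_vif.initialize()
    os_vif.plug(vif, info)  # delegates to the 'ovs' plugin
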
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.541 244018 DEBUG nova.objects.instance [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'pci_devices' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <uuid>be11f836-327c-447c-865a-0088e0554c0d</uuid>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <name>instance-00000058</name>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1424139914</nova:name>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:04</nova:creationTime>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:user uuid="546ffcfc9e27442d89b984a2d0e30dda">tempest-InstanceActionsV221TestJSON-203832490-project-member</nova:user>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:project uuid="6c30db4c9b5f480dbb12cc23f1604de1">tempest-InstanceActionsV221TestJSON-203832490</nova:project>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <nova:port uuid="3fb1749a-7239-482e-939c-ec165690b798">
Feb 25 12:36:05 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="serial">be11f836-327c-447c-865a-0088e0554c0d</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="uuid">be11f836-327c-447c-865a-0088e0554c0d</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/be11f836-327c-447c-865a-0088e0554c0d_disk">
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/be11f836-327c-447c-865a-0088e0554c0d_disk.config">
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:9d:e4:ea"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <target dev="tap3fb1749a-72"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/console.log" append="off"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:05 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:05 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:05 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:05 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:05 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
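
[editor's note] The driver has just logged the complete generated domain XML (ending above at _get_guest_xml). A minimal sketch, assuming libvirt-python and a locally reachable libvirtd at the usual qemu:///system URI, of fetching that same definition back for inspection; the UUID is taken from this log:

    # Sketch: read back the guest's domain XML with libvirt-python,
    # equivalent to `virsh dumpxml`. qemu:///system is an assumption.
    import libvirt

    UUID = "be11f836-327c-447c-865a-0088e0554c0d"  # instance UUID from this log

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(UUID)
        print(dom.XMLDesc(0))  # live definition as libvirt stores it
    finally:
        conn.close()
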
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.579 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Preparing to wait for external event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
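
[editor's note] The three lockutils lines above are the standard oslo.concurrency pattern: a named lock is acquired, the callback runs, and the release is logged with the hold time. A minimal sketch of the same pattern (lock name copied from the log; the wrapped function is illustrative):

    # Sketch: the oslo.concurrency named-lock pattern behind the
    # "Acquiring lock / acquired / released" triplet above.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "be11f836-327c-447c-865a-0088e0554c0d"

    @lockutils.synchronized(f"{INSTANCE_UUID}-events")
    def _create_or_get_event():
        # runs while holding the named in-process lock; entry and exit
        # are logged at DEBUG just like lockutils.py:404/409/423 above
        pass

    _create_or_get_event()
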
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.581 244018 DEBUG nova.virt.libvirt.vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:59Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.581 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.582 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.582 244018 DEBUG os_vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
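
[editor's note] os_vif's plug() is the call being logged here. A minimal sketch of driving it directly, with field values copied from the VIFOpenVSwitch repr above; the objects are abbreviated (a real plug also carries the Network and port_profile shown in the log), so treat this as an outline rather than a guaranteed-working plug:

    # Sketch: plugging an OVS VIF through os-vif, per the
    # "Plugging vif VIFOpenVSwitch(...)" line above. Abbreviated.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    inst = instance_info.InstanceInfo(
        uuid="be11f836-327c-447c-865a-0088e0554c0d",
        name="instance-00000058")
    v = vif.VIFOpenVSwitch(
        id="3fb1749a-7239-482e-939c-ec165690b798",
        address="fa:16:3e:9d:e4:ea",
        bridge_name="br-int",
        vif_name="tap3fb1749a-72")

    os_vif.plug(v, inst)  # results in the ovsdbapp txn logged below
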
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.583 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.584 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fb1749a-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.589 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fb1749a-72, col_values=(('external_ids', {'iface-id': '3fb1749a-7239-482e-939c-ec165690b798', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e4:ea', 'vm-uuid': 'be11f836-327c-447c-865a-0088e0554c0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
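
[editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands batched into OVSDB transactions. A minimal sketch of issuing the same port transaction with ovsdbapp's Open_vSwitch API; the ovsdb-server unix socket path is an assumption:

    # Sketch: the ovsdbapp transaction behind "Running txn n=1
    # command(idx=0/idx=1)" above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            "unix:/var/run/openvswitch/db.sock", "Open_vSwitch"),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    external_ids = {
        "iface-id": "3fb1749a-7239-482e-939c-ec165690b798",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:9d:e4:ea",
        "vm-uuid": "be11f836-327c-447c-865a-0088e0554c0d",
    }
    # one transaction, two commands, mirroring idx=0 and idx=1 above
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap3fb1749a-72", may_exist=True))
        txn.add(api.db_set("Interface", "tap3fb1749a-72",
                           ("external_ids", external_ids)))
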
Feb 25 12:36:05 compute-0 NetworkManager[49836]: <info>  [1772022965.5918] manager: (tap3fb1749a-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.599 244018 INFO os_vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72')
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.679 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.731 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.732 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.732 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No VIF found with MAC fa:16:3e:9d:e4:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.733 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Using config drive
Feb 25 12:36:05 compute-0 nova_compute[244014]: 2026-02-25 12:36:05.888 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
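
[editor's note] The "rbd image ... does not exist" DEBUG is the result of probing the image and catching the not-found error. A minimal sketch of the same probe with the python rados/rbd bindings, using the client id, conf path, pool, and image name from this log:

    # Sketch: existence probe behind "rbd image ... does not exist".
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        name = "be11f836-327c-447c-865a-0088e0554c0d_disk.config"
        try:
            with rbd.Image(ioctx, name, read_only=True):
                print("image exists")
        except rbd.ImageNotFound:
            print(f"rbd image {name} does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()
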
Feb 25 12:36:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3425467780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3133009143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 12:36:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238565797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.318 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
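
[editor's note] The pool usage Nova needs comes out of that `ceph df --format=json` run. A sketch of invoking and parsing it the same way; the JSON key names ("stats", "total_avail_bytes") are as found in recent Ceph releases and should be checked against the deployed version:

    # Sketch: run and parse `ceph df --format=json` as the CMD above does.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.1f} GiB")
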
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.324 244018 DEBUG nova.compute.provider_tree [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.339 244018 DEBUG nova.scheduler.client.report [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
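
[editor's note] Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. A worked check against the numbers reported above:

    # Sketch: effective capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # prints: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
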
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.380 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.381 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.387 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating config drive at /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.391 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoc37fecw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.479 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.534 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoc37fecw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
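
[editor's note] The config drive is an ordinary ISO9660 image carrying the config-2 volume label that cloud-init probes for. A sketch reproducing the logged mkisofs invocation (the publisher string is a single argument; oslo's CMD line above prints it unquoted). The /tmp path is the staged metadata tree from this run:

    # Sketch: rebuild the config-drive ISO with the flags logged above.
    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/"
              "be11f836-327c-447c-865a-0088e0554c0d/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",      # volume label cloud-init looks for
        "/tmp/tmpoc37fecw",    # staged metadata tree from this run
    ], check=True)
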
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.557 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.560 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config be11f836-327c-447c-865a-0088e0554c0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.731 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.753 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.845 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.846 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.847 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating image(s)
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.870 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.892 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.910 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.913 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.942 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updated VIF entry in instance network info cache for port 3fb1749a-7239-482e-939c-ec165690b798. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.943 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.945 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config be11f836-327c-447c-865a-0088e0554c0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.945 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deleting local config drive /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config because it was imported into RBD.
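
[editor's note] Import-then-delete is the pair of steps just logged: push the local ISO into the vms pool, then drop the on-disk copy. A minimal sketch of the same sequence:

    # Sketch: import the config drive into RBD, then delete the local
    # copy, mirroring the CMD and INFO lines above.
    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "be11f836-327c-447c-865a-0088e0554c0d/disk.config")
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local,
         "be11f836-327c-447c-865a-0088e0554c0d_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(local)  # "Deleting local config drive ... imported into RBD"
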
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.960 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.973 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
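
[editor's note] The base-image check is qemu-img info in JSON mode (wrapped above in oslo's prlimit guard to cap address space and CPU time). A sketch of the same probe without the limits:

    # Sketch: probe the cached base image as the prlimit-wrapped CMD
    # above does; --force-share allows reading an in-use image.
    import json
    import subprocess

    path = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])
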
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.974 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:06 compute-0 nova_compute[244014]: 2026-02-25 12:36:06.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:06 compute-0 kernel: tap3fb1749a-72: entered promiscuous mode
Feb 25 12:36:06 compute-0 NetworkManager[49836]: <info>  [1772022966.9803] manager: (tap3fb1749a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Feb 25 12:36:06 compute-0 ovn_controller[147040]: 2026-02-25T12:36:06Z|00892|binding|INFO|Claiming lport 3fb1749a-7239-482e-939c-ec165690b798 for this chassis.
Feb 25 12:36:06 compute-0 ovn_controller[147040]: 2026-02-25T12:36:06Z|00893|binding|INFO|3fb1749a-7239-482e-939c-ec165690b798: Claiming fa:16:3e:9d:e4:ea 10.100.0.14
Feb 25 12:36:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.994 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e4:ea 10.100.0.14'], port_security=['fa:16:3e:9d:e4:ea 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be11f836-327c-447c-865a-0088e0554c0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefc26c0-533a-476c-be4d-dc2b231c8685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1c7a807-e7ae-4ff5-a4f8-0f24bfa78da8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ffc0757-5bda-4254-9536-399017e8e195, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3fb1749a-7239-482e-939c-ec165690b798) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.995 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb1749a-7239-482e-939c-ec165690b798 in datapath aefc26c0-533a-476c-be4d-dc2b231c8685 bound to our chassis
Feb 25 12:36:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.996 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aefc26c0-533a-476c-be4d-dc2b231c8685
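
[editor's note] The "Matched UPDATE: PortBindingUpdatedEvent" line two entries up is ovsdbapp's row-event machinery firing on the Port_Binding table. A minimal sketch of an event class shaped like it; the chassis test here is simplified (the real agent compares the row against its own chassis record):

    # Sketch: an ovsdbapp row event resembling the matched
    # PortBindingUpdatedEvent above. Matching logic is simplified.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old=None):
            # fire when the port gains a chassis binding; `old` carries
            # only the changed columns, hence the getattr default
            return bool(row.chassis) and not getattr(old, "chassis", None)

        def run(self, event, row, old):
            print(f"Port {row.logical_port} bound to {self.chassis_name}")
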
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.001 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7909c137-7f72-45e0-9b9f-2384c8b29e7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.006 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaefc26c0-51 in ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
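
[editor's note] Creating the VETH means: make a veth pair in the root namespace and push one end into the ovnmeta- namespace; the -50 end stays behind for the br-int hookup logged below. A minimal sketch with pyroute2 (run as root; assumes the namespace does not exist yet):

    # Sketch: veth pair with one end moved into the metadata namespace,
    # as the privsep'd ip_lib calls above arrange. Names from this log.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685"
    netns.create(ns)

    ipr = IPRoute()
    ipr.link("add", ifname="tapaefc26c0-50", kind="veth",
             peer="tapaefc26c0-51")
    idx = ipr.link_lookup(ifname="tapaefc26c0-51")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # -51 goes into the netns
    ipr.close()
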
Feb 25 12:36:07 compute-0 systemd-udevd[323851]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.008 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaefc26c0-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4cbae0-e746-4ac5-820d-833679a4e718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_controller[147040]: 2026-02-25T12:36:07Z|00894|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 ovn-installed in OVS
Feb 25 12:36:07 compute-0 ovn_controller[147040]: 2026-02-25T12:36:07Z|00895|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 up in Southbound
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.010 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[793f195c-0f62-4248-b98c-da7c97f0c180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.011 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:07 compute-0 systemd-machined[210048]: New machine qemu-113-instance-00000058.
Feb 25 12:36:07 compute-0 NetworkManager[49836]: <info>  [1772022967.0194] device (tap3fb1749a-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:36:07 compute-0 NetworkManager[49836]: <info>  [1772022967.0202] device (tap3fb1749a-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.023 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4abfbc-3e7f-442d-80cb-8e5aa7ba26ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-00000058.
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2c02e1-438b-43af-a39f-8c0a571acae1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.062 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e48f5bce-a476-4042-a074-beb7486ddb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 systemd-udevd[323855]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:36:07 compute-0 NetworkManager[49836]: <info>  [1772022967.0671] manager: (tapaefc26c0-50): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.068 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85f26b94-1747-4e6f-99c1-9daae2140759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ceph-mon[76335]: pgmap v1629: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 12:36:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3238565797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.088 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebb0a4-402d-4bff-9845-d65fa1f8d8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.091 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[944454bb-3148-403c-b87f-6f79f8dda383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 NetworkManager[49836]: <info>  [1772022967.1087] device (tapaefc26c0-50): carrier: link connected
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.111 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[da396934-2e70-49a9-b9f2-cf8a851f074a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27dec32b-2a37-4c6e-8b6a-e3d4a379631e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefc26c0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:0c:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493651, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323900, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[145a3de4-f567-485c-8c0a-1ede4fbf7bb6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:cc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493651, 'tstamp': 493651}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323901, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81573cb8-929d-403d-b96a-8ea848c9c42c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefc26c0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:0c:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493651, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323902, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.170 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d090a90-24f8-43ed-81eb-d161f5ae7dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.208 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59e1b273-7985-4233-9c7c-7dfe4efdd911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.209 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefc26c0-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.209 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.210 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaefc26c0-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:07 compute-0 NetworkManager[49836]: <info>  [1772022967.2131] manager: (tapaefc26c0-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Feb 25 12:36:07 compute-0 kernel: tapaefc26c0-50: entered promiscuous mode
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.216 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaefc26c0-50, col_values=(('external_ids', {'iface-id': '25f1e81e-7d2e-4698-8a33-915eba7603a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:07 compute-0 ovn_controller[147040]: 2026-02-25T12:36:07Z|00896|binding|INFO|Releasing lport 25f1e81e-7d2e-4698-8a33-915eba7603a7 from this chassis (sb_readonly=0)
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.218 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.219 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7df206-826a-4999-a06a-23c3975c4d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.219 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-aefc26c0-533a-476c-be4d-dc2b231c8685
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID aefc26c0-533a-476c-be4d-dc2b231c8685
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:36:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.220 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'env', 'PROCESS_TAG=haproxy-aefc26c0-533a-476c-be4d-dc2b231c8685', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aefc26c0-533a-476c-be4d-dc2b231c8685.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
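
[editor's note] Rootwrap aside, the launch is just haproxy started inside the metadata namespace with the config rendered above. A sketch of the same command via subprocess (run as root; rootwrap omitted):

    # Sketch: start the metadata haproxy in the OVN metadata namespace,
    # mirroring the rootwrap command logged above.
    import subprocess

    ns = "ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685"
    conf = ("/var/lib/neutron/ovn-metadata-proxy/"
            "aefc26c0-533a-476c-be4d-dc2b231c8685.conf")
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "env", "PROCESS_TAG=haproxy-aefc26c0-533a-476c-be4d-dc2b231c8685",
         "haproxy", "-f", conf],
        check=True)
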
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.388 244018 DEBUG nova.compute.manager [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.389 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.390 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.390 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.391 244018 DEBUG nova.compute.manager [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Processing event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
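Annotation: the five lines above show nova receiving the external network-vif-plugged event and popping it under a per-instance "-events" lock, which releases the spawn thread blocked in wait_for_instance_event (its completion is logged further below). A simplified threading sketch of that pop-and-signal pattern, not nova's actual classes:

    import threading

    # Simplified sketch of the pop_instance_event pattern in the log: a
    # waiter registers an event name; the external-event handler pops and
    # signals it under a per-instance lock.
    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # (instance_uuid, name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._events[(uuid, name)] = ev
                return ev

        def pop_and_signal(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("be11f836", "network-vif-plugged-3fb1749a")
    events.pop_and_signal("be11f836", "network-vif-plugged-3fb1749a")
    assert waiter.wait(timeout=1)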
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.603 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:07 compute-0 podman[323938]: 2026-02-25 12:36:07.546954079 +0000 UTC m=+0.022635261 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.702 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] resizing rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
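Annotation: the CMD line above imports the cached base image into the vms pool, and rbd_utils then grows it to the flavor's 1 GiB root disk (1073741824 bytes). A subprocess sketch of the two steps; nova performs the resize through librbd rather than the CLI, so the second call is an equivalent, not a transcript:

    import subprocess

    # Sketch mirroring the import in the log, plus a CLI-equivalent resize
    # to 1024 MB (rbd's --size default unit).
    base = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    image = "2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", base, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    subprocess.run(
        ["rbd", "resize", "--pool", "vms", image, "--size", "1024",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )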
Feb 25 12:36:07 compute-0 podman[323938]: 2026-02-25 12:36:07.706097877 +0000 UTC m=+0.181779019 container create 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:36:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:07 compute-0 systemd[1]: Started libpod-conmon-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope.
Feb 25 12:36:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f869d10a1ae0f882e6b7b906c9ac51192b52ba8e8071b619209b2a6cd377d6fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:07 compute-0 podman[323938]: 2026-02-25 12:36:07.879461066 +0000 UTC m=+0.355142238 container init 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:36:07 compute-0 podman[323938]: 2026-02-25 12:36:07.885429035 +0000 UTC m=+0.361110177 container start 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:36:07 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : New worker (324055) forked
Feb 25 12:36:07 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : Loading success.
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.910 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.9094691, be11f836-327c-447c-865a-0088e0554c0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.910 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Started (Lifecycle Event)
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.912 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.916 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.919 244018 INFO nova.virt.libvirt.driver [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance spawned successfully.
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.919 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.952 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.958 244018 DEBUG nova.objects.instance [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.963 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.966 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.966 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.967 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.967 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.968 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.968 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
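Annotation: the six "Found default for ..." lines record hardware-bus defaults being written back only for properties the image did not define, so later rebuilds and attaches keep stable bus types. A plain-dict sketch of that setdefault behaviour (nova uses ImageMeta objects, not dicts):

    # Sketch of the register-defaults step in the log: record a default
    # only for properties the image metadata left undefined.
    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props: dict) -> dict:
        found = {k: v for k, v in DEFAULTS.items() if k not in image_props}
        return {**image_props, **found}

    print(register_undefined({"hw_disk_bus": "scsi"}))
    # hw_disk_bus stays 'scsi'; the other five get the defaults above.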
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.974 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ensure instance console log exists: /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.976 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
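Annotation: the acquire/release pair around _allocate_mdevs is oslo.concurrency's named-lock pattern; the decorator form below is the usual way to write it (the function body here is a placeholder, not nova's mdev logic):

    from oslo_concurrency import lockutils

    # Sketch of the 'vgpu_resources' named lock in the log, using
    # oslo.concurrency's synchronized decorator.
    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Exclusive section: choose mediated devices without racing
        # other spawn threads in the same process.
        return []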
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.977 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.981 244018 WARNING nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.990 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.991 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.994 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
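Annotation: the host probe above misses the cgroup v1 CPU controller and then finds the v2 one. On a unified-hierarchy host that second check amounts to reading the enabled-controller list; a sketch under that assumption:

    from pathlib import Path

    # Sketch of the cgroup v2 probe implied by the log: the unified
    # hierarchy lists its enabled controllers in one file.
    def has_cgroupsv2_cpu_controller() -> bool:
        path = Path("/sys/fs/cgroup/cgroup.controllers")
        try:
            return "cpu" in path.read_text().split()
        except OSError:
            return False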
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:07 compute-0 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
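Annotation: with no flavor or image topology constraints (all the 0:0:0 lines above), the limits default to 65536 and the only topology covering one vCPU is 1 socket x 1 core x 1 thread. A simplified sketch of the enumeration (nova.virt.hardware's real search differs in detail):

    # Simplified sketch of the topology search in the log: enumerate
    # sockets*cores*threads combinations that exactly cover the vCPU
    # count, within the (defaulted) limits.
    def possible_topologies(vcpus, max_sockets=65536,
                            max_cores=65536, max_threads=65536):
        out = []
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        out.append((s, c, t))
        return out

    print(possible_topologies(1))   # [(1, 1, 1)], matching the log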
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.000 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.028 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.029 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.9096217, be11f836-327c-447c-865a-0088e0554c0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.030 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Paused (Lifecycle Event)
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.032 244018 INFO nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 8.87 seconds to spawn the instance on the hypervisor.
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.032 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.058 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.062 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.915422, be11f836-327c-447c-865a-0088e0554c0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.063 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Resumed (Lifecycle Event)
Feb 25 12:36:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.091 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.103 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.110 244018 INFO nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 9.93 seconds to build instance.
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.149 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999880895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.594 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
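Annotation: 'ceph mon dump --format=json' is how nova discovers monitor addresses before wiring them into the guest's rbd disks (the <host name="192.168.122.100" port="6789"/> elements in the domain XML further below). A sketch of running and parsing it; nova's rbd_utils does the equivalent with its own parsing:

    import json
    import subprocess

    # Sketch of the monitor-discovery step in the log: dump the monmap as
    # JSON and strip the /nonce suffix from each public address.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mons = json.loads(out)["mons"]
    addrs = [m["public_addr"].split("/")[0] for m in mons]
    print(addrs)   # e.g. ['192.168.122.100:6789'] on this host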
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.616 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.619 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.705 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.707 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.708 244018 INFO nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Terminating instance
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.709 244018 DEBUG nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:36:08 compute-0 kernel: tap3fb1749a-72 (unregistering): left promiscuous mode
Feb 25 12:36:08 compute-0 NetworkManager[49836]: <info>  [1772022968.9069] device (tap3fb1749a-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:08 compute-0 ovn_controller[147040]: 2026-02-25T12:36:08Z|00897|binding|INFO|Releasing lport 3fb1749a-7239-482e-939c-ec165690b798 from this chassis (sb_readonly=0)
Feb 25 12:36:08 compute-0 ovn_controller[147040]: 2026-02-25T12:36:08Z|00898|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 down in Southbound
Feb 25 12:36:08 compute-0 ovn_controller[147040]: 2026-02-25T12:36:08Z|00899|binding|INFO|Removing iface tap3fb1749a-72 ovn-installed in OVS
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e4:ea 10.100.0.14'], port_security=['fa:16:3e:9d:e4:ea 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be11f836-327c-447c-865a-0088e0554c0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefc26c0-533a-476c-be4d-dc2b231c8685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1c7a807-e7ae-4ff5-a4f8-0f24bfa78da8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ffc0757-5bda-4254-9536-399017e8e195, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3fb1749a-7239-482e-939c-ec165690b798) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:08 compute-0 nova_compute[244014]: 2026-02-25 12:36:08.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.934 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb1749a-7239-482e-939c-ec165690b798 in datapath aefc26c0-533a-476c-be4d-dc2b231c8685 unbound from our chassis
Feb 25 12:36:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.937 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aefc26c0-533a-476c-be4d-dc2b231c8685, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:36:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ade21f94-d5dd-4aed-9cdb-fb1023600597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.940 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 namespace which is not needed anymore
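Annotation: once the port above unbinds, _get_provision_params finds no valid VIFs on the datapath and the agent tears down the ovnmeta- namespace. The decision reduces to a keep-alive predicate roughly like the sketch below; the port dicts are hypothetical stand-ins for OVN Port_Binding rows, and the real check is more involved:

    # Sketch of the teardown decision in the log: keep the ovnmeta-
    # namespace only while at least one normal (type "") VIF port is
    # still bound to a chassis on this datapath.
    def namespace_needed(ports) -> bool:
        return any(p.get("type") == "" and p.get("chassis") for p in ports)

    if not namespace_needed([]):
        print("tearing down ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685")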
Feb 25 12:36:08 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000058.scope: Deactivated successfully.
Feb 25 12:36:08 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000058.scope: Consumed 1.548s CPU time.
Feb 25 12:36:08 compute-0 systemd-machined[210048]: Machine qemu-113-instance-00000058 terminated.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:09 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : haproxy version is 2.8.14-c23fe91
Feb 25 12:36:09 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : path to executable is /usr/sbin/haproxy
Feb 25 12:36:09 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [WARNING]  (324053) : Exiting Master process...
Feb 25 12:36:09 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [ALERT]    (324053) : Current worker (324055) exited with code 143 (Terminated)
Feb 25 12:36:09 compute-0 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [WARNING]  (324053) : All workers exited. Exiting... (0)
Feb 25 12:36:09 compute-0 systemd[1]: libpod-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope: Deactivated successfully.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.139 244018 INFO nova.virt.libvirt.driver [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance destroyed successfully.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.140 244018 DEBUG nova.objects.instance [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'resources' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781290701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:09 compute-0 podman[324163]: 2026-02-25 12:36:09.142655876 +0000 UTC m=+0.117168472 container died 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.157 244018 DEBUG nova.virt.libvirt.vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:08Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.158 244018 DEBUG nova.network.os_vif_util [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.159 244018 DEBUG nova.network.os_vif_util [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.160 244018 DEBUG os_vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.163 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fb1749a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.167 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.168 244018 DEBUG nova.objects.instance [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.172 244018 INFO os_vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72')
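Annotation: the unplug above is the DelPortCommand(if_exists=True) transaction logged at 12:36:09.163. os-vif drives it through ovsdbapp's native OVSDB connection; its CLI equivalent is a one-liner, sketched here for reference:

    import subprocess

    # Sketch: ovs-vsctl equivalent of the DelPortCommand transaction in
    # the log (--if-exists matches if_exists=True).
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int",
         "tap3fb1749a-72"],
        check=True,
    )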
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.201 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <uuid>2184a715-0ac8-4fc2-aa99-fae3b8e32edf</uuid>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <name>instance-00000059</name>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV254Test-server-567186647</nova:name>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:07</nova:creationTime>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:user uuid="f14f324492304e519e215bb56099abdd">tempest-ServerShowV254Test-1036115410-project-member</nova:user>
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <nova:project uuid="70a704d198834ace8c228d59139dd7b4">tempest-ServerShowV254Test-1036115410</nova:project>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="serial">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="uuid">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk">
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config">
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log" append="off"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:09 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:09 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:09 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:09 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:09 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:36:09 compute-0 ceph-mon[76335]: pgmap v1630: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:36:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3999880895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2781290701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.386 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.387 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.387 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Using config drive
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.420 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54-userdata-shm.mount: Deactivated successfully.
Feb 25 12:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f869d10a1ae0f882e6b7b906c9ac51192b52ba8e8071b619209b2a6cd377d6fd-merged.mount: Deactivated successfully.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.753 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating config drive at /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.757 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxft2u2uc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.795 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.796 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 WARNING nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received unexpected event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with vm_state active and task_state deleting.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.800 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.801 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.801 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.802 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.802 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.803 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.803 244018 WARNING nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received unexpected event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with vm_state active and task_state deleting.
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.900 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxft2u2uc" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.919 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.921 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:09 compute-0 nova_compute[244014]: 2026-02-25 12:36:09.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:36:10 compute-0 podman[324163]: 2026-02-25 12:36:10.155354518 +0000 UTC m=+1.129867144 container cleanup 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:36:10 compute-0 systemd[1]: libpod-conmon-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope: Deactivated successfully.
Feb 25 12:36:10 compute-0 ceph-mon[76335]: pgmap v1631: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:36:10 compute-0 podman[324279]: 2026-02-25 12:36:10.776862093 +0000 UTC m=+0.599623217 container remove 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.782 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12ba0a9c-a82c-41ed-9310-f19180aeace6]: (4, ('Wed Feb 25 12:36:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 (46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54)\n46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54\nWed Feb 25 12:36:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 (46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54)\n46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.784 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b59f20b2-50b3-443e-834d-735214da1bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.785 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefc26c0-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:10 compute-0 nova_compute[244014]: 2026-02-25 12:36:10.787 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:10 compute-0 kernel: tapaefc26c0-50: left promiscuous mode
Feb 25 12:36:10 compute-0 nova_compute[244014]: 2026-02-25 12:36:10.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e9002d-e37b-412a-9f9b-c4d84aaa2ff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.843 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9da5c1b6-d401-4004-90cd-9a34daf9bd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.844 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c26928-0615-4997-8549-c0a8d8d1e84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.859 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1938bd6-4d73-4b06-a3ab-418ffaecc5bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493646, 'reachable_time': 20756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324296, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.861 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:36:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.862 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b7d96f-33ab-4dfb-a293-04058118b746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:10 compute-0 systemd[1]: run-netns-ovnmeta\x2daefc26c0\x2d533a\x2d476c\x2dbe4d\x2ddc2b231c8685.mount: Deactivated successfully.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.096 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.097 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting local config drive /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config because it was imported into RBD.
Feb 25 12:36:11 compute-0 systemd-machined[210048]: New machine qemu-114-instance-00000059.
Feb 25 12:36:11 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-00000059.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.694 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022971.6934757, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.695 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Resumed (Lifecycle Event)
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.697 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.697 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.703 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance spawned successfully.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.703 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.733 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.738 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.740 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.740 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.786 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.787 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022971.6952932, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.787 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Started (Lifecycle Event)
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.820 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.823 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.834 244018 INFO nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 4.99 seconds to spawn the instance on the hypervisor.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.835 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.861 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.908 244018 INFO nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 6.48 seconds to build instance.
Feb 25 12:36:11 compute-0 nova_compute[244014]: 2026-02-25 12:36:11.928 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 211 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 25 12:36:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:12 compute-0 nova_compute[244014]: 2026-02-25 12:36:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:12 compute-0 nova_compute[244014]: 2026-02-25 12:36:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:36:12 compute-0 nova_compute[244014]: 2026-02-25 12:36:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:36:12 compute-0 nova_compute[244014]: 2026-02-25 12:36:12.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:36:12 compute-0 podman[324358]: 2026-02-25 12:36:12.91175474 +0000 UTC m=+0.054516572 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:36:13 compute-0 ceph-mon[76335]: pgmap v1632: 305 pgs: 305 active+clean; 211 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 25 12:36:13 compute-0 nova_compute[244014]: 2026-02-25 12:36:13.272 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:13 compute-0 nova_compute[244014]: 2026-02-25 12:36:13.272 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:13 compute-0 nova_compute[244014]: 2026-02-25 12:36:13.273 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:36:13 compute-0 nova_compute[244014]: 2026-02-25 12:36:13.273 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:13 compute-0 nova_compute[244014]: 2026-02-25 12:36:13.786 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.110 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.126 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.126 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:14 compute-0 ceph-mon[76335]: pgmap v1633: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 12:36:14 compute-0 podman[324377]: 2026-02-25 12:36:14.73146964 +0000 UTC m=+0.075402402 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.793 244018 INFO nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Rebuilding instance
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:14 compute-0 nova_compute[244014]: 2026-02-25 12:36:14.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.245 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.273 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.340 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.361 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.415 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'resources' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.441 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.458 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.462 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.945 244018 INFO nova.virt.libvirt.driver [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deleting instance files /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d_del
Feb 25 12:36:15 compute-0 nova_compute[244014]: 2026-02-25 12:36:15.947 244018 INFO nova.virt.libvirt.driver [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deletion of /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d_del complete
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.008 244018 INFO nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 7.30 seconds to destroy the instance on the hypervisor.
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.009 244018 DEBUG oslo.service.loopingcall [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.010 244018 DEBUG nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.010 244018 DEBUG nova.network.neutron [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:36:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Feb 25 12:36:16 compute-0 ceph-mon[76335]: pgmap v1634: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.960 244018 DEBUG nova.network.neutron [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:16 compute-0 nova_compute[244014]: 2026-02-25 12:36:16.977 244018 INFO nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 0.97 seconds to deallocate network for instance.
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.024 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.025 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.068 244018 DEBUG nova.compute.manager [req-685cd545-e79d-4889-bedc-d45662c2e5a5 req-afac249e-cefd-4dc4-a54e-1fa467ac4822 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-deleted-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.087 244018 DEBUG oslo_concurrency.processutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963544512' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.667 244018 DEBUG oslo_concurrency.processutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.674 244018 DEBUG nova.compute.provider_tree [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.693 244018 DEBUG nova.scheduler.client.report [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.718 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.747 244018 INFO nova.scheduler.client.report [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Deleted allocations for instance be11f836-327c-447c-865a-0088e0554c0d
Feb 25 12:36:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1963544512' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.812 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
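Note: the Acquiring/acquired/released triple above is the standard oslo.concurrency pattern; every "Lock ..." DEBUG line in this log is emitted by the lockutils inner() wrapper. A minimal sketch of the same pattern with the real API (the lock name is taken from the log):

    from oslo_concurrency import lockutils

    # Context-manager form: produces the same acquire/release DEBUG lines.
    with lockutils.lock("compute_resources"):
        pass  # critical section, e.g. resource-tracker bookkeeping

    # Decorator form used throughout nova:
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass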
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:36:17 compute-0 nova_compute[244014]: 2026-02-25 12:36:17.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Feb 25 12:36:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402022151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.496 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.757 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.758 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.889 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.890 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3666MB free_disk=59.96658870950341GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.890 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.891 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 2184a715-0ac8-4fc2-aa99-fae3b8e32edf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:36:18 compute-0 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.006 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:19 compute-0 ceph-mon[76335]: pgmap v1635: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Feb 25 12:36:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3402022151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.167 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031814701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.667 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.673 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.699 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
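Note: placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, so this host offers 32 VCPU, 7167 MB of RAM and 52.2 GB of disk to the scheduler. Worked out from the logged dict:

    # Values copied verbatim from the inventory line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2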
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.727 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.728 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:19 compute-0 nova_compute[244014]: 2026-02-25 12:36:19.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Feb 25 12:36:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb 25 12:36:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4031814701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:20 compute-0 ceph-mon[76335]: pgmap v1636: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Feb 25 12:36:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb 25 12:36:20 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb 25 12:36:20 compute-0 nova_compute[244014]: 2026-02-25 12:36:20.729 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:20 compute-0 nova_compute[244014]: 2026-02-25 12:36:20.729 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:21 compute-0 ceph-mon[76335]: osdmap e242: 3 total, 3 up, 3 in
Feb 25 12:36:21 compute-0 nova_compute[244014]: 2026-02-25 12:36:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:22 compute-0 nova_compute[244014]: 2026-02-25 12:36:22.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 536 KiB/s wr, 167 op/s
Feb 25 12:36:22 compute-0 ceph-mon[76335]: pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 536 KiB/s wr, 167 op/s
Feb 25 12:36:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.138 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022969.1368487, be11f836-327c-447c-865a-0088e0554c0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.138 244018 INFO nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Stopped (Lifecycle Event)
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.162 244018 DEBUG nova.compute.manager [None req-81a3e6cc-7536-455c-b595-526bc0c04319 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:36:24 compute-0 nova_compute[244014]: 2026-02-25 12:36:24.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:25 compute-0 ceph-mon[76335]: pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 12:36:25 compute-0 nova_compute[244014]: 2026-02-25 12:36:25.510 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 12:36:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 12:36:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb 25 12:36:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb 25 12:36:26 compute-0 ceph-mon[76335]: pgmap v1640: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 12:36:26 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb 25 12:36:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:27 compute-0 ceph-mon[76335]: osdmap e243: 3 total, 3 up, 3 in
Feb 25 12:36:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 17 MiB/s wr, 178 op/s
Feb 25 12:36:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb 25 12:36:28 compute-0 ceph-mon[76335]: pgmap v1642: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 17 MiB/s wr, 178 op/s
Feb 25 12:36:29 compute-0 nova_compute[244014]: 2026-02-25 12:36:29.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb 25 12:36:29 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb 25 12:36:29 compute-0 nova_compute[244014]: 2026-02-25 12:36:29.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 514 KiB/s rd, 17 MiB/s wr, 147 op/s
Feb 25 12:36:30 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 25 12:36:30 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000059.scope: Consumed 12.236s CPU time.
Feb 25 12:36:30 compute-0 systemd-machined[210048]: Machine qemu-114-instance-00000059 terminated.
Feb 25 12:36:30 compute-0 ceph-mon[76335]: osdmap e244: 3 total, 3 up, 3 in
Feb 25 12:36:30 compute-0 ceph-mon[76335]: pgmap v1644: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 514 KiB/s rd, 17 MiB/s wr, 147 op/s
Feb 25 12:36:30 compute-0 nova_compute[244014]: 2026-02-25 12:36:30.539 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance shutdown successfully after 15 seconds.
Feb 25 12:36:30 compute-0 nova_compute[244014]: 2026-02-25 12:36:30.547 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.
Feb 25 12:36:30 compute-0 nova_compute[244014]: 2026-02-25 12:36:30.554 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:36:30
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'backups', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr']
Feb 25 12:36:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.310 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting instance files /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.311 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deletion of /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del complete
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.532 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.533 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating image(s)
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.551 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.571 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.592 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.595 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.669 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
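Note: the "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ..." wrapper above is how oslo.concurrency caps a child's address space (1 GiB) and CPU time (30 s) before running qemu-img. The same limits expressed directly through the library API, as a sketch with the base-image path taken from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538",
        "--force-share", "--output=json", prlimit=limits)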
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.670 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.671 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.671 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.691 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.694 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:31.785 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:31.786 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:36:31 compute-0 nova_compute[244014]: 2026-02-25 12:36:31.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:31 compute-0 sshd-session[324493]: Received disconnect from 45.148.10.141 port 9906:11:  [preauth]
Feb 25 12:36:31 compute-0 sshd-session[324493]: Disconnected from authenticating user root 45.148.10.141 port 9906 [preauth]
Feb 25 12:36:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb 25 12:36:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb 25 12:36:31 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:36:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:36:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 190 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 23 MiB/s wr, 245 op/s
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.099 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.162 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] resizing rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.256 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
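Note: the lines from 12:36:31.694 to 12:36:32.256 are the whole root-disk build: the cached base image is imported into the "vms" pool and then grown to the flavor's 1 GiB root disk (the logged 1073741824 bytes; nova performs the resize through the librbd binding rather than the CLI). A CLI-level replay of the same two steps, offered only as a sketch:

    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "d54266c9ce37b98d8a911b5ac30e52735f3ff538")
    image = "2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk"
    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base file as a format-2 RBD image (as logged).
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, image,
         "--image-format=2"] + ceph_args)
    # Grow it to the flavor's 1 GiB root disk (log: resize to 1073741824).
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--size", "1G", image] + ceph_args)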
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ensure instance console log exists: /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.258 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.259 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.263 244018 WARNING nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.279 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.280 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.283 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.284 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.284 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.285 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.285 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.288 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.288 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.310 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:32 compute-0 ceph-mon[76335]: osdmap e245: 3 total, 3 up, 3 in
Feb 25 12:36:32 compute-0 ceph-mon[76335]: pgmap v1646: 305 pgs: 305 active+clean; 190 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 23 MiB/s wr, 245 op/s
Feb 25 12:36:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309203223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.849 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.870 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:32 compute-0 nova_compute[244014]: 2026-02-25 12:36:32.874 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817749783' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.398 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.401 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <uuid>2184a715-0ac8-4fc2-aa99-fae3b8e32edf</uuid>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <name>instance-00000059</name>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerShowV254Test-server-567186647</nova:name>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:32</nova:creationTime>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:user uuid="f14f324492304e519e215bb56099abdd">tempest-ServerShowV254Test-1036115410-project-member</nova:user>
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <nova:project uuid="70a704d198834ace8c228d59139dd7b4">tempest-ServerShowV254Test-1036115410</nova:project>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="serial">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="uuid">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk">
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config">
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log" append="off"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:33 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:33 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:33 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:33 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:33 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.485 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.486 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.486 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Using config drive
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.512 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.541 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.823 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating config drive at /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config
Feb 25 12:36:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/309203223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/817749783' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.828 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjum7k_c5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.960 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjum7k_c5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.982 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:33 compute-0 nova_compute[244014]: 2026-02-25 12:36:33.985 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 9.6 MiB/s wr, 208 op/s
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.119 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.120 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting local config drive /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config because it was imported into RBD.
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:34 compute-0 systemd-machined[210048]: New machine qemu-115-instance-00000059.
Feb 25 12:36:34 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-00000059.
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.517 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 2184a715-0ac8-4fc2-aa99-fae3b8e32edf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.518 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022994.5158381, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.518 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Resumed (Lifecycle Event)
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.525 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.525 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.530 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance spawned successfully.
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.531 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.547 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.555 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.562 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.563 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.564 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.564 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.565 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.566 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.574 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.574 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022994.5181427, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Started (Lifecycle Event)
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.609 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.613 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.635 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.637 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.783 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:34 compute-0 ceph-mon[76335]: pgmap v1647: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 9.6 MiB/s wr, 208 op/s
Feb 25 12:36:34 compute-0 nova_compute[244014]: 2026-02-25 12:36:34.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.430 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.431 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.467 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.580 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.581 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.588 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.588 244018 INFO nova.compute.claims [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.725 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.857 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.860 244018 INFO nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Terminating instance
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.860 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.861 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquired lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:36 compute-0 nova_compute[244014]: 2026-02-25 12:36:36.861 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:36:37 compute-0 ceph-mon[76335]: pgmap v1648: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 12:36:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391316532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.266 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.270 244018 DEBUG nova.compute.provider_tree [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.284 244018 DEBUG nova.scheduler.client.report [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.307 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.308 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.368 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.368 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.388 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.393 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.410 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.504 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.505 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.505 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating image(s)
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.529 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.559 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.594 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.598 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.657 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.658 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.660 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.660 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.692 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.696 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.723 244018 DEBUG nova.policy [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.758 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.773 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Releasing lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.773 244018 DEBUG nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:36:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb 25 12:36:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb 25 12:36:37 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb 25 12:36:37 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 25 12:36:37 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000059.scope: Consumed 3.721s CPU time.
Feb 25 12:36:37 compute-0 systemd-machined[210048]: Machine qemu-115-instance-00000059 terminated.
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.992 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.
Feb 25 12:36:37 compute-0 nova_compute[244014]: 2026-02-25 12:36:37.992 244018 DEBUG nova.objects.instance [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'resources' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.094387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998094431, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2086, "num_deletes": 254, "total_data_size": 3281800, "memory_usage": 3336680, "flush_reason": "Manual Compaction"}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 25 12:36:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998348259, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3212104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33275, "largest_seqno": 35360, "table_properties": {"data_size": 3202685, "index_size": 5849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20107, "raw_average_key_size": 20, "raw_value_size": 3183522, "raw_average_value_size": 3258, "num_data_blocks": 257, "num_entries": 977, "num_filter_entries": 977, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022802, "oldest_key_time": 1772022802, "file_creation_time": 1772022998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 253989 microseconds, and 6872 cpu microseconds.
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:36:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/391316532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:38 compute-0 ceph-mon[76335]: osdmap e246: 3 total, 3 up, 3 in
Feb 25 12:36:38 compute-0 ceph-mon[76335]: pgmap v1650: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.348365) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3212104 bytes OK
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.348401) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468331) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468375) EVENT_LOG_v1 {"time_micros": 1772022998468361, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3272920, prev total WAL file size 3276081, number of live WAL files 2.
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.469796) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3136KB)], [74(7862KB)]
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998469899, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11262979, "oldest_snapshot_seqno": -1}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6069 keys, 9603441 bytes, temperature: kUnknown
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998724675, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9603441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9561309, "index_size": 25830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 153209, "raw_average_key_size": 25, "raw_value_size": 9451096, "raw_average_value_size": 1557, "num_data_blocks": 1044, "num_entries": 6069, "num_filter_entries": 6069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.724 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Successfully created port: d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.811 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.812 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.829 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.725009) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9603441 bytes
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.846302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 44.2 rd, 37.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6595, records dropped: 526 output_compression: NoCompression
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.846348) EVENT_LOG_v1 {"time_micros": 1772022998846328, "job": 42, "event": "compaction_finished", "compaction_time_micros": 254890, "compaction_time_cpu_micros": 33697, "output_level": 6, "num_output_files": 1, "total_output_size": 9603441, "num_input_records": 6595, "num_output_records": 6069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998846994, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998848301, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.469614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.930 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:36:38 compute-0 nova_compute[244014]: 2026-02-25 12:36:38.931 244018 INFO nova.compute.claims [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.087 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.749 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Successfully updated port: d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.768 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.769 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.770 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:36:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459317964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.814 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.818 244018 DEBUG nova.compute.provider_tree [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.834 244018 DEBUG nova.scheduler.client.report [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
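Placement derives usable capacity from that inventory as (total − reserved) × allocation_ratio, so the unchanged figures above advertise 8 × 4.0 = 32 schedulable VCPUs, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 ≈ 52 GB of disk, against which claims like the ones in this window are counted.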
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.860 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.861 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.893 244018 DEBUG nova.compute.manager [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-changed-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.894 244018 DEBUG nova.compute.manager [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Refreshing instance network info cache due to event network-changed-d689bf7c-d44c-4f39-a2a7-a85e52dcee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.895 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.922 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.922 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.944 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.965 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:36:39 compute-0 nova_compute[244014]: 2026-02-25 12:36:39.973 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.069 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.072 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.073 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating image(s)
Feb 25 12:36:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.107 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.134 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.168 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.172 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.212 244018 DEBUG nova.policy [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.242 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
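The prlimit wrapper in that command pair is deliberate: qemu-img parses untrusted image headers, so nova re-execs it under a 1 GiB address-space cap and a 30-second CPU budget. A sketch of the same guarded probe using oslo.concurrency's prlimit support; probe_image is an illustrative name, and the limits are read straight off the logged command line:

    import json
    from oslo_concurrency import processutils

    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824
        cpu_time=30)               # --cpu=30

    def probe_image(path):
        # processutils re-execs via `python3 -m oslo_concurrency.prlimit`,
        # which is exactly the wrapper visible in the logged command.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS)
        return json.loads(out)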
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.243 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.244 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.244 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.267 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.272 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.309 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.393 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.394 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.400 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.445 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.558 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.559 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.561 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.588 244018 DEBUG nova.objects.instance [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.593 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.593 244018 INFO nova.compute.claims [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.622 244018 INFO nova.virt.libvirt.driver [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting instance files /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.623 244018 INFO nova.virt.libvirt.driver [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deletion of /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del complete
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.628 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
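The import-then-resize pair for 6076a107's root disk mirrors what completed for 0061daee a few lines earlier: the cached base file goes into the vms pool at the base image's size, then is grown to the flavor's 1 GiB root. The import is the logged CLI call; the "resizing rbd image" message comes from the rbd python binding, roughly as follows (assuming python3-rados/python3-rbd are installed; import_and_grow is an illustrative name for the rbd_utils code path):

    from oslo_concurrency import processutils
    import rados
    import rbd

    def import_and_grow(base_path, image_name, size_bytes):
        # Step 1: same CLI call as the log line at 12:36:40.272.
        processutils.execute(
            'rbd', 'import', '--pool', 'vms', base_path, image_name,
            '--image-format=2', '--id', 'openstack',
            '--conf', '/etc/ceph/ceph.conf')
        # Step 2: grow to the flavor root size (1073741824 here); this
        # is the call that emits "resizing rbd image ... to 1073741824".
        with rados.Rados(conffile='/etc/ceph/ceph.conf',
                         rados_id='openstack') as cluster:
            with cluster.open_ioctx('vms') as ioctx:
                with rbd.Image(ioctx, image_name) as image:
                    image.resize(size_bytes)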
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.647 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Ensure instance console log exists: /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.649 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/459317964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:40 compute-0 ceph-mon[76335]: pgmap v1651: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.688 244018 DEBUG nova.objects.instance [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.693 244018 INFO nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 2.92 seconds to destroy the instance on the hypervisor.
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.693 244018 DEBUG oslo.service.loopingcall [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.694 244018 DEBUG nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.694 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.702 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Ensure instance console log exists: /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.704 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:40.788 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
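That transaction is the metadata agent acknowledging southbound config 24: it writes neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids so neutron can tell the agent is alive and caught up. In ovsdbapp terms, the logged DbSetCommand corresponds to roughly the call below, where sb_api stands in for the agent's already-connected southbound IDL API and the ovsdbapp version is assumed recent enough for db_set to accept if_exists:

    # Builds the same DbSetCommand(table=Chassis_Private, ...) the
    # transaction log shows, including if_exists=True.
    sb_api.db_set(
        'Chassis_Private', 'a594384c-d614-4492-9e0a-4d6ec095920c',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),
        if_exists=True).execute(check_error=True)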
Feb 25 12:36:40 compute-0 nova_compute[244014]: 2026-02-25 12:36:40.820 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.182 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.210 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.231 244018 INFO nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 0.54 seconds to deallocate network for instance.
Feb 25 12:36:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861160029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.294 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.299 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.304 244018 DEBUG nova.compute.provider_tree [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.327 244018 DEBUG nova.scheduler.client.report [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.354 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.355 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.357 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.411 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.412 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.433 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.451 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.461 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Successfully created port: 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.492 244018 DEBUG oslo_concurrency.processutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.630 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.631 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.632 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating image(s)
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.657 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1861160029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.678 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.700 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.704 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.773 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.785 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.786 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.787 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.788 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.818 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.823 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:41 compute-0 nova_compute[244014]: 2026-02-25 12:36:41.851 244018 DEBUG nova.policy [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.008 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.009 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance network_info: |[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.010 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.011 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Refreshing network info cache for port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.019 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start _get_guest_xml network_info=[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.025 244018 WARNING nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.031 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.032 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.036 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.038 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
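The v1-then-v2 probe above is how nova decides whether CPU shares/quota tuning is available for guests: on this host the legacy v1 hierarchy exposes no cpu controller, but the unified (v2) hierarchy does. The v2 half of the check reduces to reading one file; a minimal sketch:

    def has_cgroup_v2_cpu_controller():
        # cgroup v2 lists enabled controllers in a single file under the
        # unified mount point; 'cpu' present means quota/weight works.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted (pure cgroup v1)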
Feb 25 12:36:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:36:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470824002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.039 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.040 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.041 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.042 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.042 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.044 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.044 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.045 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.045 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.051 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.088 244018 DEBUG oslo_concurrency.processutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.095 244018 DEBUG nova.compute.provider_tree [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 232 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.108 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.169 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.191 244018 DEBUG nova.scheduler.client.report [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.238 244018 DEBUG nova.objects.instance [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.376 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Ensure instance console log exists: /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.384 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.384 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006996718842186654 of space, bias 1.0, pg target 0.20990156526559964 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024930145185177097 of space, bias 1.0, pg target 0.7479043555553129 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.699971522888712e-07 of space, bias 4.0, pg target 0.0009239965827466455 quantized to 16 (current 16)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:36:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
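Each pg_autoscaler pair of lines above reduces to one formula: the raw PG target is capacity_ratio * bias * (target PGs per OSD * OSD count). With the 3 OSDs visible in the osdmap lines below and the default mon_target_pg_per_osd of 100 (an assumption, since the option is not logged), the logged targets reproduce exactly; the "quantized" value is then rounded to a power of two and held at the current pg_num until it drifts past the autoscaler's change threshold. A sketch under those assumptions:

    # Raw PG target per pool: usage ratio * bias * (OSDs * target PGs per OSD).
    def raw_pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr')
    print(raw_pg_target(0.0006996718842186654, 1.0))  # 0.2099015... ('vms')
    print(raw_pg_target(7.699971522888712e-07, 4.0))  # 0.0009239... ('cephfs.cephfs.meta')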
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.571 244018 INFO nova.scheduler.client.report [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Deleted allocations for instance 2184a715-0ac8-4fc2-aa99-fae3b8e32edf
Feb 25 12:36:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501951083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.630 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.651 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.654 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1470824002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:36:42 compute-0 ceph-mon[76335]: pgmap v1652: 305 pgs: 305 active+clean; 232 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Feb 25 12:36:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3501951083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:42 compute-0 nova_compute[244014]: 2026-02-25 12:36:42.719 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb 25 12:36:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb 25 12:36:42 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb 25 12:36:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989944339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.193 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
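The "mon dump" round trips logged above are how nova's rbd_utils discovers monitor addresses; they surface later as the <host> elements in the guest XML below. A sketch of the same lookup, with JSON key names assumed from the ceph mon dump format:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    for mon in json.loads(out)["mons"]:
        # e.g. "192.168.122.100:6789/0"; strip the /nonce to get host:port.
        print(mon["name"], mon["addr"].split("/")[0])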
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.195 244018 DEBUG nova.virt.libvirt.vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:37Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.196 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.197 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
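The conversion above maps the Neutron VIF dict onto an os-vif VIFOpenVSwitch object. A sketch constructing the same object directly, with every field value copied from the "Converted object" line; the module paths are assumptions based on the os_vif library layout:

    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    vif = osv_vif.VIFOpenVSwitch(
        id="d689bf7c-d44c-4f39-a2a7-a85e52dcee30",
        address="fa:16:3e:65:76:ae",
        bridge_name="br-int",
        has_traffic_filtering=True,
        network=osv_network.Network(id="38a239da-c933-4cbc-be00-dd127471e198"),
        plugin="ovs",
        preserve_on_delete=False,
        vif_name="tapd689bf7c-d4",
    )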
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.198 244018 DEBUG nova.objects.instance [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.395 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <uuid>0061daee-43d7-458b-8645-0ad3f8fbb2af</uuid>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <name>instance-0000005a</name>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1814919794</nova:name>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:42</nova:creationTime>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <nova:port uuid="d689bf7c-d44c-4f39-a2a7-a85e52dcee30">
Feb 25 12:36:43 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="serial">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="uuid">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk">
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config">
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:43 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:65:76:ae"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <target dev="tapd689bf7c-d4"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log" append="off"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:43 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:43 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:43 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:43 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:43 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
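With _get_guest_xml done, the driver's next step is to hand this XML to libvirt. A sketch of that step with the libvirt Python binding, assuming the qemu:///system URI and the <domain> document above held in a variable:

    import libvirt

    guest_xml = "..."  # the <domain> document logged above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(guest_xml)  # persist the domain definition
        dom.create()                     # boot it; the systemd "Started Virtual
                                         # Machine" line below is the result
    finally:
        conn.close()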
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.397 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Preparing to wait for external event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.398 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.399 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.399 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.401 244018 DEBUG nova.virt.libvirt.vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:37Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.402 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.403 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.404 244018 DEBUG os_vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.406 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.407 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd689bf7c-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.413 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd689bf7c-d4, col_values=(('external_ids', {'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:76:ae', 'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:43 compute-0 NetworkManager[49836]: <info>  [1772023003.4178] manager: (tapd689bf7c-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.422 244018 INFO os_vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')
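The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) form one transaction against the Open_vSwitch schema. The same plug written directly against ovsdbapp's public API; the ovsdb-server socket path is an assumption, while the port name and external_ids are copied from the DbSetCommand line:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:65:76:ae",
        "vm-uuid": "0061daee-43d7-458b-8645-0ad3f8fbb2af",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapd689bf7c-d4", may_exist=True))
        txn.add(api.db_set("Interface", "tapd689bf7c-d4",
                           ("external_ids", external_ids)))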
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.482 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:65:76:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Using config drive
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.517 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:43 compute-0 podman[325514]: 2026-02-25 12:36:43.533577272 +0000 UTC m=+0.063405313 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.593 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Successfully created port: ca16d7f1-7f94-4d64-906e-e1469230e4f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:36:43 compute-0 ceph-mon[76335]: osdmap e247: 3 total, 3 up, 3 in
Feb 25 12:36:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1989944339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.915 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating config drive at /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config
Feb 25 12:36:43 compute-0 nova_compute[244014]: 2026-02-25 12:36:43.922 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd1088dmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.010 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Successfully updated port: 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.035 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.036 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.036 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.067 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd1088dmj" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
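The config drive is an ISO 9660 image built from a temporary metadata directory; every flag is visible in the command line above. A sketch wrapping that exact invocation, where the metadata_dir argument stands in for the /tmp/tmpd1088dmj scratch directory:

    import subprocess

    def build_config_drive(output_path, metadata_dir):
        # Flags mirror the logged mkisofs call; the "config-2" volume label is
        # what cloud-init scans for when looking for a config drive.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", output_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
             "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True,
        )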
Feb 25 12:36:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 313 op/s
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.100 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.104 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG nova.compute.manager [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-changed-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG nova.compute.manager [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Refreshing instance network info cache due to event network-changed-66c41a3b-23a5-4cbf-a70e-416adf1617e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:44 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.255 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.256 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deleting local config drive /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config because it was imported into RBD.
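Because this host uses the RBD image backend, the finished ISO is imported into the vms pool and the local copy removed, as the two lines above record. A sketch of that pair of steps using the same rbd CLI arguments as the logged command:

    import os
    import subprocess

    def import_config_drive(local_iso, image_name):
        subprocess.run(
            ["rbd", "import", "--pool", "vms", local_iso, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
        # Matches the "Deleting local config drive ... because it was imported
        # into RBD" line above.
        os.unlink(local_iso)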
Feb 25 12:36:44 compute-0 kernel: tapd689bf7c-d4: entered promiscuous mode
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.3067] manager: (tapd689bf7c-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_controller[147040]: 2026-02-25T12:36:44Z|00900|binding|INFO|Claiming lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for this chassis.
Feb 25 12:36:44 compute-0 ovn_controller[147040]: 2026-02-25T12:36:44Z|00901|binding|INFO|d689bf7c-d44c-4f39-a2a7-a85e52dcee30: Claiming fa:16:3e:65:76:ae 10.100.0.8
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.323 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.324 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.325 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:36:44 compute-0 systemd-machined[210048]: New machine qemu-116-instance-0000005a.
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed44d18-91b3-40e5-bcbe-4611b36fbab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.339 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38a239da-c1 in ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
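Provisioning metadata for a datapath means creating the ovnmeta- namespace and a veth pair whose -c1 end lives inside it; the privsep replies around this line are those calls going through neutron's privileged ip_lib, which wraps pyroute2. A sketch of the same plumbing, with device and namespace names copied from the log:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-38a239da-c933-4cbc-be00-dd127471e198"
    netns.create(ns)

    ipr = IPRoute()
    # Create the veth pair, then move the -c1 end into the namespace.
    ipr.link("add", ifname="tap38a239da-c0", kind="veth", peer="tap38a239da-c1")
    c1 = ipr.link_lookup(ifname="tap38a239da-c1")[0]
    ipr.link("set", index=c1, net_ns_fd=ns)
    c0 = ipr.link_lookup(ifname="tap38a239da-c0")[0]
    ipr.link("set", index=c0, state="up")
    ipr.close()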
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.340 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38a239da-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8af60-25ca-4fda-b95e-5ccbf6c3c983]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7413ddba-f8fc-445d-9cc4-76eb7ef39327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_controller[147040]: 2026-02-25T12:36:44Z|00902|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 ovn-installed in OVS
Feb 25 12:36:44 compute-0 ovn_controller[147040]: 2026-02-25T12:36:44Z|00903|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 up in Southbound
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.350 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f747ae6c-9f14-471f-a626-3d6b001f75a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005a.
Feb 25 12:36:44 compute-0 systemd-udevd[325606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.3682] device (tapd689bf7c-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.3694] device (tapd689bf7c-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[443a47f6-9dff-4ced-b08d-267d351e9273]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.393 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9423214-39d6-4f08-aad1-15c71ac6355c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93747d7e-5ab6-40c8-b32d-89c726b238a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 systemd-udevd[325609]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.3982] manager: (tap38a239da-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.427 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45061f05-529c-436c-bafa-d31cb301ea2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.429 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e85c79-0d2e-4cc3-8401-05f01a7d0927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.4458] device (tap38a239da-c0): carrier: link connected
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.451 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[72d50a8e-40a8-4673-9498-db297528efde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a97c7d6-5a52-496d-814f-ede30fc9b143]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325636, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.480 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16ea7eda-8361-47e6-a7c9-91d2aa2fa1d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:fa50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497385, 'tstamp': 497385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325637, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.497 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21cf56a6-34de-49a8-a77b-f38567fc7c4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325638, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1db3945a-20e6-48cf-b4bc-348acfbb5660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[489b1060-bb2e-4cd9-9228-febc18198a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:44 compute-0 kernel: tap38a239da-c0: entered promiscuous mode
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 NetworkManager[49836]: <info>  [1772023004.5810] manager: (tap38a239da-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.587 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_controller[147040]: 2026-02-25T12:36:44Z|00904|binding|INFO|Releasing lport b61d6004-89be-4a9d-aeb0-ecb4f03f3526 from this chassis (sb_readonly=0)
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.604 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.605 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94729851-8004-4ad5-a487-4489879e5457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.606 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:36:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.607 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'env', 'PROCESS_TAG=haproxy-38a239da-c933-4cbc-be00-dd127471e198', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38a239da-c933-4cbc-be00-dd127471e198.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.629 244018 DEBUG nova.compute.manager [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.631 244018 DEBUG nova.compute.manager [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Processing event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.648 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:44 compute-0 ceph-mon[76335]: pgmap v1654: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 313 op/s
Feb 25 12:36:44 compute-0 nova_compute[244014]: 2026-02-25 12:36:44.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:45 compute-0 podman[325670]: 2026-02-25 12:36:45.004119212 +0000 UTC m=+0.059436971 container create 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:36:45 compute-0 systemd[1]: Started libpod-conmon-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope.
Feb 25 12:36:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82356394d1b742d50fdf5952aac21f145d54d5f578fc2e46fd418cf1c3cc383b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:45 compute-0 podman[325670]: 2026-02-25 12:36:44.967375703 +0000 UTC m=+0.022693532 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:36:45 compute-0 podman[325670]: 2026-02-25 12:36:45.072490124 +0000 UTC m=+0.127807943 container init 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:36:45 compute-0 podman[325670]: 2026-02-25 12:36:45.077265219 +0000 UTC m=+0.132583008 container start 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:36:45 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : New worker (325707) forked
Feb 25 12:36:45 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : Loading success.
Feb 25 12:36:45 compute-0 podman[325683]: 2026-02-25 12:36:45.133500838 +0000 UTC m=+0.092490225 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.275 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2753124, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.276 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Started (Lifecycle Event)
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.279 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.283 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.287 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance spawned successfully.
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.287 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.318 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.325 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.333 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.334 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.335 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.336 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.337 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.337 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.375 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2754111, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.376 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Paused (Lifecycle Event)
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.420 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2827766, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.420 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Resumed (Lifecycle Event)
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.446 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.450 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.457 244018 INFO nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 7.95 seconds to spawn the instance on the hypervisor.
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.458 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.474 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.522 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updated VIF entry in instance network info cache for port d689bf7c-d44c-4f39-a2a7-a85e52dcee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.522 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.527 244018 INFO nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 8.97 seconds to build instance.
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.542 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.544 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.928 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Successfully updated port: ca16d7f1-7f94-4d64-906e-e1469230e4f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.944 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.944 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:45 compute-0 nova_compute[244014]: 2026-02-25 12:36:45.945 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:36:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.8 MiB/s wr, 138 op/s
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.221 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.290 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.302 244018 DEBUG nova.compute.manager [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-changed-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.303 244018 DEBUG nova.compute.manager [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Refreshing instance network info cache due to event network-changed-ca16d7f1-7f94-4d64-906e-e1469230e4f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.303 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.342 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.343 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance network_info: |[{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.345 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.345 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Refreshing network info cache for port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.351 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start _get_guest_xml network_info=[{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'f0ef5a9a-23b8-4883-8e47-feb7403a11d8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.358 244018 WARNING nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.363 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.363 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
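The DEBUG lines above trace nova.virt.hardware choosing a guest CPU topology for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences 0:0:0, maxima 65536), the only split whose product equals the vCPU count is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration step, simplified from what _get_possible_cpu_topologies does (illustrative only; the real code also applies preference ordering and NUMA constraints):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Topology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every sockets/cores/threads split whose product equals the
        # vCPU count and that stays within the per-axis limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield Topology(s, c, t)

    print(list(possible_topologies(1)))   # [Topology(sockets=1, cores=1, threads=1)]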
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.372 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.831 244018 DEBUG nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.832 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.833 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.833 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.834 244018 DEBUG nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.834 244018 WARNING nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state active and task_state None.
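The WARNING above is the tail end of nova's external-event handshake: neutron reported network-vif-plugged for instance 0061daee, but no greenthread had registered a waiter for that event (the instance is already active), so the pop found nothing to wake. A toy sketch of the prepare/pop pattern visible in these lock lines, using threading.Event; the names are illustrative, not nova's implementation:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()        # the "<uuid>-events" lock in the log
            self._events = defaultdict(dict)     # instance uuid -> {event name: Event}

        def prepare(self, uuid, name):
            # Called before a plug/unplug that should produce a neutron event.
            with self._lock:
                return self._events[uuid].setdefault(name, threading.Event())

        def pop(self, uuid, name):
            # Called when the external event arrives from the API.
            with self._lock:
                ev = self._events[uuid].pop(name, None)
            if ev is None:
                print(f"Received unexpected event {name} for instance {uuid}")
            else:
                ev.set()                         # wakes the waiting spawn thread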
Feb 25 12:36:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967359161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.924 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.946 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:46 compute-0 nova_compute[244014]: 2026-02-25 12:36:46.950 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:47 compute-0 ceph-mon[76335]: pgmap v1655: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.8 MiB/s wr, 138 op/s
Feb 25 12:36:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1967359161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057307832' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:36:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:36:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.496 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
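The half-second "ceph mon dump" round trip above is how nova's RBD backend discovers the monitor endpoints that later appear as <host> entries in the guest XML below. The same query can be reproduced and parsed directly (a sketch; assumes the client.openstack keyring is readable, and note the JSON field names vary slightly across Ceph releases):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        # Older releases expose "addr"; newer ones "public_addr"/"public_addrs".
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))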
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.498 244018 DEBUG nova.virt.libvirt.vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:40Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.499 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.500 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
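The conversion logged above reduces nova's large VIF dict to a typed os-vif object that the plugin layer consumes. A stripped-down rendering of that mapping for the 'ovs' vif_type (illustrative; the real nova_to_osvif_vif dispatches on many VIF types and builds full network and subnet objects):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Subset of the fields visible in the converted object above.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool

    def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
        details = vif["details"]
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=details["port_filter"],
        )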
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.502 244018 DEBUG nova.objects.instance [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.559 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <uuid>6076a107-bdef-4c8a-8f75-887cdb4833f0</uuid>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <name>instance-0000005b</name>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1261051940</nova:name>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:46</nova:creationTime>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <nova:port uuid="66c41a3b-23a5-4cbf-a70e-416adf1617e5">
Feb 25 12:36:47 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="serial">6076a107-bdef-4c8a-8f75-887cdb4833f0</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="uuid">6076a107-bdef-4c8a-8f75-887cdb4833f0</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6076a107-bdef-4c8a-8f75-887cdb4833f0_disk">
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config">
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:6d:11:ce"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <target dev="tap66c41a3b-23"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/console.log" append="off"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:47 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:47 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:47 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:47 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:47 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
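With _get_guest_xml finished, the driver hands the document above to libvirt to define and boot instance-0000005b. The equivalent steps through the Python bindings look roughly like this (a sketch; assumes libvirt-python, a local system connection, and the XML above saved to domain.xml):

    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("domain.xml") as f:
        dom = conn.defineXML(f.read())   # persist the domain definition
    dom.create()                         # start it; systemd-machined logs the new machine
    print(dom.name(), dom.ID())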
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.561 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Preparing to wait for external event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.564 244018 DEBUG nova.virt.libvirt.vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:40Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.564 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.565 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.566 244018 DEBUG os_vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.569 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66c41a3b-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.574 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66c41a3b-23, col_values=(('external_ids', {'iface-id': '66c41a3b-23a5-4cbf-a70e-416adf1617e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:11:ce', 'vm-uuid': '6076a107-bdef-4c8a-8f75-887cdb4833f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:47 compute-0 NetworkManager[49836]: <info>  [1772023007.5769] manager: (tap66c41a3b-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.583 244018 INFO os_vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23')
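The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are what "plugging" amounts to for an OVS VIF: ensure br-int exists, add the tap port, and stamp the Interface row with the external_ids that ovn-controller matches against. The same effect can be reproduced with ovs-vsctl for manual verification (nova itself speaks OVSDB directly):

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap66c41a3b-23",
         "--", "set", "Interface", "tap66c41a3b-23",
         "external_ids:iface-id=66c41a3b-23a5-4cbf-a70e-416adf1617e5",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:6d:11:ce",
         "external_ids:vm-uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0"],
        check=True)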
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.640 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:6d:11:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Using config drive
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.660 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.844 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.869 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.870 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance network_info: |[{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.871 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.871 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Refreshing network info cache for port ca16d7f1-7f94-4d64-906e-e1469230e4f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.876 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start _get_guest_xml network_info=[{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.885 244018 WARNING nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.890 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.891 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.909 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.910 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
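The V1-then-V2 probe above decides whether nova may apply CPU tuning to guests: this host has no cgroup-v1 cpu controller but does expose one under cgroup v2. On a v2 host the check reduces to a single file read (a sketch of the idea, not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        # On unified-hierarchy hosts the enabled controllers are listed,
        # space-separated, in this one file.
        f = Path("/sys/fs/cgroup/cgroup.controllers")
        return f.exists() and "cpu" in f.read_text().split()

    print(has_cgroupsv2_cpu_controller())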
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.910 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
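CPU mode 'host-model' means libvirt copies the host's CPU definition into the guest at start time instead of using a named model, which is why the guest XML above carries only a topology and no model element. The host CPU model libvirt detected can be read from its capabilities XML (a sketch; assumes libvirt-python):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    caps = ET.fromstring(conn.getCapabilities())
    print(caps.findtext("./host/cpu/model"))   # the baseline 'host-model' starts from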
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.911 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8a55b37a-157d-41c3-9f41-7dbf617bee81',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.912 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.913 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.914 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.914 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.915 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.916 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.917 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.917 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.918 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.918 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:36:47 compute-0 nova_compute[244014]: 2026-02-25 12:36:47.923 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
Feb 25 12:36:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4057307832' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:36:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:36:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077882892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.475 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.497 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.501 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.584 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating config drive at /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.587 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkyv9wk_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.716 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkyv9wk_v" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.747 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.751 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.887 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.889 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deleting local config drive /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config because it was imported into RBD.
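
The three steps above are Nova's complete config-drive flow for an RBD-backed instance: generate an ISO9660 image with mkisofs, import it into the Ceph "vms" pool so libvirt can attach it over the network, then drop the local copy. A minimal sketch of the same sequence, with commands and paths copied from the log; Nova drives this through oslo.concurrency and nova.storage.rbd_utils rather than a standalone script, and the metadata staging directory below is hypothetical:

import os
import subprocess

instance = "6076a107-bdef-4c8a-8f75-887cdb4833f0"
iso = f"/var/lib/nova/instances/{instance}/disk.config"

# 1. Build the config-drive ISO from a staged metadata directory; the
#    volume label "config-2" is what cloud-init looks for in the guest.
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/metadata-staging"],  # hypothetical staging dir
    check=True,
)

# 2. Import the ISO into RBD so it becomes vms/<uuid>_disk.config.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)

# 3. The local file is redundant once the image lives in RBD.
os.unlink(iso)
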
Feb 25 12:36:48 compute-0 NetworkManager[49836]: <info>  [1772023008.9366] manager: (tap66c41a3b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Feb 25 12:36:48 compute-0 kernel: tap66c41a3b-23: entered promiscuous mode
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:48 compute-0 ovn_controller[147040]: 2026-02-25T12:36:48Z|00905|binding|INFO|Claiming lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 for this chassis.
Feb 25 12:36:48 compute-0 ovn_controller[147040]: 2026-02-25T12:36:48Z|00906|binding|INFO|66c41a3b-23a5-4cbf-a70e-416adf1617e5: Claiming fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 12:36:48 compute-0 ovn_controller[147040]: 2026-02-25T12:36:48Z|00907|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 ovn-installed in OVS
Feb 25 12:36:48 compute-0 nova_compute[244014]: 2026-02-25 12:36:48.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:48 compute-0 systemd-machined[210048]: New machine qemu-117-instance-0000005b.
Feb 25 12:36:48 compute-0 ovn_controller[147040]: 2026-02-25T12:36:48Z|00908|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 up in Southbound
Feb 25 12:36:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.980 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:11:ce 10.100.0.6'], port_security=['fa:16:3e:6d:11:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6076a107-bdef-4c8a-8f75-887cdb4833f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=66c41a3b-23a5-4cbf-a70e-416adf1617e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.982 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis
Feb 25 12:36:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.984 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
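
"Provisioning metadata for network ..." means the agent ensures a network namespace named ovnmeta-<datapath-uuid> exists with an interface plugged into br-int, carrying both a subnet address (10.100.0.2/28, per the RTM_NEWADDR replies further down) and the well-known metadata address 169.254.169.254, so guests on this network can reach the metadata service. The agent does this through pyroute2 under oslo.privsep; expressed as the equivalent iproute2 invocations, it is roughly:

import subprocess

ns = "ovnmeta-38a239da-c933-4cbc-be00-dd127471e198"
dev = "tap38a239da-c1"  # namespace-side interface, as seen in the replies below

subprocess.run(["ip", "netns", "add", ns], check=True)
subprocess.run(["ip", "link", "set", dev, "netns", ns], check=True)
for addr in ("10.100.0.2/28", "169.254.169.254/32"):
    subprocess.run(["ip", "-n", ns, "addr", "add", addr, "dev", dev],
                   check=True)
subprocess.run(["ip", "-n", ns, "link", "set", dev, "up"], check=True)
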
Feb 25 12:36:48 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005b.
Feb 25 12:36:49 compute-0 systemd-udevd[325964]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d45b713-244a-4eb4-9fa0-b3e858d7055f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 NetworkManager[49836]: <info>  [1772023009.0146] device (tap66c41a3b-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:36:49 compute-0 NetworkManager[49836]: <info>  [1772023009.0167] device (tap66c41a3b-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:36:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:36:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339552868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.054 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8dee6aa8-1441-4bb1-8ceb-ac644354f4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.059 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a97813b-0457-48d4-95e0-ab22ff13dc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.070 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
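
The repeated "ceph mon dump --format=json" calls are how Nova's RBD driver discovers the monitor addresses that later appear as <host> elements in the guest's disk definition. A sketch of running and parsing that command; the "mons"/"addr" fields follow the mon dump JSON format, the trailing "/nonce" suffix is stripped to leave host:port, and error handling is omitted:

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

mon_addrs = [m["addr"].split("/")[0] for m in json.loads(out)["mons"]]
print(mon_addrs)  # e.g. ['192.168.122.100:6789']
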
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.073 244018 DEBUG nova.virt.libvirt.vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:41Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.074 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.076 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
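
nova_to_osvif_vif translates Nova's network-info dict into the os-vif versioned object printed on the "Converted object" line. A sketch of building and plugging the same VIF directly against the os-vif library, with field values copied from that line; object-registry and network-object details are elided, so treat this as illustrative rather than Nova's actual code path:

import os_vif
from os_vif.objects import instance_info, vif

os_vif.initialize()  # loads the os-vif plug/unplug plugins, including 'ovs'

my_vif = vif.VIFOpenVSwitch(
    id="ca16d7f1-7f94-4d64-906e-e1469230e4f1",
    address="fa:16:3e:aa:9e:27",
    vif_name="tapca16d7f1-7f",
    bridge_name="br-int",
    plugin="ovs",
    port_profile=vif.VIFPortProfileOpenVSwitch(
        interface_id="ca16d7f1-7f94-4d64-906e-e1469230e4f1"),
)
inst = instance_info.InstanceInfo(
    uuid="7c2fb1e7-04d0-4903-a675-8cda55bbb6ed",
    name="instance-0000005c",
)

os_vif.plug(my_vif, inst)  # corresponds to the "Plugging vif ..." step below
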
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.079 244018 DEBUG nova.objects.instance [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.092 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed0fa4c-70dd-4759-bc8f-1c4258ef0f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.098 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <uuid>7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</uuid>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <name>instance-0000005c</name>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <memory>196608</memory>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1161077750</nova:name>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:36:47</nova:creationTime>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:flavor name="m1.micro">
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:memory>192</nova:memory>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <nova:port uuid="ca16d7f1-7f94-4d64-906e-e1469230e4f1">
Feb 25 12:36:49 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <system>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="serial">7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="uuid">7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </system>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <os>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </os>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <features>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </features>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk">
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config">
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:36:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:aa:9e:27"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <target dev="tapca16d7f1-7f"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/console.log" append="off"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <video>
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </video>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:36:49 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:36:49 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:36:49 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:36:49 compute-0 nova_compute[244014]: </domain>
Feb 25 12:36:49 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
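
The XML dumped above is handed to libvirt, which defines and boots the domain (the systemd-machined and "Started Virtual Machine" lines for the sibling instance show the result). In bare libvirt-python terms the hand-off looks roughly like this; Nova wraps it in its own guest abstraction rather than calling libvirt this directly:

import libvirt

conn = libvirt.open("qemu:///system")
with open("domain.xml") as f:        # the <domain> document logged above
    dom = conn.defineXML(f.read())   # persist the domain definition

# Start paused, matching the Started -> Paused -> Resumed lifecycle events
# below: Nova resumes the guest only once VIF plugging has completed.
dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
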
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.104 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Preparing to wait for external event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
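
The prepare_for_instance_event lines implement a rendezvous: the compute manager registers an event named network-vif-plugged-<port> before plugging the VIF, then blocks on it until Neutron's notification arrives ("Instance event wait completed" appears further down for the sibling instance). A stdlib-only sketch of the pattern; Nova itself uses eventlet primitives and per-instance locks:

import threading

_events = {}        # (instance_uuid, event_name) -> threading.Event
_events_lock = threading.Lock()

def prepare_for_instance_event(instance, name):
    with _events_lock:
        return _events.setdefault((instance, name), threading.Event())

def pop_instance_event(instance, name):
    with _events_lock:
        ev = _events.pop((instance, name), None)
    if ev is not None:
        ev.set()    # wakes whoever is blocked in wait()

# Compute side: register first, plug the VIF, then wait with a timeout.
ev = prepare_for_instance_event(
    "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed",
    "network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1")
# ... VIF plugging happens here ...
ev.wait(timeout=300)
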
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.106 244018 DEBUG nova.virt.libvirt.vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:41Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.106 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.107 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.107 244018 DEBUG os_vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.109 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e946cb2-300f-484f-b9d2-b4fb3e98f0aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325979, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca16d7f1-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.111 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca16d7f1-7f, col_values=(('external_ids', {'iface-id': 'ca16d7f1-7f94-4d64-906e-e1469230e4f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:9e:27', 'vm-uuid': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 NetworkManager[49836]: <info>  [1772023009.1132] manager: (tapca16d7f1-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.121 244018 INFO os_vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f')
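
The AddPortCommand/DbSetCommand transaction above is the whole of the "plug": create the port on br-int and stamp its Interface row with external_ids. The iface-id/attached-mac/vm-uuid keys are what let ovn-controller match the OVS interface to the Neutron port and claim it, as in the "Claiming lport" messages earlier. A sketch of the same transaction issued directly with ovsdbapp, assuming the default OVS database socket path:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tapca16d7f1-7f", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tapca16d7f1-7f",
        ("external_ids", {
            "iface-id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:aa:9e:27",
            "vm-uuid": "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed",
        }),
    ))
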
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d71e517d-c4ce-4ca8-b5a5-38d6b032dcbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325981, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325981, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.129 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.166 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:aa:9e:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Using config drive
Feb 25 12:36:49 compute-0 ceph-mon[76335]: pgmap v1656: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
Feb 25 12:36:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4077882892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1339552868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.202 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.268 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updated VIF entry in instance network info cache for port 66c41a3b-23a5-4cbf-a70e-416adf1617e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.268 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.305 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.800289, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.801 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Started (Lifecycle Event)
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.817 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating config drive at /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.825 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj916sa5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.869 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.802458, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Paused (Lifecycle Event)
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.886 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.889 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.900 244018 DEBUG nova.compute.manager [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.900 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG nova.compute.manager [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Processing event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.902 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.907 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.913 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.915 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.9073553, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.915 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Resumed (Lifecycle Event)
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.918 244018 INFO nova.virt.libvirt.driver [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance spawned successfully.
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.918 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.943 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.947 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.947 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.948 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.948 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.949 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.949 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.952 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
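
[Note] The sync line above compares the DB power_state (0) with what libvirt reports (1). These integers come from nova.compute.power_state; values 0, 1 and 3 appear directly in this log, the remaining names are quoted from upstream constants and should be treated as an assumption:

    # Values 0/1/3 are observable in this log; the rest are an assumption
    # based on nova.compute.power_state upstream.
    POWER_STATE = {
        0: 'NOSTATE',    # DB power_state before the first sync
        1: 'RUNNING',    # what libvirt reports once the guest resumes
        3: 'PAUSED',     # guest created but vCPUs not yet started
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    def describe(db_state, vm_state):
        return f"DB={POWER_STATE[db_state]} hypervisor={POWER_STATE[vm_state]}"

    print(describe(0, 1))  # DB=NOSTATE hypervisor=RUNNING
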
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.966 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj916sa5i" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
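
[Note] The config drive is just an ISO9660 image built from a temporary metadata tree; the exact flags are visible in the mkisofs command above. A sketch re-creating the same call with subprocess (output path, publisher string and the temp-dir contents are placeholders):

    import subprocess, tempfile

    def build_config_drive(out_path, publisher):
        # Same flags as the logged mkisofs invocation; the temp dir stands
        # in for the metadata tree Nova lays out (placeholder content).
        with tempfile.TemporaryDirectory() as tmp:
            subprocess.run(
                ['/usr/bin/mkisofs', '-o', out_path,
                 '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                 '-publisher', publisher, '-quiet', '-J', '-r',
                 '-V', 'config-2', tmp],
                check=True)
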
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.988 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:36:49 compute-0 nova_compute[244014]: 2026-02-25 12:36:49.991 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.026 244018 INFO nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 9.96 seconds to spawn the instance on the hypervisor.
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.027 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.085 244018 INFO nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 11.19 seconds to build instance.
Feb 25 12:36:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
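
[Note] ceph-mgr repeats this pgmap summary every couple of seconds; the fields are PG states, logical data vs. raw usage, and current client throughput. A throwaway parser for lines of this shape (the regex is an assumption tailored to the exact format shown here):

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    line = ('pgmap v1657: 305 pgs: 305 active+clean; 292 MiB data, '
            '833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, '
            '6.4 MiB/s wr, 217 op/s')
    print(PGMAP.search(line).groupdict())
    # {'ver': '1657', 'pgs': '305', 'data': '292 MiB', 'used': '833 MiB',
    #  'avail': '59 GiB', 'total': '60 GiB'}
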
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.156 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.157 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deleting local config drive /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config because it was imported into RBD.
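
[Note] Because this instance's disks live in RBD, the freshly built ISO is imported into the vms pool and the local copy removed, exactly as the two lines above show. A sketch of that import-then-delete step, reusing the rbd arguments from the logged command:

    import os, subprocess

    def import_config_drive(local_path, image_name, pool='vms',
                            user='openstack', conf='/etc/ceph/ceph.conf'):
        subprocess.run(
            ['rbd', 'import', '--pool', pool, local_path, image_name,
             '--image-format=2', '--id', user, '--conf', conf],
            check=True)
        # "Deleting local config drive ... because it was imported into RBD."
        os.unlink(local_path)
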
Feb 25 12:36:50 compute-0 ceph-mon[76335]: pgmap v1657: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
Feb 25 12:36:50 compute-0 NetworkManager[49836]: <info>  [1772023010.2021] manager: (tapca16d7f1-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Feb 25 12:36:50 compute-0 kernel: tapca16d7f1-7f: entered promiscuous mode
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:50 compute-0 ovn_controller[147040]: 2026-02-25T12:36:50Z|00909|binding|INFO|Claiming lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 for this chassis.
Feb 25 12:36:50 compute-0 ovn_controller[147040]: 2026-02-25T12:36:50Z|00910|binding|INFO|ca16d7f1-7f94-4d64-906e-e1469230e4f1: Claiming fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.211 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:9e:27 10.100.0.3'], port_security=['fa:16:3e:aa:9e:27 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca16d7f1-7f94-4d64-906e-e1469230e4f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.212 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca16d7f1-7f94-4d64-906e-e1469230e4f1 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.213 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:36:50 compute-0 ovn_controller[147040]: 2026-02-25T12:36:50Z|00911|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 ovn-installed in OVS
Feb 25 12:36:50 compute-0 ovn_controller[147040]: 2026-02-25T12:36:50Z|00912|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 up in Southbound
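
[Note] ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound DB; that "up" transition is what ultimately produces Neutron's network-vif-plugged notification. One way to watch the same state by hand is ovn-sbctl; a sketch shelling out to it (binary availability and DB access are assumptions about a standard OVN node):

    import subprocess

    def port_binding_up(logical_port):
        # Ask the Southbound DB whether the binding is marked up.
        out = subprocess.run(
            ['ovn-sbctl', '--bare', '--columns=up', 'find', 'Port_Binding',
             f'logical_port={logical_port}'],
            capture_output=True, text=True, check=True).stdout.strip()
        return out == 'true'

    # port_binding_up('ca16d7f1-7f94-4d64-906e-e1469230e4f1') -> True
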
Feb 25 12:36:50 compute-0 NetworkManager[49836]: <info>  [1772023010.2165] device (tapca16d7f1-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:50 compute-0 NetworkManager[49836]: <info>  [1772023010.2173] device (tapca16d7f1-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:50 compute-0 systemd-machined[210048]: New machine qemu-118-instance-0000005c.
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.241 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18c69aae-3c50-4645-895a-37542a8f5937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:50 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-0000005c.
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.262 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[13c2b547-68ee-498c-bbcc-9c571fb57467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4c866303-1a56-4fbc-b718-8213ee7d9857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.283 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26b0ede3-35cf-4307-812a-0da8be4feff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.299 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[518f15f8-7ec9-4ce2-bc9f-f6c297716ad9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326110, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.312 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[520a7f98-f042-4e61-b38c-7484cdf4ed29]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326111, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326111, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
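
[Note] The two privsep replies above are netlink dumps (RTM_NEWLINK, RTM_NEWADDR) taken inside the ovnmeta-38a239da-... namespace: the tap interface carries both the subnet address 10.100.0.2/28 and the link-local metadata address 169.254.169.254/32. A quick way to reproduce that view (namespace name taken from the log; needs root and iproute2 with JSON support):

    import json, subprocess

    def namespace_addrs(netns):
        out = subprocess.run(
            ['ip', '-json', '-netns', netns, 'addr', 'show'],
            capture_output=True, text=True, check=True).stdout
        return [(a['local'], a['prefixlen'])
                for iface in json.loads(out)
                for a in iface.get('addr_info', [])]

    # namespace_addrs('ovnmeta-38a239da-c933-4cbc-be00-dd127471e198')
    # -> [('10.100.0.2', 28), ('169.254.169.254', 32), ...]
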
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.313 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:36:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
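
[Note] The agent then rewires the metadata tap: delete it from br-ex if present, add it to br-int, and set the iface-id external_id. The repeated "Transaction caused no change" shows each command is written to be idempotent (if_exists/may_exist). A toy model of that behaviour in plain Python (not the ovsdbapp API):

    class Bridge:
        def __init__(self, name):
            self.name, self.ports = name, {}

    def del_port(bridge, port, if_exists=True):
        if port not in bridge.ports:
            if if_exists:
                return 'no change'   # -> "Transaction caused no change"
            raise KeyError(port)
        del bridge.ports[port]
        return 'changed'

    def add_port(bridge, port, may_exist=True):
        if port in bridge.ports:
            if may_exist:
                return 'no change'
            raise KeyError(port)
        bridge.ports[port] = {}
        return 'changed'

    br_int = Bridge('br-int')
    br_int.ports['tap38a239da-c0'] = {}
    print(add_port(br_int, 'tap38a239da-c0'))  # no change, as in the log
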
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.735 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023010.7348795, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.736 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Started (Lifecycle Event)
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.768 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023010.735014, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.769 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Paused (Lifecycle Event)
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.784 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.787 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.793 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updated VIF entry in instance network info cache for port ca16d7f1-7f94-4d64-906e-e1469230e4f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.794 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
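
[Note] The instance_info_cache entry above is plain JSON. For illustration, pulling the device name, bridge and fixed IPs out of a blob of that shape (the snippet keeps only the fields needed here):

    import json

    blob = '''[{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1",
                "address": "fa:16:3e:aa:9e:27",
                "network": {"bridge": "br-int",
                            "subnets": [{"cidr": "10.100.0.0/28",
                                         "ips": [{"address": "10.100.0.3"}]}]},
                "devname": "tapca16d7f1-7f"}]'''

    for vif in json.loads(blob):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['network']['bridge'], ips)
    # tapca16d7f1-7f br-int ['10.100.0.3']
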
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.809 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:50 compute-0 nova_compute[244014]: 2026-02-25 12:36:50.814 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:36:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 233 op/s
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] No waiting events found dispatching network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 WARNING nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received unexpected event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 for instance with vm_state active and task_state None.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Processing event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] No waiting events found dispatching network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 WARNING nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received unexpected event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 for instance with vm_state building and task_state spawning.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.141 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023012.1414227, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Resumed (Lifecycle Event)
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.143 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.145 244018 INFO nova.virt.libvirt.driver [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance spawned successfully.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.145 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.174 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.186 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.192 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.192 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.193 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.194 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.195 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.195 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.230 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.261 244018 INFO nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 10.63 seconds to spawn the instance on the hypervisor.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.261 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.327 244018 INFO nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 11.83 seconds to build instance.
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.346 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.990 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022997.9888468, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:36:52 compute-0 nova_compute[244014]: 2026-02-25 12:36:52.991 244018 INFO nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Stopped (Lifecycle Event)
Feb 25 12:36:53 compute-0 nova_compute[244014]: 2026-02-25 12:36:53.085 244018 DEBUG nova.compute.manager [None req-23e7d739-d1dd-45af-84aa-649f197d7681 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:36:53 compute-0 ceph-mon[76335]: pgmap v1658: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 233 op/s
Feb 25 12:36:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 180 op/s
Feb 25 12:36:54 compute-0 nova_compute[244014]: 2026-02-25 12:36:54.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:54 compute-0 nova_compute[244014]: 2026-02-25 12:36:54.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:36:55 compute-0 ceph-mon[76335]: pgmap v1659: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 180 op/s
Feb 25 12:36:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 169 op/s
Feb 25 12:36:57 compute-0 ceph-mon[76335]: pgmap v1660: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 169 op/s
Feb 25 12:36:57 compute-0 ovn_controller[147040]: 2026-02-25T12:36:57Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:76:ae 10.100.0.8
Feb 25 12:36:57 compute-0 ovn_controller[147040]: 2026-02-25T12:36:57Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:76:ae 10.100.0.8
Feb 25 12:36:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:36:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.4 MiB/s wr, 287 op/s
Feb 25 12:36:58 compute-0 sudo[326154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:36:58 compute-0 sudo[326154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:36:58 compute-0 sudo[326154]: pam_unix(sudo:session): session closed for user root
Feb 25 12:36:58 compute-0 sudo[326179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:36:58 compute-0 sudo[326179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:36:58 compute-0 sudo[326179]: pam_unix(sudo:session): session closed for user root
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:36:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:36:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
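
[Note] Each handle_command line above is the monitor dispatching a structured mon_command sent by the mgr (cephadm refreshing its device/OSD inventory). The same queries can be issued from a shell; a sketch wrapping one of them (assumes the ceph CLI and a readable admin keyring on this host):

    import json, subprocess

    def destroyed_osds():
        # CLI equivalent of the logged
        # {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
        out = subprocess.run(
            ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)
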
Feb 25 12:36:58 compute-0 sudo[326234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:36:58 compute-0 sudo[326234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:36:58 compute-0 sudo[326234]: pam_unix(sudo:session): session closed for user root
Feb 25 12:36:59 compute-0 sudo[326259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:36:59 compute-0 sudo[326259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
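
[Note] cephadm drives OSD creation by re-invoking itself under sudo and running ceph-volume inside a one-shot container; the full argument list is in the COMMAND= line above. A reduced sketch of that outer call (fsid and LV paths copied from the log; the --env/--image flags are dropped for brevity, and the config JSON passed on stdin is not shown in the log, so it stays an opaque parameter here):

    import subprocess

    CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
               'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a'
               '12ed0bb26af19c8b')

    def osd_batch(config_json, lvs):
        # Mirrors the logged invocation; config_json is the '-' stdin payload.
        subprocess.run(
            ['sudo', '/bin/python3', CEPHADM, '--timeout', '895',
             'ceph-volume', '--fsid', '8ac33163-6221-5d58-9a39-8b6933fe7762',
             '--config-json', '-', '--',
             'lvm', 'batch', '--no-auto', *lvs,
             '--objectstore', 'bluestore', '--yes', '--no-systemd'],
            input=config_json, text=True, check=True)
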
Feb 25 12:36:59 compute-0 nova_compute[244014]: 2026-02-25 12:36:59.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:59 compute-0 ceph-mon[76335]: pgmap v1661: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.4 MiB/s wr, 287 op/s
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:36:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.254644324 +0000 UTC m=+0.033403035 container create 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:36:59 compute-0 systemd[1]: Started libpod-conmon-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope.
Feb 25 12:36:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.327613156 +0000 UTC m=+0.106371917 container init 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.238456096 +0000 UTC m=+0.017214847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.336797376 +0000 UTC m=+0.115556097 container start 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.339678507 +0000 UTC m=+0.118437318 container attach 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:36:59 compute-0 fervent_euclid[326313]: 167 167
Feb 25 12:36:59 compute-0 systemd[1]: libpod-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope: Deactivated successfully.
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.343185716 +0000 UTC m=+0.121944447 container died 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-15f08a8f123dbf29353256e6bb5052dcff2805476144aae4b8653a2e27635ca8-merged.mount: Deactivated successfully.
Feb 25 12:36:59 compute-0 podman[326297]: 2026-02-25 12:36:59.379821382 +0000 UTC m=+0.158580103 container remove 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:36:59 compute-0 systemd[1]: libpod-conmon-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope: Deactivated successfully.
Feb 25 12:36:59 compute-0 podman[326336]: 2026-02-25 12:36:59.512595554 +0000 UTC m=+0.027060316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:36:59 compute-0 podman[326336]: 2026-02-25 12:36:59.781842804 +0000 UTC m=+0.296307536 container create 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:36:59 compute-0 systemd[1]: Started libpod-conmon-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope.
Feb 25 12:36:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:36:59 compute-0 podman[326336]: 2026-02-25 12:36:59.928870559 +0000 UTC m=+0.443335311 container init 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:36:59 compute-0 podman[326336]: 2026-02-25 12:36:59.936129554 +0000 UTC m=+0.450594306 container start 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:36:59 compute-0 nova_compute[244014]: 2026-02-25 12:36:59.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:36:59 compute-0 podman[326336]: 2026-02-25 12:36:59.980090447 +0000 UTC m=+0.494555179 container attach 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:37:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 25 12:37:00 compute-0 ceph-mon[76335]: pgmap v1662: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 25 12:37:00 compute-0 quirky_pasteur[326352]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:37:00 compute-0 quirky_pasteur[326352]: --> All data devices are unavailable
Feb 25 12:37:00 compute-0 systemd[1]: libpod-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope: Deactivated successfully.
Feb 25 12:37:00 compute-0 podman[326336]: 2026-02-25 12:37:00.379342581 +0000 UTC m=+0.893807343 container died 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 12:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39-merged.mount: Deactivated successfully.
Feb 25 12:37:00 compute-0 podman[326336]: 2026-02-25 12:37:00.622355229 +0000 UTC m=+1.136819961 container remove 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:37:00 compute-0 systemd[1]: libpod-conmon-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope: Deactivated successfully.
Feb 25 12:37:00 compute-0 sudo[326259]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:00 compute-0 sudo[326385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:37:00 compute-0 sudo[326385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:37:00 compute-0 sudo[326385]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:00 compute-0 sudo[326410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:37:00 compute-0 sudo[326410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
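The two sudo entries above capture the call pattern cephadm uses on this host: resolve python3 with /bin/which, then run the host-local cephadm binary pinned to an image digest, under a 895-second timeout, wrapping "ceph-volume lvm list --format json". The sketch below reproduces that inventory call and parses its JSON; it is a minimal illustration, not cephadm's own code, and the binary path, image digest, fsid, and timeout are copied verbatim from the sudo line above.

    #!/usr/bin/env python3
    # Minimal sketch: re-run the logged ceph-volume inventory call and parse it.
    # Not cephadm's implementation; path, digest, fsid, and timeout are verbatim
    # from the sudo entry above.
    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = ("/var/lib/ceph/" + FSID +
               "/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID,
           "--", "lvm", "list", "--format", "json"]

    # ceph-volume prints the OSD inventory as JSON on stdout (the
    # quizzical_noyce block below is one such payload).
    result = subprocess.run(cmd, capture_output=True, text=True,
                            check=True, timeout=900)
    osds = json.loads(result.stdout)
    print(len(osds), "OSD(s):", ", ".join(sorted(osds)))
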
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.035829275 +0000 UTC m=+0.040440184 container create 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:37:01 compute-0 systemd[1]: Started libpod-conmon-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope.
Feb 25 12:37:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.113478969 +0000 UTC m=+0.118089868 container init 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.017199998 +0000 UTC m=+0.021810907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.117789471 +0000 UTC m=+0.122400380 container start 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:37:01 compute-0 thirsty_hawking[326462]: 167 167
Feb 25 12:37:01 compute-0 systemd[1]: libpod-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope: Deactivated successfully.
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.123940875 +0000 UTC m=+0.128551794 container attach 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.124559462 +0000 UTC m=+0.129170371 container died 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 12:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a202e237e77d740205a7fffea1840055449fcba953cf64590e00ac3fcee4a21-merged.mount: Deactivated successfully.
Feb 25 12:37:01 compute-0 podman[326446]: 2026-02-25 12:37:01.172136977 +0000 UTC m=+0.176747886 container remove 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:37:01 compute-0 systemd[1]: libpod-conmon-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope: Deactivated successfully.
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.34066906 +0000 UTC m=+0.059538344 container create f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:37:01 compute-0 ovn_controller[147040]: 2026-02-25T12:37:01Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 12:37:01 compute-0 ovn_controller[147040]: 2026-02-25T12:37:01Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 12:37:01 compute-0 systemd[1]: Started libpod-conmon-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope.
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.317614708 +0000 UTC m=+0.036484032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:37:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.586863918 +0000 UTC m=+0.305733192 container init f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.593137085 +0000 UTC m=+0.312006339 container start f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.735730295 +0000 UTC m=+0.454599559 container attach f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]: {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     "0": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "devices": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "/dev/loop3"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             ],
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_name": "ceph_lv0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_size": "21470642176",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "name": "ceph_lv0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "tags": {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_name": "ceph",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.crush_device_class": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.encrypted": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.objectstore": "bluestore",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_id": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.vdo": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.with_tpm": "0"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             },
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "vg_name": "ceph_vg0"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         }
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     ],
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     "1": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "devices": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "/dev/loop4"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             ],
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_name": "ceph_lv1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_size": "21470642176",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "name": "ceph_lv1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "tags": {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_name": "ceph",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.crush_device_class": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.encrypted": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.objectstore": "bluestore",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_id": "1",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.vdo": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.with_tpm": "0"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             },
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "vg_name": "ceph_vg1"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         }
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     ],
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     "2": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "devices": [
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "/dev/loop5"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             ],
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_name": "ceph_lv2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_size": "21470642176",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "name": "ceph_lv2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "tags": {
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.cluster_name": "ceph",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.crush_device_class": "",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.encrypted": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.objectstore": "bluestore",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osd_id": "2",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.vdo": "0",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:                 "ceph.with_tpm": "0"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             },
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "type": "block",
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:             "vg_name": "ceph_vg2"
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:         }
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]:     ]
Feb 25 12:37:01 compute-0 quizzical_noyce[326503]: }
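The block above is the complete "ceph-volume lvm list --format json" payload: a JSON object mapping each OSD id to a list of logical volumes, with the same metadata carried twice, once flattened into the lv_tags string and once parsed in the tags object. A short sketch of summarizing such a payload follows; field names are taken verbatim from the output above, and saving it to lvm_list.json is an assumption.

    #!/usr/bin/env python3
    # Sketch: summarize a captured "ceph-volume lvm list --format json" payload.
    # Assumes the JSON above was saved to lvm_list.json; all key names
    # (devices, lv_path, lv_size, type, tags, ceph.osd_fsid, ...) appear
    # verbatim in the log output.
    import json

    with open("lvm_list.json") as fh:
        inventory = json.load(fh)

    # Top level: OSD id (as a string) -> list of LVs; each OSD here has a
    # single bluestore "block" LV backed by one loop device.
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 1024 ** 3
            print("osd.%s: %s (%s, %.1f GiB) on %s, osd_fsid=%s, encrypted=%s"
                  % (osd_id, lv["lv_path"], lv["type"], size_gib,
                     ",".join(lv["devices"]), tags["ceph.osd_fsid"],
                     tags["ceph.encrypted"]))

Against the payload above this prints three lines, osd.0 through osd.2, each about 20.0 GiB on /dev/loop3 through /dev/loop5, consistent with the 60 GiB total the pgmap lines report.
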
Feb 25 12:37:01 compute-0 systemd[1]: libpod-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope: Deactivated successfully.
Feb 25 12:37:01 compute-0 conmon[326503]: conmon f656a2be4fa9558c67a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope/container/memory.events
Feb 25 12:37:01 compute-0 podman[326487]: 2026-02-25 12:37:01.878159031 +0000 UTC m=+0.597028315 container died f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:37:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 341 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Feb 25 12:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f-merged.mount: Deactivated successfully.
Feb 25 12:37:02 compute-0 ceph-mon[76335]: pgmap v1663: 305 pgs: 305 active+clean; 341 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Feb 25 12:37:02 compute-0 podman[326487]: 2026-02-25 12:37:02.610612622 +0000 UTC m=+1.329481866 container remove f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:37:02 compute-0 systemd[1]: libpod-conmon-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope: Deactivated successfully.
Feb 25 12:37:02 compute-0 sudo[326410]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:02 compute-0 sudo[326527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:37:02 compute-0 sudo[326527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:37:02 compute-0 sudo[326527]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:02 compute-0 sudo[326552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:37:02 compute-0 sudo[326552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:37:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:03 compute-0 podman[326589]: 2026-02-25 12:37:03.024604441 +0000 UTC m=+0.022596080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:37:03 compute-0 podman[326589]: 2026-02-25 12:37:03.335463897 +0000 UTC m=+0.333455446 container create 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:37:03 compute-0 systemd[1]: Started libpod-conmon-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope.
Feb 25 12:37:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.822 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.824 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.824 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.828 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.828 244018 DEBUG nova.objects.instance [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'flavor' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:03 compute-0 nova_compute[244014]: 2026-02-25 12:37:03.857 244018 DEBUG nova.virt.libvirt.driver [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
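The nova-compute DEBUG run above traces the stop path being serialized per instance: a lock named after the instance UUID is acquired around do_stop_instance, the power state is re-checked under the lock, and only then does the libvirt driver begin a clean shutdown. Below is a toy model of that lock-then-recheck shape, using plain threading rather than nova's oslo.concurrency decorators; the UUID and RUNNING=1 come from the log, SHUTDOWN=4 follows nova's power_state convention, and everything else is illustrative.

    #!/usr/bin/env python3
    # Toy model of the per-instance lock-then-recheck pattern in the nova
    # stop path above. Illustration only: nova wraps do_stop_instance in an
    # oslo.concurrency lock; this reproduces the shape with threading.Lock.
    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)  # one lock per instance UUID
    RUNNING, SHUTDOWN = 1, 4              # RUNNING=1 as logged; SHUTDOWN=4 per nova

    power_state = {"0061daee-43d7-458b-8645-0ad3f8fbb2af": RUNNING}

    def stop_instance(uuid):
        # Serialize lifecycle operations on one instance, mirroring the
        # 'Acquiring lock "<uuid>" by ... do_stop_instance' lines.
        with _locks[uuid]:
            # Re-check under the lock: a concurrent request may already
            # have stopped the instance while this one waited.
            if power_state[uuid] != RUNNING:
                return
            # Nova would invoke the libvirt driver's clean-shutdown path here.
            power_state[uuid] = SHUTDOWN

    stop_instance("0061daee-43d7-458b-8645-0ad3f8fbb2af")
    print(power_state)

The re-check is the point of the pattern: another request may have powered the instance off while this one waited on the lock, which is why the log shows "Checking state" immediately after the lock is acquired.
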
Feb 25 12:37:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 211 op/s
Feb 25 12:37:04 compute-0 nova_compute[244014]: 2026-02-25 12:37:04.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:04 compute-0 podman[326589]: 2026-02-25 12:37:04.190353733 +0000 UTC m=+1.188345372 container init 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:37:04 compute-0 podman[326589]: 2026-02-25 12:37:04.196897588 +0000 UTC m=+1.194889137 container start 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:37:04 compute-0 happy_dirac[326606]: 167 167
Feb 25 12:37:04 compute-0 systemd[1]: libpod-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope: Deactivated successfully.
Feb 25 12:37:04 compute-0 podman[326589]: 2026-02-25 12:37:04.370796455 +0000 UTC m=+1.368788104 container attach 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:37:04 compute-0 podman[326589]: 2026-02-25 12:37:04.3712952 +0000 UTC m=+1.369286789 container died 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:37:04 compute-0 ceph-mon[76335]: pgmap v1664: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 211 op/s
Feb 25 12:37:04 compute-0 nova_compute[244014]: 2026-02-25 12:37:04.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f94ccfc84b667e3ecbc6ee9fc3fcae98ef109078cc5ae4dc7504eaa1971b16-merged.mount: Deactivated successfully.
Feb 25 12:37:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Feb 25 12:37:07 compute-0 ceph-mon[76335]: pgmap v1665: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Feb 25 12:37:07 compute-0 podman[326589]: 2026-02-25 12:37:07.304494654 +0000 UTC m=+4.302486243 container remove 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:37:07 compute-0 systemd[1]: libpod-conmon-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope: Deactivated successfully.
Feb 25 12:37:07 compute-0 podman[326630]: 2026-02-25 12:37:07.50529047 +0000 UTC m=+0.034785862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:37:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:08 compute-0 podman[326630]: 2026-02-25 12:37:08.012466463 +0000 UTC m=+0.541961795 container create e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:37:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 217 op/s
Feb 25 12:37:08 compute-0 systemd[1]: Started libpod-conmon-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope.
Feb 25 12:37:08 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:08 compute-0 ceph-mon[76335]: pgmap v1666: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 217 op/s
Feb 25 12:37:08 compute-0 podman[326630]: 2026-02-25 12:37:08.641434912 +0000 UTC m=+1.170930294 container init e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:37:08 compute-0 podman[326630]: 2026-02-25 12:37:08.651676531 +0000 UTC m=+1.181171863 container start e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:37:08 compute-0 podman[326630]: 2026-02-25 12:37:08.805716078 +0000 UTC m=+1.335211470 container attach e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:37:09 compute-0 nova_compute[244014]: 2026-02-25 12:37:09.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:09 compute-0 lvm[326726]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:37:09 compute-0 lvm[326726]: VG ceph_vg1 finished
Feb 25 12:37:09 compute-0 lvm[326725]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:37:09 compute-0 lvm[326725]: VG ceph_vg0 finished
Feb 25 12:37:09 compute-0 lvm[326728]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:37:09 compute-0 lvm[326728]: VG ceph_vg2 finished
Feb 25 12:37:09 compute-0 xenodochial_meninsky[326646]: {}
Feb 25 12:37:09 compute-0 podman[326630]: 2026-02-25 12:37:09.495117543 +0000 UTC m=+2.024612865 container died e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:37:09 compute-0 systemd[1]: libpod-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Deactivated successfully.
Feb 25 12:37:09 compute-0 systemd[1]: libpod-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Consumed 1.025s CPU time.
Feb 25 12:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f-merged.mount: Deactivated successfully.
Feb 25 12:37:09 compute-0 nova_compute[244014]: 2026-02-25 12:37:09.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 3.6 MiB/s wr, 99 op/s
Feb 25 12:37:10 compute-0 podman[326630]: 2026-02-25 12:37:10.315283977 +0000 UTC m=+2.844779309 container remove e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:37:10 compute-0 sudo[326552]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:37:10 compute-0 systemd[1]: libpod-conmon-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Deactivated successfully.
Feb 25 12:37:10 compute-0 ceph-mon[76335]: pgmap v1667: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 3.6 MiB/s wr, 99 op/s
Feb 25 12:37:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:37:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:37:10 compute-0 kernel: tapd689bf7c-d4 (unregistering): left promiscuous mode
Feb 25 12:37:10 compute-0 NetworkManager[49836]: <info>  [1772023030.8746] device (tapd689bf7c-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:37:10 compute-0 nova_compute[244014]: 2026-02-25 12:37:10.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:10 compute-0 ovn_controller[147040]: 2026-02-25T12:37:10Z|00913|binding|INFO|Releasing lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 from this chassis (sb_readonly=0)
Feb 25 12:37:10 compute-0 ovn_controller[147040]: 2026-02-25T12:37:10Z|00914|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 down in Southbound
Feb 25 12:37:10 compute-0 ovn_controller[147040]: 2026-02-25T12:37:10Z|00915|binding|INFO|Removing iface tapd689bf7c-d4 ovn-installed in OVS
Feb 25 12:37:10 compute-0 nova_compute[244014]: 2026-02-25 12:37:10.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:10 compute-0 nova_compute[244014]: 2026-02-25 12:37:10.901 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.906 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.908 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.912 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:37:10 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 25 12:37:10 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005a.scope: Consumed 12.684s CPU time.
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa67ddb-e7fd-4e24-b269-77929d6898ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:10 compute-0 systemd-machined[210048]: Machine qemu-116-instance-0000005a terminated.
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.955 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[132ca7a4-dca9-4f51-a1e1-1a81ffd04a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.958 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[31360445-5567-4f02-8286-2701a1e19704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.977 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f984bca3-6fcb-4bb7-a8ec-1699247df27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6759532b-65c2-40f4-ad4b-c8bac98f6eef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 910, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 910, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326755, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.027 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f92935c-c7f2-485e-862a-e33a08a0bc08]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326756, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326756, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.028 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:11 compute-0 sudo[326758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:37:11 compute-0 sudo[326758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:37:11 compute-0 sudo[326758]: pam_unix(sudo:session): session closed for user root
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.138 244018 INFO nova.virt.libvirt.driver [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance shutdown successfully after 7 seconds.
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.159 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.160 244018 DEBUG nova.objects.instance [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.206 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.268 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 7.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.371 244018 DEBUG nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.372 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.373 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.373 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.374 244018 DEBUG nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:11 compute-0 nova_compute[244014]: 2026-02-25 12:37:11.374 244018 WARNING nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state None.
Feb 25 12:37:11 compute-0 ovn_controller[147040]: 2026-02-25T12:37:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 12:37:11 compute-0 ovn_controller[147040]: 2026-02-25T12:37:11Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 12:37:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.7 MiB/s wr, 113 op/s
Feb 25 12:37:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:37:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:37:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:12 compute-0 nova_compute[244014]: 2026-02-25 12:37:12.932 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:12 compute-0 nova_compute[244014]: 2026-02-25 12:37:12.934 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:12 compute-0 nova_compute[244014]: 2026-02-25 12:37:12.958 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:37:13 compute-0 nova_compute[244014]: 2026-02-25 12:37:13.079 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:13 compute-0 nova_compute[244014]: 2026-02-25 12:37:13.079 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:13 compute-0 nova_compute[244014]: 2026-02-25 12:37:13.088 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:37:13 compute-0 nova_compute[244014]: 2026-02-25 12:37:13.089 244018 INFO nova.compute.claims [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:37:13 compute-0 ceph-mon[76335]: pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.7 MiB/s wr, 113 op/s
Feb 25 12:37:13 compute-0 podman[326793]: 2026-02-25 12:37:13.730123403 +0000 UTC m=+0.061145847 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.053 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.534 244018 DEBUG nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.535 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.535 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 DEBUG nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 WARNING nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state None.
Feb 25 12:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789193406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.608 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.614 244018 DEBUG nova.compute.provider_tree [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.629 244018 DEBUG nova.scheduler.client.report [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.652 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.654 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.702 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.702 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:37:14 compute-0 ceph-mon[76335]: pgmap v1669: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Feb 25 12:37:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1789193406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.725 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.769 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.821 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'flavor' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG nova.network.neutron [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.866 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'info_cache' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:37:14 compute-0 nova_compute[244014]: 2026-02-25 12:37:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.038 244018 DEBUG nova.policy [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3187080005d24d4b8fc920bcd975004b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.042 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.043 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.043 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.044 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.044 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating image(s)
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.064 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.084 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.107 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.110 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.192 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.193 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.193 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.194 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.214 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.217 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.472 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.560 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] resizing rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.641 244018 DEBUG nova.objects.instance [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'migration_context' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.657 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Ensure instance console log exists: /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:15 compute-0 nova_compute[244014]: 2026-02-25 12:37:15.659 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:15 compute-0 podman[327001]: 2026-02-25 12:37:15.739352113 +0000 UTC m=+0.091576626 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:37:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.195 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Successfully created port: 7e3c890f-a02f-487b-9c31-7341f5caa914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.273 244018 DEBUG nova.network.neutron [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.291 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.294 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.295 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.295 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.344 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.344 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.358 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.372 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.373 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.374 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.375 244018 DEBUG os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.378 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd689bf7c-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.387 244018 INFO os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')
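The unplug path above is two steps: os-vif converts nova's VIF dict into a VIFOpenVSwitch object, then issues a single ovsdbapp transaction whose DelPortCommand removes the tap port from br-int. The same removal can be reproduced standalone; a minimal sketch, assuming ovsdbapp's documented connection helpers and a local ovsdb-server socket at /run/openvswitch/db.sock (the socket path is an assumption and varies by deployment):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Attach to the local Open vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the logged DelPortCommand: remove the tap port
    # from br-int, tolerating its absence (if_exists=True keeps it
    # idempotent).
    api.del_port('tapd689bf7c-d4', bridge='br-int',
                 if_exists=True).execute(check_error=True)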
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.393 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start _get_guest_xml network_info=[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.398 244018 WARNING nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.402 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.403 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.406 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.406 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
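With no topology hints from flavor or image (preferences 0:0:0, limits capped at 65536 per dimension), the only factorisation of 1 vCPU is 1 socket x 1 core x 1 thread, which is exactly the topology that lands in the guest XML below. A toy re-enumeration of that walk (not nova's actual code, just the same arithmetic):

    # Toy version of the topology enumeration logged above: list every
    # sockets*cores*threads factorisation of the vCPU count that stays
    # within the per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log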
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.411 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:16 compute-0 nova_compute[244014]: 2026-02-25 12:37:16.434 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/394028940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.007 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.039 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:17 compute-0 ceph-mon[76335]: pgmap v1670: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 25 12:37:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/394028940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998435508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.590 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
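Before building the RBD disk elements, the libvirt driver shells out (via oslo.concurrency's processutils, as logged) to fetch the monitor map; each of the two calls above took roughly half a second. A minimal reproduction of that call, reusing the client id and conf path from the logged command line (requires the ceph CLI and a reachable cluster):

    import json
    from oslo_concurrency import processutils

    # Same subprocess as the logged "Running cmd"; raises
    # ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # The 'mons' list supplies the <host> entries for the RBD disk
    # sources in the guest XML that follows.
    for mon in json.loads(out).get('mons', []):
        print(mon['name'], mon.get('public_addr') or mon.get('addr'))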
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.591 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.592 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.593 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.594 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.615 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <uuid>0061daee-43d7-458b-8645-0ad3f8fbb2af</uuid>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <name>instance-0000005a</name>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1814919794</nova:name>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:37:16</nova:creationTime>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <nova:port uuid="d689bf7c-d44c-4f39-a2a7-a85e52dcee30">
Feb 25 12:37:17 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <system>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="serial">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="uuid">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </system>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <os>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </os>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <features>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </features>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk">
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config">
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:17 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:65:76:ae"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <target dev="tapd689bf7c-d4"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log" append="off"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <video>
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </video>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:37:17 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:37:17 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:37:17 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:37:17 compute-0 nova_compute[244014]: </domain>
Feb 25 12:37:17 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
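The generated domain is fully network-backed: both <disk> elements point at RBD images in the vms pool (guarded by the ceph auth secret), and the <interface type="ethernet"> is a bare tap device that OVN wires into br-int out-of-band rather than a libvirt-managed bridge. A short sketch that pulls those pieces back out of the defined domain, assuming the libvirt Python bindings and a local qemu:///system connection:

    import xml.etree.ElementTree as ET
    import libvirt  # python3-libvirt bindings assumed installed

    conn = libvirt.open('qemu:///system')
    try:
        # UUID taken from the log above.
        dom = conn.lookupByUUIDString('0061daee-43d7-458b-8645-0ad3f8fbb2af')
        root = ET.fromstring(dom.XMLDesc(0))

        # RBD-backed disks appear as <source protocol="rbd" name="pool/image">.
        for src in root.findall("./devices/disk/source[@protocol='rbd']"):
            print('rbd image:', src.get('name'))

        # The tap created during VIF plug shows up as the interface target.
        for tgt in root.findall('./devices/interface/target'):
            print('vif tap:', tgt.get('dev'))
    finally:
        conn.close()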
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.618 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.619 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.621 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.622 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.623 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.624 244018 DEBUG os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.627 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.632 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd689bf7c-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd689bf7c-d4, col_values=(('external_ids', {'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:76:ae', 'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 NetworkManager[49836]: <info>  [1772023037.6367] manager: (tapd689bf7c-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.643 244018 INFO os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')
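Plugging is the mirror image of the earlier unplug: one ovsdbapp transaction adds the port back to br-int, then sets external_ids on its Interface row; the iface-id is what lets ovn-controller match the OVS interface to the Neutron logical port and claim it a few lines below. As a standalone sketch under the same connection assumptions as the unplug example (socket path assumed):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands, matching txn idx=0/idx=1 above:
    # create the port idempotently, then tag its Interface row with the
    # external_ids that ovn-controller keys off.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd689bf7c-d4', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd689bf7c-d4',
            ('external_ids', {
                'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:65:76:ae',
                'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'})))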
Feb 25 12:37:17 compute-0 kernel: tapd689bf7c-d4: entered promiscuous mode
Feb 25 12:37:17 compute-0 NetworkManager[49836]: <info>  [1772023037.7462] manager: (tapd689bf7c-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Feb 25 12:37:17 compute-0 ovn_controller[147040]: 2026-02-25T12:37:17Z|00916|binding|INFO|Claiming lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for this chassis.
Feb 25 12:37:17 compute-0 ovn_controller[147040]: 2026-02-25T12:37:17Z|00917|binding|INFO|d689bf7c-d44c-4f39-a2a7-a85e52dcee30: Claiming fa:16:3e:65:76:ae 10.100.0.8
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 ovn_controller[147040]: 2026-02-25T12:37:17Z|00918|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 ovn-installed in OVS
Feb 25 12:37:17 compute-0 ovn_controller[147040]: 2026-02-25T12:37:17Z|00919|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 up in Southbound
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.760 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.763 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.766 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:37:17 compute-0 systemd-udevd[327105]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:37:17 compute-0 systemd-machined[210048]: New machine qemu-119-instance-0000005a.
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.785 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[223664d8-039b-4d6b-987e-279aab6cc2b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:17 compute-0 NetworkManager[49836]: <info>  [1772023037.7869] device (tapd689bf7c-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:37:17 compute-0 NetworkManager[49836]: <info>  [1772023037.7893] device (tapd689bf7c-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:37:17 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-0000005a.
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.816 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63be026d-a6fb-4d23-84f4-f97814d9eaf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.821 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb0cfd6-5847-4cc8-9796-ff0f33ead275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.826 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Successfully updated port: 7e3c890f-a02f-487b-9c31-7341f5caa914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.846 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.847 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquired lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.847 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.848 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6fed7bbb-47df-4739-8850-4d7a9f6d70dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d71576fd-93e7-47e7-9748-1750a6917033]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 11, 'rx_bytes': 1246, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 11, 'rx_bytes': 1246, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327118, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.877 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3db334-4675-48cc-a776-0e4e54c94095]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327119, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327119, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
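Decoded, those two privsep replies are pyroute2 netlink dumps from inside the metadata namespace: tap38a239da-c1 carries both a subnet address (10.100.0.2/28) and the metadata VIP 169.254.169.254/32, which is how the agent serves the metadata endpoint to guests on this network. The same state can be checked from the host; a sketch assuming pyroute2 is available and the namespace is registered under /var/run/netns:

    from pyroute2 import NetNS

    # Namespace name follows the privsep 'target' field above:
    # ovnmeta-<neutron network uuid>.
    ns = NetNS('ovnmeta-38a239da-c933-4cbc-be00-dd127471e198')
    try:
        for addr in ns.get_addr():
            # Expect tap38a239da-c1 with 10.100.0.2 and 169.254.169.254.
            print(addr.get_attr('IFA_LABEL'), addr.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()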
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.879 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.884 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.884 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
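The four transaction lines above re-assert the metadata port's OVS wiring: drop tap38a239da-c0 from br-ex if present, add it to br-int, and stamp the Neutron iface-id; both commits report "Transaction caused no change" because the desired state already holds. A sketch of the same idempotent sequence through ovsdbapp's Open_vSwitch API, assuming the usual local ovsdb-server socket path:

# Re-assert the tap port on br-int, as the metadata agent does above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed endpoint

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap38a239da-c0', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap38a239da-c0', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap38a239da-c0',
        ('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'})))
# When the port is already on br-int with that iface-id, the commit is a
# no-op and ovsdbapp logs "Transaction caused no change", as above.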
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG nova.compute.manager [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-changed-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG nova.compute.manager [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Refreshing instance network info cache due to event network-changed-7e3c890f-a02f-487b-9c31-7341f5caa914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:37:17 compute-0 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.076 244018 DEBUG nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.077 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.077 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.078 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.079 244018 DEBUG nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.079 244018 WARNING nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state powering-on.
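The pop sequence above is Nova's external-event rendezvous: pop_instance_event takes the per-instance events lock, finds no waiter registered for network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30, and the manager logs the event as unexpected because the powering-on path never asked to wait for it. A simplified sketch of that pattern (not Nova's actual implementation) with a dict of threading.Event waiters:

# Rendezvous for external instance events: prepare a waiter, then pop it
# when Neutron's event arrives; popping with no waiter is "unexpected".
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_key) -> threading.Event

    def prepare(self, instance_uuid, event_key):
        with self._lock:
            return self._events.setdefault((instance_uuid, event_key),
                                           threading.Event())

    def pop(self, instance_uuid, event_key):
        with self._lock:
            ev = self._events.pop((instance_uuid, event_key), None)
        if ev is None:
            print('unexpected event %s for %s' % (event_key, instance_uuid))
        else:
            ev.set()  # completes a thread blocked in waiter.wait()

events = InstanceEvents()
# No prepare() was issued for this event, so pop() reports it unexpected:
events.pop('0061daee-43d7-458b-8645-0ad3f8fbb2af',
           'network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30')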
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.085 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:37:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 4.0 MiB/s wr, 98 op/s
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.124 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.140 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.140 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
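The network_info blob cached above describes a single OVN-bound OVS port for instance 0061daee-43d7-458b-8645-0ad3f8fbb2af: devname tapd689bf7c-d4 on br-int, fixed IP 10.100.0.8 in 10.100.0.0/28, MTU 1442, not yet active. A sketch that walks the same structure as plain JSON, assuming the logged list was saved to network_info.json:

# Walk the cached network_info for device name, fixed IP and CIDR.
import json

with open('network_info.json') as f:  # the JSON list logged above
    network_info = json.load(f)

for v in network_info:
    for subnet in v['network']['subnets']:
        for ip in subnet['ips']:
            print(v['devname'], ip['address'], subnet['cidr'])
# -> tapd689bf7c-d4 10.100.0.8 10.100.0.0/28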
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.141 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:18 compute-0 nova_compute[244014]: 2026-02-25 12:37:18.141 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3998435508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 0061daee-43d7-458b-8645-0ad3f8fbb2af due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023039.1507766, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Resumed (Lifecycle Event)
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.154 244018 DEBUG nova.compute.manager [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.158 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance rebooted successfully.
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.159 244018 DEBUG nova.compute.manager [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.167 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.188 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (powering-on). Skip.
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.189 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023039.1535273, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.189 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Started (Lifecycle Event)
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.214 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.218 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
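The numeric states above are Nova's power_state constants (1 = RUNNING, 4 = SHUTDOWN): libvirt already reports the domain running while the database still says shut down, and the sync is skipped because task_state powering-on marks the transition as owned by an in-flight operation. A condensed sketch of that decision:

# Power-state sync decision, using the constant values from
# nova.compute.power_state that appear numerically in the log above.
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        # "During sync_power_state the instance has a pending task ... Skip."
        return 'skip: pending task %s' % task_state
    if db_power_state != vm_power_state:
        return 'update DB %d -> %d' % (db_power_state, vm_power_state)
    return 'in sync'

print(sync_power_state(SHUTDOWN, RUNNING, 'powering-on'))  # skip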
Feb 25 12:37:19 compute-0 ceph-mon[76335]: pgmap v1671: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 4.0 MiB/s wr, 98 op/s
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.351 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.375 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Releasing lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.375 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance network_info: |[{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.376 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.376 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Refreshing network info cache for port 7e3c890f-a02f-487b-9c31-7341f5caa914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.381 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start _get_guest_xml network_info=[{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.385 244018 WARNING nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.392 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.392 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.400 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
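The two probes above first look for a dedicated cgroups-v1 cpu hierarchy (missing here) and then for the cpu controller in the unified v2 hierarchy (found). A sketch of an equivalent host check, assuming the standard /sys/fs/cgroup layout:

# Probe for a usable 'cpu' controller, v1 first, then unified v2,
# mirroring the two _has_cgroupsv*_cpu_controller checks above.
import os

def cpu_controller_version():
    if os.path.isdir('/sys/fs/cgroup/cpu'):  # v1: dedicated hierarchy
        return 'v1'
    try:
        with open('/sys/fs/cgroup/cgroup.controllers') as f:
            if 'cpu' in f.read().split():     # v2: listed controller
                return 'v2'
    except FileNotFoundError:
        pass
    return None

print(cpu_controller_version())  # 'v2' on this host, per the log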
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.404 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.404 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
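The hardware.py walk above shows the topology search with no flavor or image constraints: preferences 0:0:0, limits defaulting to 65536 per dimension, and a single valid topology for 1 vCPU, namely 1 socket x 1 core x 1 thread. A sketch of that enumeration under the same assumptions:

# Enumerate (sockets, cores, threads) triples covering the vCPU count,
# as _get_possible_cpu_topologies does above.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    return [(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus]

print(possible_topologies(1))  # [(1, 1, 1)] -> sockets=1,cores=1,threads=1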
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.407 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.900 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:19 compute-0 nova_compute[244014]: 2026-02-25 12:37:19.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709585853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.078 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
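The mon dump call above is how Nova's RBD driver discovers monitor addresses; they resurface below as the <host name="192.168.122.100" port="6789"/> elements of the guest XML. A sketch of running and parsing the same command, with the client id and conf path taken from the log:

# Fetch monitor addresses the way nova.storage.rbd_utils does above.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, check=True, text=True).stdout

for mon in json.loads(out)['mons']:
    print(mon['name'], mon['public_addr'])  # e.g. 192.168.122.100:6789/0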
Feb 25 12:37:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.118 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
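The rbd_utils line above is the driver probing for a config-drive image before (re)creating it; with the python bindings, opening a missing image raises rbd.ImageNotFound, which is reported as "does not exist". A sketch of the same probe, assuming the vms pool named in the guest XML below:

# Check whether an RBD image exists, as rbd_utils does above.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    with cluster.open_ioctx('vms') as ioctx:
        try:
            with rbd.Image(ioctx,
                           'ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config'):
                print('image exists')
        except rbd.ImageNotFound:
            print('rbd image does not exist')  # the case logged above
finally:
    cluster.shutdown()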
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.122 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1709585853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:20 compute-0 ceph-mon[76335]: pgmap v1672: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Feb 25 12:37:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272266667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.472 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
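update_available_resource shells out to ceph df to audit pool capacity; its JSON totals correspond to the pgmap lines (60 GiB total, 59 GiB avail). A sketch of parsing that output under the same client/conf assumptions:

# Audit cluster capacity the way the resource tracker's ceph df call does.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, check=True, text=True).stdout

stats = json.loads(out)['stats']
gib = 1024 ** 3
print('total %.0f GiB, avail %.0f GiB'
      % (stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))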
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.563 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.564 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.568 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.568 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.571 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.571 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:37:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369517480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.645 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.646 244018 DEBUG nova.virt.libvirt.vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:37:14Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.647 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.648 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
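The Converting/Converted pair above is nova.network.os_vif_util translating Nova's VIF dict into an os-vif VIFOpenVSwitch object; the ovs plugin plugs it a moment later (12:37:20.713). A sketch that builds and plugs the same object standalone, which assumes os-vif is initialized outside Nova and run with enough privilege:

# Build the os-vif object from the log and hand it to the ovs plugin.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin via stevedore

net = network.Network(id='a3ce9bca-ffe4-4184-87c6-054df1c1e0b8',
                      bridge='br-int', mtu=1442)
ovs_vif = vif.VIFOpenVSwitch(
    id='7e3c890f-a02f-487b-9c31-7341f5caa914',
    address='fa:16:3e:e9:ff:e0',
    vif_name='tap7e3c890f-a0',
    bridge_name='br-int',
    has_traffic_filtering=True,
    plugin='ovs',
    network=net)
inst = instance_info.InstanceInfo(
    uuid='ed3e74a3-3eba-4446-aba8-cb3936a346e0',
    name='instance-0000005d')

os_vif.plug(ovs_vif, inst)  # ensures br-int and the port, as logged below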
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.650 244018 DEBUG nova.objects.instance [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.703 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <uuid>ed3e74a3-3eba-4446-aba8-cb3936a346e0</uuid>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <name>instance-0000005d</name>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1563092304</nova:name>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:37:19</nova:creationTime>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:user uuid="3187080005d24d4b8fc920bcd975004b">tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member</nova:user>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:project uuid="04e53ffc2c7b47b993b4fd34d0c71d77">tempest-ServersNegativeTestMultiTenantJSON-685623650</nova:project>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <nova:port uuid="7e3c890f-a02f-487b-9c31-7341f5caa914">
Feb 25 12:37:20 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <system>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="serial">ed3e74a3-3eba-4446-aba8-cb3936a346e0</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="uuid">ed3e74a3-3eba-4446-aba8-cb3936a346e0</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </system>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <os>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </os>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <features>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </features>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk">
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config">
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e9:ff:e0"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <target dev="tap7e3c890f-a0"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/console.log" append="off"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <video>
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </video>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:37:20 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:37:20 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:37:20 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:37:20 compute-0 nova_compute[244014]: </domain>
Feb 25 12:37:20 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Preparing to wait for external event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.711 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.711 244018 DEBUG nova.virt.libvirt.vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:37:14Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.712 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.712 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.713 244018 DEBUG os_vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.716 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.719 244018 DEBUG nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.720 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.721 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.721 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.722 244018 DEBUG nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.722 244018 WARNING nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state active and task_state None.
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.726 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e3c890f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.727 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e3c890f-a0, col_values=(('external_ids', {'iface-id': '7e3c890f-a02f-487b-9c31-7341f5caa914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:ff:e0', 'vm-uuid': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:20 compute-0 NetworkManager[49836]: <info>  [1772023040.7302] manager: (tap7e3c890f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.735 244018 INFO os_vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0')
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.799 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.799 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.800 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No VIF found with MAC fa:16:3e:e9:ff:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.800 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Using config drive
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.825 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.842 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.830501983873546GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:20 compute-0 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.151 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0061daee-43d7-458b-8645-0ad3f8fbb2af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.151 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6076a107-bdef-4c8a-8f75-887cdb4833f0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.152 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.152 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ed3e74a3-3eba-4446-aba8-cb3936a346e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.153 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.153 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:37:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4272266667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3369517480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.473 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.646 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating config drive at /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.653 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpom2r0x5w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.792 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpom2r0x5w" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.838 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.842 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.875 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updated VIF entry in instance network info cache for port 7e3c890f-a02f-487b-9c31-7341f5caa914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.876 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:21 compute-0 nova_compute[244014]: 2026-02-25 12:37:21.901 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.000 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.001 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deleting local config drive /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config because it was imported into RBD.
Feb 25 12:37:22 compute-0 kernel: tap7e3c890f-a0: entered promiscuous mode
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.0411] manager: (tap7e3c890f-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Feb 25 12:37:22 compute-0 ovn_controller[147040]: 2026-02-25T12:37:22Z|00920|binding|INFO|Claiming lport 7e3c890f-a02f-487b-9c31-7341f5caa914 for this chassis.
Feb 25 12:37:22 compute-0 ovn_controller[147040]: 2026-02-25T12:37:22Z|00921|binding|INFO|7e3c890f-a02f-487b-9c31-7341f5caa914: Claiming fa:16:3e:e9:ff:e0 10.100.0.14
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 systemd-udevd[327340]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:37:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809555701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.072 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ff:e0 10.100.0.14'], port_security=['fa:16:3e:e9:ff:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4d5db6f-4b6b-4f9f-b234-d31c315a5616', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9322afa8-c9d9-4d31-97f4-43f86a60b202, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7e3c890f-a02f-487b-9c31-7341f5caa914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.075 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3c890f-a02f-487b-9c31-7341f5caa914 in datapath a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 bound to our chassis
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.0769] device (tap7e3c890f-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.0773] device (tap7e3c890f-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.077 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ce9bca-ffe4-4184-87c6-054df1c1e0b8
Feb 25 12:37:22 compute-0 systemd-machined[210048]: New machine qemu-120-instance-0000005d.
Feb 25 12:37:22 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-0000005d.
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.086 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70be9ae8-a71c-4cb6-a2ee-b5318b6bc0a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.086 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ce9bca-f1 in ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.088 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ce9bca-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9733eb-46b6-47ba-be45-38553262cbb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.089 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee15c16-eff6-4776-b914-65fe7c866197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_controller[147040]: 2026-02-25T12:37:22Z|00922|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 ovn-installed in OVS
Feb 25 12:37:22 compute-0 ovn_controller[147040]: 2026-02-25T12:37:22Z|00923|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 up in Southbound
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.101 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9f105aa4-ace2-45f0-973e-b4aa156b706c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.104 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf380293-574f-4eae-b101-a0eace7823bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.117 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.131 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cde97d0b-b09a-4ff3-a2d7-3d22d563379a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.1390] manager: (tapa3ce9bca-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.138 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc7a1fb-3d83-4f39-930a-2404b91d7436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.162 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[637cd4d0-192c-4546-9ace-5fa1267f279e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.163 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.164 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[47759ce7-daf5-4b73-8b7b-fbd7aaa78f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.1791] device (tapa3ce9bca-f0): carrier: link connected
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.182 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2310683-ba4c-412e-8f0a-c5a895a7fed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.195 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4245f03-828f-4858-802e-92026e3c242b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ce9bca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:b3:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501158, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327376, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f601247-e536-430d-aa81-d3ac756eaa80]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:b353'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501158, 'tstamp': 501158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327377, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.218 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7952a9ba-bf46-4b26-a415-a9b25c13d77e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ce9bca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:b3:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501158, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327378, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.225 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.241 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77e8febb-0931-4879-8d67-920ecc05c42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7977c3-6dda-4d27-a7fa-a5a3877d7fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ce9bca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.290 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ce9bca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 kernel: tapa3ce9bca-f0: entered promiscuous mode
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 NetworkManager[49836]: <info>  [1772023042.2952] manager: (tapa3ce9bca-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.298 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ce9bca-f0, col_values=(('external_ids', {'iface-id': '241e63c3-9527-4162-9586-5bef285d531c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 ovn_controller[147040]: 2026-02-25T12:37:22Z|00924|binding|INFO|Releasing lport 241e63c3-9527-4162-9586-5bef285d531c from this chassis (sb_readonly=0)
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.302 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e870baa9-4617-4a83-a339-f534639f8f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.304 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID a3ce9bca-ffe4-4184-87c6-054df1c1e0b8
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:37:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.304 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'env', 'PROCESS_TAG=haproxy-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1809555701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:22 compute-0 ceph-mon[76335]: pgmap v1673: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.628 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.628258, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.629 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Started (Lifecycle Event)
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.661 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.666 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.628526, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.667 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Paused (Lifecycle Event)
Feb 25 12:37:22 compute-0 podman[327452]: 2026-02-25 12:37:22.668265664 +0000 UTC m=+0.061729813 container create 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.703 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:22 compute-0 systemd[1]: Started libpod-conmon-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope.
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.708 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:37:22 compute-0 podman[327452]: 2026-02-25 12:37:22.628047679 +0000 UTC m=+0.021511848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:37:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53868578ef99a8810c9684c99d8c731ec4e09f65916a481d668654b6628d16a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:37:22 compute-0 podman[327452]: 2026-02-25 12:37:22.747946853 +0000 UTC m=+0.141411002 container init 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:37:22 compute-0 podman[327452]: 2026-02-25 12:37:22.752137271 +0000 UTC m=+0.145601420 container start 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.757 244018 DEBUG nova.compute.manager [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.757 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.758 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.758 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.760 244018 DEBUG nova.compute.manager [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Processing event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.761 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.765 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.768 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.768 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.7645283, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.769 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Resumed (Lifecycle Event)
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.778 244018 INFO nova.virt.libvirt.driver [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance spawned successfully.
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.779 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:37:22 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : New worker (327473) forked
Feb 25 12:37:22 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : Loading success.
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.810 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.815 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.815 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.816 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.817 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.818 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.818 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.824 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:37:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:22 compute-0 nova_compute[244014]: 2026-02-25 12:37:22.897 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.001 244018 INFO nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 7.96 seconds to spawn the instance on the hypervisor.
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.002 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.120 244018 INFO nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 10.09 seconds to build instance.
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.152 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.225 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.226 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:23 compute-0 nova_compute[244014]: 2026-02-25 12:37:23.226 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Feb 25 12:37:24 compute-0 nova_compute[244014]: 2026-02-25 12:37:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:24 compute-0 nova_compute[244014]: 2026-02-25 12:37:24.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:37:24 compute-0 nova_compute[244014]: 2026-02-25 12:37:24.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.112 244018 DEBUG nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.114 244018 DEBUG nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] No waiting events found dispatching network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.114 244018 WARNING nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received unexpected event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 for instance with vm_state active and task_state None.
Feb 25 12:37:25 compute-0 ceph-mon[76335]: pgmap v1674: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Feb 25 12:37:25 compute-0 nova_compute[244014]: 2026-02-25 12:37:25.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 116 op/s
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.659 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.661 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.662 244018 INFO nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Terminating instance
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.663 244018 DEBUG nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:37:26 compute-0 kernel: tap7e3c890f-a0 (unregistering): left promiscuous mode
Feb 25 12:37:26 compute-0 NetworkManager[49836]: <info>  [1772023046.7038] device (tap7e3c890f-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:26 compute-0 ovn_controller[147040]: 2026-02-25T12:37:26Z|00925|binding|INFO|Releasing lport 7e3c890f-a02f-487b-9c31-7341f5caa914 from this chassis (sb_readonly=0)
Feb 25 12:37:26 compute-0 ovn_controller[147040]: 2026-02-25T12:37:26Z|00926|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 down in Southbound
Feb 25 12:37:26 compute-0 ovn_controller[147040]: 2026-02-25T12:37:26Z|00927|binding|INFO|Removing iface tap7e3c890f-a0 ovn-installed in OVS
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.717 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ff:e0 10.100.0.14'], port_security=['fa:16:3e:e9:ff:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4d5db6f-4b6b-4f9f-b234-d31c315a5616', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9322afa8-c9d9-4d31-97f4-43f86a60b202, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7e3c890f-a02f-487b-9c31-7341f5caa914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.723 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3c890f-a02f-487b-9c31-7341f5caa914 in datapath a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 unbound from our chassis
Feb 25 12:37:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.727 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:37:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.729 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f81c1ff8-2864-480f-8c09-9bc5a8f282c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.729 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 namespace which is not needed anymore
Feb 25 12:37:26 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 25 12:37:26 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Consumed 4.484s CPU time.
Feb 25 12:37:26 compute-0 systemd-machined[210048]: Machine qemu-120-instance-0000005d terminated.
Feb 25 12:37:26 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : haproxy version is 2.8.14-c23fe91
Feb 25 12:37:26 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : path to executable is /usr/sbin/haproxy
Feb 25 12:37:26 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [WARNING]  (327471) : Exiting Master process...
Feb 25 12:37:26 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [ALERT]    (327471) : Current worker (327473) exited with code 143 (Terminated)
Feb 25 12:37:26 compute-0 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [WARNING]  (327471) : All workers exited. Exiting... (0)
Feb 25 12:37:26 compute-0 systemd[1]: libpod-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope: Deactivated successfully.
Feb 25 12:37:26 compute-0 podman[327507]: 2026-02-25 12:37:26.882435637 +0000 UTC m=+0.056285539 container died 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.892 244018 INFO nova.virt.libvirt.driver [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance destroyed successfully.
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.893 244018 DEBUG nova.objects.instance [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'resources' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b-userdata-shm.mount: Deactivated successfully.
Feb 25 12:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53868578ef99a8810c9684c99d8c731ec4e09f65916a481d668654b6628d16a-merged.mount: Deactivated successfully.
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.910 244018 DEBUG nova.virt.libvirt.vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:37:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:23Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.915 244018 DEBUG nova.network.os_vif_util [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.916 244018 DEBUG nova.network.os_vif_util [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.916 244018 DEBUG os_vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.918 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e3c890f-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:37:26 compute-0 nova_compute[244014]: 2026-02-25 12:37:26.925 244018 INFO os_vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0')
Feb 25 12:37:26 compute-0 podman[327507]: 2026-02-25 12:37:26.931754859 +0000 UTC m=+0.105604761 container cleanup 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:37:26 compute-0 systemd[1]: libpod-conmon-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope: Deactivated successfully.
Feb 25 12:37:26 compute-0 podman[327556]: 2026-02-25 12:37:26.999166261 +0000 UTC m=+0.045135985 container remove 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8229cb-c960-4272-9b09-82d2ec9e89db]: (4, ('Wed Feb 25 12:37:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 (26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b)\n26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b\nWed Feb 25 12:37:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 (26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b)\n26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.005 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb48c1b-6937-4f51-a6d4-c639b2cdc385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.006 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ce9bca-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:27 compute-0 kernel: tapa3ce9bca-f0: left promiscuous mode
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.016 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b7863a2a-d6b1-408c-80d4-60940fcbe338]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.016 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67fcdb10-444e-43e0-8451-58400bf1c68c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.039 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1cb752-40c6-4348-85a0-fb29529c78f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.049 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[478f168d-6aca-48ba-8681-4ab114f0a2f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501153, 'reachable_time': 40530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327579, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 systemd[1]: run-netns-ovnmeta\x2da3ce9bca\x2dffe4\x2d4184\x2d87c6\x2d054df1c1e0b8.mount: Deactivated successfully.
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.052 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.052 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1c611c-5a15-44be-b0b8-13b80bb0e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.181 244018 INFO nova.virt.libvirt.driver [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deleting instance files /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0_del
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.182 244018 INFO nova.virt.libvirt.driver [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deletion of /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0_del complete
Feb 25 12:37:27 compute-0 ceph-mon[76335]: pgmap v1675: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 116 op/s
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 INFO nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 DEBUG oslo.service.loopingcall [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 DEBUG nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.250 244018 DEBUG nova.network.neutron [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.759 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.759 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.760 244018 INFO nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Terminating instance
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.760 244018 DEBUG nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:37:27 compute-0 kernel: tapca16d7f1-7f (unregistering): left promiscuous mode
Feb 25 12:37:27 compute-0 NetworkManager[49836]: <info>  [1772023047.8071] device (tapca16d7f1-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 ovn_controller[147040]: 2026-02-25T12:37:27Z|00928|binding|INFO|Releasing lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 from this chassis (sb_readonly=0)
Feb 25 12:37:27 compute-0 ovn_controller[147040]: 2026-02-25T12:37:27Z|00929|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 down in Southbound
Feb 25 12:37:27 compute-0 ovn_controller[147040]: 2026-02-25T12:37:27Z|00930|binding|INFO|Removing iface tapca16d7f1-7f ovn-installed in OVS
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.818 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:9e:27 10.100.0.3'], port_security=['fa:16:3e:aa:9e:27 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca16d7f1-7f94-4d64-906e-e1469230e4f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.821 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca16d7f1-7f94-4d64-906e-e1469230e4f1 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:37:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a01a783c-5572-4592-9abe-681ef7a3b432]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Feb 25 12:37:27 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005c.scope: Consumed 13.414s CPU time.
Feb 25 12:37:27 compute-0 systemd-machined[210048]: Machine qemu-118-instance-0000005c terminated.
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.867 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c02cf11a-229a-45d8-82ba-59eef755ec7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.869 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8eebd2f2-de4f-4f7e-a9a0-023605087925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.887 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b99f481-81ba-4437-8156-3d8c679d369d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3d7e9f-2ea9-4706-93d2-cac24a0ec438]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 13, 'rx_bytes': 1414, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 13, 'rx_bytes': 1414, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327589, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.908 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad8d6b8-2d15-4cad-9849-78c0cc8899d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327590, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327590, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.910 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.915 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.915 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.916 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.916 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.991 244018 INFO nova.virt.libvirt.driver [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance destroyed successfully.
Feb 25 12:37:27 compute-0 nova_compute[244014]: 2026-02-25 12:37:27.992 244018 DEBUG nova.objects.instance [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.020 244018 DEBUG nova.virt.libvirt.vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:52Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.021 244018 DEBUG nova.network.os_vif_util [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.022 244018 DEBUG nova.network.os_vif_util [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.022 244018 DEBUG os_vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.025 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca16d7f1-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.033 244018 INFO os_vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f')
Feb 25 12:37:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.278 244018 INFO nova.virt.libvirt.driver [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deleting instance files /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_del
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.279 244018 INFO nova.virt.libvirt.driver [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deletion of /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_del complete
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.346 244018 INFO nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG oslo.service.loopingcall [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG nova.network.neutron [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.704 244018 DEBUG nova.network.neutron [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.723 244018 INFO nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 1.47 seconds to deallocate network for instance.
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.767 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.768 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.782 244018 DEBUG nova.compute.manager [req-e08d7017-0088-4b9e-bbff-e384660f0acd req-5d4e7965-75f7-41de-aeb4-87aa24feb9ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-deleted-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:28 compute-0 nova_compute[244014]: 2026-02-25 12:37:28.917 244018 DEBUG oslo_concurrency.processutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.044 244018 DEBUG nova.network.neutron [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.074 244018 INFO nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 0.73 seconds to deallocate network for instance.
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.128 244018 DEBUG nova.compute.manager [req-49eb6003-c712-4992-81a8-bc15991f7984 req-136f26ad-dba7-4a76-947b-e20d51a6e453 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-deleted-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.139 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:29 compute-0 ceph-mon[76335]: pgmap v1676: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 25 12:37:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270513869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.522 244018 DEBUG oslo_concurrency.processutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.527 244018 DEBUG nova.compute.provider_tree [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.546 244018 DEBUG nova.scheduler.client.report [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.574 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.576 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.607 244018 INFO nova.scheduler.client.report [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Deleted allocations for instance ed3e74a3-3eba-4446-aba8-cb3936a346e0
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.672 244018 DEBUG oslo_concurrency.processutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.699 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:29 compute-0 nova_compute[244014]: 2026-02-25 12:37:29.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 155 op/s
Feb 25 12:37:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4270513869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185976889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.221 244018 DEBUG oslo_concurrency.processutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.226 244018 DEBUG nova.compute.provider_tree [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.244 244018 DEBUG nova.scheduler.client.report [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.279 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.310 244018 INFO nova.scheduler.client.report [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.388 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:30 compute-0 ovn_controller[147040]: 2026-02-25T12:37:30Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:76:ae 10.100.0.8
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:37:30
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 12:37:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.992 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.993 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.994 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.994 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.995 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.997 244018 INFO nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Terminating instance
Feb 25 12:37:30 compute-0 nova_compute[244014]: 2026-02-25 12:37:30.999 244018 DEBUG nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:37:31 compute-0 kernel: tap66c41a3b-23 (unregistering): left promiscuous mode
Feb 25 12:37:31 compute-0 NetworkManager[49836]: <info>  [1772023051.0555] device (tap66c41a3b-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:37:31 compute-0 ovn_controller[147040]: 2026-02-25T12:37:31Z|00931|binding|INFO|Releasing lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 from this chassis (sb_readonly=0)
Feb 25 12:37:31 compute-0 ovn_controller[147040]: 2026-02-25T12:37:31Z|00932|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 down in Southbound
Feb 25 12:37:31 compute-0 ovn_controller[147040]: 2026-02-25T12:37:31Z|00933|binding|INFO|Removing iface tap66c41a3b-23 ovn-installed in OVS
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.074 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:11:ce 10.100.0.6'], port_security=['fa:16:3e:6d:11:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6076a107-bdef-4c8a-8f75-887cdb4833f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=66c41a3b-23a5-4cbf-a70e-416adf1617e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.078 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.080 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[921cd520-bae0-4893-b9d7-3b7c4282fe79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Feb 25 12:37:31 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Consumed 13.362s CPU time.
Feb 25 12:37:31 compute-0 systemd-machined[210048]: Machine qemu-117-instance-0000005b terminated.
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.117 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb6754e-1f41-4d1b-aa6c-56e213fdb6d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.120 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b2554a-1295-45be-ab01-dc08fd970a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.145 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d011d24d-0054-4c5c-8324-47dd8fcb1af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[582d70da-7cbb-4e4c-81ef-d47fa7cd3251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 15, 'rx_bytes': 1414, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 15, 'rx_bytes': 1414, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327679, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.172 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb895b3e-1324-40d5-9d1c-7423a481fdb7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327680, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327680, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.180 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.181 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:37:31 compute-0 ceph-mon[76335]: pgmap v1677: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 155 op/s
Feb 25 12:37:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1185976889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.234 244018 INFO nova.virt.libvirt.driver [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance destroyed successfully.
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.236 244018 DEBUG nova.objects.instance [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.251 244018 DEBUG nova.virt.libvirt.vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:50Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.251 244018 DEBUG nova.network.os_vif_util [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.252 244018 DEBUG nova.network.os_vif_util [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.252 244018 DEBUG os_vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.254 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66c41a3b-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.259 244018 INFO os_vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23')
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.518 244018 INFO nova.virt.libvirt.driver [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deleting instance files /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0_del
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.520 244018 INFO nova.virt.libvirt.driver [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deletion of /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0_del complete
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.592 244018 INFO nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.593 244018 DEBUG oslo.service.loopingcall [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.593 244018 DEBUG nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:37:31 compute-0 nova_compute[244014]: 2026-02-25 12:37:31.594 244018 DEBUG nova.network.neutron [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:37:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:37:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:32.086 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:32.087 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:37:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 308 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 47 KiB/s wr, 226 op/s
Feb 25 12:37:32 compute-0 ceph-mon[76335]: pgmap v1678: 305 pgs: 305 active+clean; 308 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 47 KiB/s wr, 226 op/s
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.511 244018 DEBUG nova.network.neutron [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.533 244018 INFO nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 0.94 seconds to deallocate network for instance.
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.575 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.576 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.580 244018 DEBUG nova.compute.manager [req-a22e98a5-eb93-4edc-8fa9-d0033405c477 req-7e35add3-9ec8-47e0-b603-9e9315630013 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-deleted-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:32 compute-0 nova_compute[244014]: 2026-02-25 12:37:32.641 244018 DEBUG oslo_concurrency.processutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320362384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.206 244018 DEBUG oslo_concurrency.processutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.214 244018 DEBUG nova.compute.provider_tree [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/320362384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:33 compute-0 ovn_controller[147040]: 2026-02-25T12:37:33Z|00934|binding|INFO|Releasing lport b61d6004-89be-4a9d-aeb0-ecb4f03f3526 from this chassis (sb_readonly=0)
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.270 244018 DEBUG nova.scheduler.client.report [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.321 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.346 244018 INFO nova.scheduler.client.report [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 6076a107-bdef-4c8a-8f75-887cdb4833f0
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.412 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:33 compute-0 nova_compute[244014]: 2026-02-25 12:37:33.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 212 op/s
Feb 25 12:37:34 compute-0 ceph-mon[76335]: pgmap v1679: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 212 op/s
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.311 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.312 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.313 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.314 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.314 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.316 244018 INFO nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Terminating instance
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.317 244018 DEBUG nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:37:34 compute-0 kernel: tapd689bf7c-d4 (unregistering): left promiscuous mode
Feb 25 12:37:34 compute-0 NetworkManager[49836]: <info>  [1772023054.3676] device (tapd689bf7c-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:37:34 compute-0 ovn_controller[147040]: 2026-02-25T12:37:34Z|00935|binding|INFO|Releasing lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 from this chassis (sb_readonly=0)
Feb 25 12:37:34 compute-0 ovn_controller[147040]: 2026-02-25T12:37:34Z|00936|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 down in Southbound
Feb 25 12:37:34 compute-0 ovn_controller[147040]: 2026-02-25T12:37:34Z|00937|binding|INFO|Removing iface tapd689bf7c-d4 ovn-installed in OVS
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.381 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.382 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.383 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38a239da-c933-4cbc-be00-dd127471e198, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.384 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e905ed8e-0d33-4289-a88f-d422dd631b49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.385 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 namespace which is not needed anymore
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 25 12:37:34 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005a.scope: Consumed 12.407s CPU time.
Feb 25 12:37:34 compute-0 systemd-machined[210048]: Machine qemu-119-instance-0000005a terminated.
Feb 25 12:37:34 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : haproxy version is 2.8.14-c23fe91
Feb 25 12:37:34 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : path to executable is /usr/sbin/haproxy
Feb 25 12:37:34 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [WARNING]  (325699) : Exiting Master process...
Feb 25 12:37:34 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [ALERT]    (325699) : Current worker (325707) exited with code 143 (Terminated)
Feb 25 12:37:34 compute-0 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [WARNING]  (325699) : All workers exited. Exiting... (0)
Feb 25 12:37:34 compute-0 systemd[1]: libpod-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope: Deactivated successfully.
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.533 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 podman[327760]: 2026-02-25 12:37:34.534841844 +0000 UTC m=+0.050383023 container died 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.550 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.551 244018 DEBUG nova.objects.instance [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1-userdata-shm.mount: Deactivated successfully.
Feb 25 12:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-82356394d1b742d50fdf5952aac21f145d54d5f578fc2e46fd418cf1c3cc383b-merged.mount: Deactivated successfully.
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.568 244018 DEBUG nova.virt.libvirt.vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:19Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.569 244018 DEBUG nova.network.os_vif_util [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.570 244018 DEBUG nova.network.os_vif_util [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.570 244018 DEBUG os_vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd689bf7c-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 podman[327760]: 2026-02-25 12:37:34.576775347 +0000 UTC m=+0.092316566 container cleanup 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.576 244018 INFO os_vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')
Feb 25 12:37:34 compute-0 systemd[1]: libpod-conmon-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope: Deactivated successfully.
Feb 25 12:37:34 compute-0 podman[327805]: 2026-02-25 12:37:34.653833162 +0000 UTC m=+0.053539652 container remove 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cadae38-bb44-4a7a-8934-30ac81159022]: (4, ('Wed Feb 25 12:37:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 (2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1)\n2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1\nWed Feb 25 12:37:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 (2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1)\n2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[277b1df6-2261-4555-b6bd-bcafd0fe5ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.662 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 kernel: tap38a239da-c0: left promiscuous mode
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28e1002e-4a1d-4108-944c-bea2f8ce5b35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.687 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8661e4d9-6f26-47c3-b932-403725b06a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.688 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[732bb5cb-6cd2-46c1-9599-98e9f3ca49d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.703 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd524d18-fbe4-4f33-b3a1-6b9ffd12c573]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497379, 'reachable_time': 29779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327831, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d38a239da\x2dc933\x2d4cbc\x2dbe00\x2ddd127471e198.mount: Deactivated successfully.
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.707 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:37:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.707 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a8205a0c-e0e3-49e9-9c60-96c537d196d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.795 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.795 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.839 244018 INFO nova.virt.libvirt.driver [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deleting instance files /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af_del
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.840 244018 INFO nova.virt.libvirt.driver [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deletion of /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af_del complete
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.888 244018 INFO nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.889 244018 DEBUG oslo.service.loopingcall [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.889 244018 DEBUG nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.890 244018 DEBUG nova.network.neutron [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:37:34 compute-0 nova_compute[244014]: 2026-02-25 12:37:34.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.534 244018 DEBUG nova.network.neutron [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.551 244018 INFO nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 0.66 seconds to deallocate network for instance.
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.604 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.604 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.669 244018 DEBUG oslo_concurrency.processutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.703 244018 DEBUG nova.compute.manager [req-a570c5ce-e320-4dea-8a85-8abe6f8e8554 req-106b017b-5f88-4896-aa9e-1a6fe23c329b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-deleted-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:35 compute-0 nova_compute[244014]: 2026-02-25 12:37:35.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:37:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 35 KiB/s wr, 198 op/s
Feb 25 12:37:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316831477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.211 244018 DEBUG oslo_concurrency.processutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.218 244018 DEBUG nova.compute.provider_tree [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.243 244018 DEBUG nova.scheduler.client.report [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.264 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.290 244018 INFO nova.scheduler.client.report [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 0061daee-43d7-458b-8645-0ad3f8fbb2af
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.360 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:36 compute-0 nova_compute[244014]: 2026-02-25 12:37:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.015 244018 DEBUG nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.015 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.016 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.016 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.017 244018 DEBUG nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:37:37 compute-0 nova_compute[244014]: 2026-02-25 12:37:37.018 244018 WARNING nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state deleted and task_state None.
Feb 25 12:37:37 compute-0 ceph-mon[76335]: pgmap v1680: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 35 KiB/s wr, 198 op/s
Feb 25 12:37:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/316831477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 36 KiB/s wr, 226 op/s
Feb 25 12:37:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:39.089 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:37:39 compute-0 nova_compute[244014]: 2026-02-25 12:37:39.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:39 compute-0 ceph-mon[76335]: pgmap v1681: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 36 KiB/s wr, 226 op/s
Feb 25 12:37:39 compute-0 nova_compute[244014]: 2026-02-25 12:37:39.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:39 compute-0 nova_compute[244014]: 2026-02-25 12:37:39.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 12:37:40 compute-0 ceph-mon[76335]: pgmap v1682: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 12:37:41 compute-0 nova_compute[244014]: 2026-02-25 12:37:41.751 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:37:41 compute-0 nova_compute[244014]: 2026-02-25 12:37:41.891 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023046.8893049, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:41 compute-0 nova_compute[244014]: 2026-02-25 12:37:41.892 244018 INFO nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Stopped (Lifecycle Event)
Feb 25 12:37:41 compute-0 nova_compute[244014]: 2026-02-25 12:37:41.917 244018 DEBUG nova.compute.manager [None req-88669b5d-bf64-4df5-9951-3841ece5c05f - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1233866754248558e-05 of space, bias 1.0, pg target 0.0033701600262745672 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024931183502203227 of space, bias 1.0, pg target 0.7479355050660969 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.626072165167579e-07 of space, bias 4.0, pg target 0.0009151286598201094 quantized to 16 (current 16)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:37:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:37:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:42 compute-0 nova_compute[244014]: 2026-02-25 12:37:42.988 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023047.9853299, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:42 compute-0 nova_compute[244014]: 2026-02-25 12:37:42.989 244018 INFO nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Stopped (Lifecycle Event)
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.019 244018 DEBUG nova.compute.manager [None req-4d98a929-0c32-41b5-8842-934245f16d54 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:43 compute-0 ceph-mon[76335]: pgmap v1683: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.715 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.715 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.731 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.810 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.810 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.816 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.816 244018 INFO nova.compute.claims [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:37:43 compute-0 nova_compute[244014]: 2026-02-25 12:37:43.945 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.9 KiB/s wr, 76 op/s
Feb 25 12:37:44 compute-0 ceph-mon[76335]: pgmap v1684: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.9 KiB/s wr, 76 op/s
Feb 25 12:37:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192893107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.475 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.484 244018 DEBUG nova.compute.provider_tree [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.508 244018 DEBUG nova.scheduler.client.report [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.533 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.534 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.611 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.630 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.665 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:37:44 compute-0 podman[327877]: 2026-02-25 12:37:44.728999846 +0000 UTC m=+0.076698955 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.962 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.964 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.965 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating image(s)
Feb 25 12:37:44 compute-0 nova_compute[244014]: 2026-02-25 12:37:44.998 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.032 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.065 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.070 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.176 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.177 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.178 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.179 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.211 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:45 compute-0 nova_compute[244014]: 2026-02-25 12:37:45.216 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0ae40499-8d6f-432e-a58e-a28e508f7982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2192893107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 25 12:37:46 compute-0 nova_compute[244014]: 2026-02-25 12:37:46.232 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023051.2309868, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:46 compute-0 nova_compute[244014]: 2026-02-25 12:37:46.234 244018 INFO nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Stopped (Lifecycle Event)
Feb 25 12:37:46 compute-0 nova_compute[244014]: 2026-02-25 12:37:46.257 244018 DEBUG nova.compute.manager [None req-f6a73a1c-8b6a-445d-b26f-95bd06f0d4fa - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:46 compute-0 podman[327990]: 2026-02-25 12:37:46.764926158 +0000 UTC m=+0.107012771 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:37:46 compute-0 ceph-mon[76335]: pgmap v1685: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.203 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0ae40499-8d6f-432e-a58e-a28e508f7982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.298 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] resizing rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:37:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:37:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:37:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:37:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.812 244018 DEBUG nova.objects.instance [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.826 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.827 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Ensure instance console log exists: /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.828 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.829 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.829 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.832 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:37:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.841 244018 WARNING nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.898 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.898 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.901 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.905 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.906 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:37:47 compute-0 nova_compute[244014]: 2026-02-25 12:37:47.909 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.6 MiB/s wr, 50 op/s
Feb 25 12:37:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:37:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:37:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916152046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:48 compute-0 nova_compute[244014]: 2026-02-25 12:37:48.526 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:48 compute-0 nova_compute[244014]: 2026-02-25 12:37:48.551 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:48 compute-0 nova_compute[244014]: 2026-02-25 12:37:48.555 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:37:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157725244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.069 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.072 244018 DEBUG nova.objects.instance [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.091 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <uuid>0ae40499-8d6f-432e-a58e-a28e508f7982</uuid>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <name>instance-0000005e</name>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersAaction247Test-server-424896276</nova:name>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:37:47</nova:creationTime>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:user uuid="410569f12f9b4726b30c6bb2d581bfed">tempest-ServersAaction247Test-801237191-project-member</nova:user>
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <nova:project uuid="9d70f139f85141ba913156bfb65093a3">tempest-ServersAaction247Test-801237191</nova:project>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <system>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="serial">0ae40499-8d6f-432e-a58e-a28e508f7982</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="uuid">0ae40499-8d6f-432e-a58e-a28e508f7982</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </system>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <os>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </os>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <features>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </features>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0ae40499-8d6f-432e-a58e-a28e508f7982_disk">
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config">
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:37:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/console.log" append="off"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <video>
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </video>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:37:49 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:37:49 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:37:49 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:37:49 compute-0 nova_compute[244014]: </domain>
Feb 25 12:37:49 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.165 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.166 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.167 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Using config drive
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.240 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:49 compute-0 ceph-mon[76335]: pgmap v1686: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.6 MiB/s wr, 50 op/s
Feb 25 12:37:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3916152046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3157725244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.548 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023054.5479949, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.549 244018 INFO nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Stopped (Lifecycle Event)
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.565 244018 DEBUG nova.compute.manager [None req-728959d6-2c8d-4ff4-a321-07722eed6026 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.616 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating config drive at /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.622 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3oxzklq7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.759 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3oxzklq7" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.781 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.783 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:49 compute-0 nova_compute[244014]: 2026-02-25 12:37:49.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.6 MiB/s wr, 22 op/s
Feb 25 12:37:50 compute-0 ceph-mon[76335]: pgmap v1687: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.6 MiB/s wr, 22 op/s
Feb 25 12:37:50 compute-0 nova_compute[244014]: 2026-02-25 12:37:50.854 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:50 compute-0 nova_compute[244014]: 2026-02-25 12:37:50.856 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deleting local config drive /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config because it was imported into RBD.
Feb 25 12:37:50 compute-0 systemd-machined[210048]: New machine qemu-121-instance-0000005e.
Feb 25 12:37:50 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-0000005e.
Feb 25 12:37:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.164 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023072.163778, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.164 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Resumed (Lifecycle Event)
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.168 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.169 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.172 244018 INFO nova.virt.libvirt.driver [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance spawned successfully.
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.173 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.194 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.204 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.209 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.210 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.211 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.211 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.212 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.213 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.248 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023072.1673963, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.248 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Started (Lifecycle Event)
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.286 244018 INFO nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 7.32 seconds to spawn the instance on the hypervisor.
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.286 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.310 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.343 244018 INFO nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 8.57 seconds to build instance.
Feb 25 12:37:52 compute-0 nova_compute[244014]: 2026-02-25 12:37:52.364 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:53 compute-0 ceph-mon[76335]: pgmap v1688: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:37:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.309 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.372 244018 INFO nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] instance snapshotting
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.373 244018 DEBUG nova.objects.instance [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'flavor' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.513 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.515 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 INFO nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Terminating instance
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquired lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.517 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.608 244018 INFO nova.virt.libvirt.driver [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Beginning live snapshot process
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.649 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.696 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:37:54 compute-0 nova_compute[244014]: 2026-02-25 12:37:54.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:55 compute-0 ceph-mon[76335]: pgmap v1689: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.292 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.321 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Releasing lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.322 244018 DEBUG nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:37:55 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Feb 25 12:37:55 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d0000005e.scope: Consumed 4.338s CPU time.
Feb 25 12:37:55 compute-0 systemd-machined[210048]: Machine qemu-121-instance-0000005e terminated.
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.544 244018 INFO nova.virt.libvirt.driver [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance destroyed successfully.
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.545 244018 DEBUG nova.objects.instance [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'resources' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.697 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.858 244018 INFO nova.virt.libvirt.driver [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deleting instance files /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982_del
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.859 244018 INFO nova.virt.libvirt.driver [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deletion of /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982_del complete
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.932 244018 INFO nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.932 244018 DEBUG oslo.service.loopingcall [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.933 244018 DEBUG nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:37:55 compute-0 nova_compute[244014]: 2026-02-25 12:37:55.933 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:37:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:37:56 compute-0 ceph-mon[76335]: pgmap v1690: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.321 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.335 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.359 244018 INFO nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 0.43 seconds to deallocate network for instance.
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.416 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.416 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:37:56 compute-0 nova_compute[244014]: 2026-02-25 12:37:56.481 244018 DEBUG oslo_concurrency.processutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:37:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:37:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625209494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.060 244018 DEBUG oslo_concurrency.processutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.067 244018 DEBUG nova.compute.provider_tree [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.082 244018 DEBUG nova.scheduler.client.report [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.101 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.134 244018 INFO nova.scheduler.client.report [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Deleted allocations for instance 0ae40499-8d6f-432e-a58e-a28e508f7982
Feb 25 12:37:57 compute-0 nova_compute[244014]: 2026-02-25 12:37:57.211 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:37:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2625209494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:37:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:37:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:37:58 compute-0 ceph-mon[76335]: pgmap v1691: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 12:37:59 compute-0 nova_compute[244014]: 2026-02-25 12:37:59.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:37:59 compute-0 nova_compute[244014]: 2026-02-25 12:37:59.970 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 12:38:01 compute-0 ceph-mon[76335]: pgmap v1692: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 12:38:01 compute-0 anacron[164977]: Job `cron.weekly' started
Feb 25 12:38:01 compute-0 anacron[164977]: Job `cron.weekly' terminated
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 12:38:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:03 compute-0 ceph-mon[76335]: pgmap v1693: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 12:38:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Feb 25 12:38:04 compute-0 ceph-mon[76335]: pgmap v1694: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Feb 25 12:38:04 compute-0 nova_compute[244014]: 2026-02-25 12:38:04.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:04 compute-0 nova_compute[244014]: 2026-02-25 12:38:04.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 12:38:07 compute-0 ceph-mon[76335]: pgmap v1695: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 12:38:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 12:38:09 compute-0 ceph-mon[76335]: pgmap v1696: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 12:38:09 compute-0 nova_compute[244014]: 2026-02-25 12:38:09.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:09 compute-0 nova_compute[244014]: 2026-02-25 12:38:09.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:10 compute-0 ovn_controller[147040]: 2026-02-25T12:38:10Z|00938|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Feb 25 12:38:10 compute-0 ceph-mon[76335]: pgmap v1697: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:10 compute-0 nova_compute[244014]: 2026-02-25 12:38:10.543 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023075.5413747, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:38:10 compute-0 nova_compute[244014]: 2026-02-25 12:38:10.544 244018 INFO nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Stopped (Lifecycle Event)
Feb 25 12:38:10 compute-0 nova_compute[244014]: 2026-02-25 12:38:10.563 244018 DEBUG nova.compute.manager [None req-cfa1ca97-d50e-4f59-bd9c-ef1e2b87dbc5 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:38:11 compute-0 sudo[328313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:38:11 compute-0 sudo[328313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:11 compute-0 sudo[328313]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:11 compute-0 sudo[328338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:38:11 compute-0 sudo[328338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:11 compute-0 nova_compute[244014]: 2026-02-25 12:38:11.653 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:11 compute-0 sudo[328338]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:38:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:38:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:38:11 compute-0 sudo[328393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:38:11 compute-0 sudo[328393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:11 compute-0 sudo[328393]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:11 compute-0 sudo[328418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:38:11 compute-0 sudo[328418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:12 compute-0 ceph-mon[76335]: pgmap v1698: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:12 compute-0 podman[328455]: 2026-02-25 12:38:12.957783469 +0000 UTC m=+0.038582890 container create 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:38:12 compute-0 systemd[1]: Started libpod-conmon-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope.
Feb 25 12:38:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:13.027412894 +0000 UTC m=+0.108212345 container init 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:13.033708321 +0000 UTC m=+0.114507762 container start 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:12.938780163 +0000 UTC m=+0.019579594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:13 compute-0 zen_kapitsa[328469]: 167 167
Feb 25 12:38:13 compute-0 systemd[1]: libpod-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope: Deactivated successfully.
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:13.038740094 +0000 UTC m=+0.119539565 container attach 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:13.03931916 +0000 UTC m=+0.120118591 container died 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8536bf4da4feb708c1efa08a927a81a9f383a54f466b612923f0b12e1817bac9-merged.mount: Deactivated successfully.
Feb 25 12:38:13 compute-0 podman[328455]: 2026-02-25 12:38:13.08856876 +0000 UTC m=+0.169368181 container remove 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:38:13 compute-0 systemd[1]: libpod-conmon-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope: Deactivated successfully.
Feb 25 12:38:13 compute-0 podman[328494]: 2026-02-25 12:38:13.219978658 +0000 UTC m=+0.036359657 container create b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:38:13 compute-0 systemd[1]: Started libpod-conmon-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope.
Feb 25 12:38:13 compute-0 podman[328494]: 2026-02-25 12:38:13.202542056 +0000 UTC m=+0.018923075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:13 compute-0 podman[328494]: 2026-02-25 12:38:13.327678007 +0000 UTC m=+0.144059006 container init b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:38:13 compute-0 podman[328494]: 2026-02-25 12:38:13.33309982 +0000 UTC m=+0.149480839 container start b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 12:38:13 compute-0 podman[328494]: 2026-02-25 12:38:13.337630358 +0000 UTC m=+0.154011347 container attach b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:38:13 compute-0 wizardly_hopper[328511]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:38:13 compute-0 wizardly_hopper[328511]: --> All data devices are unavailable
Feb 25 12:38:13 compute-0 systemd[1]: libpod-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope: Deactivated successfully.
Feb 25 12:38:13 compute-0 podman[328531]: 2026-02-25 12:38:13.89227191 +0000 UTC m=+0.026782037 container died b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d-merged.mount: Deactivated successfully.
Feb 25 12:38:13 compute-0 podman[328531]: 2026-02-25 12:38:13.946604683 +0000 UTC m=+0.081114730 container remove b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:38:13 compute-0 systemd[1]: libpod-conmon-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope: Deactivated successfully.
Feb 25 12:38:14 compute-0 sudo[328418]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:14 compute-0 sudo[328546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:38:14 compute-0 sudo[328546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:14 compute-0 sudo[328546]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:14 compute-0 sudo[328571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:38:14 compute-0 sudo[328571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.377710908 +0000 UTC m=+0.045821464 container create 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:38:14 compute-0 systemd[1]: Started libpod-conmon-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope.
Feb 25 12:38:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.449022881 +0000 UTC m=+0.117133467 container init 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.359837254 +0000 UTC m=+0.027947860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.456730198 +0000 UTC m=+0.124840794 container start 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 12:38:14 compute-0 pedantic_nightingale[328625]: 167 167
Feb 25 12:38:14 compute-0 systemd[1]: libpod-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope: Deactivated successfully.
Feb 25 12:38:14 compute-0 conmon[328625]: conmon 52098891f03848ab07eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope/container/memory.events
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.460835674 +0000 UTC m=+0.128946260 container attach 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.463820148 +0000 UTC m=+0.131930734 container died 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcb527201cb61290c5da3ce10f53b8d0fe82d7ce04135ccfb2ea845c5b2a500-merged.mount: Deactivated successfully.
Feb 25 12:38:14 compute-0 podman[328609]: 2026-02-25 12:38:14.501261615 +0000 UTC m=+0.169372181 container remove 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:38:14 compute-0 systemd[1]: libpod-conmon-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope: Deactivated successfully.
Feb 25 12:38:14 compute-0 nova_compute[244014]: 2026-02-25 12:38:14.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:14 compute-0 podman[328648]: 2026-02-25 12:38:14.640261847 +0000 UTC m=+0.043691654 container create ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:38:14 compute-0 systemd[1]: Started libpod-conmon-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope.
Feb 25 12:38:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:14 compute-0 podman[328648]: 2026-02-25 12:38:14.621741375 +0000 UTC m=+0.025171232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:14 compute-0 podman[328648]: 2026-02-25 12:38:14.749175211 +0000 UTC m=+0.152605068 container init ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:38:14 compute-0 podman[328648]: 2026-02-25 12:38:14.754975935 +0000 UTC m=+0.158405782 container start ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:38:14 compute-0 podman[328648]: 2026-02-25 12:38:14.774757323 +0000 UTC m=+0.178187160 container attach ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:38:14 compute-0 nova_compute[244014]: 2026-02-25 12:38:14.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:15 compute-0 determined_montalcini[328665]: {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     "0": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "devices": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "/dev/loop3"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             ],
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_name": "ceph_lv0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_size": "21470642176",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "name": "ceph_lv0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "tags": {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_name": "ceph",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.crush_device_class": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.encrypted": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.objectstore": "bluestore",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_id": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.vdo": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.with_tpm": "0"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             },
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "vg_name": "ceph_vg0"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         }
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     ],
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     "1": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "devices": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "/dev/loop4"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             ],
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_name": "ceph_lv1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_size": "21470642176",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "name": "ceph_lv1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "tags": {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_name": "ceph",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.crush_device_class": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.encrypted": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.objectstore": "bluestore",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_id": "1",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.vdo": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.with_tpm": "0"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             },
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "vg_name": "ceph_vg1"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         }
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     ],
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     "2": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "devices": [
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "/dev/loop5"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             ],
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_name": "ceph_lv2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_size": "21470642176",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "name": "ceph_lv2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "tags": {
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.cluster_name": "ceph",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.crush_device_class": "",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.encrypted": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.objectstore": "bluestore",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osd_id": "2",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.vdo": "0",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:                 "ceph.with_tpm": "0"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             },
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "type": "block",
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:             "vg_name": "ceph_vg2"
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:         }
Feb 25 12:38:15 compute-0 determined_montalcini[328665]:     ]
Feb 25 12:38:15 compute-0 determined_montalcini[328665]: }
Feb 25 12:38:15 compute-0 systemd[1]: libpod-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope: Deactivated successfully.
Feb 25 12:38:15 compute-0 podman[328648]: 2026-02-25 12:38:15.075414107 +0000 UTC m=+0.478843954 container died ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319-merged.mount: Deactivated successfully.
Feb 25 12:38:15 compute-0 podman[328648]: 2026-02-25 12:38:15.129290948 +0000 UTC m=+0.532720755 container remove ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:38:15 compute-0 systemd[1]: libpod-conmon-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope: Deactivated successfully.
Feb 25 12:38:15 compute-0 sudo[328571]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:15 compute-0 podman[328675]: 2026-02-25 12:38:15.182475569 +0000 UTC m=+0.069238775 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 25 12:38:15 compute-0 ceph-mon[76335]: pgmap v1699: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:15 compute-0 sudo[328709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:38:15 compute-0 sudo[328709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:15 compute-0 sudo[328709]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:15 compute-0 sudo[328734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:38:15 compute-0 sudo[328734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.58154974 +0000 UTC m=+0.063756960 container create a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:38:15 compute-0 systemd[1]: Started libpod-conmon-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope.
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.557042849 +0000 UTC m=+0.039250109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.668100823 +0000 UTC m=+0.150308053 container init a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.67577681 +0000 UTC m=+0.157984030 container start a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.679281358 +0000 UTC m=+0.161488568 container attach a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:38:15 compute-0 intelligent_hermann[328789]: 167 167
Feb 25 12:38:15 compute-0 systemd[1]: libpod-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope: Deactivated successfully.
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.681071659 +0000 UTC m=+0.163278879 container died a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-395e13ee95a79ef64faf036f94139b0562fb1805003b952ed7240620d8a82a3a-merged.mount: Deactivated successfully.
Feb 25 12:38:15 compute-0 podman[328773]: 2026-02-25 12:38:15.7317812 +0000 UTC m=+0.213988420 container remove a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:38:15 compute-0 systemd[1]: libpod-conmon-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope: Deactivated successfully.
Feb 25 12:38:15 compute-0 podman[328815]: 2026-02-25 12:38:15.915902566 +0000 UTC m=+0.056887477 container create 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:38:15 compute-0 systemd[1]: Started libpod-conmon-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope.
Feb 25 12:38:15 compute-0 podman[328815]: 2026-02-25 12:38:15.894354068 +0000 UTC m=+0.035339059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:38:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:16 compute-0 podman[328815]: 2026-02-25 12:38:16.022668989 +0000 UTC m=+0.163653970 container init 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:38:16 compute-0 podman[328815]: 2026-02-25 12:38:16.030919762 +0000 UTC m=+0.171904703 container start 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:38:16 compute-0 podman[328815]: 2026-02-25 12:38:16.034273176 +0000 UTC m=+0.175258187 container attach 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:38:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:16 compute-0 ceph-mon[76335]: pgmap v1700: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:16 compute-0 lvm[328908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:38:16 compute-0 lvm[328908]: VG ceph_vg0 finished
Feb 25 12:38:16 compute-0 lvm[328910]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:38:16 compute-0 lvm[328910]: VG ceph_vg1 finished
Feb 25 12:38:16 compute-0 lvm[328912]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:38:16 compute-0 lvm[328912]: VG ceph_vg2 finished
Feb 25 12:38:16 compute-0 romantic_noether[328831]: {}
Feb 25 12:38:16 compute-0 systemd[1]: libpod-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Deactivated successfully.
Feb 25 12:38:16 compute-0 podman[328815]: 2026-02-25 12:38:16.757972829 +0000 UTC m=+0.898957730 container died 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:38:16 compute-0 systemd[1]: libpod-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Consumed 1.005s CPU time.
Feb 25 12:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e-merged.mount: Deactivated successfully.
Feb 25 12:38:16 compute-0 podman[328815]: 2026-02-25 12:38:16.818210359 +0000 UTC m=+0.959195270 container remove 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:38:16 compute-0 systemd[1]: libpod-conmon-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Deactivated successfully.
Feb 25 12:38:16 compute-0 sudo[328734]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:38:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:38:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:16 compute-0 nova_compute[244014]: 2026-02-25 12:38:16.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:16 compute-0 nova_compute[244014]: 2026-02-25 12:38:16.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:38:16 compute-0 podman[328923]: 2026-02-25 12:38:16.926739782 +0000 UTC m=+0.115436359 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:38:16 compute-0 sudo[328952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:38:16 compute-0 sudo[328952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:38:16 compute-0 sudo[328952]: pam_unix(sudo:session): session closed for user root
Feb 25 12:38:16 compute-0 nova_compute[244014]: 2026-02-25 12:38:16.950 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:38:16 compute-0 nova_compute[244014]: 2026-02-25 12:38:16.951 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:38:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:18 compute-0 ceph-mon[76335]: pgmap v1701: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:18 compute-0 nova_compute[244014]: 2026-02-25 12:38:18.927 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:19 compute-0 nova_compute[244014]: 2026-02-25 12:38:19.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:20 compute-0 ceph-mon[76335]: pgmap v1702: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:38:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468064007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.467 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.661 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3766MB free_disk=59.987605430185795GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:38:20 compute-0 nova_compute[244014]: 2026-02-25 12:38:20.749 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3468064007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:38:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978281410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:21 compute-0 nova_compute[244014]: 2026-02-25 12:38:21.303 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:21 compute-0 nova_compute[244014]: 2026-02-25 12:38:21.312 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:38:21 compute-0 nova_compute[244014]: 2026-02-25 12:38:21.337 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:38:21 compute-0 nova_compute[244014]: 2026-02-25 12:38:21.374 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:38:21 compute-0 nova_compute[244014]: 2026-02-25 12:38:21.374 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
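The whole audit ran under the process-local "compute_resources" lock, acquired at 12:38:20.662 and held for 0.712s, so instance claims and the periodic audit never interleave. A minimal sketch of the same oslo.concurrency pattern (the function body is a stand-in, not nova's actual audit code):

    import time
    from oslo_concurrency import lockutils

    # Serialize everything that mutates the host's resource view on a
    # single named lock, as the resource tracker does above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        time.sleep(0.1)  # placeholder for recomputing the resource view

    update_available_resource()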
Feb 25 12:38:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/978281410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:22 compute-0 ceph-mon[76335]: pgmap v1703: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:22 compute-0 nova_compute[244014]: 2026-02-25 12:38:22.375 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:22 compute-0 nova_compute[244014]: 2026-02-25 12:38:22.376 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:22 compute-0 nova_compute[244014]: 2026-02-25 12:38:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:24 compute-0 ceph-mon[76335]: pgmap v1704: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:24 compute-0 nova_compute[244014]: 2026-02-25 12:38:24.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:24 compute-0 nova_compute[244014]: 2026-02-25 12:38:24.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:26 compute-0 ceph-mon[76335]: pgmap v1705: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:26 compute-0 nova_compute[244014]: 2026-02-25 12:38:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:38:26 compute-0 nova_compute[244014]: 2026-02-25 12:38:26.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:38:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:28 compute-0 ceph-mon[76335]: pgmap v1706: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:29 compute-0 nova_compute[244014]: 2026-02-25 12:38:29.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:29 compute-0 sshd-session[329023]: Invalid user lighthouse from 80.94.92.186 port 34956
Feb 25 12:38:29 compute-0 sshd-session[329023]: Connection closed by invalid user lighthouse 80.94.92.186 port 34956 [preauth]
Feb 25 12:38:29 compute-0 nova_compute[244014]: 2026-02-25 12:38:29.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:30 compute-0 ceph-mon[76335]: pgmap v1707: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.844 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:b7:46 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90eeb56b-9178-4d29-a92f-928e902be9b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=df10225b-1c1c-46d7-a344-56179f0d6b9b) old=Port_Binding(mac=['fa:16:3e:24:b7:46 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.848 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port df10225b-1c1c-46d7-a344-56179f0d6b9b in datapath 4896f114-8c32-45b5-8e19-88367b748d23 updated
Feb 25 12:38:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.850 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4896f114-8c32-45b5-8e19-88367b748d23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:38:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.851 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[647242fc-2b29-4872-b6e9-5854bb7b17aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
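The metadata agent reacts to southbound database changes through ovsdbapp row events; the "Matched UPDATE" line above is its Port_Binding matcher firing, after which the agent decides the network has no VIF ports left and tears the namespace down. A simplified stand-in for such an event class (registration with the IDL connection's notify handler is omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        """Fire on any update to the southbound Port_Binding table."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The real handler re-provisions (or tears down) the metadata
            # namespace for the row's datapath, as logged above.
            print('Port_Binding updated:', row.logical_port)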
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:38:30
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'vms', '.mgr']
Feb 25 12:38:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.672 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.673 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.689 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.795 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.796 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.811 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.811 244018 INFO nova.compute.claims [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Claim successful on node compute-0.ctlplane.example.com
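The claim succeeds comfortably: the requested flavor is m1.nano (1 vCPU, 128 MB RAM, 1 GB root disk, per the Flavor record logged at 12:38:40), and the resource view above shows nothing allocated yet. In claim-arithmetic terms:

    # Figures taken from the "Final resource view" and inventory lines
    # above and the m1.nano Flavor dump further down; illustrative only.
    assert 0 + 1     <= 8 * 4.0         # vCPUs: 1 requested of 32
    assert 512 + 128 <= 7679 * 1.0      # MiB RAM (512 MiB is reserved)
    assert 0 + 1     <= (59 - 1) * 0.9  # GiB disk: 1 requested of 52.2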
Feb 25 12:38:31 compute-0 nova_compute[244014]: 2026-02-25 12:38:31.923 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:38:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:38:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:32.238 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:32.243 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:38:32 compute-0 ceph-mon[76335]: pgmap v1708: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:38:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489997292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.458 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.466 244018 DEBUG nova.compute.provider_tree [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.514 244018 DEBUG nova.scheduler.client.report [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:38:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.872 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.985 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:32 compute-0 nova_compute[244014]: 2026-02-25 12:38:32.986 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.016 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.017 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.122 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.123 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.167 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.243 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.373 244018 DEBUG nova.policy [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04dc6c3292f14b8398bec7165759bd4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d5e5d163084460c88c8f594df149ff0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
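That policy miss is routine for a plain member token: nova probes network:attach_external_network during every boot and simply proceeds without external-network privileges when the role check fails. A minimal oslo.policy sketch of the same kind of check (the rule string here is illustrative, not nova's shipped default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # illustrative

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': '8d5e5d163084460c88c8f594df149ff0'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False, matching the DEBUG line above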
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.402 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.403 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.404 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating image(s)
Feb 25 12:38:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/489997292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.535 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.569 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.604 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.609 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.700 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
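Base-image inspection is deliberately sandboxed: qemu-img runs under oslo's prlimit wrapper with a 1 GiB address-space cap (--as=1073741824) and a 30s CPU cap, so a malformed qcow2 cannot wedge the compute agent. A sketch of the same call through processutils, assuming the base-image path from the log:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                        cpu_time=30)
    # prlimit=... is what produces the '-m oslo_concurrency.prlimit'
    # wrapper visible in the logged command line.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])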
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.701 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.702 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.702 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.726 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.730 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:33 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 12:38:33 compute-0 nova_compute[244014]: 2026-02-25 12:38:33.979 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.038 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] resizing rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
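Because the disk was absent from the vms pool (the three "does not exist" probes), nova imports the cached base file and then grows it to the flavor's root disk; 1073741824 bytes is exactly the 1 GiB root_gb of m1.nano. A sketch of the two steps, using the CLI flags from the log for the import and the python rbd binding for the resize, roughly as nova.storage.rbd_utils does:

    import rados
    import rbd
    from oslo_concurrency import processutils

    NAME = '61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk'
    BASE = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')

    # Import the flat base file as a format-2 RBD image, as logged above.
    processutils.execute('rbd', 'import', '--pool', 'vms', BASE, NAME,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')

    # Grow it to the flavor's root disk via librbd (size in bytes).
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, NAME) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824, matching the log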
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.128 244018 DEBUG nova.objects.instance [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'migration_context' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:38:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.299 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.299 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Ensure instance console log exists: /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.300 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.300 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.301 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.313 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Successfully created port: 3bb5f129-0f6a-4905-87a3-a81ccde523cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:38:34 compute-0 ceph-mon[76335]: pgmap v1709: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:34 compute-0 nova_compute[244014]: 2026-02-25 12:38:34.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:36 compute-0 ceph-mon[76335]: pgmap v1710: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.138 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Successfully updated port: 3bb5f129-0f6a-4905-87a3-a81ccde523cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:38:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:37.245 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.386 244018 DEBUG nova.compute.manager [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-changed-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG nova.compute.manager [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Refreshing instance network info cache due to event network-changed-3bb5f129-0f6a-4905-87a3-a81ccde523cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.388 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Refreshing network info cache for port 3bb5f129-0f6a-4905-87a3-a81ccde523cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.391 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:38:37 compute-0 nova_compute[244014]: 2026-02-25 12:38:37.707 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:38:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:38 compute-0 ceph-mon[76335]: pgmap v1711: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:38 compute-0 nova_compute[244014]: 2026-02-25 12:38:38.277 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:38:38 compute-0 nova_compute[244014]: 2026-02-25 12:38:38.294 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:38:38 compute-0 nova_compute[244014]: 2026-02-25 12:38:38.296 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquired lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:38:38 compute-0 nova_compute[244014]: 2026-02-25 12:38:38.296 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:38:38 compute-0 nova_compute[244014]: 2026-02-25 12:38:38.568 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:38:39 compute-0 nova_compute[244014]: 2026-02-25 12:38:39.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:39 compute-0 nova_compute[244014]: 2026-02-25 12:38:39.987 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:40 compute-0 ceph-mon[76335]: pgmap v1712: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.355 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.383 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Releasing lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.383 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance network_info: |[{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
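The cached network_info is plain JSON, so everything the driver needs next (fixed IP, MTU, tap device name) is a few key lookups deep. Against a trimmed copy of the payload logged above:

    network_info = [{
        'id': '3bb5f129-0f6a-4905-87a3-a81ccde523cd',
        'devname': 'tap3bb5f129-0f',
        'network': {'meta': {'mtu': 1442},
                    'subnets': [{'ips': [{'address': '10.100.0.13'}]}]},
    }]  # trimmed from the log line above
    port = network_info[0]
    print(port['network']['subnets'][0]['ips'][0]['address'],  # 10.100.0.13
          port['network']['meta']['mtu'],                      # 1442
          port['devname'])                                     # tap3bb5f129-0f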
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.385 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start _get_guest_xml network_info=[{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.389 244018 WARNING nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.394 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.394 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.398 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.399 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.399 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.402 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.402 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
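With every flavor and image limit unset (the 0:0:0 lines above), the only topology that covers a single vCPU is 1 socket x 1 core x 1 thread, which is what gets sorted to the front. A toy enumerator in the spirit of nova's _get_possible_cpu_topologies (the real code also honours maxima and preferences):

    import itertools

    def possible_topologies(vcpus):
        """Toy search: every (sockets, cores, threads) triple whose
        product equals the vCPU count."""
        r = range(1, vcpus + 1)
        for sockets, cores, threads in itertools.product(r, r, r):
            if sockets * cores * threads == vcpus:
                yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged above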
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.404 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:38:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751624086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:38:40 compute-0 nova_compute[244014]: 2026-02-25 12:38:40.994 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.015 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.018 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3751624086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:38:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:38:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927167528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.592 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
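The two "ceph mon dump" calls feed guest-XML generation: nova extracts the monitor addresses so the RBD disk and config-drive sources can list them as <host> elements. A sketch of pulling them out of the JSON, assuming current monmap field names ('mons' entries with an 'addr' like '192.168.122.100:6789/0'):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    monmap = json.loads(out)
    hosts = [m['addr'].rsplit('/', 1)[0] for m in monmap['mons']]
    print(hosts)  # e.g. ['192.168.122.100:6789']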
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.595 244018 DEBUG nova.virt.libvirt.vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:38:33Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.596 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.598 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
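
The two conversion lines above show nova handing its legacy VIF dict to os-vif as a VIFOpenVSwitch object. Two details of the dict are worth decoding: the MTU of 1442 is the standard 1500-byte underlay minus the 58 bytes of Geneve/outer-header overhead OVN adds ("tunneled": true), and the devname is derived from the Neutron port UUID. A minimal sketch of that naming convention, checked against the values in this log (the helper name is hypothetical, not nova's actual code):

    def tap_name(port_id: str, prefix: str = "tap", limit: int = 14) -> str:
        # Linux interface names are capped at 15 usable bytes; nova keeps
        # the "tap" prefix plus the first 11 characters of the port UUID.
        return (prefix + port_id)[:limit]

    assert tap_name("3bb5f129-0f6a-4905-87a3-a81ccde523cd") == "tap3bb5f129-0f"
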
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.600 244018 DEBUG nova.objects.instance [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.653 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <uuid>61e87fb9-d3ba-4d0b-ae95-86b4272980cc</uuid>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <name>instance-0000005f</name>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerGroupTestJSON-server-1119160360</nova:name>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:38:40</nova:creationTime>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:user uuid="04dc6c3292f14b8398bec7165759bd4b">tempest-ServerGroupTestJSON-45430372-project-member</nova:user>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:project uuid="8d5e5d163084460c88c8f594df149ff0">tempest-ServerGroupTestJSON-45430372</nova:project>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <nova:port uuid="3bb5f129-0f6a-4905-87a3-a81ccde523cd">
Feb 25 12:38:41 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <system>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="serial">61e87fb9-d3ba-4d0b-ae95-86b4272980cc</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="uuid">61e87fb9-d3ba-4d0b-ae95-86b4272980cc</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </system>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <os>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </os>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <features>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </features>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk">
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config">
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </source>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:38:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:6a:a7:16"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <target dev="tap3bb5f129-0f"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/console.log" append="off"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <video>
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </video>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:38:41 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:38:41 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:38:41 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:38:41 compute-0 nova_compute[244014]: </domain>
Feb 25 12:38:41 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
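
The dumped domain XML is consistent with the instance record logged earlier: <memory>131072</memory> is in KiB, i.e. the flavor's 128 MiB; both the root disk and the config-drive cdrom are RBD network disks against the monitor at 192.168.122.100:6789 sharing one ceph secret; and the q35 machine type from the image property brings the long run of hot-pluggable pcie-root-port controllers. A standalone sketch (not nova code) that sanity-checks such a dump, assuming it was saved to guest.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    assert int(root.findtext("memory")) == 128 * 1024   # KiB, flavor m1.nano
    for disk in root.iter("disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            # e.g. vms/61e87fb9-..._disk -> 192.168.122.100
            print(src.get("name"), "->", src.find("host").get("name"))
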
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.655 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Preparing to wait for external event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.656 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.656 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.657 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
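
Before plugging the VIF, nova registers a waiter for the external event network-vif-plugged-<port-id>; spawn will block on it until Neutron reports the port up (the event arrives at the end of this excerpt). The acquire/release pair around _create_or_get_event is oslo.concurrency's named-lock helper, which in isolation looks like this (a minimal sketch; the lock name below is just this instance's event scope):

    from oslo_concurrency import lockutils

    with lockutils.lock("61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events"):
        # mutate the shared per-instance event table under the named lock
        ...
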
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.658 244018 DEBUG nova.virt.libvirt.vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:38:33Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.659 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.660 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.660 244018 DEBUG os_vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.662 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.663 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.668 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bb5f129-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.669 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bb5f129-0f, col_values=(('external_ids', {'iface-id': '3bb5f129-0f6a-4905-87a3-a81ccde523cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:a7:16', 'vm-uuid': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:41 compute-0 NetworkManager[49836]: <info>  [1772023121.6728] manager: (tap3bb5f129-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.680 244018 INFO os_vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f')
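
The plug itself is just the two OVSDB transactions above: AddPortCommand puts tap3bb5f129-0f on br-int, and DbSetCommand writes the external_ids that let ovn-controller match the OVS interface to its logical port a moment later. A rough ovs-vsctl equivalent, sketched for illustration (os-vif drives OVSDB directly rather than shelling out):

    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap3bb5f129-0f",
        "--", "set", "Interface", "tap3bb5f129-0f",
        "external_ids:iface-id=3bb5f129-0f6a-4905-87a3-a81ccde523cd",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:6a:a7:16",
        "external_ids:vm-uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc",
    ], check=True)
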
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.809 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.810 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.810 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No VIF found with MAC fa:16:3e:6a:a7:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.811 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Using config drive
Feb 25 12:38:41 compute-0 nova_compute[244014]: 2026-02-25 12:38:41.846 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2927167528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:38:42 compute-0 ceph-mon[76335]: pgmap v1713: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.379 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating config drive at /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.383 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyxqlrfkv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
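
The config drive is a plain ISO 9660 image built with mkisofs from a temporary metadata tree and labeled config-2, the volume label cloud-init probes for. Note that the flattened log line loses the quoting: -publisher takes the whole "OpenStack Compute 27.5.2-..." string as a single argument. Re-expressed as a sketch (/tmp/cd-tree stands in for nova's temporary directory):

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/cd-tree",
    ], check=True)
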
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035719931447786177 of space, bias 1.0, pg target 0.10715979434335852 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937167797671124 of space, bias 1.0, pg target 0.7481150339301337 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.626072165167579e-07 of space, bias 4.0, pg target 0.0009151286598201094 quantized to 16 (current 16)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:38:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
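
The pg_autoscaler lines are internally consistent: each pool's raw pg target is its capacity ratio times its bias times the root's PG budget (OSD count times mon_target_pg_per_osd, evidently 300 here), then quantized to a power of two and floored at the pool minimum. A quick check against two of the pools above:

    # pool 'vms': ratio * bias * budget ~= the logged pg target 0.10716
    print(0.00035719931447786177 * 1.0 * 300)
    # pool 'cephfs.cephfs.meta' carries bias 4.0 -> logged target ~0.00091513
    print(7.626072165167579e-07 * 4.0 * 300)
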
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.536 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyxqlrfkv" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.576 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.581 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.739 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.741 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deleting local config drive /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config because it was imported into RBD.
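
With RBD-backed instances the config drive never stays on local disk: it is imported as vms/<uuid>_disk.config and attached as the SATA cdrom declared in the domain XML above, after which the local copy is deleted. The import can be verified with the ceph python bindings (a sketch, using the same openstack keyring as the log):

    import rados, rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    # expect 61e87fb9-..._disk and 61e87fb9-..._disk.config in the listing
    print(rbd.RBD().list(ioctx))
    ioctx.close()
    cluster.shutdown()
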
Feb 25 12:38:42 compute-0 kernel: tap3bb5f129-0f: entered promiscuous mode
Feb 25 12:38:42 compute-0 ovn_controller[147040]: 2026-02-25T12:38:42Z|00939|binding|INFO|Claiming lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd for this chassis.
Feb 25 12:38:42 compute-0 ovn_controller[147040]: 2026-02-25T12:38:42Z|00940|binding|INFO|3bb5f129-0f6a-4905-87a3-a81ccde523cd: Claiming fa:16:3e:6a:a7:16 10.100.0.13
Feb 25 12:38:42 compute-0 NetworkManager[49836]: <info>  [1772023122.7985] manager: (tap3bb5f129-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.819 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.821 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e bound to our chassis
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.823 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41119909-cf1c-417e-b52e-40b8a4b9716e
Feb 25 12:38:42 compute-0 systemd-udevd[329348]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.834 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[162f0066-3bf8-4b0d-9e62-7e67e645c3ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.835 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41119909-c1 in ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:38:42 compute-0 systemd-machined[210048]: New machine qemu-122-instance-0000005f.
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.838 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41119909-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.838 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bef7f52-3573-4ab0-bfe9-42870c221c80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.839 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19997dc3-b321-4ebf-9cb3-fd5c62e0c6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:42 compute-0 NetworkManager[49836]: <info>  [1772023122.8460] device (tap3bb5f129-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:38:42 compute-0 NetworkManager[49836]: <info>  [1772023122.8471] device (tap3bb5f129-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.850 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[155bfc39-b0d8-43e7-8346-1699099b8929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-0000005f.
Feb 25 12:38:42 compute-0 ovn_controller[147040]: 2026-02-25T12:38:42Z|00941|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd ovn-installed in OVS
Feb 25 12:38:42 compute-0 ovn_controller[147040]: 2026-02-25T12:38:42Z|00942|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd up in Southbound
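
This is the OVN half of the handshake nova is waiting on: ovn-controller claims the lport for this chassis, stamps ovn-installed on the OVS interface, and flips the Port_Binding up in the Southbound DB, which Neutron then translates into the network-vif-plugged event. The binding state can be inspected directly (a sketch; requires access to the OVN Southbound DB):

    import subprocess

    print(subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding", "logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd"],
        check=True, capture_output=True, text=True,
    ).stdout)
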
Feb 25 12:38:42 compute-0 nova_compute[244014]: 2026-02-25 12:38:42.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.876 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70abf00f-28e6-46d2-96e7-b059e4b14f04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.908 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c31bbc-8cb2-466a-a4c7-829d48e97026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 systemd-udevd[329352]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.916 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1b0bd3-3440-4ded-99ef-effe4b767bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 NetworkManager[49836]: <info>  [1772023122.9186] manager: (tap41119909-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.950 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ef907e53-e73d-4d62-aa92-cabb11f65c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.955 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86014866-9e4c-493c-bc5f-3309cbab350b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:42 compute-0 NetworkManager[49836]: <info>  [1772023122.9782] device (tap41119909-c0): carrier: link connected
Feb 25 12:38:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.981 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[939550ad-5654-47e0-b110-8f7ab408bed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b45d93c7-9940-4bb9-a791-4954736b2243]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41119909-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:37:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509238, 'reachable_time': 27991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329381, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a070214-13aa-4dc2-a1c3-aac99c3ed78b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:3757'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509238, 'tstamp': 509238}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329382, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04a9fd87-0edb-457d-b557-d04ed3c1c580]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41119909-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:37:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509238, 'reachable_time': 27991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329383, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16d56e91-1b39-4e0e-b79d-06c2b876b344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.113 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a53c9f-d5b5-4c95-a862-1392f9b960a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41119909-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41119909-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:43 compute-0 NetworkManager[49836]: <info>  [1772023123.1179] manager: (tap41119909-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Feb 25 12:38:43 compute-0 kernel: tap41119909-c0: entered promiscuous mode
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.120 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41119909-c0, col_values=(('external_ids', {'iface-id': 'bce4452d-fc90-4131-bb29-426776368685'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:43 compute-0 ovn_controller[147040]: 2026-02-25T12:38:43Z|00943|binding|INFO|Releasing lport bce4452d-fc90-4131-bb29-426776368685 from this chassis (sb_readonly=0)
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.126 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21297fe4-5135-43b0-b02e-f912ed8c9e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.128 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-41119909-cf1c-417e-b52e-40b8a4b9716e
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 41119909-cf1c-417e-b52e-40b8a4b9716e
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:38:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.129 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'env', 'PROCESS_TAG=haproxy-41119909-cf1c-417e-b52e-40b8a4b9716e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41119909-cf1c-417e-b52e-40b8a4b9716e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.184 244018 DEBUG nova.compute.manager [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.185 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.186 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.186 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.187 244018 DEBUG nova.compute.manager [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Processing event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:38:43 compute-0 podman[329415]: 2026-02-25 12:38:43.536303093 +0000 UTC m=+0.073133205 container create bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:38:43 compute-0 systemd[1]: Started libpod-conmon-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope.
Feb 25 12:38:43 compute-0 podman[329415]: 2026-02-25 12:38:43.496569952 +0000 UTC m=+0.033400084 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:38:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26b6ba428616ece1f36be1800c17c0da2392d8e06e17dc1967c2979f3495730/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:38:43 compute-0 podman[329415]: 2026-02-25 12:38:43.627865617 +0000 UTC m=+0.164695719 container init bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:38:43 compute-0 podman[329415]: 2026-02-25 12:38:43.636508681 +0000 UTC m=+0.173338793 container start bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:38:43 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : New worker (329477) forked
Feb 25 12:38:43 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : Loading success.
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.698 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.698035, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.699 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Started (Lifecycle Event)
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.701 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.705 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.709 244018 INFO nova.virt.libvirt.driver [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance spawned successfully.
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.709 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.718 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.722 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.751 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.751 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.752 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.752 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.753 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.753 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.6981664, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Paused (Lifecycle Event)
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.820 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.7040687, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.820 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Resumed (Lifecycle Event)
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.850 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.894 244018 INFO nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 10.49 seconds to spawn the instance on the hypervisor.
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.894 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.897 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:38:43 compute-0 nova_compute[244014]: 2026-02-25 12:38:43.987 244018 INFO nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 12.25 seconds to build instance.
Feb 25 12:38:44 compute-0 nova_compute[244014]: 2026-02-25 12:38:44.046 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:38:44 compute-0 ceph-mon[76335]: pgmap v1714: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:38:44 compute-0 nova_compute[244014]: 2026-02-25 12:38:44.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.376 244018 DEBUG nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.377 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.378 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.379 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.379 244018 DEBUG nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:45 compute-0 nova_compute[244014]: 2026-02-25 12:38:45.380 244018 WARNING nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state None.
Feb 25 12:38:45 compute-0 podman[329487]: 2026-02-25 12:38:45.712134985 +0000 UTC m=+0.058775530 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.111 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.112 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.113 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.113 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.114 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.116 244018 INFO nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Terminating instance
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.117 244018 DEBUG nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:38:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:38:46 compute-0 kernel: tap3bb5f129-0f (unregistering): left promiscuous mode
Feb 25 12:38:46 compute-0 NetworkManager[49836]: <info>  [1772023126.1643] device (tap3bb5f129-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00944|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=0)
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00945|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down in Southbound
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00946|binding|INFO|Removing iface tap3bb5f129-0f ovn-installed in OVS
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.183 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.189 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.191 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.193 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.195 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b12c9861-1af7-4090-b9c7-72a5596d1c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.196 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e namespace which is not needed anymore
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 25 12:38:46 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Consumed 3.378s CPU time.
Feb 25 12:38:46 compute-0 systemd-machined[210048]: Machine qemu-122-instance-0000005f terminated.
Feb 25 12:38:46 compute-0 ceph-mon[76335]: pgmap v1715: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:38:46 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : haproxy version is 2.8.14-c23fe91
Feb 25 12:38:46 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : path to executable is /usr/sbin/haproxy
Feb 25 12:38:46 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [WARNING]  (329475) : Exiting Master process...
Feb 25 12:38:46 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [ALERT]    (329475) : Current worker (329477) exited with code 143 (Terminated)
Feb 25 12:38:46 compute-0 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [WARNING]  (329475) : All workers exited. Exiting... (0)
Feb 25 12:38:46 compute-0 systemd[1]: libpod-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope: Deactivated successfully.
Feb 25 12:38:46 compute-0 podman[329532]: 2026-02-25 12:38:46.314962006 +0000 UTC m=+0.044247889 container died bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:38:46 compute-0 kernel: tap3bb5f129-0f: entered promiscuous mode
Feb 25 12:38:46 compute-0 systemd-udevd[329511]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:38:46 compute-0 NetworkManager[49836]: <info>  [1772023126.3400] manager: (tap3bb5f129-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00947|binding|INFO|Claiming lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd for this chassis.
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00948|binding|INFO|3bb5f129-0f6a-4905-87a3-a81ccde523cd: Claiming fa:16:3e:6a:a7:16 10.100.0.13
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1-userdata-shm.mount: Deactivated successfully.
Feb 25 12:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a26b6ba428616ece1f36be1800c17c0da2392d8e06e17dc1967c2979f3495730-merged.mount: Deactivated successfully.
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.353 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:46 compute-0 kernel: tap3bb5f129-0f (unregistering): left promiscuous mode
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: hostname: compute-0
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00949|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd ovn-installed in OVS
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00950|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd up in Southbound
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00951|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=1)
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00952|if_status|INFO|Dropped 2 log messages in last 328 seconds (most recently, 328 seconds ago) due to excessive rate
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00953|if_status|INFO|Not setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down as sb is readonly
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00954|binding|INFO|Removing iface tap3bb5f129-0f ovn-installed in OVS
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00955|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=0)
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 ovn_controller[147040]: 2026-02-25T12:38:46Z|00956|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down in Southbound
Feb 25 12:38:46 compute-0 podman[329532]: 2026-02-25 12:38:46.370055701 +0000 UTC m=+0.099341634 container cleanup bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.373 244018 INFO nova.virt.libvirt.driver [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance destroyed successfully.
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.373 244018 DEBUG nova.objects.instance [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'resources' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.376 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:38:46 compute-0 systemd[1]: libpod-conmon-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope: Deactivated successfully.
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.387 244018 DEBUG nova.virt.libvirt.vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:38:43Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.388 244018 DEBUG nova.network.os_vif_util [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.389 244018 DEBUG nova.network.os_vif_util [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.390 244018 DEBUG os_vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bb5f129-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.398 244018 INFO os_vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f')
Feb 25 12:38:46 compute-0 podman[329572]: 2026-02-25 12:38:46.430485116 +0000 UTC m=+0.036949413 container remove bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.436 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6d913a-b9ba-4a16-8626-a14cb15b1e9a]: (4, ('Wed Feb 25 12:38:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e (bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1)\nbfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1\nWed Feb 25 12:38:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e (bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1)\nbfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.437 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb40c4b-a642-484f-a0f3-bc10267343ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.439 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41119909-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 kernel: tap41119909-c0: left promiscuous mode
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.453 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2fe8b1-7189-49c5-99c4-ec5f3ce34fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75e1d72c-0043-47a7-9eb5-2574695061e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd49ffe7-283c-4e6c-b7e3-8c4c5f3002e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb63cba-692f-44f1-8e81-1513cd31be4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509231, 'reachable_time': 20922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329614, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.484 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.484 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f15c8c-8c8a-4660-b763-4369ad56d720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.485 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis
Feb 25 12:38:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d41119909\x2dcf1c\x2d417e\x2db52e\x2d40b8a4b9716e.mount: Deactivated successfully.
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.486 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacf3e5a-901e-4e83-a157-0410ae235334]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.487 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.488 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:38:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.489 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b3fed2-786f-4e15-87dd-ef795859e852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.654 244018 INFO nova.virt.libvirt.driver [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deleting instance files /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_del
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.655 244018 INFO nova.virt.libvirt.driver [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deletion of /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_del complete
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.733 244018 INFO nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.735 244018 DEBUG oslo.service.loopingcall [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.736 244018 DEBUG nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:38:46 compute-0 nova_compute[244014]: 2026-02-25 12:38:46.736 244018 DEBUG nova.network.neutron [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:38:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:38:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:38:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:38:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.523 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.524 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.525 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.525 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.526 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.526 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.527 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.527 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.528 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.530 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.530 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.531 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.531 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.533 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.534 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.534 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.535 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.535 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.537 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.537 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.538 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:38:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:38:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:38:47 compute-0 podman[329616]: 2026-02-25 12:38:47.778343991 +0000 UTC m=+0.115722117 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 25 12:38:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.876 244018 DEBUG nova.network.neutron [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.893 244018 INFO nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 1.16 seconds to deallocate network for instance.
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.935 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.936 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:47 compute-0 nova_compute[244014]: 2026-02-25 12:38:47.994 244018 DEBUG oslo_concurrency.processutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:38:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 12:38:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:38:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650875818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:48 compute-0 ceph-mon[76335]: pgmap v1716: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 12:38:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1650875818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.569 244018 DEBUG oslo_concurrency.processutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.578 244018 DEBUG nova.compute.provider_tree [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.601 244018 DEBUG nova.scheduler.client.report [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.625 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.674 244018 INFO nova.scheduler.client.report [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Deleted allocations for instance 61e87fb9-d3ba-4d0b-ae95-86b4272980cc
Feb 25 12:38:48 compute-0 nova_compute[244014]: 2026-02-25 12:38:48.762 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.766 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.768 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.769 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.769 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.770 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.770 244018 WARNING nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state deleted and task_state None.
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.771 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-deleted-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:38:49 compute-0 nova_compute[244014]: 2026-02-25 12:38:49.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 98 op/s
Feb 25 12:38:50 compute-0 ceph-mon[76335]: pgmap v1717: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 98 op/s
Feb 25 12:38:51 compute-0 nova_compute[244014]: 2026-02-25 12:38:51.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:38:52 compute-0 ceph-mon[76335]: pgmap v1718: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:38:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:38:54 compute-0 ceph-mon[76335]: pgmap v1719: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:38:54 compute-0 nova_compute[244014]: 2026-02-25 12:38:54.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:54 compute-0 nova_compute[244014]: 2026-02-25 12:38:54.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:38:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 12:38:56 compute-0 ceph-mon[76335]: pgmap v1720: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 12:38:56 compute-0 nova_compute[244014]: 2026-02-25 12:38:56.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:38:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:38:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 12:38:58 compute-0 ceph-mon[76335]: pgmap v1721: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 12:38:59 compute-0 nova_compute[244014]: 2026-02-25 12:38:59.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 12:39:00 compute-0 ceph-mon[76335]: pgmap v1722: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 12:39:01 compute-0 nova_compute[244014]: 2026-02-25 12:39:01.370 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023126.36903, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:01 compute-0 nova_compute[244014]: 2026-02-25 12:39:01.371 244018 INFO nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Stopped (Lifecycle Event)
Feb 25 12:39:01 compute-0 nova_compute[244014]: 2026-02-25 12:39:01.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:01 compute-0 nova_compute[244014]: 2026-02-25 12:39:01.432 244018 DEBUG nova.compute.manager [None req-53d0a97e-9808-44b2-b575-ae6288ec19e4 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.823 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81cc8b5c-087a-44ef-8773-89172e384531, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c863bed3-e120-4e62-b6cf-b3bdec2b0cb1) old=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.825 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c863bed3-e120-4e62-b6cf-b3bdec2b0cb1 in datapath e5a58f3f-f1c5-482b-ba54-7b7acab1971b updated
Feb 25 12:39:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.826 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5a58f3f-f1c5-482b-ba54-7b7acab1971b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:39:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[159aebb3-ea44-453f-bea7-7ee41699b713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 12:39:02 compute-0 ceph-mon[76335]: pgmap v1723: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 12:39:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:04 compute-0 ceph-mon[76335]: pgmap v1724: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:04 compute-0 nova_compute[244014]: 2026-02-25 12:39:04.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:06 compute-0 ceph-mon[76335]: pgmap v1725: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:06 compute-0 nova_compute[244014]: 2026-02-25 12:39:06.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:08 compute-0 ceph-mon[76335]: pgmap v1726: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.081 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.082 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.099 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.202 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.203 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.216 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.216 244018 INFO nova.compute.claims [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.336 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.592 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81cc8b5c-087a-44ef-8773-89172e384531, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c863bed3-e120-4e62-b6cf-b3bdec2b0cb1) old=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.593 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c863bed3-e120-4e62-b6cf-b3bdec2b0cb1 in datapath e5a58f3f-f1c5-482b-ba54-7b7acab1971b updated
Feb 25 12:39:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.594 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5a58f3f-f1c5-482b-ba54-7b7acab1971b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:39:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c0192-5d00-4842-882c-58177e8f7ace]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757355745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.960 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:09 compute-0 nova_compute[244014]: 2026-02-25 12:39:09.967 244018 DEBUG nova.compute.provider_tree [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:39:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3757355745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.001 244018 DEBUG nova.scheduler.client.report [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.032 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.033 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.103 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.104 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.128 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.152 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:39:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.285 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.288 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.288 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating image(s)
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.320 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.344 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.367 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.371 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.404 244018 DEBUG nova.policy [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0c0513e28db44d81aa02994a8543b844', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b2470f8bb56548cd818113da683292d8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
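
The policy line above records oslo.policy denying network:attach_external_network for a token carrying only the reader and member roles. A minimal standalone sketch of that evaluation, assuming a hypothetical admin-only rule body (the real rule text lives in the deployed policy files and may differ):

    # Minimal oslo.policy check mirroring the denial logged above.
    # The rule body 'role:admin' is a hypothetical stand-in for the real policy.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': 'b2470f8bb56548cd818113da683292d8'}  # from the log
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
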
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.457 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
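
The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a corrupt or hostile image cannot hang the compute service. A sketch of the same call through the real processutils API (image path copied from the log):

    # Bound an image probe the way the logged command does: prlimit caps
    # the child's address space and CPU time.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)
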
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.457 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.458 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.459 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
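
The acquire/release pair above (held 0.000s, since the base image is already cached) is oslo.concurrency's lockutils serializing base-image fetches on the image hash, so concurrent builds of the same image download it only once. A minimal sketch of the same locking primitive, with a placeholder critical section:

    # Serialize on the base-image hash, as the logged lock name does.
    from oslo_concurrency import lockutils

    IMAGE_HASH = 'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'  # from the log

    with lockutils.lock(IMAGE_HASH):
        # placeholder: fetch the image only if the cache file is missing
        pass
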
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.488 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.492 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e25f3aed-d038-4b27-a15f-9beada2b67b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.812 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e25f3aed-d038-4b27-a15f-9beada2b67b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.880 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] resizing rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
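
The two operations above import the cached base image into the Ceph vms pool and then grow it to the flavor's 1 GiB root size (1073741824 bytes). A sketch of the resize step with the python rados/rbd bindings, assuming the client.openstack credentials shown in the log are readable by the caller:

    # Resize the freshly imported RBD image to the flavor root size.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # pool from the logged command
        try:
            with rbd.Image(ioctx, 'e25f3aed-d038-4b27-a15f-9beada2b67b0_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching the logged resize
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
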
Feb 25 12:39:10 compute-0 nova_compute[244014]: 2026-02-25 12:39:10.989 244018 DEBUG nova.objects.instance [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'migration_context' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:11 compute-0 ceph-mon[76335]: pgmap v1727: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.004 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Ensure instance console log exists: /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.006 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.158 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Successfully created port: 14cc64dc-7418-4171-b1f4-1e19bed07489 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:39:11 compute-0 nova_compute[244014]: 2026-02-25 12:39:11.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 181 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 978 KiB/s wr, 25 op/s
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.209 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Successfully updated port: 14cc64dc-7418-4171-b1f4-1e19bed07489 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:39:12 compute-0 ceph-mon[76335]: pgmap v1728: 305 pgs: 305 active+clean; 181 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 978 KiB/s wr, 25 op/s
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.544 244018 DEBUG nova.compute.manager [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-changed-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.544 244018 DEBUG nova.compute.manager [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Refreshing instance network info cache due to event network-changed-14cc64dc-7418-4171-b1f4-1e19bed07489. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.545 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:39:12 compute-0 nova_compute[244014]: 2026-02-25 12:39:12.602 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
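
Note the lock choreography here: the build thread holds refresh_cache-<uuid> while it populates the cache, so the network-changed event handler (req-c45b7d69) queues behind it; its acquire only completes at 12:39:14.281, just after the build thread releases at 12:39:14.279 below. A sketch of the shared primitive, with the handler body left as a placeholder:

    # Both code paths serialize on the same per-instance cache lock.
    from oslo_concurrency import lockutils

    INSTANCE = 'e25f3aed-d038-4b27-a15f-9beada2b67b0'  # from the log

    def refresh_network_cache():
        with lockutils.lock('refresh_cache-' + INSTANCE):
            # placeholder: re-query Neutron and rewrite instance_info_cache
            pass
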
Feb 25 12:39:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:39:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8126 writes, 36K keys, 8126 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8126 writes, 8126 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1489 writes, 6691 keys, 1489 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s
                                           Interval WAL: 1489 writes, 1489 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     47.8      0.86              0.12        21    0.041       0      0       0.0       0.0
                                             L6      1/0    9.16 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7    101.2     83.4      1.81              0.42        20    0.090    102K    11K       0.0       0.0
                                            Sum      1/0    9.16 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     68.5     71.9      2.67              0.54        41    0.065    102K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.9     38.6     39.4      1.28              0.15        10    0.128     32K   3075       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    101.2     83.4      1.81              0.42        20    0.090    102K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.0      0.86              0.12        20    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.040, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 2.7 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 22.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000402 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1442,21.84 MB,7.18383%) FilterBlock(42,293.36 KB,0.0942381%) IndexBlock(42,513.19 KB,0.164855%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
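
The cumulative counters in the dump above allow a quick write-amplification cross-check: flushed bytes plus compaction writes, divided by WAL ingest, gives roughly the 4.7 shown in the Sum row:

    # Rough write-amplification check from the counters dumped above.
    ingest_gb = 0.05            # "Cumulative writes: ... ingest: 0.05 GB"
    flush_gb = 0.040            # "Flush(GB): cumulative 0.040"
    compaction_write_gb = 0.19  # "Cumulative compaction: 0.19 GB write"

    w_amp = (flush_gb + compaction_write_gb) / ingest_gb
    print(f"approximate write amplification: {w_amp:.1f}")  # ~4.6 vs 4.7 in Sum
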
Feb 25 12:39:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.259 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
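
The network_info blob above is ordinary JSON once the log framing is stripped; the fields that matter downstream (tap device name, fixed IP, MTU) are easy to pull out. A sketch over a trimmed excerpt of the logged payload:

    # Extract the device name, fixed IP and MTU from a trimmed copy of
    # the logged network_info entry (full payload omitted for brevity).
    import json

    network_info = json.loads('''[{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489",
      "devname": "tap14cc64dc-74",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.12"}]}],
                  "meta": {"mtu": 1442}}}]''')

    vif = network_info[0]
    print(vif['devname'],
          vif['network']['subnets'][0]['ips'][0]['address'],
          vif['network']['meta']['mtu'])  # tap14cc64dc-74 10.100.0.12 1442
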
Feb 25 12:39:14 compute-0 ceph-mon[76335]: pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.279 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.280 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance network_info: |[{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.281 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.281 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Refreshing network info cache for port 14cc64dc-7418-4171-b1f4-1e19bed07489 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.286 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start _get_guest_xml network_info=[{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.293 244018 WARNING nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.300 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.300 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.303 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.303 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.304 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.304 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.305 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.305 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.306 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.306 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.308 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.309 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.309 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
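
The flavor supplies no explicit topology (all preferences log as 0, meaning unconstrained), so the search reduces to factoring vcpus=1 under per-dimension maxima of 65536, and 1:1:1 is the only candidate. A sketch of that enumeration (not Nova's own code, just the shape of the search logged above):

    # Enumerate (sockets, cores, threads) splits of a vCPU count under
    # per-dimension maxima; for vcpus=1 only (1, 1, 1) qualifies.
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
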
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.314 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171999011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.889 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.920 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:14 compute-0 nova_compute[244014]: 2026-02-25 12:39:14.925 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1171999011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534650665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.493 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
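
These mon dump round-trips are how the driver learns the monitor endpoints that reappear as <host> elements in the disk XML below. A sketch of parsing a minimal mon-dump payload (field names follow `ceph mon dump --format=json`; the single-monitor value is taken from this log):

    # Pull monitor host:port pairs out of a minimal mon-dump document.
    import json

    mon_dump = json.loads('''{"epoch": 1, "mons": [
        {"name": "compute-0", "addr": "192.168.122.100:6789/0"}]}''')

    hosts = [m["addr"].split("/")[0] for m in mon_dump["mons"]]
    print(hosts)  # ['192.168.122.100:6789']
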
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.496 244018 DEBUG nova.virt.libvirt.vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:10Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.497 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.499 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.501 244018 DEBUG nova.objects.instance [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.538 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <uuid>e25f3aed-d038-4b27-a15f-9beada2b67b0</uuid>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <name>instance-00000060</name>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-99198811</nova:name>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:39:14</nova:creationTime>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:user uuid="0c0513e28db44d81aa02994a8543b844">tempest-ServerMetadataNegativeTestJSON-1634701035-project-member</nova:user>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:project uuid="b2470f8bb56548cd818113da683292d8">tempest-ServerMetadataNegativeTestJSON-1634701035</nova:project>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <nova:port uuid="14cc64dc-7418-4171-b1f4-1e19bed07489">
Feb 25 12:39:15 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="serial">e25f3aed-d038-4b27-a15f-9beada2b67b0</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="uuid">e25f3aed-d038-4b27-a15f-9beada2b67b0</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk">
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config">
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:60:e2:2c"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <target dev="tap14cc64dc-74"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/console.log" append="off"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:39:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:39:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:39:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:39:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:39:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
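
The generated domain ties the earlier steps together: the RBD source names the image imported at 12:39:10, the <host> element carries the monitor address from the mon dump, and the tap device matches the Neutron port. A static parse of the dumped XML with the standard library recovers those bindings (excerpt trimmed to one disk):

    # Recover disk bindings from an excerpt of the domain XML above.
    import xml.etree.ElementTree as ET

    xml_doc = '''<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk">
          <host name="192.168.122.100" port="6789"/>
        </source>
        <target dev="vda" bus="virtio"/>
      </disk>
    </devices></domain>'''

    root = ET.fromstring(xml_doc)
    for disk in root.iter('disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(tgt.get('dev'), src.get('protocol'), src.get('name'))
    # vda rbd vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk
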
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.540 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Preparing to wait for external event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.541 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.541 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.542 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
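
The waiter for network-vif-plugged is registered here, before the VIF is plugged below, so Neutron's notification cannot slip through the gap between plugging and waiting. A sketch of that register-then-act ordering (conceptual, not Nova's implementation):

    # Register the waiter first, then act; a fast event cannot be lost.
    import threading

    _events, _lock = {}, threading.Lock()

    def prepare_for_event(tag):
        with _lock:
            return _events.setdefault(tag, threading.Event())

    def deliver_event(tag):
        with _lock:
            ev = _events.get(tag)
        if ev:
            ev.set()

    tag = 'network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489'
    waiter = prepare_for_event(tag)
    # ... plug the VIF here; Neutron later fires the event:
    deliver_event(tag)
    print(waiter.wait(timeout=300))  # True
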
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.543 244018 DEBUG nova.virt.libvirt.vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:10Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.543 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.544 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.544 244018 DEBUG os_vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14cc64dc-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14cc64dc-74, col_values=(('external_ids', {'iface-id': '14cc64dc-7418-4171-b1f4-1e19bed07489', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:e2:2c', 'vm-uuid': 'e25f3aed-d038-4b27-a15f-9beada2b67b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:15 compute-0 NetworkManager[49836]: <info>  [1772023155.5523] manager: (tap14cc64dc-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.558 244018 INFO os_vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74')
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.658 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.659 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.659 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No VIF found with MAC fa:16:3e:60:e2:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.660 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Using config drive
Feb 25 12:39:15 compute-0 nova_compute[244014]: 2026-02-25 12:39:15.693 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.080 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating config drive at /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.088 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps54vjq5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.123 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updated VIF entry in instance network info cache for port 14cc64dc-7418-4171-b1f4-1e19bed07489. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.125 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.144 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:39:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.230 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps54vjq5m" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.269 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.275 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3534650665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:16 compute-0 ceph-mon[76335]: pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.453 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.455 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deleting local config drive /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config because it was imported into RBD.
Feb 25 12:39:16 compute-0 kernel: tap14cc64dc-74: entered promiscuous mode
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.5233] manager: (tap14cc64dc-74): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Feb 25 12:39:16 compute-0 ovn_controller[147040]: 2026-02-25T12:39:16Z|00957|binding|INFO|Claiming lport 14cc64dc-7418-4171-b1f4-1e19bed07489 for this chassis.
Feb 25 12:39:16 compute-0 ovn_controller[147040]: 2026-02-25T12:39:16Z|00958|binding|INFO|14cc64dc-7418-4171-b1f4-1e19bed07489: Claiming fa:16:3e:60:e2:2c 10.100.0.12
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.551 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:e2:2c 10.100.0.12'], port_security=['fa:16:3e:60:e2:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e25f3aed-d038-4b27-a15f-9beada2b67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2470f8bb56548cd818113da683292d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bf5667d-50da-48f9-b7c1-919d155c28ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce9da3ce-009a-4d1e-9912-d9a8180cd783, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=14cc64dc-7418-4171-b1f4-1e19bed07489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.554 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 14cc64dc-7418-4171-b1f4-1e19bed07489 in datapath 941e8032-4eea-492d-a5e0-a9cc25d1a9cc bound to our chassis
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.556 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 941e8032-4eea-492d-a5e0-a9cc25d1a9cc
Feb 25 12:39:16 compute-0 systemd-machined[210048]: New machine qemu-123-instance-00000060.
Feb 25 12:39:16 compute-0 systemd-udevd[330000]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.565 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[095e0f93-fa81-4d5c-9570-93a562b8610e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.566 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap941e8032-41 in ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.568 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap941e8032-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7eae67-cf23-41b3-9e9c-36567252001d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.569 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[114f0d8f-e155-4a04-a73d-9dd0ef3d0893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.5736] device (tap14cc64dc-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.5745] device (tap14cc64dc-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:39:16 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000060.
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.581 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[589b9327-42bd-498d-a05c-287d763ce4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_controller[147040]: 2026-02-25T12:39:16Z|00959|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 ovn-installed in OVS
Feb 25 12:39:16 compute-0 ovn_controller[147040]: 2026-02-25T12:39:16Z|00960|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 up in Southbound
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.607 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1766eacc-a5d6-41b1-ab09-78ea4ab06206]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 podman[329975]: 2026-02-25 12:39:16.620275849 +0000 UTC m=+0.122810156 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.636 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e8469b-ed76-4174-83e7-5660d0ffd00c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f30065f-ce10-430d-81a5-7d4065f49dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.6415] manager: (tap941e8032-40): new Veth device (/org/freedesktop/NetworkManager/Devices/397)
Feb 25 12:39:16 compute-0 systemd-udevd[330009]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.670 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c99d39b7-28fa-4b62-8313-4ef6eb604190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.674 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf975a4-ae09-4a12-a521-33e6d0cbe02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.6909] device (tap941e8032-40): carrier: link connected
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.693 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2b08cc1e-c92c-47f3-9575-824845903548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d501b966-177d-4297-b393-b4e1edc160c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap941e8032-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:fa:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512609, 'reachable_time': 43861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330040, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.720 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47b99608-dec9-422e-ab5e-191be34f02f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:fa7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512609, 'tstamp': 512609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330041, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f787085-a64c-4eaa-b696-77eaec7010a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap941e8032-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:fa:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512609, 'reachable_time': 43861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330042, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06c25158-3129-494d-aefe-9331220d2f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.800 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fad42c-b145-4b67-9dd1-4aa0bb9e1d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.801 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap941e8032-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.802 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.803 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap941e8032-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:16 compute-0 kernel: tap941e8032-40: entered promiscuous mode
Feb 25 12:39:16 compute-0 NetworkManager[49836]: <info>  [1772023156.8052] manager: (tap941e8032-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.812 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap941e8032-40, col_values=(('external_ids', {'iface-id': '3398affd-9bd0-414f-9185-f9baaa68693b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 ovn_controller[147040]: 2026-02-25T12:39:16Z|00961|binding|INFO|Releasing lport 3398affd-9bd0-414f-9185-f9baaa68693b from this chassis (sb_readonly=0)
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.816 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.818 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47360737-7739-4915-b8cf-0f4ae83f9702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.820 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-941e8032-4eea-492d-a5e0-a9cc25d1a9cc
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 941e8032-4eea-492d-a5e0-a9cc25d1a9cc
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:39:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.820 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'env', 'PROCESS_TAG=haproxy-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.860 244018 DEBUG nova.compute.manager [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.861 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.862 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.863 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:16 compute-0 nova_compute[244014]: 2026-02-25 12:39:16.863 244018 DEBUG nova.compute.manager [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Processing event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:39:17 compute-0 sudo[330052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:39:17 compute-0 sudo[330052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:17 compute-0 sudo[330052]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:17 compute-0 sudo[330078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:39:17 compute-0 sudo[330078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:17 compute-0 podman[330124]: 2026-02-25 12:39:17.164302502 +0000 UTC m=+0.054052267 container create 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:39:17 compute-0 systemd[1]: Started libpod-conmon-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope.
Feb 25 12:39:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f32400978b76bc1f139aa47fa194c73ab872f9f6624b37050a90a5820e6f2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:17 compute-0 podman[330124]: 2026-02-25 12:39:17.133629686 +0000 UTC m=+0.023379511 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:39:17 compute-0 podman[330124]: 2026-02-25 12:39:17.247080758 +0000 UTC m=+0.136830573 container init 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:39:17 compute-0 podman[330124]: 2026-02-25 12:39:17.251640786 +0000 UTC m=+0.141390581 container start 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:39:17 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : New worker (330194) forked
Feb 25 12:39:17 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : Loading success.
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.351 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3506556, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.353 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Started (Lifecycle Event)
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.356 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.359 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.362 244018 INFO nova.virt.libvirt.driver [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance spawned successfully.
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.363 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.416 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.426 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.426 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.428 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.461 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.461 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3510406, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.462 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Paused (Lifecycle Event)
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.479 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.481 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3586068, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Resumed (Lifecycle Event)
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.505 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.508 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.516 244018 INFO nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 7.23 seconds to spawn the instance on the hypervisor.
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.516 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.527 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:39:17 compute-0 sudo[330078]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.571 244018 INFO nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 8.42 seconds to build instance.
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.589 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:39:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:39:17 compute-0 sudo[330228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:39:17 compute-0 sudo[330228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:17 compute-0 sudo[330228]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:17 compute-0 sudo[330253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:39:17 compute-0 sudo[330253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:39:17 compute-0 nova_compute[244014]: 2026-02-25 12:39:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:39:17 compute-0 podman[330277]: 2026-02-25 12:39:17.942552424 +0000 UTC m=+0.109575443 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.113 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.114 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.115 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.212922614 +0000 UTC m=+0.123120056 container create f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.128366727 +0000 UTC m=+0.038564069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:18 compute-0 systemd[1]: Started libpod-conmon-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope.
Feb 25 12:39:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.331105819 +0000 UTC m=+0.241303171 container init f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.341373179 +0000 UTC m=+0.251570501 container start f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:39:18 compute-0 silly_carver[330330]: 167 167
Feb 25 12:39:18 compute-0 systemd[1]: libpod-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope: Deactivated successfully.
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.34639241 +0000 UTC m=+0.256589762 container attach f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.347721668 +0000 UTC m=+0.257918980 container died f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-270082f391bdf911c47c26fc2d9af20322580860774b51e7471192f466be4bbf-merged.mount: Deactivated successfully.
Feb 25 12:39:18 compute-0 podman[330314]: 2026-02-25 12:39:18.384433824 +0000 UTC m=+0.294631146 container remove f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:39:18 compute-0 systemd[1]: libpod-conmon-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope: Deactivated successfully.
Feb 25 12:39:18 compute-0 podman[330355]: 2026-02-25 12:39:18.530784874 +0000 UTC m=+0.037231862 container create d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:18 compute-0 systemd[1]: Started libpod-conmon-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope.
Feb 25 12:39:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:18 compute-0 podman[330355]: 2026-02-25 12:39:18.514222326 +0000 UTC m=+0.020669344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:18 compute-0 podman[330355]: 2026-02-25 12:39:18.618927411 +0000 UTC m=+0.125374419 container init d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:39:18 compute-0 podman[330355]: 2026-02-25 12:39:18.624639432 +0000 UTC m=+0.131086420 container start d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:39:18 compute-0 podman[330355]: 2026-02-25 12:39:18.62811237 +0000 UTC m=+0.134559358 container attach d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:39:18 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:39:18 compute-0 ceph-mon[76335]: pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.971 244018 DEBUG nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.974 244018 DEBUG nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] No waiting events found dispatching network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:39:18 compute-0 nova_compute[244014]: 2026-02-25 12:39:18.975 244018 WARNING nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received unexpected event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 for instance with vm_state active and task_state None.
Feb 25 12:39:19 compute-0 zealous_boyd[330372]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:39:19 compute-0 zealous_boyd[330372]: --> All data devices are unavailable
Feb 25 12:39:19 compute-0 systemd[1]: libpod-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope: Deactivated successfully.
Feb 25 12:39:19 compute-0 podman[330355]: 2026-02-25 12:39:19.089052867 +0000 UTC m=+0.595499895 container died d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9-merged.mount: Deactivated successfully.
Feb 25 12:39:19 compute-0 podman[330355]: 2026-02-25 12:39:19.132556095 +0000 UTC m=+0.639003123 container remove d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:39:19 compute-0 sudo[330253]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:19 compute-0 systemd[1]: libpod-conmon-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope: Deactivated successfully.
Feb 25 12:39:19 compute-0 sudo[330406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:39:19 compute-0 sudo[330406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:19 compute-0 sudo[330406]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:19 compute-0 sudo[330431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:39:19 compute-0 sudo[330431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:19 compute-0 sshd-session[330387]: Invalid user user from 2.57.121.25 port 55216
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.616141671 +0000 UTC m=+0.060125897 container create 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:19 compute-0 systemd[1]: Started libpod-conmon-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope.
Feb 25 12:39:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.589025486 +0000 UTC m=+0.033009772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.695199463 +0000 UTC m=+0.139183719 container init 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.701459439 +0000 UTC m=+0.145443645 container start 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:19 compute-0 epic_swirles[330486]: 167 167
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.706076729 +0000 UTC m=+0.150060955 container attach 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 25 12:39:19 compute-0 systemd[1]: libpod-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope: Deactivated successfully.
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.706960394 +0000 UTC m=+0.150944630 container died 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5bfaa8df3c0884f7648437641d338ca35ed57fcf09e763fbc06a950f75f92bc-merged.mount: Deactivated successfully.
Feb 25 12:39:19 compute-0 podman[330468]: 2026-02-25 12:39:19.753830977 +0000 UTC m=+0.197815183 container remove 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:19 compute-0 systemd[1]: libpod-conmon-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope: Deactivated successfully.
Feb 25 12:39:19 compute-0 podman[330510]: 2026-02-25 12:39:19.900960049 +0000 UTC m=+0.052525563 container create ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:39:19 compute-0 systemd[1]: Started libpod-conmon-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope.
Feb 25 12:39:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:19 compute-0 podman[330510]: 2026-02-25 12:39:19.882020065 +0000 UTC m=+0.033585569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:19 compute-0 podman[330510]: 2026-02-25 12:39:19.986614486 +0000 UTC m=+0.138180010 container init ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:39:19 compute-0 podman[330510]: 2026-02-25 12:39:19.994440057 +0000 UTC m=+0.146005541 container start ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:39:19 compute-0 podman[330510]: 2026-02-25 12:39:19.998366188 +0000 UTC m=+0.149931712 container attach ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:39:20 compute-0 sshd-session[330387]: Received disconnect from 2.57.121.25 port 55216:11: Bye [preauth]
Feb 25 12:39:20 compute-0 sshd-session[330387]: Disconnected from invalid user user 2.57.121.25 port 55216 [preauth]
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]: {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     "0": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "devices": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "/dev/loop3"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             ],
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_name": "ceph_lv0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_size": "21470642176",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "name": "ceph_lv0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "tags": {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_name": "ceph",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.crush_device_class": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.encrypted": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.objectstore": "bluestore",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_id": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.vdo": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.with_tpm": "0"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             },
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "vg_name": "ceph_vg0"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         }
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     ],
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     "1": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "devices": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "/dev/loop4"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             ],
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_name": "ceph_lv1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_size": "21470642176",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "name": "ceph_lv1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "tags": {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_name": "ceph",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.crush_device_class": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.encrypted": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.objectstore": "bluestore",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_id": "1",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.vdo": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.with_tpm": "0"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             },
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "vg_name": "ceph_vg1"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         }
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     ],
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     "2": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "devices": [
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "/dev/loop5"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             ],
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_name": "ceph_lv2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_size": "21470642176",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "name": "ceph_lv2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "tags": {
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.cluster_name": "ceph",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.crush_device_class": "",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.encrypted": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.objectstore": "bluestore",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osd_id": "2",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.vdo": "0",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:                 "ceph.with_tpm": "0"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             },
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "type": "block",
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:             "vg_name": "ceph_vg2"
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:         }
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]:     ]
Feb 25 12:39:20 compute-0 wizardly_maxwell[330526]: }
Feb 25 12:39:20 compute-0 ceph-mon[76335]: pgmap v1732: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:39:20 compute-0 systemd[1]: libpod-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope: Deactivated successfully.
Feb 25 12:39:20 compute-0 podman[330510]: 2026-02-25 12:39:20.300090102 +0000 UTC m=+0.451655646 container died ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517-merged.mount: Deactivated successfully.
Feb 25 12:39:20 compute-0 podman[330510]: 2026-02-25 12:39:20.352310526 +0000 UTC m=+0.503876050 container remove ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:39:20 compute-0 systemd[1]: libpod-conmon-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope: Deactivated successfully.
Feb 25 12:39:20 compute-0 sudo[330431]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:20 compute-0 sudo[330548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:39:20 compute-0 sudo[330548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:20 compute-0 sudo[330548]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.490 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.529 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.529 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.531 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:20 compute-0 sudo[330573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:39:20 compute-0 sudo[330573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.867845314 +0000 UTC m=+0.049493147 container create 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:20 compute-0 systemd[1]: Started libpod-conmon-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope.
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
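    [annotation] The three DEBUG lines above are the standard oslo.concurrency trace for a guarded section: acquire, run, release. A minimal sketch of the same API; only the lock name is taken from the log, the decorator usage and function body are illustrative:

        from oslo_concurrency import lockutils

        # With DEBUG logging enabled, entering and leaving this function
        # emits the same "Acquiring lock ... / Lock ... acquired ... /
        # Lock ... released" lines seen above.
        @lockutils.synchronized('compute_resources')
        def clean_compute_node_cache():
            # Runs only while "compute_resources" is held.
            pass

        clean_compute_node_cache()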
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:39:20 compute-0 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.84925501 +0000 UTC m=+0.030902883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.952998707 +0000 UTC m=+0.134646630 container init 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.961309982 +0000 UTC m=+0.142957845 container start 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.966177699 +0000 UTC m=+0.147825562 container attach 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:39:20 compute-0 hardcore_maxwell[330626]: 167 167
Feb 25 12:39:20 compute-0 systemd[1]: libpod-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope: Deactivated successfully.
Feb 25 12:39:20 compute-0 podman[330610]: 2026-02-25 12:39:20.96760862 +0000 UTC m=+0.149256463 container died 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-358560c4a26c331605204b2980e8893ac4495c750cae63e5bf7c1a06317fb36b-merged.mount: Deactivated successfully.
Feb 25 12:39:21 compute-0 podman[330610]: 2026-02-25 12:39:21.004628694 +0000 UTC m=+0.186276527 container remove 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:39:21 compute-0 systemd[1]: libpod-conmon-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope: Deactivated successfully.
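    [annotation] The create -> init -> start -> attach -> died -> remove sequence for hardcore_maxwell above is the journal footprint of a single short-lived "podman run --rm". A hypothetical re-creation; the image digest is the one from the log, the "true" command is illustrative only:

        import subprocess

        image = ("quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e44"
                 "09b6dac688bcc77369b06009b5830fa8d86")
        # --rm makes podman emit the same create/init/start/attach/died/
        # remove event chain and then delete the container again.
        subprocess.run(["podman", "run", "--rm", image, "true"], check=True)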
Feb 25 12:39:21 compute-0 podman[330666]: 2026-02-25 12:39:21.189597604 +0000 UTC m=+0.061682461 container create 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:21 compute-0 systemd[1]: Started libpod-conmon-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope.
Feb 25 12:39:21 compute-0 podman[330666]: 2026-02-25 12:39:21.158995131 +0000 UTC m=+0.031080058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:39:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:21 compute-0 podman[330666]: 2026-02-25 12:39:21.30285776 +0000 UTC m=+0.174942627 container init 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:39:21 compute-0 podman[330666]: 2026-02-25 12:39:21.314818158 +0000 UTC m=+0.186903035 container start 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:39:21 compute-0 podman[330666]: 2026-02-25 12:39:21.319285734 +0000 UTC m=+0.191370611 container attach 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:39:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604528281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.572 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/604528281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
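    [annotation] The probe that nova-compute timed at 0.652s above is a plain CLI call; a minimal re-run with the same client id and conf path as in the log, parsing the "ceph df --format=json" output:

        import json
        import subprocess

        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        stats = json.loads(out)["stats"]
        # Cluster-wide byte counts; these feed the resource tracker's
        # free-disk accounting seen a few lines further down.
        print(stats["total_bytes"], stats["total_avail_bytes"])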
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.649 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.650 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.772 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3658MB free_disk=59.96670763287693GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.847 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e25f3aed-d038-4b27-a15f-9beada2b67b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:39:21 compute-0 nova_compute[244014]: 2026-02-25 12:39:21.884 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:22 compute-0 lvm[330783]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:39:22 compute-0 lvm[330783]: VG ceph_vg1 finished
Feb 25 12:39:22 compute-0 lvm[330780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:39:22 compute-0 lvm[330780]: VG ceph_vg0 finished
Feb 25 12:39:22 compute-0 lvm[330788]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:39:22 compute-0 lvm[330788]: VG ceph_vg2 finished
Feb 25 12:39:22 compute-0 suspicious_golick[330684]: {}
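    [annotation] The "{}" printed by suspicious_golick appears to be the JSON result of the "ceph-volume ... raw list --format json" call launched via sudo at 12:39:20: an empty object, i.e. no raw-mode OSD devices found on this host. A hypothetical direct invocation of the same probe, with the fsid copied from the log:

        import json
        import subprocess

        out = subprocess.check_output(
            ["cephadm", "ceph-volume",
             "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
             "--", "raw", "list", "--format", "json"])
        print(json.loads(out))  # {} -> no raw-mode OSDs found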
Feb 25 12:39:22 compute-0 podman[330666]: 2026-02-25 12:39:22.130810085 +0000 UTC m=+1.002894932 container died 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:39:22 compute-0 systemd[1]: libpod-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Deactivated successfully.
Feb 25 12:39:22 compute-0 systemd[1]: libpod-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Consumed 1.046s CPU time.
Feb 25 12:39:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Feb 25 12:39:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908164515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.427 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.431 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963-merged.mount: Deactivated successfully.
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.453 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
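    [annotation] The inventory payload above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class; with the values from this line that works out to 32 VCPU, 7167 MB of RAM and 52.2 GB of disk:

        # Values copied verbatim from the inventory DEBUG line above.
        inventory = {
            "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
            "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
            "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
        }
        for rc, inv in inventory.items():
            capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
            print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2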
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.487 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.488 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.614 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.615 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.616 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.617 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.617 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.620 244018 INFO nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Terminating instance
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.622 244018 DEBUG nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:39:22 compute-0 ceph-mon[76335]: pgmap v1733: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Feb 25 12:39:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2908164515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:22 compute-0 podman[330666]: 2026-02-25 12:39:22.819968012 +0000 UTC m=+1.692052899 container remove 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:39:22 compute-0 kernel: tap14cc64dc-74 (unregistering): left promiscuous mode
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:22 compute-0 NetworkManager[49836]: <info>  [1772023162.8751] device (tap14cc64dc-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:22 compute-0 ovn_controller[147040]: 2026-02-25T12:39:22Z|00962|binding|INFO|Releasing lport 14cc64dc-7418-4171-b1f4-1e19bed07489 from this chassis (sb_readonly=0)
Feb 25 12:39:22 compute-0 ovn_controller[147040]: 2026-02-25T12:39:22Z|00963|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 down in Southbound
Feb 25 12:39:22 compute-0 ovn_controller[147040]: 2026-02-25T12:39:22Z|00964|binding|INFO|Removing iface tap14cc64dc-74 ovn-installed in OVS
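    [annotation] When ovn-controller releases an lport and sets it down in the Southbound DB, as in the three lines above, the resulting Port_Binding row can be checked from the chassis. A hypothetical inspection, assuming ovn-sbctl is installed and pointed at the Southbound DB:

        import subprocess

        # After the release, the row's chassis column is empty and up=[false];
        # the logical_port UUID is the one from the log.
        subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding",
             "logical_port=14cc64dc-7418-4171-b1f4-1e19bed07489"],
            check=True)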
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:22 compute-0 sudo[330573]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.892 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:e2:2c 10.100.0.12'], port_security=['fa:16:3e:60:e2:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e25f3aed-d038-4b27-a15f-9beada2b67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2470f8bb56548cd818113da683292d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bf5667d-50da-48f9-b7c1-919d155c28ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce9da3ce-009a-4d1e-9912-d9a8180cd783, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=14cc64dc-7418-4171-b1f4-1e19bed07489) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:22 compute-0 nova_compute[244014]: 2026-02-25 12:39:22.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.897 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 14cc64dc-7418-4171-b1f4-1e19bed07489 in datapath 941e8032-4eea-492d-a5e0-a9cc25d1a9cc unbound from our chassis
Feb 25 12:39:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.899 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:39:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e166df69-b439-411c-940f-b914a0e2fd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc namespace which is not needed anymore
Feb 25 12:39:22 compute-0 systemd[1]: libpod-conmon-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Deactivated successfully.
Feb 25 12:39:22 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 25 12:39:22 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Consumed 6.043s CPU time.
Feb 25 12:39:22 compute-0 systemd-machined[210048]: Machine qemu-123-instance-00000060 terminated.
Feb 25 12:39:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:39:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:39:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : haproxy version is 2.8.14-c23fe91
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : path to executable is /usr/sbin/haproxy
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : Exiting Master process...
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : Exiting Master process...
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [ALERT]    (330186) : Current worker (330194) exited with code 143 (Terminated)
Feb 25 12:39:23 compute-0 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : All workers exited. Exiting... (0)
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 systemd[1]: libpod-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope: Deactivated successfully.
Feb 25 12:39:23 compute-0 conmon[330158]: conmon 3ea5b341c07a411098b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope/container/memory.events
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 podman[330826]: 2026-02-25 12:39:23.051541257 +0000 UTC m=+0.058469021 container died 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.055 244018 INFO nova.virt.libvirt.driver [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance destroyed successfully.
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.055 244018 DEBUG nova.objects.instance [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'resources' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.070 244018 DEBUG nova.virt.libvirt.vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:39:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:39:17Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.072 244018 DEBUG nova.network.os_vif_util [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.072 244018 DEBUG nova.network.os_vif_util [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.073 244018 DEBUG os_vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.075 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.075 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14cc64dc-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.083 244018 INFO os_vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74')
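    [annotation] The DelPortCommand transaction that os-vif committed above (port=tap14cc64dc-74, bridge=br-int, if_exists=True) is equivalent to the classic CLI form; a sketch, assuming local ovs-vsctl access:

        import subprocess

        # --if-exists mirrors the if_exists=True flag in the logged txn.
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap14cc64dc-74"],
            check=True)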
Feb 25 12:39:23 compute-0 sudo[330843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:39:23 compute-0 sudo[330843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:39:23 compute-0 sudo[330843]: pam_unix(sudo:session): session closed for user root
Feb 25 12:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d-userdata-shm.mount: Deactivated successfully.
Feb 25 12:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f32400978b76bc1f139aa47fa194c73ab872f9f6624b37050a90a5820e6f2f-merged.mount: Deactivated successfully.
Feb 25 12:39:23 compute-0 podman[330826]: 2026-02-25 12:39:23.109177214 +0000 UTC m=+0.116104988 container cleanup 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:39:23 compute-0 systemd[1]: libpod-conmon-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope: Deactivated successfully.
Feb 25 12:39:23 compute-0 podman[330905]: 2026-02-25 12:39:23.177270545 +0000 UTC m=+0.045974678 container remove 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[013eb1c2-c4be-493c-9e96-4cbb4ad315de]: (4, ('Wed Feb 25 12:39:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc (3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d)\n3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d\nWed Feb 25 12:39:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc (3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d)\n3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.184 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee78dcbc-debd-4eac-98e6-9f99d03c925b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.185 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap941e8032-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 kernel: tap941e8032-40: left promiscuous mode
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6a4d8f-2822-4399-b88d-69d4ca41e95f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7bf3c5-3ae4-44f4-924e-684f5e387f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4852dd-7b16-4db9-8257-bfe8339e0eff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc023880-d5db-4ed3-8fed-089e09d9fd15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512603, 'reachable_time': 27742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330923, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d941e8032\x2d4eea\x2d492d\x2da5e0\x2da9cc25d1a9cc.mount: Deactivated successfully.
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.232 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:39:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.232 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cc59ac-045e-467a-a1ae-2ed829e9c819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
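    [annotation] The namespace cleanup above (neutron.privileged...remove_netns) is a thin privsep wrapper around pyroute2's netns helpers; a minimal sketch of the same removal, with the namespace name taken from the log and root privileges assumed:

        from pyroute2 import netns

        ns = "ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc"
        # Delete the namespace only if it is still present.
        if ns in netns.listnetns():
            netns.remove(ns)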
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.488 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.488 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:23 compute-0 nova_compute[244014]: 2026-02-25 12:39:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.057 244018 INFO nova.virt.libvirt.driver [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deleting instance files /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0_del
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.058 244018 INFO nova.virt.libvirt.driver [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deletion of /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0_del complete
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.132 244018 INFO nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 1.51 seconds to destroy the instance on the hypervisor.
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.132 244018 DEBUG oslo.service.loopingcall [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.133 244018 DEBUG nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.133 244018 DEBUG nova.network.neutron [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:39:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 75 op/s
Feb 25 12:39:24 compute-0 nova_compute[244014]: 2026-02-25 12:39:24.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:25 compute-0 ceph-mon[76335]: pgmap v1734: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 75 op/s
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.174 244018 DEBUG nova.network.neutron [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.199 244018 INFO nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 1.07 seconds to deallocate network for instance.
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.246 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.248 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.294 244018 DEBUG oslo_concurrency.processutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.338 244018 DEBUG nova.compute.manager [req-ae228d59-63e5-4610-8165-e50a2f78812f req-93d8e484-1263-488d-89c2-391f8aa477a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-deleted-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411513205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.936 244018 DEBUG oslo_concurrency.processutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
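Editor's note: the resource tracker probes Ceph capacity by shelling out to the exact `ceph df` command logged above (also visible as the audited mon_command on the ceph-mon side). A minimal reproduction, assuming the same client id and conf path as on compute-0 and the JSON field names used by recent Ceph releases:

```python
# Reproduce the "ceph df" capacity probe from the log and parse its JSON.
import json
import subprocess

CMD = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]

out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)
# Cluster-wide totals plus per-pool usage.
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
for pool in stats["pools"]:
    print(pool["name"], pool["stats"]["bytes_used"])
```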
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.945 244018 DEBUG nova.compute.provider_tree [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:39:25 compute-0 nova_compute[244014]: 2026-02-25 12:39:25.983 244018 DEBUG nova.scheduler.client.report [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
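Editor's note: the schedulable capacity Placement derives from the inventory logged above follows capacity = (total − reserved) × allocation_ratio. A worked check with the logged numbers:

```python
# Worked example: turn the logged provider inventory into capacity.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```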
Feb 25 12:39:26 compute-0 nova_compute[244014]: 2026-02-25 12:39:26.013 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:26 compute-0 nova_compute[244014]: 2026-02-25 12:39:26.040 244018 INFO nova.scheduler.client.report [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Deleted allocations for instance e25f3aed-d038-4b27-a15f-9beada2b67b0
Feb 25 12:39:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1411513205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:26 compute-0 nova_compute[244014]: 2026-02-25 12:39:26.107 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
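Editor's note: the Acquiring/acquired/"released" lines with waited/held timings that bracket this terminate flow come from oslo.concurrency's lockutils wrapper. A sketch of the pattern that produces them, assuming the usual 'nova-' lock prefix (the decorated function here is illustrative):

```python
# Sketch of the oslo.concurrency locking pattern behind the
# "Lock \"compute_resources\" acquired ... waited/held" journal lines.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix("nova-")

@synchronized("compute_resources")
def update_usage():
    # Runs with the in-process "compute_resources" lock held; the
    # decorator itself emits the acquire/release debug lines with
    # waited/held durations, as seen in the log.
    pass
```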
Feb 25 12:39:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:39:27 compute-0 ceph-mon[76335]: pgmap v1735: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:39:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:28 compute-0 nova_compute[244014]: 2026-02-25 12:39:28.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:39:28 compute-0 ceph-mon[76335]: pgmap v1736: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 12:39:28 compute-0 nova_compute[244014]: 2026-02-25 12:39:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:28 compute-0 nova_compute[244014]: 2026-02-25 12:39:28.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:39:29 compute-0 nova_compute[244014]: 2026-02-25 12:39:29.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:30 compute-0 nova_compute[244014]: 2026-02-25 12:39:30.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 12:39:30 compute-0 ceph-mon[76335]: pgmap v1737: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:39:30
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Feb 25 12:39:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:39:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:39:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 12:39:32 compute-0 ceph-mon[76335]: pgmap v1738: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 12:39:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:33 compute-0 nova_compute[244014]: 2026-02-25 12:39:33.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:33 compute-0 nova_compute[244014]: 2026-02-25 12:39:33.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:39:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:34.134 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:34.135 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:39:34 compute-0 nova_compute[244014]: 2026-02-25 12:39:34.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 1.2 KiB/s wr, 54 op/s
Feb 25 12:39:34 compute-0 ceph-mon[76335]: pgmap v1739: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 1.2 KiB/s wr, 54 op/s
Feb 25 12:39:35 compute-0 nova_compute[244014]: 2026-02-25 12:39:35.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:39:36 compute-0 ceph-mon[76335]: pgmap v1740: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:39:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:38 compute-0 nova_compute[244014]: 2026-02-25 12:39:38.053 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023163.052404, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:38 compute-0 nova_compute[244014]: 2026-02-25 12:39:38.054 244018 INFO nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Stopped (Lifecycle Event)
Feb 25 12:39:38 compute-0 nova_compute[244014]: 2026-02-25 12:39:38.080 244018 DEBUG nova.compute.manager [None req-45d8dab6-73f1-48db-86e6-4e3a9c3b6db2 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:38 compute-0 nova_compute[244014]: 2026-02-25 12:39:38.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:39:38 compute-0 ceph-mon[76335]: pgmap v1741: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 12:39:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:39.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:40 compute-0 nova_compute[244014]: 2026-02-25 12:39:40.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:40 compute-0 ceph-mon[76335]: pgmap v1742: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:39:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb 25 12:39:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb 25 12:39:41 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1290455653175771e-05 of space, bias 1.0, pg target 0.0033871366959527314 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939042760997172 of space, bias 1.0, pg target 0.7481712828299152 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.601542546428212e-07 of space, bias 4.0, pg target 0.0009121851055713854 quantized to 16 (current 16)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:39:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
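Editor's note: the autoscaler's "pg target" figures above are reproducible as usage_ratio × bias × (target PGs per OSD × OSD count); with this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100, the logged values match exactly before being quantized to a power of two:

```python
# Reproduce the pg_autoscaler "pg target" arithmetic from the log,
# assuming mon_target_pg_per_osd = 100 (the default) and 3 OSDs.
TARGET_PG_PER_OSD = 100
NUM_OSDS = 3

pools = [
    # (name, usage ratio of space, bias), as logged
    (".mgr",               7.185749983720779e-06, 1.0),
    ("images",             0.0024939042760997172, 1.0),
    ("cephfs.cephfs.meta", 7.601542546428212e-07, 4.0),
]

for name, ratio, bias in pools:
    pg_target = ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS
    print(name, pg_target)
# .mgr               0.0021557249951162337
# images             0.7481712828299152
# cephfs.cephfs.meta 0.0009121851055713854
```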
Feb 25 12:39:42 compute-0 ceph-mon[76335]: osdmap e248: 3 total, 3 up, 3 in
Feb 25 12:39:42 compute-0 ceph-mon[76335]: pgmap v1744: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Feb 25 12:39:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:43 compute-0 nova_compute[244014]: 2026-02-25 12:39:43.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb 25 12:39:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb 25 12:39:44 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb 25 12:39:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Feb 25 12:39:45 compute-0 nova_compute[244014]: 2026-02-25 12:39:45.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:45 compute-0 ceph-mon[76335]: osdmap e249: 3 total, 3 up, 3 in
Feb 25 12:39:45 compute-0 ceph-mon[76335]: pgmap v1746: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Feb 25 12:39:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb 25 12:39:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb 25 12:39:46 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb 25 12:39:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 KiB/s wr, 23 op/s
Feb 25 12:39:46 compute-0 podman[330947]: 2026-02-25 12:39:46.746791236 +0000 UTC m=+0.079856655 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 12:39:47 compute-0 ceph-mon[76335]: osdmap e250: 3 total, 3 up, 3 in
Feb 25 12:39:47 compute-0 ceph-mon[76335]: pgmap v1748: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 KiB/s wr, 23 op/s
Feb 25 12:39:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:39:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:39:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:39:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
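Editor's note: the handle_command/audit pairs above show the monitor dispatching JSON-formatted commands from client.openstack. The same "df" command can be issued programmatically through the librados Python binding; a sketch, assuming a readable ceph.conf and keyring for that client:

```python
# Issue the same mon command logged above via librados.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(outbuf)
    print(df["stats"]["total_avail_bytes"])
finally:
    cluster.shutdown()
```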
Feb 25 12:39:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb 25 12:39:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb 25 12:39:48 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb 25 12:39:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:39:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.092 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 94 op/s
Feb 25 12:39:48 compute-0 podman[330966]: 2026-02-25 12:39:48.796871769 +0000 UTC m=+0.135130305 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.803 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.804 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.821 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.915 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.916 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.930 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:39:48 compute-0 nova_compute[244014]: 2026-02-25 12:39:48.931 244018 INFO nova.compute.claims [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.053 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:49 compute-0 ceph-mon[76335]: osdmap e251: 3 total, 3 up, 3 in
Feb 25 12:39:49 compute-0 ceph-mon[76335]: pgmap v1750: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 94 op/s
Feb 25 12:39:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366555760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.618 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.624 244018 DEBUG nova.compute.provider_tree [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.648 244018 DEBUG nova.scheduler.client.report [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.673 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.674 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.736 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.737 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.759 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.785 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.894 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.896 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.896 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating image(s)
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.930 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:49 compute-0 nova_compute[244014]: 2026-02-25 12:39:49.967 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.002 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.007 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1366555760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.121 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
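Editor's note: the qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit so that inspecting an untrusted image cannot exhaust the host: the wrapper applies address-space and CPU-time rlimits before exec'ing qemu-img. A reproduction using the exact command from the log:

```python
# Re-run the bounded image probe from the log and parse its JSON output.
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
CMD = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # 1 GiB address-space cap (RLIMIT_AS)
    "--cpu=30",          # 30 s CPU-time cap (RLIMIT_CPU)
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE, "--force-share", "--output=json",
]

info = json.loads(subprocess.run(CMD, capture_output=True, check=True).stdout)
print(info["format"], info["virtual-size"])
```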
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.123 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.124 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.125 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.161 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.166 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.0 KiB/s wr, 91 op/s
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.196 244018 DEBUG nova.policy [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.428 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.513 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
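Editor's note: the spawn path above is a two-step affair: the cached base file is imported into the "vms" pool with the `rbd import` command logged verbatim, then the image is grown to the flavor's root-disk size (1073741824 bytes = 1 GiB). A sketch of the same sequence, with the resize done through librbd rather than nova's rbd_utils:

```python
# Sketch of the logged spawn path: rbd import of the cached base image
# followed by a resize to the flavor root-disk size.
import subprocess
import rados
import rbd

BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
DISK = "62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk"

# Step 1: import, exactly as in the log.
subprocess.run(
    ["rbd", "import", "--pool", "vms", BASE, DISK,
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# Step 2: grow the image to 1 GiB (size is in bytes, as in the log).
with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    with cluster.open_ioctx("vms") as ioctx:
        with rbd.Image(ioctx, DISK) as image:
            image.resize(1073741824)
```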
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.617 244018 DEBUG nova.objects.instance [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Ensure instance console log exists: /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:50 compute-0 nova_compute[244014]: 2026-02-25 12:39:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:51 compute-0 ceph-mon[76335]: pgmap v1751: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.0 KiB/s wr, 91 op/s
Feb 25 12:39:51 compute-0 nova_compute[244014]: 2026-02-25 12:39:51.244 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Successfully created port: 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:39:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 183 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:39:52 compute-0 ceph-mon[76335]: pgmap v1752: 305 pgs: 305 active+clean; 183 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.280 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Successfully updated port: 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.439 244018 DEBUG nova.compute.manager [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-changed-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.440 244018 DEBUG nova.compute.manager [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Refreshing instance network info cache due to event network-changed-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.441 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:39:52 compute-0 nova_compute[244014]: 2026-02-25 12:39:52.443 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:39:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb 25 12:39:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb 25 12:39:53 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.418 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.442 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.443 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance network_info: |[{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.443 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.444 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Refreshing network info cache for port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.447 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start _get_guest_xml network_info=[{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.453 244018 WARNING nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.466 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.467 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.472 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.473 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
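The two probes above show the host failing the cgroups-v1 CPU-controller check and passing the v2 one. On a unified-hierarchy host the v2 check amounts to reading /sys/fs/cgroup/cgroup.controllers; a rough approximation of that probe (not Nova's actual implementation):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """True if the unified cgroup-v2 hierarchy exposes the 'cpu' controller."""
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # no unified (v2) hierarchy mounted here
        return "cpu" in controllers.read_text().split()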
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.473 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.474 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.475 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.475 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.476 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.476 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.477 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.477 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.478 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.479 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.479 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.480 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
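Lines 12:39:53.474-.480 walk get_cpu_topology_constraints from "no flavor or image preferences" (0:0:0, limits 65536:65536:65536) down to the single viable topology for one vCPU. A simplified stand-in for that enumeration step, assuming the only constraint is that sockets x cores x threads must cover the vCPU count within the limits:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield (s, c, t)

    # For the m1.nano flavor above (vcpus=1) this yields [(1, 1, 1)],
    # matching "Got 1 possible topologies" in the log.
    print(list(possible_topologies(1)))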
Feb 25 12:39:53 compute-0 nova_compute[244014]: 2026-02-25 12:39:53.485 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:54 compute-0 ceph-mon[76335]: osdmap e252: 3 total, 3 up, 3 in
Feb 25 12:39:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097274914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.058 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.097 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.103 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 25 12:39:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267101661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.642 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
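Nova shells out twice in this window to the same `ceph mon dump` invocation (0.573s, then 0.539s) to learn the monitor address it later embeds as <host name="..." port="6789"/> in the guest XML. The call is easy to reproduce; this sketch assumes the client.openstack keyring is readable and the usual monmap layout with a top-level "mons" list:

    import json
    import subprocess

    # Identical flags to the log line above: --id/--conf select the
    # client.openstack identity and the cluster config file.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print([m["name"] for m in json.loads(out)["mons"]])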
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.644 244018 DEBUG nova.virt.libvirt.vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:49Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.645 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.646 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.647 244018 DEBUG nova.objects.instance [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.668 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <uuid>62d2fee1-f07f-44e3-a511-6b9bb341a3ed</uuid>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <name>instance-00000061</name>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:name>tempest-₡-825477072</nova:name>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:39:53</nova:creationTime>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <nova:port uuid="521e9ad8-9ef3-4823-9ddb-59c3c3fe0674">
Feb 25 12:39:54 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <system>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="serial">62d2fee1-f07f-44e3-a511-6b9bb341a3ed</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="uuid">62d2fee1-f07f-44e3-a511-6b9bb341a3ed</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </system>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <os>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </os>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <features>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </features>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk">
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config">
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f6:f0:2e"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <target dev="tap521e9ad8-9e"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/console.log" append="off"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <video>
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </video>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:39:54 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:39:54 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:39:54 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:39:54 compute-0 nova_compute[244014]: </domain>
Feb 25 12:39:54 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
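The XML between "End _get_guest_xml" and the closing </domain> is the complete libvirt definition for instance-00000061: 128 MiB of RAM, one vCPU, two RBD-backed disks, and the tap interface plugged below. A short sketch for pulling those fields back out of a saved copy (the guest.xml path is hypothetical):

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()  # the <domain> dumped above
    print(root.findtext("uuid"), root.findtext("name"),
          root.findtext("memory"), "KiB")
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        # both disks here are type="network": protocol rbd, name <pool>/<image>
        print(disk.get("device"), src.get("protocol"), src.get("name"))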
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Preparing to wait for external event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.670 244018 DEBUG nova.virt.libvirt.vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:49Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.670 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.671 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.671 244018 DEBUG os_vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.672 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.673 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521e9ad8-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.679 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521e9ad8-9e, col_values=(('external_ids', {'iface-id': '521e9ad8-9ef3-4823-9ddb-59c3c3fe0674', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f0:2e', 'vm-uuid': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:54 compute-0 NetworkManager[49836]: <info>  [1772023194.6823] manager: (tap521e9ad8-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.688 244018 INFO os_vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e')
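The plug sequence just logged is two ovsdbapp transactions: AddPortCommand(bridge=br-int, port=tap521e9ad8-9e) followed by a DbSetCommand writing the Neutron port metadata into the Interface row's external_ids. The same end state can be reached with the ovs-vsctl CLI instead of the IDL; a sketch using the exact values from the log:

    import subprocess

    bridge, port = "br-int", "tap521e9ad8-9e"
    external_ids = {
        "iface-id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:f6:f0:2e",
        "vm-uuid": "62d2fee1-f07f-44e3-a511-6b9bb341a3ed",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)

It is the iface-id external_id that lets ovn-controller match the OVS interface to its logical Southbound port, which is exactly the claim logged at 12:39:55 below.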
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:f6:f0:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.761 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Using config drive
Feb 25 12:39:54 compute-0 nova_compute[244014]: 2026-02-25 12:39:54.786 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3097274914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:55 compute-0 ceph-mon[76335]: pgmap v1754: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 25 12:39:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/267101661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.308 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating config drive at /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.316 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2yh8lbi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.426 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated VIF entry in instance network info cache for port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.427 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.446 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.462 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2yh8lbi4" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.499 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.505 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.585 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.586 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.603 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.676 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.679 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deleting local config drive /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config because it was imported into RBD.
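Lines 12:39:55.308-.679 show the whole config-drive lifecycle on an RBD-backed host: build an ISO 9660 image locally with mkisofs, `rbd import` it into the vms pool as <uuid>_disk.config, then delete the local copy. A sketch of the same two commands, with flags copied from the log (the staging directory is hypothetical; Nova uses a tempdir):

    import subprocess

    uuid = "62d2fee1-f07f-44e3-a511-6b9bb341a3ed"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    staging = "/tmp/configdrive-staging"  # hypothetical stand-in for Nova's tempdir

    # 1. Build the config drive (volume label config-2, Joliet + Rock Ridge).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True)

    # 2. Import into Ceph so the guest reads it over RBD; the local ISO
    #    can then be removed, as the log does next.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)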
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.723 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.724 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.735 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.736 244018 INFO nova.compute.claims [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:39:55 compute-0 kernel: tap521e9ad8-9e: entered promiscuous mode
Feb 25 12:39:55 compute-0 NetworkManager[49836]: <info>  [1772023195.7431] manager: (tap521e9ad8-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 ovn_controller[147040]: 2026-02-25T12:39:55Z|00965|binding|INFO|Claiming lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for this chassis.
Feb 25 12:39:55 compute-0 ovn_controller[147040]: 2026-02-25T12:39:55Z|00966|binding|INFO|521e9ad8-9ef3-4823-9ddb-59c3c3fe0674: Claiming fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.752 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 systemd-udevd[331315]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:39:55 compute-0 NetworkManager[49836]: <info>  [1772023195.7851] device (tap521e9ad8-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:39:55 compute-0 NetworkManager[49836]: <info>  [1772023195.7865] device (tap521e9ad8-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:39:55 compute-0 systemd-machined[210048]: New machine qemu-124-instance-00000061.
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.791 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f0:2e 10.100.0.3'], port_security=['fa:16:3e:f6:f0:2e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.792 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.793 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:39:55 compute-0 ovn_controller[147040]: 2026-02-25T12:39:55Z|00967|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 ovn-installed in OVS
Feb 25 12:39:55 compute-0 ovn_controller[147040]: 2026-02-25T12:39:55Z|00968|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 up in Southbound
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:55 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000061.
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3aab6304-03f5-4446-a560-06d6883b70c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.803 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec8bae53-f1 in ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec8bae53-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64b70197-9017-46be-aa03-8d5cccd33c20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e83f127-f969-45a0-b40a-752e2961da80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b8ab7-2485-401b-a18e-75acde5c70d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.829 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[917db3d3-7a38-4adb-9c94-10e8e7ac9026]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.852 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c81d39b-314d-4315-9320-87a6dd124ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 NetworkManager[49836]: <info>  [1772023195.8586] manager: (tapec8bae53-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a64af5c-2855-4f1c-ad90-8426ff1c9c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 systemd-udevd[331319]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.886 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0e4ad2-ea14-498d-bb91-1da31ade15c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.890 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[53e87037-3829-4b3a-97ea-fb7e1b92bba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 NetworkManager[49836]: <info>  [1772023195.9153] device (tapec8bae53-f0): carrier: link connected
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.921 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5b6da-856c-40ac-a5ea-aa240aa4fb11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.945 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19ea2ff6-bfeb-4990-bc53-d3a53f154383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331350, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:55 compute-0 nova_compute[244014]: 2026-02-25 12:39:55.952 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.979 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[306d78e1-333f-452a-a972-2b919fb48c59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:a500'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516532, 'tstamp': 516532}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331351, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.999 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf480fa-3b35-47dd-b856-623b42c498ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331353, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.028 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea1dc5f-86fc-4e73-be5a-ec9f87364b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ec9756-e762-44e2-ae1e-c243b23ec78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.099 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:56 compute-0 kernel: tapec8bae53-f0: entered promiscuous mode
Feb 25 12:39:56 compute-0 NetworkManager[49836]: <info>  [1772023196.1018] manager: (tapec8bae53-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.103 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:39:56 compute-0 ovn_controller[147040]: 2026-02-25T12:39:56Z|00969|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.106 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[654c1aec-0e09-425e-afb4-762eab38b5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.108 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:39:56 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.109 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'env', 'PROCESS_TAG=haproxy-ec8bae53-fe6a-49d1-a733-f00c198be561', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec8bae53-fe6a-49d1-a733-f00c198be561.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.124 244018 DEBUG nova.compute.manager [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.125 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.125 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.126 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.126 244018 DEBUG nova.compute.manager [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Processing event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:39:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 25 12:39:56 compute-0 ceph-mon[76335]: pgmap v1755: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.302 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.304 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3041885, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.304 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Started (Lifecycle Event)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.313 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.318 244018 INFO nova.virt.libvirt.driver [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance spawned successfully.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.319 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.362 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.366 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.367 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.368 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.369 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.369 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.370 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.375 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.418 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.420 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3047404, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.420 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Paused (Lifecycle Event)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.449 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.455 244018 INFO nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 6.56 seconds to spawn the instance on the hypervisor.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.456 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.457 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3068895, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.457 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Resumed (Lifecycle Event)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.484 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:39:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:39:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048460738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:56 compute-0 podman[331446]: 2026-02-25 12:39:56.507414767 +0000 UTC m=+0.064110940 container create 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.509 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.516 244018 DEBUG nova.compute.provider_tree [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.528 244018 INFO nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 7.65 seconds to build instance.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.531 244018 DEBUG nova.scheduler.client.report [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:39:56 compute-0 systemd[1]: Started libpod-conmon-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.564 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.565 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.569 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:56 compute-0 podman[331446]: 2026-02-25 12:39:56.47915836 +0000 UTC m=+0.035854553 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:39:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeb1a3a8bf9efcf56fafcf5d930b23a7cd7c44dae6b62bfc44fc981eafd5921/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.609 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:39:56 compute-0 podman[331446]: 2026-02-25 12:39:56.609768026 +0000 UTC m=+0.166464209 container init 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.610 244018 DEBUG nova.network.neutron [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:39:56 compute-0 podman[331446]: 2026-02-25 12:39:56.615494598 +0000 UTC m=+0.172190761 container start 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.626 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:39:56 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : New worker (331470) forked
Feb 25 12:39:56 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : Loading success.
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.646 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.744 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.745 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.746 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating image(s)
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.769 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.796 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.824 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.829 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "838576657076f409077f9f38137795a25cd06654" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:56 compute-0 nova_compute[244014]: 2026-02-25 12:39:56.831 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "838576657076f409077f9f38137795a25cd06654" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.051 244018 DEBUG nova.network.neutron [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.051 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.168 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.207 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.208 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] cloning images/d1c7c812-5951-4aad-935f-d6237846a428@snap to None/01328724-b95f-4b36-809a-ddc156dd0dde_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:39:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3048460738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.299 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "838576657076f409077f9f38137795a25cd06654" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.418 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] resizing rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.478 244018 DEBUG nova.objects.instance [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'migration_context' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.491 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.492 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Ensure instance console log exists: /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.492 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.493 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.493 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.495 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='3f9e4304c5653f60352b81ed1f3aa247',container_format='bare',created_at=2026-02-25T12:39:52Z,direct_url=<?>,disk_format='raw',id=d1c7c812-5951-4aad-935f-d6237846a428,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1512660417',owner='fc8ee4ee0f07455b8722a80a5e837c79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-25T12:39:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd1c7c812-5951-4aad-935f-d6237846a428'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.499 244018 WARNING nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.505 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.505 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.508 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.508 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.509 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.510 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='3f9e4304c5653f60352b81ed1f3aa247',container_format='bare',created_at=2026-02-25T12:39:52Z,direct_url=<?>,disk_format='raw',id=d1c7c812-5951-4aad-935f-d6237846a428,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1512660417',owner='fc8ee4ee0f07455b8722a80a5e837c79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-25T12:39:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.510 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.511 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.511 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.512 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.512 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.514 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.514 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:39:57 compute-0 nova_compute[244014]: 2026-02-25 12:39:57.518 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:39:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb 25 12:39:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142169491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb 25 12:39:58 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.039932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198039986, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 260, "total_data_size": 3184895, "memory_usage": 3227856, "flush_reason": "Manual Compaction"}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198053238, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3128053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35361, "largest_seqno": 37412, "table_properties": {"data_size": 3118835, "index_size": 5712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19268, "raw_average_key_size": 20, "raw_value_size": 3100115, "raw_average_value_size": 3239, "num_data_blocks": 252, "num_entries": 957, "num_filter_entries": 957, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022998, "oldest_key_time": 1772022998, "file_creation_time": 1772023198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13336 microseconds, and 4284 cpu microseconds.
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.052 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.053275) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3128053 bytes OK
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.053290) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055250) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055262) EVENT_LOG_v1 {"time_micros": 1772023198055258, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3176211, prev total WAL file size 3176211, number of live WAL files 2.
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3054KB)], [77(9378KB)]
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198055970, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12731494, "oldest_snapshot_seqno": -1}
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.110 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.115 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6490 keys, 12596330 bytes, temperature: kUnknown
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198139023, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12596330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12547995, "index_size": 31009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 162969, "raw_average_key_size": 25, "raw_value_size": 12427071, "raw_average_value_size": 1914, "num_data_blocks": 1264, "num_entries": 6490, "num_filter_entries": 6490, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.139627) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12596330 bytes
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.140976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.1) write-amplify(4.0) OK, records in: 7026, records dropped: 536 output_compression: NoCompression
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.141006) EVENT_LOG_v1 {"time_micros": 1772023198140993, "job": 44, "event": "compaction_finished", "compaction_time_micros": 83447, "compaction_time_cpu_micros": 28636, "output_level": 6, "num_output_files": 1, "total_output_size": 12596330, "num_input_records": 7026, "num_output_records": 6490, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198141758, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198143052, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:39:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 2.7 MiB/s wr, 154 op/s
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.223 244018 DEBUG nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.224 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.224 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.225 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.226 244018 DEBUG nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.226 244018 WARNING nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received unexpected event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with vm_state active and task_state None.
Feb 25 12:39:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4142169491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:58 compute-0 ceph-mon[76335]: osdmap e253: 3 total, 3 up, 3 in
Feb 25 12:39:58 compute-0 ceph-mon[76335]: pgmap v1757: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 2.7 MiB/s wr, 154 op/s
Feb 25 12:39:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:39:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922528019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.675 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.678 244018 DEBUG nova.objects.instance [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.699 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <uuid>01328724-b95f-4b36-809a-ddc156dd0dde</uuid>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <name>instance-00000062</name>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:name>instance-depend-image</nova:name>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:39:57</nova:creationTime>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:user uuid="d154463d39d44a229e754c4dd30f78d0">tempest-ImageDependencyTests-1556343205-project-member</nova:user>
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <nova:project uuid="fc8ee4ee0f07455b8722a80a5e837c79">tempest-ImageDependencyTests-1556343205</nova:project>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="d1c7c812-5951-4aad-935f-d6237846a428"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <system>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="serial">01328724-b95f-4b36-809a-ddc156dd0dde</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="uuid">01328724-b95f-4b36-809a-ddc156dd0dde</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </system>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <os>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </os>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <features>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </features>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk">
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk.config">
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:39:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/console.log" append="off"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <video>
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </video>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:39:58 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:39:58 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:39:58 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:39:58 compute-0 nova_compute[244014]: </domain>
Feb 25 12:39:58 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.762 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.764 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.765 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Using config drive
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.800 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:58 compute-0 nova_compute[244014]: 2026-02-25 12:39:58.996 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating config drive at /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.003 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj8v99gvx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.140 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj8v99gvx" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.166 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.171 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:39:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2922528019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.347 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.348 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deleting local config drive /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config because it was imported into RBD.
Feb 25 12:39:59 compute-0 systemd-machined[210048]: New machine qemu-125-instance-00000062.
Feb 25 12:39:59 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000062.
Feb 25 12:39:59 compute-0 nova_compute[244014]: 2026-02-25 12:39:59.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 889 KiB/s wr, 87 op/s
Feb 25 12:40:00 compute-0 ceph-mon[76335]: pgmap v1758: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 889 KiB/s wr, 87 op/s
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.330 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023200.3296406, 01328724-b95f-4b36-809a-ddc156dd0dde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.331 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Resumed (Lifecycle Event)
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.335 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.336 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.342 244018 INFO nova.virt.libvirt.driver [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance spawned successfully.
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.343 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.357 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.364 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.368 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.368 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.369 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.369 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.370 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.370 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.395 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.396 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023200.331505, 01328724-b95f-4b36-809a-ddc156dd0dde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.396 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Started (Lifecycle Event)
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.419 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.422 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.432 244018 INFO nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 3.69 seconds to spawn the instance on the hypervisor.
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.433 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.445 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.523 244018 INFO nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 4.84 seconds to build instance.
Feb 25 12:40:00 compute-0 nova_compute[244014]: 2026-02-25 12:40:00.566 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:01 compute-0 nova_compute[244014]: 2026-02-25 12:40:01.845 244018 DEBUG nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:01 compute-0 nova_compute[244014]: 2026-02-25 12:40:01.891 244018 INFO nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] instance snapshotting
Feb 25 12:40:02 compute-0 nova_compute[244014]: 2026-02-25 12:40:02.190 244018 INFO nova.virt.libvirt.driver [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Beginning live snapshot process
Feb 25 12:40:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 19 KiB/s wr, 162 op/s
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.259932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202259977, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 99923, "memory_usage": 107016, "flush_reason": "Manual Compaction"}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202266033, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 99270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37413, "largest_seqno": 37712, "table_properties": {"data_size": 97298, "index_size": 200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5079, "raw_average_key_size": 18, "raw_value_size": 93422, "raw_average_value_size": 338, "num_data_blocks": 9, "num_entries": 276, "num_filter_entries": 276, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023198, "oldest_key_time": 1772023198, "file_creation_time": 1772023202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 6151 microseconds, and 963 cpu microseconds.
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.266082) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 99270 bytes OK
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.266101) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268179) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268199) EVENT_LOG_v1 {"time_micros": 1772023202268192, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 97740, prev total WAL file size 97740, number of live WAL files 2.
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(96KB)], [80(12MB)]
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202268611, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12695600, "oldest_snapshot_seqno": -1}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: pgmap v1759: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 19 KiB/s wr, 162 op/s
Feb 25 12:40:02 compute-0 nova_compute[244014]: 2026-02-25 12:40:02.348 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] creating snapshot(0d009d3b9cf242088f2a76c7db6396f9) on rbd image(01328724-b95f-4b36-809a-ddc156dd0dde_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6257 keys, 11090288 bytes, temperature: kUnknown
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202362272, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11090288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11044803, "index_size": 28735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158829, "raw_average_key_size": 25, "raw_value_size": 10929226, "raw_average_value_size": 1746, "num_data_blocks": 1160, "num_entries": 6257, "num_filter_entries": 6257, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.362667) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11090288 bytes
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.365059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 118.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(239.6) write-amplify(111.7) OK, records in: 6766, records dropped: 509 output_compression: NoCompression
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.365076) EVENT_LOG_v1 {"time_micros": 1772023202365068, "job": 46, "event": "compaction_finished", "compaction_time_micros": 93715, "compaction_time_cpu_micros": 21195, "output_level": 6, "num_output_files": 1, "total_output_size": 11090288, "num_input_records": 6766, "num_output_records": 6257, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202365169, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202366258, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:02 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
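Annotation: the rocksdb EVENT_LOG_v1 payloads above (flush_started, table_file_creation, compaction_started/finished, table_file_deletion) are single-line JSON objects embedded in the journal, so the compaction statistics — e.g. job 46's 93715-microsecond run with 6766 input and 6257 output records — can be pulled out mechanically. A minimal sketch, assuming only the line shape shown in this log:

import json
import re
import sys

# Matches the JSON object that follows the "EVENT_LOG_v1 " marker in the
# ceph-mon journal lines above.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

def iter_events(lines):
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

def summarize(lines):
    # Field names below are exactly those in the "compaction_finished"
    # events in this log.
    for ev in iter_events(lines):
        if ev.get("event") == "compaction_finished":
            secs = ev["compaction_time_micros"] / 1e6
            print("job %s: level %s, %s bytes, %s -> %s records in %.3fs"
                  % (ev["job"], ev["output_level"], ev["total_output_size"],
                     ev["num_input_records"], ev["num_output_records"], secs))

if __name__ == "__main__":
    summarize(sys.stdin)  # e.g. journalctl -u <ceph-mon unit> | python3 events.py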
Feb 25 12:40:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb 25 12:40:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb 25 12:40:03 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.433 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] cloning vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk@0d009d3b9cf242088f2a76c7db6396f9 to images/bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.529 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] flattening images/bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.929 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.930 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.957 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:40:03 compute-0 nova_compute[244014]: 2026-02-25 12:40:03.998 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] removing snapshot(0d009d3b9cf242088f2a76c7db6396f9) on rbd image(01328724-b95f-4b36-809a-ddc156dd0dde_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
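Annotation: the four rbd_utils debug lines for request req-e578cf97 record nova's direct-snapshot path: create a snapshot on the source disk, clone it into the images pool, flatten the clone so it no longer depends on the parent, then remove the source snapshot. A sketch of the same sequence with the python-rbd bindings; pool names ("vms", "images"), the client id, and the image/snapshot names are taken from this log, while the protect/unprotect calls are an RBD requirement for cloning that the log does not show, and error handling is omitted:

import rados
import rbd

with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    src_ioctx = cluster.open_ioctx("vms")
    dst_ioctx = cluster.open_ioctx("images")

    src_name = "01328724-b95f-4b36-809a-ddc156dd0dde_disk"
    snap = "0d009d3b9cf242088f2a76c7db6396f9"
    dst_name = "bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9"

    with rbd.Image(src_ioctx, src_name) as src:
        src.create_snap(snap)       # "creating snapshot(...)"
        src.protect_snap(snap)      # clones require a protected snapshot

    rbd.RBD().clone(src_ioctx, src_name, snap, dst_ioctx, dst_name)  # "cloning ..."

    with rbd.Image(dst_ioctx, dst_name) as dst:
        dst.flatten()               # "flattening images/..." detaches the clone

    with rbd.Image(src_ioctx, src_name) as src:
        src.unprotect_snap(snap)
        src.remove_snap(snap)       # "removing snapshot(...)"

    src_ioctx.close()
    dst_ioctx.close()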
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.038 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.039 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.046 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.047 244018 INFO nova.compute.claims [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.166 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 203 op/s
Feb 25 12:40:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb 25 12:40:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb 25 12:40:04 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb 25 12:40:04 compute-0 ceph-mon[76335]: osdmap e254: 3 total, 3 up, 3 in
Feb 25 12:40:04 compute-0 ceph-mon[76335]: pgmap v1761: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 203 op/s
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:04 compute-0 nova_compute[244014]: 2026-02-25 12:40:04.877 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] creating snapshot(snap) on rbd image(bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:40:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039364253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.021 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
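Annotation: the 0.855s round trip above is the resource tracker shelling out to "ceph df --format=json", which the monitor logs on its side as the client.openstack df dispatch. A minimal equivalent call using the same oslo processutils API and the same flags as the logged command line; the "stats" field names follow the standard ceph df JSON schema, which is an assumption here rather than something visible in this log:

import json

from oslo_concurrency import processutils

out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
df = json.loads(out)
total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
print("%.1f%% of %d bytes available" % (100.0 * avail / total, total))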
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.033 244018 DEBUG nova.compute.provider_tree [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.058 244018 DEBUG nova.scheduler.client.report [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.089 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.090 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.146 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.147 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.167 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.186 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.278 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.279 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.280 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating image(s)
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.299 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.322 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.343 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.347 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.406 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
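Annotation: the qemu-img invocation above is wrapped in oslo_concurrency.prlimit so a malformed image cannot make the prober eat unbounded memory or CPU. An equivalent call through processutils, with the same 1 GiB address-space and 30s CPU caps as the logged command line; "format" and "virtual-size" are standard qemu-img info JSON keys, assumed rather than shown here:

import json

from oslo_concurrency import processutils

limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
    "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
    "--force-share", "--output=json",
    prlimit=limits)  # re-execs under "python3 -m oslo_concurrency.prlimit"
info = json.loads(out)
print(info["format"], info["virtual-size"])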
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.408 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.408 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.409 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
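Annotation: the acquire/release pair around fetch_func_sync above shows nova serializing base-image fetches on the image's cache key, so concurrent builds from the same image do not download it twice (here the lock is held 0.000s because the base file already exists). A minimal sketch of the same pattern; the lock name is the base-file hash from the log and the body is a stand-in for the real fetch:

from oslo_concurrency import lockutils

@lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
def fetch_func_sync():
    # fetch/verify the cached base image here (elided in this sketch)
    pass

fetch_func_sync()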
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.458 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.462 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb 25 12:40:05 compute-0 nova_compute[244014]: 2026-02-25 12:40:05.491 244018 DEBUG nova.policy [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:40:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb 25 12:40:05 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb 25 12:40:05 compute-0 ceph-mon[76335]: osdmap e255: 3 total, 3 up, 3 in
Feb 25 12:40:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3039364253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 165 op/s
Feb 25 12:40:06 compute-0 nova_compute[244014]: 2026-02-25 12:40:06.481 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:06 compute-0 nova_compute[244014]: 2026-02-25 12:40:06.580 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
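Annotation: the import/resize pair above is the image landing in Ceph: "rbd import" is run as a CLI subprocess (exactly as logged), and the subsequent resize to the flavor's 1 GiB root disk can equally be done through the rbd bindings. A sketch using the names and sizes from this log:

import subprocess

import rados
import rbd

subprocess.run(
    ["rbd", "import", "--pool", "vms",
     "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
     "89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk") as image:
            image.resize(1073741824)  # matches "resizing ... to 1073741824"
    finally:
        ioctx.close()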
Feb 25 12:40:07 compute-0 ceph-mon[76335]: osdmap e256: 3 total, 3 up, 3 in
Feb 25 12:40:07 compute-0 ceph-mon[76335]: pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 165 op/s
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.262 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Successfully created port: 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.971 244018 DEBUG nova.objects.instance [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.993 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.994 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Ensure instance console log exists: /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:07 compute-0 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 5.2 MiB/s wr, 203 op/s
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.251470) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208251532, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 326, "num_deletes": 250, "total_data_size": 120515, "memory_usage": 126640, "flush_reason": "Manual Compaction"}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208304024, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 118855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37713, "largest_seqno": 38038, "table_properties": {"data_size": 116724, "index_size": 295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5887, "raw_average_key_size": 20, "raw_value_size": 112493, "raw_average_value_size": 389, "num_data_blocks": 13, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023202, "oldest_key_time": 1772023202, "file_creation_time": 1772023208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 52673 microseconds, and 1612 cpu microseconds.
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.304143) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 118855 bytes OK
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.304178) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428148) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428197) EVENT_LOG_v1 {"time_micros": 1772023208428186, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 118235, prev total WAL file size 131791, number of live WAL files 2.
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(116KB)], [83(10MB)]
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208428959, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11209143, "oldest_snapshot_seqno": -1}
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.686 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Successfully updated port: 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6035 keys, 7864064 bytes, temperature: kUnknown
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208693624, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 7864064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7824881, "index_size": 23009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 154423, "raw_average_key_size": 25, "raw_value_size": 7717882, "raw_average_value_size": 1278, "num_data_blocks": 921, "num_entries": 6035, "num_filter_entries": 6035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.715 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.715 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.716 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.787 244018 DEBUG nova.compute.manager [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-changed-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.788 244018 DEBUG nova.compute.manager [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Refreshing instance network info cache due to event network-changed-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.788 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:08 compute-0 nova_compute[244014]: 2026-02-25 12:40:08.909 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.694026) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 7864064 bytes
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.916170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.3 rd, 29.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.6 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(160.5) write-amplify(66.2) OK, records in: 6546, records dropped: 511 output_compression: NoCompression
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.916239) EVENT_LOG_v1 {"time_micros": 1772023208916214, "job": 48, "event": "compaction_finished", "compaction_time_micros": 264795, "compaction_time_cpu_micros": 29963, "output_level": 6, "num_output_files": 1, "total_output_size": 7864064, "num_input_records": 6546, "num_output_records": 6035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208916648, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208920016, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:40:08 compute-0 ceph-mon[76335]: pgmap v1765: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 5.2 MiB/s wr, 203 op/s
Feb 25 12:40:09 compute-0 nova_compute[244014]: 2026-02-25 12:40:09.341 244018 INFO nova.virt.libvirt.driver [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Snapshot image upload complete
Feb 25 12:40:09 compute-0 nova_compute[244014]: 2026-02-25 12:40:09.342 244018 INFO nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 7.45 seconds to snapshot the instance on the hypervisor.
Feb 25 12:40:09 compute-0 ovn_controller[147040]: 2026-02-25T12:40:09Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 12:40:09 compute-0 ovn_controller[147040]: 2026-02-25T12:40:09Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 12:40:09 compute-0 nova_compute[244014]: 2026-02-25 12:40:09.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:10 compute-0 nova_compute[244014]: 2026-02-25 12:40:10.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 4.5 MiB/s wr, 158 op/s
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.957 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
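Annotation: the network_info payload cached above is a JSON list of VIFs, each carrying the port id, MAC address, and per-subnet fixed IPs. A tiny sketch that walks that exact structure (trimmed to the fields used) to list the addresses, e.g. for correlating nova ports with OVN DHCP traffic:

# Trimmed literal with the structure and values logged above.
nw_info = [{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44",
            "address": "fa:16:3e:bd:ca:34",
            "network": {"subnets": [{"ips": [{"address": "10.100.0.12"}]}]}}]

def fixed_ips(network_info):
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield vif["id"], vif["address"], ip["address"]

for port, mac, addr in fixed_ips(nw_info):
    print(port, mac, addr)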
Feb 25 12:40:11 compute-0 ceph-mon[76335]: pgmap v1766: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 4.5 MiB/s wr, 158 op/s
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.976 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance network_info: |[{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Refreshing network info cache for port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.979 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start _get_guest_xml network_info=[{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.983 244018 WARNING nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.988 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.989 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.993 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.993 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.996 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
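[editor's note] The lines above show nova.virt.hardware picking a guest CPU topology for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0), the only candidate whose sockets*cores*threads product equals the vCPU count is 1:1:1. A minimal sketch of that enumeration idea, assuming the brute-force helper below (hypothetical, not Nova's actual implementation, which also orders candidates by preference):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus,
        # bounded by the limits the log reports as 65536 each.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above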
Feb 25 12:40:11 compute-0 nova_compute[244014]: 2026-02-25 12:40:11.998 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 5.5 MiB/s wr, 219 op/s
Feb 25 12:40:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105441243' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:12 compute-0 nova_compute[244014]: 2026-02-25 12:40:12.587 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:12 compute-0 nova_compute[244014]: 2026-02-25 12:40:12.618 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:12 compute-0 nova_compute[244014]: 2026-02-25 12:40:12.623 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb 25 12:40:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb 25 12:40:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3105441243' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:12 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb 25 12:40:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611789799' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.171 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
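[editor's note] Before touching the RBD-backed disks, Nova shells out to "ceph mon dump --format=json" (twice in this window, roughly 0.55 s each) to discover the monitor addresses. A hedged sketch of the same call from Python; the JSON keys ('mons', 'public_addr') are an assumption based on Ceph's mon dump output format:

    import json
    import subprocess

    def ceph_mon_addresses(client="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command oslo_concurrency.processutils logs above.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        dump = json.loads(out)
        return [mon.get("public_addr", "") for mon in dump.get("mons", [])]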
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.174 244018 DEBUG nova.virt.libvirt.vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:05Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.175 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.176 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
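[editor's note] The nova_to_osvif_vif step maps Nova's network-info dict onto an os-vif object, here VIFOpenVSwitch. A simplified, hypothetical mapping covering only the fields visible in the "Converted object" line (the real os-vif library uses versioned objects, not a dataclass):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Subset of the fields shown in the log's converted object.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif_vif(vif):
        # Assumes vif_type 'ovs', as in the log; other types map to other classes.
        if vif["type"] != "ovs":
            raise ValueError("unsupported vif_type: %s" % vif["type"])
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
            active=vif["active"],
        )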
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.178 244018 DEBUG nova.objects.instance [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.196 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <uuid>89f166ca-c0fc-48ec-baf9-eda20ff22f2b</uuid>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <name>instance-00000063</name>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-1192653093</nova:name>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:40:11</nova:creationTime>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <nova:port uuid="34e36f4e-0cdc-4b6b-bbbc-daa0d461be44">
Feb 25 12:40:13 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <system>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="serial">89f166ca-c0fc-48ec-baf9-eda20ff22f2b</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="uuid">89f166ca-c0fc-48ec-baf9-eda20ff22f2b</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </system>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <os>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </os>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <features>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </features>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk">
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config">
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:13 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:bd:ca:34"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <target dev="tap34e36f4e-0c"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/console.log" append="off"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <video>
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </video>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:40:13 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:40:13 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:40:13 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:40:13 compute-0 nova_compute[244014]: </domain>
Feb 25 12:40:13 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
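[editor's note] With the guest XML assembled, the libvirt driver hands it to libvirtd, which is what later produces the "Started Virtual Machine qemu-126-instance-00000063" unit below. A minimal sketch using the libvirt-python bindings, assuming a local qemu:///system connection (Nova itself goes through its own Host/Guest wrappers rather than calling libvirt this directly):

    import libvirt  # libvirt-python bindings

    def launch(xml):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)   # persist the domain definition
            dom.create()                # boot it (equivalent of 'virsh start')
            return dom.UUIDString()
        finally:
            conn.close()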
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Preparing to wait for external event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.199 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
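[editor's note] The prepare_for_instance_event lines register interest in network-vif-plugged *before* the VIF is plugged, so the compute manager can later block until Neutron confirms without racing the notification. A minimal sketch of that arm-before-trigger pattern with threading.Event (hypothetical; Nova uses eventlet primitives guarded by the per-instance "-events" lock seen above):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, name):
            # Arm the event before the action that triggers it, to avoid races.
            with self._lock:
                return self._events.setdefault((instance_uuid, name), threading.Event())

        def deliver(self, instance_uuid, name):
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev:
                ev.set()

    # usage: ev = events.prepare(uuid, "network-vif-plugged-<port-id>")
    #        plug_vif(); ev.wait(timeout=300)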
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.200 244018 DEBUG nova.virt.libvirt.vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:05Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.201 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.202 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.202 244018 DEBUG os_vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.204 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.205 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.210 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34e36f4e-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.210 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34e36f4e-0c, col_values=(('external_ids', {'iface-id': '34e36f4e-0cdc-4b6b-bbbc-daa0d461be44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:ca:34', 'vm-uuid': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:13 compute-0 NetworkManager[49836]: <info>  [1772023213.2138] manager: (tap34e36f4e-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Feb 25 12:40:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.220 244018 INFO os_vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c')
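[editor's note] The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are equivalent to a single ovs-vsctl invocation that adds the tap device to br-int and stamps it with the Neutron port ID so ovn-controller can claim it, as it does a moment later. A CLI-equivalent sketch:

    import subprocess

    def plug_ovs(bridge, dev, iface_id, mac, vm_uuid):
        # Mirrors the logged external_ids: iface-id, iface-status, attached-mac, vm-uuid.
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, dev, "--",
             "set", "Interface", dev,
             "external_ids:iface-id=%s" % iface_id,
             "external_ids:iface-status=active",
             "external_ids:attached-mac=%s" % mac,
             "external_ids:vm-uuid=%s" % vm_uuid],
            check=True,
        )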
Feb 25 12:40:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb 25 12:40:13 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:bd:ca:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.282 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Using config drive
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.300 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.663 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.664 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.664 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.665 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.665 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.667 244018 INFO nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Terminating instance
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.668 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.669 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquired lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:13 compute-0 nova_compute[244014]: 2026-02-25 12:40:13.669 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:40:13 compute-0 ceph-mon[76335]: pgmap v1767: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 5.5 MiB/s wr, 219 op/s
Feb 25 12:40:13 compute-0 ceph-mon[76335]: osdmap e257: 3 total, 3 up, 3 in
Feb 25 12:40:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/611789799' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:13 compute-0 ceph-mon[76335]: osdmap e258: 3 total, 3 up, 3 in
Feb 25 12:40:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 5.9 MiB/s wr, 270 op/s
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.548 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.586 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updated VIF entry in instance network info cache for port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.587 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.607 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating config drive at /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.614 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5fp12z1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.653 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.760 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5fp12z1s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.798 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.803 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.952 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:14 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.953 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deleting local config drive /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config because it was imported into RBD.
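[editor's note] The config-drive sequence is: build an ISO9660 image with mkisofs (Joliet plus Rock Ridge, volume label config-2), import it into the vms pool as <uuid>_disk.config, then delete the local copy, exactly as the three lines above record. A sketch reproducing those commands (the staging directory stands in for the logged /tmp/tmp5fp12z1s; the -publisher and -quiet flags are omitted for brevity):

    import os
    import subprocess

    def build_and_import_config_drive(uuid, staging_dir, pool="vms",
                                      client="openstack", conf="/etc/ceph/ceph.conf"):
        iso = "/var/lib/nova/instances/%s/disk.config" % uuid
        subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                        "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                        staging_dir], check=True)
        subprocess.run(["rbd", "import", "--pool", pool, iso,
                        "%s_disk.config" % uuid, "--image-format=2",
                        "--id", client, "--conf", conf], check=True)
        os.unlink(iso)  # local copy is redundant once imported into RBD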
Feb 25 12:40:14 compute-0 kernel: tap34e36f4e-0c: entered promiscuous mode
Feb 25 12:40:14 compute-0 NetworkManager[49836]: <info>  [1772023214.9975] manager: (tap34e36f4e-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Feb 25 12:40:15 compute-0 ovn_controller[147040]: 2026-02-25T12:40:14Z|00970|binding|INFO|Claiming lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for this chassis.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:14.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 ovn_controller[147040]: 2026-02-25T12:40:14Z|00971|binding|INFO|34e36f4e-0cdc-4b6b-bbbc-daa0d461be44: Claiming fa:16:3e:bd:ca:34 10.100.0.12
Feb 25 12:40:15 compute-0 ovn_controller[147040]: 2026-02-25T12:40:15Z|00972|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 ovn-installed in OVS
Feb 25 12:40:15 compute-0 ovn_controller[147040]: 2026-02-25T12:40:15Z|00973|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 up in Southbound
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.013 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ca:34 10.100.0.12'], port_security=['fa:16:3e:bd:ca:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.018 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.020 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
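[editor's note] "Provisioning metadata" here means the agent ensures a per-network namespace (ovnmeta-<network-id>) whose interface carries both a subnet address and 169.254.169.254, which the RTM_NEWADDR privsep replies below confirm (10.100.0.2/28 and 169.254.169.254/32 on tapec8bae53-f1). A loose sketch of the equivalent ip(8) plumbing; in reality the agent wires an OVS port into the namespace and runs a metadata proxy, so treat this as illustrative only:

    import subprocess

    def provision_metadata_ns(network_id, dev, cidr):
        ns = "ovnmeta-%s" % network_id
        subprocess.run(["ip", "netns", "add", ns], check=False)  # may already exist
        subprocess.run(["ip", "link", "set", dev, "netns", ns], check=True)
        for addr in (cidr, "169.254.169.254/32"):
            subprocess.run(["ip", "netns", "exec", ns,
                            "ip", "addr", "add", addr, "dev", dev], check=False)
        subprocess.run(["ip", "netns", "exec", ns,
                        "ip", "link", "set", dev, "up"], check=True)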
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.027 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 systemd-machined[210048]: New machine qemu-126-instance-00000063.
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[843c3e9d-1db5-4d80-9aba-abdbf69f7da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:15 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000063.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.057 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Releasing lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.057 244018 DEBUG nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:40:15 compute-0 systemd-udevd[332321]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.073 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[10ca442b-0c33-41ef-9fcf-2a2f8bccd3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.077 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4f3266-0976-4f6f-b402-81c73336e1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:15 compute-0 NetworkManager[49836]: <info>  [1772023215.0835] device (tap34e36f4e-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:40:15 compute-0 NetworkManager[49836]: <info>  [1772023215.0843] device (tap34e36f4e-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.100 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8210f8c4-48ba-4aef-8b6a-be6656686a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:15 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000062.scope: Deactivated successfully.
Feb 25 12:40:15 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000062.scope: Consumed 1.370s CPU time.
Feb 25 12:40:15 compute-0 systemd-machined[210048]: Machine qemu-125-instance-00000062 terminated.
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76b471ea-41f3-4289-8a87-a8b3fc85ba63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332331, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[352eec46-a372-446b-b868-76c83b0977ad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332332, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332332, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
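The two privsep replies above are pyroute2-style netlink dumps taken inside the ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace: one RTM_NEWLINK record for the veth tapec8bae53-f1, then two RTM_NEWADDR records for 10.100.0.2/28 and the metadata VIP 169.254.169.254/32. A minimal sketch of the same dump, assuming pyroute2 is installed and the namespace named in the log still exists (an illustration, not neutron's actual privsep code):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561')
    try:
        # RTM_NEWLINK dump: one message per interface, carrying the IFLA_* attrs
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        # RTM_NEWADDR dump: the 10.100.0.2/28 and 169.254.169.254/32 replies above
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    finally:
        ns.close()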
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.133 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
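The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the metadata tap onto br-int and tag it with its iface-id; because if_exists/may_exist are set and the port is already in the desired state, both transactions report "Transaction caused no change". A rough equivalent using ovsdbapp directly (the unix socket path is an assumption; port, bridge, and iface-id values are taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapec8bae53-f0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapec8bae53-f0', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tapec8bae53-f0',
                           ('external_ids',
                            {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'})))

All three commands are idempotent, which is why the agent can safely re-run the same batch every time it (re)provisions metadata for the network.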
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.279 244018 INFO nova.virt.libvirt.driver [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance destroyed successfully.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.279 244018 DEBUG nova.objects.instance [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'resources' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.353 244018 DEBUG nova.compute.manager [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.353 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG nova.compute.manager [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Processing event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
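This event delivery is one half of the plug handshake: spawn registers a waiter for network-vif-plugged before plugging the VIF, and the external event from neutron (received above) pops it, which is why the wait completes in 0 seconds a few lines below. A sketch of the waiter side using nova's wait_for_instance_event context manager, with values from this log (plug_vifs is a hypothetical stand-in for the driver's VIF plumbing):

    # event tuples are (name, tag); error handling elided
    with self.virtapi.wait_for_instance_event(
            instance,
            [('network-vif-plugged', '34e36f4e-0cdc-4b6b-bbbc-daa0d461be44')],
            deadline=300):
        plug_vifs(instance)  # the wait completes when neutron's event arrives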
Feb 25 12:40:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:40:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 32K writes, 125K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.84 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8270 writes, 30K keys, 8270 commit groups, 1.0 writes per commit group, ingest: 32.22 MB, 0.05 MB/s
                                           Interval WAL: 8270 writes, 3258 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
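The rocksdb ratios above check out; a quick sanity pass in plain Python (nothing rocksdb-specific):

    uptime, interval = 3000.1, 600.0
    # cumulative: "32K" WAL writes / "11K" syncs are rounded, hence the drift
    print(round(32_000 / 11_000, 2))       # ~2.91 vs the logged 2.84
    print(round(0.12 * 1024 / uptime, 2))  # ~0.04 MB/s, as logged
    # interval: exact counts are printed, so these match precisely
    print(round(8270 / 3258, 2))           # 2.54 writes per sync
    print(round(32.22 / interval, 2))      # ~0.05 MB/s ingest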
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.897 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.898 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.8968914, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.899 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Started (Lifecycle Event)
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.904 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.909 244018 INFO nova.virt.libvirt.driver [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance spawned successfully.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.910 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.934 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.938 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.939 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.939 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.940 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.941 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.942 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
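The six "Found default for ..." lines record which bus/model each undefined image property resolved to, so the instance keeps identical virtual hardware across reboots and rebuilds; the same values reappear later in this log as image_hw_* keys in the instance's system_metadata (see the VIF unplug dump at 12:40:18.037). A condensed sketch of that registration (the values are the ones logged; the loop itself is illustrative):

    found_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
    for prop, value in found_defaults.items():
        # persist as image_hw_*, only where the image left the property unset
        instance.system_metadata.setdefault('image_' + prop, value)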
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.954 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.954 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.8995128, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.955 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Paused (Lifecycle Event)
Feb 25 12:40:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.981 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:15 compute-0 ceph-mon[76335]: pgmap v1770: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 5.9 MiB/s wr, 270 op/s
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.985 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.9032626, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.986 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Resumed (Lifecycle Event)
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.993 244018 INFO nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 10.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:40:15 compute-0 nova_compute[244014]: 2026-02-25 12:40:15.993 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb 25 12:40:15 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.005 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.011 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.028 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] During sync_power_state the instance has a pending task (spawning). Skip.
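Both the Started and Resumed lifecycle events hit the same guard: the DB still says power_state 0 (NOSTATE) while libvirt reports 1 (RUNNING), but because task_state is still 'spawning' the sync is skipped rather than fighting the in-flight build. A simplified sketch of that decision (nova's real method handles many more state combinations):

    # power states as logged: DB power_state 0 (NOSTATE), VM power_state 1 (RUNNING)
    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:
            # the "has a pending task (spawning). Skip." INFO lines above
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()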
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.056 244018 INFO nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 12.05 seconds to build instance.
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.069 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.160 244018 INFO nova.virt.libvirt.driver [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deleting instance files /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde_del
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.161 244018 INFO nova.virt.libvirt.driver [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deletion of /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde_del complete
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.221 244018 INFO nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 1.16 seconds to destroy the instance on the hypervisor.
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.222 244018 DEBUG oslo.service.loopingcall [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.223 244018 DEBUG nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.223 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:40:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.6 MiB/s wr, 179 op/s
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.421 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.442 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.456 244018 INFO nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 0.23 seconds to deallocate network for instance.
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.506 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.507 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:16 compute-0 nova_compute[244014]: 2026-02-25 12:40:16.621 244018 DEBUG oslo_concurrency.processutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:17 compute-0 ceph-mon[76335]: osdmap e259: 3 total, 3 up, 3 in
Feb 25 12:40:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097539994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.193 244018 DEBUG oslo_concurrency.processutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
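The resource tracker shells out to ceph for pool capacity; the mon audit line above shows the same command arriving as client.openstack. Reproducing the call with the same oslo helper (the JSON keys are standard ceph df output; exactly which fields nova extracts is an assumption, but they feed the DISK_GB inventory just below):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # cluster-wide totals; per-pool stats live under stats['pools']
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])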
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.198 244018 DEBUG nova.compute.provider_tree [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.217 244018 DEBUG nova.scheduler.client.report [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
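In placement terms, schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the inventory above works out as follows (plain arithmetic on the logged values):

    vcpus = (8 - 0) * 4.0        # 32 schedulable vCPUs
    ram_mb = (7679 - 512) * 1.0  # 7167 MB
    disk_gb = (59 - 1) * 0.9     # 52.2 GB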
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.238 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.264 244018 INFO nova.scheduler.client.report [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Deleted allocations for instance 01328724-b95f-4b36-809a-ddc156dd0dde
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.339 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.507 244018 DEBUG nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.509 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.510 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.511 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.512 244018 DEBUG nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.512 244018 WARNING nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state active and task_state None.
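The WARNING is benign: the waiter for this vif was already consumed at 12:40:15.353 during spawn, so this second network-vif-plugged notification finds nothing to wake. A minimal sketch of the pop logic behind the "No waiting events found" line, with a hypothetical _waiters dict standing in for nova's event bookkeeping:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance, event_key):
        with lockutils.lock(instance.uuid + '-events'):
            waiter = _waiters.get(instance.uuid, {}).pop(event_key, None)
        if waiter is None:
            # logged as "Received unexpected event ..." above
            return None
        return waiter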
Feb 25 12:40:17 compute-0 podman[332420]: 2026-02-25 12:40:17.740436476 +0000 UTC m=+0.083049795 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true)
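podman's healthcheck timer produced the health_status=healthy record above by running the configured /openstack/healthcheck inside the container. The same check can be run on demand; a small sketch (container name from the log; note the inspect field is .State.Healthcheck on older podman releases):

    import json
    import subprocess

    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)  # exit 0 == healthy
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout)['Status'])  # 'healthy', matching the log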
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.781 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.781 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.783 244018 INFO nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Terminating instance
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.784 244018 DEBUG nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:40:17 compute-0 kernel: tap34e36f4e-0c (unregistering): left promiscuous mode
Feb 25 12:40:17 compute-0 NetworkManager[49836]: <info>  [1772023217.8334] device (tap34e36f4e-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:40:17 compute-0 ovn_controller[147040]: 2026-02-25T12:40:17Z|00974|binding|INFO|Releasing lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 from this chassis (sb_readonly=0)
Feb 25 12:40:17 compute-0 ovn_controller[147040]: 2026-02-25T12:40:17Z|00975|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 down in Southbound
Feb 25 12:40:17 compute-0 ovn_controller[147040]: 2026-02-25T12:40:17Z|00976|binding|INFO|Removing iface tap34e36f4e-0c ovn-installed in OVS
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:17 compute-0 nova_compute[244014]: 2026-02-25 12:40:17.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:17 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 25 12:40:17 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Consumed 2.792s CPU time.
Feb 25 12:40:17 compute-0 systemd-machined[210048]: Machine qemu-126-instance-00000063 terminated.
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.015 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ca:34 10.100.0.12'], port_security=['fa:16:3e:bd:ca:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.017 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.019 244018 INFO nova.virt.libvirt.driver [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance destroyed successfully.
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.019 244018 DEBUG nova.objects.instance [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.019 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
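The matched PortBindingUpdatedEvent is an ovsdbapp row event: the SB Port_Binding row for the dying vif went from up=[True] with a chassis to up=[False] with none, so the agent unbinds it and then re-evaluates metadata provisioning for the datapath. A skeleton of such an event class against ovsdbapp's RowEvent API (the real class lives in neutron; method bodies here are illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the "Matched UPDATE" line above
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # only react when the binding's chassis/up state changed
            return hasattr(old, 'chassis') or hasattr(old, 'up')

        def run(self, event, row, old):
            self.agent.update_datapath(row.datapath)  # hypothetical method name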
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.037 244018 DEBUG nova.virt.libvirt.vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:16Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.038 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14233316-04cc-4dae-9e3b-dd0ad69caeb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.038 244018 DEBUG nova.network.os_vif_util [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.040 244018 DEBUG nova.network.os_vif_util [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.040 244018 DEBUG os_vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
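The conversion-then-unplug pair above is the public os-vif flow: nova translates its VIF dict into an os-vif object, then hands it to the matching plugin (ovs here). As library calls, with vif_dict and instance standing for the objects dumped in the log:

    import os_vif
    from nova.network import os_vif_util

    os_vif.initialize()  # loads the ovs/linux_bridge/... plugins once
    vif_obj = os_vif_util.nova_to_osvif_vif(vif_dict)    # -> VIFOpenVSwitch(...)
    inst_info = os_vif_util.nova_to_osvif_instance(instance)
    os_vif.unplug(vif_obj, inst_info)  # issues the DelPortCommand seen below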
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.044 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34e36f4e-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.051 244018 INFO os_vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c')
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.063 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbd9ba1-ba97-40f4-ac2d-1866ea068b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.067 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a8dd30-dce0-4e40-a874-d9fc7f772711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.089 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc7e81d-fe7b-4b7f-82a5-daf104271818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.099 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4adf9052-31c2-401a-8e13-69925951c07a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332476, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[227dd281-3e7b-4ba8-be7a-f8410d07a129]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332480, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332480, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.112 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:18 compute-0 ceph-mon[76335]: pgmap v1772: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.6 MiB/s wr, 179 op/s
Feb 25 12:40:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3097539994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 521 KiB/s wr, 270 op/s
Feb 25 12:40:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.486 244018 INFO nova.virt.libvirt.driver [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deleting instance files /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_del
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.487 244018 INFO nova.virt.libvirt.driver [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deletion of /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_del complete
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.575 244018 INFO nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 0.79 seconds to destroy the instance on the hypervisor.
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG oslo.service.loopingcall [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG nova.network.neutron [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:40:18 compute-0 nova_compute[244014]: 2026-02-25 12:40:18.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.109 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.110 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.110 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.111 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:40:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 33K writes, 128K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.83 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7316 writes, 27K keys, 7316 commit groups, 1.0 writes per commit group, ingest: 26.03 MB, 0.04 MB/s
                                           Interval WAL: 7316 writes, 2886 syncs, 2.53 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.454 244018 DEBUG nova.network.neutron [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.474 244018 INFO nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 0.90 seconds to deallocate network for instance.
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.549 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.551 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.588 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.596 244018 DEBUG nova.compute.manager [req-391a9a76-2761-4bec-9643-2f3a15c9c99a req-df53c50b-7bcb-49e6-b73d-5f9125675992 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-deleted-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.607 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.608 244018 DEBUG nova.compute.provider_tree [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.634 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.642 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.643 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.643 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 WARNING nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state deleted and task_state None.
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.645 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.645 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.646 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.646 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.647 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.647 244018 WARNING nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state deleted and task_state None.
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.662 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:40:19 compute-0 nova_compute[244014]: 2026-02-25 12:40:19.714 244018 DEBUG oslo_concurrency.processutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:19 compute-0 podman[332482]: 2026-02-25 12:40:19.76859035 +0000 UTC m=+0.101310300 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:20 compute-0 ceph-mon[76335]: pgmap v1773: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 521 KiB/s wr, 270 op/s
Feb 25 12:40:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 167 op/s
Feb 25 12:40:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778755769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.283 244018 DEBUG oslo_concurrency.processutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.291 244018 DEBUG nova.compute.provider_tree [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.314 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.348 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.378 244018 INFO nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 89f166ca-c0fc-48ec-baf9-eda20ff22f2b
Feb 25 12:40:20 compute-0 nova_compute[244014]: 2026-02-25 12:40:20.469 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2778755769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.635 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.658 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.659 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.920 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:40:21 compute-0 nova_compute[244014]: 2026-02-25 12:40:21.921 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:22 compute-0 ceph-mon[76335]: pgmap v1774: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 167 op/s
Feb 25 12:40:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 252 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 37 KiB/s wr, 189 op/s
Feb 25 12:40:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823477466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.460 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.535 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.536 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.744 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.746 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.92121119052172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.747 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.747 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.875 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.876 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.876 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:40:22 compute-0 nova_compute[244014]: 2026-02-25 12:40:22.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:23 compute-0 sudo[332573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:40:23 compute-0 sudo[332573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:23 compute-0 sudo[332573]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/823477466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:23 compute-0 sudo[332598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:40:23 compute-0 sudo[332598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563994600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.575 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.591 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.619 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:40:23 compute-0 nova_compute[244014]: 2026-02-25 12:40:23.620 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:23 compute-0 sudo[332598]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:40:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:40:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:40:23 compute-0 sudo[332656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:40:23 compute-0 sudo[332656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:23 compute-0 sudo[332656]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:23 compute-0 sudo[332681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:40:23 compute-0 sudo[332681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:24 compute-0 ceph-mon[76335]: pgmap v1775: 305 pgs: 305 active+clean; 252 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 37 KiB/s wr, 189 op/s
Feb 25 12:40:24 compute-0 ceph-mon[76335]: osdmap e260: 3 total, 3 up, 3 in
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/563994600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:40:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 209 op/s
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.300003115 +0000 UTC m=+0.055099446 container create 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:40:24 compute-0 systemd[1]: Started libpod-conmon-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope.
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.27784481 +0000 UTC m=+0.032941141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.393960026 +0000 UTC m=+0.149056397 container init 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.402647702 +0000 UTC m=+0.157744033 container start 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 12:40:24 compute-0 exciting_dijkstra[332737]: 167 167
Feb 25 12:40:24 compute-0 systemd[1]: libpod-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope: Deactivated successfully.
Feb 25 12:40:24 compute-0 conmon[332737]: conmon 919b6299b7d828c281e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope/container/memory.events
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.408802915 +0000 UTC m=+0.163899256 container attach 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.409625939 +0000 UTC m=+0.164722270 container died 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f116f8591ecbeabef37b26bdc46c6052fdad5d613a2fe4d2bbe4c58cc6b8b92-merged.mount: Deactivated successfully.
Feb 25 12:40:24 compute-0 podman[332721]: 2026-02-25 12:40:24.46780001 +0000 UTC m=+0.222896311 container remove 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:40:24 compute-0 systemd[1]: libpod-conmon-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope: Deactivated successfully.
Feb 25 12:40:24 compute-0 podman[332763]: 2026-02-25 12:40:24.655771555 +0000 UTC m=+0.060577471 container create 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 12:40:24 compute-0 systemd[1]: Started libpod-conmon-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope.
Feb 25 12:40:24 compute-0 podman[332763]: 2026-02-25 12:40:24.629787152 +0000 UTC m=+0.034593108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:24 compute-0 podman[332763]: 2026-02-25 12:40:24.767067786 +0000 UTC m=+0.171873752 container init 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:40:24 compute-0 podman[332763]: 2026-02-25 12:40:24.777583022 +0000 UTC m=+0.182388938 container start 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:40:24 compute-0 podman[332763]: 2026-02-25 12:40:24.782196532 +0000 UTC m=+0.187002509 container attach 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:40:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:40:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.6 total, 600.0 interval
                                           Cumulative writes: 25K writes, 105K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8766 syncs, 2.94 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5316 writes, 21K keys, 5316 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s
                                           Interval WAL: 5316 writes, 2101 syncs, 2.53 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:40:25 compute-0 nova_compute[244014]: 2026-02-25 12:40:25.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:25 compute-0 sad_wing[332780]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:40:25 compute-0 sad_wing[332780]: --> All data devices are unavailable
Feb 25 12:40:25 compute-0 systemd[1]: libpod-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope: Deactivated successfully.
Feb 25 12:40:25 compute-0 podman[332763]: 2026-02-25 12:40:25.291111084 +0000 UTC m=+0.695917000 container died 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997-merged.mount: Deactivated successfully.
Feb 25 12:40:25 compute-0 podman[332763]: 2026-02-25 12:40:25.333645044 +0000 UTC m=+0.738450930 container remove 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:40:25 compute-0 systemd[1]: libpod-conmon-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope: Deactivated successfully.
Feb 25 12:40:25 compute-0 sudo[332681]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:25 compute-0 sudo[332811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:40:25 compute-0 sudo[332811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:25 compute-0 sudo[332811]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:25 compute-0 sudo[332836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:40:25 compute-0 sudo[332836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:25 compute-0 nova_compute[244014]: 2026-02-25 12:40:25.621 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:25 compute-0 nova_compute[244014]: 2026-02-25 12:40:25.621 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.8409336 +0000 UTC m=+0.056874426 container create 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:40:25 compute-0 nova_compute[244014]: 2026-02-25 12:40:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:25 compute-0 systemd[1]: Started libpod-conmon-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope.
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.811257262 +0000 UTC m=+0.027198108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.929629052 +0000 UTC m=+0.145569948 container init 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.937686399 +0000 UTC m=+0.153627225 container start 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.941328352 +0000 UTC m=+0.157269238 container attach 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:40:25 compute-0 reverent_ride[332889]: 167 167
Feb 25 12:40:25 compute-0 systemd[1]: libpod-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope: Deactivated successfully.
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.944868282 +0000 UTC m=+0.160809108 container died 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-76480631b04f531ace4b1e2871b2d8f7ff116373b49589d0490cab25ecfee652-merged.mount: Deactivated successfully.
Feb 25 12:40:25 compute-0 podman[332873]: 2026-02-25 12:40:25.993291799 +0000 UTC m=+0.209232645 container remove 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:40:26 compute-0 systemd[1]: libpod-conmon-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope: Deactivated successfully.
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.163309797 +0000 UTC m=+0.047474481 container create a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:40:26 compute-0 systemd[1]: Started libpod-conmon-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope.
Feb 25 12:40:26 compute-0 ceph-mon[76335]: pgmap v1777: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 209 op/s
Feb 25 12:40:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 172 op/s
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.142651134 +0000 UTC m=+0.026815818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.259516282 +0000 UTC m=+0.143681016 container init a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.272507888 +0000 UTC m=+0.156672572 container start a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.276907052 +0000 UTC m=+0.161071786 container attach a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.528 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.529 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.549 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:40:26 compute-0 confident_mendel[332928]: {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     "0": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "devices": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "/dev/loop3"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             ],
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_name": "ceph_lv0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_size": "21470642176",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "name": "ceph_lv0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "tags": {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_name": "ceph",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.crush_device_class": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.encrypted": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.objectstore": "bluestore",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_id": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.vdo": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.with_tpm": "0"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             },
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "vg_name": "ceph_vg0"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         }
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     ],
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     "1": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "devices": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "/dev/loop4"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             ],
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_name": "ceph_lv1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_size": "21470642176",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "name": "ceph_lv1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "tags": {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_name": "ceph",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.crush_device_class": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.encrypted": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.objectstore": "bluestore",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_id": "1",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.vdo": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.with_tpm": "0"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             },
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "vg_name": "ceph_vg1"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         }
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     ],
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     "2": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "devices": [
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "/dev/loop5"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             ],
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_name": "ceph_lv2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_size": "21470642176",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "name": "ceph_lv2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "tags": {
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.cluster_name": "ceph",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.crush_device_class": "",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.encrypted": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.objectstore": "bluestore",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osd_id": "2",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.vdo": "0",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:                 "ceph.with_tpm": "0"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             },
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "type": "block",
Feb 25 12:40:26 compute-0 confident_mendel[332928]:             "vg_name": "ceph_vg2"
Feb 25 12:40:26 compute-0 confident_mendel[332928]:         }
Feb 25 12:40:26 compute-0 confident_mendel[332928]:     ]
Feb 25 12:40:26 compute-0 confident_mendel[332928]: }
Feb 25 12:40:26 compute-0 systemd[1]: libpod-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope: Deactivated successfully.
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.595899704 +0000 UTC m=+0.480064408 container died a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3-merged.mount: Deactivated successfully.
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.645 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.646 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:26 compute-0 podman[332912]: 2026-02-25 12:40:26.654115936 +0000 UTC m=+0.538280620 container remove a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.655 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.655 244018 INFO nova.compute.claims [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:40:26 compute-0 systemd[1]: libpod-conmon-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope: Deactivated successfully.
Feb 25 12:40:26 compute-0 sudo[332836]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:26 compute-0 sudo[332950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:40:26 compute-0 sudo[332950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:26 compute-0 sudo[332950]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:26 compute-0 nova_compute[244014]: 2026-02-25 12:40:26.842 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:26 compute-0 sudo[332975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:40:26 compute-0 sudo[332975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.198589821 +0000 UTC m=+0.046456172 container create 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:40:27 compute-0 systemd[1]: Started libpod-conmon-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope.
Feb 25 12:40:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.179994276 +0000 UTC m=+0.027860667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.276553901 +0000 UTC m=+0.124420282 container init 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.280957905 +0000 UTC m=+0.128824266 container start 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.284070713 +0000 UTC m=+0.131937054 container attach 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:40:27 compute-0 thirsty_greider[333051]: 167 167
Feb 25 12:40:27 compute-0 systemd[1]: libpod-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope: Deactivated successfully.
Feb 25 12:40:27 compute-0 conmon[333051]: conmon 7d8a916c08eb29eed3c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope/container/memory.events
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.286383809 +0000 UTC m=+0.134250150 container died 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0029316227fdf6c41299d82668794f2f1d81adccabbcd36b289d8feba6cf837a-merged.mount: Deactivated successfully.
Feb 25 12:40:27 compute-0 podman[333034]: 2026-02-25 12:40:27.326918472 +0000 UTC m=+0.174784823 container remove 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:40:27 compute-0 systemd[1]: libpod-conmon-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope: Deactivated successfully.
Feb 25 12:40:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977409649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.434 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.441 244018 DEBUG nova.compute.provider_tree [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.471 244018 DEBUG nova.scheduler.client.report [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:27 compute-0 podman[333077]: 2026-02-25 12:40:27.490822528 +0000 UTC m=+0.045927277 container create 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.499 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.500 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:40:27 compute-0 systemd[1]: Started libpod-conmon-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope.
Feb 25 12:40:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:40:27 compute-0 podman[333077]: 2026-02-25 12:40:27.46785364 +0000 UTC m=+0.022958359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.578 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.578 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:40:27 compute-0 podman[333077]: 2026-02-25 12:40:27.578979746 +0000 UTC m=+0.134084495 container init 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 12:40:27 compute-0 podman[333077]: 2026-02-25 12:40:27.587525397 +0000 UTC m=+0.142630106 container start 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:40:27 compute-0 podman[333077]: 2026-02-25 12:40:27.590738037 +0000 UTC m=+0.145842796 container attach 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.614 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.645 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.730 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.731 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.732 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating image(s)
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.754 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.776 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.799 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.803 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.882 244018 DEBUG nova.policy [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.886 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.886 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.887 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.888 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.909 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:27 compute-0 nova_compute[244014]: 2026-02-25 12:40:27.918 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4d29081c-59fe-483f-b905-8c7349e84fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.170 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4d29081c-59fe-483f-b905-8c7349e84fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:28 compute-0 ceph-mon[76335]: pgmap v1778: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 172 op/s
Feb 25 12:40:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1977409649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 12:40:28 compute-0 lvm[333298]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:40:28 compute-0 lvm[333298]: VG ceph_vg0 finished
Feb 25 12:40:28 compute-0 lvm[333302]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:40:28 compute-0 lvm[333302]: VG ceph_vg2 finished
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.254 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:40:28 compute-0 lvm[333301]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:40:28 compute-0 lvm[333301]: VG ceph_vg1 finished
Feb 25 12:40:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:28 compute-0 magical_jennings[333093]: {}
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.335 244018 DEBUG nova.objects.instance [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:28 compute-0 systemd[1]: libpod-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Deactivated successfully.
Feb 25 12:40:28 compute-0 systemd[1]: libpod-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Consumed 1.025s CPU time.
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.353 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.353 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Ensure instance console log exists: /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:28 compute-0 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:28 compute-0 podman[333343]: 2026-02-25 12:40:28.383703455 +0000 UTC m=+0.018452542 container died 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f-merged.mount: Deactivated successfully.
Feb 25 12:40:28 compute-0 podman[333343]: 2026-02-25 12:40:28.410253074 +0000 UTC m=+0.045002131 container remove 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:40:28 compute-0 systemd[1]: libpod-conmon-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Deactivated successfully.
Feb 25 12:40:28 compute-0 sudo[332975]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:40:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:40:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:28 compute-0 sudo[333358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:40:28 compute-0 sudo[333358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:40:28 compute-0 sudo[333358]: pam_unix(sudo:session): session closed for user root
Feb 25 12:40:29 compute-0 nova_compute[244014]: 2026-02-25 12:40:29.473 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Successfully created port: 29c2436f-5a46-4a8e-9f40-9b993368661c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:40:29 compute-0 ceph-mon[76335]: pgmap v1779: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 12:40:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.276 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023215.274664, 01328724-b95f-4b36-809a-ddc156dd0dde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.277 244018 INFO nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Stopped (Lifecycle Event)
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.299 244018 DEBUG nova.compute.manager [None req-5866f534-21b9-435b-9deb-4b269b7cdf01 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:40:30 compute-0 nova_compute[244014]: 2026-02-25 12:40:30.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:40:30
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups']
Feb 25 12:40:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.421 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Successfully updated port: 29c2436f-5a46-4a8e-9f40-9b993368661c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.446 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.446 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.447 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:40:31 compute-0 ceph-mon[76335]: pgmap v1780: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.627 244018 DEBUG nova.compute.manager [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-changed-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.627 244018 DEBUG nova.compute.manager [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Refreshing instance network info cache due to event network-changed-29c2436f-5a46-4a8e-9f40-9b993368661c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.628 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:31 compute-0 nova_compute[244014]: 2026-02-25 12:40:31.680 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:40:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:40:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 258 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 955 KiB/s wr, 33 op/s
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.477 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.497 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.497 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance network_info: |[{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.498 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.498 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Refreshing network info cache for port 29c2436f-5a46-4a8e-9f40-9b993368661c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.500 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start _get_guest_xml network_info=[{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.506 244018 WARNING nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.512 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.512 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.518 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.522 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.522 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:40:32 compute-0 nova_compute[244014]: 2026-02-25 12:40:32.525 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.018 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023218.0168273, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.019 244018 INFO nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Stopped (Lifecycle Event)
Feb 25 12:40:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005480750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.040 244018 DEBUG nova.compute.manager [None req-f16340c9-fc6c-4be3-bf30-24b746ed22f7 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.055 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.076 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.079 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:33 compute-0 ceph-mon[76335]: pgmap v1781: 305 pgs: 305 active+clean; 258 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 955 KiB/s wr, 33 op/s
Feb 25 12:40:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3005480750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523040187' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.588 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.590 244018 DEBUG nova.virt.libvirt.vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:27Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.590 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.591 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.592 244018 DEBUG nova.objects.instance [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.615 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <uuid>4d29081c-59fe-483f-b905-8c7349e84fc5</uuid>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <name>instance-00000064</name>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-1761451429</nova:name>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:40:32</nova:creationTime>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <nova:port uuid="29c2436f-5a46-4a8e-9f40-9b993368661c">
Feb 25 12:40:33 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <system>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="serial">4d29081c-59fe-483f-b905-8c7349e84fc5</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="uuid">4d29081c-59fe-483f-b905-8c7349e84fc5</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </system>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <os>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </os>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <features>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </features>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4d29081c-59fe-483f-b905-8c7349e84fc5_disk">
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config">
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:33 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:48:85:8e"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <target dev="tap29c2436f-5a"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/console.log" append="off"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <video>
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </video>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:40:33 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:40:33 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:40:33 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:40:33 compute-0 nova_compute[244014]: </domain>
Feb 25 12:40:33 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.617 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Preparing to wait for external event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.617 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.618 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.618 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.619 244018 DEBUG nova.virt.libvirt.vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:27Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.619 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.620 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.620 244018 DEBUG os_vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29c2436f-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29c2436f-5a, col_values=(('external_ids', {'iface-id': '29c2436f-5a46-4a8e-9f40-9b993368661c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:85:8e', 'vm-uuid': '4d29081c-59fe-483f-b905-8c7349e84fc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:33 compute-0 NetworkManager[49836]: <info>  [1772023233.6286] manager: (tap29c2436f-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.633 244018 INFO os_vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a')
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:48:85:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.680 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Using config drive
Feb 25 12:40:33 compute-0 nova_compute[244014]: 2026-02-25 12:40:33.708 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.179 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating config drive at /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.186 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6tczv0z2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.326 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6tczv0z2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.355 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.359 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.458 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.461 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.500 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.501 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deleting local config drive /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config because it was imported into RBD.
Feb 25 12:40:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3523040187' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:34 compute-0 kernel: tap29c2436f-5a: entered promiscuous mode
Feb 25 12:40:34 compute-0 NetworkManager[49836]: <info>  [1772023234.5660] manager: (tap29c2436f-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Feb 25 12:40:34 compute-0 ovn_controller[147040]: 2026-02-25T12:40:34Z|00977|binding|INFO|Claiming lport 29c2436f-5a46-4a8e-9f40-9b993368661c for this chassis.
Feb 25 12:40:34 compute-0 ovn_controller[147040]: 2026-02-25T12:40:34Z|00978|binding|INFO|29c2436f-5a46-4a8e-9f40-9b993368661c: Claiming fa:16:3e:48:85:8e 10.100.0.14
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.574 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:85:8e 10.100.0.14'], port_security=['fa:16:3e:48:85:8e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d29081c-59fe-483f-b905-8c7349e84fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=29c2436f-5a46-4a8e-9f40-9b993368661c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.575 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 29c2436f-5a46-4a8e-9f40-9b993368661c in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.576 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:40:34 compute-0 ovn_controller[147040]: 2026-02-25T12:40:34Z|00979|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c ovn-installed in OVS
Feb 25 12:40:34 compute-0 ovn_controller[147040]: 2026-02-25T12:40:34Z|00980|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c up in Southbound
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[702a5c38-6e9f-4a19-9d0b-7935769e9db4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 systemd-udevd[333517]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:40:34 compute-0 systemd-machined[210048]: New machine qemu-127-instance-00000064.
Feb 25 12:40:34 compute-0 NetworkManager[49836]: <info>  [1772023234.6141] device (tap29c2436f-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:40:34 compute-0 NetworkManager[49836]: <info>  [1772023234.6154] device (tap29c2436f-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:40:34 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000064.
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.624 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d847f90c-0a41-4efd-8b53-009b648be4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.628 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[316329a0-4194-407c-93e6-ebf79b49d12e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f54aa152-3fa2-4fff-868f-75efbf4c0fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[39040a80-07f4-42e4-82e8-22af959ab357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333528, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.700 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7b50ac-8c88-4118-ab48-85e387d7d136]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333531, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333531, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.708 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.760 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updated VIF entry in instance network info cache for port 29c2436f-5a46-4a8e-9f40-9b993368661c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.761 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.783 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.972 244018 DEBUG nova.compute.manager [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.973 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.973 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.974 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:34 compute-0 nova_compute[244014]: 2026-02-25 12:40:34.974 244018 DEBUG nova.compute.manager [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Processing event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.041 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0414193, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Started (Lifecycle Event)
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.047 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.051 244018 INFO nova.virt.libvirt.driver [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance spawned successfully.
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.051 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.078 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.082 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.082 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.083 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.083 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.084 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.084 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0420704, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Paused (Lifecycle Event)
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.155 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.158 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0460918, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.158 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Resumed (Lifecycle Event)
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.171 244018 INFO nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 7.44 seconds to spawn the instance on the hypervisor.
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.171 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.179 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.182 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.223 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.248 244018 INFO nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 8.63 seconds to build instance.
Feb 25 12:40:35 compute-0 nova_compute[244014]: 2026-02-25 12:40:35.319 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:35 compute-0 ceph-mon[76335]: pgmap v1782: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 25 12:40:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.126 244018 DEBUG nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.128 244018 DEBUG nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.128 244018 WARNING nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state active and task_state None.
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.356 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.356 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.359 244018 INFO nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Terminating instance
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.360 244018 DEBUG nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:40:37 compute-0 kernel: tap29c2436f-5a (unregistering): left promiscuous mode
Feb 25 12:40:37 compute-0 NetworkManager[49836]: <info>  [1772023237.4097] device (tap29c2436f-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:40:37 compute-0 ovn_controller[147040]: 2026-02-25T12:40:37Z|00981|binding|INFO|Releasing lport 29c2436f-5a46-4a8e-9f40-9b993368661c from this chassis (sb_readonly=0)
Feb 25 12:40:37 compute-0 ovn_controller[147040]: 2026-02-25T12:40:37Z|00982|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c down in Southbound
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 ovn_controller[147040]: 2026-02-25T12:40:37Z|00983|binding|INFO|Removing iface tap29c2436f-5a ovn-installed in OVS
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.424 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:85:8e 10.100.0.14'], port_security=['fa:16:3e:48:85:8e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d29081c-59fe-483f-b905-8c7349e84fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=29c2436f-5a46-4a8e-9f40-9b993368661c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.426 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 29c2436f-5a46-4a8e-9f40-9b993368661c in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.429 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78588ca-d42a-48da-b0bb-bc5b593521b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000064.scope: Deactivated successfully.
Feb 25 12:40:37 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000064.scope: Consumed 2.697s CPU time.
Feb 25 12:40:37 compute-0 systemd-machined[210048]: Machine qemu-127-instance-00000064 terminated.
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.474 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[12df5156-cdd7-4985-b381-a06525aa1bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.479 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7580926-eb01-48a7-8e50-06a89456498a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.505 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d69a00e0-529a-425a-9e7d-375f29707e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 ceph-mon[76335]: pgmap v1783: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8e3e4b-c6e4-46d3-8b82-9b52ed4b1db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333584, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f86aa7-b561-4284-8026-7d0c63de6080]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333585, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333585, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.546 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.605 244018 INFO nova.virt.libvirt.driver [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance destroyed successfully.
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.605 244018 DEBUG nova.objects.instance [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.624 244018 DEBUG nova.virt.libvirt.vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:35Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.625 244018 DEBUG nova.network.os_vif_util [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.626 244018 DEBUG nova.network.os_vif_util [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.626 244018 DEBUG os_vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.630 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29c2436f-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.636 244018 INFO os_vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a')
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.947 244018 INFO nova.virt.libvirt.driver [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deleting instance files /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5_del
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.948 244018 INFO nova.virt.libvirt.driver [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deletion of /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5_del complete
Feb 25 12:40:37 compute-0 nova_compute[244014]: 2026-02-25 12:40:37.999 244018 INFO nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 25 12:40:38 compute-0 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG oslo.service.loopingcall [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:40:38 compute-0 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:40:38 compute-0 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG nova.network.neutron [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:40:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:40:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:38 compute-0 nova_compute[244014]: 2026-02-25 12:40:38.968 244018 DEBUG nova.network.neutron [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.004 244018 INFO nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 1.00 seconds to deallocate network for instance.
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.052 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.053 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.127 244018 DEBUG oslo_concurrency.processutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.166 244018 DEBUG nova.compute.manager [req-e39ab66c-411d-430c-87b2-4c2c05db3a99 req-cd787754-15fc-4f7c-976b-9c192d7401fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-deleted-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.299 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.300 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.301 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.302 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.302 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.303 244018 WARNING nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state deleted and task_state None.
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.303 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.304 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.304 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.305 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.306 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.306 244018 WARNING nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state deleted and task_state None.
Feb 25 12:40:39 compute-0 ceph-mon[76335]: pgmap v1784: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:40:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100799852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.679 244018 DEBUG oslo_concurrency.processutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.686 244018 DEBUG nova.compute.provider_tree [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.701 244018 DEBUG nova.scheduler.client.report [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.723 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.760 244018 INFO nova.scheduler.client.report [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 4d29081c-59fe-483f-b905-8c7349e84fc5
Feb 25 12:40:39 compute-0 nova_compute[244014]: 2026-02-25 12:40:39.821 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:40 compute-0 nova_compute[244014]: 2026-02-25 12:40:40.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:40:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/100799852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:42 compute-0 ceph-mon[76335]: pgmap v1785: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 254 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009670309111608878 of space, bias 1.0, pg target 0.2901092733482663 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937969202050445 of space, bias 1.0, pg target 0.7481390760615133 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0215965201940558e-06 of space, bias 4.0, pg target 0.001225915824232867 quantized to 16 (current 16)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:40:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:40:42 compute-0 nova_compute[244014]: 2026-02-25 12:40:42.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Feb 25 12:40:44 compute-0 ceph-mon[76335]: pgmap v1786: 305 pgs: 305 active+clean; 254 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 12:40:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:44.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.099 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.100 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.122 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.206 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.207 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.219 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.219 244018 INFO nova.compute.claims [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.328 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:45 compute-0 ceph-mon[76335]: pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Feb 25 12:40:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896996746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.948 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.959 244018 DEBUG nova.compute.provider_tree [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.978 244018 DEBUG nova.scheduler.client.report [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.997 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:45 compute-0 nova_compute[244014]: 2026-02-25 12:40:45.998 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.051 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.052 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.074 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.090 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.176 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.177 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.177 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating image(s)
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.198 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.220 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.240 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.244 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.305 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.306 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.307 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.308 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.329 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.332 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4717553a-9bb9-4278-b610-7c446db3f8d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:46 compute-0 nova_compute[244014]: 2026-02-25 12:40:46.463 244018 DEBUG nova.policy [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:40:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2896996746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.187 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4717553a-9bb9-4278-b610-7c446db3f8d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.258 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:40:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:40:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:40:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:40:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.697 244018 DEBUG nova.objects.instance [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:47 compute-0 ceph-mon[76335]: pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:40:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:40:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.791 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.792 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Ensure instance console log exists: /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.792 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.793 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:47 compute-0 nova_compute[244014]: 2026-02-25 12:40:47.793 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 125 op/s
Feb 25 12:40:48 compute-0 nova_compute[244014]: 2026-02-25 12:40:48.281 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Successfully created port: 6150f46a-debb-49d6-b2e0-07b81f4cccea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:40:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:48 compute-0 podman[333829]: 2026-02-25 12:40:48.741938776 +0000 UTC m=+0.079150124 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:40:49 compute-0 ceph-mon[76335]: pgmap v1789: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 125 op/s
Feb 25 12:40:50 compute-0 nova_compute[244014]: 2026-02-25 12:40:50.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Feb 25 12:40:50 compute-0 podman[333849]: 2026-02-25 12:40:50.752501924 +0000 UTC m=+0.092545382 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.102 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Successfully updated port: 6150f46a-debb-49d6-b2e0-07b81f4cccea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.261 244018 DEBUG nova.compute.manager [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-changed-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.262 244018 DEBUG nova.compute.manager [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Refreshing instance network info cache due to event network-changed-6150f46a-debb-49d6-b2e0-07b81f4cccea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.262 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:51 compute-0 nova_compute[244014]: 2026-02-25 12:40:51.404 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:51 compute-0 ceph-mon[76335]: pgmap v1790: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Feb 25 12:40:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.603 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023237.601733, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.603 244018 INFO nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Stopped (Lifecycle Event)
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.626 244018 DEBUG nova.compute.manager [None req-39f5a20b-da25-44a1-91c1-1eab3fa2496f - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.881 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.899 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.900 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance network_info: |[{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.901 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.901 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Refreshing network info cache for port 6150f46a-debb-49d6-b2e0-07b81f4cccea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.907 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start _get_guest_xml network_info=[{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.914 244018 WARNING nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.924 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.925 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.938 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.940 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.942 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.942 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.944 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:40:52 compute-0 nova_compute[244014]: 2026-02-25 12:40:52.949 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.004 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.004 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.023 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.106 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.106 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.113 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.113 244018 INFO nova.compute.claims [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:40:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.127 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:86 10.100.0.2 2001:db8::f816:3eff:feaa:8a86'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feaa:8a86/64', 'neutron:device_id': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=895bc3cc-c38d-425b-b005-1acb3139bbee) old=Port_Binding(mac=['fa:16:3e:aa:8a:86 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 895bc3cc-c38d-425b-b005-1acb3139bbee in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 updated
Feb 25 12:40:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.131 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:40:53 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2340c8a6-af40-4621-985a-9ddc7e963175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.298 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929090395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.518 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.547 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.552 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:40:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488990023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.866 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.873 244018 DEBUG nova.compute.provider_tree [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:40:53 compute-0 ceph-mon[76335]: pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 25 12:40:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1929090395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3488990023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.891 244018 DEBUG nova.scheduler.client.report [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.921 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.922 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.969 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.969 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:40:53 compute-0 nova_compute[244014]: 2026-02-25 12:40:53.987 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.005 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:40:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:40:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892986754' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.096 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.099 244018 DEBUG nova.virt.libvirt.vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:46Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.099 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.100 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.102 244018 DEBUG nova.objects.instance [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.125 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <uuid>4717553a-9bb9-4278-b610-7c446db3f8d9</uuid>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <name>instance-00000065</name>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-1299470364</nova:name>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:40:52</nova:creationTime>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <nova:port uuid="6150f46a-debb-49d6-b2e0-07b81f4cccea">
Feb 25 12:40:54 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <system>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="serial">4717553a-9bb9-4278-b610-7c446db3f8d9</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="uuid">4717553a-9bb9-4278-b610-7c446db3f8d9</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </system>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <os>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </os>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <features>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </features>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4717553a-9bb9-4278-b610-7c446db3f8d9_disk">
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config">
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:40:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ee:a2:d4"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <target dev="tap6150f46a-de"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/console.log" append="off"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <video>
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </video>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:40:54 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:40:54 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:40:54 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:40:54 compute-0 nova_compute[244014]: </domain>
Feb 25 12:40:54 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.126 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Preparing to wait for external event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.126 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.127 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.127 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.128 244018 DEBUG nova.virt.libvirt.vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:46Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.129 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.130 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.131 244018 DEBUG os_vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
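
The three DEBUG lines above are the os-vif handoff: nova_to_osvif_vif() turns Nova's JSON VIF dict into a typed VIFOpenVSwitch, and os_vif.plug() drives the OVS plugin. A minimal sketch of that call sequence, with field values copied from the log records; reducing the Network object to the two fields shown here is an assumption (the real object also carries subnets, label, and MTU):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin entry point

    net = network.Network(id='ec8bae53-fe6a-49d1-a733-f00c198be561',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='6150f46a-debb-49d6-b2e0-07b81f4cccea',
        address='fa:16:3e:ee:a2:d4',
        network=net,
        vif_name='tap6150f46a-de',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    info = instance_info.InstanceInfo(
        uuid='4717553a-9bb9-4278-b610-7c446db3f8d9',
        name='tempest-ServersTestJSON-server-1299470364')
    os_vif.plug(ovs_vif, info)  # success is the INFO line at 12:40:54.280
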
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6150f46a-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6150f46a-de, col_values=(('external_ids', {'iface-id': '6150f46a-debb-49d6-b2e0-07b81f4cccea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:a2:d4', 'vm-uuid': '4717553a-9bb9-4278-b610-7c446db3f8d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
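
The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions against the local OVSDB. Roughly the same two-command transaction written directly against ovsdbapp; the socket path is an assumption (os-vif derives the connection string from its [ovs] config section):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {'iface-id': '6150f46a-debb-49d6-b2e0-07b81f4cccea',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:ee:a2:d4',
                    'vm-uuid': '4717553a-9bb9-4278-b610-7c446db3f8d9'}
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap6150f46a-de', may_exist=True))
        txn.add(api.db_set('Interface', 'tap6150f46a-de',
                           ('external_ids', external_ids)))

The "Transaction caused no change" lines are the idempotent case: may_exist=True and identical column values make the commit a no-op.
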
Feb 25 12:40:54 compute-0 NetworkManager[49836]: <info>  [1772023254.1420] manager: (tap6150f46a-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.142 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.143 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.144 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating image(s)
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.177 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.215 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.249 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.252 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.280 244018 INFO os_vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de')
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.321 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
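
The qemu-img probe above runs under oslo.concurrency's prlimit wrapper so a malformed image cannot blow up memory or CPU during introspection (--as=1073741824, --cpu=30 in the logged command line). The same call issued through the library:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # 1 GiB
                                        cpu_time=30)               # seconds
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
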
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.321 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.322 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.322 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
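
The acquire/release pair above is oslo.concurrency's synchronized decorator guarding the image cache: the lock name is the cached base image's key, so concurrent builds fetch it only once per host. A sketch of the pattern; the lock_path shown is an assumption (Nova takes it from configuration):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
                            external=True, lock_path='/var/lib/nova/locks')
    def fetch_func_sync():
        # download or copy the base image exactly once per host; here it
        # is effectively a no-op because the cached file already exists,
        # hence the "held 0.000s" in the log
        pass

    fetch_func_sync()
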
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.346 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.350 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bb4e80d2-c200-4c24-8154-e1a86d06946b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.393 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.394 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.394 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:ee:a2:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.395 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Using config drive
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.423 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.654 244018 DEBUG nova.policy [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
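
The policy failure above is routine, not an error: before wiring ports Nova asks oslo.policy whether this project may attach to external networks, and a plain member/reader token is refused, so the build proceeds without that privilege. A trimmed sketch of such a check (Nova actually goes through nova.policy.authorize, which raises and is caught):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {'user_id': 'fb37a481eb114226822ed8b2ef4f9a89',
             'project_id': '6821a6e7edd54dbe97920b79aae8f54c',
             'roles': ['reader', 'member']}
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds)  # False for these roles
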
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.727 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bb4e80d2-c200-4c24-8154-e1a86d06946b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.808 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
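
The import/resize pair above stages the flavor's root disk: rbd import pushes the cached base image into the vms pool, then rbd_utils resizes it to root_gb. The resize expressed directly against the librbd Python bindings, with pool, client ID, conf path, and size taken from the logged commands:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'bb4e80d2-c200-4c24-8154-e1a86d06946b_disk') as img:
                img.resize(1073741824)  # flavor root_gb=1 -> 1 GiB
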
Feb 25 12:40:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3892986754' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.897 244018 DEBUG nova.objects.instance [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.918 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Ensure instance console log exists: /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:54 compute-0 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.058 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating config drive at /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.062 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfitiy1tt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.197 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfitiy1tt" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.220 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.224 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.376 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.377 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deleting local config drive /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config because it was imported into RBD.
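
Lines 12:40:55.062 through 12:40:55.377 are the config-drive round trip: build an ISO9660 image with mkisofs, import it into RBD, delete the local copy. Condensed into one sketch, with every argument taken from the logged commands (note the multi-word publisher string is a single argv element):

    import os
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config')
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpfitiy1tt')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
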
Feb 25 12:40:55 compute-0 kernel: tap6150f46a-de: entered promiscuous mode
Feb 25 12:40:55 compute-0 ovn_controller[147040]: 2026-02-25T12:40:55Z|00984|binding|INFO|Claiming lport 6150f46a-debb-49d6-b2e0-07b81f4cccea for this chassis.
Feb 25 12:40:55 compute-0 NetworkManager[49836]: <info>  [1772023255.4415] manager: (tap6150f46a-de): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:55 compute-0 ovn_controller[147040]: 2026-02-25T12:40:55Z|00985|binding|INFO|6150f46a-debb-49d6-b2e0-07b81f4cccea: Claiming fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:55 compute-0 ovn_controller[147040]: 2026-02-25T12:40:55Z|00986|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea ovn-installed in OVS
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:55 compute-0 ovn_controller[147040]: 2026-02-25T12:40:55Z|00987|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea up in Southbound
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.469 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:a2:d4 10.100.0.9'], port_security=['fa:16:3e:ee:a2:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4717553a-9bb9-4278-b610-7c446db3f8d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6150f46a-debb-49d6-b2e0-07b81f4cccea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.470 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6150f46a-debb-49d6-b2e0-07b81f4cccea in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.472 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:40:55 compute-0 systemd-udevd[334198]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc681900-57ea-4418-93d1-969bc21cdec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:55 compute-0 NetworkManager[49836]: <info>  [1772023255.4902] device (tap6150f46a-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:40:55 compute-0 NetworkManager[49836]: <info>  [1772023255.4910] device (tap6150f46a-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:40:55 compute-0 systemd-machined[210048]: New machine qemu-128-instance-00000065.
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[acec9576-92d0-49e5-9da1-3c21af9bbd0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.514 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[393d46dc-d66d-4266-91df-7f65bbe0a39e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:55 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000065.
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.538 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[389deeea-8182-4d72-945a-fa8af59de439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.552 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2954fdc9-fdec-47da-9c76-d7e5704ac476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334213, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[708e9e48-c763-4a01-907d-9a0c00162627]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334215, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334215, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
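
The privsep replies above show what "Provisioning metadata for network ..." amounts to: a veth pair whose -f1 end lives in the ovnmeta-<network> namespace and carries both the subnet address (10.100.0.2/28) and the link-local metadata address (169.254.169.254/32). A pyroute2 sketch of those steps; the agent performs them through privsep helpers, not literally this code:

    from pyroute2 import IPRoute, NetNS, netns

    ns_name = 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561'
    netns.create(ns_name)

    ipr = IPRoute()
    ipr.link('add', ifname='tapec8bae53-f0', kind='veth',
             peer='tapec8bae53-f1')
    peer = ipr.link_lookup(ifname='tapec8bae53-f1')[0]
    ipr.link('set', index=peer, net_ns_fd=ns_name)  # move -f1 into the netns

    ns = NetNS(ns_name)
    idx = ns.link_lookup(ifname='tapec8bae53-f1')[0]
    ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
    ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
    ns.link('set', index=idx, state='up')

The -f0 end is then re-plugged from br-ex into br-int with the OVN port's iface-id, which is the DelPortCommand/AddPortCommand/DbSetCommand sequence in the lines that follow.
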
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.569 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updated VIF entry in instance network info cache for port 6150f46a-debb-49d6-b2e0-07b81f4cccea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.570 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.573 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.573 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.611 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:40:55 compute-0 ceph-mon[76335]: pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.893 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023255.893339, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Started (Lifecycle Event)
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.930 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023255.8935688, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.930 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Paused (Lifecycle Event)
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.962 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:55 compute-0 nova_compute[244014]: 2026-02-25 12:40:55.966 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
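
The power_state numbers above are Nova's enum: 0 (NOSTATE) in the database because the instance is still building, 3 (PAUSED) as reported by libvirt. The mapping Nova applies to libvirt domain states looks essentially like this sketch, with the constants inlined from nova.compute.power_state:

    import libvirt

    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    LIBVIRT_POWER_STATE = {
        libvirt.VIR_DOMAIN_NOSTATE: NOSTATE,
        libvirt.VIR_DOMAIN_RUNNING: RUNNING,
        libvirt.VIR_DOMAIN_BLOCKED: RUNNING,
        libvirt.VIR_DOMAIN_PAUSED: PAUSED,      # the "VM power_state: 3" here
        libvirt.VIR_DOMAIN_SHUTDOWN: SHUTDOWN,
        libvirt.VIR_DOMAIN_SHUTOFF: SHUTDOWN,
        libvirt.VIR_DOMAIN_CRASHED: CRASHED,
        libvirt.VIR_DOMAIN_PMSUSPENDED: SUSPENDED,
    }

The transient Started -> Paused -> Resumed triple is normal for a spawn: the guest is created paused while networking is wired up, then resumed.
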
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.023 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.547 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Successfully created port: ca235479-72b7-40ba-b8e5-d8d3227079fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.599 244018 DEBUG nova.compute.manager [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG nova.compute.manager [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Processing event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.601 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.606 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023256.6056755, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.606 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Resumed (Lifecycle Event)
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.610 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.614 244018 INFO nova.virt.libvirt.driver [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance spawned successfully.
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.614 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.631 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.639 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.646 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.647 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.647 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.648 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.649 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.649 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.656 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.710 244018 INFO nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 10.53 seconds to spawn the instance on the hypervisor.
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.711 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.771 244018 INFO nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 11.60 seconds to build instance.
Feb 25 12:40:56 compute-0 nova_compute[244014]: 2026-02-25 12:40:56.792 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:57 compute-0 ceph-mon[76335]: pgmap v1793: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:40:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Feb 25 12:40:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.748 244018 DEBUG nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.748 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:40:58 compute-0 nova_compute[244014]: 2026-02-25 12:40:58.750 244018 WARNING nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received unexpected event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with vm_state active and task_state None.
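
Both the zero-second wait at 12:40:56.601 and the WARNING above come from the same external-event handshake: Nova registers a waiter for network-vif-plugged before starting the guest, Neutron's callback pops it, and a second, late copy of the event finds no waiter and is logged as unexpected (harmless once the instance is active). A toy sketch of the pattern, not Nova's actual classes:

    import threading

    class InstanceEvents:
        """Map (instance uuid, event name) -> waiter."""
        def __init__(self):
            self._events = {}

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._events[(uuid, name)] = ev
            return ev

        def pop(self, uuid, name):
            ev = self._events.pop((uuid, name), None)
            if ev is None:
                return False   # -> "No waiting events found" / WARNING path
            ev.set()
            return True

    events = InstanceEvents()
    w = events.prepare('4717553a-9bb9-4278-b610-7c446db3f8d9',
                       'network-vif-plugged')
    # ... plug the VIF, start the guest paused ...
    events.pop('4717553a-9bb9-4278-b610-7c446db3f8d9',
               'network-vif-plugged')   # Neutron notification thread
    w.wait(timeout=300)                 # returns at once; already set
    events.pop('4717553a-9bb9-4278-b610-7c446db3f8d9',
               'network-vif-plugged')   # duplicate -> False, like the WARNING
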
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.316 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Successfully updated port: ca235479-72b7-40ba-b8e5-d8d3227079fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:40:59 compute-0 nova_compute[244014]: 2026-02-25 12:40:59.477 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:40:59 compute-0 ceph-mon[76335]: pgmap v1794: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Feb 25 12:41:00 compute-0 nova_compute[244014]: 2026-02-25 12:41:00.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 2.3 MiB/s wr, 63 op/s
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.277 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.312 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.312 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance network_info: |[{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.317 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start _get_guest_xml network_info=[{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.322 244018 WARNING nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.329 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.330 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.335 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.336 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.337 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.338 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.339 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.339 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.341 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.341 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.342 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.342 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.343 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.347 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.484 244018 DEBUG nova.compute.manager [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.484 244018 DEBUG nova.compute.manager [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615576775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.910 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:01 compute-0 ceph-mon[76335]: pgmap v1795: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 2.3 MiB/s wr, 63 op/s
Feb 25 12:41:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3615576775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.937 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:01 compute-0 nova_compute[244014]: 2026-02-25 12:41:01.943 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.248 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.249 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 104 op/s
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.260 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.261 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.279 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.284 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.375 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.376 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.377 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.386 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.387 244018 INFO nova.compute.claims [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:41:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677413552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.502 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.505 244018 DEBUG nova.virt.libvirt.vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.505 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.506 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.507 244018 DEBUG nova.objects.instance [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.524 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <uuid>bb4e80d2-c200-4c24-8154-e1a86d06946b</uuid>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <name>instance-00000066</name>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-262633266</nova:name>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:01</nova:creationTime>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <nova:port uuid="ca235479-72b7-40ba-b8e5-d8d3227079fc">
Feb 25 12:41:02 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="serial">bb4e80d2-c200-4c24-8154-e1a86d06946b</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="uuid">bb4e80d2-c200-4c24-8154-e1a86d06946b</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bb4e80d2-c200-4c24-8154-e1a86d06946b_disk">
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config">
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:02 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c8:65:4a"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <target dev="tapca235479-72"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/console.log" append="off"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:02 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:02 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:02 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:02 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:02 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Preparing to wait for external event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.527 244018 DEBUG nova.virt.libvirt.vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.527 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.528 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.528 244018 DEBUG os_vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.529 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.530 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.535 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.535 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca235479-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca235479-72, col_values=(('external_ids', {'iface-id': 'ca235479-72b7-40ba-b8e5-d8d3227079fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:65:4a', 'vm-uuid': 'bb4e80d2-c200-4c24-8154-e1a86d06946b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:02 compute-0 NetworkManager[49836]: <info>  [1772023262.5385] manager: (tapca235479-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.544 244018 INFO os_vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72')
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.561 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.621 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.621 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.622 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:c8:65:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.622 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Using config drive
Feb 25 12:41:02 compute-0 nova_compute[244014]: 2026-02-25 12:41:02.644 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3677413552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301190081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.138 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.143 244018 DEBUG nova.compute.provider_tree [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.163 244018 DEBUG nova.scheduler.client.report [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.187 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.188 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.191 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.197 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.197 244018 INFO nova.compute.claims [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.274 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.275 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:41:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.292 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.311 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.392 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.393 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.394 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating image(s)
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.412 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.434 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.459 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.464 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.491 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.524 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.524 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.525 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.525 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.547 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.550 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd882e8f-3bea-4fa6-a185-186bc9911267_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.640 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.641 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.651 244018 DEBUG nova.policy [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.661 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.785 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating config drive at /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.789 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g3f49x3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.815 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd882e8f-3bea-4fa6-a185-186bc9911267_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.871 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.919 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g3f49x3" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:03 compute-0 ceph-mon[76335]: pgmap v1796: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 104 op/s
Feb 25 12:41:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/301190081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.940 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:03 compute-0 nova_compute[244014]: 2026-02-25 12:41:03.946 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.009 244018 DEBUG nova.objects.instance [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.023 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Ensure instance console log exists: /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.025 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249733971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.081 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.086 244018 DEBUG nova.compute.provider_tree [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.098 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.098 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deleting local config drive /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config because it was imported into RBD.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.104 244018 DEBUG nova.scheduler.client.report [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.122 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.123 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.1475] manager: (tapca235479-72): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Feb 25 12:41:04 compute-0 kernel: tapca235479-72: entered promiscuous mode
Feb 25 12:41:04 compute-0 ovn_controller[147040]: 2026-02-25T12:41:04Z|00988|binding|INFO|Claiming lport ca235479-72b7-40ba-b8e5-d8d3227079fc for this chassis.
Feb 25 12:41:04 compute-0 ovn_controller[147040]: 2026-02-25T12:41:04Z|00989|binding|INFO|ca235479-72b7-40ba-b8e5-d8d3227079fc: Claiming fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.165 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:65:4a 10.100.0.7'], port_security=['fa:16:3e:c8:65:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e80d2-c200-4c24-8154-e1a86d06946b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ffae040-354a-4380-b0b9-a74c45661bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf47c8ae-ac17-4dbe-9374-1ad52fb8ff7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861eb98d-f02a-4401-a60e-4c578f807c6b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca235479-72b7-40ba-b8e5-d8d3227079fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.167 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca235479-72b7-40ba-b8e5-d8d3227079fc in datapath 2ffae040-354a-4380-b0b9-a74c45661bf1 bound to our chassis
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.167 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.167 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.168 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ffae040-354a-4380-b0b9-a74c45661bf1
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[316b6b71-22a0-4467-b8f2-1e047d12ff02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.178 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ffae040-31 in ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.180 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ffae040-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2821417d-12ee-400f-bf6c-d35f1dde7b68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1178ea-372a-409d-918e-23d97b7e485a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_controller[147040]: 2026-02-25T12:41:04Z|00990|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc ovn-installed in OVS
Feb 25 12:41:04 compute-0 ovn_controller[147040]: 2026-02-25T12:41:04Z|00991|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc up in Southbound
Feb 25 12:41:04 compute-0 systemd-udevd[334606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.193 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.194 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[13469c44-2f7e-4f35-a087-2541fa83af94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 systemd-machined[210048]: New machine qemu-129-instance-00000066.
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.2041] device (tapca235479-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.2047] device (tapca235479-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:41:04 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000066.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.210 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.216 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c03a7fc-5a54-4cd6-9db6-271459f81c04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.243 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a39e6618-7471-4289-876b-1cf25ff83708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 systemd-udevd[334610]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9240d99a-1448-4914-8279-beefd3f4f875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.2485] manager: (tap2ffae040-30): new Veth device (/org/freedesktop/NetworkManager/Devices/411)
Feb 25 12:41:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.275 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0e9628-751d-4e4d-b4ff-314eb41ca9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.278 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86866db2-a219-494f-8095-84be7c996b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.3026] device (tap2ffae040-30): carrier: link connected
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.305 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.308 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.310 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating image(s)
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.310 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f67f5d45-566e-45c1-8eb0-ad48d91c1bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be0504b4-31e9-4924-b68e-8fd482e6e6df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ffae040-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523371, 'reachable_time': 26475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334638, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.340 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.347 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67551647-3680-4627-ac37-35f589afb405]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:7355'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523371, 'tstamp': 523371}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334655, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3029e4-d628-4ca0-8c7b-0de2da448462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ffae040-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523371, 'reachable_time': 26475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334658, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.383 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56bd1098-b4ab-4830-a24e-2dc3a3345022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.419 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.427 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8728462e-90aa-4d6f-bdb6-2269eeb18b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ffae040-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.456 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ffae040-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:04 compute-0 kernel: tap2ffae040-30: entered promiscuous mode
Feb 25 12:41:04 compute-0 NetworkManager[49836]: <info>  [1772023264.4581] manager: (tap2ffae040-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ffae040-30, col_values=(('external_ids', {'iface-id': '866d334c-e04a-4c89-acaa-4482da6c6f0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:04 compute-0 ovn_controller[147040]: 2026-02-25T12:41:04Z|00992|binding|INFO|Releasing lport 866d334c-e04a-4c89-acaa-4482da6c6f0c from this chassis (sb_readonly=0)
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.463 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0d066f-1c54-45a2-9262-a83987f558dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.465 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-2ffae040-354a-4380-b0b9-a74c45661bf1
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 2ffae040-354a-4380-b0b9-a74c45661bf1
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:41:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.467 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'env', 'PROCESS_TAG=haproxy-2ffae040-354a-4380-b0b9-a74c45661bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ffae040-354a-4380-b0b9-a74c45661bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.517 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.518 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.519 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.519 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.547 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.554 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d547b0db-242e-49a5-8a76-5682b0235b6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:04 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.793 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023264.792724, bb4e80d2-c200-4c24-8154-e1a86d06946b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Started (Lifecycle Event)
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.799 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d547b0db-242e-49a5-8a76-5682b0235b6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023264.792899, bb4e80d2-c200-4c24-8154-e1a86d06946b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Paused (Lifecycle Event)
Feb 25 12:41:04 compute-0 podman[334809]: 2026-02-25 12:41:04.872331611 +0000 UTC m=+0.043652483 container create 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.882 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:41:04 compute-0 systemd[1]: Started libpod-conmon-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope.
Feb 25 12:41:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308c3469f7246fcc17ea83f23367abb6b8fd53a71b962f6f2bf6602aad8594fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.928 244018 DEBUG nova.policy [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.931 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.937 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:04 compute-0 podman[334809]: 2026-02-25 12:41:04.942594113 +0000 UTC m=+0.113914985 container init 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:41:04 compute-0 podman[334809]: 2026-02-25 12:41:04.847991404 +0000 UTC m=+0.019312306 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:41:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3249733971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:04 compute-0 podman[334809]: 2026-02-25 12:41:04.949247421 +0000 UTC m=+0.120568293 container start 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:41:04 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : New worker (334901) forked
Feb 25 12:41:04 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : Loading success.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.976 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.980 244018 DEBUG nova.objects.instance [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.996 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.996 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Ensure instance console log exists: /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:04 compute-0 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG nova.compute.manager [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.130 244018 DEBUG nova.compute.manager [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Processing event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.130 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.142 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023265.1420126, bb4e80d2-c200-4c24-8154-e1a86d06946b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.142 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Resumed (Lifecycle Event)
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.149 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.152 244018 INFO nova.virt.libvirt.driver [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance spawned successfully.
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.152 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.172 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.178 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.215 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.251 244018 INFO nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 11.11 seconds to spawn the instance on the hypervisor.
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.252 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.309 244018 INFO nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 12.23 seconds to build instance.
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.326 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.338 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Successfully created port: 12e30ba1-a42b-437b-b14c-ab41975d5f53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:41:05 compute-0 nova_compute[244014]: 2026-02-25 12:41:05.707 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Successfully created port: 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:41:05 compute-0 ceph-mon[76335]: pgmap v1797: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:41:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:41:06 compute-0 nova_compute[244014]: 2026-02-25 12:41:06.464 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Successfully updated port: 12e30ba1-a42b-437b-b14c-ab41975d5f53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:41:06 compute-0 nova_compute[244014]: 2026-02-25 12:41:06.541 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:06 compute-0 nova_compute[244014]: 2026-02-25 12:41:06.542 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:06 compute-0 nova_compute[244014]: 2026-02-25 12:41:06.542 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:41:06 compute-0 nova_compute[244014]: 2026-02-25 12:41:06.787 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 WARNING nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state active and task_state None.
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-changed-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.271 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Refreshing instance network info cache due to event network-changed-12e30ba1-a42b-437b-b14c-ab41975d5f53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.271 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:07 compute-0 nova_compute[244014]: 2026-02-25 12:41:07.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:08 compute-0 ceph-mon[76335]: pgmap v1798: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 12:41:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.8 MiB/s wr, 251 op/s
Feb 25 12:41:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.627 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Successfully updated port: 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.655 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.657 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.657 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:41:08 compute-0 ovn_controller[147040]: 2026-02-25T12:41:08Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 12:41:08 compute-0 ovn_controller[147040]: 2026-02-25T12:41:08Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.958 244018 DEBUG nova.compute.manager [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.959 244018 DEBUG nova.compute.manager [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:08 compute-0 nova_compute[244014]: 2026-02-25 12:41:08.959 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.586 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.655 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.679 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.679 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance network_info: |[{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.680 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.680 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Refreshing network info cache for port 12e30ba1-a42b-437b-b14c-ab41975d5f53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.683 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start _get_guest_xml network_info=[{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.687 244018 WARNING nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.694 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.695 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.698 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.699 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.699 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.700 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.700 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.702 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.702 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:41:09 compute-0 nova_compute[244014]: 2026-02-25 12:41:09.706 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:10 compute-0 ceph-mon[76335]: pgmap v1799: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.8 MiB/s wr, 251 op/s
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 189 op/s
Feb 25 12:41:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502032656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.289 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.317 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.322 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068695911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.892 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.895 244018 DEBUG nova.virt.libvirt.vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:03Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.896 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.897 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.899 244018 DEBUG nova.objects.instance [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.917 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <uuid>bd882e8f-3bea-4fa6-a185-186bc9911267</uuid>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <name>instance-00000067</name>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-1299470364</nova:name>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:09</nova:creationTime>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <nova:port uuid="12e30ba1-a42b-437b-b14c-ab41975d5f53">
Feb 25 12:41:10 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="serial">bd882e8f-3bea-4fa6-a185-186bc9911267</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="uuid">bd882e8f-3bea-4fa6-a185-186bc9911267</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk">
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config">
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e6:70:d4"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <target dev="tap12e30ba1-a4"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/console.log" append="off"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:10 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:10 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:10 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:10 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:10 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.922 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Preparing to wait for external event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
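
The lock acquire/release pair above is Nova registering a waiter for network-vif-plugged before the VIF is actually plugged, so Neutron's completion callback cannot slip in between "plug" and "wait" and be lost. A rough stand-in for that pattern, assuming nothing about Nova's internals beyond what the log shows (class and method names here are illustrative):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            # Register under the lock *before* triggering the action that
            # produces the event, mirroring prepare_for_instance_event.
            with self._lock:
                key = (instance_uuid, event_name)
                return self._events.setdefault(key, threading.Event())

        def complete(self, instance_uuid, event_name):
            # Called from the external-event handler when Neutron reports in.
            with self._lock:
                event = self._events.get((instance_uuid, event_name))
            if event is not None:
                event.set()

    events = InstanceEvents()
    waiter = events.prepare(
        "bd882e8f-3bea-4fa6-a185-186bc9911267",
        "network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53")
    # ... plug the VIF and launch the guest; another thread calls
    # events.complete(...) when the network-vif-plugged event arrives ...
    waiter.wait(timeout=300)  # Nova bounds this with vif_plugging_timeout
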
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.924 244018 DEBUG nova.virt.libvirt.vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:03Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.924 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.925 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.926 244018 DEBUG os_vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
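
The Converting/Converted pair shows the nova-side VIF dict being rebuilt as an os-vif versioned object before os_vif.plug() hands it to the ovs plugin. Driving os-vif directly has the same shape; a hedged sketch, with the field set reduced to what the object repr in the log shows (the interface_id field on VIFPortProfileOpenVSwitch is an assumption about os-vif's object model):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore entry points

    osvif_vif = vif.VIFOpenVSwitch(
        id="12e30ba1-a42b-437b-b14c-ab41975d5f53",
        address="fa:16:3e:e6:70:d4",
        bridge_name="br-int",
        vif_name="tap12e30ba1-a4",
        has_traffic_filtering=True,
        network=network.Network(id="ec8bae53-fe6a-49d1-a733-f00c198be561"),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="12e30ba1-a42b-437b-b14c-ab41975d5f53"),
    )
    inst = instance_info.InstanceInfo(
        uuid="bd882e8f-3bea-4fa6-a185-186bc9911267",
        name="tempest-ServersTestJSON-server-1299470364")

    os_vif.plug(osvif_vif, inst)  # "Successfully plugged vif ..." on success
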
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.927 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.927 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.929 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12e30ba1-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.930 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12e30ba1-a4, col_values=(('external_ids', {'iface-id': '12e30ba1-a42b-437b-b14c-ab41975d5f53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:70:d4', 'vm-uuid': 'bd882e8f-3bea-4fa6-a185-186bc9911267'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
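
The ovsdbapp commands above (a may-exist bridge add that turns out to be a no-op, a may-exist port add, and an external_ids update on the Interface row) are what actually wires the tap device into br-int; ovn-controller then matches external_ids:iface-id to bind the logical port. The same state change expressed through the ovs-vsctl CLI instead of the IDL, with values copied from the log (a convenience equivalent, not what nova executes here):

    import subprocess

    BRIDGE = "br-int"
    PORT = "tap12e30ba1-a4"
    EXTERNAL_IDS = {
        "iface-id": "12e30ba1-a42b-437b-b14c-ab41975d5f53",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:e6:70:d4",
        "vm-uuid": "bd882e8f-3bea-4fa6-a185-186bc9911267",
    }

    # AddPortCommand(may_exist=True) followed by DbSetCommand on Interface,
    # folded into a single ovs-vsctl transaction with "--".
    cmd = ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT,
           "--", "set", "Interface", PORT]
    cmd += ["external_ids:%s=%s" % (k, v) for k, v in EXTERNAL_IDS.items()]
    subprocess.run(cmd, check=True)
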
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.932 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:10 compute-0 NetworkManager[49836]: <info>  [1772023270.9328] manager: (tap12e30ba1-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.937 244018 INFO os_vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4')
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.951 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance network_info: |[{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.955 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start _get_guest_xml network_info=[{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.959 244018 WARNING nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.964 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.965 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.972 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.972 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
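
The paired probes above are a capability check: Nova first looks for a cgroup v1 cpu controller (absent on this host) and then for the v2 controller, which it finds. The same answer can be read straight from the kernel's cgroup interface; a minimal sketch using the conventional mount points (paths are standard kernel conventions, not lifted from Nova's code, and hybrid v1/v2 setups would need more care):

    import os

    def host_has_cpu_controller():
        # cgroup v2: the unified hierarchy lists available controllers in
        # a single file at the cgroup root.
        v2 = "/sys/fs/cgroup/cgroup.controllers"
        if os.path.exists(v2):
            with open(v2) as f:
                return "cpu" in f.read().split()
        # cgroup v1: each controller is its own hierarchy mount.
        return os.path.isdir("/sys/fs/cgroup/cpu")

    print(host_has_cpu_controller())
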
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.973 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.973 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
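
The topology walk above is just constrained factorisation: with no flavor or image limits (all the 0:0:0 preferences), every sockets*cores*threads triple whose product equals the vCPU count is a candidate, and for 1 vCPU that leaves only 1:1:1. A toy version of the enumeration in the spirit of the log output (not Nova's actual _get_possible_cpu_topologies):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) whose product equals vcpus."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_cores) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_threads:
                    yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- "Got 1 possible topologies"
    print(list(possible_topologies(4)))  # six factorisations of 4
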
Feb 25 12:41:10 compute-0 nova_compute[244014]: 2026-02-25 12:41:10.979 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.028 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.029 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.029 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:e6:70:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.030 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Using config drive
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.051 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2502032656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3068695911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331425324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.508 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
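
The half-second ceph mon dump call above is how the driver discovers the monitor endpoints that become the <host name="192.168.122.100" port="6789"/> entries in the rbd disk sources. Re-running the same command and picking out the monitors is straightforward (the "mons"/"name"/"addr" keys follow the mon dump JSON output; exact fields can vary between Ceph releases, so treat them as an assumption):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    monmap = json.loads(out)  # stdout is a single JSON document
    for mon in monmap.get("mons", []):
        print(mon.get("name"), mon.get("addr"))
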
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.529 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.533 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:11 compute-0 NetworkManager[49836]: <info>  [1772023271.6119] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Feb 25 12:41:11 compute-0 NetworkManager[49836]: <info>  [1772023271.6127] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:11 compute-0 ovn_controller[147040]: 2026-02-25T12:41:11Z|00993|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 12:41:11 compute-0 ovn_controller[147040]: 2026-02-25T12:41:11Z|00994|binding|INFO|Releasing lport 866d334c-e04a-4c89-acaa-4482da6c6f0c from this chassis (sb_readonly=0)
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.871 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updated VIF entry in instance network info cache for port 12e30ba1-a42b-437b-b14c-ab41975d5f53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.872 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.889 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.934 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating config drive at /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config
Feb 25 12:41:11 compute-0 nova_compute[244014]: 2026-02-25 12:41:11.937 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiet785_m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.063 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiet785_m" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
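
The config drive is nothing more than an ISO 9660 image built over a staging directory that Nova fills with the openstack/ metadata tree; the flags below are copied from the logged command, and the -V config-2 volume label is what lets cloud-init find the drive inside the guest. A standalone re-creation of that step (the empty tempdir stands in for Nova's populated one):

    import subprocess
    import tempfile

    staging = tempfile.mkdtemp()  # Nova populates this with openstack/... data
    iso = ("/var/lib/nova/instances/"
           "bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config")

    # -J (Joliet) and -r (Rock Ridge) keep long file names readable; -V sets
    # the "config-2" label that config-drive consumers probe for.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True)
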
Feb 25 12:41:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244499993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:12 compute-0 ceph-mon[76335]: pgmap v1800: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 189 op/s
Feb 25 12:41:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3331425324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.118 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.121 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
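
Because this deployment stores disks in RBD, the freshly built ISO is immediately pushed into the vms pool as a format-2 image; that image is exactly what the cdrom's <source protocol="rbd" name="vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config"> in the domain XML points at. The equivalent standalone invocation, arguments copied from the log:

    import subprocess

    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config",
         "bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
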
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.148 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.150 244018 DEBUG nova.virt.libvirt.vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:04Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.151 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.152 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.153 244018 DEBUG nova.objects.instance [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.175 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <uuid>d547b0db-242e-49a5-8a76-5682b0235b6d</uuid>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <name>instance-00000068</name>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-43297595</nova:name>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:10</nova:creationTime>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <nova:port uuid="96ad4438-9fbd-4fbe-ae7f-af0eecb763c1">
Feb 25 12:41:12 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:7721" ipVersion="6"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="serial">d547b0db-242e-49a5-8a76-5682b0235b6d</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="uuid">d547b0db-242e-49a5-8a76-5682b0235b6d</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d547b0db-242e-49a5-8a76-5682b0235b6d_disk">
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config">
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:12 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:95:77:21"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <target dev="tap96ad4438-9f"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/console.log" append="off"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:12 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:12 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:12 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:12 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:12 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.180 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Preparing to wait for external event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.180 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.181 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.181 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.182 244018 DEBUG nova.virt.libvirt.vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:04Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.182 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.183 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.183 244018 DEBUG os_vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.185 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.185 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.187 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96ad4438-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.188 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96ad4438-9f, col_values=(('external_ids', {'iface-id': '96ad4438-9fbd-4fbe-ae7f-af0eecb763c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:77:21', 'vm-uuid': 'd547b0db-242e-49a5-8a76-5682b0235b6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 NetworkManager[49836]: <info>  [1772023272.1901] manager: (tap96ad4438-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.194 244018 INFO os_vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f')
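
Note: the AddBridgeCommand/AddPortCommand/DbSetCommand DEBUG lines above are ovsdbapp IDL transactions run on os_vif's behalf: ensure br-int exists, add the tap port, then stamp the Interface row with the Neutron iface-id so ovn-controller can claim it. A standalone sketch of the same pattern follows, assuming a local OVSDB socket path and using placeholder port and iface-id values rather than anything from this system.

    # Sketch of the ovsdbapp transaction pattern seen in the debug lines above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket location
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap-example', may_exist=True))  # placeholder name
        txn.add(api.db_set('Interface', 'tap-example',
                           ('external_ids', {'iface-id': 'NEUTRON-PORT-UUID',
                                             'iface-status': 'active'})))
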
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.201 244018 DEBUG nova.compute.manager [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG nova.compute.manager [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.203 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 447 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.262 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.263 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.263 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:95:77:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.264 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Using config drive
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.284 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.310 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.311 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deleting local config drive /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config because it was imported into RBD.
Feb 25 12:41:12 compute-0 NetworkManager[49836]: <info>  [1772023272.3482] manager: (tap12e30ba1-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Feb 25 12:41:12 compute-0 kernel: tap12e30ba1-a4: entered promiscuous mode
Feb 25 12:41:12 compute-0 ovn_controller[147040]: 2026-02-25T12:41:12Z|00995|binding|INFO|Claiming lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 for this chassis.
Feb 25 12:41:12 compute-0 ovn_controller[147040]: 2026-02-25T12:41:12Z|00996|binding|INFO|12e30ba1-a42b-437b-b14c-ab41975d5f53: Claiming fa:16:3e:e6:70:d4 10.100.0.6
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.371 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:70:d4 10.100.0.6'], port_security=['fa:16:3e:e6:70:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd882e8f-3bea-4fa6-a185-186bc9911267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=12e30ba1-a42b-437b-b14c-ab41975d5f53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 12e30ba1-a42b-437b-b14c-ab41975d5f53 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:41:12 compute-0 systemd-machined[210048]: New machine qemu-130-instance-00000067.
Feb 25 12:41:12 compute-0 ovn_controller[147040]: 2026-02-25T12:41:12Z|00997|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 ovn-installed in OVS
Feb 25 12:41:12 compute-0 ovn_controller[147040]: 2026-02-25T12:41:12Z|00998|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 up in Southbound
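
Note: the claim, ovn-installed, and "up in Southbound" messages above are the normal OVN port-binding handshake for lport 12e30ba1-a42b-437b-b14c-ab41975d5f53; once the binding is up, Neutron marks the port ACTIVE and emits the network-vif-plugged event Nova is waiting on. One quick way to confirm a binding from the chassis side, assuming ovn-sbctl is installed and can reach the Southbound DB with its default connection settings:

    # Sketch: show which chassis claimed a logical port (lport UUID from the log).
    import subprocess

    lport = '12e30ba1-a42b-437b-b14c-ab41975d5f53'
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis,up',
         'find', 'Port_Binding', f'logical_port={lport}'],
        capture_output=True, text=True, check=True)
    print(out.stdout)
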
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000067.
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1296cf3c-ab73-4b18-a368-cc9250627016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 systemd-udevd[335131]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:12 compute-0 NetworkManager[49836]: <info>  [1772023272.4039] device (tap12e30ba1-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:41:12 compute-0 NetworkManager[49836]: <info>  [1772023272.4045] device (tap12e30ba1-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.416 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[044dbc1f-66ac-4616-806c-ae45a8e95cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[292c130a-50d8-4939-b7b2-1b40dc9cfed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.443 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92442e93-c4b1-4a2f-a3e2-6ebc0048deb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.457 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d10139d6-f8da-4d65-b402-9cd9397121cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335143, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb8ab0d-ed6d-4b8c-b9d3-4aa99561a7d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335144, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335144, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.477 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.479 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.479 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
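
Note: the privsep replies and OVSDB transactions above show the metadata agent provisioning network ec8bae53: the namespace end of a veth pair (tapec8bae53-f1, inside the ovnmeta- namespace) carries 10.100.0.2/28 and the well-known 169.254.169.254/32, while the host end (tapec8bae53-f0) is moved off br-ex and plugged into br-int with the right iface-id. A roughly equivalent sketch with plain iproute2/ovs-vsctl commands, with names and addresses copied from the log; this is not the agent's actual code path (it drives pyroute2 through oslo.privsep).

    # Sketch: metadata-namespace plumbing equivalent to the steps logged above.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    ns = 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561'
    run('ip', 'netns', 'add', ns)
    run('ip', 'link', 'add', 'tapec8bae53-f0', 'type', 'veth',
        'peer', 'name', 'tapec8bae53-f1')
    run('ip', 'link', 'set', 'tapec8bae53-f1', 'netns', ns)
    run('ip', '-n', ns, 'addr', 'add', '10.100.0.2/28', 'dev', 'tapec8bae53-f1')
    run('ip', '-n', ns, 'addr', 'add', '169.254.169.254/32', 'dev', 'tapec8bae53-f1')
    run('ip', '-n', ns, 'link', 'set', 'tapec8bae53-f1', 'up')
    # OVS side: what the AddPortCommand/DbSetCommand transactions above perform.
    run('ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapec8bae53-f0')
    run('ovs-vsctl', 'set', 'Interface', 'tapec8bae53-f0',
        'external_ids:iface-id=e2d1eadf-baf7-4e5c-8052-6c64e8476a26')
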
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.658 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating config drive at /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.662 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdhtiua4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.796 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdhtiua4h" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.820 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.824 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.984 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:12 compute-0 nova_compute[244014]: 2026-02-25 12:41:12.984 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deleting local config drive /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config because it was imported into RBD.
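
Note: the config-drive flow for instance d547b0db completes above in three steps: mkisofs builds an ISO9660 volume labelled config-2, rbd import pushes it into the Ceph vms pool, and the local copy is deleted. A condensed sketch of the same three steps; the instance directory and the staging directory holding the metadata files are hypothetical, while the mkisofs/rbd flags mirror the logged commands.

    # Sketch: build a config-drive ISO, import it to RBD, drop the local copy.
    import os
    import subprocess

    inst = '/var/lib/nova/instances/EXAMPLE-UUID'    # hypothetical instance dir
    iso = os.path.join(inst, 'disk.config')
    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    '/tmp/metadata-staging'], check=True)  # hypothetical staging dir
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    'EXAMPLE-UUID_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(iso)  # matches "Deleting local config drive ... imported into RBD"
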
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.0135] manager: (tap96ad4438-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Feb 25 12:41:13 compute-0 kernel: tap96ad4438-9f: entered promiscuous mode
Feb 25 12:41:13 compute-0 systemd-udevd[335134]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:13 compute-0 ovn_controller[147040]: 2026-02-25T12:41:13Z|00999|binding|INFO|Claiming lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for this chassis.
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 ovn_controller[147040]: 2026-02-25T12:41:13Z|01000|binding|INFO|96ad4438-9fbd-4fbe-ae7f-af0eecb763c1: Claiming fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.0248] device (tap96ad4438-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.0257] device (tap96ad4438-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:41:13 compute-0 ovn_controller[147040]: 2026-02-25T12:41:13Z|01001|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 ovn-installed in OVS
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 systemd-machined[210048]: New machine qemu-131-instance-00000068.
Feb 25 12:41:13 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000068.
Feb 25 12:41:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2244499993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.349 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.3485942, bd882e8f-3bea-4fa6-a185-186bc9911267 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.354 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Started (Lifecycle Event)
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.446 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], port_security=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:7721/64', 'neutron:device_id': 'd547b0db-242e-49a5-8a76-5682b0235b6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:13 compute-0 ovn_controller[147040]: 2026-02-25T12:41:13Z|01002|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 up in Southbound
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.448 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 bound to our chassis
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.450 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fee07e01-eae9-4e77-9847-22f87265fa51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.460 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ec47b46-61 in ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.462 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ec47b46-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.462 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24f3a279-7bf8-4b89-80c4-2922df8e19e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5208a7-013e-43ac-929d-48b83188dc36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.474 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[924e1289-3765-42b3-81c0-9f2fa79d5002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81cde8e3-a1f3-49f0-afec-12ffd036a67c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.492 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.352884, bd882e8f-3bea-4fa6-a185-186bc9911267 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.492 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Paused (Lifecycle Event)
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[08ee2a92-9b72-4b5c-a176-a501aac03d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b048d93e-3b49-4dd1-9e62-723cdb2fdb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.5175] manager: (tap9ec47b46-60): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.524 244018 DEBUG nova.compute.manager [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG nova.compute.manager [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Processing event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.526 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.527 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.533 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.537 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.416806, d547b0db-242e-49a5-8a76-5682b0235b6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.538 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Started (Lifecycle Event)
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.541 244018 INFO nova.virt.libvirt.driver [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance spawned successfully.
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.541 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.548 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[101d9b2a-cb43-415b-aac3-4708ad8b06bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.551 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[94a70406-ec4a-4533-b28f-d278bf3d56c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.570 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.574 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.4169364, d547b0db-242e-49a5-8a76-5682b0235b6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Paused (Lifecycle Event)
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.5772] device (tap9ec47b46-60): carrier: link connected
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca0b1cd-6419-4ade-9d1d-2b41c469ed1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.593 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.596 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[251e2343-c5db-4239-8cbb-be6f7040048f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335316, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d40a493d-9836-4b52-97a6-f52517004656]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:8a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524298, 'tstamp': 524298}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335317, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.5315669, bd882e8f-3bea-4fa6-a185-186bc9911267 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Resumed (Lifecycle Event)
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.620 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8c434778-d17a-4512-ad5e-5eb972a55864]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335318, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.638 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92929090-9320-404c-bc2d-a01e9474514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.651 244018 INFO nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 10.26 seconds to spawn the instance on the hypervisor.
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.651 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.659 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.676 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[246ea11a-3654-4cdb-98fd-53f4afac97b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.677 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.677 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.678 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:13 compute-0 NetworkManager[49836]: <info>  [1772023273.6802] manager: (tap9ec47b46-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 kernel: tap9ec47b46-60: entered promiscuous mode
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 ovn_controller[147040]: 2026-02-25T12:41:13Z|01003|binding|INFO|Releasing lport 895bc3cc-c38d-425b-b005-1acb3139bbee from this chassis (sb_readonly=0)
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.695 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ecbecd-e85c-46a6-b941-ee87e659ee7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.696 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:41:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.696 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'env', 'PROCESS_TAG=haproxy-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.709 244018 INFO nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 11.36 seconds to build instance.
Feb 25 12:41:13 compute-0 nova_compute[244014]: 2026-02-25 12:41:13.729 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:14 compute-0 podman[335351]: 2026-02-25 12:41:14.032311492 +0000 UTC m=+0.064149422 container create 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:41:14 compute-0 systemd[1]: Started libpod-conmon-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope.
Feb 25 12:41:14 compute-0 podman[335351]: 2026-02-25 12:41:13.995543504 +0000 UTC m=+0.027381374 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:41:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02aa81a3db703ffc9f0b50010648091a42999b6e192082fa62a6c0558305e7bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:14 compute-0 podman[335351]: 2026-02-25 12:41:14.115078757 +0000 UTC m=+0.146916627 container init 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:41:14 compute-0 podman[335351]: 2026-02-25 12:41:14.119515722 +0000 UTC m=+0.151353572 container start 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:41:14 compute-0 ceph-mon[76335]: pgmap v1801: 305 pgs: 305 active+clean; 447 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 12:41:14 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : New worker (335372) forked
Feb 25 12:41:14 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : Loading success.
Feb 25 12:41:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.746 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.746 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.768 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.966 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.967 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:14 compute-0 nova_compute[244014]: 2026-02-25 12:41:14.998 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 WARNING nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received unexpected event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with vm_state active and task_state None.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.750 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.756 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.756 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Processing event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.758 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.758 244018 WARNING nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received unexpected event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with vm_state building and task_state spawning.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.759 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.762 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023275.7619615, d547b0db-242e-49a5-8a76-5682b0235b6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Resumed (Lifecycle Event)
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.764 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.769 244018 INFO nova.virt.libvirt.driver [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance spawned successfully.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.769 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.801 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.806 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.810 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.811 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.812 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.812 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.813 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.813 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.853 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.890 244018 INFO nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 11.58 seconds to spawn the instance on the hypervisor.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.890 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.961 244018 INFO nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 13.61 seconds to build instance.
Feb 25 12:41:15 compute-0 nova_compute[244014]: 2026-02-25 12:41:15.981 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:16 compute-0 ceph-mon[76335]: pgmap v1802: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 12:41:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.432 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.432 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.434 244018 INFO nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Terminating instance
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.436 244018 DEBUG nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:41:16 compute-0 kernel: tap12e30ba1-a4 (unregistering): left promiscuous mode
Feb 25 12:41:16 compute-0 NetworkManager[49836]: <info>  [1772023276.4829] device (tap12e30ba1-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 ovn_controller[147040]: 2026-02-25T12:41:16Z|01004|binding|INFO|Releasing lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 from this chassis (sb_readonly=0)
Feb 25 12:41:16 compute-0 ovn_controller[147040]: 2026-02-25T12:41:16Z|01005|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 down in Southbound
Feb 25 12:41:16 compute-0 ovn_controller[147040]: 2026-02-25T12:41:16Z|01006|binding|INFO|Removing iface tap12e30ba1-a4 ovn-installed in OVS
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.499 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:70:d4 10.100.0.6'], port_security=['fa:16:3e:e6:70:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd882e8f-3bea-4fa6-a185-186bc9911267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=12e30ba1-a42b-437b-b14c-ab41975d5f53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.500 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 12e30ba1-a42b-437b-b14c-ab41975d5f53 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.502 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.514 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8523bbce-f41d-484f-9345-96dd2b5457f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.536 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3605565f-3427-424a-85c1-3f2e7189d57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Deactivated successfully.
Feb 25 12:41:16 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Consumed 3.824s CPU time.
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.539 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26bfef85-241a-48b4-9f43-d55b79e29aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 systemd-machined[210048]: Machine qemu-130-instance-00000067 terminated.
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.562 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[463c7d4f-373b-4a94-9798-b20e4274cc6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a50aac-af7d-4519-811b-9822ff2dde36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335393, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11b8e255-8445-4b46-8441-28161c026751]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335394, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335394, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.590 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.594 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.594 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.595 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.595 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.664 244018 INFO nova.virt.libvirt.driver [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance destroyed successfully.
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.664 244018 DEBUG nova.objects.instance [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.679 244018 DEBUG nova.virt.libvirt.vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:13Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.680 244018 DEBUG nova.network.os_vif_util [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.680 244018 DEBUG nova.network.os_vif_util [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.686 244018 DEBUG os_vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12e30ba1-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:16 compute-0 nova_compute[244014]: 2026-02-25 12:41:16.693 244018 INFO os_vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4')
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.146 244018 INFO nova.virt.libvirt.driver [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deleting instance files /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267_del
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.147 244018 INFO nova.virt.libvirt.driver [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deletion of /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267_del complete
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.274 244018 INFO nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 0.84 seconds to destroy the instance on the hypervisor.
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.274 244018 DEBUG oslo.service.loopingcall [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.275 244018 DEBUG nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:41:17 compute-0 nova_compute[244014]: 2026-02-25 12:41:17.275 244018 DEBUG nova.network.neutron [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:41:17 compute-0 ovn_controller[147040]: 2026-02-25T12:41:17Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 12:41:17 compute-0 ovn_controller[147040]: 2026-02-25T12:41:17Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.034 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.035 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.036 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.036 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.037 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:18 compute-0 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 WARNING nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received unexpected event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with vm_state active and task_state deleting.
Feb 25 12:41:18 compute-0 ceph-mon[76335]: pgmap v1803: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 12:41:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 420 op/s
Feb 25 12:41:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.001 244018 DEBUG nova.network.neutron [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.023 244018 INFO nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 1.75 seconds to deallocate network for instance.
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.084 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.084 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.207 244018 DEBUG oslo_concurrency.processutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:19 compute-0 podman[335446]: 2026-02-25 12:41:19.700754064 +0000 UTC m=+0.048447529 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:41:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063817213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.765 244018 DEBUG oslo_concurrency.processutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.770 244018 DEBUG nova.compute.provider_tree [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.785 244018 DEBUG nova.scheduler.client.report [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.804 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.834 244018 INFO nova.scheduler.client.report [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance bd882e8f-3bea-4fa6-a185-186bc9911267
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:41:19 compute-0 nova_compute[244014]: 2026-02-25 12:41:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.053 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.085 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.132 244018 DEBUG nova.compute.manager [req-605bf9ae-9359-43e5-9c96-8f1c24d55066 req-d953a1f5-53de-4ea2-ae0f-06edab08e931 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-deleted-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.138 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.138 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.139 244018 INFO nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Terminating instance
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.140 244018 DEBUG nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:41:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Feb 25 12:41:20 compute-0 ceph-mon[76335]: pgmap v1804: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 420 op/s
Feb 25 12:41:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3063817213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:20 compute-0 kernel: tap6150f46a-de (unregistering): left promiscuous mode
Feb 25 12:41:20 compute-0 NetworkManager[49836]: <info>  [1772023280.4797] device (tap6150f46a-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:41:20 compute-0 ovn_controller[147040]: 2026-02-25T12:41:20Z|01007|binding|INFO|Releasing lport 6150f46a-debb-49d6-b2e0-07b81f4cccea from this chassis (sb_readonly=0)
Feb 25 12:41:20 compute-0 ovn_controller[147040]: 2026-02-25T12:41:20Z|01008|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea down in Southbound
Feb 25 12:41:20 compute-0 ovn_controller[147040]: 2026-02-25T12:41:20Z|01009|binding|INFO|Removing iface tap6150f46a-de ovn-installed in OVS
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 25 12:41:20 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Consumed 13.394s CPU time.
Feb 25 12:41:20 compute-0 systemd-machined[210048]: Machine qemu-128-instance-00000065 terminated.
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.604 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:a2:d4 10.100.0.9'], port_security=['fa:16:3e:ee:a2:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4717553a-9bb9-4278-b610-7c446db3f8d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6150f46a-debb-49d6-b2e0-07b81f4cccea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.605 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6150f46a-debb-49d6-b2e0-07b81f4cccea in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.607 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.619 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2811d441-03ae-4134-97bb-e940d5b79ee4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60cd67-b2a1-498d-abc5-8f21d452c1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.646 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7b69d4-20ed-4b40-b675-ad934235db62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.667 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[99fc63b2-0a22-4a99-acc5-2ecad05ef662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64dbdb60-a19d-4b0b-9412-49fea9b5d998]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335478, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30a5f17a-7677-42da-b2f6-ed8c43daf774]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335479, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335479, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.697 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.703 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.703 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.799 244018 INFO nova.virt.libvirt.driver [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance destroyed successfully.
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.799 244018 DEBUG nova.objects.instance [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.831 244018 DEBUG nova.virt.libvirt.vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:56Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.831 244018 DEBUG nova.network.os_vif_util [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.832 244018 DEBUG nova.network.os_vif_util [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.832 244018 DEBUG os_vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.834 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6150f46a-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:20 compute-0 nova_compute[244014]: 2026-02-25 12:41:20.839 244018 INFO os_vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de')
Feb 25 12:41:20 compute-0 podman[335492]: 2026-02-25 12:41:20.879390173 +0000 UTC m=+0.069779670 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG nova.compute.manager [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG nova.compute.manager [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.226 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:21 compute-0 ceph-mon[76335]: pgmap v1805: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.920 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.944 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.945 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:41:21 compute-0 nova_compute[244014]: 2026-02-25 12:41:21.945 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 305 op/s
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.279 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.279 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.281 244018 WARNING nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received unexpected event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with vm_state active and task_state deleting.
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:41:22 compute-0 nova_compute[244014]: 2026-02-25 12:41:22.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:23 compute-0 ceph-mon[76335]: pgmap v1806: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 305 op/s
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.588 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.590 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.612 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936890460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.709 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.807s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.802 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.802 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.806 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.807 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.810 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.811 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.814 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.814 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.985 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3234MB free_disk=59.83032176736742GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.987 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:23 compute-0 nova_compute[244014]: 2026-02-25 12:41:23.987 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.066 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.067 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4717553a-9bb9-4278-b610-7c446db3f8d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.067 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance bb4e80d2-c200-4c24-8154-e1a86d06946b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.068 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d547b0db-242e-49a5-8a76-5682b0235b6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.068 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.069 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.143 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.199 244018 INFO nova.virt.libvirt.driver [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deleting instance files /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9_del
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.201 244018 INFO nova.virt.libvirt.driver [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deletion of /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9_del complete
Feb 25 12:41:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.302 244018 INFO nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 4.16 seconds to destroy the instance on the hypervisor.
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.304 244018 DEBUG oslo.service.loopingcall [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.304 244018 DEBUG nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.305 244018 DEBUG nova.network.neutron [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:41:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1936890460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.633 244018 INFO nova.compute.manager [None req-df569dc2-06ac-4c35-a513-2950c7b81e50 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.638 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:41:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388731596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.691 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.695 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.708 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.737 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.738 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.900 244018 INFO nova.compute.manager [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Pausing
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.902 244018 DEBUG nova.objects.instance [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.935 244018 DEBUG nova.network.neutron [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.938 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023284.9374084, bb4e80d2-c200-4c24-8154-e1a86d06946b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.938 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Paused (Lifecycle Event)
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.940 244018 DEBUG nova.compute.manager [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.969 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.972 244018 INFO nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 0.67 seconds to deallocate network for instance.
Feb 25 12:41:24 compute-0 nova_compute[244014]: 2026-02-25 12:41:24.976 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.007 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.014 244018 DEBUG nova.compute.manager [req-b3f28807-c2da-4827-914f-e2e2f1a0c514 req-b6b3aa0e-cb33-4931-b11a-0a828e2ac378 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-deleted-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.031 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.032 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.134 244018 DEBUG oslo_concurrency.processutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:25 compute-0 ceph-mon[76335]: pgmap v1807: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Feb 25 12:41:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1388731596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310139123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.653 244018 DEBUG oslo_concurrency.processutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.658 244018 DEBUG nova.compute.provider_tree [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.740 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.746 244018 DEBUG nova.scheduler.client.report [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.767 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.811 244018 INFO nova.scheduler.client.report [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 4717553a-9bb9-4278-b610-7c446db3f8d9
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:25 compute-0 nova_compute[244014]: 2026-02-25 12:41:25.894 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 278 op/s
Feb 25 12:41:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1310139123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:26 compute-0 nova_compute[244014]: 2026-02-25 12:41:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:27 compute-0 ceph-mon[76335]: pgmap v1808: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 278 op/s
Feb 25 12:41:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 373 op/s
Feb 25 12:41:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:28 compute-0 ovn_controller[147040]: 2026-02-25T12:41:28Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:77:21 10.100.0.12
Feb 25 12:41:28 compute-0 ovn_controller[147040]: 2026-02-25T12:41:28Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:77:21 10.100.0.12
Feb 25 12:41:28 compute-0 sudo[335606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:41:28 compute-0 sudo[335606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:28 compute-0 sudo[335606]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:28 compute-0 sudo[335631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 12:41:28 compute-0 sudo[335631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.696 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.697 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.714 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.790 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.791 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.797 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.798 244018 INFO nova.compute.claims [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:41:28 compute-0 sudo[335631]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:41:28 compute-0 nova_compute[244014]: 2026-02-25 12:41:28.973 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:41:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:29 compute-0 sudo[335677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:41:29 compute-0 sudo[335677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:29 compute-0 sudo[335677]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.033 244018 INFO nova.compute.manager [None req-0d38717e-20bc-49ee-8544-a1c31a6e50d8 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.039 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:41:29 compute-0 sudo[335702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:41:29 compute-0 sudo[335702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.228 244018 INFO nova.compute.manager [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Unpausing
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.230 244018 DEBUG nova.objects.instance [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.261 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023289.2611344, bb4e80d2-c200-4c24-8154-e1a86d06946b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Resumed (Lifecycle Event)
Feb 25 12:41:29 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.270 244018 DEBUG nova.virt.libvirt.guest [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.271 244018 DEBUG nova.compute.manager [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.282 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.288 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.332 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795722616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.517 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.525 244018 DEBUG nova.compute.provider_tree [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.545 244018 DEBUG nova.scheduler.client.report [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:29 compute-0 sudo[335702]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.586 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.587 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:41:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.646 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.646 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.670 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:41:29 compute-0 sudo[335779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:41:29 compute-0 sudo[335779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:29 compute-0 sudo[335779]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.697 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:41:29 compute-0 sudo[335804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:41:29 compute-0 sudo[335804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.930 244018 DEBUG nova.policy [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.936 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.938 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.938 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating image(s)
Feb 25 12:41:29 compute-0 nova_compute[244014]: 2026-02-25 12:41:29.961 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.093 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:30 compute-0 ceph-mon[76335]: pgmap v1809: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 373 op/s
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1795722616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:41:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.106174321 +0000 UTC m=+0.057844143 container create 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.121 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.124 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.071984276 +0000 UTC m=+0.023654128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.190 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
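[editor's note] The qemu-img probe above runs under oslo's prlimit wrapper (capping address space and CPU time) and emits JSON; nova reads the image format and virtual size out of it. A sketch of the same probe without the prlimit guard, assuming the base-image path from the log; "format" and "virtual-size" are the standard qemu-img JSON keys:

    import json
    import os
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    # Force the C locale, as the logged command does, so output is unlocalized.
    env = {**os.environ, "LC_ALL": "C", "LANG": "C"}
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base, "--force-share", "--output=json"], env=env))
    print(info["format"], info["virtual-size"])  # format name and size in bytes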
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.194 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.196 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.196 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:30 compute-0 systemd[1]: Started libpod-conmon-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope.
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.226 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.233 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeced0b2-f2b3-4012-b740-eaa411f99631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.296252655 +0000 UTC m=+0.247922477 container init 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.304144088 +0000 UTC m=+0.255813910 container start 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.310492747 +0000 UTC m=+0.262162569 container attach 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:41:30 compute-0 compassionate_goldstine[335920]: 167 167
Feb 25 12:41:30 compute-0 systemd[1]: libpod-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope: Deactivated successfully.
Feb 25 12:41:30 compute-0 conmon[335920]: conmon 2de3d8003e006a4e4bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope/container/memory.events
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.314543791 +0000 UTC m=+0.266213623 container died 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-34b3d86b2f99e1222f8976eb870eaaf474af7411abe0f21a4cb6e99781ce5479-merged.mount: Deactivated successfully.
Feb 25 12:41:30 compute-0 podman[335862]: 2026-02-25 12:41:30.386951255 +0000 UTC m=+0.338621077 container remove 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:41:30 compute-0 systemd[1]: libpod-conmon-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope: Deactivated successfully.
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.569 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Successfully created port: 8627fb46-e938-4a67-9067-fc60794fca05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:41:30 compute-0 podman[335975]: 2026-02-25 12:41:30.570383901 +0000 UTC m=+0.072398584 container create 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.613 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeced0b2-f2b3-4012-b740-eaa411f99631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:30 compute-0 systemd[1]: Started libpod-conmon-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope.
Feb 25 12:41:30 compute-0 podman[335975]: 2026-02-25 12:41:30.526641297 +0000 UTC m=+0.028656010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:30 compute-0 podman[335975]: 2026-02-25 12:41:30.660631618 +0000 UTC m=+0.162646321 container init 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:41:30 compute-0 podman[335975]: 2026-02-25 12:41:30.668814469 +0000 UTC m=+0.170829162 container start 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 12:41:30 compute-0 podman[335975]: 2026-02-25 12:41:30.675810846 +0000 UTC m=+0.177825529 container attach 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.687 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
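[editor's note] The two lines above (the "rbd import" at 12:41:30.233/.613 and this resize) show nova's import-then-resize pattern: the cached base file is imported into the vms pool as a format-2 RBD image, then grown to the flavor's root-disk size (1073741824 bytes = 1 GiB here). Nova performs the resize through the librbd Python binding (rbd_utils.resize); a CLI-equivalent sketch, using the pool, image name, and credentials from the log:

    import subprocess

    pool = "vms"
    image = "aeced0b2-f2b3-4012-b740-eaa411f99631_disk"
    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the flat base file as a format-2 (layering-capable) image.
    subprocess.check_call(
        ["rbd", "import", "--pool", pool, base, image, "--image-format=2"] + auth)

    # Step 2: grow it to the flavor's root disk. rbd's --size is in MiB by
    # default, so 1 GiB (1073741824 bytes) is --size 1024.
    subprocess.check_call(
        ["rbd", "resize", "--pool", pool, "--image", image, "--size", "1024"] + auth)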
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.781 244018 DEBUG nova.objects.instance [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.800 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.801 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Ensure instance console log exists: /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.801 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.802 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.802 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:30 compute-0 nova_compute[244014]: 2026-02-25 12:41:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:41:30
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'vms']
Feb 25 12:41:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:41:31 compute-0 brave_bardeen[335994]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:41:31 compute-0 brave_bardeen[335994]: --> All data devices are unavailable
Feb 25 12:41:31 compute-0 systemd[1]: libpod-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope: Deactivated successfully.
Feb 25 12:41:31 compute-0 podman[335975]: 2026-02-25 12:41:31.107681463 +0000 UTC m=+0.609696146 container died 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7-merged.mount: Deactivated successfully.
Feb 25 12:41:31 compute-0 podman[335975]: 2026-02-25 12:41:31.160961836 +0000 UTC m=+0.662976529 container remove 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:41:31 compute-0 systemd[1]: libpod-conmon-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope: Deactivated successfully.
Feb 25 12:41:31 compute-0 sudo[335804]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:31 compute-0 sudo[336096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:41:31 compute-0 sudo[336096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:31 compute-0 sudo[336096]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:31 compute-0 sudo[336121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:41:31 compute-0 sudo[336121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
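[editor's note] The "ceph-volume lvm list --format json" run sudo'd above prints a map keyed by OSD id, as the serene_newton container output further below shows; each entry carries the LV path plus the ceph.* tags. A sketch of reducing that blob to an osd_id -> device summary, assuming the JSON has already been captured as a string:

    import json

    def summarize(lvm_list_json):
        """Map osd_id -> (lv_path, osd_fsid) from `ceph-volume lvm list --format json`."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                # Each OSD here has a single bluestore "block" LV, per the
                # ceph.type tag in the output below.
                if lv.get("type") == "block":
                    out[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return out

    # On this host the result would include, e.g.:
    # {0: ("/dev/ceph_vg0/ceph_lv0", "d19afe3c-7923-4776-bcc2-88886150b441"), ...}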
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.420 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Successfully updated port: 8627fb46-e938-4a67-9067-fc60794fca05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.58881938 +0000 UTC m=+0.038134597 container create b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:41:31 compute-0 systemd[1]: Started libpod-conmon-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope.
Feb 25 12:41:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.662349685 +0000 UTC m=+0.111664942 container init b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.664 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023276.6630564, bd882e8f-3bea-4fa6-a185-186bc9911267 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.664 244018 INFO nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Stopped (Lifecycle Event)
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.572050627 +0000 UTC m=+0.021365874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.668969652 +0000 UTC m=+0.118284879 container start b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:41:31 compute-0 systemd[1]: libpod-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope: Deactivated successfully.
Feb 25 12:41:31 compute-0 affectionate_austin[336176]: 167 167
Feb 25 12:41:31 compute-0 conmon[336176]: conmon b7bcc283ab9713268d50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope/container/memory.events
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.679837859 +0000 UTC m=+0.129153106 container attach b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.680737244 +0000 UTC m=+0.130052471 container died b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-275b4b28fedf72aee5d03dba8906423381562040981d0f40573811a8898e2619-merged.mount: Deactivated successfully.
Feb 25 12:41:31 compute-0 podman[336159]: 2026-02-25 12:41:31.724678874 +0000 UTC m=+0.173994101 container remove b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:41:31 compute-0 systemd[1]: libpod-conmon-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope: Deactivated successfully.
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.743 244018 DEBUG nova.compute.manager [None req-4c28932b-889c-484e-8941-476d1e92da68 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG nova.compute.manager [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-changed-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG nova.compute.manager [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Refreshing instance network info cache due to event network-changed-8627fb46-e938-4a67-9067-fc60794fca05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:31 compute-0 nova_compute[244014]: 2026-02-25 12:41:31.822 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:41:31 compute-0 podman[336200]: 2026-02-25 12:41:31.87299756 +0000 UTC m=+0.034739132 container create a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:41:31 compute-0 systemd[1]: Started libpod-conmon-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope.
Feb 25 12:41:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:31 compute-0 podman[336200]: 2026-02-25 12:41:31.858571953 +0000 UTC m=+0.020313545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:31 compute-0 podman[336200]: 2026-02-25 12:41:31.955814577 +0000 UTC m=+0.117556189 container init a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:41:31 compute-0 podman[336200]: 2026-02-25 12:41:31.960619602 +0000 UTC m=+0.122361194 container start a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:41:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:41:32 compute-0 podman[336200]: 2026-02-25 12:41:32.013805543 +0000 UTC m=+0.175547425 container attach a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.052 244018 INFO nova.compute.manager [None req-e29bc8b6-b0d1-41b8-9782-144ac0cd2dd5 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.060 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:41:32 compute-0 ceph-mon[76335]: pgmap v1810: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 12:41:32 compute-0 serene_newton[336218]: {
Feb 25 12:41:32 compute-0 serene_newton[336218]:     "0": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:         {
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "devices": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "/dev/loop3"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             ],
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_name": "ceph_lv0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_size": "21470642176",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "name": "ceph_lv0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "tags": {
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_name": "ceph",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.crush_device_class": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.encrypted": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.objectstore": "bluestore",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_id": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.vdo": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.with_tpm": "0"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             },
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "vg_name": "ceph_vg0"
Feb 25 12:41:32 compute-0 serene_newton[336218]:         }
Feb 25 12:41:32 compute-0 serene_newton[336218]:     ],
Feb 25 12:41:32 compute-0 serene_newton[336218]:     "1": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:         {
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "devices": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "/dev/loop4"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             ],
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_name": "ceph_lv1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_size": "21470642176",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "name": "ceph_lv1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "tags": {
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_name": "ceph",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.crush_device_class": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.encrypted": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.objectstore": "bluestore",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_id": "1",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.vdo": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.with_tpm": "0"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             },
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "vg_name": "ceph_vg1"
Feb 25 12:41:32 compute-0 serene_newton[336218]:         }
Feb 25 12:41:32 compute-0 serene_newton[336218]:     ],
Feb 25 12:41:32 compute-0 serene_newton[336218]:     "2": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:         {
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "devices": [
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "/dev/loop5"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             ],
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_name": "ceph_lv2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_size": "21470642176",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "name": "ceph_lv2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "tags": {
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.cluster_name": "ceph",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.crush_device_class": "",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.encrypted": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.objectstore": "bluestore",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osd_id": "2",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.vdo": "0",
Feb 25 12:41:32 compute-0 serene_newton[336218]:                 "ceph.with_tpm": "0"
Feb 25 12:41:32 compute-0 serene_newton[336218]:             },
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "type": "block",
Feb 25 12:41:32 compute-0 serene_newton[336218]:             "vg_name": "ceph_vg2"
Feb 25 12:41:32 compute-0 serene_newton[336218]:         }
Feb 25 12:41:32 compute-0 serene_newton[336218]:     ]
Feb 25 12:41:32 compute-0 serene_newton[336218]: }
Feb 25 12:41:32 compute-0 systemd[1]: libpod-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope: Deactivated successfully.
Feb 25 12:41:32 compute-0 podman[336200]: 2026-02-25 12:41:32.229975374 +0000 UTC m=+0.391716956 container died a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f-merged.mount: Deactivated successfully.
Feb 25 12:41:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 423 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 3.7 MiB/s wr, 175 op/s
Feb 25 12:41:32 compute-0 podman[336200]: 2026-02-25 12:41:32.282967589 +0000 UTC m=+0.444709161 container remove a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:41:32 compute-0 systemd[1]: libpod-conmon-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope: Deactivated successfully.
Feb 25 12:41:32 compute-0 sudo[336121]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:32 compute-0 sudo[336237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:41:32 compute-0 sudo[336237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:32 compute-0 sudo[336237]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:32 compute-0 sudo[336262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:41:32 compute-0 sudo[336262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.69973905 +0000 UTC m=+0.037255912 container create 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.710 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:32 compute-0 systemd[1]: Started libpod-conmon-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope.
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.736 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance network_info: |[{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Refreshing network info cache for port 8627fb46-e938-4a67-9067-fc60794fca05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.740 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start _get_guest_xml network_info=[{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.746 244018 WARNING nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.754 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.755 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.761 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.761 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.763 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.771204897 +0000 UTC m=+0.108721769 container init 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.776544018 +0000 UTC m=+0.114060880 container start 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.681784074 +0000 UTC m=+0.019300966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:32 compute-0 serene_thompson[336314]: 167 167
Feb 25 12:41:32 compute-0 systemd[1]: libpod-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope: Deactivated successfully.
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.784024459 +0000 UTC m=+0.121541341 container attach 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.784520773 +0000 UTC m=+0.122037645 container died 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.796 244018 DEBUG nova.compute.manager [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG nova.compute.manager [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9154efe256526b705e9a04ebd83bdd23c810ae0e68e7dd559168601ee20caef6-merged.mount: Deactivated successfully.
Feb 25 12:41:32 compute-0 podman[336298]: 2026-02-25 12:41:32.826097426 +0000 UTC m=+0.163614288 container remove 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:41:32 compute-0 systemd[1]: libpod-conmon-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope: Deactivated successfully.
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.842 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.842 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.843 244018 INFO nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Terminating instance
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.844 244018 DEBUG nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:41:32 compute-0 kernel: tapca235479-72 (unregistering): left promiscuous mode
Feb 25 12:41:32 compute-0 NetworkManager[49836]: <info>  [1772023292.8954] device (tapca235479-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:41:32 compute-0 ovn_controller[147040]: 2026-02-25T12:41:32Z|01010|binding|INFO|Releasing lport ca235479-72b7-40ba-b8e5-d8d3227079fc from this chassis (sb_readonly=0)
Feb 25 12:41:32 compute-0 ovn_controller[147040]: 2026-02-25T12:41:32Z|01011|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc down in Southbound
Feb 25 12:41:32 compute-0 ovn_controller[147040]: 2026-02-25T12:41:32Z|01012|binding|INFO|Removing iface tapca235479-72 ovn-installed in OVS
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:32 compute-0 nova_compute[244014]: 2026-02-25 12:41:32.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.911 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:65:4a 10.100.0.7'], port_security=['fa:16:3e:c8:65:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e80d2-c200-4c24-8154-e1a86d06946b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ffae040-354a-4380-b0b9-a74c45661bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf47c8ae-ac17-4dbe-9374-1ad52fb8ff7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861eb98d-f02a-4401-a60e-4c578f807c6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca235479-72b7-40ba-b8e5-d8d3227079fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.913 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca235479-72b7-40ba-b8e5-d8d3227079fc in datapath 2ffae040-354a-4380-b0b9-a74c45661bf1 unbound from our chassis
Feb 25 12:41:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.915 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ffae040-354a-4380-b0b9-a74c45661bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:41:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2188e67a-6109-498c-9327-42f350482c02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.917 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 namespace which is not needed anymore
Feb 25 12:41:32 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 25 12:41:32 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Consumed 12.207s CPU time.
Feb 25 12:41:32 compute-0 systemd-machined[210048]: Machine qemu-129-instance-00000066 terminated.
Feb 25 12:41:32 compute-0 podman[336360]: 2026-02-25 12:41:32.969783331 +0000 UTC m=+0.039441044 container create c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:41:33 compute-0 systemd[1]: Started libpod-conmon-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope.
Feb 25 12:41:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:41:33 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : haproxy version is 2.8.14-c23fe91
Feb 25 12:41:33 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : path to executable is /usr/sbin/haproxy
Feb 25 12:41:33 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [WARNING]  (334881) : Exiting Master process...
Feb 25 12:41:33 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [ALERT]    (334881) : Current worker (334901) exited with code 143 (Terminated)
Feb 25 12:41:33 compute-0 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [WARNING]  (334881) : All workers exited. Exiting... (0)
Feb 25 12:41:33 compute-0 systemd[1]: libpod-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope: Deactivated successfully.
Feb 25 12:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:41:33 compute-0 podman[336360]: 2026-02-25 12:41:32.953362168 +0000 UTC m=+0.023019901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:41:33 compute-0 podman[336395]: 2026-02-25 12:41:33.050580441 +0000 UTC m=+0.048850449 container died 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 podman[336360]: 2026-02-25 12:41:33.07853228 +0000 UTC m=+0.148190003 container init c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:41:33 compute-0 podman[336360]: 2026-02-25 12:41:33.08667998 +0000 UTC m=+0.156337703 container start c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.088 244018 INFO nova.virt.libvirt.driver [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance destroyed successfully.
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.090 244018 DEBUG nova.objects.instance [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:33 compute-0 podman[336360]: 2026-02-25 12:41:33.091705202 +0000 UTC m=+0.161362935 container attach c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1-userdata-shm.mount: Deactivated successfully.
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.105 244018 DEBUG nova.virt.libvirt.vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.105 244018 DEBUG nova.network.os_vif_util [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.106 244018 DEBUG nova.network.os_vif_util [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.106 244018 DEBUG os_vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca235479-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.116 244018 INFO os_vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72')
Feb 25 12:41:33 compute-0 podman[336395]: 2026-02-25 12:41:33.123794567 +0000 UTC m=+0.122064575 container cleanup 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:41:33 compute-0 systemd[1]: libpod-conmon-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope: Deactivated successfully.
Feb 25 12:41:33 compute-0 podman[336455]: 2026-02-25 12:41:33.197744994 +0000 UTC m=+0.053581963 container remove 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f80911fd-07ed-44a5-9f76-5f5bf94f1aa7]: (4, ('Wed Feb 25 12:41:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 (73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1)\n73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1\nWed Feb 25 12:41:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 (73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1)\n73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.204 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26499fac-ae85-4627-9bc1-931ad6f7c36e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.205 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ffae040-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:33 compute-0 kernel: tap2ffae040-30: left promiscuous mode
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c6ed9-8115-4907-9c79-70c7931c274c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e751454d-bfce-4c94-aa9b-0ef5941eab06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b6da43-6cab-41e4-95d7-5b6680c1b97c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c222a98-5b02-48c5-900d-5e0e603ee113]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523364, 'reachable_time': 37856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336471, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.253 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:41:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.253 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[227f6246-7fb9-4b7d-a44a-d0aa84996126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-308c3469f7246fcc17ea83f23367abb6b8fd53a71b962f6f2bf6602aad8594fe-merged.mount: Deactivated successfully.
Feb 25 12:41:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ffae040\x2d354a\x2d4380\x2db0b9\x2da74c45661bf1.mount: Deactivated successfully.
Feb 25 12:41:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252519309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.386 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.412 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.426 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.595 244018 INFO nova.virt.libvirt.driver [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deleting instance files /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b_del
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.596 244018 INFO nova.virt.libvirt.driver [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deletion of /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b_del complete
Feb 25 12:41:33 compute-0 lvm[336583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:41:33 compute-0 lvm[336583]: VG ceph_vg0 finished
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.674 244018 INFO nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 0.83 seconds to destroy the instance on the hypervisor.
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.674 244018 DEBUG oslo.service.loopingcall [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.675 244018 DEBUG nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.675 244018 DEBUG nova.network.neutron [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:41:33 compute-0 lvm[336585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:41:33 compute-0 lvm[336585]: VG ceph_vg1 finished
Feb 25 12:41:33 compute-0 lvm[336586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:41:33 compute-0 lvm[336586]: VG ceph_vg2 finished
Feb 25 12:41:33 compute-0 competent_sammet[336408]: {}
Feb 25 12:41:33 compute-0 systemd[1]: libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Deactivated successfully.
Feb 25 12:41:33 compute-0 systemd[1]: libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Consumed 1.029s CPU time.
Feb 25 12:41:33 compute-0 conmon[336408]: conmon c0707c7ff97707f68876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope/container/memory.events
Feb 25 12:41:33 compute-0 podman[336360]: 2026-02-25 12:41:33.841584113 +0000 UTC m=+0.911241826 container died c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:33 compute-0 nova_compute[244014]: 2026-02-25 12:41:33.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:41:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380833841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.007 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.008 244018 DEBUG nova.virt.libvirt.vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.009 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.010 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.011 244018 DEBUG nova.objects.instance [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.028 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <uuid>aeced0b2-f2b3-4012-b740-eaa411f99631</uuid>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <name>instance-00000069</name>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-131630496</nova:name>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:32</nova:creationTime>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <nova:port uuid="8627fb46-e938-4a67-9067-fc60794fca05">
Feb 25 12:41:34 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="serial">aeced0b2-f2b3-4012-b740-eaa411f99631</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="uuid">aeced0b2-f2b3-4012-b740-eaa411f99631</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeced0b2-f2b3-4012-b740-eaa411f99631_disk">
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config">
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:34 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:9d:4f:fa"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <target dev="tap8627fb46-e9"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/console.log" append="off"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:34 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:34 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:34 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:34 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:34 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Preparing to wait for external event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.031 244018 DEBUG nova.virt.libvirt.vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.031 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.032 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.032 244018 DEBUG os_vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.033 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.034 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8627fb46-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8627fb46-e9, col_values=(('external_ids', {'iface-id': '8627fb46-e938-4a67-9067-fc60794fca05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:4f:fa', 'vm-uuid': 'aeced0b2-f2b3-4012-b740-eaa411f99631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:34 compute-0 NetworkManager[49836]: <info>  [1772023294.0398] manager: (tap8627fb46-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.045 244018 INFO os_vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9')
Feb 25 12:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9-merged.mount: Deactivated successfully.
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:9d:4f:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.117 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Using config drive
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.156 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:34 compute-0 ceph-mon[76335]: pgmap v1811: 305 pgs: 305 active+clean; 423 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 3.7 MiB/s wr, 175 op/s
Feb 25 12:41:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1252519309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2380833841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Feb 25 12:41:34 compute-0 podman[336589]: 2026-02-25 12:41:34.315049185 +0000 UTC m=+0.461460474 container remove c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:41:34 compute-0 systemd[1]: libpod-conmon-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Deactivated successfully.
Feb 25 12:41:34 compute-0 sudo[336262]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:41:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.525 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:41:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.527 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.565 244018 DEBUG nova.network.neutron [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.590 244018 INFO nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 0.91 seconds to deallocate network for instance.
Feb 25 12:41:34 compute-0 sudo[336626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:41:34 compute-0 sudo[336626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:41:34 compute-0 sudo[336626]: pam_unix(sudo:session): session closed for user root
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.663 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.664 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.832 244018 DEBUG oslo_concurrency.processutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.911 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.911 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 WARNING nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state deleted and task_state None.
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 WARNING nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state deleted and task_state None.
Feb 25 12:41:34 compute-0 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-deleted-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.077 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating config drive at /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.080 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc_4e8tzp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.119 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.120 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.123 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updated VIF entry in instance network info cache for port 8627fb46-e938-4a67-9067-fc60794fca05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.123 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.167 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.169 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.225 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc_4e8tzp" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.252 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.258 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529161144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.406 244018 DEBUG oslo_concurrency.processutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.411 244018 DEBUG nova.compute.provider_tree [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.433 244018 DEBUG nova.scheduler.client.report [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.468 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.508 244018 INFO nova.scheduler.client.report [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance bb4e80d2-c200-4c24-8154-e1a86d06946b
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.622 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:35 compute-0 ceph-mon[76335]: pgmap v1812: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Feb 25 12:41:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:41:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1529161144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023280.7955923, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.797 244018 INFO nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Stopped (Lifecycle Event)
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.816 244018 DEBUG nova.compute.manager [None req-f70d302d-efb6-4c23-93ef-008f73191e0a - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.957 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:35 compute-0 nova_compute[244014]: 2026-02-25 12:41:35.958 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deleting local config drive /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config because it was imported into RBD.
Feb 25 12:41:35 compute-0 kernel: tap8627fb46-e9: entered promiscuous mode
Feb 25 12:41:35 compute-0 NetworkManager[49836]: <info>  [1772023295.9982] manager: (tap8627fb46-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Feb 25 12:41:36 compute-0 ovn_controller[147040]: 2026-02-25T12:41:35Z|01013|binding|INFO|Claiming lport 8627fb46-e938-4a67-9067-fc60794fca05 for this chassis.
Feb 25 12:41:36 compute-0 ovn_controller[147040]: 2026-02-25T12:41:35Z|01014|binding|INFO|8627fb46-e938-4a67-9067-fc60794fca05: Claiming fa:16:3e:9d:4f:fa 10.100.0.13
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:36 compute-0 ovn_controller[147040]: 2026-02-25T12:41:36Z|01015|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 ovn-installed in OVS
Feb 25 12:41:36 compute-0 ovn_controller[147040]: 2026-02-25T12:41:36Z|01016|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 up in Southbound
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.007 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:fa 10.100.0.13'], port_security=['fa:16:3e:9d:4f:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aeced0b2-f2b3-4012-b740-eaa411f99631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8627fb46-e938-4a67-9067-fc60794fca05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.009 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8627fb46-e938-4a67-9067-fc60794fca05 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.010 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.020 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f95cfd77-bdfa-4a00-a16d-753e5cbc784c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 systemd-machined[210048]: New machine qemu-132-instance-00000069.
Feb 25 12:41:36 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.041 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3d494c14-e53a-495b-a3c5-b887265cdf51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.043 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e683d9-ab14-4817-b49d-8fdc88fd99a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 systemd-udevd[336731]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.063 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdfbc2a-5624-4ea0-9370-1544c247fb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 NetworkManager[49836]: <info>  [1772023296.0723] device (tap8627fb46-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:41:36 compute-0 NetworkManager[49836]: <info>  [1772023296.0729] device (tap8627fb46-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adb76595-12cf-4497-8c0d-47f04f046707]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336733, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.092 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[976c7dce-5115-4f41-99f6-f892885bf318]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336738, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336738, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.094 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.097 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.097 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.743 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023296.7430825, aeced0b2-f2b3-4012-b740-eaa411f99631 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:36 compute-0 nova_compute[244014]: 2026-02-25 12:41:36.744 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Started (Lifecycle Event)
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.136 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.140 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023296.7463038, aeced0b2-f2b3-4012-b740-eaa411f99631 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Paused (Lifecycle Event)
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.195 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.200 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.228 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.240 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.240 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.241 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.241 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.242 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Processing event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.242 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.244 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.245 244018 WARNING nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received unexpected event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with vm_state building and task_state spawning.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.246 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.250 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.252 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023297.25038, aeced0b2-f2b3-4012-b740-eaa411f99631 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.252 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Resumed (Lifecycle Event)
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.257 244018 INFO nova.virt.libvirt.driver [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance spawned successfully.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.258 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.280 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.288 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.289 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.291 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.291 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.292 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.293 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.301 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.334 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.361 244018 INFO nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 7.42 seconds to spawn the instance on the hypervisor.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.362 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.481 244018 INFO nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 8.71 seconds to build instance.
Feb 25 12:41:37 compute-0 nova_compute[244014]: 2026-02-25 12:41:37.511 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:37 compute-0 ceph-mon[76335]: pgmap v1813: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Feb 25 12:41:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Feb 25 12:41:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:39 compute-0 nova_compute[244014]: 2026-02-25 12:41:39.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:39 compute-0 ceph-mon[76335]: pgmap v1814: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Feb 25 12:41:40 compute-0 nova_compute[244014]: 2026-02-25 12:41:40.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Feb 25 12:41:40 compute-0 ovn_controller[147040]: 2026-02-25T12:41:40Z|01017|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 12:41:40 compute-0 ovn_controller[147040]: 2026-02-25T12:41:40Z|01018|binding|INFO|Releasing lport 895bc3cc-c38d-425b-b005-1acb3139bbee from this chassis (sb_readonly=0)
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.757 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.758 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.760 244018 INFO nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Terminating instance
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.762 244018 DEBUG nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:41:41 compute-0 kernel: tap8627fb46-e9 (unregistering): left promiscuous mode
Feb 25 12:41:41 compute-0 NetworkManager[49836]: <info>  [1772023301.9090] device (tap8627fb46-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 ovn_controller[147040]: 2026-02-25T12:41:41Z|01019|binding|INFO|Releasing lport 8627fb46-e938-4a67-9067-fc60794fca05 from this chassis (sb_readonly=0)
Feb 25 12:41:41 compute-0 ovn_controller[147040]: 2026-02-25T12:41:41Z|01020|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 down in Southbound
Feb 25 12:41:41 compute-0 ovn_controller[147040]: 2026-02-25T12:41:41Z|01021|binding|INFO|Removing iface tap8627fb46-e9 ovn-installed in OVS
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:fa 10.100.0.13'], port_security=['fa:16:3e:9d:4f:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aeced0b2-f2b3-4012-b740-eaa411f99631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8627fb46-e938-4a67-9067-fc60794fca05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.926 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8627fb46-e938-4a67-9067-fc60794fca05 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.928 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8aebae-715f-4d0a-b4aa-6b61866a36e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:41 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Feb 25 12:41:41 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 5.218s CPU time.
Feb 25 12:41:41 compute-0 systemd-machined[210048]: Machine qemu-132-instance-00000069 terminated.
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.960 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[850a2c84-d829-4c86-8cf0-b2626cb55995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.963 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[21625381-5649-4c93-b599-3a0a9eecfe24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.981 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e36afa62-63ca-47fa-944a-2c43df4ccd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.993 244018 INFO nova.virt.libvirt.driver [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance destroyed successfully.
Feb 25 12:41:41 compute-0 nova_compute[244014]: 2026-02-25 12:41:41.993 244018 DEBUG nova.objects.instance [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce6b351-4802-4a99-be4b-ae7871e4da73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336804, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27b3f3e0-ad9d-4394-bc1e-fc50ea6b68b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336807, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336807, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.009 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.015 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.017 244018 DEBUG nova.virt.libvirt.vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:40Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.018 244018 DEBUG nova.network.os_vif_util [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.019 244018 DEBUG nova.network.os_vif_util [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.019 244018 DEBUG os_vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.020 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8627fb46-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.026 244018 INFO os_vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9')
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.111 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.111 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:42 compute-0 ceph-mon[76335]: pgmap v1815: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.136 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.149 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.151 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.227 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.227 244018 INFO nova.compute.claims [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.397 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018777258889690822 of space, bias 1.0, pg target 0.5633177666907246 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493855278962539 of space, bias 1.0, pg target 0.7481565836887617 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0166595462958793e-06 of space, bias 4.0, pg target 0.0012199914555550552 quantized to 16 (current 16)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:41:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:41:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089410437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.977 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.982 244018 DEBUG nova.compute.provider_tree [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:42 compute-0 nova_compute[244014]: 2026-02-25 12:41:42.998 244018 DEBUG nova.scheduler.client.report [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.019 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.020 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.096 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.096 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.115 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.133 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:41:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4089410437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.237 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.239 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.239 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating image(s)
Feb 25 12:41:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.293 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.316 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.347 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.352 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.433 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.454 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.457 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 227efbfe-da43-423a-8652-9636ecded4cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:43 compute-0 nova_compute[244014]: 2026-02-25 12:41:43.802 244018 DEBUG nova.policy [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.052 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 227efbfe-da43-423a-8652-9636ecded4cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.102 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.225 244018 DEBUG nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 WARNING nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received unexpected event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with vm_state active and task_state deleting.
Feb 25 12:41:44 compute-0 ceph-mon[76335]: pgmap v1816: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Feb 25 12:41:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 114 op/s
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.570 244018 DEBUG nova.objects.instance [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.584 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.584 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Ensure instance console log exists: /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.742 244018 INFO nova.virt.libvirt.driver [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deleting instance files /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631_del
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.744 244018 INFO nova.virt.libvirt.driver [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deletion of /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631_del complete
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.799 244018 INFO nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 3.04 seconds to destroy the instance on the hypervisor.
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.800 244018 DEBUG oslo.service.loopingcall [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.801 244018 DEBUG nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:41:44 compute-0 nova_compute[244014]: 2026-02-25 12:41:44.801 244018 DEBUG nova.network.neutron [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.274 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Successfully created port: ee68d04f-36ba-4727-ab4b-c31e559353e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:41:45 compute-0 ceph-mon[76335]: pgmap v1817: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 114 op/s
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.640 244018 DEBUG nova.network.neutron [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.670 244018 INFO nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 0.87 seconds to deallocate network for instance.
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.764 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.765 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:45 compute-0 nova_compute[244014]: 2026-02-25 12:41:45.869 244018 DEBUG oslo_concurrency.processutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.177 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Successfully updated port: ee68d04f-36ba-4727-ab4b-c31e559353e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.201 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.201 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.202 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:41:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 103 op/s
Feb 25 12:41:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076591774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.407 244018 DEBUG oslo_concurrency.processutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.413 244018 DEBUG nova.compute.provider_tree [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG nova.compute.manager [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG nova.compute.manager [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.425 244018 DEBUG nova.compute.manager [req-8b4b6247-825e-4794-a3ea-c3adff770e5a req-d819805a-6e3a-4e6a-8b0b-ae7162bd54da 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-deleted-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.434 244018 DEBUG nova.scheduler.client.report [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.461 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4076591774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.494 244018 INFO nova.scheduler.client.report [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance aeced0b2-f2b3-4012-b740-eaa411f99631
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.562 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:41:46 compute-0 nova_compute[244014]: 2026-02-25 12:41:46.577 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:47 compute-0 nova_compute[244014]: 2026-02-25 12:41:47.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:47 compute-0 ceph-mon[76335]: pgmap v1818: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 103 op/s
Feb 25 12:41:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:41:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:41:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:41:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:41:48 compute-0 nova_compute[244014]: 2026-02-25 12:41:48.082 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023293.0807204, bb4e80d2-c200-4c24-8154-e1a86d06946b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:48 compute-0 nova_compute[244014]: 2026-02-25 12:41:48.082 244018 INFO nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Stopped (Lifecycle Event)
Feb 25 12:41:48 compute-0 nova_compute[244014]: 2026-02-25 12:41:48.108 244018 DEBUG nova.compute.manager [None req-097dafce-faf2-4e4d-bf16-9e163a894dd0 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 12:41:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:41:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.197 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.213 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.213 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance network_info: |[{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.214 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.214 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.216 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start _get_guest_xml network_info=[{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.222 244018 WARNING nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.226 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.226 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.229 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.233 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.236 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:49 compute-0 ceph-mon[76335]: pgmap v1819: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 12:41:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694417328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.782 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.803 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:49 compute-0 nova_compute[244014]: 2026-02-25 12:41:49.807 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 12:41:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062132063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.327 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.330 244018 DEBUG nova.virt.libvirt.vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:43Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.331 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.332 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.334 244018 DEBUG nova.objects.instance [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.352 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <uuid>227efbfe-da43-423a-8652-9636ecded4cd</uuid>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <name>instance-0000006a</name>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1104420481</nova:name>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:49</nova:creationTime>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <nova:port uuid="ee68d04f-36ba-4727-ab4b-c31e559353e0">
Feb 25 12:41:50 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb7:749a" ipVersion="6"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="serial">227efbfe-da43-423a-8652-9636ecded4cd</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="uuid">227efbfe-da43-423a-8652-9636ecded4cd</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/227efbfe-da43-423a-8652-9636ecded4cd_disk">
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/227efbfe-da43-423a-8652-9636ecded4cd_disk.config">
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b7:74:9a"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <target dev="tapee68d04f-36"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/console.log" append="off"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:50 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:50 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:50 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:50 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:50 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.354 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Preparing to wait for external event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.354 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.355 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.355 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.356 244018 DEBUG nova.virt.libvirt.vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:43Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.356 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.357 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.357 244018 DEBUG os_vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.358 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.359 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee68d04f-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee68d04f-36, col_values=(('external_ids', {'iface-id': 'ee68d04f-36ba-4727-ab4b-c31e559353e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:74:9a', 'vm-uuid': '227efbfe-da43-423a-8652-9636ecded4cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:50 compute-0 NetworkManager[49836]: <info>  [1772023310.3648] manager: (tapee68d04f-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.369 244018 INFO os_vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36')
Feb 25 12:41:50 compute-0 podman[337102]: 2026-02-25 12:41:50.456393267 +0000 UTC m=+0.053819160 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.477 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.478 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.495 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.568 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.569 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.570 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:b7:74:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.571 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Using config drive
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.598 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1694417328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2062132063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.771 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.772 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.782 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.782 244018 INFO nova.compute.claims [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:41:50 compute-0 nova_compute[244014]: 2026-02-25 12:41:50.955 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.199 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating config drive at /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.208 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo27n9xj0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.355 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo27n9xj0" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.382 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.386 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config 227efbfe-da43-423a-8652-9636ecded4cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:41:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951979274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.500 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
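This ceph df run came from the resource tracker's claim-time update: with the RBD image backend, nova derives its DISK_GB inventory from cluster-wide Ceph stats rather than from local disk. A sketch of reading the same numbers, assuming the JSON layout of recent Ceph releases (a top-level "stats" object with *_bytes fields; older releases differ):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]  # field names assumed, see lead-in
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])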
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.504 244018 DEBUG nova.compute.provider_tree [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.525 244018 DEBUG nova.scheduler.client.report [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
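Usable capacity for each resource class in that inventory is (total - reserved) * allocation_ratio; min_unit, max_unit and step_size then constrain individual allocations. Worked out for the values logged above:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0        -> up to 32 vCPUs schedulable on 8 physical cores
    # MEMORY_MB 7167.0 -> no memory overcommit (ratio 1.0)
    # DISK_GB 52.2     -> disk deliberately undercommitted (ratio 0.9)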
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.547 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.548 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
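That cache entry is a JSON list of VIFs; this port is dual-stack, with a SLAAC IPv6 subnet and a DHCP IPv4 subnet on the same OVN network. A small sketch of walking the structure (abridged by hand from the entry above) to pull out the instance addresses:

    # Abridged from the network_info entry logged above.
    vif = {
        "id": "ee68d04f-36ba-4727-ab4b-c31e559353e0",
        "network": {"subnets": [
            {"cidr": "2001:db8::/64",
             "ips": [{"address": "2001:db8::f816:3eff:feb7:749a"}]},
            {"cidr": "10.100.0.0/28",
             "ips": [{"address": "10.100.0.8"}]},
        ]},
    }
    addrs = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]
    print(addrs)  # ['2001:db8::f816:3eff:feb7:749a', '10.100.0.8']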
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.555 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.555 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.577 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.627 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.627 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.665 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.685 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config 227efbfe-da43-423a-8652-9636ecded4cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.685 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deleting local config drive /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config because it was imported into RBD.
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.687 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:51 compute-0 ceph-mon[76335]: pgmap v1820: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 12:41:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2951979274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:41:51 compute-0 kernel: tapee68d04f-36: entered promiscuous mode
Feb 25 12:41:51 compute-0 NetworkManager[49836]: <info>  [1772023311.7314] manager: (tapee68d04f-36): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Feb 25 12:41:51 compute-0 ovn_controller[147040]: 2026-02-25T12:41:51Z|01022|binding|INFO|Claiming lport ee68d04f-36ba-4727-ab4b-c31e559353e0 for this chassis.
Feb 25 12:41:51 compute-0 ovn_controller[147040]: 2026-02-25T12:41:51Z|01023|binding|INFO|ee68d04f-36ba-4727-ab4b-c31e559353e0: Claiming fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], port_security=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:feb7:749a/64', 'neutron:device_id': '227efbfe-da43-423a-8652-9636ecded4cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee68d04f-36ba-4727-ab4b-c31e559353e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.739 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee68d04f-36ba-4727-ab4b-c31e559353e0 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 bound to our chassis
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.740 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 12:41:51 compute-0 ovn_controller[147040]: 2026-02-25T12:41:51Z|01024|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 ovn-installed in OVS
Feb 25 12:41:51 compute-0 ovn_controller[147040]: 2026-02-25T12:41:51Z|01025|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 up in Southbound
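ovn-controller has now claimed the lport for this chassis, marked it ovn-installed in the local OVS database, and set it up in the Southbound DB; that up transition is what neutron later reports to nova as the network-vif-plugged event (received at 12:41:55 below). One way to inspect the same binding state, assuming ovn-sbctl on this host can reach the Southbound DB:

    import subprocess

    lport = "ee68d04f-36ba-4727-ab4b-c31e559353e0"
    subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         f"logical_port={lport}"],
        check=True,
    )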
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:51 compute-0 podman[337202]: 2026-02-25 12:41:51.75417756 +0000 UTC m=+0.100679962 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe71152-c237-45fc-8f7a-bae202405389]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:51 compute-0 systemd-udevd[337241]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:41:51 compute-0 systemd-machined[210048]: New machine qemu-133-instance-0000006a.
Feb 25 12:41:51 compute-0 NetworkManager[49836]: <info>  [1772023311.7705] device (tapee68d04f-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:41:51 compute-0 NetworkManager[49836]: <info>  [1772023311.7710] device (tapee68d04f-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:41:51 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.781 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86a1c770-04c3-41e0-a939-e60e82984cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.785 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6788317-7a87-41ff-b6c2-98a6b109bcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.799 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.800 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.800 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating image(s)
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.815 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b655281-ed4c-460d-99ee-d5d8cb9aba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.830 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[181ae194-1f03-4f6d-9cea-2fb9dc3fd0de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337269, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.855 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3062fc-40d2-4666-a286-b5c15061b604]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524306, 'tstamp': 524306}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337281, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524308, 'tstamp': 524308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337281, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
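The two RTM_NEWADDR records show the agent's tap device inside the ovnmeta-9ec47b46-... namespace carrying both a subnet address (10.100.0.2/28) and the well-known metadata address 169.254.169.254/32; this namespace is where the per-network metadata proxy answers instance requests. A quick check of the same state, assuming the ovnmeta-<network-uuid> naming convention and root privileges:

    import subprocess

    ns = "ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1"
    subprocess.run(
        ["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
        check=True,
    )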
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.856 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.857 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.860 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.860 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.861 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.861 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.884 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.889 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.917 244018 DEBUG nova.policy [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.953 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
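Nova runs qemu-img info under oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so that a corrupt or hostile image cannot wedge the compute service while being probed. A sketch of the same call through the library API, with a hypothetical image path:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/tmp/example.qcow2",  # hypothetical path
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(
            address_space=1024 ** 3,  # --as=1073741824
            cpu_time=30,              # --cpu=30
        ),
    )
    info = json.loads(out)
    print(info["format"], info["virtual-size"])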
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.954 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.954 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.955 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.974 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:51 compute-0 nova_compute[244014]: 2026-02-25 12:41:51.978 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8d9acc4-7912-4d26-bad6-1159f6993361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.249 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023312.2485483, 227efbfe-da43-423a-8652-9636ecded4cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.249 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Started (Lifecycle Event)
Feb 25 12:41:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.286 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.290 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023312.2517533, 227efbfe-da43-423a-8652-9636ecded4cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.290 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Paused (Lifecycle Event)
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.309 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.311 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
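The numeric states in that line map to nova.compute.power_state constants; assuming the standard values for this branch:

    # nova/compute/power_state.py (assumed unchanged in this build)
    POWER_STATES = {
        0: "NOSTATE",    # DB power_state: nothing recorded yet
        1: "RUNNING",
        3: "PAUSED",     # VM power_state: libvirt guests start paused mid-spawn
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

Because the task_state is still spawning, the handler skips the sync (next line) rather than forcing the instance record to PAUSED.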
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.340 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.499 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Successfully created port: 6178078f-c3e6-4ed1-86fa-d99671618a75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.580 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8d9acc4-7912-4d26-bad6-1159f6993361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.662 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
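The resize target is the flavor root-disk size converted to bytes: 1073741824 B is exactly 1 GiB, i.e. this flavor has root_gb=1 and the imported base image is being grown to the flavor size. The same conversion with oslo.utils:

    from oslo_utils import units

    root_gb = 1  # hypothetical flavor root_gb matching the logged resize
    assert root_gb * units.Gi == 1073741824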
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.892 244018 DEBUG nova.objects.instance [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Ensure instance console log exists: /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.912 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:52 compute-0 nova_compute[244014]: 2026-02-25 12:41:52.912 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:53 compute-0 ceph-mon[76335]: pgmap v1821: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.761 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Successfully updated port: 6178078f-c3e6-4ed1-86fa-d99671618a75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.778 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.779 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.779 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG nova.compute.manager [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-changed-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG nova.compute.manager [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Refreshing instance network info cache due to event network-changed-6178078f-c3e6-4ed1-86fa-d99671618a75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:41:53 compute-0 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:41:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Feb 25 12:41:54 compute-0 nova_compute[244014]: 2026-02-25 12:41:54.595 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.255 244018 DEBUG nova.compute.manager [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.256 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG nova.compute.manager [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Processing event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.258 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.262 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023315.2619576, 227efbfe-da43-423a-8652-9636ecded4cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Resumed (Lifecycle Event)
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.265 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.269 244018 INFO nova.virt.libvirt.driver [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance spawned successfully.
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.270 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.309 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.315 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.315 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.316 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.316 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.317 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.318 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
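Those six defaults are persisted with the instance so its device models (disk, CD-ROM, input, pointer, video, NIC) stay stable across rebuilds and upgrades even if nova's built-in defaults change later. To pin the same choices explicitly on the image instead, the properties can be set in Glance; a hedged sketch using openstacksdk, assuming a clouds.yaml entry named mycloud and a hypothetical IMAGE_UUID (the SDK's image proxy is expected here to treat unknown keyword arguments as image properties):

    import openstack

    conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry
    conn.image.update_image(
        "IMAGE_UUID",  # hypothetical image ID
        hw_disk_bus="virtio",
        hw_cdrom_bus="sata",
        hw_input_bus="usb",
        hw_video_model="virtio",
        hw_vif_model="virtio",
    )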
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.323 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.366 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.385 244018 INFO nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 12.15 seconds to spawn the instance on the hypervisor.
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.385 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.468 244018 INFO nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 13.28 seconds to build instance.
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.471 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.495 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance network_info: |[{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.497 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Refreshing network info cache for port 6178078f-c3e6-4ed1-86fa-d99671618a75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.499 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start _get_guest_xml network_info=[{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.503 244018 WARNING nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.508 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.509 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.515 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.515 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.518 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.518 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
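[The topology trace above reduces to simple arithmetic: with no flavor or image constraints (preferences 0:0:0, limits 65536 each), nova enumerates every sockets*cores*threads factorization of the vCPU count, and for 1 vCPU the only candidate is 1:1:1. A simplified toy illustration of that enumeration follows; possible_topologies is a hypothetical stand-in, not nova's actual code in hardware.py, which additionally handles NUMA cells and topology sorting:]

    # Toy version of the enumeration behind "Build topologies for 1 vcpu(s)":
    # list every factorization of vcpus into sockets * cores * threads that
    # fits inside the logged limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        out = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        out.append((s, c, t))
        return out

    print(possible_topologies(1))   # [(1, 1, 1)], matching "Got 1 possible topologies"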
Feb 25 12:41:55 compute-0 nova_compute[244014]: 2026-02-25 12:41:55.521 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:55 compute-0 ceph-mon[76335]: pgmap v1822: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Feb 25 12:41:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720281530' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.210 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.230 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.235 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Feb 25 12:41:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:41:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872760037' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.768 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
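[Both round trips above run the same CLI that nova's rbd_utils uses to discover the monitor addresses for the rbd disk sources in the domain XML below. The call can be reproduced by hand; a sketch using subprocess with the exact arguments from the log, assuming the client.openstack keyring and /etc/ceph/ceph.conf are in place as they are on this host:]

    import json
    import subprocess

    # Same command nova logs via oslo_concurrency.processutils.execute.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # The JSON mon map lists each monitor with its name and public address.
    mon_map = json.loads(out)
    for mon in mon_map["mons"]:
        print(mon["name"], mon.get("public_addr"))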
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.771 244018 DEBUG nova.virt.libvirt.vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-ServersTestJSON-server-1506093231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:51Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.772 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.773 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.774 244018 DEBUG nova.objects.instance [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.794 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <uuid>b8d9acc4-7912-4d26-bad6-1159f6993361</uuid>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <name>instance-0000006b</name>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:name>tempest-ServersTestJSON-server-1506093231</nova:name>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:41:55</nova:creationTime>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <nova:port uuid="6178078f-c3e6-4ed1-86fa-d99671618a75">
Feb 25 12:41:56 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <system>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="serial">b8d9acc4-7912-4d26-bad6-1159f6993361</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="uuid">b8d9acc4-7912-4d26-bad6-1159f6993361</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </system>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <os>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </os>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <features>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </features>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8d9acc4-7912-4d26-bad6-1159f6993361_disk">
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config">
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:41:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e2:bd:b3"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <target dev="tap6178078f-c3"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/console.log" append="off"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <video>
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </video>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:41:56 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:41:56 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:41:56 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:41:56 compute-0 nova_compute[244014]: </domain>
Feb 25 12:41:56 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
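[The XML dump ending here is the complete domain definition nova hands to libvirt. Outside of nova it could be fed to libvirt directly; a minimal sketch with the libvirt Python bindings, assuming the XML above was saved to a hypothetical domain.xml and the ceph secret 8ac33163-6221-5d58-9a39-8b6933fe7762 it references already exists on the host (nova's driver goes through its own Host/Guest wrappers rather than these raw calls):]

    import libvirt

    # Connect to the same system hypervisor the nova libvirt driver uses.
    conn = libvirt.open("qemu:///system")
    with open("domain.xml") as f:
        xml = f.read()

    # Define the persistent domain, then start it: roughly the step between
    # "End _get_guest_xml" here and "Started Virtual Machine" further below.
    dom = conn.defineXML(xml)
    dom.create()
    print(dom.name(), dom.ID())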
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Preparing to wait for external event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.797 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.797 244018 DEBUG nova.virt.libvirt.vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-ServersTestJSON-server-1506093231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:51Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.798 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.799 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.799 244018 DEBUG os_vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.800 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.801 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6178078f-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6178078f-c3, col_values=(('external_ids', {'iface-id': '6178078f-c3e6-4ed1-86fa-d99671618a75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:bd:b3', 'vm-uuid': 'b8d9acc4-7912-4d26-bad6-1159f6993361'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:56 compute-0 NetworkManager[49836]: <info>  [1772023316.8078] manager: (tap6178078f-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.814 244018 INFO os_vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3')
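[The AddPortCommand/DbSetCommand transaction above is os-vif talking to ovsdb-server over the native ovsdbapp IDL. The same plug can be expressed with ovs-vsctl, which is often easier when reproducing or repairing the state by hand; a sketch with the values taken from the logged transaction (ovs-vsctl must run with enough privilege to reach the OVSDB socket):]

    import subprocess

    port = "tap6178078f-c3"
    # Mirrors AddPortCommand(may_exist=True) plus the DbSetCommand external_ids
    # that let OVN match the interface to the Neutron port and claim it.
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=6178078f-c3e6-4ed1-86fa-d99671618a75",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:e2:bd:b3",
        "external_ids:vm-uuid=b8d9acc4-7912-4d26-bad6-1159f6993361",
    ], check=True)

[Once the iface-id lands in OVSDB, ovn-controller claims the lport, as the binding|INFO lines further below show.]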
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.894 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.895 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.895 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:e2:bd:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:41:56 compute-0 nova_compute[244014]: 2026-02-25 12:41:56.896 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Using config drive
Feb 25 12:41:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1720281530' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2872760037' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.013 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.032 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023301.991494, aeced0b2-f2b3-4012-b740-eaa411f99631 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.033 244018 INFO nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Stopped (Lifecycle Event)
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.058 244018 DEBUG nova.compute.manager [None req-d959d633-2ead-4c83-a701-ee9115081cb4 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.330 244018 DEBUG nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.332 244018 WARNING nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received unexpected event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with vm_state active and task_state None.
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.639 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updated VIF entry in instance network info cache for port 6178078f-c3e6-4ed1-86fa-d99671618a75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.639 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:41:57 compute-0 nova_compute[244014]: 2026-02-25 12:41:57.656 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:41:58 compute-0 ceph-mon[76335]: pgmap v1823: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Feb 25 12:41:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 25 12:41:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.651 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating config drive at /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.655 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd_p86hp6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.788 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd_p86hp6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.811 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:41:59 compute-0 nova_compute[244014]: 2026-02-25 12:41:59.814 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 ceph-mon[76335]: pgmap v1824: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.238 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.239 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deleting local config drive /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config because it was imported into RBD.
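[The config-drive sequence logged just above builds the ISO locally, imports it into the vms pool, then deletes the local copy. It is reproducible verbatim; a sketch chaining the two commands exactly as logged (paths, pool, and the openstack ceph id are taken from the log; /tmp/tmpd_p86hp6 is the transient metadata staging directory from this run, and the -publisher value is one argument, which the flattened oslo log line obscures):]

    import os
    import subprocess

    base = "/var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361"
    iso = f"{base}/disk.config"

    # 1. Build the config-drive ISO from the staged metadata directory.
    subprocess.run([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpd_p86hp6",
    ], check=True)

    # 2. Import it into RBD, then drop the local copy, as nova logs next.
    subprocess.run([
        "rbd", "import", "--pool", "vms", iso,
        "b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(iso)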
Feb 25 12:42:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:42:00 compute-0 kernel: tap6178078f-c3: entered promiscuous mode
Feb 25 12:42:00 compute-0 NetworkManager[49836]: <info>  [1772023320.2795] manager: (tap6178078f-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 ovn_controller[147040]: 2026-02-25T12:42:00Z|01026|binding|INFO|Claiming lport 6178078f-c3e6-4ed1-86fa-d99671618a75 for this chassis.
Feb 25 12:42:00 compute-0 ovn_controller[147040]: 2026-02-25T12:42:00Z|01027|binding|INFO|6178078f-c3e6-4ed1-86fa-d99671618a75: Claiming fa:16:3e:e2:bd:b3 10.100.0.6
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 ovn_controller[147040]: 2026-02-25T12:42:00Z|01028|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 ovn-installed in OVS
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 systemd-udevd[337598]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:42:00 compute-0 NetworkManager[49836]: <info>  [1772023320.3210] device (tap6178078f-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:42:00 compute-0 NetworkManager[49836]: <info>  [1772023320.3214] device (tap6178078f-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:42:00 compute-0 systemd-machined[210048]: New machine qemu-134-instance-0000006b.
Feb 25 12:42:00 compute-0 ovn_controller[147040]: 2026-02-25T12:42:00Z|01029|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 up in Southbound
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.346 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:bd:b3 10.100.0.6'], port_security=['fa:16:3e:e2:bd:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8d9acc4-7912-4d26-bad6-1159f6993361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6178078f-c3e6-4ed1-86fa-d99671618a75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6178078f-c3e6-4ed1-86fa-d99671618a75 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.349 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:42:00 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d72b92-dad0-485f-adda-c527fa84e984]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e3d3f6-0edc-4e1c-b217-e9c9f805922c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.398 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26bede8b-2971-4c4e-aac9-bedfa6206e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.418 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a519f31-7f7a-4d54-897d-e99aecce69d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f96975bb-3979-4cb6-adc3-1329bdc9bd80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337615, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25b7bb1e-13d2-478c-a462-f2b9e626edbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337616, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337616, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.451 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.451 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.951 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023320.9506202, b8d9acc4-7912-4d26-bad6-1159f6993361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Started (Lifecycle Event)
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.975 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023320.953524, b8d9acc4-7912-4d26-bad6-1159f6993361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:00 compute-0 nova_compute[244014]: 2026-02-25 12:42:00.975 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Paused (Lifecycle Event)
Feb 25 12:42:01 compute-0 nova_compute[244014]: 2026-02-25 12:42:01.007 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:01 compute-0 nova_compute[244014]: 2026-02-25 12:42:01.010 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:01 compute-0 nova_compute[244014]: 2026-02-25 12:42:01.039 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:01 compute-0 nova_compute[244014]: 2026-02-25 12:42:01.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:42:02 compute-0 ceph-mon[76335]: pgmap v1825: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:42:02 compute-0 nova_compute[244014]: 2026-02-25 12:42:02.989 244018 DEBUG nova.compute.manager [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:02 compute-0 nova_compute[244014]: 2026-02-25 12:42:02.989 244018 DEBUG nova.compute.manager [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:02 compute-0 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:02 compute-0 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:02 compute-0 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:03 compute-0 ceph-mon[76335]: pgmap v1826: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:42:03 compute-0 nova_compute[244014]: 2026-02-25 12:42:03.980 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:03 compute-0 nova_compute[244014]: 2026-02-25 12:42:03.981 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:03 compute-0 nova_compute[244014]: 2026-02-25 12:42:03.994 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:42:04 compute-0 nova_compute[244014]: 2026-02-25 12:42:04.065 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:04 compute-0 nova_compute[244014]: 2026-02-25 12:42:04.065 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:04 compute-0 nova_compute[244014]: 2026-02-25 12:42:04.073 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:42:04 compute-0 nova_compute[244014]: 2026-02-25 12:42:04.073 244018 INFO nova.compute.claims [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:42:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:42:04 compute-0 nova_compute[244014]: 2026-02-25 12:42:04.406 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124750038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.032 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.039 244018 DEBUG nova.compute.provider_tree [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.067 244018 DEBUG nova.scheduler.client.report [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.095 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.096 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.169 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.169 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.204 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.238 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.320 244018 DEBUG nova.compute.manager [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.321 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.321 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.322 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.322 244018 DEBUG nova.compute.manager [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Processing event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.323 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.326 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023325.3260763, b8d9acc4-7912-4d26-bad6-1159f6993361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.327 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Resumed (Lifecycle Event)
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.329 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.333 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance spawned successfully.
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.334 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.372 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.374 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.374 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating image(s)
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.396 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.421 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.441 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.445 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.475 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.481 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.481 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.483 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.486 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.511 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.517 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.518 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.518 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.519 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.535 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.537 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.561 244018 DEBUG nova.policy [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.568 244018 INFO nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 13.77 seconds to spawn the instance on the hypervisor.
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.568 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.631 244018 INFO nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 15.02 seconds to build instance.
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.658 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:05 compute-0 ceph-mon[76335]: pgmap v1827: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:42:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2124750038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.978 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:42:05 compute-0 nova_compute[244014]: 2026-02-25 12:42:05.978 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:06 compute-0 nova_compute[244014]: 2026-02-25 12:42:06.001 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 12:42:06 compute-0 sshd-session[337772]: Received disconnect from 91.224.92.190 port 38294:11:  [preauth]
Feb 25 12:42:06 compute-0 sshd-session[337772]: Disconnected from authenticating user root 91.224.92.190 port 38294 [preauth]
Feb 25 12:42:06 compute-0 nova_compute[244014]: 2026-02-25 12:42:06.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.141 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Successfully created port: 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.422 244018 DEBUG nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.424 244018 DEBUG nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.424 244018 WARNING nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state active and task_state None.
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.462 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.509 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:42:07 compute-0 ceph-mon[76335]: pgmap v1828: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.948 244018 DEBUG nova.objects.instance [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.976 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.977 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Ensure instance console log exists: /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.977 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.978 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:07 compute-0 nova_compute[244014]: 2026-02-25 12:42:07.978 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Feb 25 12:42:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.655 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.660 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.661 244018 DEBUG nova.objects.instance [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'flavor' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:08 compute-0 nova_compute[244014]: 2026-02-25 12:42:08.686 244018 DEBUG nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:42:08 compute-0 ovn_controller[147040]: 2026-02-25T12:42:08Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:74:9a 10.100.0.8
Feb 25 12:42:08 compute-0 ovn_controller[147040]: 2026-02-25T12:42:08Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:74:9a 10.100.0.8
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.041 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Successfully updated port: 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.231 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.231 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.232 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.452 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.519 244018 DEBUG nova.compute.manager [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.519 244018 DEBUG nova.compute.manager [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:09 compute-0 nova_compute[244014]: 2026-02-25 12:42:09.520 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:10 compute-0 ceph-mon[76335]: pgmap v1829: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Feb 25 12:42:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.353 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
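The network_info blob logged above is plain JSON, so its interesting fields are easy to pull back out. A small sketch that extracts port id, fixed IPs, and MTU (the shape is inferred from this log entry, not from a schema reference):

    import json

    def summarize_network_info(network_info_json):
        out = []
        for vif in json.loads(network_info_json):
            ips = [ip['address']
                   for subnet in vif['network']['subnets']
                   for ip in subnet['ips']]
            out.append((vif['id'], ips, vif['network']['meta']['mtu']))
        return out  # e.g. [('6aff0d14-...', ['10.100.0.4'], 1442)]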
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.372 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.372 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance network_info: |[{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.373 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.373 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.375 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start _get_guest_xml network_info=[{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.380 244018 WARNING nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.386 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.386 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.391 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.391 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
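The two probes above (v1 controller missing, v2 controller found) are what you would expect on a host running the unified cgroups v2 hierarchy. A rough equivalent of that check, assuming the standard /sys/fs/cgroup mount points:

    import os

    def has_cpu_controller():
        # cgroups v2: the unified hierarchy lists controllers in one file
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                if 'cpu' in f.read().split():
                    return True
        except FileNotFoundError:
            pass
        # cgroups v1: the cpu controller gets a mount point of its own
        return os.path.isdir('/sys/fs/cgroup/cpu')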
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.394 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.394 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.395 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
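The topology walk above (limits 65536:65536:65536, one vCPU, a single candidate 1:1:1) is at heart a search over sockets*cores*threads factorizations of the vCPU count. A toy version that reproduces the logged result; the real code also orders candidates by flavor and image preferences:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    assert possible_topologies(1) == [(1, 1, 1)]  # matches the log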
Feb 25 12:42:10 compute-0 nova_compute[244014]: 2026-02-25 12:42:10.399 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:10 compute-0 sshd-session[337850]: Invalid user lighthouse from 80.94.92.186 port 38130
Feb 25 12:42:10 compute-0 sshd-session[337850]: Connection closed by invalid user lighthouse 80.94.92.186 port 38130 [preauth]
Feb 25 12:42:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:42:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274460178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.001 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
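Nova shells out for the monitor map here; the 0.5-0.6s round trips logged above are those subprocesses. A sketch of the same query and of parsing monitor addresses out of the JSON (the addr field layout is assumed from the legacy ip:port/nonce monmap format):

    import json
    import subprocess

    def ceph_mon_addrs(conf='/etc/ceph/ceph.conf', user='openstack'):
        out = subprocess.run(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', user, '--conf', conf],
            check=True, capture_output=True, text=True).stdout
        addrs = []
        for mon in json.loads(out)['mons']:
            host, _, rest = mon['addr'].partition(':')
            addrs.append((host, int(rest.split('/')[0])))
        return addrs  # e.g. [('192.168.122.100', 6789)] on this host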
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.034 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.038 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3274460178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:42:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:42:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144351085' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.574 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.576 244018 DEBUG nova.virt.libvirt.vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:05Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.577 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.577 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.578 244018 DEBUG nova.objects.instance [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.595 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <uuid>11f1a7e0-6001-4367-8491-5b5508f56bdb</uuid>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <name>instance-0000006c</name>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-356243921</nova:name>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:42:10</nova:creationTime>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <nova:port uuid="6aff0d14-b6b5-45d1-83da-b9f0946a2e7e">
Feb 25 12:42:11 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <system>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="serial">11f1a7e0-6001-4367-8491-5b5508f56bdb</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="uuid">11f1a7e0-6001-4367-8491-5b5508f56bdb</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </system>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <os>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </os>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <features>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </features>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/11f1a7e0-6001-4367-8491-5b5508f56bdb_disk">
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config">
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </source>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:42:11 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3e:d0:2e"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <target dev="tap6aff0d14-b6"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/console.log" append="off"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <video>
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </video>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:42:11 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:42:11 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:42:11 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:42:11 compute-0 nova_compute[244014]: </domain>
Feb 25 12:42:11 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
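The generated domain XML above carries everything libvirt needs, including the two RBD-backed disks. A short sketch that reads those back out with the standard-library XML parser (illustrative only; nova builds and consumes its own config objects rather than re-parsing the XML this way):

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        root = ET.fromstring(domain_xml)
        disks = []
        for disk in root.findall("./devices/disk[@type='network']"):
            src = disk.find('source')
            tgt = disk.find('target')
            if src is not None and src.get('protocol') == 'rbd':
                disks.append((src.get('name'), tgt.get('dev')))
        # e.g. [('vms/<uuid>_disk', 'vda'), ('vms/<uuid>_disk.config', 'sda')]
        return disks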
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Preparing to wait for external event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.601 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
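The "<uuid>-events" lock above registers a waiter for network-vif-plugged before the port is actually plugged, so the later neutron callback cannot be missed. A toy stand-in for that registry (plain threading here; nova itself uses eventlet primitives):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, instance_uuid, name):
            # Register before triggering the action that causes the event.
            with self._lock:
                return self._events.setdefault((instance_uuid, name),
                                               threading.Event())

        def fire(self, instance_uuid, name):
            # Called when the external event arrives from neutron.
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev:
                ev.set()

    # usage: ev = events.prepare(uuid, 'network-vif-plugged-<port>')
    #        ev.wait(timeout=300)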
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.601 244018 DEBUG nova.virt.libvirt.vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:05Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.602 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.602 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.603 244018 DEBUG os_vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.606 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6aff0d14-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.607 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6aff0d14-b6, col_values=(('external_ids', {'iface-id': '6aff0d14-b6b5-45d1-83da-b9f0946a2e7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:d0:2e', 'vm-uuid': '11f1a7e0-6001-4367-8491-5b5508f56bdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
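The two ovsdbapp transactions above create the tap port on br-int and stamp the Interface row with the iface-id that ovn-controller later matches against the Southbound port binding (see the "Claiming lport" lines further down). An equivalent expressed with the ovs-vsctl CLI (nova itself goes through the OVSDB IDL, not a subprocess):

    import subprocess

    def plug_port(bridge, dev, iface_id, mac, vm_uuid):
        subprocess.run(
            ['ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
             '--', 'set', 'Interface', dev,
             'external_ids:iface-id=%s' % iface_id,
             'external_ids:iface-status=active',
             'external_ids:attached-mac=%s' % mac,
             'external_ids:vm-uuid=%s' % vm_uuid],
            check=True)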
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:11 compute-0 NetworkManager[49836]: <info>  [1772023331.6102] manager: (tap6aff0d14-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.614 244018 INFO os_vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6')
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:d0:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.672 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Using config drive
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.689 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:11 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.940 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.941 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:11 compute-0 nova_compute[244014]: 2026-02-25 12:42:11.961 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.070 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating config drive at /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.076 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstdau324 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:12 compute-0 ceph-mon[76335]: pgmap v1830: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Feb 25 12:42:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2144351085' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.210 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstdau324" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.251 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.256 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 482 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 153 op/s
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.398 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.399 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deleting local config drive /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config because it was imported into RBD.
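The config-drive flow above is three steps: build an ISO9660 image with mkisofs, rbd import it into the vms pool, then delete the local file once the RBD copy exists. A compressed sketch of that sequence (the -publisher string from the log is omitted for brevity):

    import os
    import subprocess

    def import_config_drive(iso_path, image_name, content_dir,
                            pool='vms', user='openstack',
                            conf='/etc/ceph/ceph.conf'):
        subprocess.run(['/usr/bin/mkisofs', '-o', iso_path, '-ldots',
                        '-allow-lowercase', '-allow-multidot', '-l',
                        '-quiet', '-J', '-r', '-V', 'config-2',
                        content_dir], check=True)
        subprocess.run(['rbd', 'import', '--pool', pool, iso_path,
                        image_name, '--image-format=2', '--id', user,
                        '--conf', conf], check=True)
        os.remove(iso_path)  # the RBD copy is now authoritative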
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.4366] manager: (tap6aff0d14-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Feb 25 12:42:12 compute-0 kernel: tap6aff0d14-b6: entered promiscuous mode
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_controller[147040]: 2026-02-25T12:42:12Z|01030|binding|INFO|Claiming lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for this chassis.
Feb 25 12:42:12 compute-0 ovn_controller[147040]: 2026-02-25T12:42:12Z|01031|binding|INFO|6aff0d14-b6b5-45d1-83da-b9f0946a2e7e: Claiming fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.453 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.452 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.453 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c bound to our chassis
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.455 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:12 compute-0 ovn_controller[147040]: 2026-02-25T12:42:12Z|01032|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e ovn-installed in OVS
Feb 25 12:42:12 compute-0 ovn_controller[147040]: 2026-02-25T12:42:12Z|01033|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e up in Southbound
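[annotation] The claim sequence above (claim lport, mark it ovn-installed in OVS, set it up in the Southbound DB) leaves a Port_Binding row pointing at this chassis. That end state can be checked directly; a sketch, assuming ovn-sbctl on the chassis can reach the SB database:

    import subprocess

    lport = "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e"

    # Show which chassis currently binds the logical port and whether it is up.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "list", "Port_Binding", lport],
        capture_output=True, text=True, check=True)
    print(out.stdout)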
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c76d33c5-ab2d-4a3f-8cc8-9107e0183500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.465 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap898394ed-11 in ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.468 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap898394ed-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a525a782-09d8-4afe-bbdc-5c8bec9e7284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a36740-82d7-4d70-9590-4844008659a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 systemd-machined[210048]: New machine qemu-135-instance-0000006c.
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.483 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f80530ad-f12f-4515-99ff-e855b601bcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c8aaa-fdb9-4cd7-b42f-28f734ff20ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 systemd-udevd[337993]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.5170] device (tap6aff0d14-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.5176] device (tap6aff0d14-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.518 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63066a6e-83c1-42f0-9eff-c16b276a99d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.5241] manager: (tap898394ed-10): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b66fad6-8966-4539-9db9-bf120e08faae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.553 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[96d3725b-2d5c-46bd-bbdf-10d8ed1ebd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7e297306-2724-44c5-af99-b55160413e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.5718] device (tap898394ed-10): carrier: link connected
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.574 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d4168d10-8c9a-4ea6-a863-6ef3c06e7f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.587 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8c91c0-eb36-451b-977d-93c33ee2ff9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530197, 'reachable_time': 21665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338021, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.598 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f142e3a-88b8-44ab-bffc-1e1c95997048]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530197, 'tstamp': 530197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338022, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.610 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1d3ae9-fb36-4d98-9c87-01e8b8677fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530197, 'reachable_time': 21665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338023, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
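[annotation] The privsep replies above are netlink dumps (RTM_NEWLINK for tap898394ed-11 and RTM_NEWADDR for its link-local address) in pyroute2's decoded form, taken inside the ovnmeta namespace. Roughly the same view can be had without going through privsep; a sketch assuming the pyroute2 package and root privileges:

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace and dump the veth,
    # much as the agent's privileged helper does above.
    ns = NetNS("ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c")
    try:
        for link in ns.get_links():
            if link.get_attr("IFLA_IFNAME") == "tap898394ed-11":
                print(link.get_attr("IFLA_ADDRESS"),    # fa:16:3e:01:00:f1
                      link.get_attr("IFLA_OPERSTATE"))  # UP
    finally:
        ns.close()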
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[376a0cf9-a54b-46a6-bc5c-378b574b9623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdd5a20-34d5-4db3-b329-3692a32f4426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap898394ed-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 NetworkManager[49836]: <info>  [1772023332.6869] manager: (tap898394ed-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Feb 25 12:42:12 compute-0 kernel: tap898394ed-10: entered promiscuous mode
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.688 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap898394ed-10, col_values=(('external_ids', {'iface-id': '90ed2c98-937c-42de-8c1d-0f2d90380e03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_controller[147040]: 2026-02-25T12:42:12Z|01034|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.697 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.698 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5346283b-fd37-482c-94c0-fbf159993463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.699 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:42:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.699 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'env', 'PROCESS_TAG=haproxy-898394ed-19f4-4525-9c0d-05895748de8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/898394ed-19f4-4525-9c0d-05895748de8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
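[annotation] Stripped of rootwrap and the PROCESS_TAG bookkeeping, the command above reduces to starting haproxy against the generated config inside the ovnmeta namespace; haproxy then daemonizes itself ("daemon" in the global section) and writes the pidfile the agent looked for a moment earlier. A simplified sketch, root required, paths from the log:

    import subprocess

    netns = "ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "898394ed-19f4-4525-9c0d-05895748de8c.conf")

    # Equivalent of the rootwrap invocation above, minus the wrapper.
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg],
                   check=True)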
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.773 244018 DEBUG nova.compute.manager [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.773 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG nova.compute.manager [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Processing event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.893 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.894 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.893629, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Started (Lifecycle Event)
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.897 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.900 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance spawned successfully.
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.900 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.929 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.933 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.933 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.934 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.934 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.935 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.935 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.943 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.973 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] During sync_power_state the instance has a pending task (spawning). Skip.
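[annotation] The "Skip" follows from the rule visible in the preceding sync line: while task_state is set, lifecycle events are not allowed to overwrite the DB power state, because the in-flight task (spawning here) records the final state itself. A simplified illustration of that decision, not nova's exact code:

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Simplified sketch of the check logged above."""
        if task_state is not None:
            # An in-flight task (e.g. "spawning") owns the state; defer to it.
            return "skip"
        if db_power_state != vm_power_state:
            return "update-db"
        return "in-sync"

    # From the log: DB power_state 0, VM power_state 1 (RUNNING), task "spawning".
    assert sync_power_state(0, 1, "spawning") == "skip"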
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.973 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.8943381, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:12 compute-0 nova_compute[244014]: 2026-02-25 12:42:12.974 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Paused (Lifecycle Event)
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.006 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.009 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.8965251, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.010 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Resumed (Lifecycle Event)
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.013 244018 INFO nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 7.64 seconds to spawn the instance on the hypervisor.
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.014 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.077 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.081 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.119 244018 INFO nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 9.09 seconds to build instance.
Feb 25 12:42:13 compute-0 podman[338096]: 2026-02-25 12:42:13.120186369 +0000 UTC m=+0.056962358 container create 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:42:13 compute-0 nova_compute[244014]: 2026-02-25 12:42:13.144 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:13 compute-0 systemd[1]: Started libpod-conmon-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope.
Feb 25 12:42:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7657430fcd1ccf36e0ff1aaea267888d4bce6d3d745f3ebe88290757fb8da93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:13 compute-0 podman[338096]: 2026-02-25 12:42:13.089279717 +0000 UTC m=+0.026055696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:42:13 compute-0 podman[338096]: 2026-02-25 12:42:13.192941143 +0000 UTC m=+0.129717162 container init 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:42:13 compute-0 podman[338096]: 2026-02-25 12:42:13.198186021 +0000 UTC m=+0.134962000 container start 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:42:13 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : New worker (338116) forked
Feb 25 12:42:13 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : Loading success.
Feb 25 12:42:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:14 compute-0 ceph-mon[76335]: pgmap v1831: 305 pgs: 305 active+clean; 482 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 153 op/s
Feb 25 12:42:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.877 244018 DEBUG nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:14 compute-0 nova_compute[244014]: 2026-02-25 12:42:14.877 244018 WARNING nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.
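[annotation] The WARNING is a benign race: the spawn already consumed its network-vif-plugged waiter at 12:42:12.893, so this second copy of the event finds nobody registered. The dispatch amounts to a keyed lookup; a simplified illustration, not nova's exact code:

    import logging

    # (instance_uuid, event_name) -> callback, registered by waiters.
    _waiters = {}

    def pop_instance_event(instance_uuid, event_name):
        cb = _waiters.pop((instance_uuid, event_name), None)
        if cb is None:
            # No waiter registered: log and drop, as in the WARNING above.
            logging.warning("Received unexpected event %s for instance %s",
                            event_name, instance_uuid)
            return
        cb()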
Feb 25 12:42:15 compute-0 nova_compute[244014]: 2026-02-25 12:42:15.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:16 compute-0 ceph-mon[76335]: pgmap v1832: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:42:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:42:16 compute-0 nova_compute[244014]: 2026-02-25 12:42:16.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:17 compute-0 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG nova.compute.manager [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:17 compute-0 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG nova.compute.manager [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:17 compute-0 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:17 compute-0 nova_compute[244014]: 2026-02-25 12:42:17.264 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:17 compute-0 nova_compute[244014]: 2026-02-25 12:42:17.264 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:17 compute-0 ovn_controller[147040]: 2026-02-25T12:42:17Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:bd:b3 10.100.0.6
Feb 25 12:42:17 compute-0 ovn_controller[147040]: 2026-02-25T12:42:17Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:bd:b3 10.100.0.6
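[annotation] The DHCPOFFER/DHCPACK pair is answered by ovn-controller's built-in responder (the pinctrl thread), not by a dnsmasq process; the lease parameters come from the Northbound DHCP_Options table. A sketch for inspecting those options, assuming ovn-nbctl can reach the NB database:

    import subprocess

    # Each DHCP_Options row carries a CIDR plus options (server_id, lease_time, ...).
    rows = subprocess.run(["ovn-nbctl", "dhcp-options-list"],
                          capture_output=True, text=True, check=True)
    for uuid in rows.stdout.split():
        subprocess.run(["ovn-nbctl", "dhcp-options-get-options", uuid], check=True)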
Feb 25 12:42:18 compute-0 ceph-mon[76335]: pgmap v1833: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:42:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 275 op/s
Feb 25 12:42:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:18 compute-0 nova_compute[244014]: 2026-02-25 12:42:18.790 244018 DEBUG nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
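[annotation] This line belongs to a different instance (b8d9acc4-...) going through a graceful stop: the guest was still RUNNING (state 1) ten seconds after the first ACPI shutdown request, so the request is resent and polling continues until the guest halts or the retry budget runs out. A simplified illustration of such a loop; the guest object here is hypothetical, not nova's driver API:

    import time

    RUNNING = 1

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        """Sketch of a graceful-shutdown retry loop (illustration only)."""
        guest.shutdown()                       # first ACPI shutdown request
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            time.sleep(retry_interval)
            if guest.power_state() != RUNNING:
                return True
            guest.shutdown()                   # still up: resend, as logged above
        return False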
Feb 25 12:42:18 compute-0 nova_compute[244014]: 2026-02-25 12:42:18.846 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:42:18 compute-0 nova_compute[244014]: 2026-02-25 12:42:18.847 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
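[annotation] The cache payload above is ordinary JSON, so the addressing can be read straight out of it. A short sketch over a trimmed copy of the logged structure:

    import json

    network_info = json.loads("""
    [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.4",
                 "floating_ips": [{"address": "192.168.122.206"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)
    # -> 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e 10.100.0.4 -> ['192.168.122.206']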
Feb 25 12:42:18 compute-0 nova_compute[244014]: 2026-02-25 12:42:18.876 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.123 244018 DEBUG nova.compute.manager [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.124 244018 DEBUG nova.compute.manager [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.125 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.125 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.126 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.168 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.170 244018 INFO nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Terminating instance
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.172 244018 DEBUG nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:42:20 compute-0 ceph-mon[76335]: pgmap v1834: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 275 op/s
Feb 25 12:42:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Feb 25 12:42:20 compute-0 kernel: tapee68d04f-36 (unregistering): left promiscuous mode
Feb 25 12:42:20 compute-0 NetworkManager[49836]: <info>  [1772023340.2976] device (tapee68d04f-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:20 compute-0 ovn_controller[147040]: 2026-02-25T12:42:20Z|01035|binding|INFO|Releasing lport ee68d04f-36ba-4727-ab4b-c31e559353e0 from this chassis (sb_readonly=0)
Feb 25 12:42:20 compute-0 ovn_controller[147040]: 2026-02-25T12:42:20Z|01036|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 down in Southbound
Feb 25 12:42:20 compute-0 ovn_controller[147040]: 2026-02-25T12:42:20Z|01037|binding|INFO|Removing iface tapee68d04f-36 ovn-installed in OVS
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.325 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], port_security=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:feb7:749a/64', 'neutron:device_id': '227efbfe-da43-423a-8652-9636ecded4cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee68d04f-36ba-4727-ab4b-c31e559353e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.327 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee68d04f-36ba-4727-ab4b-c31e559353e0 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 unbound from our chassis
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.328 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.343 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc36e09-1708-41b5-b0b7-952be437885c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Feb 25 12:42:20 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 12.127s CPU time.
Feb 25 12:42:20 compute-0 systemd-machined[210048]: Machine qemu-133-instance-0000006a terminated.
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.371 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3798d0-05b8-4c4c-81a3-bdc011f9cb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.375 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f9d0a-bfe0-4632-aa92-756d46aff305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.395 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6220131e-393b-4aa0-924e-bcadd1c84c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.414 244018 INFO nova.virt.libvirt.driver [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance destroyed successfully.
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.414 244018 DEBUG nova.objects.instance [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28acd6c3-937b-4eab-9bcd-f2ffd7f528d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 40654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338143, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.427 244018 DEBUG nova.virt.libvirt.vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:55Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.428 244018 DEBUG nova.network.os_vif_util [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.429 244018 DEBUG nova.network.os_vif_util [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.429 244018 DEBUG os_vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[850d52cf-0c99-480e-bec6-674170e50d43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524306, 'tstamp': 524306}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338148, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524308, 'tstamp': 524308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338148, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.431 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.432 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.433 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee68d04f-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.437 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.437 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.438 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.438 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.440 244018 INFO os_vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36')
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.649 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.650 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.651 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.651 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.652 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.652 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:42:20 compute-0 podman[338168]: 2026-02-25 12:42:20.745482872 +0000 UTC m=+0.077263271 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:20 compute-0 nova_compute[244014]: 2026-02-25 12:42:20.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.107 244018 INFO nova.virt.libvirt.driver [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deleting instance files /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd_del
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.108 244018 INFO nova.virt.libvirt.driver [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deletion of /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd_del complete
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.124 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.124 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.125 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.167 244018 INFO nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 0.99 seconds to destroy the instance on the hypervisor.
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG oslo.service.loopingcall [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG nova.network.neutron [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:42:21 compute-0 ceph-mon[76335]: pgmap v1835: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Feb 25 12:42:21 compute-0 kernel: tap6178078f-c3 (unregistering): left promiscuous mode
Feb 25 12:42:21 compute-0 NetworkManager[49836]: <info>  [1772023341.3745] device (tap6178078f-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:21 compute-0 ovn_controller[147040]: 2026-02-25T12:42:21Z|01038|binding|INFO|Releasing lport 6178078f-c3e6-4ed1-86fa-d99671618a75 from this chassis (sb_readonly=0)
Feb 25 12:42:21 compute-0 ovn_controller[147040]: 2026-02-25T12:42:21Z|01039|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 down in Southbound
Feb 25 12:42:21 compute-0 ovn_controller[147040]: 2026-02-25T12:42:21Z|01040|binding|INFO|Removing iface tap6178078f-c3 ovn-installed in OVS
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.389 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:bd:b3 10.100.0.6'], port_security=['fa:16:3e:e2:bd:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8d9acc4-7912-4d26-bad6-1159f6993361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6178078f-c3e6-4ed1-86fa-d99671618a75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.390 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6178078f-c3e6-4ed1-86fa-d99671618a75 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.394 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71ce079e-e7ba-413a-8b97-3e269688e08a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Feb 25 12:42:21 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 12.339s CPU time.
Feb 25 12:42:21 compute-0 systemd-machined[210048]: Machine qemu-134-instance-0000006b terminated.
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.456 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[577756a6-a3f3-4afc-8c65-fdc6ae722cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3967c6e4-c255-401f-8016-0833f2d5c511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.502 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a857fe9-b295-4df0-b03e-7f043a7a3748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcdfad1-00d5-44ee-9989-76bf7daef021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 43845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338196, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.546 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b47eaab0-1d90-4a8b-9999-4ebf789d969b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338197, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338197, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.611 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.611 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.805 244018 INFO nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance shutdown successfully after 13 seconds.
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.811 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance destroyed successfully.
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.811 244018 DEBUG nova.objects.instance [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.825 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:21 compute-0 nova_compute[244014]: 2026-02-25 12:42:21.866 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.060 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.061 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.077 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.138 244018 DEBUG nova.network.neutron [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.155 244018 INFO nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 0.99 seconds to deallocate network for instance.
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.202 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.202 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 464 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.534 244018 DEBUG oslo_concurrency.processutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.747 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:22 compute-0 podman[338212]: 2026-02-25 12:42:22.748479855 +0000 UTC m=+0.084702771 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.748 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received unexpected event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with vm_state deleted and task_state None.
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state stopped and task_state None.
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-deleted-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.753 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:22 compute-0 nova_compute[244014]: 2026-02-25 12:42:22.753 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state stopped and task_state None.
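Both WARNING lines come from the same path: Neutron delivers an external event, Nova pops a matching waiter from its per-instance event table under the "<uuid>-events" lock, finds none ("No waiting events found"), and then warns because an instance in vm_state deleted or stopped should not be receiving VIF plug/unplug notifications. A rough sketch of that dispatch (hypothetical structure, not the actual manager.py code):

import threading

class InstanceEvents:
    # Per-instance table of threading.Events keyed by event name.
    def __init__(self):
        self._events = {}            # instance_uuid -> {event_name: Event}
        self._lock = threading.Lock()

    def pop_instance_event(self, uuid, name):
        with self._lock:             # the "<uuid>-events" lock in the log
            return self._events.get(uuid, {}).pop(name, None)

def external_instance_event(events, instance, name):
    waiter = events.pop_instance_event(instance["uuid"], name)
    if waiter is None:
        print(f"No waiting events found dispatching {name}")
    if instance["vm_state"] in ("deleted", "stopped"):
        print(f"WARNING: Received unexpected event {name} for instance "
              f"with vm_state {instance['vm_state']} and task_state None.")
    if waiter is not None:
        waiter.set()                 # wake the thread blocked on this event

external_instance_event(
    InstanceEvents(),
    {"uuid": "b8d9acc4-7912-4d26-bad6-1159f6993361", "vm_state": "stopped"},
    "network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75")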
Feb 25 12:42:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671590202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.089 244018 DEBUG oslo_concurrency.processutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.097 244018 DEBUG nova.compute.provider_tree [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.111 244018 DEBUG nova.scheduler.client.report [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
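The "Inventory has not changed" message is the outcome of a plain dict comparison between the freshly computed inventory and what the ProviderTree already caches; only a difference triggers a PUT to Placement. Schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick check with the values from the line above:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def capacity(inv):
    # Capacity as Placement computes it for each resource class.
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

cached = dict(inventory)  # stands in for what the ProviderTree holds
if inventory == cached:
    print("Inventory has not changed for provider")  # skip the PUT
print(capacity(inventory))
# {'VCPU': 32.0, 'MEMORY_MB': 7167.0, 'DISK_GB': 52.2}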
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.131 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.179 244018 INFO nova.scheduler.client.report [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 227efbfe-da43-423a-8652-9636ecded4cd
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.269 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.327 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
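The network_info blob in the cache update above is plain JSON, so pulling addressing out of such a dump is mechanical: walk vif -> network -> subnets -> ips, collecting fixed and floating addresses. A small log-spelunking helper (hypothetical; not something Nova ships):

import json

def addresses(network_info_json):
    # Yield (vif_id, fixed_ip, [floating_ips]) from an instance_info_cache
    # network_info dump like the log line above.
    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                yield vif["id"], ip["address"], floats

cache = '''[{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1",
  "network": {"subnets": [{"ips": [{"address": "10.100.0.12",
    "floating_ips": [{"address": "192.168.122.182"}]}]}]}}]'''
for vif_id, fixed, floats in addresses(cache):
    print(vif_id, fixed, floats)
# 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 10.100.0.12 ['192.168.122.182']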
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:23 compute-0 ceph-mon[76335]: pgmap v1836: 305 pgs: 305 active+clean; 464 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Feb 25 12:42:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3671590202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.855 244018 DEBUG nova.compute.manager [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.855 244018 DEBUG nova.compute.manager [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:42:23 compute-0 nova_compute[244014]: 2026-02-25 12:42:23.901 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
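Each resource audit shells out to ceph df (the CMD ... returned: 0 lines show it costing 0.5-0.7 s per run here) and reads cluster usage out of the JSON to back the DISK_GB inventory. A minimal reproduction of the probe, assuming the same ceph.conf and client.openstack keyring are readable:

import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
    # Same command as logged; returns the parsed JSON document.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True).stdout
    return json.loads(out)

stats = ceph_df()["stats"]
print(f"{stats['total_avail_bytes'] / 1024**3:.0f} GiB available")
# ~59 GiB, matching the pgmap lines from ceph-mon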
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.039 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.039 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.040 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.040 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.041 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.043 244018 INFO nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Terminating instance
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.046 244018 DEBUG nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:42:24 compute-0 kernel: tap96ad4438-9f (unregistering): left promiscuous mode
Feb 25 12:42:24 compute-0 NetworkManager[49836]: <info>  [1772023344.0939] device (tap96ad4438-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:24 compute-0 ovn_controller[147040]: 2026-02-25T12:42:24Z|01041|binding|INFO|Releasing lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 from this chassis (sb_readonly=0)
Feb 25 12:42:24 compute-0 ovn_controller[147040]: 2026-02-25T12:42:24Z|01042|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 down in Southbound
Feb 25 12:42:24 compute-0 ovn_controller[147040]: 2026-02-25T12:42:24Z|01043|binding|INFO|Removing iface tap96ad4438-9f ovn-installed in OVS
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.112 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], port_security=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:7721/64', 'neutron:device_id': 'd547b0db-242e-49a5-8a76-5682b0235b6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.115 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 unbound from our chassis
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.117 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.119 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7597d36-86cc-4e52-9c40-1b229cdfc10a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.119 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 namespace which is not needed anymore
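With the last VIF gone from the datapath, the agent tears down everything it had provisioned for that network: the haproxy sidecar container (the NOTICE/WARNING/ALERT haproxy lines below), the metadata tap port in OVS, and the ovnmeta- namespace. Roughly the same cleanup expressed with the underlying CLIs (a loose sketch; the agent actually drives this through privsep and ovsdbapp, and the interface name tap9ec47b46-60 is taken from the DelPortCommand further down):

import subprocess

NET = "9ec47b46-6b4d-4267-964b-6f16eef9a7b1"

def teardown_metadata(net_id, tap="tap9ec47b46-60"):
    # Approximate the agent's per-network cleanup with plain CLIs.
    name = f"neutron-haproxy-ovnmeta-{net_id}"
    subprocess.run(["podman", "stop", name], check=False)  # SIGTERM -> 143
    subprocess.run(["podman", "rm", name], check=False)
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", tap],
                   check=False)
    subprocess.run(["ip", "netns", "delete", f"ovnmeta-{net_id}"],
                   check=False)

teardown_metadata(NET)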
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Deactivated successfully.
Feb 25 12:42:24 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Consumed 13.914s CPU time.
Feb 25 12:42:24 compute-0 systemd-machined[210048]: Machine qemu-131-instance-00000068 terminated.
Feb 25 12:42:24 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : haproxy version is 2.8.14-c23fe91
Feb 25 12:42:24 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : path to executable is /usr/sbin/haproxy
Feb 25 12:42:24 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [WARNING]  (335370) : Exiting Master process...
Feb 25 12:42:24 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [ALERT]    (335370) : Current worker (335372) exited with code 143 (Terminated)
Feb 25 12:42:24 compute-0 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [WARNING]  (335370) : All workers exited. Exiting... (0)
Feb 25 12:42:24 compute-0 systemd[1]: libpod-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope: Deactivated successfully.
Feb 25 12:42:24 compute-0 podman[338304]: 2026-02-25 12:42:24.252035306 +0000 UTC m=+0.048194051 container died 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.282 244018 INFO nova.virt.libvirt.driver [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance destroyed successfully.
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.283 244018 DEBUG nova.objects.instance [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Feb 25 12:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-02aa81a3db703ffc9f0b50010648091a42999b6e192082fa62a6c0558305e7bb-merged.mount: Deactivated successfully.
Feb 25 12:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b-userdata-shm.mount: Deactivated successfully.
Feb 25 12:42:24 compute-0 podman[338304]: 2026-02-25 12:42:24.29897384 +0000 UTC m=+0.095132565 container cleanup 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:42:24 compute-0 systemd[1]: libpod-conmon-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope: Deactivated successfully.
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.317 244018 DEBUG nova.virt.libvirt.vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:15Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.317 244018 DEBUG nova.network.os_vif_util [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.318 244018 DEBUG nova.network.os_vif_util [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.319 244018 DEBUG os_vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.320 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96ad4438-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.337 244018 INFO os_vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f')
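The DelPortCommand above is os-vif's OVS plugin talking to the local switch through ovsdbapp. The same transaction can be issued directly with ovsdbapp's Open_vSwitch API; a sketch, assuming the default local OVSDB socket path:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/run/openvswitch/db.sock"  # assumed socket location

# Attach an IDL to the Open_vSwitch schema and wrap it in the API class.
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Equivalent of the logged txn: DelPortCommand(port=tap96ad4438-9f,
# bridge=br-int, if_exists=True).
api.del_port("tap96ad4438-9f", bridge="br-int",
             if_exists=True).execute(check_error=True)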
Feb 25 12:42:24 compute-0 podman[338343]: 2026-02-25 12:42:24.373161114 +0000 UTC m=+0.053025468 container remove 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.380 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95a8716a-dad7-43ce-96f7-445a11e67188]: (4, ('Wed Feb 25 12:42:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 (5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b)\n5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b\nWed Feb 25 12:42:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 (5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b)\n5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8f851-b9ba-4c02-af22-69f9dab2d1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.384 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 kernel: tap9ec47b46-60: left promiscuous mode
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.406 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0359870-14bf-4eba-b84d-18f8c96f8f27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6afb2e5f-07d9-4a72-9a74-f7136567fe7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a692dfd4-5702-4b92-9a84-bfef86f3cf5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a448496-2abf-4620-9a9c-1c66a65da238]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524291, 'reachable_time': 42330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338381, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
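The wall of attributes in that privsep reply is a netlink link dump executed inside the metadata namespace (note 'target': 'ovnmeta-…' in the message header) and shipped back over the privsep channel. The same data can be read with pyroute2 directly; a sketch, assuming the namespace still exists and the caller has net-admin privileges:

from pyroute2 import NetNS

# Dump links inside the metadata namespace, as the privsep daemon did.
with NetNS("ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1") as ns:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_OPERSTATE"),
              link.get_attr("IFLA_MTU"))
# e.g. "lo UNKNOWN 65536", matching the reply above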
Feb 25 12:42:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ec47b46\x2d6b4d\x2d4267\x2d964b\x2d6f16eef9a7b1.mount: Deactivated successfully.
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.444 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:42:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.444 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7822310b-f362-4ce0-9b5c-d3913fa0886d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306296653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.620 244018 INFO nova.virt.libvirt.driver [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deleting instance files /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d_del
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.622 244018 INFO nova.virt.libvirt.driver [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deletion of /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d_del complete
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.682 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.683 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.686 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.686 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.693 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.693 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.697 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.697 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.714 244018 INFO nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.715 244018 DEBUG oslo.service.loopingcall [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.715 244018 DEBUG nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.716 244018 DEBUG nova.network.neutron [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.856 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3351MB free_disk=59.83013467211276GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.952 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d547b0db-242e-49a5-8a76-5682b0235b6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8d9acc4-7912-4d26-bad6-1159f6993361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 11f1a7e0-6001-4367-8491-5b5508f56bdb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:42:24 compute-0 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.039 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.345 244018 DEBUG nova.network.neutron [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.368 244018 INFO nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 0.65 seconds to deallocate network for instance.
Feb 25 12:42:25 compute-0 ceph-mon[76335]: pgmap v1837: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Feb 25 12:42:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1306296653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.438 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.439 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.440 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.444 244018 INFO nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Terminating instance
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.445 244018 DEBUG nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.455 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance destroyed successfully.
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.456 244018 DEBUG nova.objects.instance [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.467 244018 DEBUG nova.virt.libvirt.vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-Íñstáñcé-59660951',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:42:23Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.468 244018 DEBUG nova.network.os_vif_util [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.469 244018 DEBUG nova.network.os_vif_util [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.469 244018 DEBUG os_vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.472 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6178078f-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.480 244018 INFO os_vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3')
Feb 25 12:42:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526370141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.680 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.687 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.706 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.726 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.727 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.809 244018 INFO nova.virt.libvirt.driver [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deleting instance files /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361_del
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.810 244018 INFO nova.virt.libvirt.driver [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deletion of /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361_del complete
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.829 244018 DEBUG oslo_concurrency.processutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.900 244018 INFO nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 0.45 seconds to destroy the instance on the hypervisor.
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG oslo.service.loopingcall [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG nova.network.neutron [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.906 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.906 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.923 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.949 244018 DEBUG nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-deleted-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.949 244018 INFO nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Neutron deleted interface 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1; detaching it from the instance and deleting it from the info cache
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.950 244018 DEBUG nova.network.neutron [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:25 compute-0 nova_compute[244014]: 2026-02-25 12:42:25.971 244018 DEBUG nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Detach interface failed, port_id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1, reason: Instance d547b0db-242e-49a5-8a76-5682b0235b6d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:42:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 25 12:42:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437893879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.361 244018 DEBUG oslo_concurrency.processutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.364 244018 DEBUG nova.compute.provider_tree [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.379 244018 DEBUG nova.scheduler.client.report [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.404 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1526370141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3437893879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.431 244018 INFO nova.scheduler.client.report [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance d547b0db-242e-49a5-8a76-5682b0235b6d
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.487 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:26 compute-0 ovn_controller[147040]: 2026-02-25T12:42:26Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 12:42:26 compute-0 ovn_controller[147040]: 2026-02-25T12:42:26Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.691 244018 DEBUG nova.network.neutron [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.717 244018 INFO nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 0.82 seconds to deallocate network for instance.
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.766 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.767 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.852 244018 DEBUG oslo_concurrency.processutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.950 244018 DEBUG nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.951 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:26 compute-0 nova_compute[244014]: 2026-02-25 12:42:26.953 244018 WARNING nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received unexpected event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with vm_state deleted and task_state None.
Feb 25 12:42:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715076582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.421 244018 DEBUG oslo_concurrency.processutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.428 244018 DEBUG nova.compute.provider_tree [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:27 compute-0 ceph-mon[76335]: pgmap v1838: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 25 12:42:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3715076582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.448 244018 DEBUG nova.scheduler.client.report [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.654 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.688 244018 INFO nova.scheduler.client.report [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance b8d9acc4-7912-4d26-bad6-1159f6993361
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.727 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.727 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.744 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:27 compute-0 nova_compute[244014]: 2026-02-25 12:42:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:28 compute-0 nova_compute[244014]: 2026-02-25 12:42:28.047 244018 DEBUG nova.compute.manager [req-2299ce07-f709-4b29-8d58-c633436af3a7 req-df7f2e5a-8262-431f-921a-21fa2fcbc5d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-deleted-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 274 op/s
Feb 25 12:42:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:29 compute-0 ceph-mon[76335]: pgmap v1839: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 274 op/s
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.922 244018 INFO nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Terminating instance
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.923 244018 DEBUG nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:42:29 compute-0 kernel: tap521e9ad8-9e (unregistering): left promiscuous mode
Feb 25 12:42:29 compute-0 NetworkManager[49836]: <info>  [1772023349.9716] device (tap521e9ad8-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:29 compute-0 ovn_controller[147040]: 2026-02-25T12:42:29Z|01044|binding|INFO|Releasing lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 from this chassis (sb_readonly=0)
Feb 25 12:42:29 compute-0 ovn_controller[147040]: 2026-02-25T12:42:29Z|01045|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 down in Southbound
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:29 compute-0 ovn_controller[147040]: 2026-02-25T12:42:29Z|01046|binding|INFO|Removing iface tap521e9ad8-9e ovn-installed in OVS
Feb 25 12:42:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.984 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f0:2e 10.100.0.3'], port_security=['fa:16:3e:f6:f0:2e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.985 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis
Feb 25 12:42:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.986 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec8bae53-fe6a-49d1-a733-f00c198be561, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:42:29 compute-0 nova_compute[244014]: 2026-02-25 12:42:29.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36d0ec4a-6cf3-4761-8173-593a36d37528]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.989 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace which is not needed anymore
Feb 25 12:42:30 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000061.scope: Deactivated successfully.
Feb 25 12:42:30 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000061.scope: Consumed 17.132s CPU time.
Feb 25 12:42:30 compute-0 systemd-machined[210048]: Machine qemu-124-instance-00000061 terminated.
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : haproxy version is 2.8.14-c23fe91
Feb 25 12:42:30 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : path to executable is /usr/sbin/haproxy
Feb 25 12:42:30 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [WARNING]  (331468) : Exiting Master process...
Feb 25 12:42:30 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [ALERT]    (331468) : Current worker (331470) exited with code 143 (Terminated)
Feb 25 12:42:30 compute-0 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [WARNING]  (331468) : All workers exited. Exiting... (0)
Feb 25 12:42:30 compute-0 systemd[1]: libpod-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope: Deactivated successfully.
Feb 25 12:42:30 compute-0 podman[338499]: 2026-02-25 12:42:30.124421381 +0000 UTC m=+0.046345929 container died 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aeb1a3a8bf9efcf56fafcf5d930b23a7cd7c44dae6b62bfc44fc981eafd5921-merged.mount: Deactivated successfully.
Feb 25 12:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02-userdata-shm.mount: Deactivated successfully.
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.160 244018 INFO nova.virt.libvirt.driver [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance destroyed successfully.
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.160 244018 DEBUG nova.objects.instance [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:30 compute-0 podman[338499]: 2026-02-25 12:42:30.164276156 +0000 UTC m=+0.086200694 container cleanup 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:42:30 compute-0 systemd[1]: libpod-conmon-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope: Deactivated successfully.
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.175 244018 DEBUG nova.virt.libvirt.vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:39:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:39:56Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.175 244018 DEBUG nova.network.os_vif_util [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.176 244018 DEBUG nova.network.os_vif_util [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.176 244018 DEBUG os_vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.177 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521e9ad8-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.180 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.182 244018 INFO os_vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e')
Feb 25 12:42:30 compute-0 podman[338543]: 2026-02-25 12:42:30.228020184 +0000 UTC m=+0.041909123 container remove 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8adad288-b1f1-4352-a50a-9b664a64a15f]: (4, ('Wed Feb 25 12:42:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 (5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02)\n5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02\nWed Feb 25 12:42:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 (5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02)\n5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.234 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02a03ac2-ea35-46d7-b2ea-1bea2b20eee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.235 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:30 compute-0 kernel: tapec8bae53-f0: left promiscuous mode
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7127f8ba-a491-480c-8618-9a924c1ddd30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.253 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[767778ea-bebb-4e3b-83c0-6ad25d8ef1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e82dafcd-7162-43fa-97b0-0388a60ed4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34908ccb-bb82-4e9a-a738-ef287485cf03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516525, 'reachable_time': 28996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338575, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dec8bae53\x2dfe6a\x2d49d1\x2da733\x2df00c198be561.mount: Deactivated successfully.
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.271 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:42:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.271 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2448bc-378a-4664-be92-1bec24ef0d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 155 op/s
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.401 244018 INFO nova.virt.libvirt.driver [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deleting instance files /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_del
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.402 244018 INFO nova.virt.libvirt.driver [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deletion of /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_del complete
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.445 244018 INFO nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 0.52 seconds to destroy the instance on the hypervisor.
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG oslo.service.loopingcall [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG nova.network.neutron [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.955 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.955 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:30 compute-0 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:42:30
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'default.rgw.control', '.rgw.root']
Feb 25 12:42:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.173 244018 DEBUG nova.network.neutron [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.197 244018 INFO nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 0.75 seconds to deallocate network for instance.
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.247 244018 DEBUG nova.compute.manager [req-fba2be00-e56a-4422-a67c-81ace07c82e0 req-42749830-6398-499d-989a-c192bd244a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-deleted-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.254 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.254 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.342 244018 DEBUG oslo_concurrency.processutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:31 compute-0 ceph-mon[76335]: pgmap v1840: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 155 op/s
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:42:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545476640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.959 244018 DEBUG oslo_concurrency.processutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.963 244018 DEBUG nova.compute.provider_tree [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:42:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:42:31 compute-0 nova_compute[244014]: 2026-02-25 12:42:31.981 244018 DEBUG nova.scheduler.client.report [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.002 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.039 244018 INFO nova.scheduler.client.report [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.122 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 262 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Feb 25 12:42:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3545476640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.866 244018 INFO nova.compute.manager [None req-04689145-144d-4a06-a756-a33730bf0d54 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Get console output
Feb 25 12:42:32 compute-0 ovn_controller[147040]: 2026-02-25T12:42:32Z|01047|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.874 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:42:32 compute-0 nova_compute[244014]: 2026-02-25 12:42:32.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.072 244018 DEBUG nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.074 244018 DEBUG nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.074 244018 WARNING nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received unexpected event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with vm_state deleted and task_state None.
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.154 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.155 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.155 244018 INFO nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Rebooting instance
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.177 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.178 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:33 compute-0 nova_compute[244014]: 2026-02-25 12:42:33.178 244018 DEBUG nova.network.neutron [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:42:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:33 compute-0 ceph-mon[76335]: pgmap v1841: 305 pgs: 305 active+clean; 262 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Feb 25 12:42:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 158 op/s
Feb 25 12:42:34 compute-0 sudo[338599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:42:34 compute-0 sudo[338599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:34 compute-0 sudo[338599]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:34 compute-0 sudo[338624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:42:34 compute-0 sudo[338624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:35.023 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:35.024 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.180 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:35 compute-0 sudo[338624]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:35 compute-0 ovn_controller[147040]: 2026-02-25T12:42:35Z|01048|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:42:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.304 244018 DEBUG nova.network.neutron [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.322 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.324 244018 DEBUG nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:35 compute-0 sudo[338680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:42:35 compute-0 sudo[338680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:35 compute-0 sudo[338680]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:35 compute-0 sudo[338705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:42:35 compute-0 sudo[338705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.411 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023340.4106336, 227efbfe-da43-423a-8652-9636ecded4cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.413 244018 INFO nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Stopped (Lifecycle Event)
Feb 25 12:42:35 compute-0 nova_compute[244014]: 2026-02-25 12:42:35.435 244018 DEBUG nova.compute.manager [None req-9d12d821-e5b2-42ea-967d-a44822e16652 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:35 compute-0 ceph-mon[76335]: pgmap v1842: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 158 op/s
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:42:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.734820953 +0000 UTC m=+0.077633562 container create 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.683391391 +0000 UTC m=+0.026204010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:35 compute-0 systemd[1]: Started libpod-conmon-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope.
Feb 25 12:42:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.850985481 +0000 UTC m=+0.193798060 container init 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.855938461 +0000 UTC m=+0.198751040 container start 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.858640127 +0000 UTC m=+0.201452746 container attach 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:42:35 compute-0 affectionate_meitner[338758]: 167 167
Feb 25 12:42:35 compute-0 systemd[1]: libpod-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope: Deactivated successfully.
Feb 25 12:42:35 compute-0 conmon[338758]: conmon 8088284d25938ecec2b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope/container/memory.events
Feb 25 12:42:35 compute-0 podman[338742]: 2026-02-25 12:42:35.861200439 +0000 UTC m=+0.204013058 container died 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-62a81bb82d03b2d46c7db29f6c40bf94f374b10bb4658dec0bde1a23ca286aa4-merged.mount: Deactivated successfully.
Feb 25 12:42:36 compute-0 podman[338742]: 2026-02-25 12:42:36.001109128 +0000 UTC m=+0.343921707 container remove 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 25 12:42:36 compute-0 systemd[1]: libpod-conmon-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope: Deactivated successfully.
Feb 25 12:42:36 compute-0 podman[338782]: 2026-02-25 12:42:36.216261729 +0000 UTC m=+0.067050163 container create dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:42:36 compute-0 podman[338782]: 2026-02-25 12:42:36.174356357 +0000 UTC m=+0.025144831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:36 compute-0 systemd[1]: Started libpod-conmon-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope.
Feb 25 12:42:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Feb 25 12:42:36 compute-0 podman[338782]: 2026-02-25 12:42:36.315184081 +0000 UTC m=+0.165972565 container init dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:42:36 compute-0 podman[338782]: 2026-02-25 12:42:36.327878239 +0000 UTC m=+0.178666673 container start dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:42:36 compute-0 podman[338782]: 2026-02-25 12:42:36.333528518 +0000 UTC m=+0.184316962 container attach dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:42:36 compute-0 nova_compute[244014]: 2026-02-25 12:42:36.635 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023341.6339054, b8d9acc4-7912-4d26-bad6-1159f6993361 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:36 compute-0 nova_compute[244014]: 2026-02-25 12:42:36.638 244018 INFO nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Stopped (Lifecycle Event)
Feb 25 12:42:36 compute-0 nova_compute[244014]: 2026-02-25 12:42:36.658 244018 DEBUG nova.compute.manager [None req-23a7d195-8ed6-4728-aa60-8f8984f32cfe - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:36 compute-0 friendly_kowalevski[338798]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:42:36 compute-0 friendly_kowalevski[338798]: --> All data devices are unavailable
Feb 25 12:42:36 compute-0 systemd[1]: libpod-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope: Deactivated successfully.
Feb 25 12:42:36 compute-0 conmon[338798]: conmon dac3b645cdc7d1cd349e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope/container/memory.events
Feb 25 12:42:36 compute-0 podman[338818]: 2026-02-25 12:42:36.862378333 +0000 UTC m=+0.026256722 container died dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336-merged.mount: Deactivated successfully.
Feb 25 12:42:36 compute-0 podman[338818]: 2026-02-25 12:42:36.936220926 +0000 UTC m=+0.100099305 container remove dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:42:36 compute-0 systemd[1]: libpod-conmon-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope: Deactivated successfully.
Feb 25 12:42:36 compute-0 sudo[338705]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:37 compute-0 sudo[338833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:42:37 compute-0 sudo[338833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:37 compute-0 sudo[338833]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:37 compute-0 sudo[338858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:42:37 compute-0 sudo[338858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.378916139 +0000 UTC m=+0.068283838 container create bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.336875573 +0000 UTC m=+0.026243262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:37 compute-0 systemd[1]: Started libpod-conmon-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope.
Feb 25 12:42:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.565598727 +0000 UTC m=+0.254966466 container init bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.575545848 +0000 UTC m=+0.264913557 container start bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.580020734 +0000 UTC m=+0.269388473 container attach bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 12:42:37 compute-0 recursing_thompson[338910]: 167 167
Feb 25 12:42:37 compute-0 systemd[1]: libpod-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope: Deactivated successfully.
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.581516166 +0000 UTC m=+0.270883865 container died bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:42:37 compute-0 ceph-mon[76335]: pgmap v1843: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Feb 25 12:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-01efe4991c5ad508bf88f2d3af27223ad17b50c26d1e56b6c895ac032ddb928b-merged.mount: Deactivated successfully.
Feb 25 12:42:37 compute-0 podman[338894]: 2026-02-25 12:42:37.759747716 +0000 UTC m=+0.449115395 container remove bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:42:37 compute-0 kernel: tap6aff0d14-b6 (unregistering): left promiscuous mode
Feb 25 12:42:37 compute-0 NetworkManager[49836]: <info>  [1772023357.7956] device (tap6aff0d14-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:37 compute-0 ovn_controller[147040]: 2026-02-25T12:42:37Z|01049|binding|INFO|Releasing lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e from this chassis (sb_readonly=0)
Feb 25 12:42:37 compute-0 nova_compute[244014]: 2026-02-25 12:42:37.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:37 compute-0 ovn_controller[147040]: 2026-02-25T12:42:37Z|01050|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e down in Southbound
Feb 25 12:42:37 compute-0 ovn_controller[147040]: 2026-02-25T12:42:37Z|01051|binding|INFO|Removing iface tap6aff0d14-b6 ovn-installed in OVS
Feb 25 12:42:37 compute-0 nova_compute[244014]: 2026-02-25 12:42:37.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.815 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:37 compute-0 nova_compute[244014]: 2026-02-25 12:42:37.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.819 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c unbound from our chassis
Feb 25 12:42:37 compute-0 systemd[1]: libpod-conmon-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope: Deactivated successfully.
Feb 25 12:42:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.820 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 898394ed-19f4-4525-9c0d-05895748de8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:42:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.821 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74446245-3d36-4735-b1ad-e80282d7c470]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace which is not needed anymore
Feb 25 12:42:37 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 25 12:42:37 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 14.751s CPU time.
Feb 25 12:42:37 compute-0 systemd-machined[210048]: Machine qemu-135-instance-0000006c terminated.
Feb 25 12:42:37 compute-0 podman[338954]: 2026-02-25 12:42:37.973316583 +0000 UTC m=+0.103469781 container create 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:42:37 compute-0 podman[338954]: 2026-02-25 12:42:37.888772957 +0000 UTC m=+0.018926185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 systemd[1]: Started libpod-conmon-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope.
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.038 244018 DEBUG nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.039 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.039 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 DEBUG nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 WARNING nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state reboot_started.
Feb 25 12:42:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : haproxy version is 2.8.14-c23fe91
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : path to executable is /usr/sbin/haproxy
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : Exiting Master process...
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : Exiting Master process...
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [ALERT]    (338114) : Current worker (338116) exited with code 143 (Terminated)
Feb 25 12:42:38 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : All workers exited. Exiting... (0)
Feb 25 12:42:38 compute-0 systemd[1]: libpod-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope: Deactivated successfully.
Feb 25 12:42:38 compute-0 conmon[338110]: conmon 06a4d8f54adee94a8816 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope/container/memory.events
Feb 25 12:42:38 compute-0 podman[338954]: 2026-02-25 12:42:38.19043996 +0000 UTC m=+0.320593178 container init 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:42:38 compute-0 podman[338954]: 2026-02-25 12:42:38.19858063 +0000 UTC m=+0.328733868 container start 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:42:38 compute-0 podman[338954]: 2026-02-25 12:42:38.277107556 +0000 UTC m=+0.407260834 container attach 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:42:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Feb 25 12:42:38 compute-0 podman[338964]: 2026-02-25 12:42:38.325957235 +0000 UTC m=+0.431424926 container died 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839-userdata-shm.mount: Deactivated successfully.
Feb 25 12:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7657430fcd1ccf36e0ff1aaea267888d4bce6d3d745f3ebe88290757fb8da93-merged.mount: Deactivated successfully.
Feb 25 12:42:38 compute-0 podman[338964]: 2026-02-25 12:42:38.386770921 +0000 UTC m=+0.492238602 container cleanup 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:42:38 compute-0 systemd[1]: libpod-conmon-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope: Deactivated successfully.
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.449 244018 INFO nova.virt.libvirt.driver [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance shutdown successfully.
Feb 25 12:42:38 compute-0 podman[339022]: 2026-02-25 12:42:38.454543813 +0000 UTC m=+0.041435500 container remove 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a1ad71-618f-4f02-8872-24cc4edcf910]: (4, ('Wed Feb 25 12:42:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839)\n06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839\nWed Feb 25 12:42:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839)\n06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.461 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24da8cde-c8ef-40b2-b6a5-8faae9971479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 kernel: tap898394ed-10: left promiscuous mode
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 keen_rubin[338995]: {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     "0": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "devices": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "/dev/loop3"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             ],
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_name": "ceph_lv0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_size": "21470642176",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "name": "ceph_lv0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "tags": {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_name": "ceph",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.crush_device_class": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.encrypted": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.objectstore": "bluestore",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_id": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.vdo": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.with_tpm": "0"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             },
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "vg_name": "ceph_vg0"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         }
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     ],
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     "1": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "devices": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "/dev/loop4"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             ],
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_name": "ceph_lv1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_size": "21470642176",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "name": "ceph_lv1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "tags": {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_name": "ceph",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.crush_device_class": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.encrypted": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.objectstore": "bluestore",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_id": "1",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.vdo": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.with_tpm": "0"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             },
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "vg_name": "ceph_vg1"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         }
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     ],
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     "2": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "devices": [
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "/dev/loop5"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             ],
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_name": "ceph_lv2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_size": "21470642176",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "name": "ceph_lv2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "tags": {
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.cluster_name": "ceph",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.crush_device_class": "",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.encrypted": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.objectstore": "bluestore",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osd_id": "2",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.vdo": "0",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:                 "ceph.with_tpm": "0"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             },
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "type": "block",
Feb 25 12:42:38 compute-0 keen_rubin[338995]:             "vg_name": "ceph_vg2"
Feb 25 12:42:38 compute-0 keen_rubin[338995]:         }
Feb 25 12:42:38 compute-0 keen_rubin[338995]:     ]
Feb 25 12:42:38 compute-0 keen_rubin[338995]: }
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.477 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b95b0e0b-efd6-46f1-a6be-d09926ea66b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c11ed968-575b-4a76-a6cc-fdb9c1fa575d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.492 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b56f418f-9885-490f-8ccc-efed2c801c43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 systemd[1]: libpod-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope: Deactivated successfully.
Feb 25 12:42:38 compute-0 podman[338954]: 2026-02-25 12:42:38.498852204 +0000 UTC m=+0.629005412 container died 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:42:38 compute-0 kernel: tap6aff0d14-b6: entered promiscuous mode
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.5063] manager: (tap6aff0d14-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 systemd-udevd[338932]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:42:38 compute-0 ovn_controller[147040]: 2026-02-25T12:42:38Z|01052|binding|INFO|Claiming lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for this chassis.
Feb 25 12:42:38 compute-0 ovn_controller[147040]: 2026-02-25T12:42:38Z|01053|binding|INFO|6aff0d14-b6b5-45d1-83da-b9f0946a2e7e: Claiming fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8500c573-b78d-42f3-813d-4aef017a9463]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530192, 'reachable_time': 17439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339051, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d898394ed\x2d19f4\x2d4525\x2d9c0d\x2d05895748de8c.mount: Deactivated successfully.
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.516 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.516 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[38375d80-7be0-4f38-9d6d-c5f1f36627e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_controller[147040]: 2026-02-25T12:42:38Z|01054|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e ovn-installed in OVS
Feb 25 12:42:38 compute-0 ovn_controller[147040]: 2026-02-25T12:42:38Z|01055|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e up in Southbound
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.520 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.5224] device (tap6aff0d14-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.5230] device (tap6aff0d14-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c bound to our chassis
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.525 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816-merged.mount: Deactivated successfully.
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.539 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d7e0f2-cb0a-4839-844c-d3273f5725c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.540 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap898394ed-11 in ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.541 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap898394ed-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[792e6e4f-8a09-4bf1-bac5-559a62541d96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc98b9a-f296-49b5-8f63-f1a1ae82aab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 systemd-machined[210048]: New machine qemu-136-instance-0000006c.
Feb 25 12:42:38 compute-0 podman[338954]: 2026-02-25 12:42:38.555007838 +0000 UTC m=+0.685161036 container remove 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.556 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b419681b-6db4-4074-bdb5-cfc2332e2e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006c.
Feb 25 12:42:38 compute-0 systemd[1]: libpod-conmon-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope: Deactivated successfully.
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e626fcb6-a1cb-48ee-b077-49dee04b3811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.591 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fb479f2e-9979-4a49-8257-346207e91406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.5965] manager: (tap898394ed-10): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0500481f-1d6c-4c0b-b90d-d5fbcac0fd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 sudo[338858]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.616 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9866625-6196-4f62-9688-4fa46958dbdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca312e4-73fd-49da-b0d2-a56ba3ee903b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.6350] device (tap898394ed-10): carrier: link connected
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.637 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92967756-2b31-47dc-b54f-dfee3b0d727a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6531fc56-c4fd-4591-9db9-b35e6a25182e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 21633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339126, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 sudo[339099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:42:38 compute-0 sudo[339099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd44225-ffbe-4bf1-9fcf-e94622c273a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532804, 'tstamp': 532804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339128, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 sudo[339099]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9696a1f-01eb-410a-a48f-d1d8bf35f4c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 21633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339130, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1ea9f3-47fe-4d66-ad23-9afe77eb5594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 sudo[339131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:42:38 compute-0 sudo[339131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15e52752-02fc-41bf-970b-09c39507334a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.756 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap898394ed-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:38 compute-0 kernel: tap898394ed-10: entered promiscuous mode
Feb 25 12:42:38 compute-0 NetworkManager[49836]: <info>  [1772023358.7588] manager: (tap898394ed-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.762 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap898394ed-10, col_values=(('external_ids', {'iface-id': '90ed2c98-937c-42de-8c1d-0f2d90380e03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 ovn_controller[147040]: 2026-02-25T12:42:38Z|01056|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.765 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.766 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78963f8-3ef4-4d9d-9cda-efc9500e2800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.767 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:42:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.767 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'env', 'PROCESS_TAG=haproxy-898394ed-19f4-4525-9c0d-05895748de8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/898394ed-19f4-4525-9c0d-05895748de8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:42:38 compute-0 nova_compute[244014]: 2026-02-25 12:42:38.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:38 compute-0 podman[339177]: 2026-02-25 12:42:38.949566033 +0000 UTC m=+0.034257778 container create 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:42:38 compute-0 systemd[1]: Started libpod-conmon-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope.
Feb 25 12:42:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:39.020468634 +0000 UTC m=+0.105160399 container init 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:39.027619095 +0000 UTC m=+0.112310850 container start 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:38.935247099 +0000 UTC m=+0.019938844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:39.031598448 +0000 UTC m=+0.116290233 container attach 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:42:39 compute-0 flamboyant_hertz[339198]: 167 167
Feb 25 12:42:39 compute-0 systemd[1]: libpod-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope: Deactivated successfully.
Feb 25 12:42:39 compute-0 conmon[339198]: conmon 0f388cea1359a55c91f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope/container/memory.events
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:39.034468349 +0000 UTC m=+0.119160134 container died 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:42:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-976b663c346405524adea0bd2fb9f63a911101632e95d6e10d529cbdb7a2704a-merged.mount: Deactivated successfully.
Feb 25 12:42:39 compute-0 podman[339177]: 2026-02-25 12:42:39.080114767 +0000 UTC m=+0.164806522 container remove 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 12:42:39 compute-0 systemd[1]: libpod-conmon-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope: Deactivated successfully.
Feb 25 12:42:39 compute-0 podman[339231]: 2026-02-25 12:42:39.120536477 +0000 UTC m=+0.042107739 container create 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:42:39 compute-0 systemd[1]: Started libpod-conmon-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope.
Feb 25 12:42:39 compute-0 podman[339231]: 2026-02-25 12:42:39.097314711 +0000 UTC m=+0.018885963 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:42:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85336dac33943960f28fe299da666ed4b00ba73c5530f114cd6d990127d9387f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:39 compute-0 podman[339231]: 2026-02-25 12:42:39.22801375 +0000 UTC m=+0.149585082 container init 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:42:39 compute-0 podman[339231]: 2026-02-25 12:42:39.232897157 +0000 UTC m=+0.154468439 container start 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:42:39 compute-0 podman[339253]: 2026-02-25 12:42:39.23902397 +0000 UTC m=+0.047778819 container create 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:42:39 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : New worker (339270) forked
Feb 25 12:42:39 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : Loading success.
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.279 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023344.278254, d547b0db-242e-49a5-8a76-5682b0235b6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.280 244018 INFO nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Stopped (Lifecycle Event)
Feb 25 12:42:39 compute-0 systemd[1]: Started libpod-conmon-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope.
Feb 25 12:42:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.307 244018 DEBUG nova.compute.manager [None req-1c80d8cb-d659-41ec-a264-afb622005127 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:39 compute-0 podman[339253]: 2026-02-25 12:42:39.218604184 +0000 UTC m=+0.027359113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:42:39 compute-0 podman[339253]: 2026-02-25 12:42:39.339913967 +0000 UTC m=+0.148668826 container init 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:42:39 compute-0 podman[339253]: 2026-02-25 12:42:39.347139381 +0000 UTC m=+0.155894230 container start 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 12:42:39 compute-0 podman[339253]: 2026-02-25 12:42:39.350785664 +0000 UTC m=+0.159540733 container attach 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:42:39 compute-0 ceph-mon[76335]: pgmap v1844: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.626 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 11f1a7e0-6001-4367-8491-5b5508f56bdb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.626 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023359.6251705, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.627 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Resumed (Lifecycle Event)
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.633 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance running successfully.
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.633 244018 INFO nova.virt.libvirt.driver [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance soft rebooted successfully.
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.634 244018 DEBUG nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.663 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.667 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.694 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] During sync_power_state the instance has a pending task (reboot_started). Skip.
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.695 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023359.6255298, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.695 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Started (Lifecycle Event)
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.704 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:39 compute-0 nova_compute[244014]: 2026-02-25 12:42:39.715 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:42:39 compute-0 lvm[339401]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:42:39 compute-0 lvm[339401]: VG ceph_vg0 finished
Feb 25 12:42:39 compute-0 lvm[339403]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:42:39 compute-0 lvm[339403]: VG ceph_vg1 finished
Feb 25 12:42:39 compute-0 lvm[339404]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:42:39 compute-0 lvm[339404]: VG ceph_vg2 finished
Feb 25 12:42:40 compute-0 pedantic_dubinsky[339284]: {}
Feb 25 12:42:40 compute-0 systemd[1]: libpod-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Deactivated successfully.
Feb 25 12:42:40 compute-0 systemd[1]: libpod-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Consumed 1.044s CPU time.
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:40 compute-0 podman[339407]: 2026-02-25 12:42:40.134877901 +0000 UTC m=+0.037844519 container died 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033-merged.mount: Deactivated successfully.
Feb 25 12:42:40 compute-0 podman[339407]: 2026-02-25 12:42:40.171514725 +0000 UTC m=+0.074481323 container remove 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:42:40 compute-0 systemd[1]: libpod-conmon-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Deactivated successfully.
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:40 compute-0 sudo[339131]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:42:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:42:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:40 compute-0 sudo[339422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:42:40 compute-0 sudo[339422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:42:40 compute-0 sudo[339422]: pam_unix(sudo:session): session closed for user root
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.279 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.280 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:40 compute-0 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.
Feb 25 12:42:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 31 KiB/s wr, 32 op/s
Feb 25 12:42:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:41.026 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:42:42 compute-0 nova_compute[244014]: 2026-02-25 12:42:42.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:42 compute-0 ceph-mon[76335]: pgmap v1845: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 31 KiB/s wr, 32 op/s
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 86 op/s
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717237114215445 of space, bias 1.0, pg target 0.23151711342646333 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493797090980871 of space, bias 1.0, pg target 0.7481391272942614 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0114275959698244e-06 of space, bias 4.0, pg target 0.0012137131151637893 quantized to 16 (current 16)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:42:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:42:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:44 compute-0 ceph-mon[76335]: pgmap v1846: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 86 op/s
Feb 25 12:42:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 82 op/s
Feb 25 12:42:44 compute-0 nova_compute[244014]: 2026-02-25 12:42:44.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:44 compute-0 nova_compute[244014]: 2026-02-25 12:42:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:42:44 compute-0 nova_compute[244014]: 2026-02-25 12:42:44.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.158 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023350.1581175, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.159 244018 INFO nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Stopped (Lifecycle Event)
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.183 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.193 244018 DEBUG nova.compute.manager [None req-3dbccde4-452a-4e30-9f9d-7fa10527494c - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:42:45 compute-0 ceph-mon[76335]: pgmap v1847: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 82 op/s
Feb 25 12:42:45 compute-0 nova_compute[244014]: 2026-02-25 12:42:45.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 12:42:47 compute-0 ceph-mon[76335]: pgmap v1848: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 12:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:42:47 compute-0 nova_compute[244014]: 2026-02-25 12:42:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:47 compute-0 nova_compute[244014]: 2026-02-25 12:42:47.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:42:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 12:42:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:42:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:42:49 compute-0 ceph-mon[76335]: pgmap v1849: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 12:42:49 compute-0 nova_compute[244014]: 2026-02-25 12:42:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:42:49 compute-0 ovn_controller[147040]: 2026-02-25T12:42:49Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 12:42:50 compute-0 nova_compute[244014]: 2026-02-25 12:42:50.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:50 compute-0 nova_compute[244014]: 2026-02-25 12:42:50.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Feb 25 12:42:51 compute-0 ceph-mon[76335]: pgmap v1850: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Feb 25 12:42:51 compute-0 podman[339447]: 2026-02-25 12:42:51.714609805 +0000 UTC m=+0.055239890 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 12:42:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 94 op/s
Feb 25 12:42:52 compute-0 nova_compute[244014]: 2026-02-25 12:42:52.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:53 compute-0 ceph-mon[76335]: pgmap v1851: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 94 op/s
Feb 25 12:42:53 compute-0 podman[339468]: 2026-02-25 12:42:53.753581323 +0000 UTC m=+0.100640521 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:42:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 12 KiB/s wr, 59 op/s
Feb 25 12:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.022 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:55 compute-0 nova_compute[244014]: 2026-02-25 12:42:55.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:55 compute-0 nova_compute[244014]: 2026-02-25 12:42:55.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:55 compute-0 ceph-mon[76335]: pgmap v1852: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 12 KiB/s wr, 59 op/s
Feb 25 12:42:55 compute-0 nova_compute[244014]: 2026-02-25 12:42:55.913 244018 INFO nova.compute.manager [None req-26dd1456-d6be-45d5-be5b-a0f6d959fd45 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Get console output
Feb 25 12:42:55 compute-0 nova_compute[244014]: 2026-02-25 12:42:55.920 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:42:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 25 12:42:57 compute-0 ceph-mon[76335]: pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.902 244018 DEBUG nova.compute.manager [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.902 244018 DEBUG nova.compute.manager [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.996 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:57 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.998 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:57.999 244018 INFO nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Terminating instance
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.001 244018 DEBUG nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:42:58 compute-0 kernel: tap6aff0d14-b6 (unregistering): left promiscuous mode
Feb 25 12:42:58 compute-0 NetworkManager[49836]: <info>  [1772023378.0645] device (tap6aff0d14-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:42:58 compute-0 ovn_controller[147040]: 2026-02-25T12:42:58Z|01057|binding|INFO|Releasing lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e from this chassis (sb_readonly=0)
Feb 25 12:42:58 compute-0 ovn_controller[147040]: 2026-02-25T12:42:58Z|01058|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e down in Southbound
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 ovn_controller[147040]: 2026-02-25T12:42:58Z|01059|binding|INFO|Removing iface tap6aff0d14-b6 ovn-installed in OVS
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.092 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.094 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c unbound from our chassis
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.095 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 898394ed-19f4-4525-9c0d-05895748de8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.097 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75efc87d-6667-45a7-8068-b1950fc45572]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.098 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace which is not needed anymore
Feb 25 12:42:58 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 25 12:42:58 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Consumed 11.906s CPU time.
Feb 25 12:42:58 compute-0 systemd-machined[210048]: Machine qemu-136-instance-0000006c terminated.
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.227 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.242 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance destroyed successfully.
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.243 244018 DEBUG nova.objects.instance [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:42:58 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : haproxy version is 2.8.14-c23fe91
Feb 25 12:42:58 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : path to executable is /usr/sbin/haproxy
Feb 25 12:42:58 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [WARNING]  (339268) : Exiting Master process...
Feb 25 12:42:58 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [ALERT]    (339268) : Current worker (339270) exited with code 143 (Terminated)
Feb 25 12:42:58 compute-0 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [WARNING]  (339268) : All workers exited. Exiting... (0)
Feb 25 12:42:58 compute-0 systemd[1]: libpod-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope: Deactivated successfully.
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.265 244018 DEBUG nova.virt.libvirt.vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:42:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:42:39Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.265 244018 DEBUG nova.network.os_vif_util [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:42:58 compute-0 podman[339520]: 2026-02-25 12:42:58.266168707 +0000 UTC m=+0.059812958 container died 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.266 244018 DEBUG nova.network.os_vif_util [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.267 244018 DEBUG os_vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.269 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.269 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aff0d14-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.278 244018 INFO os_vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6')
Feb 25 12:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f-userdata-shm.mount: Deactivated successfully.
Feb 25 12:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-85336dac33943960f28fe299da666ed4b00ba73c5530f114cd6d990127d9387f-merged.mount: Deactivated successfully.
Feb 25 12:42:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:42:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 12:42:58 compute-0 podman[339520]: 2026-02-25 12:42:58.314711627 +0000 UTC m=+0.108355798 container cleanup 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:42:58 compute-0 systemd[1]: libpod-conmon-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope: Deactivated successfully.
Feb 25 12:42:58 compute-0 podman[339574]: 2026-02-25 12:42:58.384721793 +0000 UTC m=+0.045577217 container remove 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82ab6c6b-ff12-4553-a154-bb27b27bf1c3]: (4, ('Wed Feb 25 12:42:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f)\n68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f\nWed Feb 25 12:42:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f)\n68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbeb39ce-d15a-4d59-8924-c0701ca531d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.391 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 kernel: tap898394ed-10: left promiscuous mode
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d666dbf-dc07-4c99-b0d9-d1a443c03c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[650e6071-c8b5-44b7-9d9b-0ac70912db66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc9149f-31ba-423f-8f61-d54186687fd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.430 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[83bf50f4-6583-4190-918c-dd8f31fe41da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532799, 'reachable_time': 26842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339595, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.433 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:42:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.433 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3a85cd-9595-4e19-bb9c-2c4047fb05ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:42:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d898394ed\x2d19f4\x2d4525\x2d9c0d\x2d05895748de8c.mount: Deactivated successfully.
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.569 244018 INFO nova.virt.libvirt.driver [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deleting instance files /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb_del
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.570 244018 INFO nova.virt.libvirt.driver [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deletion of /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb_del complete
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.680 244018 INFO nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.681 244018 DEBUG oslo.service.loopingcall [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.683 244018 DEBUG nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.684 244018 DEBUG nova.network.neutron [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.783 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.784 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.802 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.893 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.893 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.903 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:42:58 compute-0 nova_compute[244014]: 2026-02-25 12:42:58.904 244018 INFO nova.compute.claims [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.011 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.012 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.012 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.064 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:42:59 compute-0 ceph-mon[76335]: pgmap v1854: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 12:42:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:42:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954141345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.619 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.624 244018 DEBUG nova.compute.provider_tree [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.647 244018 DEBUG nova.scheduler.client.report [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.670 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.671 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.749 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.750 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.779 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.807 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.904 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.905 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.906 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating image(s)
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.937 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.968 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:42:59 compute-0 nova_compute[244014]: 2026-02-25 12:42:59.999 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.002 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.026 244018 DEBUG nova.policy [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.068 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.095 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.101 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.145 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.145 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.169 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 12:43:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/954141345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.691 244018 DEBUG nova.network.neutron [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.706 244018 INFO nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 2.02 seconds to deallocate network for instance.
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.762 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.763 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.831 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.865 244018 DEBUG oslo_concurrency.processutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:00 compute-0 nova_compute[244014]: 2026-02-25 12:43:00.946 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.208 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.209 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.210 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.210 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.211 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.211 244018 WARNING nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state deleted and task_state None.
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.212 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-deleted-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.260 244018 DEBUG nova.objects.instance [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.274 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.275 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Ensure instance console log exists: /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.276 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.276 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.277 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:43:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304900099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.415 244018 DEBUG oslo_concurrency.processutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.421 244018 DEBUG nova.compute.provider_tree [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.454 244018 DEBUG nova.scheduler.client.report [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.494 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.531 244018 INFO nova.scheduler.client.report [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance 11f1a7e0-6001-4367-8491-5b5508f56bdb
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.608 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:01 compute-0 nova_compute[244014]: 2026-02-25 12:43:01.641 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully created port: 473bf89e-f488-42b9-b6d3-d736d2a61760 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:43:01 compute-0 ceph-mon[76335]: pgmap v1855: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 12:43:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1304900099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 197 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 1.4 MiB/s wr, 77 op/s
Feb 25 12:43:02 compute-0 nova_compute[244014]: 2026-02-25 12:43:02.443 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully created port: 7126aff6-e4c8-45c9-a3f7-6c333946b022 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.175 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully updated port: 473bf89e-f488-42b9-b6d3-d736d2a61760 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.352 244018 DEBUG nova.compute.manager [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.353 244018 DEBUG nova.compute.manager [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.353 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.354 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.354 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:03 compute-0 nova_compute[244014]: 2026-02-25 12:43:03.592 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:43:03 compute-0 ceph-mon[76335]: pgmap v1856: 305 pgs: 305 active+clean; 197 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 1.4 MiB/s wr, 77 op/s
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.078 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.103 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.233 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully updated port: 7126aff6-e4c8-45c9-a3f7-6c333946b022 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.257 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.258 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.258 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:04 compute-0 nova_compute[244014]: 2026-02-25 12:43:04.598 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:43:05 compute-0 nova_compute[244014]: 2026-02-25 12:43:05.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:05 compute-0 nova_compute[244014]: 2026-02-25 12:43:05.530 244018 DEBUG nova.compute.manager [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:05 compute-0 nova_compute[244014]: 2026-02-25 12:43:05.530 244018 DEBUG nova.compute.manager [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-7126aff6-e4c8-45c9-a3f7-6c333946b022. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:05 compute-0 nova_compute[244014]: 2026-02-25 12:43:05.531 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:05 compute-0 ceph-mon[76335]: pgmap v1857: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 12:43:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.746 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.767 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.768 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance network_info: |[{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.769 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.769 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 7126aff6-e4c8-45c9-a3f7-6c333946b022 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.772 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start _get_guest_xml network_info=[{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.777 244018 WARNING nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.782 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.783 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.789 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.790 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.790 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.791 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.791 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.794 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.794 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:43:06 compute-0 nova_compute[244014]: 2026-02-25 12:43:06.797 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687572344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:07 compute-0 nova_compute[244014]: 2026-02-25 12:43:07.416 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:07 compute-0 nova_compute[244014]: 2026-02-25 12:43:07.455 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:07 compute-0 nova_compute[244014]: 2026-02-25 12:43:07.462 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:08 compute-0 ceph-mon[76335]: pgmap v1858: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:43:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1687572344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344917758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.063 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.069 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.070 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.072 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.075 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.076 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.078 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.079 244018 DEBUG nova.objects.instance [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.102 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <uuid>cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</uuid>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <name>instance-0000006d</name>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1568259628</nova:name>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:43:06</nova:creationTime>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:port uuid="473bf89e-f488-42b9-b6d3-d736d2a61760">
Feb 25 12:43:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <nova:port uuid="7126aff6-e4c8-45c9-a3f7-6c333946b022">
Feb 25 12:43:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef5:6385" ipVersion="6"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="serial">cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="uuid">cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk">
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config">
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:7a:8c:23"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <target dev="tap473bf89e-f4"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f5:63:85"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <target dev="tap7126aff6-e4"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/console.log" append="off"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:43:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:43:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:43:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:43:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:43:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
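
The domain XML above is the guest definition Nova hands to libvirt: memory in KiB (131072 KiB = the flavor's 128 MiB), the q35 machine type from the image property, RBD-backed root and config-drive disks, and both tap devices pre-named to match the OVS ports plugged below. Once the domain is defined, the same document can be read back through the libvirt Python bindings; a minimal sketch, assuming the standard qemu:///system URI is reachable, with the UUID taken from the <uuid> element above:

    # Sketch: read back the guest definition that _get_guest_xml produced.
    import libvirt

    conn = libvirt.open('qemu:///system')  # assumes local libvirtd access
    try:
        dom = conn.lookupByUUIDString('cf7f6093-44a3-4e8f-8970-db25cf0b4ab9')
        print(dom.name())      # -> instance-0000006d
        print(dom.XMLDesc(0))  # same document shape as the dump above
    finally:
        conn.close()
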
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.104 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Preparing to wait for external event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.104 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Preparing to wait for external event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.106 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
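
The lockutils lines above bracket _create_or_get_event: Nova takes a per-instance "<uuid>-events" lock while registering the two network-vif-plugged events it will later wait on. A minimal sketch of that oslo_concurrency pattern, with a hypothetical dict standing in for Nova's InstanceEvents registry:

    # Sketch of the logged acquire/release pattern; lockutils.synchronized
    # emits the "Acquiring lock"/"acquired"/"released" DEBUG lines seen above.
    from oslo_concurrency import lockutils

    _events = {}  # hypothetical registry, not Nova's InstanceEvents

    def prepare_event(instance_uuid, event_name):
        @lockutils.synchronized(f'{instance_uuid}-events')
        def _create_or_get_event():
            return _events.setdefault((instance_uuid, event_name), object())
        return _create_or_get_event()
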
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.106 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.107 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.107 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.108 244018 DEBUG os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.109 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.109 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.113 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473bf89e-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.114 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap473bf89e-f4, col_values=(('external_ids', {'iface-id': '473bf89e-f488-42b9-b6d3-d736d2a61760', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:8c:23', 'vm-uuid': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 NetworkManager[49836]: <info>  [1772023388.1177] manager: (tap473bf89e-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.124 244018 INFO os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4')
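
Plugging the VIF is an ovsdbapp transaction: AddPortCommand attaches tap473bf89e-f4 to br-int, and DbSetCommand writes the Neutron iface-id and attached-mac into the Interface's external_ids, which is what ovn-controller matches on when it claims the lport a second later. A sketch of the same transaction against ovsdbapp's Open_vSwitch schema API; the socket path and timeout are assumptions, while the names and external_ids are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed default OVSDB socket path
    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap473bf89e-f4', may_exist=True))
        txn.add(api.db_set('Interface', 'tap473bf89e-f4',
                           ('external_ids', {
                               'iface-id': '473bf89e-f488-42b9-b6d3-d736d2a61760',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:7a:8c:23',
                               'vm-uuid': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9'})))
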
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.125 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.126 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.126 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.127 244018 DEBUG os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7126aff6-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7126aff6-e4, col_values=(('external_ids', {'iface-id': '7126aff6-e4c8-45c9-a3f7-6c333946b022', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:63:85', 'vm-uuid': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 NetworkManager[49836]: <info>  [1772023388.1352] manager: (tap7126aff6-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.141 244018 INFO os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4')
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:7a:8c:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:f5:63:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Using config drive
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.258 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
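
rbd_utils probes the vms pool before deciding whether the config-drive image needs to be created. A sketch of the same existence check with the Ceph Python bindings, treating ImageNotFound as "does not exist" the way nova's rbd_utils does; the pool, client id and conf path are taken from this log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            try:
                rbd.Image(ioctx, 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config').close()
                print('image exists')
            except rbd.ImageNotFound:
                print('image does not exist')  # the case logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
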
Feb 25 12:43:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.471 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated VIF entry in instance network info cache for port 7126aff6-e4c8-45c9-a3f7-6c333946b022. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.471 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.494 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.740 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating config drive at /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.744 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hx6j_52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.885 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hx6j_52" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
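
The config drive is an ISO 9660 image built with mkisofs; every flag is visible in the CMD line above. A sketch reproducing the call outside Nova (only src_dir is a placeholder; Nova first populates a throwaway tempdir like /tmp/tmp3hx6j_52 with the metadata files):

    import subprocess

    out_path = ('/var/lib/nova/instances/'
                'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config')
    src_dir = '/tmp/tmp3hx6j_52'  # placeholder for the populated tempdir

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', out_path,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', src_dir],
        check=True)
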
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.911 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:08 compute-0 nova_compute[244014]: 2026-02-25 12:43:08.915 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3344917758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.247 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.249 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deleting local config drive /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config because it was imported into RBD.
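
The ISO is then pushed into the vms pool and the local copy removed, exactly as the two lines above describe. A sketch of the equivalent import-then-delete step, reusing the rbd CLI arguments from the log:

    import os
    import subprocess

    local = ('/var/lib/nova/instances/'
             'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config')
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local,
         'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)  # "Deleting local config drive ... imported into RBD"
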
Feb 25 12:43:09 compute-0 kernel: tap473bf89e-f4: entered promiscuous mode
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3227] manager: (tap473bf89e-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01060|binding|INFO|Claiming lport 473bf89e-f488-42b9-b6d3-d736d2a61760 for this chassis.
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01061|binding|INFO|473bf89e-f488-42b9-b6d3-d736d2a61760: Claiming fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.325 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3420] manager: (tap7126aff6-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Feb 25 12:43:09 compute-0 systemd-udevd[339949]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:09 compute-0 systemd-udevd[339948]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.346 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:8c:23 10.100.0.8'], port_security=['fa:16:3e:7a:8c:23 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=473bf89e-f488-42b9-b6d3-d736d2a61760) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.348 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 bound to our chassis
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.351 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3612] device (tap473bf89e-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3618] device (tap473bf89e-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35aad0c9-af9a-4878-849f-1e7e9d93e250]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0ff7905-a1 in ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
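
provision_datapath wires the network's metadata namespace: a veth pair whose a0 end stays in the root namespace (to be attached to br-int) while the a1 end moves into the ovnmeta- namespace where the metadata proxy listens. A rough sketch of that step with pyroute2, which neutron's privileged ip_lib wraps; the names mirror the log and error handling is omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009'
    if ns not in netns.listnetns():
        netns.create(ns)

    ipr = IPRoute()
    ipr.link('add', ifname='tape0ff7905-a0', kind='veth', peer='tape0ff7905-a1')
    idx = ipr.link_lookup(ifname='tape0ff7905-a1')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)  # move the peer into the namespace
    ipr.close()
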
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.368 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0ff7905-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.369 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5df4d4bb-078d-45fc-ac3d-77ced49b37e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b36e628-6e64-42c3-8835-1aa4ccffe32a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 kernel: tap7126aff6-e4: entered promiscuous mode
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3763] device (tap7126aff6-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.3774] device (tap7126aff6-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01062|binding|INFO|Claiming lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 for this chassis.
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01063|binding|INFO|7126aff6-e4c8-45c9-a3f7-6c333946b022: Claiming fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385
Feb 25 12:43:09 compute-0 systemd-machined[210048]: New machine qemu-137-instance-0000006d.
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.382 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dca97360-20e9-4f5f-9731-af7e62d5b83c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.385 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], port_security=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:6385/64', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7126aff6-e4c8-45c9-a3f7-6c333946b022) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01064|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 ovn-installed in OVS
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01065|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 up in Southbound
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01066|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 ovn-installed in OVS
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01067|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 up in Southbound
Feb 25 12:43:09 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006d.
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d761de4-0f12-4069-a7c8-e867d69cd7be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
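Each "privsep: reply[...]" line here is one round-trip to the oslo.privsep root helper: the unprivileged agent sends a call over a unix socket and logs the reply tuple, 4 being the RET message type and the second element the marshalled return value (above, the stdout/stderr/exit-code triple of a sysctl). Declaring such a privileged entrypoint looks roughly like this; the context name, config section, and capability set are illustrative assumptions:

from oslo_privsep import capabilities, priv_context

# A privsep context forks a root daemon on first use; entrypoints run
# there, and their return values travel back as the (4, <result>) replies
# seen in this log.
sysctl_ctx = priv_context.PrivContext(
    __name__,
    cfg_section='privsep_sysctl',            # assumed section name
    capabilities=[capabilities.CAP_SYS_ADMIN],
)

@sysctl_ctx.entrypoint
def write_sysctl(key, value):
    # e.g. key='net.ipv4.conf.all.promote_secondaries', value=1
    path = '/proc/sys/' + key.replace('.', '/')
    with open(path, 'w') as f:
        f.write(str(value))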
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.427 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8aef7770-c9c4-4408-8b9f-fed06bfb0957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.4338] manager: (tape0ff7905-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/438)
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b94b528-4a43-4409-a359-604961d41a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.466 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40798cf9-7392-4833-88ed-bef2a806f419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.470 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d35f874-b046-49fa-938c-63616dfd39c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.4920] device (tape0ff7905-a0): carrier: link connected
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[36927694-4da9-4e5d-a8dc-0a89c4d13bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.512 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e54fd4d-ce25-4051-aa05-696e493e6c76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339986, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e0b6ee-fc49-4246-ab37-1e4730dc9d37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:d2fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535889, 'tstamp': 535889}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339987, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cbe7d-93f5-41f8-a00d-51c89ed47c2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339988, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.569 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe65bff-2903-4f23-b5e8-6ad7140a8a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfc38d7-3140-494c-a7fa-7708cc27e738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.625 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.625 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.626 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:09 compute-0 kernel: tape0ff7905-a0: entered promiscuous mode
Feb 25 12:43:09 compute-0 NetworkManager[49836]: <info>  [1772023389.6286] manager: (tape0ff7905-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.631 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
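Taken together, the DelPortCommand / AddPortCommand / DbSetCommand transactions above re-plug the metadata veth from br-ex into br-int and stamp its Interface record with the iface-id that lets ovn-controller bind it. A rough standalone equivalent with ovsdbapp's Open vSwitch API (the socket path and timeout below are assumptions):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local ovsdb-server (assumed default socket path).
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

port = 'tape0ff7905-a0'
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port(port, bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', port, may_exist=True))
    txn.add(ovs.db_set(
        'Interface', port,
        ('external_ids',
         {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'})))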
Feb 25 12:43:09 compute-0 ovn_controller[147040]: 2026-02-25T12:43:09Z|01068|binding|INFO|Releasing lport a592354c-38c0-41be-9126-87d6dec6c687 from this chassis (sb_readonly=0)
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.635 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.636 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df1751eb-0757-496e-995e-3057f2250c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.636 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:43:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.637 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'env', 'PROCESS_TAG=haproxy-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0ff7905-af45-428a-b6a0-d6e1209fd009.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
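Stripped of rootwrap, that command line is just haproxy started inside the ovnmeta namespace so it can bind 169.254.169.254:80 there; the daemon directive in the rendered config makes it background itself. A minimal stdlib sketch (the helper function is ours, not Neutron's, and needs root like the rootwrap-invoked original):

import subprocess

def start_metadata_haproxy(network_id):
    ns = 'ovnmeta-%s' % network_id
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
    # 'ip netns exec' gives haproxy the namespace's network view.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg],
                   check=True)

start_metadata_haproxy('e0ff7905-af45-428a-b6a0-d6e1209fd009')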
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.700 244018 DEBUG nova.compute.manager [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.701 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.701 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.702 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.702 244018 DEBUG nova.compute.manager [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Processing event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.816 244018 DEBUG nova.compute.manager [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.817 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:09 compute-0 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG nova.compute.manager [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Processing event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
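The acquire/pop/release choreography above is Nova's external-event bookkeeping: before plugging VIFs, the driver registers which network-vif-plugged events it expects, and when Neutron delivers one through the API, pop_instance_event wakes the matching waiter under the per-instance "-events" lock (hence the "Instance event wait completed" line further down). Conceptually, ignoring Nova's eventlet details, the pattern is:

import threading

class InstanceEvents:
    # Illustrative: map instance -> {event name: waiter}.
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}

    def prepare(self, instance_uuid, event_name):
        with self._lock:
            waiter = threading.Event()
            self._events.setdefault(instance_uuid, {})[event_name] = waiter
        return waiter

    def pop_instance_event(self, instance_uuid, event_name):
        # Called when e.g. network-vif-plugged-<port-id> arrives.
        with self._lock:
            waiter = self._events.get(instance_uuid, {}).pop(event_name, None)
        if waiter:
            waiter.set()  # unblocks the thread in wait_for_instance_event

events = InstanceEvents()
w = events.prepare('cf7f6093', 'network-vif-plugged-7126aff6')
events.pop_instance_event('cf7f6093', 'network-vif-plugged-7126aff6')
assert w.wait(timeout=1)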
Feb 25 12:43:10 compute-0 podman[340020]: 2026-02-25 12:43:10.014763829 +0000 UTC m=+0.082445958 container create 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:43:10 compute-0 systemd[1]: Started libpod-conmon-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope.
Feb 25 12:43:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:10 compute-0 podman[340020]: 2026-02-25 12:43:09.965060836 +0000 UTC m=+0.032742985 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26baa473b5cd94140eb2f5364901afbf20c0a34db5790bb729eb08e64d9f641f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:10 compute-0 podman[340020]: 2026-02-25 12:43:10.149542532 +0000 UTC m=+0.217224691 container init 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:43:10 compute-0 podman[340020]: 2026-02-25 12:43:10.160177072 +0000 UTC m=+0.227859201 container start 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:10 compute-0 ceph-mon[76335]: pgmap v1859: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.172 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.173 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.172215, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.174 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Started (Lifecycle Event)
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.180 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.184 244018 INFO nova.virt.libvirt.driver [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance spawned successfully.
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.184 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:43:10 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : New worker (340084) forked
Feb 25 12:43:10 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : Loading success.
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.201 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.208 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.211 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.214 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.215 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7126aff6-e4c8-45c9-a3f7-6c333946b022 in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6859fcf-57ce-4f89-b681-659cf23d5891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.226 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4445989-91 in ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
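The veth pair announced by the NetworkManager and RTM_NEWLINK lines that follow keeps its -90 end in the root namespace for br-int and moves its -91 peer into the ovnmeta namespace. The privileged ip_lib calls wrap pyroute2, so an approximate standalone version is (a sketch, assuming pyroute2 is installed and the process runs as root):

from pyroute2 import IPRoute, netns

ns = 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da'
netns.create(ns)  # the agent creates this namespace a step earlier
ipr = IPRoute()
# Create the pair in the root namespace...
ipr.link('add', ifname='tape4445989-90', kind='veth', peer='tape4445989-91')
# ...then push the peer end into the metadata namespace and bring the
# root end up so OVS can attach it to br-int.
peer = ipr.link_lookup(ifname='tape4445989-91')[0]
ipr.link('set', index=peer, net_ns_fd=ns)
ipr.link('set', index=ipr.link_lookup(ifname='tape4445989-90')[0],
         state='up')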
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.228 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4445989-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c491bad-757b-43fd-bca5-363697357af1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.230 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51e3c271-b9cf-4945-a80b-bb770a0f2c22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.240 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0b7597-4898-4062-a9ae-1155344dc1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.172342, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Paused (Lifecycle Event)
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.260 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d99574a-9f04-4380-b975-160d5b709ffa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.281 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67299b00-259e-4031-a563-1317cd405022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 systemd-udevd[339970]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.287 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3f20a8-73b3-4de1-a89b-d793e4e156dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 NetworkManager[49836]: <info>  [1772023390.2884] manager: (tape4445989-90): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.311 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.314 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.178914, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Resumed (Lifecycle Event)
Feb 25 12:43:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.321 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b17a7f-9ae3-4ca1-b592-a65d42f27306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.323 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7dafc382-7c61-49a5-9b4e-f59e034925eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.331 244018 INFO nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 10.43 seconds to spawn the instance on the hypervisor.
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.331 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.337 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.338 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:10 compute-0 NetworkManager[49836]: <info>  [1772023390.3491] device (tape4445989-90): carrier: link connected
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.353 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[716f2907-e8b0-4e3e-a7c2-3906622597f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.364 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f809fa1-a215-4ba7-86f9-701a145db21c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340103, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b001c383-ae36-434e-9be0-655d9f7de120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:7d54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535975, 'tstamp': 535975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340104, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1180ae9d-6deb-4cd0-8199-580b174beee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340105, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e12aaac-3c5d-450c-8ba3-16fa25549d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.418 244018 INFO nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 11.56 seconds to build instance.
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.432 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4afe856-eaa5-445a-aa1e-35ffead04239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.442 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:10 compute-0 NetworkManager[49836]: <info>  [1772023390.4450] manager: (tape4445989-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Feb 25 12:43:10 compute-0 kernel: tape4445989-90: entered promiscuous mode
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.450 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:10 compute-0 ovn_controller[147040]: 2026-02-25T12:43:10Z|01069|binding|INFO|Releasing lport 4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5 from this chassis (sb_readonly=0)
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.454 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f47a8611-0d2f-467e-81b2-dada723b7491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.456 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:43:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.457 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'env', 'PROCESS_TAG=haproxy-e4445989-91e8-4869-98cb-32b4b81bb3da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4445989-91e8-4869-98cb-32b4b81bb3da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
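The sequence above is the OVN metadata agent's per-network proxy setup: it moves tape4445989-90 onto br-int, tags the interface with the lport's iface-id, renders the haproxy config it just dumped, and launches haproxy inside the ovnmeta-<network> namespace via rootwrap. A minimal sketch of that render-then-launch pattern follows; the names NETNS_PREFIX, render_haproxy_cfg and launch_proxy plus the trimmed template are illustrative assumptions, not Neutron's actual code (which lives in neutron/agent/ovn/metadata/driver.py per the create_config_file reference above).

# Sketch of the render-then-launch pattern above; helper names are assumed.
import subprocess
from string import Template

NETNS_PREFIX = "ovnmeta-"
STATE_DIR = "/var/lib/neutron"

# Trimmed version of the config dumped in the log (defaults section omitted).
CFG_TEMPLATE = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

def render_haproxy_cfg(network_id):
    """Write the per-network haproxy config and return its path."""
    cfg = CFG_TEMPLATE.substitute(
        network_id=network_id,
        pidfile="%s/external/pids/%s.pid.haproxy" % (STATE_DIR, network_id),
        socket_path="%s/metadata_proxy" % STATE_DIR,
    )
    path = "%s/ovn-metadata-proxy/%s.conf" % (STATE_DIR, network_id)
    with open(path, "w") as f:
        f.write(cfg)
    return path

def launch_proxy(network_id):
    """Start haproxy inside the per-network namespace (needs root)."""
    subprocess.check_call([
        "ip", "netns", "exec", NETNS_PREFIX + network_id,
        "haproxy", "-f", render_haproxy_cfg(network_id),
    ])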
Feb 25 12:43:10 compute-0 nova_compute[244014]: 2026-02-25 12:43:10.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:10 compute-0 podman[340135]: 2026-02-25 12:43:10.815960168 +0000 UTC m=+0.052615485 container create cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:43:10 compute-0 systemd[1]: Started libpod-conmon-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope.
Feb 25 12:43:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053075e96790c2347754d133c524b37fb39668500f025fbd5384e1ede98d4099/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:10 compute-0 podman[340135]: 2026-02-25 12:43:10.877815134 +0000 UTC m=+0.114470431 container init cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:43:10 compute-0 podman[340135]: 2026-02-25 12:43:10.882712202 +0000 UTC m=+0.119367479 container start cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:43:10 compute-0 podman[340135]: 2026-02-25 12:43:10.790368686 +0000 UTC m=+0.027023993 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:43:10 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : New worker (340156) forked
Feb 25 12:43:10 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : Loading success.
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 DEBUG nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 WARNING nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with vm_state active and task_state None.
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.901 244018 DEBUG nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.903 244018 DEBUG nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:11 compute-0 nova_compute[244014]: 2026-02-25 12:43:11.903 244018 WARNING nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with vm_state active and task_state None.
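The two WARNING lines are benign: nova's external-event handler takes the per-instance "-events" lock, tries to pop a registered waiter for the event name, and warns when nothing is waiting, which is expected when the VIF plug arrives while the instance is already active and nothing is blocked on it. A toy sketch of that pop-or-warn dispatch, with simplified names that only approximate nova.compute.manager:

# Simplified pop-or-warn dispatch modeled on the lock/pop/warn lines above.
# InstanceEvents here is a toy stand-in, not nova's class.
import logging
import threading
from collections import defaultdict

LOG = logging.getLogger(__name__)

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = defaultdict(dict)  # uuid -> {event_name: Event}

    def prepare_for_event(self, instance_uuid, event_name):
        with self._lock:
            ev = self._waiters[instance_uuid][event_name] = threading.Event()
        return ev

    def pop_instance_event(self, instance_uuid, event_name):
        # The 'Acquiring lock "<uuid>-events" ... acquired ... released'
        # triple in the log corresponds to this critical section.
        with self._lock:
            return self._waiters[instance_uuid].pop(event_name, None)

def handle_external_event(events, instance_uuid, event_name,
                          vm_state="active", task_state=None):
    waiter = events.pop_instance_event(instance_uuid, event_name)
    if waiter is None:
        # "No waiting events found dispatching ..." then the WARNING above.
        LOG.warning("Received unexpected event %s for instance with "
                    "vm_state %s and task_state %s.",
                    event_name, vm_state, task_state)
    else:
        waiter.set()  # wakes the thread blocked waiting on this event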
Feb 25 12:43:12 compute-0 ceph-mon[76335]: pgmap v1860: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:43:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:43:13 compute-0 nova_compute[244014]: 2026-02-25 12:43:13.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:13 compute-0 nova_compute[244014]: 2026-02-25 12:43:13.241 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023378.2400763, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:13 compute-0 nova_compute[244014]: 2026-02-25 12:43:13.242 244018 INFO nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Stopped (Lifecycle Event)
Feb 25 12:43:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:13 compute-0 nova_compute[244014]: 2026-02-25 12:43:13.722 244018 DEBUG nova.compute.manager [None req-5c95fc4a-77b5-47c6-b6b8-bf7ee5c49221 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:14 compute-0 ceph-mon[76335]: pgmap v1861: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 12:43:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 440 KiB/s wr, 95 op/s
Feb 25 12:43:15 compute-0 nova_compute[244014]: 2026-02-25 12:43:15.106 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:16 compute-0 NetworkManager[49836]: <info>  [1772023396.1526] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Feb 25 12:43:16 compute-0 NetworkManager[49836]: <info>  [1772023396.1533] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:16 compute-0 ceph-mon[76335]: pgmap v1862: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 440 KiB/s wr, 95 op/s
Feb 25 12:43:16 compute-0 ovn_controller[147040]: 2026-02-25T12:43:16Z|01070|binding|INFO|Releasing lport a592354c-38c0-41be-9126-87d6dec6c687 from this chassis (sb_readonly=0)
Feb 25 12:43:16 compute-0 ovn_controller[147040]: 2026-02-25T12:43:16Z|01071|binding|INFO|Releasing lport 4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5 from this chassis (sb_readonly=0)
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.441 244018 DEBUG nova.compute.manager [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.442 244018 DEBUG nova.compute.manager [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.442 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.443 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:16 compute-0 nova_compute[244014]: 2026-02-25 12:43:16.443 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:17 compute-0 nova_compute[244014]: 2026-02-25 12:43:17.951 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated VIF entry in instance network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:17 compute-0 nova_compute[244014]: 2026-02-25 12:43:17.952 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:17 compute-0 nova_compute[244014]: 2026-02-25 12:43:17.982 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
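The instance_info_cache payload logged at 12:43:17.952 is a JSON list of VIFs, each nesting a network, its subnets, fixed IPs, and optional floating_ips. Once deserialized, summarizing it is a few loops; a sketch assuming network_info is the already-parsed list:

# Walk a parsed network_info list (like the cache payload above) and print
# each port's addresses; `network_info` is assumed already deserialized.
def summarize_ports(network_info):
    for vif in network_info:
        label = vif["network"]["label"]
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                line = "%s %s: %s" % (vif["devname"], label, ip["address"])
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                if floats:
                    line += " (floating: %s)" % ", ".join(floats)
                print(line)

# For the entry above this prints:
#   tap473bf89e-f4 tempest-network-smoke--1128870534: 10.100.0.8 (floating: 192.168.122.211)
#   tap7126aff6-e4 tempest-network-smoke--397487013: 2001:db8::f816:3eff:fef5:6385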
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:18 compute-0 ceph-mon[76335]: pgmap v1863: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.570 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.571 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.593 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.671 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.672 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.682 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.683 244018 INFO nova.compute.claims [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:43:18 compute-0 nova_compute[244014]: 2026-02-25 12:43:18.866 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:43:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067404949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.394 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
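For RBD-backed storage nova shells out to ceph df during the resource claim; the same request is visible from the other side in the mon's audit channel at 12:43:19. A sketch of issuing that query and reading the cluster totals, assuming the standard top-level "stats" keys of ceph df's JSON output:

# Issue the capacity query nova logs above and read the cluster totals.
# The "stats" keys follow `ceph df --format=json` output.
import json
import subprocess

def ceph_capacity(user="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
    stats = json.loads(out)["stats"]
    return stats["total_bytes"], stats["total_avail_bytes"]

# The pgmap lines above report "59 GiB / 60 GiB avail", so
# total_avail_bytes / total_bytes here would come out near 0.98.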
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.402 244018 DEBUG nova.compute.provider_tree [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.420 244018 DEBUG nova.scheduler.client.report [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.448 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
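Placement derives usable capacity from the inventory logged at 12:43:19.420 as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. The arithmetic on the exact numbers above:

# (total - reserved) * allocation_ratio on the logged inventory.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2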
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.450 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.517 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.518 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.542 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.563 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.670 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.671 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.672 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating image(s)
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.703 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.730 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.754 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.759 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.790 244018 DEBUG nova.policy [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.824 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.824 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.825 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.826 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
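The qemu-img probe at 12:43:19.759 is deliberately wrapped in oslo_concurrency.prlimit (1 GiB address space, 30 s of CPU) so a malformed or hostile image cannot exhaust the host, and the surrounding per-image lock serializes concurrent fetches of the same cached base image. Reproducing the logged probe and pulling two fields from qemu-img's JSON output:

# Reproduce the logged probe: qemu-img info as JSON, under oslo's prlimit
# wrapper (1 GiB address space, 30 s CPU) so a hostile image is contained.
import json
import subprocess

def qemu_img_info(path):
    out = subprocess.check_output([
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json"])
    info = json.loads(out)
    # "format" and "virtual-size" are standard qemu-img JSON fields.
    return info["format"], info["virtual-size"]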
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.850 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:19 compute-0 nova_compute[244014]: 2026-02-25 12:43:19.857 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:20 compute-0 ceph-mon[76335]: pgmap v1864: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4067404949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.651 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.721 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
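The root-disk flow here is: import the cached base image into the vms pool as <uuid>_disk, then grow it to the flavor's root size (1073741824 bytes = 1 GiB, matching m1.nano's root_gb=1 logged further down). Nova performs the resize through librbd; a CLI-based sketch of the same import-and-resize, with the caveat that the rbd resize flags below are an assumption rather than taken from the log:

# Import a local base image into RBD and grow it to the flavor root size,
# mirroring the logged `rbd import` and "resizing rbd image ..." steps.
import subprocess

def import_and_resize(base_path, image_name, size_bytes,
                      pool="vms", user="openstack",
                      conf="/etc/ceph/ceph.conf"):
    subprocess.check_call([
        "rbd", "import", "--pool", pool, base_path, image_name,
        "--image-format=2", "--id", user, "--conf", conf])
    size_mib = size_bytes // (1024 * 1024)   # rbd --size defaults to MiB
    subprocess.check_call([
        "rbd", "resize", "--pool", pool, "--image", image_name,
        "--size", str(size_mib), "--id", user, "--conf", conf])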
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.917 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:43:20 compute-0 nova_compute[244014]: 2026-02-25 12:43:20.981 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Successfully created port: 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.169 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.169 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.170 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.170 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.287 244018 DEBUG nova.objects.instance [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.301 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.302 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ensure instance console log exists: /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.302 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.303 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.303 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:21 compute-0 ceph-mon[76335]: pgmap v1865: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.725 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Successfully updated port: 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.743 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.743 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.744 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG nova.compute.manager [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG nova.compute.manager [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:21 compute-0 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:22 compute-0 nova_compute[244014]: 2026-02-25 12:43:22.139 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:43:22 compute-0 ovn_controller[147040]: 2026-02-25T12:43:22Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 12:43:22 compute-0 ovn_controller[147040]: 2026-02-25T12:43:22Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 12:43:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 241 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 12:43:22 compute-0 podman[340355]: 2026-02-25 12:43:22.767368662 +0000 UTC m=+0.103101821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.335 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.354 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.355 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance network_info: |[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.356 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.356 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.361 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start _get_guest_xml network_info=[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.368 244018 WARNING nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.374 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.375 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.388 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.389 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
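The four host.py entries above probe for a CPU controller, first through cgroups v1 (missing) and then v2 (found). On a cgroups-v2 host that check boils down to reading the unified hierarchy's controller list; a rough equivalent, assuming the standard /sys/fs/cgroup mount point:

    # Rough equivalent of the v2 probe above: on a cgroups-v2 host the
    # enabled controllers are listed in one file under the unified mount.
    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        try:
            with open('%s/cgroup.controllers' % root) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            # No unified hierarchy; the host is cgroups v1 only.
            return False

    print(has_cgroupsv2_cpu_controller())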
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.390 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.391 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.392 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.393 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.393 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.394 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.394 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.395 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.395 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.396 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.397 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.397 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
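The hardware.py entries above reduce to a single candidate topology, 1 socket x 1 core x 1 thread, because the m1.nano flavor has one vCPU and neither flavor nor image sets a preference (0:0:0 means "no preference") or a limit (the 65536 values are defaults). A toy enumeration of valid (sockets, cores, threads) triples under such limits, loosely modeled on those messages rather than nova's actual algorithm:

    # Toy enumeration of CPU topologies for a vCPU count; loosely modeled
    # on the hardware.py messages above, not nova's implementation.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    # For the 1-vCPU m1.nano flavor only one triple survives:
    print(possible_topologies(1))  # [(1, 1, 1)]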
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.404 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:23 compute-0 ceph-mon[76335]: pgmap v1866: 305 pgs: 305 active+clean; 241 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 12:43:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005405089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.942 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
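The subprocess pair above is nova's RBD backend discovering monitor addresses with `ceph mon dump --format=json`; the result feeds the <host> elements of the network disks in the guest XML further down. A hedged sketch of the same call with plain subprocess; the "mons"/"addr" JSON fields follow ceph's documented output and may differ slightly across releases:

    import json
    import subprocess

    def get_mon_addrs(client='openstack', conf='/etc/ceph/ceph.conf'):
        # Same command oslo.concurrency is shown running above.
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', client, '--conf', conf])
        mon_dump = json.loads(out)
        # Each mon entry carries an "addr" such as "192.168.122.100:6789/0";
        # dropping the /nonce leaves host:port pairs for the disk XML.
        return [m['addr'].split('/')[0] for m in mon_dump.get('mons', [])]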
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.977 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:23 compute-0 nova_compute[244014]: 2026-02-25 12:43:23.982 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 25 12:43:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958499659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.514 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.516 244018 DEBUG nova.virt.libvirt.vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.516 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.517 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.518 244018 DEBUG nova.objects.instance [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.534 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <uuid>b5941b54-9cd2-465c-89c0-3cf87ebed83e</uuid>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <name>instance-0000006e</name>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1420097374</nova:name>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:43:23</nova:creationTime>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <nova:port uuid="43ea1958-7fd9-47b6-be81-3eeb1b3801a0">
Feb 25 12:43:24 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <system>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="serial">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="uuid">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </system>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <os>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </os>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <features>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </features>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk">
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config">
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:24 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3e:93:a9"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <target dev="tap43ea1958-7f"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log" append="off"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <video>
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </video>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:43:24 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:43:24 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:43:24 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:43:24 compute-0 nova_compute[244014]: </domain>
Feb 25 12:43:24 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:43:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2005405089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1958499659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
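The <domain> document above is what _get_guest_xml hands to libvirt. When debugging a spawn, the interesting pieces (machine type, RBD disk sources, VIF target and MAC) can be pulled back out with the standard library; a minimal sketch, assuming the XML has been saved to a local guest.xml file:

    import xml.etree.ElementTree as ET

    root = ET.parse('guest.xml').getroot()  # the <domain> dumped above

    print(root.findtext('name'))                # instance-0000006e
    print(root.find('os/type').get('machine'))  # q35
    for disk in root.findall('devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))
    for iface in root.findall('devices/interface'):
        print(iface.find('target').get('dev'),
              iface.find('mac').get('address'))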
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.535 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Preparing to wait for external event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.536 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.536 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.537 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
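Before plugging the VIF, the compute manager registers, under the per-instance "-events" lock, an event it will later wait on until Neutron reports network-vif-plugged. A simplified model of that prepare-then-wait pattern using threading primitives (nova itself runs on eventlet); the function names mirror the log but the code is illustrative only:

    import threading

    # Simplified model of prepare_for_instance_event: register an event
    # object under a lock before plugging, wait on it afterwards.
    _events_lock = threading.Lock()
    _pending_events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        with _events_lock:
            return _pending_events.setdefault((instance_uuid, event_name),
                                              threading.Event())

    def emit_event(instance_uuid, event_name):
        # Invoked when the Neutron notification reaches the compute API.
        with _events_lock:
            ev = _pending_events.get((instance_uuid, event_name))
        if ev:
            ev.set()

    uuid = 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'
    ev = prepare_for_instance_event(uuid, 'network-vif-plugged')
    emit_event(uuid, 'network-vif-plugged')  # normally driven by Neutron
    assert ev.wait(timeout=300)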
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.537 244018 DEBUG nova.virt.libvirt.vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.538 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG os_vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ea1958-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ea1958-7f, col_values=(('external_ids', {'iface-id': '43ea1958-7fd9-47b6-be81-3eeb1b3801a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:93:a9', 'vm-uuid': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:24 compute-0 NetworkManager[49836]: <info>  [1772023404.5494] manager: (tap43ea1958-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.555 244018 INFO os_vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')
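The AddPortCommand/DbSetCommand transaction pair above is what os-vif issues through ovsdbapp; the external_ids written on the Interface row (in particular iface-id) are what ovn-controller later matches to claim the port. A sketch of the command-line equivalent driven from Python; plug_ovs_port() is a hypothetical helper, not part of os-vif:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # ovs-vsctl equivalent of the AddPortCommand/DbSetCommand pair
        # above; ovn-controller claims the port by its iface-id.
        subprocess.check_call([
            'ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external_ids:iface-id=%s' % iface_id,
            'external_ids:iface-status=active',
            'external_ids:attached-mac=%s' % mac,
            'external_ids:vm-uuid=%s' % vm_uuid,
        ])

    plug_ovs_port('br-int', 'tap43ea1958-7f',
                  '43ea1958-7fd9-47b6-be81-3eeb1b3801a0',
                  'fa:16:3e:3e:93:a9',
                  'b5941b54-9cd2-465c-89c0-3cf87ebed83e')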
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.616 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:93:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.619 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Using config drive
Feb 25 12:43:24 compute-0 nova_compute[244014]: 2026-02-25 12:43:24.647 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:24 compute-0 podman[340440]: 2026-02-25 12:43:24.663086959 +0000 UTC m=+0.075002858 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.252 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.279 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:25 compute-0 ceph-mon[76335]: pgmap v1867: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.706 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.707 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.719 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating config drive at /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.725 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxrp36wmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.762 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.873 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxrp36wmi" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.922 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.929 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.960 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.962 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.984 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:43:25 compute-0 nova_compute[244014]: 2026-02-25 12:43:25.986 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
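As part of the resource audit, nova shells out to `ceph df --format=json` to size the RBD pool for its placement report. A sketch of parsing that output for cluster capacity; the "stats" field names follow ceph's documented JSON and may vary between releases:

    import json
    import subprocess

    def rbd_pool_capacity(client='openstack', conf='/etc/ceph/ceph.conf'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf])
        df = json.loads(out)
        stats = df['stats']  # cluster-wide totals; per-pool data in df['pools']
        return stats['total_bytes'], stats['total_avail_bytes']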
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.085 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.085 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting local config drive /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config because it was imported into RBD.
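The entries from "Creating config drive" through "Deleting local config drive" show the config drive lifecycle when nova's images live in RBD: build a config-2 ISO locally with mkisofs, `rbd import` it as <uuid>_disk.config (the name the cdrom <source> in the guest XML references), then remove the local copy. A condensed sketch of those three steps; the -publisher and -quiet flags from the logged command are omitted:

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid, staging_dir,
                                      pool='vms', client='openstack',
                                      conf='/etc/ceph/ceph.conf'):
        iso = '/var/lib/nova/instances/%s/disk.config' % instance_uuid
        # 1. Pack the metadata directory into an ISO9660 volume labelled
        #    "config-2", as in the mkisofs command logged above.
        subprocess.check_call(
            ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
             '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
             staging_dir])
        # 2. Import it into RBD under the name the guest XML references.
        subprocess.check_call(
            ['rbd', 'import', '--pool', pool, iso,
             '%s_disk.config' % instance_uuid, '--image-format=2',
             '--id', client, '--conf', conf])
        # 3. The local file is only a staging artifact once imported.
        os.unlink(iso)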
Feb 25 12:43:26 compute-0 kernel: tap43ea1958-7f: entered promiscuous mode
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.1266] manager: (tap43ea1958-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_controller[147040]: 2026-02-25T12:43:26Z|01072|binding|INFO|Claiming lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for this chassis.
Feb 25 12:43:26 compute-0 ovn_controller[147040]: 2026-02-25T12:43:26Z|01073|binding|INFO|43ea1958-7fd9-47b6-be81-3eeb1b3801a0: Claiming fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_controller[147040]: 2026-02-25T12:43:26Z|01074|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 ovn-installed in OVS
Feb 25 12:43:26 compute-0 ovn_controller[147040]: 2026-02-25T12:43:26Z|01075|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 up in Southbound
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.138 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.141 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 bound to our chassis
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.144 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6995a66e-d6be-4890-a4b6-62e783077163]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.153 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8edf066-31 in ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.156 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8edf066-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8816d6c2-16a8-458b-8541-5a00917a4586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.157 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66f06533-929f-4276-a0be-6eeb607611de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 systemd-machined[210048]: New machine qemu-138-instance-0000006e.
Feb 25 12:43:26 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.171 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5192eda7-5015-45ff-a5f1-e1033b2f817b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 systemd-udevd[340560]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.1856] device (tap43ea1958-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.1863] device (tap43ea1958-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.203 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82b489b0-f72e-4ed9-a409-d55b3cc15a32]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.233 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9d648dff-22d4-4275-9128-2bcb4c75a7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 systemd-udevd[340563]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.2395] manager: (tapf8edf066-30): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.239 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e2b5d4-2820-4378-9cb7-0798e23edb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.268 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[db251ba8-d132-4f4c-8645-199cbf294473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.271 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3f0727-872b-4cf1-a284-15ae167ed41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.2901] device (tapf8edf066-30): carrier: link connected
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.295 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ded17a2f-b181-4eae-8b23-4e6400a52b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.307 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8535a312-0592-4b77-ab16-bfff5b758954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537569, 'reachable_time': 16694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340591, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.317 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[998db363-8cfd-400b-8425-d061a2bef22c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:6d18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537569, 'tstamp': 537569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340592, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2695d9c-44be-4e0f-b751-9ceeb3a9aa20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537569, 'reachable_time': 16694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340593, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.352 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bca364d5-5f37-4107-9d92-e263a25776ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.384 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78e60fb0-b005-490e-8306-5af220176a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8edf066-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 NetworkManager[49836]: <info>  [1772023406.3875] manager: (tapf8edf066-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Feb 25 12:43:26 compute-0 kernel: tapf8edf066-30: entered promiscuous mode
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.392 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8edf066-30, col_values=(('external_ids', {'iface-id': '537184a5-5d27-4b28-acba-8f254f6dc5ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_controller[147040]: 2026-02-25T12:43:26Z|01076|binding|INFO|Releasing lport 537184a5-5d27-4b28-acba-8f254f6dc5ca from this chassis (sb_readonly=0)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.403 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cea9d25c-21c8-4ee8-bf84-bfa1f3a488ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.404 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:43:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.405 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'env', 'PROCESS_TAG=haproxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG nova.compute.manager [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.470 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.470 244018 DEBUG nova.compute.manager [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Processing event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:43:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:43:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285557306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.580 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2285557306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.618881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406618916, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2033, "num_deletes": 253, "total_data_size": 3198380, "memory_usage": 3239728, "flush_reason": "Manual Compaction"}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406633448, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3130447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38039, "largest_seqno": 40071, "table_properties": {"data_size": 3121300, "index_size": 5641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19516, "raw_average_key_size": 20, "raw_value_size": 3102733, "raw_average_value_size": 3252, "num_data_blocks": 249, "num_entries": 954, "num_filter_entries": 954, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023208, "oldest_key_time": 1772023208, "file_creation_time": 1772023406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 14613 microseconds, and 4575 cpu microseconds.
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.633491) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3130447 bytes OK
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.633509) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635227) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635239) EVENT_LOG_v1 {"time_micros": 1772023406635235, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3189752, prev total WAL file size 3189752, number of live WAL files 2.
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.639774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3057KB)], [86(7679KB)]
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406639837, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 10994511, "oldest_snapshot_seqno": -1}
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.661 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.661 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6467 keys, 9273776 bytes, temperature: kUnknown
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406694932, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9273776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9230328, "index_size": 26186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164216, "raw_average_key_size": 25, "raw_value_size": 9114477, "raw_average_value_size": 1409, "num_data_blocks": 1051, "num_entries": 6467, "num_filter_entries": 6467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.695780) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9273776 bytes
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.697360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.1 rd, 166.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6989, records dropped: 522 output_compression: NoCompression
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.697376) EVENT_LOG_v1 {"time_micros": 1772023406697368, "job": 50, "event": "compaction_finished", "compaction_time_micros": 55782, "compaction_time_cpu_micros": 16072, "output_level": 6, "num_output_files": 1, "total_output_size": 9273776, "num_input_records": 6989, "num_output_records": 6467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406697676, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406698860, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.775 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.7752013, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.776 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Started (Lifecycle Event)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.778 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.781 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.784 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance spawned successfully.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.784 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.801 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.806 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.810 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.810 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.812 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:26 compute-0 podman[340669]: 2026-02-25 12:43:26.822197308 +0000 UTC m=+0.056658100 container create 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.841 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.842 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.775345, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.842 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Paused (Lifecycle Event)
Feb 25 12:43:26 compute-0 systemd[1]: Started libpod-conmon-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.874 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.7813468, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Resumed (Lifecycle Event)
Feb 25 12:43:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.886 244018 INFO nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 7.22 seconds to spawn the instance on the hypervisor.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.887 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:26 compute-0 podman[340669]: 2026-02-25 12:43:26.793399735 +0000 UTC m=+0.027860577 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd30b2a51b99c22ab6484bb072cec6f6373098acd12eff7ecbd61ac2cb75c84/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.898 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.92203834373504GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:26 compute-0 podman[340669]: 2026-02-25 12:43:26.901343441 +0000 UTC m=+0.135804263 container init 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:43:26 compute-0 podman[340669]: 2026-02-25 12:43:26.905676483 +0000 UTC m=+0.140137285 container start 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.920 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:26 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : New worker (340690) forked
Feb 25 12:43:26 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : Loading success.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.974 244018 INFO nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 8.34 seconds to build instance.
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b5941b54-9cd2-465c-89c0-3cf87ebed83e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:43:26 compute-0 nova_compute[244014]: 2026-02-25 12:43:26.992 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:43:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241667786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.563 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.577 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.599 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.638 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:43:27 compute-0 nova_compute[244014]: 2026-02-25 12:43:27.638 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:27 compute-0 ceph-mon[76335]: pgmap v1868: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Feb 25 12:43:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3241667786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.553 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.554 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.555 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.644 244018 DEBUG nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.644 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:28 compute-0 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 WARNING nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state None.
Feb 25 12:43:29 compute-0 nova_compute[244014]: 2026-02-25 12:43:29.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:29 compute-0 ceph-mon[76335]: pgmap v1869: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.714 244018 DEBUG nova.compute.manager [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.714 244018 DEBUG nova.compute.manager [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:30 compute-0 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:43:30
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'vms', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 12:43:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:43:31 compute-0 ceph-mon[76335]: pgmap v1870: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 12:43:31 compute-0 nova_compute[244014]: 2026-02-25 12:43:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:31 compute-0 nova_compute[244014]: 2026-02-25 12:43:31.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:43:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:43:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Feb 25 12:43:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:33 compute-0 nova_compute[244014]: 2026-02-25 12:43:33.770 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:33 compute-0 nova_compute[244014]: 2026-02-25 12:43:33.770 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:33 compute-0 nova_compute[244014]: 2026-02-25 12:43:33.798 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:33 compute-0 ceph-mon[76335]: pgmap v1871: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Feb 25 12:43:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Feb 25 12:43:34 compute-0 nova_compute[244014]: 2026-02-25 12:43:34.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:35 compute-0 nova_compute[244014]: 2026-02-25 12:43:35.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:35 compute-0 ceph-mon[76335]: pgmap v1872: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Feb 25 12:43:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 86 KiB/s wr, 82 op/s
Feb 25 12:43:36 compute-0 nova_compute[244014]: 2026-02-25 12:43:36.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.670 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.670 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.692 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.797 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.798 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.805 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.806 244018 INFO nova.compute.claims [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:43:37 compute-0 nova_compute[244014]: 2026-02-25 12:43:37.986 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:38 compute-0 ceph-mon[76335]: pgmap v1873: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 86 KiB/s wr, 82 op/s
Feb 25 12:43:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Feb 25 12:43:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:43:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209978200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.535 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.540 244018 DEBUG nova.compute.provider_tree [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.559 244018 DEBUG nova.scheduler.client.report [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.583 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.584 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.636 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.637 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.655 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.676 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.770 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.771 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.771 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating image(s)
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.788 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.810 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.835 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.846 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.917 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.918 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.938 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:38 compute-0 nova_compute[244014]: 2026-02-25 12:43:38.941 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.057 244018 DEBUG nova.policy [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:43:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2209978200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.440 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.502 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:39 compute-0 ovn_controller[147040]: 2026-02-25T12:43:39Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:43:39 compute-0 ovn_controller[147040]: 2026-02-25T12:43:39Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.748 244018 DEBUG nova.objects.instance [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.822 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.823 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Ensure instance console log exists: /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.824 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.824 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.825 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:39 compute-0 nova_compute[244014]: 2026-02-25 12:43:39.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:39.831 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:39.833 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:43:40 compute-0 nova_compute[244014]: 2026-02-25 12:43:40.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:40 compute-0 nova_compute[244014]: 2026-02-25 12:43:40.155 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully created port: d71dae92-b542-404e-b4cc-ecad408ed655 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:43:40 compute-0 ceph-mon[76335]: pgmap v1874: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Feb 25 12:43:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 973 KiB/s wr, 79 op/s
Feb 25 12:43:40 compute-0 sudo[340911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:43:40 compute-0 sudo[340911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:40 compute-0 sudo[340911]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:40 compute-0 sudo[340936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 12:43:40 compute-0 sudo[340936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:40 compute-0 podman[341004]: 2026-02-25 12:43:40.867536201 +0000 UTC m=+0.146148055 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:43:40 compute-0 podman[341004]: 2026-02-25 12:43:40.958004084 +0000 UTC m=+0.236615938 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:43:41 compute-0 nova_compute[244014]: 2026-02-25 12:43:41.108 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully created port: 18ca6c9a-c1c9-4a48-b124-25942ebef5df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:43:41 compute-0 sudo[340936]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:43:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:43:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:41 compute-0 sudo[341194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:43:41 compute-0 sudo[341194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:41 compute-0 sudo[341194]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:41 compute-0 sudo[341219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:43:41 compute-0 sudo[341219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: pgmap v1875: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 973 KiB/s wr, 79 op/s
Feb 25 12:43:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.308 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully updated port: d71dae92-b542-404e-b4cc-ecad408ed655 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 345 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 MiB/s wr, 126 op/s
Feb 25 12:43:42 compute-0 sudo[341219]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.compute.manager [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.compute.manager [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:43:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:43:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018414802412164857 of space, bias 1.0, pg target 0.5524440723649457 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493766335806996 of space, bias 1.0, pg target 0.7481299007420988 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0089591090207362e-06 of space, bias 4.0, pg target 0.0012107509308248836 quantized to 16 (current 16)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:43:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:43:42 compute-0 sudo[341276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:43:42 compute-0 sudo[341276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:42 compute-0 sudo[341276]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:42 compute-0 sudo[341301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:43:42 compute-0 sudo[341301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:42 compute-0 nova_compute[244014]: 2026-02-25 12:43:42.726 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:43:42 compute-0 podman[341338]: 2026-02-25 12:43:42.84126587 +0000 UTC m=+0.067074294 container create 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:42 compute-0 podman[341338]: 2026-02-25 12:43:42.79239021 +0000 UTC m=+0.018198534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:42 compute-0 systemd[1]: Started libpod-conmon-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope.
Feb 25 12:43:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:42 compute-0 podman[341338]: 2026-02-25 12:43:42.948087764 +0000 UTC m=+0.173896098 container init 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:43:42 compute-0 podman[341338]: 2026-02-25 12:43:42.958725025 +0000 UTC m=+0.184533329 container start 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 12:43:42 compute-0 hopeful_zhukovsky[341354]: 167 167
Feb 25 12:43:42 compute-0 systemd[1]: libpod-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope: Deactivated successfully.
Feb 25 12:43:42 compute-0 conmon[341354]: conmon 7d13092960c6c10f58e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope/container/memory.events
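The conmon warning above is a benign race: the helper container exited almost immediately, so systemd tore down the libpod scope (and its cgroup) before conmon could read memory.events. A sketch of the tolerant-read pattern for that file, using the path layout shown in the warning:

    from pathlib import Path

    def read_memory_events(scope: str) -> dict:
        """Best-effort read of a container cgroup's memory.events; returns {}
        if the cgroup is already gone (the race the warning above reports)."""
        path = Path("/sys/fs/cgroup/machine.slice") / scope / "container" / "memory.events"
        try:
            text = path.read_text()
        except FileNotFoundError:
            return {}
        return {k: int(v) for k, v in (line.split() for line in text.splitlines())}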
Feb 25 12:43:43 compute-0 podman[341338]: 2026-02-25 12:43:43.022379271 +0000 UTC m=+0.248187625 container attach 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:43 compute-0 podman[341338]: 2026-02-25 12:43:43.023368809 +0000 UTC m=+0.249177123 container died 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.079 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.100 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf0a9835b7b569ec5b9b1e0c2a1a62dd0f66cc068b5bbe5e7f66f2f73318b9a-merged.mount: Deactivated successfully.
Feb 25 12:43:43 compute-0 podman[341338]: 2026-02-25 12:43:43.15242309 +0000 UTC m=+0.378231394 container remove 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:43:43 compute-0 systemd[1]: libpod-conmon-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope: Deactivated successfully.
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:43:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
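Before each ceph-volume helper run, the mgr dispatches "config generate-minimal-conf" and "auth get client.bootstrap-osd" to the mon (the audit lines above), and cephadm injects the results into the container via --config-json. Both mon commands can be reproduced from the CLI; a sketch with a hypothetical wrapper:

    import subprocess

    def minimal_conf_and_keyring() -> tuple[str, str]:
        """Mirror the two mon commands dispatched above: fetch a minimal
        ceph.conf and the client.bootstrap-osd keyring (needs admin creds)."""
        conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                              capture_output=True, text=True, check=True).stdout
        keyring = subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"],
                                 capture_output=True, text=True, check=True).stdout
        return conf, keyring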
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.308954238 +0000 UTC m=+0.046750261 container create beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:43:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.282862522 +0000 UTC m=+0.020658575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:43 compute-0 systemd[1]: Started libpod-conmon-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope.
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.404 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully updated port: 18ca6c9a-c1c9-4a48-b124-25942ebef5df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:43:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.425 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.426 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.426 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.486309773 +0000 UTC m=+0.224105816 container init beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.492681663 +0000 UTC m=+0.230477686 container start beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.496627534 +0000 UTC m=+0.234423577 container attach beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:43:43 compute-0 nova_compute[244014]: 2026-02-25 12:43:43.620 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:43:43 compute-0 cool_lamport[341395]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:43:43 compute-0 cool_lamport[341395]: --> All data devices are unavailable
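"All data devices are unavailable" here is the expected outcome of a re-run: the three LVs already carry ceph.osd_id/ceph.osd_fsid tags from the earlier prepare (visible in the lvm list output further down), so "lvm batch" filters them out instead of re-creating OSDs. A sketch of that filtering idea against lvs JSON output (an approximation, not ceph-volume's actual code):

    import json
    import subprocess

    def untagged_ceph_lvs() -> list[str]:
        """Return LV paths that do NOT yet carry ceph.* tags, i.e. are still
        available for 'ceph-volume lvm batch' to prepare."""
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
            capture_output=True, text=True, check=True,
        ).stdout
        report = json.loads(out)["report"][0]["lv"]
        return [lv["lv_path"] for lv in report if "ceph.osd_id=" not in lv["lv_tags"]]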
Feb 25 12:43:43 compute-0 systemd[1]: libpod-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope: Deactivated successfully.
Feb 25 12:43:43 compute-0 podman[341378]: 2026-02-25 12:43:43.968230281 +0000 UTC m=+0.706026304 container died beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b-merged.mount: Deactivated successfully.
Feb 25 12:43:44 compute-0 podman[341378]: 2026-02-25 12:43:44.126182409 +0000 UTC m=+0.863978432 container remove beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:43:44 compute-0 systemd[1]: libpod-conmon-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope: Deactivated successfully.
Feb 25 12:43:44 compute-0 sudo[341301]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:44 compute-0 sudo[341429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:43:44 compute-0 sudo[341429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:44 compute-0 sudo[341429]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:44 compute-0 sudo[341454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:43:44 compute-0 sudo[341454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:44 compute-0 ceph-mon[76335]: pgmap v1876: 305 pgs: 305 active+clean; 345 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 MiB/s wr, 126 op/s
Feb 25 12:43:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.543675711 +0000 UTC m=+0.042450579 container create e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:44 compute-0 nova_compute[244014]: 2026-02-25 12:43:44.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:44 compute-0 systemd[1]: Started libpod-conmon-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope.
Feb 25 12:43:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.519670393 +0000 UTC m=+0.018445311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.629684938 +0000 UTC m=+0.128459826 container init e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:43:44 compute-0 nova_compute[244014]: 2026-02-25 12:43:44.634 244018 DEBUG nova.compute.manager [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:44 compute-0 nova_compute[244014]: 2026-02-25 12:43:44.635 244018 DEBUG nova.compute.manager [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-18ca6c9a-c1c9-4a48-b124-25942ebef5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:44 compute-0 nova_compute[244014]: 2026-02-25 12:43:44.635 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.637397225 +0000 UTC m=+0.136172093 container start e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:43:44 compute-0 competent_curran[341506]: 167 167
Feb 25 12:43:44 compute-0 systemd[1]: libpod-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope: Deactivated successfully.
Feb 25 12:43:44 compute-0 conmon[341506]: conmon e20b8a1f5b89668f9d40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope/container/memory.events
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.644274399 +0000 UTC m=+0.143049267 container attach e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.645078902 +0000 UTC m=+0.143853770 container died e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b8ccd0177ac1cb836290a29b3a9c626be74b5d3d077a4a81f025eabe460b5b-merged.mount: Deactivated successfully.
Feb 25 12:43:44 compute-0 podman[341490]: 2026-02-25 12:43:44.764146602 +0000 UTC m=+0.262921480 container remove e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 25 12:43:44 compute-0 systemd[1]: libpod-conmon-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope: Deactivated successfully.
Feb 25 12:43:44 compute-0 podman[341530]: 2026-02-25 12:43:44.965468524 +0000 UTC m=+0.090798254 container create 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:43:44 compute-0 podman[341530]: 2026-02-25 12:43:44.893002799 +0000 UTC m=+0.018332519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:45 compute-0 systemd[1]: Started libpod-conmon-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope.
Feb 25 12:43:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:45 compute-0 podman[341530]: 2026-02-25 12:43:45.191258865 +0000 UTC m=+0.316588595 container init 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:43:45 compute-0 podman[341530]: 2026-02-25 12:43:45.198033187 +0000 UTC m=+0.323362887 container start 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:43:45 compute-0 podman[341530]: 2026-02-25 12:43:45.202607496 +0000 UTC m=+0.327937266 container attach 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:43:45 compute-0 ceph-mon[76335]: pgmap v1877: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Feb 25 12:43:45 compute-0 jovial_nobel[341548]: {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     "0": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "devices": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "/dev/loop3"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             ],
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_name": "ceph_lv0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_size": "21470642176",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "name": "ceph_lv0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "tags": {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_name": "ceph",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.crush_device_class": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.encrypted": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.objectstore": "bluestore",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_id": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.vdo": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.with_tpm": "0"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             },
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "vg_name": "ceph_vg0"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         }
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     ],
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     "1": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "devices": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "/dev/loop4"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             ],
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_name": "ceph_lv1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_size": "21470642176",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "name": "ceph_lv1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "tags": {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_name": "ceph",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.crush_device_class": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.encrypted": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.objectstore": "bluestore",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_id": "1",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.vdo": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.with_tpm": "0"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             },
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "vg_name": "ceph_vg1"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         }
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     ],
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     "2": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "devices": [
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "/dev/loop5"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             ],
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_name": "ceph_lv2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_size": "21470642176",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "name": "ceph_lv2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "tags": {
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.cluster_name": "ceph",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.crush_device_class": "",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.encrypted": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.objectstore": "bluestore",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osd_id": "2",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.vdo": "0",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:                 "ceph.with_tpm": "0"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             },
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "type": "block",
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:             "vg_name": "ceph_vg2"
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:         }
Feb 25 12:43:45 compute-0 jovial_nobel[341548]:     ]
Feb 25 12:43:45 compute-0 jovial_nobel[341548]: }
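The JSON printed above is the output of the "ceph-volume lvm list --format json" call issued via sudo at 12:43:44. A small sketch reducing it to an OSD-id -> device table, assuming the blob has been captured into a string:

    import json

    def osd_device_map(lvm_list_json: str) -> dict[int, tuple[str, list[str]]]:
        """From 'ceph-volume lvm list --format json' output, map each OSD id
        to its LV path and underlying physical devices (here /dev/loop3..5)."""
        data = json.loads(lvm_list_json)
        result = {}
        for osd_id, entries in data.items():
            for entry in entries:
                if entry["type"] == "block":
                    result[int(osd_id)] = (entry["lv_path"], entry["devices"])
        return result

    # e.g. {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']), 1: (...), 2: (...)}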
Feb 25 12:43:45 compute-0 systemd[1]: libpod-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope: Deactivated successfully.
Feb 25 12:43:45 compute-0 podman[341530]: 2026-02-25 12:43:45.498273799 +0000 UTC m=+0.623603509 container died 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.590 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3-merged.mount: Deactivated successfully.
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.613 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance network_info: |[{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port 18ca6c9a-c1c9-4a48-b124-25942ebef5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.619 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start _get_guest_xml network_info=[{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
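The network_info blobs Nova logs above are lists of port dicts, each with nested subnets and fixed IPs. A sketch flattening one to (port id, fixed address) pairs, assuming the logged blob has already been parsed into Python structures (the helper name is hypothetical):

    def fixed_ips(network_info: list[dict]) -> list[tuple[str, str]]:
        """Flatten Nova network_info to (port_id, address) pairs, e.g.
        ('d71dae92-...', '10.100.0.4') and
        ('18ca6c9a-...', '2001:db8::f816:3eff:fe5d:4f81')."""
        pairs = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        pairs.append((vif["id"], ip["address"]))
        return pairs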
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.625 244018 WARNING nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.631 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.632 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.640 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.640 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
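The two probes above report no cgroups-v1 cpu controller but a cgroups-v2 one, which on a unified-hierarchy host like this reduces to reading the root controllers file. A sketch of that check (not Nova's exact code):

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        """True if the unified cgroup hierarchy exposes the 'cpu' controller,
        matching the 'CPU controller found on host' probe above in spirit."""
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:  # not a cgroups-v2 host
            return False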
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:43:45 compute-0 nova_compute[244014]: 2026-02-25 12:43:45.647 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:45 compute-0 podman[341530]: 2026-02-25 12:43:45.648297903 +0000 UTC m=+0.773627603 container remove 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:43:45 compute-0 systemd[1]: libpod-conmon-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope: Deactivated successfully.
Feb 25 12:43:45 compute-0 sudo[341454]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:45 compute-0 sudo[341572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:43:45 compute-0 sudo[341572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:45 compute-0 sudo[341572]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:45 compute-0 sudo[341607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:43:45 compute-0 sudo[341607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.15316959 +0000 UTC m=+0.096519704 container create 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:43:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959454357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.078639727 +0000 UTC m=+0.021989931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.179 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:46 compute-0 systemd[1]: Started libpod-conmon-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope.
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.203 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.208 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.259734998 +0000 UTC m=+0.203085112 container init 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.2665433 +0000 UTC m=+0.209893394 container start 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:46 compute-0 reverent_bartik[341688]: 167 167
Feb 25 12:43:46 compute-0 systemd[1]: libpod-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope: Deactivated successfully.
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.317868258 +0000 UTC m=+0.261218352 container attach 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.318236799 +0000 UTC m=+0.261586913 container died 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:43:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-768e573fa1406d971f67a538e712a25d28eed7a9cd22650de07fe490b49670f9-merged.mount: Deactivated successfully.
Feb 25 12:43:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1959454357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.463 244018 INFO nova.compute.manager [None req-7ac9f207-9ab6-4eca-8ef7-61442abb41b3 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Get console output
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.477 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:43:46 compute-0 podman[341655]: 2026-02-25 12:43:46.617079302 +0000 UTC m=+0.560429436 container remove 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:43:46 compute-0 systemd[1]: libpod-conmon-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope: Deactivated successfully.
Feb 25 12:43:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349605789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.777 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.780 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.782 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.783 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.784 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.784 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.785 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.786 244018 DEBUG nova.objects.instance [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.803 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <uuid>c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</uuid>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <name>instance-0000006f</name>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1190047841</nova:name>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:43:45</nova:creationTime>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:port uuid="d71dae92-b542-404e-b4cc-ecad408ed655">
Feb 25 12:43:46 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <nova:port uuid="18ca6c9a-c1c9-4a48-b124-25942ebef5df">
Feb 25 12:43:46 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5d:4f81" ipVersion="6"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <system>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="serial">c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="uuid">c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </system>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <os>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </os>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <features>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </features>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk">
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config">
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:46 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:74:17:b4"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <target dev="tapd71dae92-b5"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:5d:4f:81"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <target dev="tap18ca6c9a-c1"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/console.log" append="off"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <video>
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </video>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:43:46 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:43:46 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:43:46 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:43:46 compute-0 nova_compute[244014]: </domain>
Feb 25 12:43:46 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.803 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Preparing to wait for external event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Preparing to wait for external event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.805 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.805 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.807 244018 DEBUG os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.812 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd71dae92-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.813 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd71dae92-b5, col_values=(('external_ids', {'iface-id': 'd71dae92-b542-404e-b4cc-ecad408ed655', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:17:b4', 'vm-uuid': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 NetworkManager[49836]: <info>  [1772023426.8158] manager: (tapd71dae92-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.822 244018 INFO os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5')
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.823 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.823 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.825 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.825 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ca6c9a-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18ca6c9a-c1, col_values=(('external_ids', {'iface-id': '18ca6c9a-c1c9-4a48-b124-25942ebef5df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:4f:81', 'vm-uuid': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
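The two ovsdbapp transactions above are os-vif's OVS plugin wiring the tap device into br-int and stamping the Interface row with the Neutron port metadata. A minimal sketch of reading that state back, assuming the ovs-vsctl CLI is installed and using the interface name from the log:

    # Read back what the AddPortCommand/DbSetCommand transaction wrote.
    import subprocess

    IFACE = "tap18ca6c9a-c1"

    def ovs_vsctl(*args):
        """Run ovs-vsctl and return its stripped stdout."""
        res = subprocess.run(["ovs-vsctl", *args],
                             check=True, capture_output=True, text=True)
        return res.stdout.strip()

    print(ovs_vsctl("port-to-br", IFACE))   # expected: br-int
    print(ovs_vsctl("get", "Interface", IFACE, "external_ids"))
    # expected keys: iface-id, iface-status, attached-mac, vm-uuid

The iface-id key in external_ids is what lets ovn-controller match this OVS interface to its Neutron port when it claims the logical port a moment later.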
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 NetworkManager[49836]: <info>  [1772023426.8315] manager: (tap18ca6c9a-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.838 244018 INFO os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1')
Feb 25 12:43:46 compute-0 podman[341738]: 2026-02-25 12:43:46.86683877 +0000 UTC m=+0.114122131 container create 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:43:46 compute-0 podman[341738]: 2026-02-25 12:43:46.778706043 +0000 UTC m=+0.025989434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.902 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:74:17:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:5d:4f:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Using config drive
Feb 25 12:43:46 compute-0 nova_compute[244014]: 2026-02-25 12:43:46.925 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:46 compute-0 systemd[1]: Started libpod-conmon-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope.
Feb 25 12:43:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:47 compute-0 podman[341738]: 2026-02-25 12:43:47.03516251 +0000 UTC m=+0.282445921 container init 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:43:47 compute-0 podman[341738]: 2026-02-25 12:43:47.041477159 +0000 UTC m=+0.288760530 container start 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:43:47 compute-0 podman[341738]: 2026-02-25 12:43:47.055154575 +0000 UTC m=+0.302437946 container attach 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:47 compute-0 ceph-mon[76335]: pgmap v1878: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 12:43:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1349605789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.418 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating config drive at /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.422 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgeu50kf5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.558 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgeu50kf5" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
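The config drive nova just built is an ordinary ISO9660/Joliet image; the full mkisofs invocation is visible in the two oslo_concurrency lines above. A sketch that produces an equivalent image, with /tmp/cd-src standing in as a hypothetical metadata staging directory (nova used /tmp/tmpgeu50kf5) and the nova version string dropped from the publisher field:

    # Rebuild a config-drive ISO with the same flags nova logged.
    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/tmp/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r",
         "-V", "config-2",   # volume label that cloud-init searches for
         "/tmp/cd-src"],     # hypothetical staging directory
        check=True,
    )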
Feb 25 12:43:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:43:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:43:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:43:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.586 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.589 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:47 compute-0 lvm[341876]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:43:47 compute-0 lvm[341876]: VG ceph_vg0 finished
Feb 25 12:43:47 compute-0 lvm[341880]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:43:47 compute-0 lvm[341880]: VG ceph_vg1 finished
Feb 25 12:43:47 compute-0 lvm[341896]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:43:47 compute-0 lvm[341896]: VG ceph_vg2 finished
Feb 25 12:43:47 compute-0 lvm[341897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:43:47 compute-0 lvm[341897]: VG ceph_vg0 finished
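The lvm pvscan messages report three loop-device-backed volume groups (ceph_vg0 through ceph_vg2) coming online; these back the colocated Ceph OSDs. A quick check of the PV-to-VG layout with the stock lvm2 CLI:

    # Confirm the PV -> VG pairs reported by the pvscan activation messages.
    import subprocess

    subprocess.run(
        ["pvs", "--noheadings", "-o", "pv_name,vg_name"],
        check=True,
    )
    # expected rows include: /dev/loop3 ceph_vg0, /dev/loop4 ceph_vg1,
    #                        /dev/loop5 ceph_vg2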
Feb 25 12:43:47 compute-0 confident_ptolemy[341778]: {}
Feb 25 12:43:47 compute-0 systemd[1]: libpod-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Deactivated successfully.
Feb 25 12:43:47 compute-0 systemd[1]: libpod-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Consumed 1.064s CPU time.
Feb 25 12:43:47 compute-0 podman[341738]: 2026-02-25 12:43:47.851343912 +0000 UTC m=+1.098627313 container died 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7-merged.mount: Deactivated successfully.
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.972 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:47 compute-0 nova_compute[244014]: 2026-02-25 12:43:47.973 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deleting local config drive /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config because it was imported into RBD.
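After the import, the config drive exists only as an RBD image in the vms pool. A sketch verifying it through the python-rbd binding; the image name is taken from the log and the credentials match the rbd import command above:

    # Check the imported config-drive image in the 'vms' pool.
    import rados
    import rbd

    NAME = "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, NAME, read_only=True) as image:
                print(NAME, "size:", image.size(), "bytes")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()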
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0309] manager: (tapd71dae92-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Feb 25 12:43:48 compute-0 systemd-udevd[341875]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:48 compute-0 kernel: tapd71dae92-b5: entered promiscuous mode
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01077|binding|INFO|Claiming lport d71dae92-b542-404e-b4cc-ecad408ed655 for this chassis.
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01078|binding|INFO|d71dae92-b542-404e-b4cc-ecad408ed655: Claiming fa:16:3e:74:17:b4 10.100.0.4
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 kernel: tap18ca6c9a-c1: entered promiscuous mode
Feb 25 12:43:48 compute-0 podman[341738]: 2026-02-25 12:43:48.045405818 +0000 UTC m=+1.292689179 container remove 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
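The create/init/start/attach/died/remove sequence, all within roughly 1.2 seconds, is the footprint of a short-lived "podman run --rm" container; cephadm launches the ceph image this way for one-off probes, and this one printed {} before exiting. A sketch for replaying such a lifecycle from podman's event log; the --stream/--since/--filter flags are assumptions about the installed podman version:

    # Replay the lifecycle events of the short-lived cephadm container.
    import subprocess

    subprocess.run(
        ["podman", "events", "--stream=false", "--since", "10m",
         "--filter", "container=confident_ptolemy"],
        check=True,
    )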
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0465] device (tapd71dae92-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0498] manager: (tap18ca6c9a-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/451)
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0509] device (tapd71dae92-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01079|binding|INFO|Claiming lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df for this chassis.
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01080|binding|INFO|18ca6c9a-c1c9-4a48-b124-25942ebef5df: Claiming fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.051 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:17:b4 10.100.0.4'], port_security=['fa:16:3e:74:17:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d71dae92-b542-404e-b4cc-ecad408ed655) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:48 compute-0 systemd-udevd[341895]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.053 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d71dae92-b542-404e-b4cc-ecad408ed655 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 bound to our chassis
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.055 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 systemd[1]: libpod-conmon-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Deactivated successfully.
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01081|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 ovn-installed in OVS
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01082|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 up in Southbound
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01083|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df ovn-installed in OVS
Feb 25 12:43:48 compute-0 ovn_controller[147040]: 2026-02-25T12:43:48Z|01084|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df up in Southbound
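ovn-controller has now claimed both logical ports for this chassis and marked them up in the southbound database. The Port_Binding rows it updated can be read back with ovn-sbctl, using the logical port UUIDs from the claim messages above:

    # Read back the Port_Binding rows ovn-controller just updated.
    import subprocess

    for lport in ("d71dae92-b542-404e-b4cc-ecad408ed655",
                  "18ca6c9a-c1c9-4a48-b124-25942ebef5df"):
        subprocess.run(
            ["ovn-sbctl", "--columns=logical_port,chassis,up",
             "find", "Port_Binding", f"logical_port={lport}"],
            check=True,
        )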
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.063 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], port_security=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:4f81/64', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=18ca6c9a-c1c9-4a48-b124-25942ebef5df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0665] device (tap18ca6c9a-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:48 compute-0 NetworkManager[49836]: <info>  [1772023428.0670] device (tap18ca6c9a-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f5034a-ca77-4869-b7c1-90f05fbf7c49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 systemd-machined[210048]: New machine qemu-139-instance-0000006f.
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.096 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b8679249-e7e0-40a4-9eae-d71dcb5261c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.099 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bdd450-8c74-4619-96d5-e4ca76d278a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006f.
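systemd-machined registers each libvirt/QEMU guest as a machine named qemu-<id>-<domain>, so the libvirt domain here is instance-0000006f. A sketch cross-checking the registration and the domain state, assuming the machinectl and virsh CLIs are available on the host:

    # Cross-check the new machine with systemd-machined and libvirt.
    import subprocess

    subprocess.run(["machinectl", "status", "qemu-139-instance-0000006f"],
                   check=True)
    subprocess.run(["virsh", "domstate", "instance-0000006f"], check=True)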
Feb 25 12:43:48 compute-0 sudo[341607]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.120 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[717316fc-125c-417f-8920-56c34cf78d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.137 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[986581ad-f3ee-4689-bbde-16d71da412db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341940, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aa6074-5019-48f0-a09a-065f12bab8ef]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535900, 'tstamp': 535900}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341943, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535902, 'tstamp': 535902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341943, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.159 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.164 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.165 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.166 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 18ca6c9a-c1c9-4a48-b124-25942ebef5df in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.168 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e45d3ef-fa32-4267-82b1-7cfda5baf619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.201 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce2d770-9ffc-42e0-890c-23af8799fd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.205 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[595b22ea-1893-4980-a8e0-27f3110dc469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.228 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[009cd81d-e04c-456e-bbc5-2073700d868e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 sudo[341951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:43:48 compute-0 sudo[341951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b07b9ba8-2342-48d1-8789-22fc449d6bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341977, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:48 compute-0 sudo[341951]: pam_unix(sudo:session): session closed for user root
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.252 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32114282-3380-4bc8-99ae-36078314808e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4445989-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535984, 'tstamp': 535984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341979, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
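The RTM_NEWADDR privsep replies above show each ovnmeta-<network-uuid> namespace ending up with the metadata address 169.254.169.254 on its tap-side veth, plus a subnet-local address where the network has IPv4 (10.100.0.2 on tape0ff7905-a1). A sketch listing those addresses from the host, using the namespace names from the log:

    # List the metadata addresses inside both ovnmeta namespaces.
    import subprocess

    for net in ("e0ff7905-af45-428a-b6a0-d6e1209fd009",
                "e4445989-91e8-4869-98cb-32b4b81bb3da"):
        subprocess.run(
            ["ip", "netns", "exec", f"ovnmeta-{net}",
             "ip", "-o", "-4", "addr", "show"],
            check=True,
        )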
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.259 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.259 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Feb 25 12:43:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:43:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:43:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.544 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port 18ca6c9a-c1c9-4a48-b124-25942ebef5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.544 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
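The instance_info_cache payload above is plain JSON: a list of VIFs, each with a MAC address and a network whose subnets carry the fixed IPs. A sketch summarizing such an entry; the constant below is an abbreviated version of the first VIF in the logged structure:

    # Summarize a network_info cache entry like the one logged above.
    import json

    NETWORK_INFO = """[{"id": "d71dae92-b542-404e-b4cc-ecad408ed655",
      "address": "fa:16:3e:74:17:b4",
      "network": {"label": "tempest-network-smoke--1128870534",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.4"}]}]}}]"""

    for vif in json.loads(NETWORK_INFO):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)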
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.557 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023428.557236, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.557 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Started (Lifecycle Event)
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.568 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.569 244018 INFO nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Rebuilding instance
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.574 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.579 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023428.5588315, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.579 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Paused (Lifecycle Event)
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.608 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.612 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:48 compute-0 nova_compute[244014]: 2026-02-25 12:43:48.642 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] During sync_power_state the instance has a pending task (spawning). Skip.
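VM power_state 3 is PAUSED in nova's power_state enum (libvirt keeps the domain paused until spawn finishes), while the DB still says 0 (NOSTATE). Because the instance has a pending task_state of 'spawning', the sync above is deliberately skipped rather than treated as a stray pause. A simplified restatement of that decision; the numeric values match nova.compute.power_state, but the skip rule is a sketch, not a copy of _sync_instance_power_state:

    # Why the sync above logs "pending task (spawning). Skip."
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def should_sync(task_state, db_power_state, vm_power_state):
        if task_state is not None:      # e.g. 'spawning' -> skip the sync
            return False
        return db_power_state != vm_power_state

    print(should_sync("spawning", NOSTATE, PAUSED))   # -> False, as logged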
Feb 25 12:43:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG nova.compute.manager [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.144 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.144 244018 DEBUG nova.compute.manager [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Processing event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.170 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'trusted_certs' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.193 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.255 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_requests' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.270 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.290 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.305 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.316 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:43:49 compute-0 nova_compute[244014]: 2026-02-25 12:43:49.320 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:43:50 compute-0 nova_compute[244014]: 2026-02-25 12:43:50.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:50 compute-0 ceph-mon[76335]: pgmap v1879: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Feb 25 12:43:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 3.0 MiB/s wr, 67 op/s
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.318 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.318 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.319 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.319 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.320 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No event matching network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 in dict_keys([('network-vif-plugged', '18ca6c9a-c1c9-4a48-b124-25942ebef5df')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.320 244018 WARNING nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with vm_state building and task_state spawning.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.321 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.321 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.322 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.323 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.323 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Processing event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.324 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.324 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.325 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.326 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.327 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.327 244018 WARNING nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with vm_state building and task_state spawning.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.328 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.332 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023431.3317747, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.333 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Resumed (Lifecycle Event)
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.336 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.341 244018 INFO nova.virt.libvirt.driver [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance spawned successfully.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.342 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:43:51 compute-0 ceph-mon[76335]: pgmap v1880: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 3.0 MiB/s wr, 67 op/s
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.365 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.378 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.378 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.379 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.380 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.380 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.381 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.448 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.477 244018 INFO nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 12.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.477 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.560 244018 INFO nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 13.80 seconds to build instance.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.587 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:51 compute-0 kernel: tap43ea1958-7f (unregistering): left promiscuous mode
Feb 25 12:43:51 compute-0 NetworkManager[49836]: <info>  [1772023431.7287] device (tap43ea1958-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:43:51 compute-0 ovn_controller[147040]: 2026-02-25T12:43:51Z|01085|binding|INFO|Releasing lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 from this chassis (sb_readonly=0)
Feb 25 12:43:51 compute-0 ovn_controller[147040]: 2026-02-25T12:43:51Z|01086|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 down in Southbound
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 ovn_controller[147040]: 2026-02-25T12:43:51Z|01087|binding|INFO|Removing iface tap43ea1958-7f ovn-installed in OVS
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.747 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 unbound from our chassis
Feb 25 12:43:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.753 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:43:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1192a1b6-d6b1-4d38-afbb-cd736f371f7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.755 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace which is not needed anymore
Feb 25 12:43:51 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 25 12:43:51 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 12.203s CPU time.
Feb 25 12:43:51 compute-0 systemd-machined[210048]: Machine qemu-138-instance-0000006e terminated.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : haproxy version is 2.8.14-c23fe91
Feb 25 12:43:51 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : path to executable is /usr/sbin/haproxy
Feb 25 12:43:51 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [WARNING]  (340688) : Exiting Master process...
Feb 25 12:43:51 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [ALERT]    (340688) : Current worker (340690) exited with code 143 (Terminated)
Feb 25 12:43:51 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [WARNING]  (340688) : All workers exited. Exiting... (0)
Feb 25 12:43:51 compute-0 systemd[1]: libpod-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope: Deactivated successfully.
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 nova_compute[244014]: 2026-02-25 12:43:51.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:51 compute-0 podman[342046]: 2026-02-25 12:43:51.964783062 +0000 UTC m=+0.095180847 container died 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd-userdata-shm.mount: Deactivated successfully.
Feb 25 12:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd30b2a51b99c22ab6484bb072cec6f6373098acd12eff7ecbd61ac2cb75c84-merged.mount: Deactivated successfully.
Feb 25 12:43:52 compute-0 podman[342046]: 2026-02-25 12:43:52.031105444 +0000 UTC m=+0.161503209 container cleanup 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:43:52 compute-0 systemd[1]: libpod-conmon-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope: Deactivated successfully.
Feb 25 12:43:52 compute-0 podman[342089]: 2026-02-25 12:43:52.323780893 +0000 UTC m=+0.267745647 container remove 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b547d60-45c9-4d79-ba58-a2f9b35dfa43]: (4, ('Wed Feb 25 12:43:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd)\n314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd\nWed Feb 25 12:43:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd)\n314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98b2e7e2-724e-4256-aa12-b042da3dcf52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.332 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:52 compute-0 kernel: tapf8edf066-30: left promiscuous mode
Feb 25 12:43:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 3.0 MiB/s wr, 82 op/s
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.343 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance shutdown successfully after 3 seconds.
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.350 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b78508d-2ac7-411f-8f91-07c624c96fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.357 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.358 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:47Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.358 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.359 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.360 244018 DEBUG os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.364 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ea1958-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:52 compute-0 nova_compute[244014]: 2026-02-25 12:43:52.370 244018 INFO os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.368 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15759f5b-a34e-41e6-a3ab-6c6d78153f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a490b228-bd86-4095-8360-1fb833e35f09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.385 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[902186d3-2cc3-4478-8ac2-97fc32ee02e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537563, 'reachable_time': 32737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342108, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.390 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:43:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.390 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[172ff96e-c394-4e41-a40f-651267381172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:52 compute-0 systemd[1]: run-netns-ovnmeta\x2df8edf066\x2d3c6a\x2d45fb\x2dbc20\x2d36dc74f8aee6.mount: Deactivated successfully.
Feb 25 12:43:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.467 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.467 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 WARNING nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuilding.
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:53 compute-0 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 WARNING nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuilding.
Feb 25 12:43:53 compute-0 ceph-mon[76335]: pgmap v1881: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 3.0 MiB/s wr, 82 op/s
Feb 25 12:43:53 compute-0 podman[342128]: 2026-02-25 12:43:53.718877562 +0000 UTC m=+0.061027213 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.077 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting instance files /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.078 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deletion of /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del complete
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.220 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.221 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating image(s)
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.247 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.275 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.297 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.301 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 219 KiB/s wr, 50 op/s
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.376 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
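[annotation] The pair of entries above shows how Nova probes the cached base image: qemu-img info runs under oslo.concurrency's prlimit wrapper, with --as capping address space at 1 GiB and --cpu capping CPU time at 30 s, so a corrupt or hostile image cannot exhaust the host. A minimal sketch of the same call through the library API (the ProcessLimits values mirror the logged flags; this is not Nova's exact code):

    import json
    from oslo_concurrency import processutils

    # Same limits as the logged '-m oslo_concurrency.prlimit --as=1073741824 --cpu=30'
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},  # matches the 'env LC_ALL=C LANG=C' prefix
        prlimit=limits)
    print(json.loads(out)['virtual-size'])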
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
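[annotation] The acquire/release pair above is oslo.concurrency's standard lock logging: the base-image hash doubles as the lock name, so only one worker fetches or converts a given base image at a time. A sketch of the pattern (the function body is hypothetical; the decorator and lock name are what the log shows — here the image already exists, hence the lock is held for ~0.000s):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('d54266c9ce37b98d8a911b5ac30e52735f3ff538')
    def fetch_func_sync():
        # hypothetical body: download/convert the base image exactly once
        pass

    fetch_func_sync()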
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.401 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.405 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:54 compute-0 nova_compute[244014]: 2026-02-25 12:43:54.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.022 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:55 compute-0 nova_compute[244014]: 2026-02-25 12:43:55.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:55 compute-0 podman[342242]: 2026-02-25 12:43:55.476323996 +0000 UTC m=+0.074514484 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:43:55 compute-0 nova_compute[244014]: 2026-02-25 12:43:55.697 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:55 compute-0 ceph-mon[76335]: pgmap v1882: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 219 KiB/s wr, 50 op/s
Feb 25 12:43:55 compute-0 nova_compute[244014]: 2026-02-25 12:43:55.864 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
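[annotation] Between the import that finished at 12:43:55.697 and this resize, the ~21 MB base image becomes a 1 GiB RBD volume to match the flavor's root_gb=1. A sketch of the resize via the standard python-rbd bindings (pool, client id, image name, and size come from the log; the binding calls are plain librbd, not Nova's rbd_utils):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # the pool used by 'rbd import --pool vms'
        try:
            with rbd.Image(ioctx, 'b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching the logged target size
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()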
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.072 244018 DEBUG nova.compute.manager [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.072 244018 DEBUG nova.compute.manager [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.221 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.221 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ensure instance console log exists: /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.225 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start _get_guest_xml network_info=[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.231 244018 WARNING nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.238 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.239 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.243 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.244 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.244 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.245 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.246 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.246 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
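[annotation] The topology negotiation above reduces to a factorization problem: with no flavor or image constraints (limits 0:0:0, caps of 65536 each), every sockets x cores x threads product equal to the vCPU count is a candidate, and for 1 vCPU that leaves exactly one. A toy enumeration, not Nova's implementation, that reproduces the logged result:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) tuples whose product is exactly vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single topology in the log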
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.249 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'vcpu_model' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.267 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 42 KiB/s wr, 30 op/s
Feb 25 12:43:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296203534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.830 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
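[annotation] 'ceph mon dump --format=json' is how Nova discovers the monitor endpoints that end up in the <host> elements of the disk XML further down. A sketch of the lookup; the 'mons' and 'name' keys are standard mon dump output, while the fallback between 'addr' and 'public_addr' is an assumption about older vs. newer Ceph releases:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    for mon in json.loads(out)['mons']:
        # e.g. '192.168.122.100:6789/0' -> host 192.168.122.100, port 6789
        print(mon['name'], mon.get('addr') or mon.get('public_addr'))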
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.864 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:56 compute-0 nova_compute[244014]: 2026-02-25 12:43:56.871 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1296203534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:43:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001768558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.411 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.412 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.412 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.413 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.416 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <uuid>b5941b54-9cd2-465c-89c0-3cf87ebed83e</uuid>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <name>instance-0000006e</name>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1420097374</nova:name>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:43:56</nova:creationTime>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <nova:port uuid="43ea1958-7fd9-47b6-be81-3eeb1b3801a0">
Feb 25 12:43:57 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <system>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="serial">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="uuid">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </system>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <os>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </os>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <features>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </features>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk">
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config">
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </source>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:43:57 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:3e:93:a9"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <target dev="tap43ea1958-7f"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log" append="off"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <video>
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </video>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:43:57 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:43:57 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:43:57 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:43:57 compute-0 nova_compute[244014]: </domain>
Feb 25 12:43:57 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
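[annotation] With the domain XML assembled, a quick sanity check is to pull the disk sources back out of it. This sketch assumes the XML above has been saved (syslog prefixes stripped) to a hypothetical domain.xml, and lists each device's RBD volume and monitor endpoint:

    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()  # hypothetical dump of the XML above
    for disk in root.findall('./devices/disk'):
        source = disk.find('source')
        host = source.find('host')
        # expected: disk  vms/b5941b54-..._disk         192.168.122.100 6789
        #           cdrom vms/b5941b54-..._disk.config  192.168.122.100 6789
        print(disk.get('device'), source.get('name'), host.get('name'), host.get('port'))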
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.417 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.417 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.419 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.419 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.423 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ea1958-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.423 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ea1958-7f, col_values=(('external_ids', {'iface-id': '43ea1958-7fd9-47b6-be81-3eeb1b3801a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:93:a9', 'vm-uuid': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:57 compute-0 NetworkManager[49836]: <info>  [1772023437.4258] manager: (tap43ea1958-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.426 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.428 244018 INFO os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')
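[annotation] The three ovsdbapp commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif's entire plug sequence: ensure br-int exists, attach the tap device, and stamp the Interface row with the external_ids that OVN matches against the logical switch port. A sketch of the same transaction through ovsdbapp's public API (the socket path is an assumption; commands and column values are taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap43ea1958-7f', may_exist=True))
        txn.add(api.db_set('Interface', 'tap43ea1958-7f',
                           ('external_ids', {
                               'iface-id': '43ea1958-7fd9-47b6-be81-3eeb1b3801a0',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:3e:93:a9',
                               'vm-uuid': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'})))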
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.544 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.545 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.545 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:93:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.546 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Using config drive
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.574 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.599 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'ec2_ids' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:57 compute-0 nova_compute[244014]: 2026-02-25 12:43:57.632 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'keypairs' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:43:57 compute-0 ceph-mon[76335]: pgmap v1883: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 42 KiB/s wr, 30 op/s
Feb 25 12:43:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3001768558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.047 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating config drive at /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.054 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu2e7atri execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.193 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu2e7atri" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.223 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.229 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.305 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.306 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.332 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:43:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:43:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.693 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.694 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting local config drive /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config because it was imported into RBD.
Feb 25 12:43:58 compute-0 kernel: tap43ea1958-7f: entered promiscuous mode
Feb 25 12:43:58 compute-0 ovn_controller[147040]: 2026-02-25T12:43:58Z|01088|binding|INFO|Claiming lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for this chassis.
Feb 25 12:43:58 compute-0 ovn_controller[147040]: 2026-02-25T12:43:58Z|01089|binding|INFO|43ea1958-7fd9-47b6-be81-3eeb1b3801a0: Claiming fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:58 compute-0 NetworkManager[49836]: <info>  [1772023438.7726] manager: (tap43ea1958-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/453)
Feb 25 12:43:58 compute-0 ovn_controller[147040]: 2026-02-25T12:43:58Z|01090|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 ovn-installed in OVS
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.780 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:43:58 compute-0 ovn_controller[147040]: 2026-02-25T12:43:58Z|01091|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 up in Southbound
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.784 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 bound to our chassis
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.788 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:58 compute-0 nova_compute[244014]: 2026-02-25 12:43:58.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.801 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69133733-20b6-485e-b6d2-904806d25c61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.802 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8edf066-31 in ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.805 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8edf066-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71b4af7e-87fc-4a46-990c-747c9c9f4c66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09734e11-1698-43e8-83d4-56845cde8124]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 systemd-machined[210048]: New machine qemu-140-instance-0000006e.
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f94eb4ea-ba24-4fc3-a571-da9d036130a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Feb 25 12:43:58 compute-0 systemd-udevd[342479]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c37f400-7934-41bb-8a83-6e6553ae0eb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 NetworkManager[49836]: <info>  [1772023438.8448] device (tap43ea1958-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:43:58 compute-0 NetworkManager[49836]: <info>  [1772023438.8461] device (tap43ea1958-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.863 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[124b9ad9-0c11-4ae9-aa6b-603b177d3261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 systemd-udevd[342482]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb48662-18d0-4069-b0a2-99186f62de96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 NetworkManager[49836]: <info>  [1772023438.8697] manager: (tapf8edf066-30): new Veth device (/org/freedesktop/NetworkManager/Devices/454)
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2b1ef3-2eff-47bf-bde5-13738f60716c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.904 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[611caf7c-c3cb-4383-9a3b-8fc23b2598ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 NetworkManager[49836]: <info>  [1772023438.9232] device (tapf8edf066-30): carrier: link connected
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.928 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[414c90d3-b5bb-45b5-a3af-f040df0fafee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72152f50-cbcc-4600-954c-9d65d015664b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540833, 'reachable_time': 42496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342509, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.960 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36881004-7f86-4d3f-be20-5aaf42380992]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:6d18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540833, 'tstamp': 540833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342510, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.976 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[212d94d9-6720-4704-9cae-03167a2d2b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540833, 'reachable_time': 42496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342511, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.019 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba960485-0cb5-46e0-a45d-1f7402ecc5a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.071 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da8326e2-ba76-42b5-9f48-35e8f13b8b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.073 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.073 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.074 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8edf066-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:59 compute-0 kernel: tapf8edf066-30: entered promiscuous mode
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8edf066-30, col_values=(('external_ids', {'iface-id': '537184a5-5d27-4b28-acba-8f254f6dc5ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:59 compute-0 ovn_controller[147040]: 2026-02-25T12:43:59Z|01092|binding|INFO|Releasing lport 537184a5-5d27-4b28-acba-8f254f6dc5ca from this chassis (sb_readonly=0)
Feb 25 12:43:59 compute-0 NetworkManager[49836]: <info>  [1772023439.0837] manager: (tapf8edf066-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.090 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.091 244018 DEBUG nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.092 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.092 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 DEBUG nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 WARNING nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuild_spawning.
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d48943dc-ed90-4bc6-9fdb-5b20007c170d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.095 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:43:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.096 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'env', 'PROCESS_TAG=haproxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for b5941b54-9cd2-465c-89c0-3cf87ebed83e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023439.4866447, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Resumed (Lifecycle Event)
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.493 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.493 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.496 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance spawned successfully.
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.497 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.509 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.515 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.518 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.519 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.519 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.520 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.520 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.521 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:43:59 compute-0 podman[342577]: 2026-02-25 12:43:59.426115957 +0000 UTC m=+0.019522631 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.529 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.530 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023439.4872682, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Started (Lifecycle Event)
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.556 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.573 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.579 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:43:59 compute-0 podman[342577]: 2026-02-25 12:43:59.634772586 +0000 UTC m=+0.228179270 container create 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.660 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.661 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.725 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:43:59 compute-0 nova_compute[244014]: 2026-02-25 12:43:59.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:43:59 compute-0 systemd[1]: Started libpod-conmon-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope.
Feb 25 12:43:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1caf97c9deaa2249bb8b02c7dbf21ee80762224b30ee568b1931de368be96fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:43:59 compute-0 podman[342577]: 2026-02-25 12:43:59.969497301 +0000 UTC m=+0.562903995 container init 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:43:59 compute-0 podman[342577]: 2026-02-25 12:43:59.974724079 +0000 UTC m=+0.568130733 container start 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:43:59 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : New worker (342603) forked
Feb 25 12:43:59 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : Loading success.
Feb 25 12:44:00 compute-0 nova_compute[244014]: 2026-02-25 12:44:00.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:00 compute-0 ceph-mon[76335]: pgmap v1884: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:44:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.354 244018 DEBUG nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.354 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:01 compute-0 nova_compute[244014]: 2026-02-25 12:44:01.356 244018 WARNING nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state None.
Feb 25 12:44:01 compute-0 ceph-mon[76335]: pgmap v1885: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 327 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Feb 25 12:44:02 compute-0 nova_compute[244014]: 2026-02-25 12:44:02.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:03 compute-0 nova_compute[244014]: 2026-02-25 12:44:03.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:03 compute-0 ceph-mon[76335]: pgmap v1886: 305 pgs: 305 active+clean; 327 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Feb 25 12:44:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 204 op/s
Feb 25 12:44:04 compute-0 ovn_controller[147040]: 2026-02-25T12:44:04Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:17:b4 10.100.0.4
Feb 25 12:44:04 compute-0 ovn_controller[147040]: 2026-02-25T12:44:04Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:17:b4 10.100.0.4
Feb 25 12:44:05 compute-0 nova_compute[244014]: 2026-02-25 12:44:05.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:05 compute-0 ceph-mon[76335]: pgmap v1887: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 204 op/s
Feb 25 12:44:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Feb 25 12:44:07 compute-0 nova_compute[244014]: 2026-02-25 12:44:07.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:07 compute-0 ceph-mon[76335]: pgmap v1888: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Feb 25 12:44:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Feb 25 12:44:09 compute-0 ceph-mon[76335]: pgmap v1889: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Feb 25 12:44:10 compute-0 nova_compute[244014]: 2026-02-25 12:44:10.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 12:44:12 compute-0 ceph-mon[76335]: pgmap v1890: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 12:44:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 379 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 161 op/s
Feb 25 12:44:12 compute-0 nova_compute[244014]: 2026-02-25 12:44:12.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:12 compute-0 ovn_controller[147040]: 2026-02-25T12:44:12Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:44:12 compute-0 ovn_controller[147040]: 2026-02-25T12:44:12Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 12:44:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:13 compute-0 ceph-mon[76335]: pgmap v1891: 305 pgs: 305 active+clean; 379 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 161 op/s
Feb 25 12:44:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 855 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 25 12:44:15 compute-0 nova_compute[244014]: 2026-02-25 12:44:15.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:15 compute-0 ceph-mon[76335]: pgmap v1892: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 855 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.092 244018 DEBUG nova.compute.manager [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG nova.compute.manager [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.094 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.184 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.185 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.185 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.186 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.186 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.187 244018 INFO nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Terminating instance
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.188 244018 DEBUG nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:44:16 compute-0 kernel: tapd71dae92-b5 (unregistering): left promiscuous mode
Feb 25 12:44:16 compute-0 NetworkManager[49836]: <info>  [1772023456.2994] device (tapd71dae92-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01093|binding|INFO|Releasing lport d71dae92-b542-404e-b4cc-ecad408ed655 from this chassis (sb_readonly=0)
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01094|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 down in Southbound
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01095|binding|INFO|Removing iface tapd71dae92-b5 ovn-installed in OVS
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:17:b4 10.100.0.4'], port_security=['fa:16:3e:74:17:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d71dae92-b542-404e-b4cc-ecad408ed655) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.317 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d71dae92-b542-404e-b4cc-ecad408ed655 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 unbound from our chassis
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 kernel: tap18ca6c9a-c1 (unregistering): left promiscuous mode
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.321 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 12:44:16 compute-0 NetworkManager[49836]: <info>  [1772023456.3266] device (tap18ca6c9a-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.335 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fbb67c-361e-421a-962d-2e5140a6e121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01096|binding|INFO|Releasing lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df from this chassis (sb_readonly=0)
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01097|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df down in Southbound
Feb 25 12:44:16 compute-0 ovn_controller[147040]: 2026-02-25T12:44:16Z|01098|binding|INFO|Removing iface tap18ca6c9a-c1 ovn-installed in OVS
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 634 KiB/s rd, 3.5 MiB/s wr, 111 op/s
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.362 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], port_security=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:4f81/64', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=18ca6c9a-c1c9-4a48-b124-25942ebef5df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
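
Annotation: the "Matched UPDATE: PortBindingUpdatedEvent(...)" dumps above come from ovsdbapp's row-event machinery: the metadata agent registers an event against the Port_Binding table and is called back with the new row plus the old values of the changed columns (up and chassis here). A hedged sketch of that shape, reusing the constructor arguments shown in the repr above; treat the details as approximate rather than the agent's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events/table/conditions mirror the repr logged above.
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # row is the updated Port_Binding; old holds the prior column values.
            print('lport %s up=%s' % (row.logical_port, row.up))
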
Feb 25 12:44:16 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Feb 25 12:44:16 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Consumed 12.506s CPU time.
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.371 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9892c5-2367-4564-9256-b6f3bc9e1cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 systemd-machined[210048]: Machine qemu-139-instance-0000006f terminated.
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.376 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5dcdd8-c373-4d3f-b560-e0c550e448b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.404 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[702fb74e-9633-413d-81e1-86ff9928881c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 NetworkManager[49836]: <info>  [1772023456.4179] manager: (tap18ca6c9a-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0776678-4aed-4291-9eb9-8c1883d42281]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342638, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.432 244018 INFO nova.virt.libvirt.driver [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance destroyed successfully.
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.433 244018 DEBUG nova.objects.instance [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.439 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[58c04bfa-83e9-45bf-8057-74b319fe9623]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535900, 'tstamp': 535900}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342654, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535902, 'tstamp': 535902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342654, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.444 244018 DEBUG nova.virt.libvirt.vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:51Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.447 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd71dae92-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.451 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 18ca6c9a-c1c9-4a48-b124-25942ebef5df in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.452 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.453 244018 INFO os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5')
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.virt.libvirt.vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:51Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.455 244018 DEBUG os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.456 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ca6c9a-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.461 244018 INFO os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1')
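Annotation: each unplug above is an OVSDB transaction (DelPortCommand with if_exists=True) that detaches the tap interface from br-int. Outside of Nova the same effect is a single ovs-vsctl call; this shell-out sketch is illustrative and is not Nova's actual code path:

    import subprocess

    def del_port(bridge, port):
        # --if-exists mirrors DelPortCommand(if_exists=True) seen in the log above.
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", bridge, port], check=True)

    # e.g. del_port("br-int", "tap18ca6c9a-c1")
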
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec07bb4-75d7-4e34-802e-a5f03befec32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.485 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[37e637bf-179a-48b6-8a44-3cca001678d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ac284767-2b64-41c6-a2fd-260bef9a46eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.506 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[775548b1-5963-41a9-b119-49be79a99349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94d4517c-2ad4-407e-a204-a9181203ee00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342681, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.534 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f7f6ca-3cae-4b01-9eeb-91bee4447e0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4445989-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535984, 'tstamp': 535984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342682, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.538 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.711 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.826 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:16 compute-0 nova_compute[244014]: 2026-02-25 12:44:16.827 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.027 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.132 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.132 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.142 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.142 244018 INFO nova.compute.claims [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.330 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:17 compute-0 ceph-mon[76335]: pgmap v1893: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 634 KiB/s rd, 3.5 MiB/s wr, 111 op/s
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.500 244018 INFO nova.virt.libvirt.driver [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deleting instance files /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_del
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.501 244018 INFO nova.virt.libvirt.driver [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deletion of /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_del complete
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.551 244018 INFO nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 1.36 seconds to destroy the instance on the hypervisor.
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.552 244018 DEBUG oslo.service.loopingcall [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.553 244018 DEBUG nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.553 244018 DEBUG nova.network.neutron [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:44:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911438101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.873 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.879 244018 DEBUG nova.compute.provider_tree [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.961 244018 DEBUG nova.scheduler.client.report [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.991 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:17 compute-0 nova_compute[244014]: 2026-02-25 12:44:17.992 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.047 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.048 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.067 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.085 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.140 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.141 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.187 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.196 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.198 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.198 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating image(s)
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.224 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.251 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.279 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.284 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.322 244018 DEBUG nova.policy [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '44c0b78107ea4f7381e82a02c5954e7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c0adb05683141e7a0b866f450e410e0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:44:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.393 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.413 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.417 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 874359d8-3251-4416-82dc-f6776853e384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2911438101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.838 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.838 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 WARNING nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with vm_state active and task_state deleting.
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.840 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.840 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 WARNING nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with vm_state active and task_state deleting.
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-deleted-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.843 244018 INFO nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Neutron deleted interface 18ca6c9a-c1c9-4a48-b124-25942ebef5df; detaching it from the instance and deleting it from the info cache
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.843 244018 DEBUG nova.network.neutron [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:18 compute-0 nova_compute[244014]: 2026-02-25 12:44:18.873 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Detach interface failed, port_id=18ca6c9a-c1c9-4a48-b124-25942ebef5df, reason: Instance c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.225 244018 DEBUG nova.network.neutron [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.245 244018 INFO nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 1.69 seconds to deallocate network for instance.
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.299 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.300 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.320 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 874359d8-3251-4416-82dc-f6776853e384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.368 244018 INFO nova.compute.manager [None req-86dbe725-5961-437a-b145-78245956a81f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Get console output
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.410 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] resizing rbd image 874359d8-3251-4416-82dc-f6776853e384_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.452 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.485 244018 DEBUG oslo_concurrency.processutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:19 compute-0 ceph-mon[76335]: pgmap v1894: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.568 244018 DEBUG nova.objects.instance [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.595 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.595 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ensure instance console log exists: /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:19 compute-0 nova_compute[244014]: 2026-02-25 12:44:19.682 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Successfully created port: 97fb9f99-cb59-4581-8866-375ea3e167d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:44:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026762265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.086 244018 DEBUG oslo_concurrency.processutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.094 244018 DEBUG nova.compute.provider_tree [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.113 244018 DEBUG nova.scheduler.client.report [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.135 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.167 244018 INFO nova.scheduler.client.report [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.234 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Feb 25 12:44:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3026762265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.578 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.578 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.579 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.579 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.580 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.581 244018 INFO nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Terminating instance
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.583 244018 DEBUG nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:44:20 compute-0 kernel: tap43ea1958-7f (unregistering): left promiscuous mode
Feb 25 12:44:20 compute-0 NetworkManager[49836]: <info>  [1772023460.6281] device (tap43ea1958-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 ovn_controller[147040]: 2026-02-25T12:44:20Z|01099|binding|INFO|Releasing lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 from this chassis (sb_readonly=0)
Feb 25 12:44:20 compute-0 ovn_controller[147040]: 2026-02-25T12:44:20Z|01100|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 down in Southbound
Feb 25 12:44:20 compute-0 ovn_controller[147040]: 2026-02-25T12:44:20Z|01101|binding|INFO|Removing iface tap43ea1958-7f ovn-installed in OVS
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.645 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.646 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 unbound from our chassis
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.647 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5d0d23-9790-49c5-8931-67fff67c7d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.650 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace which is not needed anymore
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 25 12:44:20 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 12.529s CPU time.
Feb 25 12:44:20 compute-0 systemd-machined[210048]: Machine qemu-140-instance-0000006e terminated.
Feb 25 12:44:20 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : haproxy version is 2.8.14-c23fe91
Feb 25 12:44:20 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : path to executable is /usr/sbin/haproxy
Feb 25 12:44:20 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [WARNING]  (342601) : Exiting Master process...
Feb 25 12:44:20 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [ALERT]    (342601) : Current worker (342603) exited with code 143 (Terminated)
Feb 25 12:44:20 compute-0 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [WARNING]  (342601) : All workers exited. Exiting... (0)
Feb 25 12:44:20 compute-0 systemd[1]: libpod-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope: Deactivated successfully.
Feb 25 12:44:20 compute-0 podman[342918]: 2026-02-25 12:44:20.784178334 +0000 UTC m=+0.047268785 container died 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da-userdata-shm.mount: Deactivated successfully.
Feb 25 12:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1caf97c9deaa2249bb8b02c7dbf21ee80762224b30ee568b1931de368be96fa-merged.mount: Deactivated successfully.
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.818 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.819 244018 DEBUG nova.objects.instance [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:20 compute-0 podman[342918]: 2026-02-25 12:44:20.827893338 +0000 UTC m=+0.090983809 container cleanup 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.831 244018 DEBUG nova.virt.libvirt.vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:59Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.831 244018 DEBUG nova.network.os_vif_util [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.832 244018 DEBUG nova.network.os_vif_util [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.832 244018 DEBUG os_vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:44:20 compute-0 systemd[1]: libpod-conmon-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope: Deactivated successfully.
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.837 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ea1958-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.841 244018 INFO os_vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')
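[Annotation] Under the hood, "Successfully unplugged vif" follows the OVSDB transaction visible above as DelPortCommand against br-int. Roughly the same transaction can be issued with ovsdbapp directly; the socket path below is an assumption (Nova wires up its own IDL connection):

```python
# Hedged sketch of the DelPortCommand transaction logged above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local ovsdb-server endpoint; adjust for the deployment.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Equivalent to: DelPortCommand(port=tap43ea1958-7f, bridge=br-int,
# if_exists=True) inside a one-command transaction (txn n=1, idx=0).
api.del_port('tap43ea1958-7f', bridge='br-int',
             if_exists=True).execute(check_error=True)
```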
Feb 25 12:44:20 compute-0 podman[342956]: 2026-02-25 12:44:20.880690728 +0000 UTC m=+0.037917661 container remove 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.885 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a9f8ff-99f3-467f-ae96-b2806a747f2f]: (4, ('Wed Feb 25 12:44:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da)\n4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da\nWed Feb 25 12:44:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da)\n4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
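[Annotation] The privsep: reply[...] lines are oslo.privsep round-trips: the unprivileged agent calls a decorated entrypoint, a privileged daemon executes it, and the (4, result) tuple logged is the daemon's reply message sent back over its channel. A schematic of how such an entrypoint is declared; the context name and helper here are hypothetical, neutron defines its own:

```python
# Illustrative oslo.privsep pattern, not neutron's actual definitions.
from oslo_privsep import capabilities, priv_context

default = priv_context.PrivContext(
    __name__,
    cfg_section='privsep',
    pypath=__name__ + '.default',
    capabilities=[capabilities.CAP_SYS_ADMIN],
)

@default.entrypoint
def remove_netns(name):
    """Hypothetical helper; runs inside the privileged daemon process."""
    # Real work elided. The (4, result) tuples logged above are the
    # daemon's serialized replies to calls like this one.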
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.886 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9085521d-f0a3-433e-be1a-5409c21464c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.888 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:20 compute-0 kernel: tapf8edf066-30: left promiscuous mode
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c942d6ba-e63c-408a-83e4-b42ef9a27225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.910 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d8396e-d0f3-49df-89de-037357f71811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.911 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[537e6637-e77c-40f2-bf31-99313f6ff988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.921 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaffe88-6af0-46ff-b7c9-4734a54d41d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540826, 'reachable_time': 22751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342989, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:20 compute-0 systemd[1]: run-netns-ovnmeta\x2df8edf066\x2d3c6a\x2d45fb\x2dbc20\x2d36dc74f8aee6.mount: Deactivated successfully.
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.923 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:44:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.924 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19a566-bef4-45ff-9d4e-144c27163409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
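[Annotation] Here the agent tears down the ovnmeta- namespace once its last VIF is gone, and systemd reports the corresponding run-netns mount unit deactivating. The privileged remove_netns boils down to deleting the named network namespace; a simplified stand-in for neutron's ip_lib using pyroute2 (which neutron's privileged helpers also build on):

```python
# Simplified sketch of what remove_netns does, using pyroute2.
from pyroute2 import netns

ns = 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6'
if ns in netns.listnetns():
    # Unlinks /var/run/netns/<ns>; systemd then logs the
    # run-netns-*.mount deactivation seen above.
    netns.remove(ns)
```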
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.932 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Successfully updated port: 97fb9f99-cb59-4581-8866-375ea3e167d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.943 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.944 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:20 compute-0 nova_compute[244014]: 2026-02-25 12:44:20.944 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.125 244018 INFO nova.virt.libvirt.driver [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting instance files /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.125 244018 INFO nova.virt.libvirt.driver [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deletion of /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del complete
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.176 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.189 244018 INFO nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.189 244018 DEBUG oslo.service.loopingcall [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
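[Annotation] The "Waiting for function ... _deallocate_network_with_retries" entry is oslo.service's back-off looping call: the wrapped function is retried with growing intervals until it signals completion by raising LoopingCallDone. A self-contained sketch of the pattern; the inner helper and its one transient failure are invented for illustration:

```python
from oslo_service import loopingcall

attempts = {'n': 0}

def deallocate_network():
    # Hypothetical stand-in for the real Neutron call; fails once.
    attempts['n'] += 1
    if attempts['n'] < 2:
        raise RuntimeError('transient Neutron error')

def _deallocate_network_with_retries():
    try:
        deallocate_network()
    except Exception:
        return  # swallow the error; the loop retries after backing off
    raise loopingcall.LoopingCallDone()

timer = loopingcall.BackOffLoopingCall(_deallocate_network_with_retries)
timer.start(starting_interval=1, timeout=60).wait()
```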
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.190 244018 DEBUG nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.190 244018 DEBUG nova.network.neutron [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.350 244018 DEBUG nova.compute.manager [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.351 244018 DEBUG nova.compute.manager [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.351 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.391 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-deleted-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.391 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.392 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.392 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.393 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.393 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:21 compute-0 ceph-mon[76335]: pgmap v1895: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:44:21 compute-0 nova_compute[244014]: 2026-02-25 12:44:21.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.004 244018 DEBUG nova.network.neutron [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.030 244018 INFO nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 0.84 seconds to deallocate network for instance.
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.079 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.079 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.195 244018 DEBUG oslo_concurrency.processutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 287 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.588 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.613 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.614 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance network_info: |[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
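[Annotation] The network_info payloads logged above are plain JSON once pulled out of the entry; walking them for fixed and floating addresses is mechanical. A trimmed-down example over an abbreviated copy of the cache entry (most fields omitted for brevity):

```python
import json

# Abbreviated literal taken from the instance_info_cache entry above.
network_info = json.loads('''[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7",
  "address": "fa:16:3e:61:b7:f7",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.11", "floating_ips": []}]}]}}]''')

for v in network_info:
    for subnet in v['network']['subnets']:
        for ip in subnet['ips']:
            floats = [f['address'] for f in ip.get('floating_ips', [])]
            print(v['id'], ip['address'], floats)
```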
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.614 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.615 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.620 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start _get_guest_xml network_info=[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.626 244018 WARNING nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.631 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.632 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.637 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.639 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.639 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.640 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.640 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
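[Annotation] The topology entries above (limits 0:0:0, 65536 maxima, a single candidate) come from nova.virt.hardware enumerating CPU layouts for the 1-vCPU flavor. A simplified model of that search, which likewise yields exactly one 1x1x1 topology for one vCPU; nova's real code additionally honors flavor/image limits and preferences:

```python
def possible_topologies(vcpus):
    """Yield (sockets, cores, threads) triples whose product is vcpus."""
    for sockets in range(1, vcpus + 1):
        for cores in range(1, vcpus + 1):
            for threads in range(1, vcpus + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

# [(1, 1, 1)] -> matches "Got 1 possible topologies" above.
print(list(possible_topologies(1)))
```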
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.648 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087698293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.718 244018 DEBUG oslo_concurrency.processutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
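[Annotation] The "Running cmd (subprocess)" / "CMD ... returned: 0" pair is oslo.concurrency's processutils wrapping the ceph CLI; nova parses the JSON to size its RBD-backed disk inventory. The equivalent direct call, assuming a reachable cluster and the same client.openstack keyring:

```python
import json
from oslo_concurrency import processutils

# execute() returns (stdout, stderr) and raises on a non-zero exit code.
out, err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print('avail GiB:', stats['stats']['total_avail_bytes'] / 1024 ** 3)
```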
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.727 244018 DEBUG nova.compute.provider_tree [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.743 244018 DEBUG nova.scheduler.client.report [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
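[Annotation] The inventory dict in the entry above is what the resource tracker reports to Placement. Schedulable capacity per resource class is (total - reserved) * allocation_ratio, which is why this 8-core host can carry 32 VCPU allocations:

```python
# Values copied from the logged inventory data.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```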
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.777 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.803 244018 INFO nova.scheduler.client.report [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance b5941b54-9cd2-465c-89c0-3cf87ebed83e
Feb 25 12:44:22 compute-0 nova_compute[244014]: 2026-02-25 12:44:22.884 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.111 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.112 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
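[Annotation] The Acquiring/acquired/released triplets throughout are oslo.concurrency's lockutils instrumentation; the waited/held timings bracket a critical section like the one sketched here. The function body is a placeholder for InstanceEvents._clear_events:

```python
from oslo_concurrency import lockutils

def clear_events():
    # Hypothetical stand-in for the real _clear_events body.
    pass

# Emits the same Acquiring/acquired/released DEBUG lines seen above.
with lockutils.lock('cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events'):
    clear_events()
```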
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.115 244018 INFO nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Terminating instance
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.116 244018 DEBUG nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:44:23 compute-0 kernel: tap473bf89e-f4 (unregistering): left promiscuous mode
Feb 25 12:44:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:44:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701796555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:23 compute-0 NetworkManager[49836]: <info>  [1772023463.1783] device (tap473bf89e-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01102|binding|INFO|Releasing lport 473bf89e-f488-42b9-b6d3-d736d2a61760 from this chassis (sb_readonly=0)
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01103|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 down in Southbound
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01104|binding|INFO|Removing iface tap473bf89e-f4 ovn-installed in OVS
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.193 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:8c:23 10.100.0.8'], port_security=['fa:16:3e:7a:8c:23 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=473bf89e-f488-42b9-b6d3-d736d2a61760) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.194 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 unbound from our chassis
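[Annotation] "Matched UPDATE: PortBindingUpdatedEvent(...)" is ovsdbapp's event machinery firing on a southbound Port_Binding row change; the metadata agent reacts by treating the port as unbound. The shape of such an event class, reduced to its matching logic (neutron's production class carries more state and checks):

```python
# Hedged sketch of an ovsdbapp RowEvent like the one matched above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Mirrors the logged repr: events=('update',),
        # table='Port_Binding', conditions=None.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortBindingUpdatedEvent'

    def match_fn(self, event, row, old):
        # Fire only when chassis membership changed, as in the
        # old=Port_Binding(up=[True], chassis=[...]) match above.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        print('port %s binding updated' % row.logical_port)
```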
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.195 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0ff7905-af45-428a-b6a0-d6e1209fd009, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[510c8356-7aae-4517-8356-1469879a7cfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.197 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 namespace which is not needed anymore
Feb 25 12:44:23 compute-0 kernel: tap7126aff6-e4 (unregistering): left promiscuous mode
Feb 25 12:44:23 compute-0 NetworkManager[49836]: <info>  [1772023463.2131] device (tap7126aff6-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.213 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01105|binding|INFO|Releasing lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 from this chassis (sb_readonly=0)
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01106|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 down in Southbound
Feb 25 12:44:23 compute-0 ovn_controller[147040]: 2026-02-25T12:44:23Z|01107|binding|INFO|Removing iface tap7126aff6-e4 ovn-installed in OVS
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.236 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], port_security=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:6385/64', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7126aff6-e4c8-45c9-a3f7-6c333946b022) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.242 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.246 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
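[Annotation] Just above, rbd_utils probes for a per-instance config-drive image and reports it missing. The same existence check via the python-rbd bindings looks like this; the 'vms' pool name is an assumption (it follows nova's images_rbd_pool option), everything else comes from the log:

```python
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:  # pool name assumed
        name = '874359d8-3251-4416-82dc-f6776853e384_disk.config'
        try:
            rbd.Image(ioctx, name).close()
            print('image exists')
        except rbd.ImageNotFound:
            print('rbd image %s does not exist' % name)
```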
Feb 25 12:44:23 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Feb 25 12:44:23 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Consumed 14.818s CPU time.
Feb 25 12:44:23 compute-0 systemd-machined[210048]: Machine qemu-137-instance-0000006d terminated.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.270 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:23 compute-0 NetworkManager[49836]: <info>  [1772023463.3518] manager: (tap7126aff6-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : haproxy version is 2.8.14-c23fe91
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : path to executable is /usr/sbin/haproxy
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [WARNING]  (340082) : Exiting Master process...
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [ALERT]    (340082) : Current worker (340084) exited with code 143 (Terminated)
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [WARNING]  (340082) : All workers exited. Exiting... (0)
Feb 25 12:44:23 compute-0 systemd[1]: libpod-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope: Deactivated successfully.
Feb 25 12:44:23 compute-0 podman[343081]: 2026-02-25 12:44:23.36663783 +0000 UTC m=+0.058189603 container died 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.369 244018 INFO nova.virt.libvirt.driver [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance destroyed successfully.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.370 244018 DEBUG nova.objects.instance [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.393 244018 DEBUG nova.virt.libvirt.vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:10Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.393 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.394 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.395 244018 DEBUG os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.396 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473bf89e-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07-userdata-shm.mount: Deactivated successfully.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-26baa473b5cd94140eb2f5364901afbf20c0a34db5790bb729eb08e64d9f641f-merged.mount: Deactivated successfully.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.404 244018 INFO os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4')
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.405 244018 DEBUG nova.virt.libvirt.vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:10Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.405 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.407 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.407 244018 DEBUG os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7126aff6-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.413 244018 INFO os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4')
Feb 25 12:44:23 compute-0 podman[343081]: 2026-02-25 12:44:23.41731686 +0000 UTC m=+0.108868623 container cleanup 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:23 compute-0 systemd[1]: libpod-conmon-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope: Deactivated successfully.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.434 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.434 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.448 244018 DEBUG nova.compute.manager [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.449 244018 DEBUG nova.compute.manager [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.450 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.465 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.466 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.466 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.469 244018 WARNING nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state deleting.
Feb 25 12:44:23 compute-0 podman[343162]: 2026-02-25 12:44:23.496130244 +0000 UTC m=+0.055735294 container remove 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.500 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8bb9b-dd2a-4784-94b6-39809ad8218a]: (4, ('Wed Feb 25 12:44:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 (4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07)\n4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07\nWed Feb 25 12:44:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 (4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07)\n4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.501 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74395e14-a1b2-4085-9b7c-07549c349f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.502 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 kernel: tape0ff7905-a0: left promiscuous mode
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.515 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30ddc3df-e4e4-4de8-9f53-4a98b032109a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-deleted-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.526 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.529 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7861eb50-9113-4582-a0fe-4bdc3209ef4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6fc9d-4f33-44f9-af0b-0df104504fca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.545 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5224878c-483d-4cbf-bbda-3326f507e690]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535882, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343181, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[93c7a1f2-0a79-465e-ba79-437af260706a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7126aff6-e4c8-45c9-a3f7-6c333946b022 in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis
Feb 25 12:44:23 compute-0 systemd[1]: run-netns-ovnmeta\x2de0ff7905\x2daf45\x2d428a\x2db6a0\x2dd6e1209fd009.mount: Deactivated successfully.
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.551 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4445989-91e8-4869-98cb-32b4b81bb3da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.552 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4616ae75-b914-48e2-b289-13ffbe5d66fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.552 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da namespace which is not needed anymore
Feb 25 12:44:23 compute-0 ceph-mon[76335]: pgmap v1896: 305 pgs: 305 active+clean; 287 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Feb 25 12:44:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3087698293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/701796555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : haproxy version is 2.8.14-c23fe91
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : path to executable is /usr/sbin/haproxy
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [WARNING]  (340154) : Exiting Master process...
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [ALERT]    (340154) : Current worker (340156) exited with code 143 (Terminated)
Feb 25 12:44:23 compute-0 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [WARNING]  (340154) : All workers exited. Exiting... (0)
Feb 25 12:44:23 compute-0 systemd[1]: libpod-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope: Deactivated successfully.
Feb 25 12:44:23 compute-0 podman[343200]: 2026-02-25 12:44:23.676991738 +0000 UTC m=+0.049856838 container died cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8-userdata-shm.mount: Deactivated successfully.
Feb 25 12:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-053075e96790c2347754d133c524b37fb39668500f025fbd5384e1ede98d4099-merged.mount: Deactivated successfully.
Feb 25 12:44:23 compute-0 podman[343200]: 2026-02-25 12:44:23.722935945 +0000 UTC m=+0.095801045 container cleanup cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.742 244018 INFO nova.virt.libvirt.driver [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deleting instance files /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_del
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.743 244018 INFO nova.virt.libvirt.driver [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deletion of /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_del complete
Feb 25 12:44:23 compute-0 systemd[1]: libpod-conmon-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope: Deactivated successfully.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.797 244018 INFO nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.797 244018 DEBUG oslo.service.loopingcall [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.798 244018 DEBUG nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.798 244018 DEBUG nova.network.neutron [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:44:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:44:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784260646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.838 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.839 244018 DEBUG nova.virt.libvirt.vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:18Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.839 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.840 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.841 244018 DEBUG nova.objects.instance [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.856 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <uuid>874359d8-3251-4416-82dc-f6776853e384</uuid>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <name>instance-00000070</name>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:name>tempest-TestShelveInstance-server-1245136630</nova:name>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:44:22</nova:creationTime>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:user uuid="44c0b78107ea4f7381e82a02c5954e7c">tempest-TestShelveInstance-1925524092-project-member</nova:user>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:project uuid="6c0adb05683141e7a0b866f450e410e0">tempest-TestShelveInstance-1925524092</nova:project>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <nova:port uuid="97fb9f99-cb59-4581-8866-375ea3e167d7">
Feb 25 12:44:23 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="serial">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="uuid">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk">
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk.config">
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:44:23 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:61:b7:f7"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <target dev="tap97fb9f99-cb"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log" append="off"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:44:23 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:44:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:44:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:44:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:44:23 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Preparing to wait for external event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.858 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.859 244018 DEBUG nova.virt.libvirt.vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:18Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.859 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.860 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.861 244018 DEBUG os_vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.862 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 podman[343228]: 2026-02-25 12:44:23.862111972 +0000 UTC m=+0.106593719 container remove cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.862 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.865 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97fb9f99-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.865 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97fb9f99-cb, col_values=(('external_ids', {'iface-id': '97fb9f99-cb59-4581-8866-375ea3e167d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:b7:f7', 'vm-uuid': '874359d8-3251-4416-82dc-f6776853e384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.866 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2b4546-e84e-47e8-85ac-21ac08372f95]: (4, ('Wed Feb 25 12:44:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da (cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8)\ncc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8\nWed Feb 25 12:44:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da (cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8)\ncc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 NetworkManager[49836]: <info>  [1772023463.8688] manager: (tap97fb9f99-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.867 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[041dbdfd-9fda-4a57-84b4-f8b8b9e25410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.870 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.875 244018 INFO os_vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')
Feb 25 12:44:23 compute-0 kernel: tape4445989-90: left promiscuous mode
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.882 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[319a416f-bfd6-4fdd-bd3a-bbf0173d2f90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9128f8ce-a8ed-4699-95fc-71af3628e85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85fef852-dca7-4f00-ad54-3537d089092a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d01555-d16b-464d-a72b-8a374efabc5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535968, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343264, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.919 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:44:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.919 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a355a982-3970-4887-8b2d-84705c9efd05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:23 compute-0 podman[343234]: 2026-02-25 12:44:23.922162837 +0000 UTC m=+0.138259843 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No VIF found with MAC fa:16:3e:61:b7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.941 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Using config drive
Feb 25 12:44:23 compute-0 nova_compute[244014]: 2026-02-25 12:44:23.957 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 25 12:44:24 compute-0 systemd[1]: run-netns-ovnmeta\x2de4445989\x2d91e8\x2d4869\x2d98cb\x2d32b4b81bb3da.mount: Deactivated successfully.
Feb 25 12:44:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/784260646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:24 compute-0 nova_compute[244014]: 2026-02-25 12:44:24.791 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:24 compute-0 nova_compute[244014]: 2026-02-25 12:44:24.792 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:24 compute-0 nova_compute[244014]: 2026-02-25 12:44:24.813 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:24 compute-0 nova_compute[244014]: 2026-02-25 12:44:24.987 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating config drive at /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config
Feb 25 12:44:24 compute-0 nova_compute[244014]: 2026-02-25 12:44:24.993 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6lw6a38u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.024 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.047 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.047 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.048 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.048 244018 DEBUG nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.126 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6lw6a38u" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.164 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.167 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.247 244018 INFO nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.247 244018 DEBUG nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.281 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.298 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.299 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting local config drive /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config because it was imported into RBD.
Feb 25 12:44:25 compute-0 kernel: tap97fb9f99-cb: entered promiscuous mode
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.3509] manager: (tap97fb9f99-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Feb 25 12:44:25 compute-0 systemd-udevd[343267]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:25 compute-0 ovn_controller[147040]: 2026-02-25T12:44:25Z|01108|binding|INFO|Claiming lport 97fb9f99-cb59-4581-8866-375ea3e167d7 for this chassis.
Feb 25 12:44:25 compute-0 ovn_controller[147040]: 2026-02-25T12:44:25Z|01109|binding|INFO|97fb9f99-cb59-4581-8866-375ea3e167d7: Claiming fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 12:44:25 compute-0 ovn_controller[147040]: 2026-02-25T12:44:25Z|01110|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 ovn-installed in OVS
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.3640] device (tap97fb9f99-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:44:25 compute-0 ovn_controller[147040]: 2026-02-25T12:44:25Z|01111|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 up in Southbound
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.364 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.3648] device (tap97fb9f99-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 bound to our chassis
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.366 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2935849-272b-4e32-a986-125739f9b590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.380 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d059ba-e1 in ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:44:25 compute-0 systemd-machined[210048]: New machine qemu-141-instance-00000070.
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.382 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d059ba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.382 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d85d1f5-7e0f-4f0b-b2b7-d6b0ea4bc2ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7906339c-ef44-4a29-b941-2ae308dde8c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.394 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a3ee7-5337-4951-a581-fda8816f5e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9625549d-ed91-49a6-bf4d-07d415f33630]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.439 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[572f4cfa-0f0e-4717-81eb-4bb2fa148c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.4454] manager: (tape4d059ba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/460)
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7338b69-4728-4db1-b8d2-7f69f130c079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.445 244018 DEBUG nova.network.neutron [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5485cfe8-ba75-4d14-a883-94516f0cb78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.475 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46e64f79-2d68-4e0f-a84e-2ccb6f68a3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.481 244018 INFO nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 1.68 seconds to deallocate network for instance.
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.4931] device (tape4d059ba-e0): carrier: link connected
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d57583-e85a-4e56-a93f-6bb30e2be85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.509 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f26a8acc-cd33-4a38-9457-a843a9012b9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543490, 'reachable_time': 22180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343376, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e95ad1-59bf-42d3-a1a6-e8f1085e0dce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:42a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543490, 'tstamp': 543490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343377, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95ff5321-c0eb-4273-b710-b49fbdaf5279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543490, 'reachable_time': 22180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343378, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
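
The two privsep replies above are raw rtnetlink dumps: an RTM_NEWADDR for the veth's link-local address, then an RTM_NEWLINK for tape4d059ba-e1, both fetched from inside the ovnmeta- namespace named in the 'target' field. A minimal sketch of reading the same data directly with pyroute2 (the library neutron drives through privsep for this), assuming root on the compute host and that the namespace still exists:

    # Sketch: dump addresses and links inside the ovnmeta-<network> namespace,
    # as the privsep daemon did in the replies above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3'  # from the log's 'target'

    with NetNS(NS) as ns:
        for msg in ns.get_addr():    # RTM_NEWADDR messages
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])
        for msg in ns.get_links():   # RTM_NEWLINK messages
            print(msg.get_attr('IFLA_IFNAME'), msg.get_attr('IFLA_OPERSTATE'))
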
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.549 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.549 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.552 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.552 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 WARNING nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with vm_state active and task_state deleting.
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-deleted-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-deleted-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.556 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.556 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[700acde5-24f8-473b-8330-b4dffb884da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ceph-mon[76335]: pgmap v1897: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.605 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90433b16-04e7-40d2-93ee-23c482608da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.607 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.607 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.608 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d059ba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:25 compute-0 kernel: tape4d059ba-e0: entered promiscuous mode
Feb 25 12:44:25 compute-0 NetworkManager[49836]: <info>  [1772023465.6105] manager: (tape4d059ba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d059ba-e0, col_values=(('external_ids', {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
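
The three ovsdbapp commands logged above clear any stale binding of the metadata tap on br-ex (a no-op here, per "Transaction caused no change"), plug it into br-int, and set the external_ids:iface-id that ovn-controller matches against the logical port. A rough equivalent using ovsdbapp directly; the OVSDB socket path is an assumption, and the agent actually ran each command in its own transaction rather than batched:

    # Sketch: replaying DelPortCommand / AddPortCommand / DbSetCommand.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape4d059ba-e0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape4d059ba-e0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape4d059ba-e0',
            ('external_ids',
             {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'})))
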
Feb 25 12:44:25 compute-0 ovn_controller[147040]: 2026-02-25T12:44:25Z|01112|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.616 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6537c473-9d62-48a0-a130-fb9a93d056b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.619 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:44:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.619 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'env', 'PROCESS_TAG=haproxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
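
create_config_file has just rendered the haproxy configuration dumped above, and the rootwrap call starts haproxy inside the ovnmeta- namespace. A sketch of the same two steps with plain subprocess, assuming root on the compute host ('haproxy -c' only parses the config and exits, which is a useful check before the real start):

    # Sketch: validate, then start, the rendered metadata-proxy config.
    import subprocess

    NS = 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           'e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf')

    subprocess.run(['ip', 'netns', 'exec', NS, 'haproxy', '-c', '-f', CFG],
                   check=True)  # syntax check only
    subprocess.run(['ip', 'netns', 'exec', NS, 'haproxy', '-f', CFG],
                   check=True)  # backgrounds itself, per the 'daemon' directive
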
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.667 244018 DEBUG nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.669 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.669 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 DEBUG nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 WARNING nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with vm_state deleted and task_state None.
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.676 244018 DEBUG oslo_concurrency.processutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:25 compute-0 podman[343386]: 2026-02-25 12:44:25.770108516 +0000 UTC m=+0.106765324 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023465.8874667, 874359d8-3251-4416-82dc-f6776853e384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Started (Lifecycle Event)
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.906 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.911 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023465.887552, 874359d8-3251-4416-82dc-f6776853e384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.911 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Paused (Lifecycle Event)
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.930 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:44:25 compute-0 nova_compute[244014]: 2026-02-25 12:44:25.954 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.
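
The integers in the "Synchronizing instance power state" line above are nova's power-state codes: the database still says 0 while libvirt reports 3, exactly the NOSTATE-versus-PAUSED mismatch that sync_power_state skips while the spawn task is in flight. The mapping, with values reproduced here from nova.compute.power_state for readability:

    # Power-state codes as defined in nova.compute.power_state.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], POWER_STATE[3])  # NOSTATE PAUSED
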
Feb 25 12:44:25 compute-0 podman[343498]: 2026-02-25 12:44:25.993907441 +0000 UTC m=+0.058755959 container create d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:26 compute-0 systemd[1]: Started libpod-conmon-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope.
Feb 25 12:44:26 compute-0 podman[343498]: 2026-02-25 12:44:25.961733783 +0000 UTC m=+0.026582281 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:44:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5928287d34ac4333e0ae9c06ec6f097d3bd4e406cca623c2f40db94d0a23763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:26 compute-0 podman[343498]: 2026-02-25 12:44:26.091645389 +0000 UTC m=+0.156493897 container init d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:44:26 compute-0 podman[343498]: 2026-02-25 12:44:26.099212983 +0000 UTC m=+0.164061461 container start d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:44:26 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : New worker (343519) forked
Feb 25 12:44:26 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : Loading success.
Feb 25 12:44:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446958632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.223 244018 DEBUG oslo_concurrency.processutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
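
The 'ceph df --format=json' round trips above (and the matching mon_command dispatches in the ceph-mon lines) are nova's RBD storage backend measuring cluster capacity for the resource tracker. A minimal sketch of the same probe; the JSON key names follow ceph's df output, though exact fields can vary slightly across releases:

    # Sketch: run the same capacity probe and read the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    totals = df['stats']  # cluster-wide totals; per-pool data is under 'pools'
    print(totals['total_bytes'], totals['total_avail_bytes'])
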
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.227 244018 DEBUG nova.compute.provider_tree [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.247 244018 DEBUG nova.scheduler.client.report [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
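
Placement derives schedulable capacity from each inventory record above as (total - reserved) * allocation_ratio, which is why this host offers far more VCPU than it physically has and slightly less disk. Worked out from the logged inventory:

    # Sketch: capacity as placement computes it from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
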
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.278 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.282 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.345 244018 INFO nova.scheduler.client.report [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance cf7f6093-44a3-4e8f-8970-db25cf0b4ab9
Feb 25 12:44:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.414 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3446958632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063588877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.885 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.953 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.954 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:44:26 compute-0 ovn_controller[147040]: 2026-02-25T12:44:26Z|01113|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:44:26 compute-0 nova_compute[244014]: 2026-02-25 12:44:26.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:27 compute-0 ovn_controller[147040]: 2026-02-25T12:44:27Z|01114|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.078 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.92112819105387GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.157 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 874359d8-3251-4416-82dc-f6776853e384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.157 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.158 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.188 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:27 compute-0 ceph-mon[76335]: pgmap v1898: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Feb 25 12:44:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3063588877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048043195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.727 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.732 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.751 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.773 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Processing event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 WARNING nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state building and task_state spawning.
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.840 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.843 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023467.8432598, 874359d8-3251-4416-82dc-f6776853e384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.843 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Resumed (Lifecycle Event)
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.845 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.848 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance spawned successfully.
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.849 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.872 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.877 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.879 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.879 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.882 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.920 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.958 244018 INFO nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 9.76 seconds to spawn the instance on the hypervisor.
Feb 25 12:44:27 compute-0 nova_compute[244014]: 2026-02-25 12:44:27.959 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.034 244018 INFO nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 10.95 seconds to build instance.
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.054 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 1.9 MiB/s wr, 127 op/s
Feb 25 12:44:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1048043195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.773 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.774 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:28 compute-0 nova_compute[244014]: 2026-02-25 12:44:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:29 compute-0 ceph-mon[76335]: pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 1.9 MiB/s wr, 127 op/s
Feb 25 12:44:30 compute-0 nova_compute[244014]: 2026-02-25 12:44:30.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:44:30
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.rgw.root']
Feb 25 12:44:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:44:31 compute-0 nova_compute[244014]: 2026-02-25 12:44:31.431 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023456.4303594, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:31 compute-0 nova_compute[244014]: 2026-02-25 12:44:31.434 244018 INFO nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Stopped (Lifecycle Event)
Feb 25 12:44:31 compute-0 nova_compute[244014]: 2026-02-25 12:44:31.465 244018 DEBUG nova.compute.manager [None req-d488c380-a838-4703-98f2-a866ef8c19f4 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:44:31 compute-0 ceph-mon[76335]: pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:44:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:44:32 compute-0 nova_compute[244014]: 2026-02-25 12:44:32.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:32 compute-0 NetworkManager[49836]: <info>  [1772023472.2668] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Feb 25 12:44:32 compute-0 NetworkManager[49836]: <info>  [1772023472.2683] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Feb 25 12:44:32 compute-0 nova_compute[244014]: 2026-02-25 12:44:32.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:32 compute-0 ovn_controller[147040]: 2026-02-25T12:44:32Z|01115|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:44:32 compute-0 nova_compute[244014]: 2026-02-25 12:44:32.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.076 244018 DEBUG nova.compute.manager [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG nova.compute.manager [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.078 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:33 compute-0 ceph-mon[76335]: pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:44:33 compute-0 nova_compute[244014]: 2026-02-25 12:44:33.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:44:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 522 KiB/s wr, 118 op/s
Feb 25 12:44:34 compute-0 nova_compute[244014]: 2026-02-25 12:44:34.793 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:34 compute-0 nova_compute[244014]: 2026-02-25 12:44:34.794 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:34 compute-0 nova_compute[244014]: 2026-02-25 12:44:34.816 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:35 compute-0 nova_compute[244014]: 2026-02-25 12:44:35.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:35 compute-0 ceph-mon[76335]: pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 522 KiB/s wr, 118 op/s
Feb 25 12:44:35 compute-0 nova_compute[244014]: 2026-02-25 12:44:35.811 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023460.8090527, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:35 compute-0 nova_compute[244014]: 2026-02-25 12:44:35.811 244018 INFO nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Stopped (Lifecycle Event)
Feb 25 12:44:35 compute-0 nova_compute[244014]: 2026-02-25 12:44:35.833 244018 DEBUG nova.compute.manager [None req-ac637bbc-09ea-4ca8-8cf5-8ecabe5529bc - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:44:37 compute-0 ceph-mon[76335]: pgmap v1903: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:44:37 compute-0 nova_compute[244014]: 2026-02-25 12:44:37.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:44:38 compute-0 nova_compute[244014]: 2026-02-25 12:44:38.365 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023463.3641844, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:38 compute-0 nova_compute[244014]: 2026-02-25 12:44:38.366 244018 INFO nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Stopped (Lifecycle Event)
Feb 25 12:44:38 compute-0 nova_compute[244014]: 2026-02-25 12:44:38.383 244018 DEBUG nova.compute.manager [None req-ac0cb84d-a76a-4344-99b7-6d1bd56b312f - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:38 compute-0 nova_compute[244014]: 2026-02-25 12:44:38.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:39 compute-0 ovn_controller[147040]: 2026-02-25T12:44:39Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 12:44:39 compute-0 ovn_controller[147040]: 2026-02-25T12:44:39Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 12:44:39 compute-0 ceph-mon[76335]: pgmap v1904: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:44:40 compute-0 nova_compute[244014]: 2026-02-25 12:44:40.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:40.252 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:40 compute-0 nova_compute[244014]: 2026-02-25 12:44:40.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:40.253 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:44:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.303 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.304 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.325 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.401 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.402 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.411 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.412 244018 INFO nova.compute.claims [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:44:41 compute-0 nova_compute[244014]: 2026-02-25 12:44:41.556 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:41 compute-0 ceph-mon[76335]: pgmap v1905: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 25 12:44:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544809804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.098 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.106 244018 DEBUG nova.compute.provider_tree [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.126 244018 DEBUG nova.scheduler.client.report [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.165 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.166 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.221 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.221 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.257 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.277 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 217 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 115 op/s
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.388 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.390 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.391 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating image(s)
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.423 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.454 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.476 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.479 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005991630289651938 of space, bias 1.0, pg target 0.17974890868955815 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493807710132275 of space, bias 1.0, pg target 0.7481423130396825 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.006242220869224e-06 of space, bias 4.0, pg target 0.0012074906650430689 quantized to 16 (current 16)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:44:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.569 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.570 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.571 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.572 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.602 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.607 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aee87402-4b34-4083-888b-bb653e2beaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:42 compute-0 nova_compute[244014]: 2026-02-25 12:44:42.635 244018 DEBUG nova.policy [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:44:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2544809804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:43 compute-0 nova_compute[244014]: 2026-02-25 12:44:43.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.107 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aee87402-4b34-4083-888b-bb653e2beaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:44 compute-0 ceph-mon[76335]: pgmap v1906: 305 pgs: 305 active+clean; 217 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 115 op/s
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.191 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.284 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Successfully created port: 8d032336-9efd-4e76-9498-4dafee40640b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:44:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.574 244018 DEBUG nova.objects.instance [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.591 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.591 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Ensure instance console log exists: /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:44 compute-0 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.238 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Successfully updated port: 8d032336-9efd-4e76-9498-4dafee40640b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.256 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.256 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.257 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG nova.compute.manager [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG nova.compute.manager [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:45 compute-0 nova_compute[244014]: 2026-02-25 12:44:45.740 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:44:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb 25 12:44:46 compute-0 ceph-mon[76335]: pgmap v1907: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Feb 25 12:44:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb 25 12:44:46 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb 25 12:44:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:46.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.811 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.841 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.842 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance network_info: |[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.843 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.843 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.850 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start _get_guest_xml network_info=[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.858 244018 WARNING nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.864 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.865 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.872 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.872 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.873 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.873 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.874 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.874 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.877 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
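The hardware.py walk above ends with nova choosing sockets=1, cores=1, threads=1 for the single-vCPU m1.nano flavor: with flavor and image limits all 0 ("don't care"), the ceilings default to 65536 and only one triple multiplies out to 1 vCPU. A minimal Python sketch of that enumeration (a simplified illustration, not nova's actual implementation):

    # Sketch only: enumerate every (sockets, cores, threads) triple whose
    # product equals the vCPU count and which fits under the limits. The
    # 65536 defaults mirror the "limits were sockets=65536, ..." log line.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"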
Feb 25 12:44:46 compute-0 nova_compute[244014]: 2026-02-25 12:44:46.881 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb 25 12:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb 25 12:44:47 compute-0 ceph-mon[76335]: osdmap e261: 3 total, 3 up, 3 in
Feb 25 12:44:47 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb 25 12:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:44:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782215990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.417 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
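The RBD image backend discovers the cluster's monitors by shelling out to exactly the command logged above. A standalone sketch of the same call, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable (the "mons"/"name"/"public_addr" field names are from typical mon dump JSON output and may vary by Ceph release):

    import json
    import subprocess

    # Same command oslo.concurrency ran above; parse the monitor list
    # out of the JSON, which is the piece nova needs for the disk XML.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("public_addr"))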
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.438 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.442 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:44:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:44:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.895 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.896 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:47 compute-0 nova_compute[244014]: 2026-02-25 12:44:47.922 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:44:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135161991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.024 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.025 244018 DEBUG nova.virt.libvirt.vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:42Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.026 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.026 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.027 244018 DEBUG nova.objects.instance [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.039 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <uuid>aee87402-4b34-4083-888b-bb653e2beaa9</uuid>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <name>instance-00000071</name>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-204528973</nova:name>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:44:46</nova:creationTime>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <nova:port uuid="8d032336-9efd-4e76-9498-4dafee40640b">
Feb 25 12:44:48 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <system>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="serial">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="uuid">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </system>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <os>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </os>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <features>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </features>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk">
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </source>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk.config">
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </source>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:44:48 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b3:5f:c9"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <target dev="tap8d032336-9e"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log" append="off"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <video>
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </video>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:44:48 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:44:48 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:44:48 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:44:48 compute-0 nova_compute[244014]: </domain>
Feb 25 12:44:48 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
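The XML above is the complete domain definition nova hands to libvirt. When tracing a boot it is usually enough to pull the disk sources and the VIF out of it; a small sketch using the standard library, assuming the dump was saved to a local file named domain.xml (hypothetical path):

    import xml.etree.ElementTree as ET

    # Extract the pieces that matter when correlating this XML with the
    # ceph and OVN log lines: rbd disk names and the tap device.
    tree = ET.parse("domain.xml")
    for disk in tree.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"), src.get("protocol"), src.get("name"), "->", tgt.get("dev"))
    for iface in tree.findall("./devices/interface"):
        print("vif:", iface.find("target").get("dev"), iface.find("mac").get("address"))

Run against the domain above this prints the vms/..._disk and vms/..._disk.config rbd images mapped to vda/sda, and tap8d032336-9e with MAC fa:16:3e:b3:5f:c9, matching the OVS lines that follow.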
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Preparing to wait for external event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG nova.virt.libvirt.vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:42Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.042 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.042 244018 DEBUG os_vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d032336-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d032336-9e, col_values=(('external_ids', {'iface-id': '8d032336-9efd-4e76-9498-4dafee40640b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5f:c9', 'vm-uuid': 'aee87402-4b34-4083-888b-bb653e2beaa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:48 compute-0 NetworkManager[49836]: <info>  [1772023488.0500] manager: (tap8d032336-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.055 244018 INFO os_vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')
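The two ovsdbapp commands above (AddPortCommand plus DbSetCommand on the Interface's external_ids) make up one OVSDB transaction. os-vif speaks OVSDB directly via ovsdbapp, but the same end state can be approximated from a shell with ovs-vsctl; a sketch using the values from the log, not what os-vif actually executes:

    import subprocess

    # Rough ovs-vsctl equivalent of the transaction logged above:
    # add the tap port to br-int (idempotently) and tag the Interface
    # row with the Neutron port id, MAC and instance UUID.
    port = "tap8d032336-9e"
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=8d032336-9efd-4e76-9498-4dafee40640b",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:b3:5f:c9",
        "external_ids:vm-uuid=aee87402-4b34-4083-888b-bb653e2beaa9",
    ])

The iface-id external_id is what ovn-controller matches against Southbound logical ports, which is why the binding lines for 8d032336-9efd-... appear moments later.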
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.115 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.116 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.117 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:b3:5f:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.117 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Using config drive
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.142 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb 25 12:44:48 compute-0 ceph-mon[76335]: pgmap v1909: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Feb 25 12:44:48 compute-0 ceph-mon[76335]: osdmap e262: 3 total, 3 up, 3 in
Feb 25 12:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2782215990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1135161991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:44:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb 25 12:44:48 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb 25 12:44:48 compute-0 sudo[343847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:44:48 compute-0 sudo[343847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:48 compute-0 sudo[343847]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 5.3 MiB/s wr, 122 op/s
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:48 compute-0 sudo[343872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:44:48 compute-0 sudo[343872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.598 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating config drive at /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.605 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp98i6gijm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.746 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp98i6gijm" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.768 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.771 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config aee87402-4b34-4083-888b-bb653e2beaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.926 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config aee87402-4b34-4083-888b-bb653e2beaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:48 compute-0 nova_compute[244014]: 2026-02-25 12:44:48.928 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deleting local config drive /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config because it was imported into RBD.
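Config drive assembly here is two subprocesses, both visible verbatim above: mkisofs builds a config-2 labelled ISO from a staged metadata directory, then rbd import pushes it into the vms pool so the sata cdrom in the domain XML can reference it, after which the local copy is deleted. A condensed sketch (the staged directory path is hypothetical; nova used a throwaway /tmp dir, and the cosmetic -publisher/-quiet flags are trimmed):

    import subprocess

    uuid = "aee87402-4b34-4083-888b-bb653e2beaa9"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Build the ISO9660 config drive from the staged metadata tree.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
        "/tmp/metadata-dir",  # hypothetical stand-in for nova's temp dir
    ])
    # 2. Import it into the Ceph 'vms' pool; once it exists as
    #    <uuid>_disk.config the local file is no longer needed.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])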
Feb 25 12:44:48 compute-0 kernel: tap8d032336-9e: entered promiscuous mode
Feb 25 12:44:48 compute-0 NetworkManager[49836]: <info>  [1772023488.9972] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/465)
Feb 25 12:44:49 compute-0 ovn_controller[147040]: 2026-02-25T12:44:49Z|01116|binding|INFO|Claiming lport 8d032336-9efd-4e76-9498-4dafee40640b for this chassis.
Feb 25 12:44:49 compute-0 ovn_controller[147040]: 2026-02-25T12:44:49Z|01117|binding|INFO|8d032336-9efd-4e76-9498-4dafee40640b: Claiming fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 sudo[343872]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.013 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 ovn_controller[147040]: 2026-02-25T12:44:49Z|01118|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b ovn-installed in OVS
Feb 25 12:44:49 compute-0 ovn_controller[147040]: 2026-02-25T12:44:49Z|01119|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b up in Southbound
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.016 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 bound to our chassis
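At this point ovn-controller has claimed the logical port for this chassis and flipped it up in the Southbound database. One way to confirm the binding out-of-band, assuming ovn-sbctl on this host points at the same Southbound DB (a verification sketch, not part of the spawn path):

    import subprocess

    # The Port_Binding row should now carry this chassis and up=[true],
    # mirroring the "Claiming lport" / "up in Southbound" lines above.
    lport = "8d032336-9efd-4e76-9498-4dafee40640b"
    print(subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        text=True,
    ))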
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.018 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[87f60eb9-c1fc-4988-83e9-afb7f476ab86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.032 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd92597b-61 in ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd92597b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e77e3e0-c2f9-4e2e-9010-f98904b48e95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6258619-aec0-4b08-9364-b0b8577bd2a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
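The metadata agent does its veth/namespace plumbing through pyroute2 behind oslo.privsep, which is why only opaque privsep replies show up here around the "Creating VETH tapcd92597b-61 in ovnmeta-..." line. An ip(8) sketch of the equivalent operations, with the names taken from the log:

    import subprocess

    ns = "ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716"
    outer, inner = "tapcd92597b-60", "tapcd92597b-61"

    # Create the per-network namespace, then a veth pair with one end
    # moved inside it; the outer end is later attached to br-int so the
    # namespace can serve 169.254.169.254 metadata requests.
    subprocess.check_call(["ip", "netns", "add", ns])
    subprocess.check_call(["ip", "link", "add", outer, "type", "veth",
                           "peer", "name", inner])
    subprocess.check_call(["ip", "link", "set", inner, "netns", ns])
    subprocess.check_call(["ip", "link", "set", outer, "up"])
    subprocess.check_call(["ip", "-n", ns, "link", "set", inner, "up"])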
Feb 25 12:44:49 compute-0 systemd-udevd[343981]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:44:49 compute-0 systemd-machined[210048]: New machine qemu-142-instance-00000071.
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.044 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[29a2e364-50ce-4e62-aff2-502da9305b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:44:49 compute-0 NetworkManager[49836]: <info>  [1772023489.0498] device (tap8d032336-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:44:49 compute-0 NetworkManager[49836]: <info>  [1772023489.0503] device (tap8d032336-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:44:49 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1db437-565c-47ab-9082-5717392a1ec8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:44:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.091 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c903665-9a20-42fb-9afc-5d64a190bf22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b467182-ca74-48a5-8f2c-59c151afd40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 NetworkManager[49836]: <info>  [1772023489.0977] manager: (tapcd92597b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Feb 25 12:44:49 compute-0 systemd-udevd[343985]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.123 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e83dd-35d6-4a67-85a9-15f62d7295a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.125 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a1398a-ae27-4600-af53-86e6b49928bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 sudo[343987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:44:49 compute-0 sudo[343987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:49 compute-0 sudo[343987]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:49 compute-0 NetworkManager[49836]: <info>  [1772023489.1399] device (tapcd92597b-60): carrier: link connected
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.146 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fce5c-036b-4092-80cb-a72e1b138cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.157 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b36980e-8be7-4448-beae-e9401a997db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545854, 'reachable_time': 41882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344043, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92e0d09e-1c94-4c0b-8eeb-21c6ab0d91dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:ace3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545854, 'tstamp': 545854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344053, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
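The privsep replies above are pyroute2-style netlink messages (RTM_NEWLINK for the tap link, RTM_NEWADDR for its link-local address): each message is a dict whose 'attrs' key holds [name, value] pairs, and nested messages repeat the same shape. A minimal sketch for pulling named attributes out of such a structure, assuming only the literal layout shown above (get_attr and the abridged sample are illustrative, not Neutron code):

    def get_attr(msg, name, default=None):
        """Return the first attribute called `name` from a netlink-style dict."""
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    # Abridged from the RTM_NEWLINK reply above.
    link = {
        'index': 2,
        'attrs': [
            ['IFLA_IFNAME', 'tapcd92597b-61'],
            ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'],
            ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}],
        ],
    }

    kind = get_attr(get_attr(link, 'IFLA_LINKINFO', {}), 'IFLA_INFO_KIND')
    print(get_attr(link, 'IFLA_IFNAME'), kind, get_attr(link, 'IFLA_ADDRESS'))
    # -> tapcd92597b-61 veth fa:16:3e:89:ac:e3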
Feb 25 12:44:49 compute-0 ceph-mon[76335]: osdmap e263: 3 total, 3 up, 3 in
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:44:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:44:49 compute-0 sudo[344038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63d5d807-3411-4e5a-94c1-d650a8baf8ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545854, 'reachable_time': 41882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344063, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 sudo[344038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eae10f16-2599-4b25-85e1-681b0b405ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.255 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea203e2-50e2-443c-967b-7407c40329e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd92597b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 kernel: tapcd92597b-60: entered promiscuous mode
Feb 25 12:44:49 compute-0 NetworkManager[49836]: <info>  [1772023489.2600] manager: (tapcd92597b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.263 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd92597b-60, col_values=(('external_ids', {'iface-id': '37c4424f-372d-4923-b009-7893a123c4d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
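The three ovsdbapp transactions above unplug the metadata tap from br-ex, plug it into br-int, and stamp the Interface row with the Neutron port's iface-id so ovn-controller can bind the port. A rough command-line equivalent, as a sketch only (the agent actually drives ovsdb-server through the ovsdbapp Python IDL, not by shelling out):

    import subprocess

    port = 'tapcd92597b-60'
    iface_id = '37c4424f-372d-4923-b009-7893a123c4d8'

    # Mirrors DelPortCommand(if_exists=True), AddPortCommand(may_exist=True),
    # and DbSetCommand on the Interface row from the transactions above.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    f'external_ids:iface-id={iface_id}'], check=True)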
Feb 25 12:44:49 compute-0 ovn_controller[147040]: 2026-02-25T12:44:49Z|01120|binding|INFO|Releasing lport 37c4424f-372d-4923-b009-7893a123c4d8 from this chassis (sb_readonly=0)
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.266 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.271 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6109620-6cde-4719-9d5c-9dd91e6414b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.272 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
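The generated haproxy configuration above binds the link-local metadata address inside the network namespace, forwards requests to the agent's UNIX socket (haproxy treats a server address beginning with '/' as a UNIX socket path), and tags each request with the network ID. A sketch of rendering such a per-network config, assuming a simple format template (render_cfg and PROXY_CFG are illustrative; the real template lives in neutron.agent.ovn.metadata.driver, as the create_config_file reference above shows):

    import textwrap

    PROXY_CFG = textwrap.dedent("""\
        global
            log         /dev/log local0 debug
            log-tag     haproxy-metadata-proxy-{network_id}
            pidfile     /var/lib/neutron/external/pids/{network_id}.pid.haproxy
            daemon

        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID {network_id}
        """)

    def render_cfg(network_id: str) -> str:
        return PROXY_CFG.format(network_id=network_id)

    print(render_cfg('cd92597b-67bf-4492-9581-a9a7ec80f716'))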
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.273 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'env', 'PROCESS_TAG=haproxy-cd92597b-67bf-4492-9581-a9a7ec80f716', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd92597b-67bf-4492-9581-a9a7ec80f716.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
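The rootwrap command above launches haproxy inside the ovnmeta- namespace so the proxy can own 169.254.169.254 there. A minimal sketch of the same pattern, assuming ip(8) is available and the caller already holds the needed privileges (netns_exec is an illustrative helper, not the Neutron wrapper):

    import subprocess

    def netns_exec(netns, cmd, env=None):
        """Run `cmd` inside network namespace `netns` via `ip netns exec`."""
        env_args = [f'{k}={v}' for k, v in (env or {}).items()]
        return subprocess.run(
            ['ip', 'netns', 'exec', netns, 'env', *env_args, *cmd], check=True)

    netns_exec(
        'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716',
        ['haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/'
         'cd92597b-67bf-4492-9581-a9a7ec80f716.conf'],
        env={'PROCESS_TAG': 'haproxy-cd92597b-67bf-4492-9581-a9a7ec80f716'})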
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.309 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.311 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.311 244018 INFO nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shelving
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.338 244018 DEBUG nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.463623277 +0000 UTC m=+0.040214335 container create 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:44:49 compute-0 systemd[1]: Started libpod-conmon-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope.
Feb 25 12:44:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.529498236 +0000 UTC m=+0.106089294 container init 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.537650936 +0000 UTC m=+0.114242004 container start 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.540768444 +0000 UTC m=+0.117359532 container attach 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:44:49 compute-0 pensive_lamarr[344143]: 167 167
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.447653557 +0000 UTC m=+0.024244655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.546 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023489.5463667, aee87402-4b34-4083-888b-bb653e2beaa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.548 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Started (Lifecycle Event)
Feb 25 12:44:49 compute-0 systemd[1]: libpod-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope: Deactivated successfully.
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.556455617 +0000 UTC m=+0.133046685 container died 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.576 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2646160838be3ad6a1ce69c5e96bf86f81a36120dbce0ebc6afa337762a0d2-merged.mount: Deactivated successfully.
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.584 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023489.546581, aee87402-4b34-4083-888b-bb653e2beaa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.585 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Paused (Lifecycle Event)
Feb 25 12:44:49 compute-0 podman[344086]: 2026-02-25 12:44:49.589334105 +0000 UTC m=+0.165925163 container remove 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:44:49 compute-0 systemd[1]: libpod-conmon-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope: Deactivated successfully.
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.610 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.614 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:44:49 compute-0 nova_compute[244014]: 2026-02-25 12:44:49.648 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (spawning). Skip.
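The sync message above compares the DB power_state (0) with what libvirt just reported (3); because the instance still has a pending spawning task, the sync is skipped rather than forcing the states back into agreement. For reference, those integers map to the nova.compute.power_state constants (values as defined in the nova source tree; shown here as a plain dict):

    # nova.compute.power_state integer codes referenced in the log lines above.
    POWER_STATES = {
        0: 'NOSTATE',    # current DB power_state
        1: 'RUNNING',
        3: 'PAUSED',     # VM power_state reported by libvirt
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[3])  # NOSTATE -> PAUSED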
Feb 25 12:44:49 compute-0 podman[344178]: 2026-02-25 12:44:49.657962012 +0000 UTC m=+0.056476015 container create 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:44:49 compute-0 systemd[1]: Started libpod-conmon-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope.
Feb 25 12:44:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f152e8c42872b5c20ce576f93cd6d4f4e71b6bf0c572194dbd79d4f61bceb58/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 podman[344178]: 2026-02-25 12:44:49.629595351 +0000 UTC m=+0.028109434 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:44:49 compute-0 podman[344178]: 2026-02-25 12:44:49.73089454 +0000 UTC m=+0.129408553 container init 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:49 compute-0 podman[344178]: 2026-02-25 12:44:49.736113277 +0000 UTC m=+0.134627270 container start 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:44:49 compute-0 podman[344204]: 2026-02-25 12:44:49.745093851 +0000 UTC m=+0.044588250 container create 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:44:49 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : New worker (344223) forked
Feb 25 12:44:49 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : Loading success.
Feb 25 12:44:49 compute-0 systemd[1]: Started libpod-conmon-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope.
Feb 25 12:44:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:49 compute-0 podman[344204]: 2026-02-25 12:44:49.725841447 +0000 UTC m=+0.025335856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:49 compute-0 podman[344204]: 2026-02-25 12:44:49.829436991 +0000 UTC m=+0.128931390 container init 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:49 compute-0 podman[344204]: 2026-02-25 12:44:49.835813691 +0000 UTC m=+0.135308110 container start 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:44:49 compute-0 podman[344204]: 2026-02-25 12:44:49.83967214 +0000 UTC m=+0.139166559 container attach 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:44:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb 25 12:44:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb 25 12:44:50 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb 25 12:44:50 compute-0 nova_compute[244014]: 2026-02-25 12:44:50.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:50 compute-0 ceph-mon[76335]: pgmap v1912: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 5.3 MiB/s wr, 122 op/s
Feb 25 12:44:50 compute-0 ceph-mon[76335]: osdmap e264: 3 total, 3 up, 3 in
Feb 25 12:44:50 compute-0 cool_payne[344234]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:44:50 compute-0 cool_payne[344234]: --> All data devices are unavailable
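"All data devices are unavailable" here is most likely expected rather than an error: the three LVs passed to `lvm batch` already carry prepared OSDs, so ceph-volume filters them out and makes no changes, and the `ceph-volume lvm list --format json` run that follows (output printed below) confirms which OSD each LV hosts. A small sketch for digesting that JSON, assuming the structure shown below (osd_map is an illustrative helper):

    import json

    def osd_map(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, osd_fsid) from `ceph-volume lvm list` JSON."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                out[int(osd_id)] = (lv['lv_path'], lv['tags'].get('ceph.osd_fsid'))
        return out

    sample = ('{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",'
              ' "tags": {"ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}}]}')
    print(osd_map(sample))  # {0: ('/dev/ceph_vg0/ceph_lv0', 'd19afe3c-...')}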
Feb 25 12:44:50 compute-0 systemd[1]: libpod-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope: Deactivated successfully.
Feb 25 12:44:50 compute-0 podman[344204]: 2026-02-25 12:44:50.291258803 +0000 UTC m=+0.590753272 container died 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:ad:86 2001:db8:0:1:f816:3eff:fec6:ad86 2001:db8::f816:3eff:fec6:ad86'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec6:ad86/64 2001:db8::f816:3eff:fec6:ad86/64', 'neutron:device_id': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0b276f7a-90bb-4427-8f39-0e014732fd20) old=Port_Binding(mac=['fa:16:3e:c6:ad:86 2001:db8::f816:3eff:fec6:ad86'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec6:ad86/64', 'neutron:device_id': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.318 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0b276f7a-90bb-4427-8f39-0e014732fd20 in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 updated
Feb 25 12:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3-merged.mount: Deactivated successfully.
Feb 25 12:44:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.322 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:44:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.324 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4cc413-5e85-4ff0-9313-857c1440186f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:50 compute-0 podman[344204]: 2026-02-25 12:44:50.338660711 +0000 UTC m=+0.638155100 container remove 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:44:50 compute-0 systemd[1]: libpod-conmon-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope: Deactivated successfully.
Feb 25 12:44:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 5.1 MiB/s wr, 138 op/s
Feb 25 12:44:50 compute-0 sudo[344038]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:50 compute-0 sudo[344265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:44:50 compute-0 sudo[344265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:50 compute-0 sudo[344265]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:50 compute-0 sudo[344290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:44:50 compute-0 sudo[344290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.840789741 +0000 UTC m=+0.050405253 container create 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:44:50 compute-0 systemd[1]: Started libpod-conmon-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope.
Feb 25 12:44:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.822497895 +0000 UTC m=+0.032113367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.921624702 +0000 UTC m=+0.131240164 container init 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.928336622 +0000 UTC m=+0.137952054 container start 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:44:50 compute-0 distracted_tharp[344346]: 167 167
Feb 25 12:44:50 compute-0 systemd[1]: libpod-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope: Deactivated successfully.
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.93254522 +0000 UTC m=+0.142160682 container attach 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.933288611 +0000 UTC m=+0.142904053 container died 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e3375f5f97be8916b85e73160c6eea15c54d2dbd9a38bb599bc79fdb3432288-merged.mount: Deactivated successfully.
Feb 25 12:44:50 compute-0 podman[344329]: 2026-02-25 12:44:50.975861603 +0000 UTC m=+0.185477025 container remove 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:44:50 compute-0 systemd[1]: libpod-conmon-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.120319919 +0000 UTC m=+0.039855975 container create 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:44:51 compute-0 systemd[1]: Started libpod-conmon-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope.
Feb 25 12:44:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.18376597 +0000 UTC m=+0.103302046 container init 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.191477917 +0000 UTC m=+0.111014003 container start 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.195081829 +0000 UTC m=+0.114617885 container attach 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:44:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.102429214 +0000 UTC m=+0.021965280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb 25 12:44:51 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb 25 12:44:51 compute-0 fervent_mclean[344386]: {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     "0": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "devices": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "/dev/loop3"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             ],
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_name": "ceph_lv0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_size": "21470642176",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "name": "ceph_lv0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "tags": {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_name": "ceph",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.crush_device_class": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.encrypted": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.objectstore": "bluestore",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_id": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.vdo": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.with_tpm": "0"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             },
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "vg_name": "ceph_vg0"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         }
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     ],
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     "1": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "devices": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "/dev/loop4"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             ],
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_name": "ceph_lv1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_size": "21470642176",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "name": "ceph_lv1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "tags": {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_name": "ceph",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.crush_device_class": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.encrypted": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.objectstore": "bluestore",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_id": "1",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.vdo": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.with_tpm": "0"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             },
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "vg_name": "ceph_vg1"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         }
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     ],
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     "2": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "devices": [
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "/dev/loop5"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             ],
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_name": "ceph_lv2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_size": "21470642176",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "name": "ceph_lv2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "tags": {
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.cluster_name": "ceph",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.crush_device_class": "",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.encrypted": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.objectstore": "bluestore",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osd_id": "2",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.vdo": "0",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:                 "ceph.with_tpm": "0"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             },
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "type": "block",
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:             "vg_name": "ceph_vg2"
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:         }
Feb 25 12:44:51 compute-0 fervent_mclean[344386]:     ]
Feb 25 12:44:51 compute-0 fervent_mclean[344386]: }
Feb 25 12:44:51 compute-0 systemd[1]: libpod-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.480648248 +0000 UTC m=+0.400184314 container died 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec-merged.mount: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344370]: 2026-02-25 12:44:51.51757358 +0000 UTC m=+0.437109646 container remove 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:44:51 compute-0 systemd[1]: libpod-conmon-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 sudo[344290]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:51 compute-0 kernel: tap97fb9f99-cb (unregistering): left promiscuous mode
Feb 25 12:44:51 compute-0 NetworkManager[49836]: <info>  [1772023491.5736] device (tap97fb9f99-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:44:51 compute-0 ovn_controller[147040]: 2026-02-25T12:44:51Z|01121|binding|INFO|Releasing lport 97fb9f99-cb59-4581-8866-375ea3e167d7 from this chassis (sb_readonly=0)
Feb 25 12:44:51 compute-0 ovn_controller[147040]: 2026-02-25T12:44:51Z|01122|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 down in Southbound
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 ovn_controller[147040]: 2026-02-25T12:44:51Z|01123|binding|INFO|Removing iface tap97fb9f99-cb ovn-installed in OVS
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.587 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.588 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 unbound from our chassis
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.590 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d059ba-eacc-463b-a8a4-393a5a36dba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.590 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d059191-c1e1-4a35-af71-21690a165bf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace which is not needed anymore
Feb 25 12:44:51 compute-0 sudo[344405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:44:51 compute-0 sudo[344405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:51 compute-0 sudo[344405]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:51 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 12.809s CPU time.
Feb 25 12:44:51 compute-0 systemd-machined[210048]: Machine qemu-141-instance-00000070 terminated.
Feb 25 12:44:51 compute-0 sudo[344443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:44:51 compute-0 sudo[344443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:51 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : haproxy version is 2.8.14-c23fe91
Feb 25 12:44:51 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : path to executable is /usr/sbin/haproxy
Feb 25 12:44:51 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [WARNING]  (343517) : Exiting Master process...
Feb 25 12:44:51 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [ALERT]    (343517) : Current worker (343519) exited with code 143 (Terminated)
Feb 25 12:44:51 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [WARNING]  (343517) : All workers exited. Exiting... (0)
Feb 25 12:44:51 compute-0 systemd[1]: libpod-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 conmon[343513]: conmon d905eeb48e3eb29d0a00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope/container/memory.events
Feb 25 12:44:51 compute-0 podman[344474]: 2026-02-25 12:44:51.702989411 +0000 UTC m=+0.038120307 container died d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31-userdata-shm.mount: Deactivated successfully.
Feb 25 12:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5928287d34ac4333e0ae9c06ec6f097d3bd4e406cca623c2f40db94d0a23763-merged.mount: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344474]: 2026-02-25 12:44:51.738083512 +0000 UTC m=+0.073214408 container cleanup d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:44:51 compute-0 systemd[1]: libpod-conmon-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344507]: 2026-02-25 12:44:51.793063193 +0000 UTC m=+0.038944630 container remove d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55669442-9860-4dec-a12a-d564ca56e9e7]: (4, ('Wed Feb 25 12:44:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31)\nd905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31\nWed Feb 25 12:44:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31)\nd905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29b153dd-f333-470e-9f33-c38710fbd6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.806 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 kernel: tape4d059ba-e0: left promiscuous mode
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 nova_compute[244014]: 2026-02-25 12:44:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78547dc0-2f97-454c-91dc-0b448f1495c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6fc288-66b9-459c-8ed6-c4b53d2018b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.846 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f42a6bc6-3f96-47fe-9a83-f429b825712b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a85da5-f5bf-4195-9ccf-43f43cc64ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543484, 'reachable_time': 44463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344538, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.867 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:44:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.868 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e25bc163-1a4c-43f7-8c39-34e27badaa81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:44:51 compute-0 systemd[1]: run-netns-ovnmeta\x2de4d059ba\x2deacc\x2d463b\x2da8a4\x2d393a5a36dba3.mount: Deactivated successfully.
Feb 25 12:44:51 compute-0 podman[344547]: 2026-02-25 12:44:51.941017408 +0000 UTC m=+0.046365259 container create 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:44:51 compute-0 systemd[1]: Started libpod-conmon-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope.
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:51.915147778 +0000 UTC m=+0.020495669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:52.026862041 +0000 UTC m=+0.132209952 container init 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:52.035305749 +0000 UTC m=+0.140653550 container start 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:52.039102616 +0000 UTC m=+0.144450517 container attach 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:44:52 compute-0 blissful_engelbart[344563]: 167 167
Feb 25 12:44:52 compute-0 systemd[1]: libpod-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope: Deactivated successfully.
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:52.043009356 +0000 UTC m=+0.148357207 container died 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:44:52 compute-0 podman[344547]: 2026-02-25 12:44:52.088252243 +0000 UTC m=+0.193600094 container remove 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:44:52 compute-0 systemd[1]: libpod-conmon-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope: Deactivated successfully.
Feb 25 12:44:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb 25 12:44:52 compute-0 ceph-mon[76335]: pgmap v1914: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 5.1 MiB/s wr, 138 op/s
Feb 25 12:44:52 compute-0 ceph-mon[76335]: osdmap e265: 3 total, 3 up, 3 in
Feb 25 12:44:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb 25 12:44:52 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb 25 12:44:52 compute-0 podman[344587]: 2026-02-25 12:44:52.245392568 +0000 UTC m=+0.044746144 container create eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:44:52 compute-0 systemd[1]: Started libpod-conmon-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope.
Feb 25 12:44:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:44:52 compute-0 podman[344587]: 2026-02-25 12:44:52.226117114 +0000 UTC m=+0.025470740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:44:52 compute-0 podman[344587]: 2026-02-25 12:44:52.352417488 +0000 UTC m=+0.151771094 container init eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.358 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance shutdown successfully after 3 seconds.
Feb 25 12:44:52 compute-0 podman[344587]: 2026-02-25 12:44:52.363883802 +0000 UTC m=+0.163237368 container start eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:44:52 compute-0 podman[344587]: 2026-02-25 12:44:52.368273495 +0000 UTC m=+0.167627121 container attach eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.368 244018 DEBUG nova.compute.manager [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.369 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.370 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 91 KiB/s wr, 156 op/s
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.370 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.371 244018 DEBUG nova.compute.manager [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Processing event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.372 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.376 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.377 244018 DEBUG nova.objects.instance [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.380 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023492.3781693, aee87402-4b34-4083-888b-bb653e2beaa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.380 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Resumed (Lifecycle Event)
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.383 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.389 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance spawned successfully.
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.390 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.414 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.441 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.441 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.442 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.443 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.444 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.445 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.457 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.533 244018 INFO nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 10.14 seconds to spawn the instance on the hypervisor.
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.539 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.617 244018 INFO nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 11.24 seconds to build instance.
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.636 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.783 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Beginning cold snapshot process
Feb 25 12:44:52 compute-0 nova_compute[244014]: 2026-02-25 12:44:52.967 244018 DEBUG nova.virt.libvirt.imagebackend [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 12:44:53 compute-0 lvm[344713]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:44:53 compute-0 lvm[344713]: VG ceph_vg0 finished
Feb 25 12:44:53 compute-0 lvm[344716]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:44:53 compute-0 lvm[344716]: VG ceph_vg1 finished
Feb 25 12:44:53 compute-0 lvm[344718]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:44:53 compute-0 lvm[344718]: VG ceph_vg2 finished
Feb 25 12:44:53 compute-0 nova_compute[244014]: 2026-02-25 12:44:53.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:53 compute-0 lvm[344719]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:44:53 compute-0 lvm[344719]: VG ceph_vg0 finished
Feb 25 12:44:53 compute-0 heuristic_shirley[344604]: {}
Feb 25 12:44:53 compute-0 systemd[1]: libpod-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Deactivated successfully.
Feb 25 12:44:53 compute-0 systemd[1]: libpod-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Consumed 1.188s CPU time.
Feb 25 12:44:53 compute-0 podman[344722]: 2026-02-25 12:44:53.18027128 +0000 UTC m=+0.026954962 container died eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0-merged.mount: Deactivated successfully.
Feb 25 12:44:53 compute-0 ceph-mon[76335]: osdmap e266: 3 total, 3 up, 3 in
Feb 25 12:44:53 compute-0 podman[344722]: 2026-02-25 12:44:53.230714433 +0000 UTC m=+0.077398095 container remove eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:44:53 compute-0 systemd[1]: libpod-conmon-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Deactivated successfully.
Feb 25 12:44:53 compute-0 sudo[344443]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:44:53 compute-0 nova_compute[244014]: 2026-02-25 12:44:53.277 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] creating snapshot(1638c28f135b4127b592203a14fda253) on rbd image(874359d8-3251-4416-82dc-f6776853e384_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:44:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:44:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:53 compute-0 sudo[344750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:44:53 compute-0 sudo[344750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:44:53 compute-0 sudo[344750]: pam_unix(sudo:session): session closed for user root
Feb 25 12:44:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:54 compute-0 ceph-mon[76335]: pgmap v1917: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 91 KiB/s wr, 156 op/s
Feb 25 12:44:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:44:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb 25 12:44:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb 25 12:44:54 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.328 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] cloning vms/874359d8-3251-4416-82dc-f6776853e384_disk@1638c28f135b4127b592203a14fda253 to images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:44:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 127 KiB/s wr, 187 op/s
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.446 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] flattening images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:44:54 compute-0 podman[344833]: 2026-02-25 12:44:54.772002698 +0000 UTC m=+0.106816295 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.991 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.992 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.993 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.994 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.994 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.995 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.995 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.996 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.996 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.997 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.997 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.998 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state shelving_image_uploading.
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.998 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:54 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.999 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:54.999 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.000 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.000 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.001 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state shelving_image_uploading.
Feb 25 12:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.073 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] removing snapshot(1638c28f135b4127b592203a14fda253) on rbd image(874359d8-3251-4416-82dc-f6776853e384_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb 25 12:44:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb 25 12:44:55 compute-0 ceph-mon[76335]: osdmap e267: 3 total, 3 up, 3 in
Feb 25 12:44:55 compute-0 ceph-mon[76335]: pgmap v1919: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 127 KiB/s wr, 187 op/s
Feb 25 12:44:55 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb 25 12:44:55 compute-0 nova_compute[244014]: 2026-02-25 12:44:55.590 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] creating snapshot(snap) on rbd image(ce5f0984-576d-4f20-8e86-b4b26cb5bb6e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 12:44:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 106 KiB/s wr, 156 op/s
Feb 25 12:44:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb 25 12:44:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb 25 12:44:56 compute-0 ceph-mon[76335]: osdmap e268: 3 total, 3 up, 3 in
Feb 25 12:44:56 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb 25 12:44:56 compute-0 podman[344890]: 2026-02-25 12:44:56.745599792 +0000 UTC m=+0.098335696 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.273 244018 DEBUG nova.compute.manager [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.274 244018 DEBUG nova.compute.manager [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.274 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.275 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.275 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.416 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.417 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.443 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:44:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb 25 12:44:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb 25 12:44:57 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb 25 12:44:57 compute-0 ceph-mon[76335]: pgmap v1921: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 106 KiB/s wr, 156 op/s
Feb 25 12:44:57 compute-0 ceph-mon[76335]: osdmap e269: 3 total, 3 up, 3 in
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.531 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.532 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.543 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.544 244018 INFO nova.compute.claims [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.703 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.935 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Snapshot image upload complete
Feb 25 12:44:57 compute-0 nova_compute[244014]: 2026-02-25 12:44:57.938 244018 DEBUG nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.018 244018 INFO nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shelve offloading
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.028 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.028 244018 DEBUG nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.032 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.033 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.033 244018 DEBUG nova.network.neutron [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182150988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.263 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.273 244018 DEBUG nova.compute.provider_tree [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.289 244018 DEBUG nova.scheduler.client.report [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.318 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.319 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:44:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 11 MiB/s wr, 479 op/s
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.374 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.375 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.402 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.420 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb 25 12:44:58 compute-0 ceph-mon[76335]: osdmap e270: 3 total, 3 up, 3 in
Feb 25 12:44:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1182150988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb 25 12:44:58 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.549 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.551 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.552 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating image(s)
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.588 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.614 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.639 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.643 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.733 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.735 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.736 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.736 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.766 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.774 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd8c9142-2607-4722-90eb-65233f258639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:44:58 compute-0 nova_compute[244014]: 2026-02-25 12:44:58.820 244018 DEBUG nova.policy [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.034 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd8c9142-2607-4722-90eb-65233f258639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.117 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.118 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.132 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dd8c9142-2607-4722-90eb-65233f258639_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.177 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.214 244018 DEBUG nova.objects.instance [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.239 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.239 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Ensure instance console log exists: /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:44:59 compute-0 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:44:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb 25 12:44:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb 25 12:44:59 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb 25 12:44:59 compute-0 ceph-mon[76335]: pgmap v1924: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 11 MiB/s wr, 479 op/s
Feb 25 12:44:59 compute-0 ceph-mon[76335]: osdmap e271: 3 total, 3 up, 3 in
Feb 25 12:45:00 compute-0 nova_compute[244014]: 2026-02-25 12:45:00.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 12 MiB/s wr, 488 op/s
Feb 25 12:45:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb 25 12:45:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb 25 12:45:00 compute-0 ceph-mon[76335]: osdmap e272: 3 total, 3 up, 3 in
Feb 25 12:45:00 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb 25 12:45:00 compute-0 nova_compute[244014]: 2026-02-25 12:45:00.801 244018 DEBUG nova.network.neutron [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:00 compute-0 nova_compute[244014]: 2026-02-25 12:45:00.825 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:00 compute-0 nova_compute[244014]: 2026-02-25 12:45:00.829 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully created port: e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.456 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully created port: 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:45:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb 25 12:45:01 compute-0 ceph-mon[76335]: pgmap v1927: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 12 MiB/s wr, 488 op/s
Feb 25 12:45:01 compute-0 ceph-mon[76335]: osdmap e273: 3 total, 3 up, 3 in
Feb 25 12:45:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb 25 12:45:01 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.858 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.859 244018 DEBUG nova.objects.instance [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'resources' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.875 244018 DEBUG nova.virt.libvirt.vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:44:52Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.875 244018 DEBUG nova.network.os_vif_util [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.876 244018 DEBUG nova.network.os_vif_util [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.876 244018 DEBUG os_vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.879 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97fb9f99-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.887 244018 INFO os_vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.959 244018 DEBUG nova.compute.manager [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.959 244018 DEBUG nova.compute.manager [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.960 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.960 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:01 compute-0 nova_compute[244014]: 2026-02-25 12:45:01.961 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.059 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully updated port: e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.187 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting instance files /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.189 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deletion of /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del complete
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.245 244018 DEBUG nova.compute.manager [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.247 244018 DEBUG nova.compute.manager [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.247 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.248 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.248 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.282 244018 INFO nova.scheduler.client.report [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Deleted allocations for instance 874359d8-3251-4416-82dc-f6776853e384
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 347 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 4.4 MiB/s wr, 391 op/s
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.405 244018 DEBUG oslo_concurrency.processutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.508 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:45:02 compute-0 ceph-mon[76335]: osdmap e274: 3 total, 3 up, 3 in
Feb 25 12:45:02 compute-0 ovn_controller[147040]: 2026-02-25T12:45:02Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 12:45:02 compute-0 ovn_controller[147040]: 2026-02-25T12:45:02Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.949 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713606025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.964 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.974 244018 DEBUG oslo_concurrency.processutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.978 244018 DEBUG nova.compute.provider_tree [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:45:02 compute-0 nova_compute[244014]: 2026-02-25 12:45:02.996 244018 DEBUG nova.scheduler.client.report [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.012 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.081 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.108 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully updated port: 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.130 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.130 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.131 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:45:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb 25 12:45:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb 25 12:45:03 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb 25 12:45:03 compute-0 ceph-mon[76335]: pgmap v1930: 305 pgs: 305 active+clean; 347 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 4.4 MiB/s wr, 391 op/s
Feb 25 12:45:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1713606025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:03 compute-0 ceph-mon[76335]: osdmap e275: 3 total, 3 up, 3 in
Feb 25 12:45:03 compute-0 nova_compute[244014]: 2026-02-25 12:45:03.790 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.103 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.104 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": null, "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.124 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.323 244018 DEBUG nova.compute.manager [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.323 244018 DEBUG nova.compute.manager [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:04 compute-0 nova_compute[244014]: 2026-02-25 12:45:04.324 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 495 KiB/s rd, 6.3 MiB/s wr, 454 op/s
Feb 25 12:45:05 compute-0 nova_compute[244014]: 2026-02-25 12:45:05.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:05 compute-0 ceph-mon[76335]: pgmap v1932: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 495 KiB/s rd, 6.3 MiB/s wr, 454 op/s
Feb 25 12:45:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 5.1 MiB/s wr, 368 op/s
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.813 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023491.81187, 874359d8-3251-4416-82dc-f6776853e384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.814 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Stopped (Lifecycle Event)
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.829 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.850 244018 DEBUG nova.compute.manager [None req-6ca805a9-88d8-429a-86dd-3858063e5319 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.857 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.858 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance network_info: |[{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.859 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.860 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.864 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start _get_guest_xml network_info=[{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.871 244018 WARNING nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.876 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.877 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.885 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.886 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.886 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.890 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:45:06 compute-0 nova_compute[244014]: 2026-02-25 12:45:06.894 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.107 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.108 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.108 244018 INFO nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Unshelving
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.214 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.215 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.228 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.271 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.283 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.283 244018 INFO nova.compute.claims [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.398 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136277098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.472 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.500 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.506 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:07 compute-0 ceph-mon[76335]: pgmap v1933: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 5.1 MiB/s wr, 368 op/s
Feb 25 12:45:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2136277098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.952 244018 INFO nova.compute.manager [None req-42e46de6-9a97-47ff-82ab-d3df2084fda9 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output
Feb 25 12:45:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771756237' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.963 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.988 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:07 compute-0 nova_compute[244014]: 2026-02-25 12:45:07.994 244018 DEBUG nova.compute.provider_tree [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.008 244018 DEBUG nova.scheduler.client.report [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.029 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067648468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.090 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.091 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.092 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.092 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.093 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.094 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.095 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
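
The two Converting VIF / Converted object pairs above show the hand-off between nova and os-vif: nova's untyped VIF dict goes in, a typed VIFOpenVSwitch object comes out. As a minimal sketch (assuming the public os-vif object model rather than nova's own converter, with field values copied from the first port's log entry), the same object can be built directly:

    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    network = osv_network.Network(
        id='bb79e0fd-2a4d-4a70-9c80-4853297401ff',
        bridge='br-int',
        label='tempest-network-smoke--1671675840',
        mtu=1442)

    vif = osv_vif.VIFOpenVSwitch(
        id='e50c9f03-a8a5-48d1-a34b-4a8fd638d5df',
        address='fa:16:3e:47:4a:25',
        network=network,
        plugin='ovs',
        vif_name='tape50c9f03-a8',       # devname of the tap device
        bridge_name='br-int',
        has_traffic_filtering=True,      # "port_filter": true in the dict
        preserve_on_delete=False,
        active=False,                    # port not yet up at conversion time
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id='e50c9f03-a8a5-48d1-a34b-4a8fd638d5df'))
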
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.096 244018 DEBUG nova.objects.instance [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.113 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <uuid>dd8c9142-2607-4722-90eb-65233f258639</uuid>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <name>instance-00000072</name>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1795695092</nova:name>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:45:06</nova:creationTime>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:port uuid="e50c9f03-a8a5-48d1-a34b-4a8fd638d5df">
Feb 25 12:45:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <nova:port uuid="68ffeedb-a9df-4fa8-9ae1-2535a1f1799d">
Feb 25 12:45:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe60:df22" ipVersion="6"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe60:df22" ipVersion="6"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="serial">dd8c9142-2607-4722-90eb-65233f258639</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="uuid">dd8c9142-2607-4722-90eb-65233f258639</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dd8c9142-2607-4722-90eb-65233f258639_disk">
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dd8c9142-2607-4722-90eb-65233f258639_disk.config">
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:47:4a:25"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <target dev="tape50c9f03-a8"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:60:df:22"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <target dev="tap68ffeedb-a9"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/console.log" append="off"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:45:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:45:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:45:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:45:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:45:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
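
With _get_guest_xml done, the driver's next step is to hand this document to libvirt and boot the guest. A minimal sketch of that step using the generic libvirt-python calls (the standard define-then-create pattern, not nova's exact code path):

    import libvirt

    xml = open('domain.xml').read()   # the <domain type="kvm"> document above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)     # persist the domain definition
        dom.create()                  # boot instance-00000072
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
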
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.114 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Preparing to wait for external event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Preparing to wait for external event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
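
Each Acquiring/acquired/released triplet above is oslo.concurrency serializing access to the per-instance event list; "waited 0.000s" and "held 0.000s" mean the lock was uncontended. A minimal sketch of the pattern that emits those DEBUG lines (lock name copied from the log; the guarded function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('dd8c9142-2607-4722-90eb-65233f258639-events')
    def _create_or_get_event():
        # runs with the named in-process lock held; lockutils logs the
        # "acquired ... waited" / "released ... held" lines at DEBUG
        pass

    # equivalent context-manager form
    with lockutils.lock('dd8c9142-2607-4722-90eb-65233f258639-events'):
        pass
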
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.117 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.117 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.118 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.118 244018 DEBUG os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.119 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.120 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.123 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50c9f03-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.123 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50c9f03-a8, col_values=(('external_ids', {'iface-id': 'e50c9f03-a8a5-48d1-a34b-4a8fd638d5df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:4a:25', 'vm-uuid': 'dd8c9142-2607-4722-90eb-65233f258639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 NetworkManager[49836]: <info>  [1772023508.1264] manager: (tape50c9f03-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.133 244018 INFO os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8')
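
The plug itself is the three OVSDB commands logged above: ensure br-int exists, add the tap port, and stamp the Interface row's external_ids with iface-id so ovn-controller can match the interface to its logical port. A minimal sketch issuing the same transaction through ovsdbapp (the local OVSDB socket path is an assumption; command arguments are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tape50c9f03-a8', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape50c9f03-a8',
            ('external_ids', {
                'iface-id': 'e50c9f03-a8a5-48d1-a34b-4a8fd638d5df',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:47:4a:25',
                'vm-uuid': 'dd8c9142-2607-4722-90eb-65233f258639'})))
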
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.134 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.134 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.136 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.136 244018 DEBUG os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.139 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68ffeedb-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68ffeedb-a9, col_values=(('external_ids', {'iface-id': '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:df:22', 'vm-uuid': 'dd8c9142-2607-4722-90eb-65233f258639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 NetworkManager[49836]: <info>  [1772023508.1460] manager: (tap68ffeedb-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.153 244018 INFO os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9')
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.215 244018 INFO nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating port 97fb9f99-cb59-4581-8866-375ea3e167d7 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.245 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.246 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.246 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.253 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:47:4a:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:60:df:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.255 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Using config drive
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.286 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
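
The rbd_utils line above is an existence probe: opening the image raises ImageNotFound, which tells nova it still has to build and import the config drive. A minimal sketch of the same probe against the Ceph python bindings, with the pool, client id, and conf path taken from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    name = 'dd8c9142-2607-4722-90eb-65233f258639_disk.config'
    try:
        with rbd.Image(ioctx, name, read_only=True):
            print('rbd image %s exists' % name)
    except rbd.ImageNotFound:
        print('rbd image %s does not exist' % name)   # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()
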
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.300 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.301 244018 DEBUG nova.objects.instance [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.334 244018 DEBUG nova.virt.libvirt.driver [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 12:45:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb 25 12:45:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 352 op/s
Feb 25 12:45:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb 25 12:45:08 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb 25 12:45:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1771756237' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3067648468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:08 compute-0 ceph-mon[76335]: osdmap e276: 3 total, 3 up, 3 in
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.786 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.786 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.808 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.840 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating config drive at /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.844 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpimtvacjf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.883 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.884 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.884 244018 DEBUG nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:45:08 compute-0 nova_compute[244014]: 2026-02-25 12:45:08.986 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpimtvacjf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.026 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.031 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config dd8c9142-2607-4722-90eb-65233f258639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.179 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config dd8c9142-2607-4722-90eb-65233f258639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.180 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deleting local config drive /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config because it was imported into RBD.
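[annotation] The sequence above is nova's config-drive path on a Ceph-backed compute: build the ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A minimal Python sketch of those two steps, using only commands and arguments that appear in the log; the replicate_config_drive helper itself is hypothetical, not nova code:

    import subprocess

    def replicate_config_drive(instance_uuid: str, src_dir: str) -> None:
        # Hypothetical re-run of the two commands logged above.
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        # 1. Build the config-drive ISO (most flags copied from the logged command).
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
             src_dir],
            check=True)
        # 2. Import it into the 'vms' RBD pool, matching the logged rbd invocation.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True)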
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2303] manager: (tape50c9f03-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Feb 25 12:45:09 compute-0 kernel: tape50c9f03-a8: entered promiscuous mode
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01124|binding|INFO|Claiming lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for this chassis.
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01125|binding|INFO|e50c9f03-a8a5-48d1-a34b-4a8fd638d5df: Claiming fa:16:3e:47:4a:25 10.100.0.14
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.243 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:4a:25 10.100.0.14'], port_security=['fa:16:3e:47:4a:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.244 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff bound to our chassis
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.245 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01126|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df ovn-installed in OVS
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01127|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df up in Southbound
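[annotation] At this point the tap device exists and ovn-controller has claimed the logical port for this chassis. What triggers the claim is the external_ids:iface-id value on the OVS interface matching the Neutron port UUID (the log shows exactly this kind of DbSetCommand further down for the metadata tap). An illustrative sketch of the same plug operation via ovs-vsctl; this is not nova's actual code path, which goes through os-vif and ovsdbapp:

    import subprocess

    def plug_vif(tap_name: str, neutron_port_id: str) -> None:
        # Add the tap to br-int and set the iface-id that ovn-controller
        # matches against the Southbound Port_Binding's logical_port.
        subprocess.run(
            ["ovs-vsctl", "add-port", "br-int", tap_name, "--",
             "set", "Interface", tap_name,
             f"external_ids:iface-id={neutron_port_id}"],
            check=True)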
Feb 25 12:45:09 compute-0 kernel: tap68ffeedb-a9: entered promiscuous mode
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2511] manager: (tap68ffeedb-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01128|binding|INFO|Claiming lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for this chassis.
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01129|binding|INFO|68ffeedb-a9df-4fa8-9ae1-2535a1f1799d: Claiming fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.257 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72d06ebf-6fcb-4d3b-a0a6-8854330e1c69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.258 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb79e0fd-21 in ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.262 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb79e0fd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.262 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa8b78a-3424-4f1e-b066-3ea075c436f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.263 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[428f3833-40c2-4461-9afe-1d51c369ccef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 systemd-udevd[345313]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:09 compute-0 systemd-udevd[345314]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01130|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d up in Southbound
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.267 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], port_security=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe60:df22/64 2001:db8::f816:3eff:fe60:df22/64', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01131|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d ovn-installed in OVS
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2789] device (tap68ffeedb-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2799] device (tap68ffeedb-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.277 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2d813831-4109-4ae3-a031-6e3a8777a925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2805] device (tape50c9f03-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.2812] device (tape50c9f03-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.291 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf341a4-fc02-4195-a4d7-ddac75d41752]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
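[annotation] The privsep replies in this stretch correspond to the metadata agent building the ovnmeta-bb79e0fd-... namespace: a veth pair whose -20 end stays in the root namespace (it is later added to br-int) while the -21 end is moved inside, plus the promote_secondaries sysctl whose output is echoed in the reply above. A rough iproute2 equivalent under hypothetical names; the agent really drives this through pyroute2 inside the privsep daemon rather than shelling out:

    import subprocess

    def provision_namespace(netns: str, outer: str, inner: str) -> None:
        # Create the namespace and the veth pair, then move one end inside
        # and bring both ends up -- mirroring the privsep calls logged here.
        subprocess.run(["ip", "netns", "add", netns], check=True)
        subprocess.run(["ip", "link", "add", outer, "type", "veth",
                        "peer", "name", inner], check=True)
        subprocess.run(["ip", "link", "set", inner, "netns", netns], check=True)
        subprocess.run(["ip", "-n", netns, "link", "set", inner, "up"], check=True)
        subprocess.run(["ip", "link", "set", outer, "up"], check=True)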
Feb 25 12:45:09 compute-0 systemd-machined[210048]: New machine qemu-143-instance-00000072.
Feb 25 12:45:09 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.324 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[903ed677-7e54-44bd-8134-f9af0a3aebd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b148b462-332d-432f-b025-641d5bd73133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.3302] manager: (tapbb79e0fd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/472)
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.369 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34975f-3dcc-43a1-9772-e13faab88f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.376 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9279cef2-bc77-4992-9dfa-3dd26b35f06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.6499] device (tapbb79e0fd-20): carrier: link connected
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[77494bf6-8006-4dc8-9bfb-4e5bed9c73de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG nova.compute.manager [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG nova.compute.manager [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.683 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aeaaea-a343-42cc-bb6a-d462e8092fc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345349, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.701 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24a425be-fe64-45ec-bb93-1020c029ee2b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:b00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547905, 'tstamp': 547905}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345350, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.706 244018 DEBUG nova.compute.manager [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.706 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG nova.compute.manager [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Processing event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:45:09 compute-0 ceph-mon[76335]: pgmap v1934: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 352 op/s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.728 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5369b-beae-47c4-afad-7b4c14408fed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345351, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.762 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b8e8c6-82fa-4d26-a3a5-39c14a2929c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.833 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9222479b-01a9-4fd2-a6eb-c31fbcef1b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.835 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.836 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.836 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 NetworkManager[49836]: <info>  [1772023509.8398] manager: (tapbb79e0fd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Feb 25 12:45:09 compute-0 kernel: tapbb79e0fd-20: entered promiscuous mode
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.843 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:09 compute-0 ovn_controller[147040]: 2026-02-25T12:45:09Z|01132|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.846 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81a2c382-5cdc-45d4-81d9-5c82ffac77ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.849 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
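[annotation] The configuration just dumped is what each per-network haproxy instance runs with inside its ovnmeta namespace: bind the link-local metadata address 169.254.169.254:80, stamp every request with an X-OVN-Network-ID header so the agent can resolve which network it came from, and forward to the /var/lib/neutron/metadata_proxy unix socket. A condensed sketch of rendering such a config from a network ID; render_metadata_proxy_cfg is hypothetical and the template is abridged from the one logged above:

    import textwrap
    from string import Template

    _HAPROXY_CFG = Template(textwrap.dedent("""\
        global
            user    root
            group   root
            maxconn 1024
            pidfile /var/lib/neutron/external/pids/$network_id.pid.haproxy
            daemon

        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID $network_id
        """))

    def render_metadata_proxy_cfg(network_id: str) -> str:
        # Abridged rendering of the config the agent dumped above.
        return _HAPROXY_CFG.substitute(network_id=network_id)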
Feb 25 12:45:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.851 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'env', 'PROCESS_TAG=haproxy-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb79e0fd-2a4d-4a70-9c80-4853297401ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:45:09 compute-0 nova_compute[244014]: 2026-02-25 12:45:09.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.038 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023510.0375986, dd8c9142-2607-4722-90eb-65233f258639 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.038 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Started (Lifecycle Event)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.061 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.067 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023510.0378032, dd8c9142-2607-4722-90eb-65233f258639 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.067 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Paused (Lifecycle Event)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.096 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.121 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] During sync_power_state the instance has a pending task (spawning). Skip.
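[annotation] The Started/Paused pair above is the normal libvirt spawn choreography, not an error: the domain is launched paused and nova resumes it after the network-vif-plugged event (received a few lines earlier) confirms wiring is complete. The sync then declines to act because a task is still in flight. A paraphrase of that guard using the values in the log (DB power_state 0 = NOSTATE, VM power_state 3 = PAUSED); this is a simplification for illustration, not nova's actual handler:

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # "During sync_power_state the instance has a pending task (spawning).
        # Skip." -- never fight an operation that is still in progress.
        if task_state is not None:        # e.g. 'spawning'
            return False
        return db_power_state != vm_power_state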
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 podman[345426]: 2026-02-25 12:45:10.23856777 +0000 UTC m=+0.060575400 container create d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.272 244018 DEBUG nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:10 compute-0 systemd[1]: Started libpod-conmon-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope.
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.299 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.302 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.303 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating image(s)
Feb 25 12:45:10 compute-0 podman[345426]: 2026-02-25 12:45:10.208735118 +0000 UTC m=+0.030742768 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c13903373d387e7781319c35f624b48a598e4d3ad7505fd786416ab8b5fa73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:10 compute-0 podman[345426]: 2026-02-25 12:45:10.319340599 +0000 UTC m=+0.141348239 container init d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:45:10 compute-0 podman[345426]: 2026-02-25 12:45:10.329174387 +0000 UTC m=+0.151182017 container start d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.339 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.346 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.349 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.350 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:10 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : New worker (345466) forked
Feb 25 12:45:10 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : Loading success.
Feb 25 12:45:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 3.6 MiB/s wr, 150 op/s
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.401 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.410 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.413 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[644c9ae0-2e4a-4398-97de-22f729a4a193]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.421 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6df13266-b1 in ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.421 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.424 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6df13266-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6afa467-6145-4591-bb40-832ae70bccf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.425 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.425 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed763b1a-d053-4088-b101-5c857820c277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.426 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.433 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9996edaa-adf1-4125-a8c0-43cf86ebce51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bff900e-da0c-4bb5-b37f-3e8fa403eca3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.475 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e32b61dc-420f-43d3-a886-6576b1075d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 NetworkManager[49836]: <info>  [1772023510.4819] manager: (tap6df13266-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Feb 25 12:45:10 compute-0 systemd-udevd[345329]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7b32d7-093c-49e8-98c8-885b03c71c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.517 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f84d094b-e7e8-42e4-b70c-d202a1fb0f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[93160036-1b67-42f0-8812-2e78353584f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 NetworkManager[49836]: <info>  [1772023510.5471] device (tap6df13266-b0): carrier: link connected
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.552 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed2e96c-ec74-4c1c-a9ed-39b899a7114b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4db3b1-3d8a-4c7c-9327-689f49862e92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345521, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa89da31-31e4-42df-bf29-5efb0ad76078]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:ad86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547995, 'tstamp': 547995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345522, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.599 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5780f304-1e67-4deb-85dc-71a0d2b846f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345523, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
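
The privsep replies above are pyroute2-style netlink messages: each is a dict whose 'attrs' member is a list of [name, value] pairs (RTM_NEWADDR for the link-local address, RTM_NEWLINK for the veth link inside the ovnmeta- namespace). A minimal, illustrative helper for pulling attributes out of such a dump while reading these logs -- not part of the agent code:

    # Illustrative only: reads the [name, value] 'attrs' layout shown in the
    # RTM_NEWLINK reply above. Operates on the logged dict, not live netlink.
    def get_attr(msg, name, default=None):
        """Return the first value stored under `name` in msg['attrs']."""
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    link_msg = {'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'],
                          ['IFLA_OPERSTATE', 'UP'],
                          ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86']]}
    assert get_attr(link_msg, 'IFLA_IFNAME') == 'tap6df13266-b1'
    assert get_attr(link_msg, 'IFLA_MTU', 1500) == 1500  # absent -> default
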
Feb 25 12:45:10 compute-0 kernel: tap8d032336-9e (unregistering): left promiscuous mode
Feb 25 12:45:10 compute-0 NetworkManager[49836]: <info>  [1772023510.6215] device (tap8d032336-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:45:10 compute-0 ovn_controller[147040]: 2026-02-25T12:45:10Z|01133|binding|INFO|Releasing lport 8d032336-9efd-4e76-9498-4dafee40640b from this chassis (sb_readonly=0)
Feb 25 12:45:10 compute-0 ovn_controller[147040]: 2026-02-25T12:45:10Z|01134|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b down in Southbound
Feb 25 12:45:10 compute-0 ovn_controller[147040]: 2026-02-25T12:45:10Z|01135|binding|INFO|Removing iface tap8d032336-9e ovn-installed in OVS
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.632 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d32b0c25-7567-4b9e-9d98-f0c9cc9e0f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2b1e2c-4e31-4d98-85c2-ba1a166b734f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.666 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 NetworkManager[49836]: <info>  [1772023510.6702] manager: (tap6df13266-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Feb 25 12:45:10 compute-0 kernel: tap6df13266-b0: entered promiscuous mode
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.675 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:10 compute-0 ovn_controller[147040]: 2026-02-25T12:45:10Z|01136|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=1)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:10 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 12.122s CPU time.
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.688 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
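
The ENOENT above is expected on a fresh spawn: the agent probes the haproxy pidfile before (re)starting the proxy, and a missing file simply means no proxy is running yet. A sketch of that probe under the semantics the log shows -- the function body is an illustration, not the actual neutron.agent.linux.utils code:

    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(path, converter=None):
        """Return the (converted) contents of path, or None if unreadable."""
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError as e:
            # Matches the DEBUG line above; a missing pidfile is not an error.
            LOG.debug("Unable to access %s; Error: %s", path, e)
            return None
        return converter(value) if converter and value else (value or None)

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              '6df13266-bcfe-4a5b-94c4-81b5f08a6c21'
                              '.pid.haproxy', int)
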
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[320bc6e9-bdd9-43d0-b76d-75093dd06c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.691 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.691 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'env', 'PROCESS_TAG=haproxy-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
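
Per the config just rendered, haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, stamps each request with X-OVN-Network-ID, and relays it to the unix socket /var/lib/neutron/metadata_proxy where the metadata agent listens. A hedged debugging sketch that speaks to such a socket directly from the host (assumes access to the socket; the request path is the usual Nova metadata path and is an assumption here, as is hand-setting the header haproxy would normally inject):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""

        def __init__(self, socket_path):
            super().__init__('localhost')  # host only appears in headers
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        # Normally added by haproxy, per the 'http-request add-header' line:
        'X-OVN-Network-ID': '6df13266-bcfe-4a5b-94c4-81b5f08a6c21',
    })
    print(conn.getresponse().status)
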
Feb 25 12:45:10 compute-0 systemd-machined[210048]: Machine qemu-142-instance-00000071 terminated.
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.734 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.790 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.790 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] cloning images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e@snap to None/874359d8-3251-4416-82dc-f6776853e384_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 12:45:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.802 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
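
The "Matched UPDATE" line above is ovsdbapp's event framework testing a Port_Binding update against a registered row event (an events tuple, a table name, optional column conditions, and a priority). A toy matcher in the same spirit -- illustrative, not the ovsdbapp implementation:

    class RowEvent:
        """Minimal stand-in for an ovsdbapp-style row event subscription."""

        def __init__(self, events, table, conditions=None, priority=20):
            self.events = events          # e.g. ('update',)
            self.table = table            # e.g. 'Port_Binding'
            self.conditions = conditions or []
            self.priority = priority

        def matches(self, event, row):
            if event not in self.events or row.get('_table') != self.table:
                return False
            return all(row.get(col) == val for col, val in self.conditions)

    ev = RowEvent(('update',), 'Port_Binding')
    print(ev.matches('update', {'_table': 'Port_Binding', 'up': [False]}))
    # -> True, analogous to the PortBindingUpdatedEvent match logged above
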
Feb 25 12:45:10 compute-0 NetworkManager[49836]: <info>  [1772023510.8453] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Feb 25 12:45:10 compute-0 nova_compute[244014]: 2026-02-25 12:45:10.913 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.045 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:11 compute-0 podman[345675]: 2026-02-25 12:45:11.057520301 +0000 UTC m=+0.049676413 container create 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:45:11 compute-0 systemd[1]: Started libpod-conmon-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.109 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] flattening vms/874359d8-3251-4416-82dc-f6776853e384_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 12:45:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3db6813ebd7b49f3ead32cb60564520a6c07da0da03dd8df369d119009fd878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:11 compute-0 podman[345675]: 2026-02-25 12:45:11.035056837 +0000 UTC m=+0.027212959 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:45:11 compute-0 podman[345675]: 2026-02-25 12:45:11.138312371 +0000 UTC m=+0.130468483 container init 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:45:11 compute-0 podman[345675]: 2026-02-25 12:45:11.143826886 +0000 UTC m=+0.135982978 container start 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : New worker (345750) forked
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : Loading success.
Feb 25 12:45:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.202 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 unbound from our chassis
Feb 25 12:45:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.208 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd92597b-67bf-4492-9581-a9a7ec80f716, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:45:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00cbe21c-3861-4303-8500-dc730b0c35b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.216 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace which is not needed anymore
Feb 25 12:45:11 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.363 244018 INFO nova.virt.libvirt.driver [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance shutdown successfully after 3 seconds.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.373 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.373 244018 DEBUG nova.objects.instance [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : haproxy version is 2.8.14-c23fe91
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : path to executable is /usr/sbin/haproxy
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : Exiting Master process...
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : Exiting Master process...
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [ALERT]    (344221) : Current worker (344223) exited with code 143 (Terminated)
Feb 25 12:45:11 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : All workers exited. Exiting... (0)
Feb 25 12:45:11 compute-0 systemd[1]: libpod-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope: Deactivated successfully.
Feb 25 12:45:11 compute-0 podman[345776]: 2026-02-25 12:45:11.387948235 +0000 UTC m=+0.077386124 container died 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.391 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.449 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.830 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No event matching network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in dict_keys([('network-vif-plugged', '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with vm_state building and task_state spawning.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.833 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.833 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Processing event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for instance with vm_state building and task_state spawning.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state stopped and task_state None.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state stopped and task_state None.
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.840 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.840 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.842 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
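
The burst of event lines above is Nova's rendezvous between Neutron notifications and a thread blocked in wait_for_instance_event: waiters register under (event_name, tag) keys, and each incoming external event either pops a matching waiter or is logged as unexpected (hence the WARNINGs for the already-stopped instance aee87402). A toy version of that pattern, not Nova's code:

    import threading

    class InstanceEvents:
        """Registry of (event_name, tag) waiters, as in the log above."""

        def __init__(self):
            self._waiters = {}
            self._lock = threading.Lock()

        def prepare(self, name, tag):
            ev = threading.Event()
            with self._lock:
                self._waiters[(name, tag)] = ev
            return ev

        def pop(self, name, tag):
            with self._lock:
                ev = self._waiters.pop((name, tag), None)
            if ev is None:
                # cf. "No event matching ... in dict_keys([...])" above
                print('No event matching %s-%s' % (name, tag))
                return False
            ev.set()
            return True

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged',
                            '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d')
    events.pop('network-vif-plugged',
               '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d')
    assert waiter.wait(timeout=1)  # the blocked spawn thread resumes
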
Feb 25 12:45:11 compute-0 nova_compute[244014]: 2026-02-25 12:45:11.865 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.007 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023512.006942, dd8c9142-2607-4722-90eb-65233f258639 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.008 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Resumed (Lifecycle Event)
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.011 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:45:12 compute-0 ceph-mon[76335]: pgmap v1936: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 3.6 MiB/s wr, 150 op/s
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.023 244018 INFO nova.virt.libvirt.driver [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance spawned successfully.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.023 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.029 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.044 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.045 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.045 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740-userdata-shm.mount: Deactivated successfully.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f152e8c42872b5c20ce576f93cd6d4f4e71b6bf0c572194dbd79d4f61bceb58-merged.mount: Deactivated successfully.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] During sync_power_state the instance has a pending task (spawning). Skip.
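
The "Skip." above shows the guard in Nova's power-state sync: the lifecycle handler compares the database's power state with the hypervisor's (power_state 0 vs 1, logged a few lines earlier), but while a task is still pending (here, spawning) the mismatch is deliberately left alone rather than "corrected" mid-operation. A condensed sketch of that decision, illustrative rather than the nova.compute.manager code:

    NOSTATE, RUNNING = 0, 1  # cf. "DB power_state: 0, VM power_state: 1"

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # Mid-task states are transient; reconciling now would race.
            return 'skip: pending task (%s)' % task_state
        if db_power_state != vm_power_state:
            return 'update DB power_state to %s' % vm_power_state
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))
    # -> skip: pending task (spawning), matching the log line above
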
Feb 25 12:45:12 compute-0 podman[345776]: 2026-02-25 12:45:12.067339928 +0000 UTC m=+0.756777807 container cleanup 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:45:12 compute-0 systemd[1]: libpod-conmon-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope: Deactivated successfully.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.080 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Image rbd:vms/874359d8-3251-4416-82dc-f6776853e384_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.081 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.082 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ensure instance console log exists: /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.082 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.083 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.083 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.085 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start _get_guest_xml network_info=[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:44:49Z,direct_url=<?>,disk_format='raw',id=ce5f0984-576d-4f20-8e86-b4b26cb5bb6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1245136630-shelved',owner='6c0adb05683141e7a0b866f450e410e0',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:44:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.090 244018 WARNING nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.094 244018 INFO nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 13.54 seconds to spawn the instance on the hypervisor.
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.094 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.099 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.100 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.107 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:44:49Z,direct_url=<?>,disk_format='raw',id=ce5f0984-576d-4f20-8e86-b4b26cb5bb6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1245136630-shelved',owner='6c0adb05683141e7a0b866f450e410e0',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:44:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.127 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:12 compute-0 podman[345808]: 2026-02-25 12:45:12.140033219 +0000 UTC m=+0.049153878 container remove 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00b47a9c-e828-40dd-9351-806c6b3d7865]: (4, ('Wed Feb 25 12:45:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740)\n5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740\nWed Feb 25 12:45:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740)\n5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.162 244018 INFO nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 14.66 seconds to build instance.
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.162 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5386f1c5-2374-4a7f-bcb6-d075238f67ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:12 compute-0 kernel: tapcd92597b-60: left promiscuous mode
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.179 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84d72aac-1ab8-4473-9542-02c222d061f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.201 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35d41262-8c05-4644-882b-8b2a9a35bfa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.203 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae1e51d-b686-435a-a6b8-61d521d67653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.217 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db76dc2f-06c0-44f7-a0b2-ebd55327c267]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545849, 'reachable_time': 29158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345829, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.220 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:45:12 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.220 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7ed0f9-ba3a-47a0-a1a6-25535fc0c0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd92597b\x2d67bf\x2d4492\x2d9581\x2da9a7ec80f716.mount: Deactivated successfully.
Feb 25 12:45:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 147 op/s
Feb 25 12:45:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150052644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.728 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.757 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:12 compute-0 nova_compute[244014]: 2026-02-25 12:45:12.764 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3150052644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920090227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.318 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.320 244018 DEBUG nova.virt.libvirt.vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:07Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.320 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.321 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:13 compute-0 nova_compute[244014]: 2026-02-25 12:45:13.322 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:14 compute-0 ceph-mon[76335]: pgmap v1937: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 147 op/s
Feb 25 12:45:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1920090227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.148 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <uuid>874359d8-3251-4416-82dc-f6776853e384</uuid>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <name>instance-00000070</name>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:name>tempest-TestShelveInstance-server-1245136630</nova:name>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:45:12</nova:creationTime>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:user uuid="44c0b78107ea4f7381e82a02c5954e7c">tempest-TestShelveInstance-1925524092-project-member</nova:user>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:project uuid="6c0adb05683141e7a0b866f450e410e0">tempest-TestShelveInstance-1925524092</nova:project>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="ce5f0984-576d-4f20-8e86-b4b26cb5bb6e"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <nova:port uuid="97fb9f99-cb59-4581-8866-375ea3e167d7">
Feb 25 12:45:14 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <system>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="serial">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="uuid">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </system>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <os>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </os>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <features>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </features>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk">
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk.config">
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:14 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:61:b7:f7"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <target dev="tap97fb9f99-cb"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log" append="off"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <video>
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </video>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:45:14 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:45:14 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:45:14 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:45:14 compute-0 nova_compute[244014]: </domain>
Feb 25 12:45:14 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Preparing to wait for external event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.150 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.150 244018 DEBUG nova.virt.libvirt.vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:07Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.151 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.152 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.152 244018 DEBUG os_vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.154 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.156 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.161 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97fb9f99-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.164 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97fb9f99-cb, col_values=(('external_ids', {'iface-id': '97fb9f99-cb59-4581-8866-375ea3e167d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:b7:f7', 'vm-uuid': '874359d8-3251-4416-82dc-f6776853e384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:14 compute-0 NetworkManager[49836]: <info>  [1772023514.1666] manager: (tap97fb9f99-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.175 244018 INFO os_vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.230 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.231 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.231 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No VIF found with MAC fa:16:3e:61:b7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.232 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Using config drive
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.262 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.287 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:14 compute-0 nova_compute[244014]: 2026-02-25 12:45:14.346 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'keypairs' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.139 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating config drive at /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.144 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxybfkycw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.288 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxybfkycw" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.325 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.330 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.496 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.498 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting local config drive /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config because it was imported into RBD.
Feb 25 12:45:15 compute-0 kernel: tap97fb9f99-cb: entered promiscuous mode
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.5498] manager: (tap97fb9f99-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Feb 25 12:45:15 compute-0 ovn_controller[147040]: 2026-02-25T12:45:15Z|01137|binding|INFO|Claiming lport 97fb9f99-cb59-4581-8866-375ea3e167d7 for this chassis.
Feb 25 12:45:15 compute-0 ovn_controller[147040]: 2026-02-25T12:45:15Z|01138|binding|INFO|97fb9f99-cb59-4581-8866-375ea3e167d7: Claiming fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.561 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.563 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 bound to our chassis
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.566 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:45:15 compute-0 ovn_controller[147040]: 2026-02-25T12:45:15Z|01139|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 ovn-installed in OVS
Feb 25 12:45:15 compute-0 ovn_controller[147040]: 2026-02-25T12:45:15Z|01140|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 up in Southbound
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.577 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe257f7-56ae-4ac2-b3a5-fe1dbe305274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.578 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d059ba-e1 in ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.580 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d059ba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.580 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8699fc84-7ff8-4047-b0a5-f8daa70c2a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.582 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4219a5c6-2df7-419b-a4ca-fc0962776bc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.582 244018 INFO nova.compute.manager [None req-87463989-7183-4875-ab5c-e24a53d41990 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 systemd-machined[210048]: New machine qemu-144-instance-00000070.
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.596 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc7b92f-0bb9-401c-883f-24c0eac83778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000070.
Feb 25 12:45:15 compute-0 systemd-udevd[345972]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4ca83c-f845-4bd6-a411-87f7a54b7f5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.6243] device (tap97fb9f99-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.6264] device (tap97fb9f99-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.639 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[454abaa7-4be2-4efa-9cbb-634f518e5c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.6484] manager: (tape4d059ba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Feb 25 12:45:15 compute-0 systemd-udevd[345976]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb443366-344c-481c-b377-9746b46130f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.672 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a90dfc3-232d-4aa6-8c72-3c2d51ec3d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1252ef-ce7d-4972-9070-efbc3df9748e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.6928] device (tape4d059ba-e0): carrier: link connected
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.696 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f17ba4e0-0d1d-464e-ad30-db2459229259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[202e2010-7172-42e5-a1a7-7f477dbc750e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548510, 'reachable_time': 31711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346005, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb3d20-3aec-434b-92b9-202f8ad3616e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:42a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548510, 'tstamp': 548510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346006, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.739 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f95f3d1-4253-4e22-9159-afde24eb8b3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548510, 'reachable_time': 31711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346007, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.770 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5e6326-02da-4f7e-b709-5a66f3dc3ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae288f-895c-4c36-ada9-0ecd862911bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.833 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d059ba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:15 compute-0 kernel: tape4d059ba-e0: entered promiscuous mode
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 NetworkManager[49836]: <info>  [1772023515.8404] manager: (tape4d059ba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.844 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d059ba-e0, col_values=(('external_ids', {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 ovn_controller[147040]: 2026-02-25T12:45:15Z|01141|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.859 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.860 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29f248-a410-4a0d-92cb-0476ba5a4afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.861 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:45:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.862 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'env', 'PROCESS_TAG=haproxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.879 244018 DEBUG nova.compute.manager [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.880 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.880 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.881 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.882 244018 DEBUG nova.compute.manager [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Processing event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.908 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.933 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.934 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.934 244018 DEBUG nova.network.neutron [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.935 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'info_cache' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.961 244018 DEBUG nova.compute.manager [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG nova.compute.manager [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:15 compute-0 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:16 compute-0 ceph-mon[76335]: pgmap v1938: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Feb 25 12:45:16 compute-0 podman[346041]: 2026-02-25 12:45:16.236449648 +0000 UTC m=+0.066154467 container create f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:45:16 compute-0 systemd[1]: Started libpod-conmon-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope.
Feb 25 12:45:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff6ea0a8a911ca82af7fb30e8506525301753b046b20808291f024818c24148/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:16 compute-0 podman[346041]: 2026-02-25 12:45:16.295379212 +0000 UTC m=+0.125084021 container init f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:45:16 compute-0 podman[346041]: 2026-02-25 12:45:16.299533719 +0000 UTC m=+0.129238498 container start f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:45:16 compute-0 podman[346041]: 2026-02-25 12:45:16.204886178 +0000 UTC m=+0.034591017 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:45:16 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : New worker (346106) forked
Feb 25 12:45:16 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : Loading success.
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.346 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3456142, 874359d8-3251-4416-82dc-f6776853e384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.348 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Started (Lifecycle Event)
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.351 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.355 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance spawned successfully.
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.396 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.397 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3458734, 874359d8-3251-4416-82dc-f6776853e384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.397 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Paused (Lifecycle Event)
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.421 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3506489, 874359d8-3251-4416-82dc-f6776853e384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.421 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Resumed (Lifecycle Event)
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.447 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.451 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:16 compute-0 nova_compute[244014]: 2026-02-25 12:45:16.469 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:45:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb 25 12:45:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb 25 12:45:17 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.218 244018 DEBUG nova.network.neutron [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.239 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.275 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.276 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.294 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.307 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.308 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.309 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.309 244018 DEBUG os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.313 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d032336-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.321 244018 INFO os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.328 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start _get_guest_xml network_info=[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.333 244018 WARNING nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.341 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.342 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.345 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.349 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'vcpu_model' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.367 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.397 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.400 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.401 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.449 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.469 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485664098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.905 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.948 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.989 244018 DEBUG nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.989 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:17 compute-0 nova_compute[244014]: 2026-02-25 12:45:17.991 244018 WARNING nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state None.
Feb 25 12:45:18 compute-0 ceph-mon[76335]: pgmap v1939: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Feb 25 12:45:18 compute-0 ceph-mon[76335]: osdmap e277: 3 total, 3 up, 3 in
Feb 25 12:45:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/485664098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 12:45:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8230030' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.569 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.570 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.570 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.571 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.572 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.602 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <uuid>aee87402-4b34-4083-888b-bb653e2beaa9</uuid>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <name>instance-00000071</name>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-204528973</nova:name>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:45:17</nova:creationTime>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <nova:port uuid="8d032336-9efd-4e76-9498-4dafee40640b">
Feb 25 12:45:18 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <system>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="serial">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="uuid">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </system>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <os>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </os>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <features>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </features>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk">
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk.config">
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:18 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b3:5f:c9"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <target dev="tap8d032336-9e"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log" append="off"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <video>
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </video>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:45:18 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:45:18 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:45:18 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:45:18 compute-0 nova_compute[244014]: </domain>
Feb 25 12:45:18 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.602 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.604 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.604 244018 DEBUG os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.607 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d032336-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.608 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d032336-9e, col_values=(('external_ids', {'iface-id': '8d032336-9efd-4e76-9498-4dafee40640b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5f:c9', 'vm-uuid': 'aee87402-4b34-4083-888b-bb653e2beaa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.6098] manager: (tap8d032336-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.614 244018 INFO os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')
Feb 25 12:45:18 compute-0 kernel: tap8d032336-9e: entered promiscuous mode
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.7871] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Feb 25 12:45:18 compute-0 ovn_controller[147040]: 2026-02-25T12:45:18Z|01142|binding|INFO|Claiming lport 8d032336-9efd-4e76-9498-4dafee40640b for this chassis.
Feb 25 12:45:18 compute-0 ovn_controller[147040]: 2026-02-25T12:45:18Z|01143|binding|INFO|8d032336-9efd-4e76-9498-4dafee40640b: Claiming fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.797 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:18 compute-0 ovn_controller[147040]: 2026-02-25T12:45:18Z|01144|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b ovn-installed in OVS
Feb 25 12:45:18 compute-0 ovn_controller[147040]: 2026-02-25T12:45:18Z|01145|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b up in Southbound
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.800 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 bound to our chassis
Feb 25 12:45:18 compute-0 nova_compute[244014]: 2026-02-25 12:45:18.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.805 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.817 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa02bdd-fc37-46b6-a791-f2a33b61c0d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.818 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd92597b-61 in ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:45:18 compute-0 systemd-machined[210048]: New machine qemu-145-instance-00000071.
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.821 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd92597b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.821 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04939727-bc66-410e-9d73-5f86d7fdbc44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f08c990f-8ec3-48c1-a2b6-486321f41c17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000071.
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.835 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a98490f7-2dad-4b41-bcd7-bbc1824987cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 systemd-udevd[346194]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe663c8-50f1-4dff-9451-df64c8f4ae2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.8710] device (tap8d032336-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.8720] device (tap8d032336-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.894 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ad653b2a-516d-4f72-a4a0-700463db0e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[317c69c5-e6b9-497c-9097-24ccf3cfd668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.8995] manager: (tapcd92597b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.929 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ee40b0d3-f2c4-48ec-93e9-6e4abf255df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.932 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[012c25d3-2c85-4c2a-8d3c-3bb3c964ee59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 NetworkManager[49836]: <info>  [1772023518.9649] device (tapcd92597b-60): carrier: link connected
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[031d362a-fb81-4cb5-bc7e-e525301c6c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccecbbf9-9d08-4061-b9df-33d0f6d6a4d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548837, 'reachable_time': 33065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346225, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46c3003e-fb30-4f2d-a748-700edfa6bbdf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:ace3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548837, 'tstamp': 548837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346226, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a50355e-e6fd-4aaa-bfac-04c7f82d08a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548837, 'reachable_time': 33065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346227, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
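The two privsep replies above are decoded netlink messages in pyroute2's dict form: an RTM_NEWADDR for the link-local fe80::f816:3eff:fe89:ace3 on ifindex 2, and an RTM_NEWLINK dump of the veth tapcd92597b-61 inside the ovnmeta- namespace (note the 'target' field naming the namespace). The agent never touches netlink itself; oslo.privsep runs the query as root and ships these dicts back over its channel. A minimal sketch of the same queries done directly with pyroute2, assuming the namespace from the log exists on the local host:

    # Minimal sketch, assuming the ovnmeta- namespace from the log exists
    # locally and we may open it directly (the agent does this via privsep).
    from pyroute2 import NetNS

    with NetNS('ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716') as ns:
        # RTM_NEWLINK dump: one message per interface, attributes in 'attrs'
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        # RTM_NEWADDR dump for ifindex 2, matching the reply above
        for addr in ns.get_addr(index=2):
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])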
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.046 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7ba602-9c1f-46c1-8205-ed5fec4677aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/8230030' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bde864d-a850-48ae-bc9b-b99125dc5187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.103 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.104 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.104 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd92597b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:19 compute-0 kernel: tapcd92597b-60: entered promiscuous mode
Feb 25 12:45:19 compute-0 NetworkManager[49836]: <info>  [1772023519.1066] manager: (tapcd92597b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd92597b-60, col_values=(('external_ids', {'iface-id': '37c4424f-372d-4923-b009-7893a123c4d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
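The three ovsdbapp transactions above are the agent re-wiring the metadata tap: drop tapcd92597b-60 from br-ex if it is there (a no-op here, hence "Transaction caused no change"), add it to br-int, then set external_ids:iface-id so ovn-controller can bind the port. A sketch of the same sequence with ovsdbapp's Open vSwitch schema API, assuming `api` is an already-connected OvsdbIdl instance (connection setup is not in the log); the agent issues one command per transaction, as the "txn n=1" entries show, but they are batched here for brevity:

    # Sketch only: assumes `api` is an
    # ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl connected to the
    # local ovsdb-server (setup omitted).
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapcd92597b-60', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapcd92597b-60', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapcd92597b-60',
            ('external_ids',
             {'iface-id': '37c4424f-372d-4923-b009-7893a123c4d8'})))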
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:19 compute-0 ovn_controller[147040]: 2026-02-25T12:45:19Z|01146|binding|INFO|Releasing lport 37c4424f-372d-4923-b009-7893a123c4d8 from this chassis (sb_readonly=0)
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.116 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08b82419-eafc-481f-92e3-1e6fff0f9888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.117 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:45:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.118 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'env', 'PROCESS_TAG=haproxy-cd92597b-67bf-4492-9581-a9a7ec80f716', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd92597b-67bf-4492-9581-a9a7ec80f716.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
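The haproxy_cfg dump above is the per-network metadata proxy configuration: inside the ovnmeta- namespace haproxy binds the well-known 169.254.169.254:80, forwards every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy (an absolute path on a server line is a UNIX socket address to haproxy), and stamps X-OVN-Network-ID so the agent knows which network the request came from; the rootwrap command that follows daemonizes haproxy in that namespace. A sketch of the templating step, where render_config is a hypothetical stand-in for the real create_config_file in neutron.agent.ovn.metadata.driver:

    # Illustrative sketch; `render_config` is a hypothetical stand-in for
    # the template handling in neutron's create_config_file.
    from string import Template

    _TEMPLATE = Template("""\
    listen listener
        bind $bind_ip:$bind_port
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_config(network_id,
                      bind_ip='169.254.169.254', bind_port=80,
                      socket_path='/var/lib/neutron/metadata_proxy'):
        return _TEMPLATE.substitute(bind_ip=bind_ip, bind_port=bind_port,
                                    socket_path=socket_path,
                                    network_id=network_id)

    print(render_config('cd92597b-67bf-4492-9581-a9a7ec80f716'))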
Feb 25 12:45:19 compute-0 podman[346259]: 2026-02-25 12:45:19.443661625 +0000 UTC m=+0.026819118 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:45:19 compute-0 podman[346259]: 2026-02-25 12:45:19.585378904 +0000 UTC m=+0.168536407 container create eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:45:19 compute-0 systemd[1]: Started libpod-conmon-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope.
Feb 25 12:45:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abce4945cdbbec3adb7c76e2b958b55b61f3b27db7344184a7c0c756e9956d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.675 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for aee87402-4b34-4083-888b-bb653e2beaa9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.676 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023519.675293, aee87402-4b34-4083-888b-bb653e2beaa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.676 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Resumed (Lifecycle Event)
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.679 244018 DEBUG nova.compute.manager [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.682 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance rebooted successfully.
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.682 244018 DEBUG nova.compute.manager [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.723 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.726 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:19 compute-0 podman[346259]: 2026-02-25 12:45:19.761092753 +0000 UTC m=+0.344250226 container init eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:45:19 compute-0 podman[346259]: 2026-02-25 12:45:19.765416365 +0000 UTC m=+0.348573838 container start eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.770 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (powering-on). Skip.
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.771 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023519.6784546, aee87402-4b34-4083-888b-bb653e2beaa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.771 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Started (Lifecycle Event)
Feb 25 12:45:19 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : New worker (346324) forked
Feb 25 12:45:19 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : Loading success.
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.802 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:19 compute-0 nova_compute[244014]: 2026-02-25 12:45:19.809 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
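The two "Synchronizing instance power state" lines above show the guard nova applies after each libvirt lifecycle event: when "Resumed" arrives, the DB still reads power_state 4 (SHUTDOWN in nova.compute.power_state) with task_state powering-on, so the sync is skipped because another operation owns the instance; by the time "Started" arrives the reboot has finished and DB and hypervisor agree on power_state 1 (RUNNING). A condensed sketch of that decision, hedged: the real logic in ComputeManager._sync_instance_power_state handles many more vm_state/power_state combinations:

    # Condensed sketch of the decision visible in the log above.
    RUNNING, SHUTDOWN = 1, 4   # values from nova.compute.power_state

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. powering-on: a pending task owns the instance, skip
            return 'skip: pending task (%s)' % task_state
        if db_power_state == vm_power_state:
            return 'in sync'
        return 'update DB %s -> %s' % (db_power_state, vm_power_state)

    print(sync_power_state(SHUTDOWN, RUNNING, 'powering-on'))  # first event
    print(sync_power_state(RUNNING, RUNNING, None))            # second event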
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.087 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.087 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 WARNING nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.090 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.090 244018 WARNING nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.
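The acquire/pop/release/warn sequence repeated twice above is nova's external-event plumbing: neutron sends network-vif-plugged-<port_id>, the manager takes the per-instance "-events" lock, looks for a waiter registered under that exact event name, and, finding none (the reboot already completed and nothing is blocked in wait_for_instance_event), logs the event as unexpected and drops it. A minimal sketch of that registry shape, using threading primitives as a stand-in for nova's eventlet-based InstanceEvents class:

    # Minimal sketch of the pop-or-warn pattern seen in the log; nova's
    # InstanceEvents keys waiters the same way but uses eventlet events.
    import threading

    _lock = threading.Lock()   # stands in for the "<uuid>-events" lock
    _waiters = {}              # {instance_uuid: {event_name: threading.Event}}

    def pop_instance_event(instance_uuid, event_name):
        with _lock:
            waiter = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            print('WARNING: received unexpected event %s' % event_name)
        else:
            waiter.set()       # wakes the thread in wait_for_instance_event

    pop_instance_event(
        'aee87402-4b34-4083-888b-bb653e2beaa9',
        'network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b')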
Feb 25 12:45:20 compute-0 nova_compute[244014]: 2026-02-25 12:45:20.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:20 compute-0 ceph-mon[76335]: pgmap v1941: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 12:45:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 12:45:22 compute-0 ceph-mon[76335]: pgmap v1942: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 12:45:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Feb 25 12:45:22 compute-0 nova_compute[244014]: 2026-02-25 12:45:22.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:22 compute-0 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:22 compute-0 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:45:22 compute-0 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:45:23 compute-0 nova_compute[244014]: 2026-02-25 12:45:23.066 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:23 compute-0 nova_compute[244014]: 2026-02-25 12:45:23.067 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:23 compute-0 nova_compute[244014]: 2026-02-25 12:45:23.068 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:45:23 compute-0 nova_compute[244014]: 2026-02-25 12:45:23.068 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb 25 12:45:23 compute-0 ceph-mon[76335]: pgmap v1943: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Feb 25 12:45:23 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 12:45:23 compute-0 nova_compute[244014]: 2026-02-25 12:45:23.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb 25 12:45:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb 25 12:45:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 183 KiB/s wr, 347 op/s
Feb 25 12:45:24 compute-0 ceph-mon[76335]: osdmap e278: 3 total, 3 up, 3 in
Feb 25 12:45:24 compute-0 nova_compute[244014]: 2026-02-25 12:45:24.799 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:24 compute-0 nova_compute[244014]: 2026-02-25 12:45:24.817 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:24 compute-0 nova_compute[244014]: 2026-02-25 12:45:24.818 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
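The _heal_instance_info_cache pass above rewrites the cached network_info for instance 874359d8: one OVN-bound OVS port on br-int, fixed IP 10.100.0.11 with floating IP 192.168.122.235, MTU 1442 on a tunneled network. A short sketch of walking that structure to recover the addresses, assuming nw_info is the already-parsed list shown in the log entry:

    # Sketch: extract fixed and floating IPs from a nova network_info
    # structure shaped like the one logged above (assumed parsed from JSON).
    def list_addresses(nw_info):
        for vif in nw_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address'], 'fixed'
                    for fip in ip.get('floating_ips', []):
                        yield vif['id'], fip['address'], 'floating'

    # for vif_id, addr, kind in list_addresses(nw_info):
    #     print(vif_id, addr, kind)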
Feb 25 12:45:24 compute-0 ovn_controller[147040]: 2026-02-25T12:45:24Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:4a:25 10.100.0.14
Feb 25 12:45:24 compute-0 ovn_controller[147040]: 2026-02-25T12:45:24Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:4a:25 10.100.0.14
Feb 25 12:45:25 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:25 compute-0 podman[346333]: 2026-02-25 12:45:25.773970004 +0000 UTC m=+0.106339432 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 12:45:25 compute-0 ceph-mon[76335]: pgmap v1945: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 183 KiB/s wr, 347 op/s
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:45:25 compute-0 nova_compute[244014]: 2026-02-25 12:45:25.901 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 139 KiB/s wr, 285 op/s
Feb 25 12:45:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352791063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.441 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
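The resource audit sizes its RBD-backed disk pool by shelling out to ceph df --format=json; the audit lines from ceph-mon show the command being dispatched as client.openstack, and the 0.540s in the CMD line is the subprocess round trip. A sketch of the same call with oslo.concurrency, assuming the ceph CLI, client id and conf path from the log (nova does the equivalent inside its RBD storage utilities):

    # Sketch, assuming the ceph CLI and the client id/conf path from the log.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # cluster-wide totals are in bytes under 'stats'; per-pool under 'pools'
    total_gb = stats['stats']['total_bytes'] / 1024 ** 3
    avail_gb = stats['stats']['total_avail_bytes'] / 1024 ** 3
    print('%.1f GiB free of %.1f GiB' % (avail_gb, total_gb))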
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.550 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.551 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.557 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.557 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.563 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.564 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.814 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3008MB free_disk=59.8733590496704GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.884 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance aee87402-4b34-4083-888b-bb653e2beaa9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd8c9142-2607-4722-90eb-65233f258639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 874359d8-3251-4416-82dc-f6776853e384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.906 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.922 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.922 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
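The inventory pushed to placement above is what the scheduler actually works from once ratios apply: usable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU (8 * 4.0), 7167 MB of RAM and 52.2 GB of disk. That is consistent with the earlier "free_vcpus=5" hypervisor view, which is the pre-ratio count of 8 physical vCPUs minus 3 allocated. A one-screen sketch of the arithmetic on the logged numbers:

    # Sketch of how placement-side capacity follows from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2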
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.935 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:45:26 compute-0 nova_compute[244014]: 2026-02-25 12:45:26.961 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.046 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3352791063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:27 compute-0 podman[346394]: 2026-02-25 12:45:27.784022876 +0000 UTC m=+0.110353925 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:45:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268235967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.924 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.932 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.950 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.984 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:45:27 compute-0 nova_compute[244014]: 2026-02-25 12:45:27.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:28 compute-0 ceph-mon[76335]: pgmap v1946: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 139 KiB/s wr, 285 op/s
Feb 25 12:45:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2268235967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 12:45:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:28 compute-0 nova_compute[244014]: 2026-02-25 12:45:28.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:29 compute-0 ceph-mon[76335]: pgmap v1947: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 12:45:29 compute-0 nova_compute[244014]: 2026-02-25 12:45:29.986 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:29 compute-0 nova_compute[244014]: 2026-02-25 12:45:29.987 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:29 compute-0 nova_compute[244014]: 2026-02-25 12:45:29.988 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:29 compute-0 nova_compute[244014]: 2026-02-25 12:45:29.988 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:30 compute-0 ovn_controller[147040]: 2026-02-25T12:45:30Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 12:45:30 compute-0 nova_compute[244014]: 2026-02-25 12:45:30.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 12:45:30 compute-0 nova_compute[244014]: 2026-02-25 12:45:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:45:30
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.log', 'images', '.mgr']
Feb 25 12:45:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:45:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:45:32 compute-0 ceph-mon[76335]: pgmap v1948: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 12:45:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Feb 25 12:45:32 compute-0 ovn_controller[147040]: 2026-02-25T12:45:32Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 12:45:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:33 compute-0 nova_compute[244014]: 2026-02-25 12:45:33.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:34 compute-0 ceph-mon[76335]: pgmap v1949: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Feb 25 12:45:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 166 op/s
Feb 25 12:45:34 compute-0 nova_compute[244014]: 2026-02-25 12:45:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:34 compute-0 nova_compute[244014]: 2026-02-25 12:45:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:45:35 compute-0 nova_compute[244014]: 2026-02-25 12:45:35.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:35 compute-0 ceph-mon[76335]: pgmap v1950: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 166 op/s
Feb 25 12:45:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.796 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.797 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.821 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.919 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.919 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.927 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:45:36 compute-0 nova_compute[244014]: 2026-02-25 12:45:36.927 244018 INFO nova.compute.claims [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.074 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574570873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.632 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.639 244018 DEBUG nova.compute.provider_tree [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.653 244018 DEBUG nova.scheduler.client.report [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.672 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.672 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.720 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.721 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.748 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.775 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.864 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.867 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.867 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating image(s)
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.941 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:37 compute-0 nova_compute[244014]: 2026-02-25 12:45:37.976 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.011 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:38 compute-0 ceph-mon[76335]: pgmap v1951: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 12:45:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3574570873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.017 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.060 244018 DEBUG nova.policy [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.064 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.070 244018 INFO nova.compute.manager [None req-0938bf09-ee14-4e7d-8e0e-d400621b5511 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.079 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.106 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.107 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.108 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.108 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.203 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.208 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 25 12:45:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:38 compute-0 nova_compute[244014]: 2026-02-25 12:45:38.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.796 244018 DEBUG nova.compute.manager [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG nova.compute.manager [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.798 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.873 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.874 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.875 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.875 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.876 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.880 244018 INFO nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Terminating instance
Feb 25 12:45:39 compute-0 nova_compute[244014]: 2026-02-25 12:45:39.882 244018 DEBUG nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:45:40 compute-0 nova_compute[244014]: 2026-02-25 12:45:40.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 58 KiB/s wr, 82 op/s
Feb 25 12:45:40 compute-0 ceph-mon[76335]: pgmap v1952: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 25 12:45:40 compute-0 nova_compute[244014]: 2026-02-25 12:45:40.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:40.998 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.001 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.118 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully created port: 443e71ca-9c38-40e8-b799-1026ef47c35d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:45:41 compute-0 kernel: tap8d032336-9e (unregistering): left promiscuous mode
Feb 25 12:45:41 compute-0 NetworkManager[49836]: <info>  [1772023541.7201] device (tap8d032336-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:41 compute-0 ovn_controller[147040]: 2026-02-25T12:45:41Z|01147|binding|INFO|Releasing lport 8d032336-9efd-4e76-9498-4dafee40640b from this chassis (sb_readonly=0)
Feb 25 12:45:41 compute-0 ovn_controller[147040]: 2026-02-25T12:45:41Z|01148|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b down in Southbound
Feb 25 12:45:41 compute-0 ovn_controller[147040]: 2026-02-25T12:45:41Z|01149|binding|INFO|Removing iface tap8d032336-9e ovn-installed in OVS
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.740 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 unbound from our chassis
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.743 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd92597b-67bf-4492-9581-a9a7ec80f716, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c367060f-2299-4382-89e9-c3813fe22683]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.745 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace which is not needed anymore
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:41 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 25 12:45:41 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Consumed 12.583s CPU time.
Feb 25 12:45:41 compute-0 systemd-machined[210048]: Machine qemu-145-instance-00000071 terminated.
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.952 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.952 244018 DEBUG nova.objects.instance [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.971 244018 DEBUG nova.virt.libvirt.vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.972 244018 DEBUG nova.network.os_vif_util [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.973 244018 DEBUG nova.network.os_vif_util [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.974 244018 DEBUG os_vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.977 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d032336-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:41 compute-0 nova_compute[244014]: 2026-02-25 12:45:41.991 244018 INFO os_vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')
Feb 25 12:45:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:42.002 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.071 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.072 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.072 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:45:42 compute-0 ceph-mon[76335]: pgmap v1953: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 58 KiB/s wr, 82 op/s
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.087 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : haproxy version is 2.8.14-c23fe91
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : path to executable is /usr/sbin/haproxy
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : Exiting Master process...
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : Exiting Master process...
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [ALERT]    (346322) : Current worker (346324) exited with code 143 (Terminated)
Feb 25 12:45:42 compute-0 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : All workers exited. Exiting... (0)
Feb 25 12:45:42 compute-0 systemd[1]: libpod-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope: Deactivated successfully.
Feb 25 12:45:42 compute-0 podman[346563]: 2026-02-25 12:45:42.107288042 +0000 UTC m=+0.225035706 container died eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.236 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Feb 25 12:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75-userdata-shm.mount: Deactivated successfully.
Feb 25 12:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4abce4945cdbbec3adb7c76e2b958b55b61f3b27db7344184a7c0c756e9956d5-merged.mount: Deactivated successfully.
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.530 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully created port: 39f87175-7c4f-4092-bce6-29b413c731cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024997657061228648 of space, bias 1.0, pg target 0.7499297118368594 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939818704253244 of space, bias 1.0, pg target 0.7481945611275973 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4167562580251474e-06 of space, bias 4.0, pg target 0.001700107509630177 quantized to 16 (current 16)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:45:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:45:42 compute-0 podman[346563]: 2026-02-25 12:45:42.57684493 +0000 UTC m=+0.694592594 container cleanup eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.583 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.584 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:42 compute-0 systemd[1]: libpod-conmon-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope: Deactivated successfully.
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.606 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.748 244018 DEBUG nova.objects.instance [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.763 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.764 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Ensure instance console log exists: /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.765 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.765 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:42 compute-0 nova_compute[244014]: 2026-02-25 12:45:42.766 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:43 compute-0 podman[346677]: 2026-02-25 12:45:43.269169379 +0000 UTC m=+0.660763398 container remove eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.275 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4987efb3-54c3-489b-aa6b-d516452407b0]: (4, ('Wed Feb 25 12:45:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75)\neb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75\nWed Feb 25 12:45:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75)\neb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0412724-54b0-425c-9af1-816cef80ce98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.279 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:43 compute-0 nova_compute[244014]: 2026-02-25 12:45:43.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:43 compute-0 kernel: tapcd92597b-60: left promiscuous mode
Feb 25 12:45:43 compute-0 nova_compute[244014]: 2026-02-25 12:45:43.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.297 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c3dd86-f151-4ea7-85b0-5a67000b41e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdf65e2-fd97-40d2-8435-1e5bd280f1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4828b62b-cd6c-48fa-8658-b28784e550df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[89c70a0a-3cb5-4b43-ad55-abf877cb40e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548829, 'reachable_time': 42288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346711, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.340 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:45:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.340 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ac46d547-346f-40bb-89b3-d47508e58431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd92597b\x2d67bf\x2d4492\x2d9581\x2da9a7ec80f716.mount: Deactivated successfully.
Feb 25 12:45:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:43 compute-0 nova_compute[244014]: 2026-02-25 12:45:43.884 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully updated port: 443e71ca-9c38-40e8-b799-1026ef47c35d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.008 244018 DEBUG nova.compute.manager [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.008 244018 DEBUG nova.compute.manager [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.009 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.009 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.010 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:44 compute-0 ceph-mon[76335]: pgmap v1954: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.243 244018 DEBUG nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.244 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.244 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.245 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.245 244018 DEBUG nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.246 244018 WARNING nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state deleting.
Feb 25 12:45:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Feb 25 12:45:44 compute-0 nova_compute[244014]: 2026-02-25 12:45:44.536 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.272 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.289 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.367 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully updated port: 39f87175-7c4f-4092-bce6-29b413c731cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.403 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.404 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.404 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:45:45 compute-0 nova_compute[244014]: 2026-02-25 12:45:45.571 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:45:46 compute-0 ceph-mon[76335]: pgmap v1955: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.097 244018 DEBUG nova.compute.manager [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.098 244018 DEBUG nova.compute.manager [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-39f87175-7c4f-4092-bce6-29b413c731cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.099 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.139 244018 INFO nova.virt.libvirt.driver [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deleting instance files /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9_del
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.140 244018 INFO nova.virt.libvirt.driver [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deletion of /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9_del complete
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.210 244018 INFO nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 6.33 seconds to destroy the instance on the hypervisor.
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.211 244018 DEBUG oslo.service.loopingcall [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.212 244018 DEBUG nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.212 244018 DEBUG nova.network.neutron [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:45:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 21 op/s
Feb 25 12:45:46 compute-0 nova_compute[244014]: 2026-02-25 12:45:46.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.178 244018 DEBUG nova.network.neutron [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.204 244018 INFO nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 0.99 seconds to deallocate network for instance.
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.265 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.266 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.391 244018 DEBUG oslo_concurrency.processutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:47 compute-0 ceph-mon[76335]: pgmap v1956: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 21 op/s
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.772 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.800 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.800 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance network_info: |[{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.802 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.802 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 39f87175-7c4f-4092-bce6-29b413c731cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.809 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start _get_guest_xml network_info=[{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.815 244018 WARNING nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.821 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.822 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.826 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.827 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.828 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.828 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.829 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.830 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.830 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.831 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.831 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.832 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.832 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.833 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.833 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.834 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.839 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:45:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:45:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:45:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:45:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:45:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897932473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.962 244018 DEBUG oslo_concurrency.processutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.970 244018 DEBUG nova.compute.provider_tree [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:45:47 compute-0 nova_compute[244014]: 2026-02-25 12:45:47.996 244018 DEBUG nova.scheduler.client.report [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.033 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.076 244018 INFO nova.scheduler.client.report [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance aee87402-4b34-4083-888b-bb653e2beaa9
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.164 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.237 244018 DEBUG nova.compute.manager [req-ef5ec5e0-88e6-44f4-9d11-8a9da9b1b7ae req-2e8eb9ea-d7b5-4843-acb0-6702bdb62b1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-deleted-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168713874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.378 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.414 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.418 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:45:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516825819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3897932473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1168713874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.968 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.971 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.971 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.973 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.975 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.976 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.977 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
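The "Converting VIF ..." / "Converted object VIFOpenVSwitch(...)" pairs above are nova_to_osvif_vif() translating nova's own VIF dict into os-vif versioned objects before handing them to the 'ovs' plugin. A minimal standalone sketch of the resulting object, assuming the os_vif.objects API and reusing the field values from the converted object logged above:

    import os_vif
    from os_vif.objects import network, vif

    os_vif.initialize()  # loads the os-vif plugin extensions (needed later for plug/unplug)

    net = network.Network(id="6df13266-bcfe-4a5b-94c4-81b5f08a6c21",
                          bridge="br-int", mtu=1442)
    v = vif.VIFOpenVSwitch(
        id="39f87175-7c4f-4092-bce6-29b413c731cd",
        address="fa:16:3e:b2:de:d3",
        bridge_name="br-int",
        has_traffic_filtering=True,   # OVN applies security groups in the datapath
        plugin="ovs",
        vif_name="tap39f87175-7c",
        preserve_on_delete=False,
        active=False,                 # flips once the port binding comes up
        network=net,
    )

has_traffic_filtering=True is what lets nova skip the legacy hybrid (veth plus linuxbridge) plug and attach the tap device straight to br-int.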
Feb 25 12:45:48 compute-0 nova_compute[244014]: 2026-02-25 12:45:48.979 244018 DEBUG nova.objects.instance [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.011 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <uuid>2b9ea98f-9b1a-495b-8669-e79da967b0ab</uuid>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <name>instance-00000073</name>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1283323036</nova:name>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:45:47</nova:creationTime>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:port uuid="443e71ca-9c38-40e8-b799-1026ef47c35d">
Feb 25 12:45:49 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <nova:port uuid="39f87175-7c4f-4092-bce6-29b413c731cd">
Feb 25 12:45:49 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb2:ded3" ipVersion="6"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb2:ded3" ipVersion="6"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <system>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="serial">2b9ea98f-9b1a-495b-8669-e79da967b0ab</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="uuid">2b9ea98f-9b1a-495b-8669-e79da967b0ab</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </system>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <os>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </os>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <features>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </features>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk">
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config">
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </source>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:45:49 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:98:e2:13"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <target dev="tap443e71ca-9c"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:b2:de:d3"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <target dev="tap39f87175-7c"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/console.log" append="off"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <video>
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </video>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:45:49 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:45:49 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:45:49 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:45:49 compute-0 nova_compute[244014]: </domain>
Feb 25 12:45:49 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
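"End _get_guest_xml" marks the fully rendered libvirt domain definition. The nova-specific block lives inside the metadata element under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace; a small stdlib sketch that pulls it back out, assuming the XML above has been saved locally (e.g. via virsh dumpxml instance-00000073 > instance-00000073.xml):

    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("instance-00000073.xml").getroot()   # hypothetical local copy
    print("uuid:", root.findtext("uuid"))
    print("memory (KiB):", root.findtext("memory"))

    meta = root.find("metadata/nova:instance", NS)
    print("display name:", meta.findtext("nova:name", namespaces=NS))
    flavor = meta.find("nova:flavor", NS)
    print("flavor:", flavor.get("name"),
          "memory (MiB):", flavor.findtext("nova:memory", namespaces=NS))
    for port in meta.iterfind("nova:ports/nova:port", NS):
        for ip in port.iterfind("nova:ip", NS):
            print("port", port.get("uuid"), "ip", ip.get("address"))

Note the top-level memory element is in KiB: 131072 KiB is the flavor's 128 MiB.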
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.012 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Preparing to wait for external event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.013 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.013 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Preparing to wait for external event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.015 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.015 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
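The Acquiring/acquired/released triplets around each prepare_for_instance_event call are oslo.concurrency named locks serializing access to the per-instance event table. The same pattern with the public lockutils API (a sketch; the lock name is copied from the log):

    from oslo_concurrency import lockutils

    instance_uuid = "2b9ea98f-9b1a-495b-8669-e79da967b0ab"

    # Context-manager form: the body runs under the named in-process lock.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # create-or-get the expected network-vif-plugged event here

    # Decorator form, as nova.compute.manager uses internally.
    @lockutils.synchronized(f"{instance_uuid}-events")
    def _create_or_get_event():
        pass

Nova registers both network-vif-plugged events before calling plug(), as the ordering above shows, so neutron's callback can never race the waiter.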
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.016 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.017 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.018 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.018 244018 DEBUG os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.020 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.021 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.025 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap443e71ca-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.026 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap443e71ca-9c, col_values=(('external_ids', {'iface-id': '443e71ca-9c38-40e8-b799-1026ef47c35d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:e2:13', 'vm-uuid': '2b9ea98f-9b1a-495b-8669-e79da967b0ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
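Those three ovsdbapp commands are the whole plug operation at the OVS level: ensure br-int exists, add the tap port, and stamp the Interface row with the external_ids that ovn-controller matches against the Neutron logical port. Roughly the same transaction written directly with ovsdbapp (a sketch; the unix socket path is the usual default and an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap443e71ca-9c", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap443e71ca-9c",
            ("external_ids", {
                "iface-id": "443e71ca-9c38-40e8-b799-1026ef47c35d",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:98:e2:13",
                "vm-uuid": "2b9ea98f-9b1a-495b-8669-e79da967b0ab",
            })))

iface-id is what ovn-controller uses to claim the port binding, and may_exist=True makes the sequence idempotent, which is why the earlier AddBridgeCommand harmlessly logged "Transaction caused no change".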
Feb 25 12:45:49 compute-0 NetworkManager[49836]: <info>  [1772023549.0299] manager: (tap443e71ca-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.034 244018 INFO os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c')
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.035 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.036 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.037 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.038 244018 DEBUG os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.039 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.044 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39f87175-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.045 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39f87175-7c, col_values=(('external_ids', {'iface-id': '39f87175-7c4f-4092-bce6-29b413c731cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:de:d3', 'vm-uuid': '2b9ea98f-9b1a-495b-8669-e79da967b0ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 NetworkManager[49836]: <info>  [1772023549.0478] manager: (tap39f87175-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.056 244018 INFO os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c')
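With both "Successfully plugged vif" lines emitted, the ports exist in the local OVS database but are still active=False; they flip once ovn-controller binds them and neutron sends network-vif-plugged back. The external_ids written above can be spot-checked from the host (assuming ovs-vsctl is on PATH):

    import subprocess

    out = subprocess.run(
        ["ovs-vsctl", "get", "Interface", "tap39f87175-7c", "external_ids"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())
    # e.g. {attached-mac="fa:16:3e:b2:de:d3", iface-id="39f87175-7c4f-...", iface-status=active, ...}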
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.229 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:98:e2:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:b2:de:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.231 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Using config drive
Feb 25 12:45:49 compute-0 nova_compute[244014]: 2026-02-25 12:45:49.264 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
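nova.storage.rbd_utils detects a missing image by simply trying to open it and catching the not-found error, which is why this line is DEBUG rather than a failure. The equivalent probe with the python-rbd bindings (a sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf visible in the surrounding log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config"):
            print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")   # the case logged here
    finally:
        ioctx.close()
        cluster.shutdown()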
Feb 25 12:45:50 compute-0 ceph-mon[76335]: pgmap v1957: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Feb 25 12:45:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/516825819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.305 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating config drive at /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.310 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqs3b8jyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.446 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqs3b8jyh" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
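A config drive is nothing more than an ISO9660 volume labelled config-2 built from a staging directory of metadata files. The exact invocation logged above, replayed through the same oslo.concurrency helper nova uses (paths copied from the log; the /tmp staging directory is temporary and would need regenerating):

    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpqs3b8jyh")   # staging dir from the log above

The guest later sees this ISO as the sata cdrom (target dev sda) defined in the domain XML, once the rbd import below pushes it into the vms pool.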
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.487 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.493 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.561 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 39f87175-7c4f-4092-bce6-29b413c731cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.562 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:50 compute-0 nova_compute[244014]: 2026-02-25 12:45:50.580 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:50 compute-0 ovn_controller[147040]: 2026-02-25T12:45:50Z|01150|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:45:50 compute-0 ovn_controller[147040]: 2026-02-25T12:45:50Z|01151|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:45:50 compute-0 ovn_controller[147040]: 2026-02-25T12:45:50Z|01152|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.405 244018 DEBUG nova.compute.manager [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.406 244018 DEBUG nova.compute.manager [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.499 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.500 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.500 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.501 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.501 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.503 244018 INFO nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Terminating instance
Feb 25 12:45:51 compute-0 nova_compute[244014]: 2026-02-25 12:45:51.505 244018 DEBUG nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:45:51 compute-0 ceph-mon[76335]: pgmap v1958: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:45:52 compute-0 kernel: tap97fb9f99-cb (unregistering): left promiscuous mode
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.0902] device (tap97fb9f99-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01153|binding|INFO|Releasing lport 97fb9f99-cb59-4581-8866-375ea3e167d7 from this chassis (sb_readonly=0)
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01154|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 down in Southbound
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01155|binding|INFO|Removing iface tap97fb9f99-cb ovn-installed in OVS
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.108 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.110 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 unbound from our chassis
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.113 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d059ba-eacc-463b-a8a4-393a5a36dba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.114 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81f51818-2a00-45b5-8b5e-abc370b10487]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.115 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace which is not needed anymore
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 25 12:45:52 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Consumed 13.233s CPU time.
Feb 25 12:45:52 compute-0 systemd-machined[210048]: Machine qemu-144-instance-00000070 terminated.
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.351 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.353 244018 DEBUG nova.objects.instance [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'resources' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.368 244018 DEBUG nova.virt.libvirt.vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:17Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.369 244018 DEBUG nova.network.os_vif_util [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.370 244018 DEBUG nova.network.os_vif_util [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.371 244018 DEBUG os_vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.374 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97fb9f99-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.388 244018 INFO os_vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')
Feb 25 12:45:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.433 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.940s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.434 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deleting local config drive /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config because it was imported into RBD.
Feb 25 12:45:52 compute-0 systemd-udevd[346869]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:45:52 compute-0 kernel: tap443e71ca-9c: entered promiscuous mode
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.4911] manager: (tap443e71ca-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.5036] device (tap443e71ca-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.5054] device (tap443e71ca-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01156|binding|INFO|Claiming lport 443e71ca-9c38-40e8-b799-1026ef47c35d for this chassis.
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01157|binding|INFO|443e71ca-9c38-40e8-b799-1026ef47c35d: Claiming fa:16:3e:98:e2:13 10.100.0.4
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.5567] manager: (tap39f87175-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Feb 25 12:45:52 compute-0 kernel: tap39f87175-7c: entered promiscuous mode
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01158|binding|INFO|Claiming lport 39f87175-7c4f-4092-bce6-29b413c731cd for this chassis.
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01159|binding|INFO|39f87175-7c4f-4092-bce6-29b413c731cd: Claiming fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.560 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:e2:13 10.100.0.4'], port_security=['fa:16:3e:98:e2:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=443e71ca-9c38-40e8-b799-1026ef47c35d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01160|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d ovn-installed in OVS
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.5687] device (tap39f87175-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:45:52 compute-0 NetworkManager[49836]: <info>  [1772023552.5695] device (tap39f87175-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:45:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.569 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], port_security=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb2:ded3/64 2001:db8::f816:3eff:feb2:ded3/64', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=39f87175-7c4f-4092-bce6-29b413c731cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01161|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d up in Southbound
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01162|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd ovn-installed in OVS
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:52 compute-0 ovn_controller[147040]: 2026-02-25T12:45:52Z|01163|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd up in Southbound
Feb 25 12:45:52 compute-0 systemd-machined[210048]: New machine qemu-146-instance-00000073.
Feb 25 12:45:52 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : haproxy version is 2.8.14-c23fe91
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : path to executable is /usr/sbin/haproxy
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [WARNING]  (346103) : Exiting Master process...
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [WARNING]  (346103) : Exiting Master process...
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [ALERT]    (346103) : Current worker (346106) exited with code 143 (Terminated)
Feb 25 12:45:52 compute-0 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [WARNING]  (346103) : All workers exited. Exiting... (0)
Feb 25 12:45:52 compute-0 systemd[1]: libpod-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope: Deactivated successfully.
Feb 25 12:45:52 compute-0 podman[346890]: 2026-02-25 12:45:52.835543628 +0000 UTC m=+0.613360690 container died f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.884 244018 DEBUG nova.compute.manager [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.885 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.886 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.887 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:52 compute-0 nova_compute[244014]: 2026-02-25 12:45:52.887 244018 DEBUG nova.compute.manager [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Processing event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.282 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.283 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.301 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:45:53 compute-0 sudo[347013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:45:53 compute-0 sudo[347013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:45:53 compute-0 sudo[347013]: pam_unix(sudo:session): session closed for user root
Feb 25 12:45:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:53 compute-0 sudo[347044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:45:53 compute-0 sudo[347044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.550 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023553.5500739, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.550 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Started (Lifecycle Event)
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.570 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.576 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023553.5526402, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Paused (Lifecycle Event)
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.594 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.597 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:53 compute-0 nova_compute[244014]: 2026-02-25 12:45:53.743 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80-userdata-shm.mount: Deactivated successfully.
Feb 25 12:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff6ea0a8a911ca82af7fb30e8506525301753b046b20808291f024818c24148-merged.mount: Deactivated successfully.
Feb 25 12:45:54 compute-0 sudo[347044]: pam_unix(sudo:session): session closed for user root
Feb 25 12:45:54 compute-0 podman[346890]: 2026-02-25 12:45:54.343521298 +0000 UTC m=+2.121338340 container cleanup f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:45:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:45:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:45:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:45:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:45:54 compute-0 systemd[1]: libpod-conmon-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope: Deactivated successfully.
Feb 25 12:45:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:45:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 763 KiB/s wr, 64 op/s
Feb 25 12:45:54 compute-0 ceph-mon[76335]: pgmap v1959: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.971 244018 DEBUG nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.973 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.973 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.974 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.974 244018 DEBUG nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No event matching network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd in dict_keys([('network-vif-plugged', '443e71ca-9c38-40e8-b799-1026ef47c35d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:45:54 compute-0 nova_compute[244014]: 2026-02-25 12:45:54.975 244018 WARNING nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd for instance with vm_state building and task_state spawning.
Feb 25 12:45:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:45:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:45:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:45:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 sudo[347122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:45:55 compute-0 sudo[347122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:45:55 compute-0 sudo[347122]: pam_unix(sudo:session): session closed for user root
Feb 25 12:45:55 compute-0 podman[347107]: 2026-02-25 12:45:55.297541176 +0000 UTC m=+0.929376463 container remove f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[869ed914-0540-4870-9a98-9a6dae188be7]: (4, ('Wed Feb 25 12:45:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80)\nf6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80\nWed Feb 25 12:45:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80)\nf6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.306 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7793f8-6942-4d5d-8023-0b2892baab76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 kernel: tape4d059ba-e0: left promiscuous mode
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95402bf0-b575-4629-bb8a-6e73607623cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ae32c-41a5-48cb-afc1-77a9c57d2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.330 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28a6bbe8-6baa-4a0d-b89b-8bce6d966b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 sudo[347147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd74df3e-926a-49bf-97d6-81b16a7226b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548504, 'reachable_time': 28536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347171, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 sudo[347147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:45:55 compute-0 systemd[1]: run-netns-ovnmeta\x2de4d059ba\x2deacc\x2d463b\x2da8a4\x2d393a5a36dba3.mount: Deactivated successfully.
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.348 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.349 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[35db6755-9762-4eb9-bdb2-916067a7ea54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.350 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 443e71ca-9c38-40e8-b799-1026ef47c35d in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 12:45:55 compute-0 sshd-session[347120]: Invalid user solana from 80.94.92.186 port 41304
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.416 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff08fec-b4ae-41a7-a96a-93843d6da97d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b32a5e-b771-4c90-a561-e16db3753238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b659e4dd-5c05-4830-8fe6-6f1053387fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.495 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0daddea2-1c4f-4e3f-9bfb-9eb72155961f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb794ab5-8361-4ca0-bb64-b7c9ecf72a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347184, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.533 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.534 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.536 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1adea4b3-d07d-419f-94de-d7e9e9f2a1d8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547920, 'tstamp': 547920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347186, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547923, 'tstamp': 547923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347186, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.557 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 39f87175-7c4f-4092-bce6-29b413c731cd in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.558 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 12:45:55 compute-0 sshd-session[347120]: Connection closed by invalid user solana 80.94.92.186 port 41304 [preauth]
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.604 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f7796b-43e9-499c-89c4-8799c0716926]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.634 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2257ba-c324-454e-87d6-192096ceb53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.637 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2424b682-c29b-4098-8741-a1b9a5679a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.668 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[64f6acd9-5d41-470e-910b-7669ddc4e50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45df2b47-de97-4014-b90a-34c69c32f7e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347216, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbc54ff-967c-4b94-becd-6ed4d6af1304]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6df13266-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548006, 'tstamp': 548006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347217, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 nova_compute[244014]: 2026-02-25 12:45:55.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.715 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.716 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.717 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.717 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:45:55 compute-0 podman[347198]: 2026-02-25 12:45:55.633427891 +0000 UTC m=+0.028170237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: pgmap v1960: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 763 KiB/s wr, 64 op/s
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:45:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:45:56 compute-0 podman[347198]: 2026-02-25 12:45:56.158953329 +0000 UTC m=+0.553695655 container create 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:45:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 23 KiB/s wr, 50 op/s
Feb 25 12:45:56 compute-0 systemd[1]: Started libpod-conmon-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope.
Feb 25 12:45:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:56 compute-0 nova_compute[244014]: 2026-02-25 12:45:56.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:56 compute-0 podman[347198]: 2026-02-25 12:45:56.616408895 +0000 UTC m=+1.011151251 container init 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 12:45:56 compute-0 podman[347218]: 2026-02-25 12:45:56.625886853 +0000 UTC m=+0.426579096 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 12:45:56 compute-0 podman[347198]: 2026-02-25 12:45:56.626996714 +0000 UTC m=+1.021739050 container start 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:45:56 compute-0 epic_nightingale[347229]: 167 167
Feb 25 12:45:56 compute-0 systemd[1]: libpod-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope: Deactivated successfully.
Feb 25 12:45:56 compute-0 podman[347198]: 2026-02-25 12:45:56.692736141 +0000 UTC m=+1.087478527 container attach 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:45:56 compute-0 podman[347198]: 2026-02-25 12:45:56.693799051 +0000 UTC m=+1.088541407 container died 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:45:56 compute-0 nova_compute[244014]: 2026-02-25 12:45:56.950 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023541.948884, aee87402-4b34-4083-888b-bb653e2beaa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:56 compute-0 nova_compute[244014]: 2026-02-25 12:45:56.951 244018 INFO nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Stopped (Lifecycle Event)
Feb 25 12:45:56 compute-0 nova_compute[244014]: 2026-02-25 12:45:56.972 244018 DEBUG nova.compute.manager [None req-fd431afe-d29e-4dd1-af4d-9057cc23ea04 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d942c076a211e304ec5498370777b5f180cbaf75ee7aaca5010f648ee60007-merged.mount: Deactivated successfully.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.613 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 WARNING nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state deleting.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Processing event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.618 244018 WARNING nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with vm_state building and task_state spawning.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.618 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.623 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023557.623453, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.624 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Resumed (Lifecycle Event)
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.627 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.632 244018 INFO nova.virt.libvirt.driver [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance spawned successfully.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.633 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.648 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.655 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.659 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.659 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.660 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.660 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.661 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.661 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:45:57 compute-0 podman[347198]: 2026-02-25 12:45:57.685500643 +0000 UTC m=+2.080242939 container remove 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.693 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:45:57 compute-0 systemd[1]: libpod-conmon-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope: Deactivated successfully.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.742 244018 INFO nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 19.88 seconds to spawn the instance on the hypervisor.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.742 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:45:57 compute-0 podman[347265]: 2026-02-25 12:45:57.850249685 +0000 UTC m=+0.033204968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.968 244018 INFO nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 21.07 seconds to build instance.
Feb 25 12:45:57 compute-0 nova_compute[244014]: 2026-02-25 12:45:57.983 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:45:58 compute-0 podman[347265]: 2026-02-25 12:45:58.000884479 +0000 UTC m=+0.183839752 container create 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:45:58 compute-0 ceph-mon[76335]: pgmap v1961: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 23 KiB/s wr, 50 op/s
Feb 25 12:45:58 compute-0 systemd[1]: Started libpod-conmon-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope.
Feb 25 12:45:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:45:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 26 KiB/s wr, 73 op/s
Feb 25 12:45:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:45:58 compute-0 podman[347265]: 2026-02-25 12:45:58.557962689 +0000 UTC m=+0.740918032 container init 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:45:58 compute-0 podman[347279]: 2026-02-25 12:45:58.563916337 +0000 UTC m=+0.523949056 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:45:58 compute-0 podman[347265]: 2026-02-25 12:45:58.568686242 +0000 UTC m=+0.751641515 container start 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:45:58 compute-0 podman[347265]: 2026-02-25 12:45:58.776049287 +0000 UTC m=+0.959004600 container attach 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:45:59 compute-0 trusting_shannon[347292]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:45:59 compute-0 trusting_shannon[347292]: --> All data devices are unavailable
Feb 25 12:45:59 compute-0 systemd[1]: libpod-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope: Deactivated successfully.
Feb 25 12:45:59 compute-0 podman[347265]: 2026-02-25 12:45:59.062234728 +0000 UTC m=+1.245189991 container died 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.381 244018 INFO nova.virt.libvirt.driver [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting instance files /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.384 244018 INFO nova.virt.libvirt.driver [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deletion of /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del complete
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.463 244018 INFO nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 7.96 seconds to destroy the instance on the hypervisor.
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG oslo.service.loopingcall [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG nova.network.neutron [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:45:59 compute-0 ceph-mon[76335]: pgmap v1962: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 26 KiB/s wr, 73 op/s
Feb 25 12:45:59 compute-0 nova_compute[244014]: 2026-02-25 12:45:59.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb-merged.mount: Deactivated successfully.
Feb 25 12:45:59 compute-0 podman[347265]: 2026-02-25 12:45:59.782070943 +0000 UTC m=+1.965026166 container remove 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:45:59 compute-0 systemd[1]: libpod-conmon-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope: Deactivated successfully.
Feb 25 12:45:59 compute-0 sudo[347147]: pam_unix(sudo:session): session closed for user root
Feb 25 12:45:59 compute-0 sudo[347340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:45:59 compute-0 sudo[347340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:45:59 compute-0 sudo[347340]: pam_unix(sudo:session): session closed for user root
Feb 25 12:46:00 compute-0 sudo[347365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:46:00 compute-0 sudo[347365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:46:00 compute-0 nova_compute[244014]: 2026-02-25 12:46:00.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 21 KiB/s wr, 37 op/s
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.319405396 +0000 UTC m=+0.034051593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.454340286 +0000 UTC m=+0.168986463 container create b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:46:00 compute-0 systemd[1]: Started libpod-conmon-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope.
Feb 25 12:46:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.78668541 +0000 UTC m=+0.501331607 container init b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.799339827 +0000 UTC m=+0.513986014 container start b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:46:00 compute-0 thirsty_zhukovsky[347418]: 167 167
Feb 25 12:46:00 compute-0 systemd[1]: libpod-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope: Deactivated successfully.
Feb 25 12:46:00 compute-0 conmon[347418]: conmon b7490ba93a48be0accd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope/container/memory.events
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.939923567 +0000 UTC m=+0.654569764 container attach b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:46:00 compute-0 podman[347402]: 2026-02-25 12:46:00.941009128 +0000 UTC m=+0.655655315 container died b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66ae0feb6f75bf57bede6efe89060e54a3567e7dd7bbbccb0edf126cbcf5d3e-merged.mount: Deactivated successfully.
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.658 244018 DEBUG nova.network.neutron [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.699 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 2.23 seconds to deallocate network for instance.
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.733 244018 DEBUG nova.compute.manager [req-4e9e6215-d79f-475c-9285-6006eaf1b4be req-9a827067-ddd2-4402-8994-6dd4ac676672 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-deleted-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.746 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.747 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:01 compute-0 podman[347402]: 2026-02-25 12:46:01.825372369 +0000 UTC m=+1.540018556 container remove b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 12:46:01 compute-0 ovn_controller[147040]: 2026-02-25T12:46:01Z|01164|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:46:01 compute-0 ovn_controller[147040]: 2026-02-25T12:46:01Z|01165|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:01 compute-0 ceph-mon[76335]: pgmap v1963: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 21 KiB/s wr, 37 op/s
Feb 25 12:46:01 compute-0 systemd[1]: libpod-conmon-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope: Deactivated successfully.
Feb 25 12:46:01 compute-0 nova_compute[244014]: 2026-02-25 12:46:01.901 244018 DEBUG oslo_concurrency.processutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:02.053547252 +0000 UTC m=+0.099265274 container create 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:01.979661926 +0000 UTC m=+0.025379998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:46:02 compute-0 systemd[1]: Started libpod-conmon-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope.
Feb 25 12:46:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:02.379372903 +0000 UTC m=+0.425090925 container init 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:02.38673383 +0000 UTC m=+0.432451822 container start 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 12:46:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 78 op/s
Feb 25 12:46:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412297614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.434 244018 DEBUG oslo_concurrency.processutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.440 244018 DEBUG nova.compute.provider_tree [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.455 244018 DEBUG nova.scheduler.client.report [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.474 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:02.484343917 +0000 UTC m=+0.530061899 container attach 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.496 244018 INFO nova.scheduler.client.report [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Deleted allocations for instance 874359d8-3251-4416-82dc-f6776853e384
Feb 25 12:46:02 compute-0 nova_compute[244014]: 2026-02-25 12:46:02.577 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]: {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     "0": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "devices": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "/dev/loop3"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             ],
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_name": "ceph_lv0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_size": "21470642176",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "name": "ceph_lv0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "tags": {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_name": "ceph",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.crush_device_class": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.encrypted": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.objectstore": "bluestore",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_id": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.vdo": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.with_tpm": "0"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             },
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "vg_name": "ceph_vg0"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         }
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     ],
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     "1": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "devices": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "/dev/loop4"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             ],
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_name": "ceph_lv1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_size": "21470642176",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "name": "ceph_lv1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "tags": {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_name": "ceph",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.crush_device_class": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.encrypted": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.objectstore": "bluestore",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_id": "1",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.vdo": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.with_tpm": "0"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             },
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "vg_name": "ceph_vg1"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         }
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     ],
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     "2": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "devices": [
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "/dev/loop5"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             ],
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_name": "ceph_lv2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_size": "21470642176",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "name": "ceph_lv2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "tags": {
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.cluster_name": "ceph",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.crush_device_class": "",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.encrypted": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.objectstore": "bluestore",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osd_id": "2",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.vdo": "0",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:                 "ceph.with_tpm": "0"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             },
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "type": "block",
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:             "vg_name": "ceph_vg2"
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:         }
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]:     ]
Feb 25 12:46:02 compute-0 stupefied_kilby[347477]: }
Feb 25 12:46:02 compute-0 systemd[1]: libpod-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope: Deactivated successfully.
Feb 25 12:46:02 compute-0 conmon[347477]: conmon 763d926dda808ec4ce56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope/container/memory.events
Feb 25 12:46:02 compute-0 podman[347444]: 2026-02-25 12:46:02.748181376 +0000 UTC m=+0.793899398 container died 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:46:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/412297614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1-merged.mount: Deactivated successfully.
Feb 25 12:46:03 compute-0 podman[347444]: 2026-02-25 12:46:03.426782407 +0000 UTC m=+1.472500409 container remove 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:46:03 compute-0 sudo[347365]: pam_unix(sudo:session): session closed for user root
Feb 25 12:46:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:03 compute-0 systemd[1]: libpod-conmon-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope: Deactivated successfully.
Feb 25 12:46:03 compute-0 sudo[347500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:46:03 compute-0 sudo[347500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:46:03 compute-0 sudo[347500]: pam_unix(sudo:session): session closed for user root
Feb 25 12:46:03 compute-0 sudo[347525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:46:03 compute-0 sudo[347525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:46:03 compute-0 ovn_controller[147040]: 2026-02-25T12:46:03Z|01166|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:46:03 compute-0 ovn_controller[147040]: 2026-02-25T12:46:03Z|01167|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:46:03 compute-0 nova_compute[244014]: 2026-02-25 12:46:03.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:04.01690851 +0000 UTC m=+0.113564727 container create 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:03.926807006 +0000 UTC m=+0.023463233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.247 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:04 compute-0 systemd[1]: Started libpod-conmon-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope.
Feb 25 12:46:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:04 compute-0 ceph-mon[76335]: pgmap v1964: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 78 op/s
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:04.389214923 +0000 UTC m=+0.485871150 container init 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:04.398945858 +0000 UTC m=+0.495602025 container start 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:46:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 99 op/s
Feb 25 12:46:04 compute-0 boring_booth[347574]: 167 167
Feb 25 12:46:04 compute-0 systemd[1]: libpod-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope: Deactivated successfully.
Feb 25 12:46:04 compute-0 conmon[347574]: conmon 47a4f4e1478a85bf794f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope/container/memory.events
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.511 244018 DEBUG nova.compute.manager [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.513 244018 DEBUG nova.compute.manager [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.514 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.515 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:04 compute-0 nova_compute[244014]: 2026-02-25 12:46:04.516 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:04.611562332 +0000 UTC m=+0.708218609 container attach 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:46:04 compute-0 podman[347558]: 2026-02-25 12:46:04.612169889 +0000 UTC m=+0.708826096 container died 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5288a3c901499fd791c740e79b0028beb422068ca0568a9c3eb32b96a4f96f71-merged.mount: Deactivated successfully.
Feb 25 12:46:05 compute-0 podman[347558]: 2026-02-25 12:46:05.104609984 +0000 UTC m=+1.201266191 container remove 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:46:05 compute-0 systemd[1]: libpod-conmon-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope: Deactivated successfully.
Feb 25 12:46:05 compute-0 nova_compute[244014]: 2026-02-25 12:46:05.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:05 compute-0 podman[347599]: 2026-02-25 12:46:05.299501217 +0000 UTC m=+0.024098631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:46:05 compute-0 podman[347599]: 2026-02-25 12:46:05.44160803 +0000 UTC m=+0.166205484 container create c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:46:05 compute-0 ceph-mon[76335]: pgmap v1965: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 99 op/s
Feb 25 12:46:05 compute-0 systemd[1]: Started libpod-conmon-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope.
Feb 25 12:46:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:05 compute-0 podman[347599]: 2026-02-25 12:46:05.871425226 +0000 UTC m=+0.596022710 container init c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:46:05 compute-0 podman[347599]: 2026-02-25 12:46:05.879663459 +0000 UTC m=+0.604260903 container start c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:46:06 compute-0 podman[347599]: 2026-02-25 12:46:06.046286174 +0000 UTC m=+0.770883608 container attach c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:46:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 12:46:06 compute-0 lvm[347694]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:46:06 compute-0 lvm[347694]: VG ceph_vg1 finished
Feb 25 12:46:06 compute-0 lvm[347693]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:46:06 compute-0 lvm[347693]: VG ceph_vg0 finished
Feb 25 12:46:06 compute-0 lvm[347696]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:46:06 compute-0 lvm[347696]: VG ceph_vg2 finished
Feb 25 12:46:06 compute-0 charming_noether[347615]: {}
Feb 25 12:46:06 compute-0 systemd[1]: libpod-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Deactivated successfully.
Feb 25 12:46:06 compute-0 systemd[1]: libpod-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Consumed 1.030s CPU time.
Feb 25 12:46:06 compute-0 podman[347699]: 2026-02-25 12:46:06.738935341 +0000 UTC m=+0.048267104 container died c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980-merged.mount: Deactivated successfully.
Feb 25 12:46:07 compute-0 podman[347699]: 2026-02-25 12:46:07.289918879 +0000 UTC m=+0.599250662 container remove c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:46:07 compute-0 systemd[1]: libpod-conmon-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Deactivated successfully.
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.347 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023552.3462524, 874359d8-3251-4416-82dc-f6776853e384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:07 compute-0 sudo[347525]: pam_unix(sudo:session): session closed for user root
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.351 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Stopped (Lifecycle Event)
Feb 25 12:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.371 244018 DEBUG nova.compute.manager [None req-df8ebcd7-25b7-46d5-8ea5-80facc20641f - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:46:07 compute-0 sudo[347712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:46:07 compute-0 sudo[347712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:46:07 compute-0 sudo[347712]: pam_unix(sudo:session): session closed for user root
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.855 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.856 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:07 compute-0 nova_compute[244014]: 2026-02-25 12:46:07.903 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:07 compute-0 ceph-mon[76335]: pgmap v1966: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 12:46:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:46:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:46:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 12:46:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:08 compute-0 ovn_controller[147040]: 2026-02-25T12:46:08Z|01168|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:46:08 compute-0 ovn_controller[147040]: 2026-02-25T12:46:08Z|01169|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.526559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568526607, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1867, "num_deletes": 254, "total_data_size": 2828935, "memory_usage": 2872464, "flush_reason": "Manual Compaction"}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568545393, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2763803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40072, "largest_seqno": 41938, "table_properties": {"data_size": 2755173, "index_size": 5316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17013, "raw_average_key_size": 19, "raw_value_size": 2737875, "raw_average_value_size": 3093, "num_data_blocks": 234, "num_entries": 885, "num_filter_entries": 885, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023407, "oldest_key_time": 1772023407, "file_creation_time": 1772023568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 18896 microseconds, and 6704 cpu microseconds.
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.545452) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2763803 bytes OK
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.545476) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548501) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548517) EVENT_LOG_v1 {"time_micros": 1772023568548512, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548543) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2820892, prev total WAL file size 2820892, number of live WAL files 2.
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.549220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2699KB)], [89(9056KB)]
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568549305, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12037579, "oldest_snapshot_seqno": -1}
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6829 keys, 11279146 bytes, temperature: kUnknown
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568653432, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11279146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11230739, "index_size": 30247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 173686, "raw_average_key_size": 25, "raw_value_size": 11105943, "raw_average_value_size": 1626, "num_data_blocks": 1206, "num_entries": 6829, "num_filter_entries": 6829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.654106) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11279146 bytes
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.662070) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.1 rd, 107.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(8.4) write-amplify(4.1) OK, records in: 7352, records dropped: 523 output_compression: NoCompression
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.662125) EVENT_LOG_v1 {"time_micros": 1772023568662103, "job": 52, "event": "compaction_finished", "compaction_time_micros": 104578, "compaction_time_cpu_micros": 40004, "output_level": 6, "num_output_files": 1, "total_output_size": 11279146, "num_input_records": 7352, "num_output_records": 6829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568662861, "job": 52, "event": "table_file_deletion", "file_number": 91}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568664506, "job": 52, "event": "table_file_deletion", "file_number": 89}
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.549057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.801 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.805 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.826 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.901 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.909 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:46:08 compute-0 nova_compute[244014]: 2026-02-25 12:46:08.910 244018 INFO nova.compute.claims [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.067 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161363190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.674 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.680 244018 DEBUG nova.compute.provider_tree [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.702 244018 DEBUG nova.scheduler.client.report [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.735 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.736 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.781 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.781 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.800 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.814 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.895 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.897 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.897 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating image(s)
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.926 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.960 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:09 compute-0 ceph-mon[76335]: pgmap v1967: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 12:46:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2161363190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:09 compute-0 nova_compute[244014]: 2026-02-25 12:46:09.998 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.003 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.105 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.137 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.141 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 709a8b15-83eb-45f4-b681-c150ad270e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 70 op/s
Feb 25 12:46:10 compute-0 ovn_controller[147040]: 2026-02-25T12:46:10Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:e2:13 10.100.0.4
Feb 25 12:46:10 compute-0 ovn_controller[147040]: 2026-02-25T12:46:10Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:e2:13 10.100.0.4
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.781 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 709a8b15-83eb-45f4-b681-c150ad270e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:10 compute-0 nova_compute[244014]: 2026-02-25 12:46:10.866 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.012 244018 DEBUG nova.policy [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.100 244018 DEBUG nova.objects.instance [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.118 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.119 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Ensure instance console log exists: /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.120 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.121 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:11 compute-0 nova_compute[244014]: 2026-02-25 12:46:11.121 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:12 compute-0 ceph-mon[76335]: pgmap v1968: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 70 op/s
Feb 25 12:46:12 compute-0 nova_compute[244014]: 2026-02-25 12:46:12.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Feb 25 12:46:12 compute-0 nova_compute[244014]: 2026-02-25 12:46:12.853 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Successfully created port: a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:46:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:13 compute-0 ovn_controller[147040]: 2026-02-25T12:46:13Z|01170|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:46:13 compute-0 ovn_controller[147040]: 2026-02-25T12:46:13Z|01171|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:46:13 compute-0 nova_compute[244014]: 2026-02-25 12:46:13.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:14 compute-0 ceph-mon[76335]: pgmap v1969: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Feb 25 12:46:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Feb 25 12:46:15 compute-0 nova_compute[244014]: 2026-02-25 12:46:15.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.029 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Successfully updated port: a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.052 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.053 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.053 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:46:16 compute-0 ceph-mon[76335]: pgmap v1970: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.275 244018 DEBUG nova.compute.manager [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.276 244018 DEBUG nova.compute.manager [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.276 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:16 compute-0 nova_compute[244014]: 2026-02-25 12:46:16.354 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:46:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Feb 25 12:46:17 compute-0 nova_compute[244014]: 2026-02-25 12:46:17.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:18 compute-0 ceph-mon[76335]: pgmap v1971: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Feb 25 12:46:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 12:46:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.837 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.877 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.877 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance network_info: |[{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.878 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.878 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.882 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start _get_guest_xml network_info=[{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.888 244018 WARNING nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.906 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.907 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.912 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.912 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.913 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.914 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.914 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.917 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.917 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:46:19 compute-0 nova_compute[244014]: 2026-02-25 12:46:19.920 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:20 compute-0 ceph-mon[76335]: pgmap v1972: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 12:46:20 compute-0 nova_compute[244014]: 2026-02-25 12:46:20.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 12:46:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:46:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012586038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:46:20 compute-0 nova_compute[244014]: 2026-02-25 12:46:20.442 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:20 compute-0 nova_compute[244014]: 2026-02-25 12:46:20.472 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:20 compute-0 nova_compute[244014]: 2026-02-25 12:46:20.478 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:46:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469782281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.056 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.058 244018 DEBUG nova.virt.libvirt.vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:46:09Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.058 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.059 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
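[Editor's note] The pair of os_vif_util lines records nova's legacy network-info dict being converted into a typed os-vif object. A rough sketch of constructing and plugging the same VIFOpenVSwitch by hand, assuming os-vif's public initialize/plug entry points; all field values are copied from the converted object above:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    port = vif.VIFOpenVSwitch(
        id='a0f62675-5535-4e75-98f6-dc4fbadbc4c9',
        address='fa:16:3e:56:b0:61',
        bridge_name='br-int',
        vif_name='tapa0f62675-55',
        has_traffic_filtering=True,
        network=network.Network(id='bf745dc4-78d4-4d88-8794-da49c21b9f38'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='a0f62675-5535-4e75-98f6-dc4fbadbc4c9'))

    inst = instance_info.InstanceInfo(
        uuid='709a8b15-83eb-45f4-b681-c150ad270e01',
        name='instance-00000074')

    os_vif.plug(port, inst)  # same call path as the "Plugging vif" line below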
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.060 244018 DEBUG nova.objects.instance [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <uuid>709a8b15-83eb-45f4-b681-c150ad270e01</uuid>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <name>instance-00000074</name>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-742082713</nova:name>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:46:19</nova:creationTime>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <nova:port uuid="a0f62675-5535-4e75-98f6-dc4fbadbc4c9">
Feb 25 12:46:21 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <system>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="serial">709a8b15-83eb-45f4-b681-c150ad270e01</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="uuid">709a8b15-83eb-45f4-b681-c150ad270e01</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </system>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <os>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </os>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <features>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </features>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk">
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk.config">
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:46:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:56:b0:61"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <target dev="tapa0f62675-55"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/console.log" append="off"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <video>
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </video>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:46:21 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:46:21 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:46:21 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:46:21 compute-0 nova_compute[244014]: </domain>
Feb 25 12:46:21 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
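[Editor's note] The domain XML dumped above is what the driver hands to libvirtd next. A minimal sketch of that step with the libvirt Python binding, assuming the XML has been saved to a local file; nova's actual code path wraps this in error handling and event plumbing:

    import libvirt

    # Define the guest from the generated XML and power it on; success
    # shows up below as systemd's "Started Virtual Machine" message.
    xml = open('instance-00000074.xml').read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)
    dom.create()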
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Preparing to wait for external event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
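[Editor's note] The acquire/release pair around the "<uuid>-events" lock is oslo.concurrency's in-process locking, used here to serialize event bookkeeping per instance. A tiny sketch of the same pattern, assuming only lockutils; the lock name copies the convention from the log:

    from oslo_concurrency import lockutils

    # Only one thread at a time may touch this instance's pending events.
    @lockutils.synchronized('709a8b15-83eb-45f4-b681-c150ad270e01-events')
    def _create_or_get_event():
        pass  # create or fetch the pending network-vif-plugged event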
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG nova.virt.libvirt.vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:46:09Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.076 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.077 244018 DEBUG os_vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.080 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f62675-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.081 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f62675-55, col_values=(('external_ids', {'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:b0:61', 'vm-uuid': '709a8b15-83eb-45f4-b681-c150ad270e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
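[Editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions against the local ovsdb-server. A rough equivalent using ovsdbapp's documented API; the TCP endpoint is an assumption (os-vif normally uses the local unix socket), while the port name and external_ids values come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # tcp:127.0.0.1:6640 is a placeholder endpoint for this sketch.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands, mirroring the txn logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa0f62675-55', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa0f62675-55',
            ('external_ids',
             {'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9',
              'attached-mac': 'fa:16:3e:56:b0:61'})))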
Feb 25 12:46:21 compute-0 NetworkManager[49836]: <info>  [1772023581.0837] manager: (tapa0f62675-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:46:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4012586038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:46:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3469782281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.092 244018 INFO os_vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.148 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:56:b0:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Using config drive
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.169 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.203 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.204 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.218 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.583 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating config drive at /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.587 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq3isf8f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.719 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq3isf8f" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.750 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.754 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.869 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.869 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deleting local config drive /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config because it was imported into RBD.
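[Editor's note] The sequence above builds the config drive locally with mkisofs, imports it into the vms pool with rbd import, then deletes the local copy. A hypothetical way to inspect such an image, assuming the standard config-2 layout that the mkisofs -V config-2 invocation produces; the mount point is illustrative and the step needs root:

    import json
    import subprocess

    # Loop-mount the ISO and read the canonical metadata path.
    subprocess.run(['mount', '-o', 'loop,ro', 'disk.config', '/mnt'],
                   check=True)
    with open('/mnt/openstack/latest/meta_data.json') as f:
        # Should print 709a8b15-83eb-45f4-b681-c150ad270e01 for this guest.
        print(json.load(f)['uuid'])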
Feb 25 12:46:21 compute-0 kernel: tapa0f62675-55: entered promiscuous mode
Feb 25 12:46:21 compute-0 NetworkManager[49836]: <info>  [1772023581.9119] manager: (tapa0f62675-55): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.913 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 ovn_controller[147040]: 2026-02-25T12:46:21Z|01172|binding|INFO|Claiming lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for this chassis.
Feb 25 12:46:21 compute-0 ovn_controller[147040]: 2026-02-25T12:46:21Z|01173|binding|INFO|a0f62675-5535-4e75-98f6-dc4fbadbc4c9: Claiming fa:16:3e:56:b0:61 10.100.0.7
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.920 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 ovn_controller[147040]: 2026-02-25T12:46:21Z|01174|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 ovn-installed in OVS
Feb 25 12:46:21 compute-0 ovn_controller[147040]: 2026-02-25T12:46:21Z|01175|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 up in Southbound
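[Editor's note] ovn-controller claiming the lport and marking it up in the Southbound DB is what ultimately produces the network-vif-plugged event nova is waiting on. A hypothetical spot-check of that binding with the standard southbound CLI, wrapped in Python for consistency with the other sketches:

    import subprocess

    # A non-empty chassis column on this row means the claim succeeded.
    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9'],
                   check=True)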
Feb 25 12:46:21 compute-0 nova_compute[244014]: 2026-02-25 12:46:21.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 bound to our chassis
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.924 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc29cce-0043-4ad0-981f-d7f79893f2f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.939 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf745dc4-71 in ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
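[Editor's note] Metadata provisioning happens inside a per-network namespace named ovnmeta-<network-uuid>, reached through a veth pair whose br-int end is plugged a few lines below. A hypothetical check that the namespace side came up, using iproute2:

    import subprocess

    # The veth end tapbf745dc4-71 should be listed inside the namespace
    # once provisioning finishes.
    subprocess.run(['ip', 'netns', 'exec',
                    'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38',
                    'ip', 'addr', 'show'], check=True)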
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.942 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf745dc4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.942 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56bbdd9d-ae69-4fe8-9a21-a9412fe62d17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[715b1fd4-3cfa-41e1-8fba-e474ecf15b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:21 compute-0 systemd-udevd[348063]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:46:21 compute-0 systemd-machined[210048]: New machine qemu-147-instance-00000074.
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.951 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8314d7ef-2589-426b-b3f1-7c8c80255192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:21 compute-0 NetworkManager[49836]: <info>  [1772023581.9584] device (tapa0f62675-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:46:21 compute-0 NetworkManager[49836]: <info>  [1772023581.9591] device (tapa0f62675-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:46:21 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.966 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f89a12e-d93a-4f30-82a4-56d3b98a422e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.995 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[910bd9f9-52e4-46b7-ae2f-a5ae1bcae0aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 systemd-udevd[348067]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:46:22 compute-0 NetworkManager[49836]: <info>  [1772023582.0024] manager: (tapbf745dc4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/491)
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81793156-5891-431b-a59b-ee339913b240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[935cd98c-41ec-4baa-8245-1414347a5557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.032 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d10cebf6-dd77-408a-8e64-43cc64c34369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 NetworkManager[49836]: <info>  [1772023582.0555] device (tapbf745dc4-70): carrier: link connected
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.061 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[11c1ab93-3484-46f8-8c88-9707203f0135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a93b54-5f77-4d4d-b1ab-46a33d401c15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 353], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555146, 'reachable_time': 25803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348095, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a298d7c1-6dcb-484c-9672-dac6a994ba40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:7daf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555146, 'tstamp': 555146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348096, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ceph-mon[76335]: pgmap v1973: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.098 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb44c58-d3ec-4003-a631-feb818781a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 353], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555146, 'reachable_time': 25803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348097, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38ef4646-e58a-458f-b155-42944f4866f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[705697c9-b286-43e2-a2fd-10372f1b6eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.189 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.190 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.190 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf745dc4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:22 compute-0 kernel: tapbf745dc4-70: entered promiscuous mode
Feb 25 12:46:22 compute-0 NetworkManager[49836]: <info>  [1772023582.1960] manager: (tapbf745dc4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.198 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf745dc4-70, col_values=(('external_ids', {'iface-id': '92ae33df-1d64-498f-b132-6ef0663f81e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:22 compute-0 ovn_controller[147040]: 2026-02-25T12:46:22Z|01176|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.201 244018 DEBUG nova.compute.manager [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.201 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.201 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG nova.compute.manager [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Processing event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[101acfea-fe24-4a0a-b627-61f9c01bc464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.203 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:46:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.204 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'env', 'PROCESS_TAG=haproxy-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf745dc4-78d4-4d88-8794-da49c21b9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
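
The rendered haproxy config (dumped in full above) binds 169.254.169.254:80 inside the ovnmeta- namespace and tags each request with X-OVN-Network-ID so the metadata service can tell networks apart. The rootwrap command line then boils down to running haproxy inside that namespace; stripped of rootwrap and the PROCESS_TAG marker, the equivalent is (a sketch, needs root; in this podified deployment the wrapper actually launches a podman container, as the container create/start lines just below show):

    import subprocess

    netns = 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'bf745dc4-78d4-4d88-8794-da49c21b9f38.conf')

    # Optional sanity pass: haproxy -c only parses the config and exits.
    subprocess.run(['ip', 'netns', 'exec', netns,
                    'haproxy', '-c', '-f', conf], check=True)
    # The real start; the 'daemon' directive in the config forks it away.
    subprocess.run(['ip', 'netns', 'exec', netns,
                    'haproxy', '-f', conf], check=True)

The "New worker ... forked" and "Loading success." NOTICE lines a moment later are haproxy's own confirmation that this worked.
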
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:46:22 compute-0 podman[348129]: 2026-02-25 12:46:22.523651588 +0000 UTC m=+0.040556856 container create fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:46:22 compute-0 systemd[1]: Started libpod-conmon-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope.
Feb 25 12:46:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f539fbb4b8a2e0d559dbacdf783585633e2c940d6c86cd49a3a52f168fc29a4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:22 compute-0 podman[348129]: 2026-02-25 12:46:22.501461472 +0000 UTC m=+0.018366770 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:46:22 compute-0 podman[348129]: 2026-02-25 12:46:22.604020648 +0000 UTC m=+0.120925916 container init fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:46:22 compute-0 podman[348129]: 2026-02-25 12:46:22.609557364 +0000 UTC m=+0.126462632 container start fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:46:22 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : New worker (348150) forked
Feb 25 12:46:22 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : Loading success.
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.847 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.8474016, 709a8b15-83eb-45f4-b681-c150ad270e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Started (Lifecycle Event)
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.850 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.855 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.859 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance spawned successfully.
Feb 25 12:46:22 compute-0 nova_compute[244014]: 2026-02-25 12:46:22.860 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.005 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.012 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.017 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.017 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.018 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.018 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.019 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.019 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
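
These six "Found default for ..." lines are nova pinning its per-image device defaults: any hw_* property the image did not set is resolved once against the current virt driver and recorded, so the instance keeps the same buses across reboots and rebuilds even if nova's defaults later change. The effect is just a setdefault pass over the image properties (condensed sketch):

    resolved_defaults = {
        'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio', 'hw_vif_model': 'virtio',
    }
    image_props = {}  # whatever the image itself defined
    for prop, default in resolved_defaults.items():
        image_props.setdefault(prop, default)  # persisted into system_metadata

The same values resurface later in the instance dump as image_hw_cdrom_bus='sata', image_hw_disk_bus='virtio', and so on.
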
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.042 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.8475673, 709a8b15-83eb-45f4-b681-c150ad270e01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Paused (Lifecycle Event)
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.075 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.080 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.854272, 709a8b15-83eb-45f4-b681-c150ad270e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.080 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Resumed (Lifecycle Event)
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.085 244018 INFO nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 13.19 seconds to spawn the instance on the hypervisor.
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.085 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.094 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.123 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (spawning). Skip.
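
The Started/Paused/Resumed burst is libvirt's view of a QEMU guest being created, briefly paused for setup, then resumed; nova replays each as a lifecycle event and tries to sync power state, but backs off whenever the instance still has a task_state, which is why both syncs above end in "Skip." The guard is essentially this (condensed from the log, not nova's literal code):

    def sync_power_state(instance, vm_power_state):
        # 'instance' is the DB view; 'vm_power_state' comes from the hypervisor.
        if instance['task_state'] is not None:
            # e.g. 'spawning': the build path owns this instance right now.
            print('pending task (%s). Skip.' % instance['task_state'])
            return
        if instance['power_state'] != vm_power_state:
            instance['power_state'] = vm_power_state  # persist and react

    # DB still says power_state 0 (NOSTATE) while the VM reports 1 (RUNNING):
    sync_power_state({'task_state': 'spawning', 'power_state': 0},
                     vm_power_state=1)
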
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.143 244018 INFO nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 14.27 seconds to build instance.
Feb 25 12:46:23 compute-0 nova_compute[244014]: 2026-02-25 12:46:23.158 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
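
Every event pop and the build itself run under oslo.concurrency named locks: a short-lived "<uuid>-events" lock guarding the event registry, and the instance-UUID lock held for the whole 14.353s build. Both patterns are plain lockutils (usage sketch):

    from oslo_concurrency import lockutils

    uuid = '709a8b15-83eb-45f4-b681-c150ad270e01'

    # Context-manager form for short critical sections, like _pop_event:
    with lockutils.lock(uuid + '-events'):
        pass  # mutate the per-instance event registry

    # Decorator form for the long build path:
    @lockutils.synchronized(uuid)
    def _locked_do_build_and_run_instance():
        pass  # spawn the instance while holding the per-instance lock

These are in-process locks by default; cross-process locking needs external=True and a configured lock path.
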
Feb 25 12:46:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:24 compute-0 ceph-mon[76335]: pgmap v1974: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.347 244018 DEBUG nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.347 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.348 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.348 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.349 244018 DEBUG nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.349 244018 WARNING nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state None.
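
This WARNING is benign: neutron can emit network-vif-plugged more than once around a port's status transitions, but the build path already consumed its waiter at 12:46:22, so the 12:46:24 copy finds nothing to wake. The registry behaves roughly like this (illustrative sketch only):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev  # the build path blocks on ev.wait(timeout)

        def pop(self, uuid, name):
            ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                print('Received unexpected event %s' % name)  # the WARNING above
            else:
                ev.set()

    reg = InstanceEvents()
    w = reg.prepare('709a8b15', 'network-vif-plugged-a0f62675')
    reg.pop('709a8b15', 'network-vif-plugged-a0f62675')  # matched, wakes waiter
    reg.pop('709a8b15', 'network-vif-plugged-a0f62675')  # duplicate -> warning
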
Feb 25 12:46:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.919 244018 DEBUG nova.compute.manager [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG nova.compute.manager [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:24 compute-0 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.011 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.011 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.012 244018 INFO nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Terminating instance
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.012 244018 DEBUG nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:46:25 compute-0 kernel: tap443e71ca-9c (unregistering): left promiscuous mode
Feb 25 12:46:25 compute-0 NetworkManager[49836]: <info>  [1772023585.0790] device (tap443e71ca-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01177|binding|INFO|Releasing lport 443e71ca-9c38-40e8-b799-1026ef47c35d from this chassis (sb_readonly=0)
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01178|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d down in Southbound
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01179|binding|INFO|Removing iface tap443e71ca-9c ovn-installed in OVS
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 kernel: tap39f87175-7c (unregistering): left promiscuous mode
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 NetworkManager[49836]: <info>  [1772023585.1104] device (tap39f87175-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01180|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01181|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01182|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.110 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:e2:13 10.100.0.4'], port_security=['fa:16:3e:98:e2:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=443e71ca-9c38-40e8-b799-1026ef47c35d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.112 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 443e71ca-9c38-40e8-b799-1026ef47c35d in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.116 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff
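
Teardown is driven from the OVN southbound DB: ovn-controller clears the Port_Binding's chassis column, ovsdbapp's matcher (the "Matched UPDATE: PortBindingUpdatedEvent" dump above) sees up go [True] -> [False] with an emptied chassis, and the agent concludes the port left this hypervisor. A skeletal version of such a watcher, assuming ovsdbapp's RowEvent base class as the constructor arguments in the logged repr suggest:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' only carries columns that changed; an emptied chassis
            # means the port was unbound from this hypervisor.
            if getattr(old, 'chassis', None) and not row.chassis:
                print('Port %s unbound from our chassis' % row.logical_port)

The "Provisioning metadata for network ..." INFO that follows is the agent re-walking that datapath to decide whether its namespace and proxy are still needed.
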
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01183|binding|INFO|Releasing lport 39f87175-7c4f-4092-bce6-29b413c731cd from this chassis (sb_readonly=0)
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01184|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd down in Southbound
Feb 25 12:46:25 compute-0 ovn_controller[147040]: 2026-02-25T12:46:25Z|01185|binding|INFO|Removing iface tap39f87175-7c ovn-installed in OVS
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b74e9752-eda9-4fb0-931a-936db6a01f8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.151 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], port_security=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb2:ded3/64 2001:db8::f816:3eff:feb2:ded3/64', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=39f87175-7c4f-4092-bce6-29b413c731cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.156 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a55a244f-dabc-4100-8b6a-35c4dab9e33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Feb 25 12:46:25 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 13.752s CPU time.
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.158 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f816f4d7-96a5-4a74-9d00-45d3de0a9d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 systemd-machined[210048]: Machine qemu-146-instance-00000073 terminated.
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.177 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d92bc3d1-83c5-401d-9bc7-11ee2d48252e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eb00ac-c01f-44f6-8c9b-6a6aa5ff2a68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348217, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.200 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2c24ff-03cd-4578-be6f-a0b6947ef142]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547920, 'tstamp': 547920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348218, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547923, 'tstamp': 547923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348218, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
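
The two big privsep replies above are raw netlink answers (an RTM_NEWLINK, then RTM_NEWADDR records) for tapbb79e0fd-21 inside the ovnmeta namespace: the link is up and carries 10.100.0.2/28 plus the 169.254.169.254/32 the proxy binds to. The agent fetches these through its privsep daemon; the same query from a standalone script would go through pyroute2 (sketch, needs root):

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace.
    with NetNS('ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff') as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'),
                  '%s/%s' % (attrs.get('IFA_ADDRESS'), addr['prefixlen']))
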
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.212 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.213 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 39f87175-7c4f-4092-bce6-29b413c731cd in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.214 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.223 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76e342ca-5ceb-4ca1-8e9f-19a7c8155638]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 NetworkManager[49836]: <info>  [1772023585.2379] manager: (tap39f87175-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.247 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52fdd905-0560-47f1-a67c-36da88f4cf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.252 244018 INFO nova.virt.libvirt.driver [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance destroyed successfully.
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.252 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44d64846-37a9-4f75-8327-517434d031b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.252 244018 DEBUG nova.objects.instance [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.275 244018 DEBUG nova.virt.libvirt.vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": 
"443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.276 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.276 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.277 244018 DEBUG os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443e71ca-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.283 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[839e73a6-0847-4d75-8ecd-0e8214da0061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.285 244018 INFO os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c')
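
os-vif is the library boundary here: nova converts its VIF dict into the VIFOpenVSwitch object shown in the "Converted object" DEBUG line and calls os_vif.unplug(), which routes to the 'ovs' plugin and issues the DelPortCommand seen moments earlier. Roughly, with field values taken from this log (a sketch; in nova the object also carries the full Network):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the registered plugins, including 'ovs'

    vif = VIFOpenVSwitch(id='443e71ca-9c38-40e8-b799-1026ef47c35d',
                         address='fa:16:3e:98:e2:13',
                         bridge_name='br-int',
                         vif_name='tap443e71ca-9c',
                         plugin='ovs')
    inst = InstanceInfo(uuid='2b9ea98f-9b1a-495b-8669-e79da967b0ab',
                        name='tempest-TestGettingAddress-server-1283323036')

    # Removes tap443e71ca-9c from br-int via the plugin's ovsdb transaction.
    os_vif.unplug(vif, inst)
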
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.286 244018 DEBUG nova.virt.libvirt.vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.286 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.287 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.287 244018 DEBUG os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.289 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39f87175-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.294 244018 INFO os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c')
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.299 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0aece9f6-32df-4949-b39b-c687c1a8d59c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348247, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.311 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[905a9fc9-10be-401b-944d-d40d4edc0d8e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6df13266-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548006, 'tstamp': 548006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348262, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.313 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.319 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.688 244018 INFO nova.virt.libvirt.driver [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deleting instance files /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab_del
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.690 244018 INFO nova.virt.libvirt.driver [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deletion of /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab_del complete
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.767 244018 INFO nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.767 244018 DEBUG oslo.service.loopingcall [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.768 244018 DEBUG nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.768 244018 DEBUG nova.network.neutron [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:25 compute-0 nova_compute[244014]: 2026-02-25 12:46:25.841 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:46:26 compute-0 ceph-mon[76335]: pgmap v1975: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:46:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 9 op/s
Feb 25 12:46:26 compute-0 nova_compute[244014]: 2026-02-25 12:46:26.742 244018 DEBUG nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-deleted-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:26 compute-0 nova_compute[244014]: 2026-02-25 12:46:26.742 244018 INFO nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Neutron deleted interface 39f87175-7c4f-4092-bce6-29b413c731cd; detaching it from the instance and deleting it from the info cache
Feb 25 12:46:26 compute-0 nova_compute[244014]: 2026-02-25 12:46:26.743 244018 DEBUG nova.network.neutron [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:26 compute-0 nova_compute[244014]: 2026-02-25 12:46:26.767 244018 DEBUG nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Detach interface failed, port_id=39f87175-7c4f-4092-bce6-29b413c731cd, reason: Instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.035 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.035 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.037 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.037 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.038 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.039 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:46:27 compute-0 podman[348268]: 2026-02-25 12:46:27.741679348 +0000 UTC m=+0.074579647 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.798 244018 DEBUG nova.network.neutron [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.826 244018 INFO nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 2.06 seconds to deallocate network for instance.
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.881 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.882 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.886 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.887 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:27 compute-0 nova_compute[244014]: 2026-02-25 12:46:27.917 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.056 244018 DEBUG oslo_concurrency.processutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:28 compute-0 ceph-mon[76335]: pgmap v1976: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 9 op/s
Feb 25 12:46:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 105 op/s
Feb 25 12:46:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.529060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588529089, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 256, "total_data_size": 298596, "memory_usage": 307752, "flush_reason": "Manual Compaction"}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588532176, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 296163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41939, "largest_seqno": 42348, "table_properties": {"data_size": 293729, "index_size": 534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5713, "raw_average_key_size": 17, "raw_value_size": 288948, "raw_average_value_size": 905, "num_data_blocks": 24, "num_entries": 319, "num_filter_entries": 319, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023568, "oldest_key_time": 1772023568, "file_creation_time": 1772023588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 3143 microseconds, and 1017 cpu microseconds.
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.532204) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 296163 bytes OK
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.532215) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533644) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533655) EVENT_LOG_v1 {"time_micros": 1772023588533652, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 295998, prev total WAL file size 295998, number of live WAL files 2.
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533978) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373634' seq:0, type:0; will stop at (end)
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(289KB)], [92(10MB)]
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588534039, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11575309, "oldest_snapshot_seqno": -1}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565904403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.591 244018 DEBUG oslo_concurrency.processutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.596 244018 DEBUG nova.compute.provider_tree [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6629 keys, 11459092 bytes, temperature: kUnknown
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588602344, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11459092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11411215, "index_size": 30229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 170463, "raw_average_key_size": 25, "raw_value_size": 11289081, "raw_average_value_size": 1702, "num_data_blocks": 1202, "num_entries": 6629, "num_filter_entries": 6629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.602611) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11459092 bytes
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.604403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(77.8) write-amplify(38.7) OK, records in: 7148, records dropped: 519 output_compression: NoCompression
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.604431) EVENT_LOG_v1 {"time_micros": 1772023588604419, "job": 54, "event": "compaction_finished", "compaction_time_micros": 68382, "compaction_time_cpu_micros": 29094, "output_level": 6, "num_output_files": 1, "total_output_size": 11459092, "num_input_records": 7148, "num_output_records": 6629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588604595, "job": 54, "event": "table_file_deletion", "file_number": 94}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588606055, "job": 54, "event": "table_file_deletion", "file_number": 92}
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.613 244018 DEBUG nova.scheduler.client.report [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.635 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.663 244018 INFO nova.scheduler.client.report [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab
Feb 25 12:46:28 compute-0 podman[348309]: 2026-02-25 12:46:28.733316678 +0000 UTC m=+0.085908077 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.742 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.844 244018 DEBUG nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-deleted-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.845 244018 INFO nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Neutron deleted interface 443e71ca-9c38-40e8-b799-1026ef47c35d; detaching it from the instance and deleting it from the info cache
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.846 244018 DEBUG nova.network.neutron [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 25 12:46:28 compute-0 nova_compute[244014]: 2026-02-25 12:46:28.848 244018 DEBUG nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Detach interface failed, port_id=443e71ca-9c38-40e8-b799-1026ef47c35d, reason: Instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.144 244018 DEBUG nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.145 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.145 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 DEBUG nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:29 compute-0 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 WARNING nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with vm_state deleted and task_state None.
Feb 25 12:46:29 compute-0 ceph-mon[76335]: pgmap v1977: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 105 op/s
Feb 25 12:46:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1565904403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.167 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.168 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.169 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.169 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.170 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.171 244018 INFO nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Terminating instance
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.173 244018 DEBUG nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:46:30 compute-0 kernel: tape50c9f03-a8 (unregistering): left promiscuous mode
Feb 25 12:46:30 compute-0 NetworkManager[49836]: <info>  [1772023590.2158] device (tape50c9f03-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01186|binding|INFO|Releasing lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df from this chassis (sb_readonly=0)
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01187|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df down in Southbound
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01188|binding|INFO|Removing iface tape50c9f03-a8 ovn-installed in OVS
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.232 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:4a:25 10.100.0.14'], port_security=['fa:16:3e:47:4a:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.233 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.235 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79e0fd-2a4d-4a70-9c80-4853297401ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
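The PortBindingUpdatedEvent match above is ovsdbapp's IDL event machinery: the metadata agent registers row events against the Southbound Port_Binding table and reacts when a port's chassis binding changes. A minimal sketch of that pattern (the class name and match logic here are illustrative, not Neutron's actual implementation):

    # Sketch of the ovsdbapp row-event pattern behind the matched UPDATE above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):  # illustrative class name
        def __init__(self):
            # Fire on updates to the Southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only interesting when the chassis column changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s binding changed' % row.logical_port)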
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8fece5-a5ce-4c7c-a7fc-384d38d02b72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 kernel: tap68ffeedb-a9 (unregistering): left promiscuous mode
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.237 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff namespace which is not needed anymore
Feb 25 12:46:30 compute-0 NetworkManager[49836]: <info>  [1772023590.2411] device (tap68ffeedb-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01189|binding|INFO|Releasing lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d from this chassis (sb_readonly=0)
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01190|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d down in Southbound
Feb 25 12:46:30 compute-0 ovn_controller[147040]: 2026-02-25T12:46:30Z|01191|binding|INFO|Removing iface tap68ffeedb-a9 ovn-installed in OVS
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.267 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], port_security=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe60:df22/64 2001:db8::f816:3eff:fe60:df22/64', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:30 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Feb 25 12:46:30 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 15.517s CPU time.
Feb 25 12:46:30 compute-0 systemd-machined[210048]: Machine qemu-143-instance-00000072 terminated.
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : haproxy version is 2.8.14-c23fe91
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : path to executable is /usr/sbin/haproxy
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [WARNING]  (345461) : Exiting Master process...
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [ALERT]    (345461) : Current worker (345466) exited with code 143 (Terminated)
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [WARNING]  (345461) : All workers exited. Exiting... (0)
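The "exited with code 143" alert is expected during this teardown rather than a crash: supervisors encode "killed by signal N" as exit status 128 + N, and 143 is 128 + SIGTERM (15). In Python terms:

    # Why haproxy reports 143: 128 + SIGTERM, i.e. a clean stop signal.
    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM == 15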
Feb 25 12:46:30 compute-0 systemd[1]: libpod-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope: Deactivated successfully.
Feb 25 12:46:30 compute-0 podman[348365]: 2026-02-25 12:46:30.361837512 +0000 UTC m=+0.043978053 container died d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633-userdata-shm.mount: Deactivated successfully.
Feb 25 12:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-86c13903373d387e7781319c35f624b48a598e4d3ad7505fd786416ab8b5fa73-merged.mount: Deactivated successfully.
Feb 25 12:46:30 compute-0 podman[348365]: 2026-02-25 12:46:30.403115968 +0000 UTC m=+0.085256519 container cleanup d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:46:30 compute-0 NetworkManager[49836]: <info>  [1772023590.4060] manager: (tap68ffeedb-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Feb 25 12:46:30 compute-0 systemd[1]: libpod-conmon-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope: Deactivated successfully.
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 104 op/s
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.422 244018 INFO nova.virt.libvirt.driver [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance destroyed successfully.
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.423 244018 DEBUG nova.objects.instance [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.451 244018 DEBUG nova.virt.libvirt.vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.452 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.452 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.453 244018 DEBUG os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
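The preceding lines show nova converting its VIF dict into an os_vif VIFOpenVSwitch object and handing it to os_vif's unplug entry point. A compressed sketch of that API surface, with field values copied from the log and the full Network/Subnet objects and error handling omitted:

    # Sketch of the os_vif unplug call path logged above (fields trimmed).
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()
    inst = instance_info.InstanceInfo(
        uuid='dd8c9142-2607-4722-90eb-65233f258639',
        name='instance-00000072')
    vif = vif_obj.VIFOpenVSwitch(
        id='e50c9f03-a8a5-48d1-a34b-4a8fd638d5df',
        address='fa:16:3e:47:4a:25',
        bridge_name='br-int',
        vif_name='tape50c9f03-a8')
    os_vif.unplug(vif, inst)  # dispatches to the 'ovs' plugin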
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.455 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50c9f03-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
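That DelPortCommand transaction is what actually detaches the tap device from br-int. Through ovsdbapp's Open_vSwitch schema API the same operation looks roughly like this, assuming the default local ovsdb-server socket path (an assumption; the socket is not shown in this log):

    # Sketch: the del-port transaction from the log, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of DelPortCommand(port=..., bridge='br-int', if_exists=True)
    api.del_port('tape50c9f03-a8', bridge='br-int',
                 if_exists=True).execute(check_error=True)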
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.462 244018 INFO os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8')
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.463 244018 DEBUG nova.virt.libvirt.vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.463 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.464 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.465 244018 DEBUG os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:46:30 compute-0 podman[348409]: 2026-02-25 12:46:30.465487039 +0000 UTC m=+0.041215145 container remove d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68ffeedb-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.471 244018 INFO os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9')
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.472 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fca30e9b-fd80-4230-b2ea-cecd574d51de]: (4, ('Wed Feb 25 12:46:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff (d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633)\nd9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633\nWed Feb 25 12:46:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff (d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633)\nd9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.474 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8348202-7833-4155-8b7c-913bf16fd034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.475 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:30 compute-0 kernel: tapbb79e0fd-20: left promiscuous mode
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbb241-83e0-4280-b513-f4e7fafd789e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.497 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f537ef4-fecb-47e8-a8bc-b3798b94eb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fabeb5d9-4829-4570-bca0-bb3218095880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.518 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85169c15-b7ad-4a37-9782-1b79f6d1b95b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547872, 'reachable_time': 20125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348445, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dbb79e0fd\x2d2a4d\x2d4a70\x2d9c80\x2d4853297401ff.mount: Deactivated successfully.
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.526 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
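The namespace removal above runs through neutron's privsep daemon into pyroute2. Stripped of the privsep indirection, the underlying primitive is roughly the following (namespace name copied from the log; requires root, as the real agent does):

    # Sketch of the netns removal performed by remove_netns.
    from pyroute2 import netns

    name = 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff'  # from the log
    if name in netns.listnetns():
        netns.remove(name)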
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.526 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea0e68a-c388-49e7-aa60-9e2cadddb75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.528 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.530 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6deeeeaf-68a8-4533-ae7e-e3672d1eecf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.533 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 namespace which is not needed anymore
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : haproxy version is 2.8.14-c23fe91
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : path to executable is /usr/sbin/haproxy
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [WARNING]  (345748) : Exiting Master process...
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [ALERT]    (345748) : Current worker (345750) exited with code 143 (Terminated)
Feb 25 12:46:30 compute-0 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [WARNING]  (345748) : All workers exited. Exiting... (0)
Feb 25 12:46:30 compute-0 systemd[1]: libpod-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope: Deactivated successfully.
Feb 25 12:46:30 compute-0 conmon[345726]: conmon 2258344ca27eda115bfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope/container/memory.events
Feb 25 12:46:30 compute-0 podman[348466]: 2026-02-25 12:46:30.706099733 +0000 UTC m=+0.077362505 container died 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989-userdata-shm.mount: Deactivated successfully.
Feb 25 12:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3db6813ebd7b49f3ead32cb60564520a6c07da0da03dd8df369d119009fd878-merged.mount: Deactivated successfully.
Feb 25 12:46:30 compute-0 podman[348466]: 2026-02-25 12:46:30.745414613 +0000 UTC m=+0.116677375 container cleanup 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.757 244018 INFO nova.virt.libvirt.driver [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deleting instance files /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639_del
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.758 244018 INFO nova.virt.libvirt.driver [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deletion of /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639_del complete
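The libvirt driver renames the instance directory to "<uuid>_del" and then removes the tree; the two lines above bracket that removal. In effect:

    # Sketch of the instance-file cleanup logged above.
    import shutil

    path = '/var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639_del'
    shutil.rmtree(path, ignore_errors=True)  # the real driver logs failures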
Feb 25 12:46:30 compute-0 systemd[1]: libpod-conmon-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope: Deactivated successfully.
Feb 25 12:46:30 compute-0 podman[348496]: 2026-02-25 12:46:30.806287272 +0000 UTC m=+0.043481509 container remove 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.810 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb83b63a-ac03-433e-b39b-c58f8b913134]: (4, ('Wed Feb 25 12:46:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 (2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989)\n2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989\nWed Feb 25 12:46:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 (2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989)\n2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22c0b904-27ae-417f-9f37-dca45f5c95df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.813 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 kernel: tap6df13266-b0: left promiscuous mode
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.826 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a88f42e7-6c21-4164-abba-0c590355fbf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.840 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e941538-14f0-4875-8792-41b11d806e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f93bf1a3-8d66-41b7-b923-414317024b9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.851 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[080bfa0e-121e-49a1-ad38-2140b97e7281]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547987, 'reachable_time': 20751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348511, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.858 244018 INFO nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.858 244018 DEBUG oslo.service.loopingcall [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.859 244018 DEBUG nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.859 244018 DEBUG nova.network.neutron [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
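The "Waiting for function ... _deallocate_network_with_retries" line above is oslo.service's looping-call machinery: nova wraps the Neutron deallocation in a retrying loop and blocks on its result. The general pattern, with a placeholder function body standing in for nova's real retry closure:

    # Sketch of the oslo.service looping-call pattern named in the log.
    from oslo_service import loopingcall

    def _deallocate():
        # ... call neutron; return normally to keep looping, or finish with:
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
    result = timer.start(interval=1).wait()  # blocks until LoopingCallDone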
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.862 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:46:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.862 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2e43cc41-a231-4bec-815b-b7943eb75610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.930 244018 DEBUG nova.compute.manager [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.931 244018 DEBUG nova.compute.manager [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:30 compute-0 nova_compute[244014]: 2026-02-25 12:46:30.931 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:46:30
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'volumes']
Feb 25 12:46:30 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
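The ceph-mgr balancer pass above evaluated the listed pools in upmap mode and prepared no changes (0/10), i.e. placement is already balanced within the 5% misplaced threshold. To check the same state interactively, assuming the ceph CLI and a usable keyring on this host (neither is shown in this log):

    # Sketch: query the balancer module the mgr log lines come from.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout))  # expect mode 'upmap', as logged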
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.240 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.242 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.242 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.243 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.243 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.244 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
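The acquire/release pair around the "<uuid>-events" lock in the lines above is oslo.concurrency's lockutils; nova serializes external event dispatch per instance this way. The idiom, with the lock name taken from the log:

    # Sketch of the oslo.concurrency lock pattern in the lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('dd8c9142-2607-4722-90eb-65233f258639-events')
    def _pop_event():
        # ... pop and dispatch the pending network-vif-unplugged event ...
        pass

    _pop_event()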
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.383 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d6df13266\x2dbcfe\x2d4a5b\x2d94c4\x2d81b5f08a6c21.mount: Deactivated successfully.
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.409 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.410 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.411 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.412 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.415 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.416 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.416 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.418 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.419 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.444 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.444 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.445 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:46:31 compute-0 nova_compute[244014]: 2026-02-25 12:46:31.445 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:31 compute-0 ceph-mon[76335]: pgmap v1978: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 104 op/s
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:46:31 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:46:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019240042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.023 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.100 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.101 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.322 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.327 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3402MB free_disk=59.920832998119295GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.327 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.328 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.398 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd8c9142-2607-4722-90eb-65233f258639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 709a8b15-83eb-45f4-b681-c150ad270e01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.401 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:46:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 231 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 122 op/s
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.468 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3019240042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733207259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.990 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:32 compute-0 nova_compute[244014]: 2026-02-25 12:46:32.996 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.015 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.038 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.038 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.327 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.327 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.328 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.329 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.329 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.478 244018 DEBUG nova.network.neutron [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.498 244018 INFO nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 2.64 seconds to deallocate network for instance.
Feb 25 12:46:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.541 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.542 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:33 compute-0 ceph-mon[76335]: pgmap v1979: 305 pgs: 305 active+clean; 231 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 122 op/s
Feb 25 12:46:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/733207259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.597 244018 DEBUG oslo_concurrency.processutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.658 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.659 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:33 compute-0 nova_compute[244014]: 2026-02-25 12:46:33.682 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907871664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.144 244018 DEBUG oslo_concurrency.processutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.149 244018 DEBUG nova.compute.provider_tree [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.172 244018 DEBUG nova.scheduler.client.report [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.206 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.227 244018 INFO nova.scheduler.client.report [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dd8c9142-2607-4722-90eb-65233f258639
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.313 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.369 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.369 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.392 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.393 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.394 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.394 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 WARNING nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with vm_state active and task_state deleting.
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.396 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.396 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 WARNING nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for instance with vm_state active and task_state deleting.
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-deleted-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 INFO nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Neutron deleted interface 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d; detaching it from the instance and deleting it from the info cache
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 131 op/s
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.423 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Detach interface failed, port_id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d, reason: Instance dd8c9142-2607-4722-90eb-65233f258639 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:46:34 compute-0 nova_compute[244014]: 2026-02-25 12:46:34.495 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3907871664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:34 compute-0 ovn_controller[147040]: 2026-02-25T12:46:34Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:b0:61 10.100.0.7
Feb 25 12:46:34 compute-0 ovn_controller[147040]: 2026-02-25T12:46:34Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:b0:61 10.100.0.7
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.428 244018 DEBUG nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-deleted-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.429 244018 INFO nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Neutron deleted interface e50c9f03-a8a5-48d1-a34b-4a8fd638d5df; detaching it from the instance and deleting it from the info cache
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.429 244018 DEBUG nova.network.neutron [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.432 244018 DEBUG nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Detach interface failed, port_id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df, reason: Instance dd8c9142-2607-4722-90eb-65233f258639 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:46:35 compute-0 nova_compute[244014]: 2026-02-25 12:46:35.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:35 compute-0 ceph-mon[76335]: pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 131 op/s
Feb 25 12:46:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 123 op/s
Feb 25 12:46:36 compute-0 nova_compute[244014]: 2026-02-25 12:46:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:36 compute-0 nova_compute[244014]: 2026-02-25 12:46:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:46:37 compute-0 ceph-mon[76335]: pgmap v1981: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 123 op/s
Feb 25 12:46:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Feb 25 12:46:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:39 compute-0 ceph-mon[76335]: pgmap v1982: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.249 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023585.2485282, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.250 244018 INFO nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Stopped (Lifecycle Event)
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.266 244018 DEBUG nova.compute.manager [None req-a3603cee-7a94-4a45-86bb-98893f68a113 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.535 244018 INFO nova.compute.manager [None req-ff6ea604-fffb-4001-9a66-7e4704310772 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.545 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:46:40 compute-0 ovn_controller[147040]: 2026-02-25T12:46:40Z|01192|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 12:46:40 compute-0 nova_compute[244014]: 2026-02-25 12:46:40.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.015 244018 DEBUG nova.objects.instance [None req-1c5c5436-4c19-4fcf-8592-efe6056f8235 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.047 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023601.0466552, 709a8b15-83eb-45f4-b681-c150ad270e01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Paused (Lifecycle Event)
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.065 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.071 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.089 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.481 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.483 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:46:41 compute-0 ceph-mon[76335]: pgmap v1983: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:46:41 compute-0 kernel: tapa0f62675-55 (unregistering): left promiscuous mode
Feb 25 12:46:41 compute-0 NetworkManager[49836]: <info>  [1772023601.7308] device (tapa0f62675-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:41 compute-0 ovn_controller[147040]: 2026-02-25T12:46:41Z|01193|binding|INFO|Releasing lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 from this chassis (sb_readonly=0)
Feb 25 12:46:41 compute-0 ovn_controller[147040]: 2026-02-25T12:46:41Z|01194|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 down in Southbound
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:41 compute-0 ovn_controller[147040]: 2026-02-25T12:46:41Z|01195|binding|INFO|Removing iface tapa0f62675-55 ovn-installed in OVS
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.744 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 unbound from our chassis
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.747 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf745dc4-78d4-4d88-8794-da49c21b9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.749 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd643433-ef82-48e6-a922-0a1b37072025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace which is not needed anymore
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:41 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 25 12:46:41 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 12.636s CPU time.
Feb 25 12:46:41 compute-0 systemd-machined[210048]: Machine qemu-147-instance-00000074 terminated.
Feb 25 12:46:41 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : haproxy version is 2.8.14-c23fe91
Feb 25 12:46:41 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : path to executable is /usr/sbin/haproxy
Feb 25 12:46:41 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [WARNING]  (348148) : Exiting Master process...
Feb 25 12:46:41 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [ALERT]    (348148) : Current worker (348150) exited with code 143 (Terminated)
Feb 25 12:46:41 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [WARNING]  (348148) : All workers exited. Exiting... (0)
Feb 25 12:46:41 compute-0 systemd[1]: libpod-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope: Deactivated successfully.
Feb 25 12:46:41 compute-0 podman[348607]: 2026-02-25 12:46:41.903681414 +0000 UTC m=+0.047891653 container died fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:46:41 compute-0 nova_compute[244014]: 2026-02-25 12:46:41.903 244018 DEBUG nova.compute.manager [None req-1c5c5436-4c19-4fcf-8592-efe6056f8235 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87-userdata-shm.mount: Deactivated successfully.
Feb 25 12:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f539fbb4b8a2e0d559dbacdf783585633e2c940d6c86cd49a3a52f168fc29a4e-merged.mount: Deactivated successfully.
Feb 25 12:46:41 compute-0 podman[348607]: 2026-02-25 12:46:41.95770931 +0000 UTC m=+0.101919529 container cleanup fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:46:41 compute-0 systemd[1]: libpod-conmon-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope: Deactivated successfully.
Feb 25 12:46:42 compute-0 podman[348647]: 2026-02-25 12:46:42.019192216 +0000 UTC m=+0.042389938 container remove fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.027 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b00fc29-a4c0-4f73-8f41-13ab8bebe099]: (4, ('Wed Feb 25 12:46:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87)\nfde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87\nWed Feb 25 12:46:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87)\nfde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.027 244018 DEBUG nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.028 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be061554-744d-4e91-8610-f94bca556497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.029 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.029 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.030 244018 DEBUG nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.030 244018 WARNING nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state None.
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:42 compute-0 kernel: tapbf745dc4-70: left promiscuous mode
Feb 25 12:46:42 compute-0 nova_compute[244014]: 2026-02-25 12:46:42.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.052 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04fb9f37-cf43-4512-8b36-fcc67b9954ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.067 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bca4ed68-aa20-45c1-aa4a-f60d49424ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[80b082c9-7c20-4813-9715-50c5b5b5129c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.087 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26c87a1e-c35f-465d-8664-8d452893eae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555139, 'reachable_time': 25620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348667, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.092 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:46:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dbf745dc4\x2d78d4\x2d4d88\x2d8794\x2dda49c21b9f38.mount: Deactivated successfully.
Feb 25 12:46:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.092 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[23a1bc6b-1fb8-4853-91de-df7b20a22fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007744472287801465 of space, bias 1.0, pg target 0.23233416863404394 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493995175414729 of space, bias 1.0, pg target 0.7481985526244187 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068357350033398e-06 of space, bias 4.0, pg target 0.0016882028820040078 quantized to 16 (current 16)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:46:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.699504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602699534, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 251, "total_data_size": 242446, "memory_usage": 250888, "flush_reason": "Manual Compaction"}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602704054, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 240265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42349, "largest_seqno": 42725, "table_properties": {"data_size": 238012, "index_size": 417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5649, "raw_average_key_size": 18, "raw_value_size": 233544, "raw_average_value_size": 768, "num_data_blocks": 19, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023589, "oldest_key_time": 1772023589, "file_creation_time": 1772023602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4617 microseconds, and 1926 cpu microseconds.
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.704113) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 240265 bytes OK
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.704139) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706598) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706628) EVENT_LOG_v1 {"time_micros": 1772023602706618, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 239988, prev total WAL file size 239988, number of live WAL files 2.
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.707092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(234KB)], [95(10MB)]
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602707232, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11699357, "oldest_snapshot_seqno": -1}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6424 keys, 10047448 bytes, temperature: kUnknown
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602767541, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10047448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10002243, "index_size": 28071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 166893, "raw_average_key_size": 25, "raw_value_size": 9884955, "raw_average_value_size": 1538, "num_data_blocks": 1103, "num_entries": 6424, "num_filter_entries": 6424, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.767896) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10047448 bytes
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.769097) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.8 rd, 166.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(90.5) write-amplify(41.8) OK, records in: 6933, records dropped: 509 output_compression: NoCompression
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.769127) EVENT_LOG_v1 {"time_micros": 1772023602769113, "job": 56, "event": "compaction_finished", "compaction_time_micros": 60367, "compaction_time_cpu_micros": 22099, "output_level": 6, "num_output_files": 1, "total_output_size": 10047448, "num_input_records": 6933, "num_output_records": 6424, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602769307, "job": 56, "event": "table_file_deletion", "file_number": 97}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602771102, "job": 56, "event": "table_file_deletion", "file_number": 95}
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.707023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:42 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:46:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:43 compute-0 ceph-mon[76335]: pgmap v1984: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.097 244018 DEBUG nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.098 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.098 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 DEBUG nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 WARNING nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state None.
Feb 25 12:46:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 25 12:46:44 compute-0 nova_compute[244014]: 2026-02-25 12:46:44.791 244018 INFO nova.compute.manager [None req-5cf55615-aefe-462a-88a6-e8e407639019 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.001 244018 INFO nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Resuming
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.002 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.041 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.041 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.042 244018 DEBUG nova.network.neutron [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.421 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023590.4197423, dd8c9142-2607-4722-90eb-65233f258639 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.422 244018 INFO nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Stopped (Lifecycle Event)
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.467 244018 DEBUG nova.compute.manager [None req-81d7a25e-c4b7-455c-8188-e3c7e9c8df41 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:45 compute-0 nova_compute[244014]: 2026-02-25 12:46:45.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:45.487 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:45 compute-0 ceph-mon[76335]: pgmap v1985: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 25 12:46:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.817 244018 DEBUG nova.network.neutron [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.837 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.844 244018 DEBUG nova.virt.libvirt.vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:46:41Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.844 244018 DEBUG nova.network.os_vif_util [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.846 244018 DEBUG nova.network.os_vif_util [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.847 244018 DEBUG os_vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.848 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.849 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.853 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f62675-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.853 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f62675-55, col_values=(('external_ids', {'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:b0:61', 'vm-uuid': '709a8b15-83eb-45f4-b681-c150ad270e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.854 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.855 244018 INFO os_vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.882 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:46 compute-0 kernel: tapa0f62675-55: entered promiscuous mode
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 ovn_controller[147040]: 2026-02-25T12:46:46Z|01196|binding|INFO|Claiming lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for this chassis.
Feb 25 12:46:46 compute-0 ovn_controller[147040]: 2026-02-25T12:46:46Z|01197|binding|INFO|a0f62675-5535-4e75-98f6-dc4fbadbc4c9: Claiming fa:16:3e:56:b0:61 10.100.0.7
Feb 25 12:46:46 compute-0 NetworkManager[49836]: <info>  [1772023606.9625] manager: (tapa0f62675-55): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.970 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.972 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 bound to our chassis
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.974 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 ovn_controller[147040]: 2026-02-25T12:46:46Z|01198|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 ovn-installed in OVS
Feb 25 12:46:46 compute-0 ovn_controller[147040]: 2026-02-25T12:46:46Z|01199|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 up in Southbound
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 nova_compute[244014]: 2026-02-25 12:46:46.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bff9b017-551f-4b4d-8f46-f0e39ed9cc70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.992 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf745dc4-71 in ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.995 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf745dc4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61e9a55c-e356-4bf7-8282-6ba9bd746c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.997 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46a94d44-fc07-45f8-aae4-343a6f9670a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 systemd-machined[210048]: New machine qemu-148-instance-00000074.
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.013 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1668d6c5-c5bf-4701-8afc-a4d75859a69c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Feb 25 12:46:47 compute-0 systemd-udevd[348685]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.040 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed6b96b-6b29-4ab3-8906-ddc643dfc8b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 NetworkManager[49836]: <info>  [1772023607.0446] device (tapa0f62675-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:46:47 compute-0 NetworkManager[49836]: <info>  [1772023607.0466] device (tapa0f62675-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.076 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[95ec7651-8b21-4ea5-888f-2cbd5750bde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.082 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9b556a-3a2c-4ffc-827d-b4eeda635de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 NetworkManager[49836]: <info>  [1772023607.0849] manager: (tapbf745dc4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.118 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b980ea43-6f1e-4f2b-b365-fca436ffba10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.121 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc40d63-36e6-49ec-b4c4-233a3d34eeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 NetworkManager[49836]: <info>  [1772023607.1391] device (tapbf745dc4-70): carrier: link connected
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.146 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f1d9fa-9d81-493e-bab7-1eb097369caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb00bad-a2c5-42b1-a185-3b49c53b2007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557654, 'reachable_time': 25679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348715, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.175 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d189375-4a43-4e41-8c0c-70a69b71f9da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:7daf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557654, 'tstamp': 557654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348716, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e08c4cfa-0040-469e-8d58-2bdd8f78ba1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557654, 'reachable_time': 25679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.218 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f328f1-fb85-4839-947d-d407358168f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
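The RTM_NEWLINK/RTM_NEWADDR payloads replayed above are pyroute2 netlink messages that the agent fetches through its oslo.privsep daemon inside the ovnmeta- namespace (the 'target' field in each header names it). A minimal sketch of the same query done directly with pyroute2, assuming root, the package installed, and that the namespace from the log still exists:

    import socket
    from pyroute2 import NetNS

    NS_NAME = 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38'  # from the log

    with NetNS(NS_NAME) as ns:
        # Mirrors the IFLA_* attributes in the privsep replies above.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        # Mirrors the RTM_NEWADDR reply (the fe80:: link-local address).
        for addr in ns.get_addr(family=socket.AF_INET6):
            print(addr.get_attr('IFA_ADDRESS'))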
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.229 244018 DEBUG nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.230 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.230 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 DEBUG nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 WARNING nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state resuming.
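This WARNING is benign during a resume: Neutron's network-vif-plugged notification raced ahead of any registered waiter, so pop_instance_event found nothing to dispatch. A small parsing sketch for pulling these warnings out of a journal dump, assuming only the line shape shown above:

    import re

    PATTERN = re.compile(
        r'Received unexpected event (?P<event>\S+) for instance '
        r'with vm_state (?P<vm_state>\S+) and task_state (?P<task_state>\S+)\.')

    def unexpected_events(lines):
        # Yields (event, vm_state, task_state) for each matching line.
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group('event'), m.group('vm_state'), m.group('task_state')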
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.269 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[902f6c46-ad0e-436e-a164-1f73ea9bcdce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.272 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf745dc4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:47 compute-0 NetworkManager[49836]: <info>  [1772023607.2753] manager: (tapbf745dc4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Feb 25 12:46:47 compute-0 kernel: tapbf745dc4-70: entered promiscuous mode
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.277 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf745dc4-70, col_values=(('external_ids', {'iface-id': '92ae33df-1d64-498f-b132-6ef0663f81e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
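The transactions above move the metadata tap onto br-int and tag its Interface row with an iface-id, which is how ovn-controller learns which logical port the device backs. A sketch of the same sequence with ovsdbapp, assuming the usual local ovsdb socket path (the agent itself holds a long-lived connection instead of building one per call):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapbf745dc4-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapbf745dc4-70', may_exist=True))
        txn.add(api.db_set('Interface', 'tapbf745dc4-70',
                           ('external_ids',
                            {'iface-id': '92ae33df-1d64-498f-b132-6ef0663f81e9'})))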
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:47 compute-0 ovn_controller[147040]: 2026-02-25T12:46:47Z|01200|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.280 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.280 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[417d4412-2419-4c4b-bcf5-1380aa5032aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.281 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:46:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.282 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'env', 'PROCESS_TAG=haproxy-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf745dc4-78d4-4d88-8794-da49c21b9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
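At this point the rendered configuration has been written under /var/lib/neutron/ovn-metadata-proxy/ and haproxy is started inside the ovnmeta namespace via rootwrap. A sketch for validating that file by hand, assuming root and the paths from the log (haproxy -c only checks the configuration, it does not daemonize):

    import subprocess

    NETNS = 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           'bf745dc4-78d4-4d88-8794-da49c21b9f38.conf')

    check = subprocess.run(
        ['ip', 'netns', 'exec', NETNS, 'haproxy', '-c', '-f', CFG],
        capture_output=True, text=True)
    print(check.returncode, (check.stdout + check.stderr).strip())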
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:46:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:46:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:46:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
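The two audit entries are capacity polling from client.openstack (most likely the block-storage service checking the volumes pool). The same pair of mon commands can be issued through the python rados bindings; a sketch assuming a readable ceph.conf and a client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({'prefix': 'df', 'format': 'json'},
                    {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                     'format': 'json'}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], ret, out[:80])
    finally:
        cluster.shutdown()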
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.550 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 709a8b15-83eb-45f4-b681-c150ad270e01 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.551 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023607.549498, 709a8b15-83eb-45f4-b681-c150ad270e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.551 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Started (Lifecycle Event)
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.573 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.579 244018 DEBUG nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.580 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.585 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.601 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance running successfully.
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.603 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (resuming). Skip.
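The numeric states in the sync messages are nova's power-state constants: DB power_state 4 is SHUTDOWN (how a suspended, managed-saved guest is recorded) while libvirt now reports 1, RUNNING, so the sync is skipped because the 'resuming' task owns the transition. A lookup sketch, with values copied from upstream nova.compute.power_state, which this deployment appears to run:

    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 4, 1  # from the log line above
    print(f'{POWER_STATE[db_state]} -> {POWER_STATE[vm_state]}')  # SHUTDOWN -> RUNNING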
Feb 25 12:46:47 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.604 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023607.5547223, 709a8b15-83eb-45f4-b681-c150ad270e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.605 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Resumed (Lifecycle Event)
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.610 244018 DEBUG nova.virt.libvirt.guest [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.610 244018 DEBUG nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.645 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.648 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:46:47 compute-0 podman[348790]: 2026-02-25 12:46:47.650320901 +0000 UTC m=+0.063089353 container create 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:46:47 compute-0 systemd[1]: Started libpod-conmon-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope.
Feb 25 12:46:47 compute-0 nova_compute[244014]: 2026-02-25 12:46:47.688 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 25 12:46:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45e31c374f7f8967ca524d1b85ad5a63827aeadab1c7beb276d1f90b01fde07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:46:47 compute-0 ceph-mon[76335]: pgmap v1986: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:46:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:46:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:46:47 compute-0 podman[348790]: 2026-02-25 12:46:47.626894349 +0000 UTC m=+0.039662841 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:46:47 compute-0 podman[348790]: 2026-02-25 12:46:47.733205081 +0000 UTC m=+0.145973613 container init 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:46:47 compute-0 podman[348790]: 2026-02-25 12:46:47.740564479 +0000 UTC m=+0.153332961 container start 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:46:47 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : New worker (348811) forked
Feb 25 12:46:47 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : Loading success.
Feb 25 12:46:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 12:46:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.303 244018 DEBUG nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.304 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.304 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.305 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.305 244018 DEBUG nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:49 compute-0 nova_compute[244014]: 2026-02-25 12:46:49.306 244018 WARNING nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state None.
Feb 25 12:46:49 compute-0 ceph-mon[76335]: pgmap v1987: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.269 244018 INFO nova.compute.manager [None req-616fa0f6-5322-435f-958a-54738a92ad05 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.276 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
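The "can't concat NoneType to bytes" message is a TypeError swallowed while draining the console pty: a non-blocking read returned None mid-accumulation. A sketch of the guard that avoids it (drain and read_chunk are hypothetical names, not nova's code):

    def drain(read_chunk, limit=65536):
        # Accumulate console output; treat None (idle pty) like EOF
        # instead of letting b'' + None raise TypeError.
        data = b''
        while len(data) < limit:
            chunk = read_chunk()
            if not chunk:
                break
            data += chunk
        return data

    chunks = iter([b'login: ', None])
    print(drain(lambda: next(chunks)))  # b'login: ', no TypeError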
Feb 25 12:46:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.876 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.909 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.937 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Image id c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 yields fingerprint a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] image c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 at (/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6): checking
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] image c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 at (/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.940 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] 709a8b15-83eb-45f4-b681-c150ad270e01 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Active base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.943 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 25 12:46:50 compute-0 nova_compute[244014]: 2026-02-25 12:46:50.943 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
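"Too young to remove" comes from the image cache's age gate: an unused base, swap, or ephemeral file is deleted only once it is older than the configured minimum age ([libvirt]remove_unused_original_minimum_age_seconds, 86400 by default if this deployment follows upstream). A sketch of that check:

    import os
    import time

    MIN_AGE = 86400  # assumed default for remove_unused_original_minimum_age_seconds

    def old_enough_to_remove(path, min_age=MIN_AGE):
        return (time.time() - os.path.getmtime(path)) > min_age

    # e.g. old_enough_to_remove(
    #     '/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538')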
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.494 244018 DEBUG nova.compute.manager [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.495 244018 DEBUG nova.compute.manager [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.495 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.496 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.496 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.536 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.537 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.537 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.538 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.538 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.540 244018 INFO nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Terminating instance
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.541 244018 DEBUG nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:46:51 compute-0 kernel: tapa0f62675-55 (unregistering): left promiscuous mode
Feb 25 12:46:51 compute-0 NetworkManager[49836]: <info>  [1772023611.5898] device (tapa0f62675-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 ovn_controller[147040]: 2026-02-25T12:46:51Z|01201|binding|INFO|Releasing lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 from this chassis (sb_readonly=0)
Feb 25 12:46:51 compute-0 ovn_controller[147040]: 2026-02-25T12:46:51Z|01202|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 down in Southbound
Feb 25 12:46:51 compute-0 ovn_controller[147040]: 2026-02-25T12:46:51Z|01203|binding|INFO|Removing iface tapa0f62675-55 ovn-installed in OVS
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.605 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.606 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 unbound from our chassis
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.607 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf745dc4-78d4-4d88-8794-da49c21b9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdf4988-4bd6-47f0-812a-cb044947014b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.609 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace which is not needed anymore
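With the last VIF gone from the datapath, the agent tears the ovnmeta namespace down. The final step by hand, as a sketch assuming pyroute2 and root (the agent first stops the haproxy container and unplugs the veth, as the surrounding lines show):

    from pyroute2 import netns

    NS = 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38'
    if NS in netns.listnetns():
        netns.remove(NS)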
Feb 25 12:46:51 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 25 12:46:51 compute-0 systemd-machined[210048]: Machine qemu-148-instance-00000074 terminated.
Feb 25 12:46:51 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : haproxy version is 2.8.14-c23fe91
Feb 25 12:46:51 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : path to executable is /usr/sbin/haproxy
Feb 25 12:46:51 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [WARNING]  (348809) : Exiting Master process...
Feb 25 12:46:51 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [ALERT]    (348809) : Current worker (348811) exited with code 143 (Terminated)
Feb 25 12:46:51 compute-0 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [WARNING]  (348809) : All workers exited. Exiting... (0)
Feb 25 12:46:51 compute-0 systemd[1]: libpod-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope: Deactivated successfully.
Feb 25 12:46:51 compute-0 podman[348844]: 2026-02-25 12:46:51.740525694 +0000 UTC m=+0.050356003 container died 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:46:51 compute-0 ceph-mon[76335]: pgmap v1988: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Feb 25 12:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45e31c374f7f8967ca524d1b85ad5a63827aeadab1c7beb276d1f90b01fde07-merged.mount: Deactivated successfully.
Feb 25 12:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329-userdata-shm.mount: Deactivated successfully.
Feb 25 12:46:51 compute-0 podman[348844]: 2026-02-25 12:46:51.774457772 +0000 UTC m=+0.084288081 container cleanup 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.781 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance destroyed successfully.
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.781 244018 DEBUG nova.objects.instance [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:46:51 compute-0 systemd[1]: libpod-conmon-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope: Deactivated successfully.
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.815 244018 DEBUG nova.virt.libvirt.vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:46:47Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": 
{"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG nova.network.os_vif_util [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG nova.network.os_vif_util [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG os_vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.818 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f62675-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.824 244018 INFO os_vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')
Feb 25 12:46:51 compute-0 podman[348884]: 2026-02-25 12:46:51.848588525 +0000 UTC m=+0.053623625 container remove 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.856 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfe1fd0-7b57-44da-bf33-c4555e1a8bf6]: (4, ('Wed Feb 25 12:46:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329)\n930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329\nWed Feb 25 12:46:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329)\n930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.858 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0381fc3f-742f-4874-987f-52731057a707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.859 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:46:51 compute-0 kernel: tapbf745dc4-70: left promiscuous mode
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 nova_compute[244014]: 2026-02-25 12:46:51.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.872 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73fcaae2-4ac5-4a82-9037-3306e1c7f514]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.886 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[efed83ec-18f1-4be8-a25b-38f42df37edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6951cb9b-4fd7-4041-9488-f977e01b7a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.903 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0a77d852-b25e-46fc-bd9a-ac9b1c8907cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557647, 'reachable_time': 32557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348918, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dbf745dc4\x2d78d4\x2d4d88\x2d8794\x2dda49c21b9f38.mount: Deactivated successfully.
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.907 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:46:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.907 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[18bfc7fa-0fbd-4acd-bc1e-892a78790af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.115 244018 INFO nova.virt.libvirt.driver [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deleting instance files /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01_del
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.116 244018 INFO nova.virt.libvirt.driver [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deletion of /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01_del complete
Feb 25 12:46:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 191 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 17 op/s
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.606 244018 INFO nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 1.06 seconds to destroy the instance on the hypervisor.
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.606 244018 DEBUG oslo.service.loopingcall [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.607 244018 DEBUG nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:46:52 compute-0 nova_compute[244014]: 2026-02-25 12:46:52.607 244018 DEBUG nova.network.neutron [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.650 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.651 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.652 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.652 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.653 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.653 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.654 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.654 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.655 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.655 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.656 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:46:53 compute-0 nova_compute[244014]: 2026-02-25 12:46:53.656 244018 WARNING nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state deleting.
Feb 25 12:46:53 compute-0 ceph-mon[76335]: pgmap v1989: 305 pgs: 305 active+clean; 191 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 17 op/s
Feb 25 12:46:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.478 244018 DEBUG nova.network.neutron [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.505 244018 INFO nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 1.90 seconds to deallocate network for instance.
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.568 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.569 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.591 244018 DEBUG nova.compute.manager [req-1563e37e-6a6a-4e75-ad68-76d7922bde04 req-75874591-ef59-4401-8c11-494bd9d2f0bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-deleted-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:46:54 compute-0 nova_compute[244014]: 2026-02-25 12:46:54.643 244018 DEBUG oslo_concurrency.processutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.114 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.115 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.163 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.243 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:46:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331357453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.271 244018 DEBUG oslo_concurrency.processutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.279 244018 DEBUG nova.compute.provider_tree [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.307 244018 DEBUG nova.scheduler.client.report [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.329 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.374 244018 INFO nova.scheduler.client.report [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance 709a8b15-83eb-45f4-b681-c150ad270e01
Feb 25 12:46:55 compute-0 nova_compute[244014]: 2026-02-25 12:46:55.465 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:46:55 compute-0 ceph-mon[76335]: pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Feb 25 12:46:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2331357453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:46:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Feb 25 12:46:56 compute-0 nova_compute[244014]: 2026-02-25 12:46:56.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:56 compute-0 nova_compute[244014]: 2026-02-25 12:46:56.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:57 compute-0 ceph-mon[76335]: pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Feb 25 12:46:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Feb 25 12:46:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:46:58 compute-0 podman[348942]: 2026-02-25 12:46:58.735303102 +0000 UTC m=+0.064689807 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:46:58 compute-0 podman[348961]: 2026-02-25 12:46:58.859370716 +0000 UTC m=+0.089707944 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 25 12:46:59 compute-0 nova_compute[244014]: 2026-02-25 12:46:59.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:59 compute-0 nova_compute[244014]: 2026-02-25 12:46:59.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:46:59 compute-0 ceph-mon[76335]: pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Feb 25 12:47:00 compute-0 nova_compute[244014]: 2026-02-25 12:47:00.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:01 compute-0 ceph-mon[76335]: pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 12:47:01 compute-0 nova_compute[244014]: 2026-02-25 12:47:01.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 12:47:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:03 compute-0 ceph-mon[76335]: pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 12:47:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Feb 25 12:47:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:04.997 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:af:c3 2001:db8:0:1:f816:3eff:fe02:afc3 2001:db8::f816:3eff:fe02:afc3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe02:afc3/64 2001:db8::f816:3eff:fe02:afc3/64', 'neutron:device_id': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8372e6c6-fbbc-48a7-be95-d95e1d2ad95a) old=Port_Binding(mac=['fa:16:3e:02:af:c3 2001:db8::f816:3eff:fe02:afc3'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:afc3/64', 'neutron:device_id': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:04.999 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a in datapath 5c482202-8994-4033-a0a9-167d92a9e301 updated
Feb 25 12:47:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:05.000 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c482202-8994-4033-a0a9-167d92a9e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:05.001 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdd29a4-4bda-42ff-a235-30466cfefc41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:05 compute-0 nova_compute[244014]: 2026-02-25 12:47:05.246 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:05 compute-0 ceph-mon[76335]: pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Feb 25 12:47:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 12:47:06 compute-0 nova_compute[244014]: 2026-02-25 12:47:06.775 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023611.7733135, 709a8b15-83eb-45f4-b681-c150ad270e01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:06 compute-0 nova_compute[244014]: 2026-02-25 12:47:06.775 244018 INFO nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Stopped (Lifecycle Event)
Feb 25 12:47:06 compute-0 nova_compute[244014]: 2026-02-25 12:47:06.798 244018 DEBUG nova.compute.manager [None req-48b42e87-fc83-4362-a13b-e46986ce3a5c - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:06 compute-0 nova_compute[244014]: 2026-02-25 12:47:06.825 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:07 compute-0 sudo[348989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:47:07 compute-0 sudo[348989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:07 compute-0 sudo[348989]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:07 compute-0 sudo[349014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:47:07 compute-0 sudo[349014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:07 compute-0 ceph-mon[76335]: pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 12:47:08 compute-0 sudo[349014]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:47:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:47:08 compute-0 sudo[349070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:47:08 compute-0 sudo[349070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:08 compute-0 sudo[349070]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:08 compute-0 sudo[349095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:47:08 compute-0 sudo[349095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 12:47:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.694555217 +0000 UTC m=+0.049798307 container create 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 12:47:08 compute-0 systemd[1]: Started libpod-conmon-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope.
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.675206811 +0000 UTC m=+0.030449941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:08 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.789973862 +0000 UTC m=+0.145216982 container init 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.798838352 +0000 UTC m=+0.154081402 container start 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.80267822 +0000 UTC m=+0.157921350 container attach 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:47:08 compute-0 busy_spence[349149]: 167 167
Feb 25 12:47:08 compute-0 systemd[1]: libpod-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope: Deactivated successfully.
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.807240159 +0000 UTC m=+0.162483249 container died 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-25893d5e3c5de31391c381596dd8f03ffc64ce915373dca1c2bbf771f0e8f8b7-merged.mount: Deactivated successfully.
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:47:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:47:08 compute-0 podman[349133]: 2026-02-25 12:47:08.85860875 +0000 UTC m=+0.213851810 container remove 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:47:08 compute-0 systemd[1]: libpod-conmon-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope: Deactivated successfully.
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.012549487 +0000 UTC m=+0.060686275 container create 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:47:09 compute-0 systemd[1]: Started libpod-conmon-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope.
Feb 25 12:47:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:08.988724474 +0000 UTC m=+0.036861312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.113316942 +0000 UTC m=+0.161453790 container init 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.120775063 +0000 UTC m=+0.168911871 container start 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.132410931 +0000 UTC m=+0.180547709 container attach 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:47:09 compute-0 funny_kowalevski[349190]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:47:09 compute-0 funny_kowalevski[349190]: --> All data devices are unavailable
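[annotation] "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is ceph-volume batch declining to act: the three LVs already carry the BlueStore tags of existing OSDs, so there is nothing left to create. A sketch of how to confirm that from the LV tags, assuming standard LVM2 (lvs --reportformat json):

    # Hedged sketch: an LV that already carries ceph.osd_id/ceph.osd_fsid
    # tags is skipped by "ceph-volume lvm batch".
    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for lv in report["report"][0]["lv"]:
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(lv["vg_name"], lv["lv_name"],
              "already an OSD" if taken else "free")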
Feb 25 12:47:09 compute-0 systemd[1]: libpod-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope: Deactivated successfully.
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.575105771 +0000 UTC m=+0.623242599 container died 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844-merged.mount: Deactivated successfully.
Feb 25 12:47:09 compute-0 podman[349173]: 2026-02-25 12:47:09.616380627 +0000 UTC m=+0.664517405 container remove 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:47:09 compute-0 systemd[1]: libpod-conmon-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope: Deactivated successfully.
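[annotation] The create/init/start/attach/died/remove sequence above, all inside one second, is cephadm's one-shot container pattern: conmon and the libpod scope live only as long as the ceph-volume process inside. A minimal equivalent, assuming the image digest from the log; the /dev and config mounts cephadm adds are omitted, so this is illustrative only:

    # Hedged sketch: create + init + start + attach + died + remove,
    # collapsed into a single "podman run --rm".
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph-volume", "inventory"],
        capture_output=True, text=True,
    )
    # Without the mounts cephadm wires in, this reports little; it only
    # demonstrates the short-lived container lifecycle seen in the log.
    print(out.returncode, out.stdout[:200])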
Feb 25 12:47:09 compute-0 sudo[349095]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:09 compute-0 sudo[349220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:47:09 compute-0 sudo[349220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:09 compute-0 sudo[349220]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:09 compute-0 sudo[349245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:47:09 compute-0 sudo[349245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
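[annotation] The sudo line above shows the orchestrator's exact contract: the per-cluster copy of the cephadm script, an image pinned by digest, a 895 s timeout, then the ceph-volume subcommand after "--". A sketch of the same call, run as root, assuming the JSON lands alone on stdout (the --image flag is omitted here for brevity):

    # Hedged sketch: the ceph-volume query the orchestrator issues above.
    # FSID and script path are copied verbatim from the sudo audit line.
    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    lvm_osds = json.loads(subprocess.run(
        ["python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True, timeout=900,
    ).stdout)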
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.048710555 +0000 UTC m=+0.059109940 container create 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 12:47:10 compute-0 systemd[1]: Started libpod-conmon-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope.
Feb 25 12:47:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.025200511 +0000 UTC m=+0.035599946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.122757805 +0000 UTC m=+0.133157190 container init 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.129701071 +0000 UTC m=+0.140100446 container start 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.133672984 +0000 UTC m=+0.144072389 container attach 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:47:10 compute-0 practical_chebyshev[349298]: 167 167
Feb 25 12:47:10 compute-0 systemd[1]: libpod-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope: Deactivated successfully.
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.135439323 +0000 UTC m=+0.145838708 container died 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 12:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-44cc39a9e288d7bb7961ce1333a461e57e179402e881480add0b6a8fb58295d4-merged.mount: Deactivated successfully.
Feb 25 12:47:10 compute-0 podman[349282]: 2026-02-25 12:47:10.178110268 +0000 UTC m=+0.188509663 container remove 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:47:10 compute-0 systemd[1]: libpod-conmon-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope: Deactivated successfully.
Feb 25 12:47:10 compute-0 nova_compute[244014]: 2026-02-25 12:47:10.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:10 compute-0 ceph-mon[76335]: pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.3442629 +0000 UTC m=+0.042401488 container create 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:47:10 compute-0 systemd[1]: Started libpod-conmon-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope.
Feb 25 12:47:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.323175545 +0000 UTC m=+0.021314183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.437780711 +0000 UTC m=+0.135919309 container init 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.442480383 +0000 UTC m=+0.140618981 container start 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.446087535 +0000 UTC m=+0.144226113 container attach 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:47:10 compute-0 boring_blackwell[349337]: {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     "0": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "devices": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "/dev/loop3"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             ],
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_name": "ceph_lv0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_size": "21470642176",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "name": "ceph_lv0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "tags": {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_name": "ceph",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.crush_device_class": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.encrypted": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.objectstore": "bluestore",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_id": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.vdo": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.with_tpm": "0"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             },
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "vg_name": "ceph_vg0"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         }
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     ],
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     "1": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "devices": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "/dev/loop4"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             ],
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_name": "ceph_lv1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_size": "21470642176",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "name": "ceph_lv1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "tags": {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_name": "ceph",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.crush_device_class": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.encrypted": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.objectstore": "bluestore",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_id": "1",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.vdo": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.with_tpm": "0"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             },
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "vg_name": "ceph_vg1"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         }
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     ],
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     "2": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "devices": [
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "/dev/loop5"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             ],
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_name": "ceph_lv2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_size": "21470642176",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "name": "ceph_lv2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "tags": {
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.cluster_name": "ceph",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.crush_device_class": "",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.encrypted": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.objectstore": "bluestore",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osd_id": "2",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.vdo": "0",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:                 "ceph.with_tpm": "0"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             },
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "type": "block",
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:             "vg_name": "ceph_vg2"
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:         }
Feb 25 12:47:10 compute-0 boring_blackwell[349337]:     ]
Feb 25 12:47:10 compute-0 boring_blackwell[349337]: }
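[annotation] The JSON block above is the full "lvm list" payload: one key per OSD id, each holding the LV records with their ceph.* tags. A small helper to reduce it to a per-OSD summary; "payload" stands for the dict printed above:

    # Hedged sketch: flatten the ceph-volume lvm list JSON into rows.
    def summarize(payload):
        rows = []
        for osd_id, lvs in sorted(payload.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                rows.append({
                    "osd_id": osd_id,
                    "lv_path": lv["lv_path"],          # /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],          # e.g. ["/dev/loop3"]
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    "objectstore": lv["tags"]["ceph.objectstore"],
                })
        return rows

    # summarize(payload) -> osd 0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3),
    # osd 1 on ceph_vg1 (/dev/loop4), osd 2 on ceph_vg2 (/dev/loop5).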
Feb 25 12:47:10 compute-0 systemd[1]: libpod-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope: Deactivated successfully.
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.692147473 +0000 UTC m=+0.390286091 container died 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120-merged.mount: Deactivated successfully.
Feb 25 12:47:10 compute-0 podman[349321]: 2026-02-25 12:47:10.744236414 +0000 UTC m=+0.442374992 container remove 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:47:10 compute-0 systemd[1]: libpod-conmon-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope: Deactivated successfully.
Feb 25 12:47:10 compute-0 sudo[349245]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:10 compute-0 sudo[349358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:47:10 compute-0 sudo[349358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:10 compute-0 sudo[349358]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:10 compute-0 sudo[349383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:47:10 compute-0 sudo[349383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.166772264 +0000 UTC m=+0.051834695 container create d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:47:11 compute-0 systemd[1]: Started libpod-conmon-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope.
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.138899787 +0000 UTC m=+0.023962298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.255756207 +0000 UTC m=+0.140818708 container init d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.266735907 +0000 UTC m=+0.151798358 container start d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:11 compute-0 crazy_keldysh[349437]: 167 167
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.271516192 +0000 UTC m=+0.156578703 container attach d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:47:11 compute-0 systemd[1]: libpod-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope: Deactivated successfully.
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.273047445 +0000 UTC m=+0.158109896 container died d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7abcd0643aba3dc235464b0e316dac9e32ae06a28a2da594dd1a3abd66b2ab68-merged.mount: Deactivated successfully.
Feb 25 12:47:11 compute-0 podman[349420]: 2026-02-25 12:47:11.3128966 +0000 UTC m=+0.197959021 container remove d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:47:11 compute-0 systemd[1]: libpod-conmon-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope: Deactivated successfully.
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.392 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.394 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.419 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.502 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.503 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.512 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.512 244018 INFO nova.compute.claims [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Claim successful on node compute-0.ctlplane.example.com
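[annotation] The acquire/release pairs in the nova DEBUG lines show the resource tracker serializing claims behind the "compute_resources" lock. The same discipline with the real oslo.concurrency decorator; the claim body here is a stand-in, not nova's implementation:

    # Hedged sketch: lock bracketing as seen in the DEBUG lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim(instance_uuid):
        # resource accounting happens with the lock held, exactly as the
        # "acquired ... released" pair around 12:47:11-12:47:12 shows
        print(f"claiming resources for {instance_uuid}")

    instance_claim("848fd033-0ebb-460a-a8a0-56583fa5f481")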
Feb 25 12:47:11 compute-0 podman[349461]: 2026-02-25 12:47:11.473854495 +0000 UTC m=+0.039237919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:47:11 compute-0 nova_compute[244014]: 2026-02-25 12:47:11.627 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176965018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.327 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
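[annotation] Before sizing DISK_GB, nova's RBD backend shells out to "ceph df --format=json" as client.openstack; that is the 0.699 s subprocess above, and the matching mon-side audit entries. A sketch of the same probe and the cluster-wide numbers it reads, assuming the standard "stats" block:

    # Hedged sketch: the capacity probe nova runs above.
    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            capture_output=True, text=True, check=True,
        ).stdout
        stats = json.loads(out)["stats"]
        return stats["total_bytes"], stats["total_avail_bytes"]

    # Matches the pgmap line: ~60 GiB total, ~59 GiB avail.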
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.333 244018 DEBUG nova.compute.provider_tree [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:47:12 compute-0 podman[349461]: 2026-02-25 12:47:12.338922342 +0000 UTC m=+0.904305806 container create f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.353 244018 DEBUG nova.scheduler.client.report [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
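[annotation] The inventory dict above fixes schedulable capacity; placement derives it as (total - reserved) * allocation_ratio per resource class. Worked out for this host:

    # Hedged sketch: capacity arithmetic for the inventory reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2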
Feb 25 12:47:12 compute-0 ceph-mon[76335]: pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/176965018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.401 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.405 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:47:12 compute-0 systemd[1]: Started libpod-conmon-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope.
Feb 25 12:47:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.493 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.494 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.520 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:47:12 compute-0 podman[349461]: 2026-02-25 12:47:12.532046175 +0000 UTC m=+1.097429669 container init f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:47:12 compute-0 podman[349461]: 2026-02-25 12:47:12.54143748 +0000 UTC m=+1.106820924 container start f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:47:12 compute-0 podman[349461]: 2026-02-25 12:47:12.545909077 +0000 UTC m=+1.111292621 container attach f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.562 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.684 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.686 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.687 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating image(s)
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.716 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.738 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.773 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.776 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.849 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
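[annotation] The qemu-img probe above runs under oslo's prlimit wrapper (--as=1073741824, --cpu=30) so a corrupt or hostile base image cannot exhaust the compute host while being inspected. The same guard through the real oslo.concurrency helper:

    # Hedged sketch: the guarded qemu-img call, limits as in the log line.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,   # --as=1073741824
        cpu_time=30,                        # --cpu=30
    )
    stdout, _ = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits,
    )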
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.850 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.851 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.851 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
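
The acquire/release pair above is oslo.concurrency's in-process lock guarding the image-cache fetch, keyed by the base image's hashed filename so concurrent boots from the same image download it only once. Roughly the same shape as a sketch, with a hypothetical fetch function standing in for the real Glance download:

    # Sketch: serialize concurrent fetches of one cached base image.  The
    # lock name is the base file's hashed name, so two instances booting
    # from the same image share a single download.
    from oslo_concurrency import lockutils

    def cache_image(image_id, base_path):
        @lockutils.synchronized(base_path.rsplit('/', 1)[-1])
        def fetch_func_sync():
            # hypothetical placeholder for the actual image download
            print('fetching %s -> %s' % (image_id, base_path))

        fetch_func_sync()

    cache_image('c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',
                '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
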
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.875 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.884 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 848fd033-0ebb-460a-a8a0-56583fa5f481_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
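
Since the vms pool has no 848fd033..._disk image yet (the three "does not exist" probes above), nova imports the flat base file with the rbd CLI. A self-contained sketch of that step; pool, client id, and paths are taken from the logged command, and --image-format=2 is what makes the image cloneable later:

    # Sketch: import a flat base file into the Ceph "vms" pool as a v2 RBD
    # image, mirroring the command run above.
    import subprocess

    def rbd_import(base_path, image_name, pool='vms',
                   client_id='openstack', conf='/etc/ceph/ceph.conf'):
        subprocess.run(
            ['rbd', 'import', '--pool', pool, base_path, image_name,
             '--image-format=2', '--id', client_id, '--conf', conf],
            check=True)

    rbd_import(
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '848fd033-0ebb-460a-a8a0-56583fa5f481_disk')
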
Feb 25 12:47:12 compute-0 nova_compute[244014]: 2026-02-25 12:47:12.913 244018 DEBUG nova.policy [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
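
The policy failure above is oslo.policy evaluating network:attach_external_network against the request's credential dict; with only reader/member roles the check fails, so the build proceeds without external-network attachment. A toy reproduction with oslo.policy directly, assuming an admin-only default rule (the enforcer wiring here is illustrative, not nova's):

    # Sketch: evaluate a policy rule against the credentials the DEBUG line
    # above reports.  The admin-only default rule is an assumption.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

    creds = {'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e',
             'project_id': '25fa1e8dd32c483686f869da2604f2b1',
             'roles': ['reader', 'member'],
             'is_admin': False}

    # do_raise=False returns a bool instead of raising PolicyNotAuthorized
    print(enforcer.enforce('network:attach_external_network', {}, creds,
                           do_raise=False))  # False, matching the log
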
Feb 25 12:47:13 compute-0 lvm[349668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:47:13 compute-0 lvm[349668]: VG ceph_vg0 finished
Feb 25 12:47:13 compute-0 lvm[349669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:47:13 compute-0 lvm[349669]: VG ceph_vg1 finished
Feb 25 12:47:13 compute-0 lvm[349672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:47:13 compute-0 lvm[349672]: VG ceph_vg2 finished
Feb 25 12:47:13 compute-0 lvm[349676]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:47:13 compute-0 lvm[349676]: VG ceph_vg2 finished
Feb 25 12:47:13 compute-0 jovial_neumann[349499]: {}
Feb 25 12:47:13 compute-0 systemd[1]: libpod-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Deactivated successfully.
Feb 25 12:47:13 compute-0 podman[349461]: 2026-02-25 12:47:13.307119221 +0000 UTC m=+1.872502655 container died f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:47:13 compute-0 systemd[1]: libpod-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Consumed 1.070s CPU time.
Feb 25 12:47:13 compute-0 ceph-mon[76335]: pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621-merged.mount: Deactivated successfully.
Feb 25 12:47:13 compute-0 podman[349461]: 2026-02-25 12:47:13.600860635 +0000 UTC m=+2.166244079 container remove f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.608 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 848fd033-0ebb-460a-a8a0-56583fa5f481_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:13 compute-0 systemd[1]: libpod-conmon-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Deactivated successfully.
Feb 25 12:47:13 compute-0 sudo[349383]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.753 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
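
After the import completes, nova grows the RBD image to the flavor's 1 GiB root disk (the 1073741824 bytes above). The same resize through the python-rbd bindings could look like this sketch, reusing the logged client id, conf path, and pool:

    # Sketch: grow an RBD image to the flavor root-disk size, as in the
    # "resizing rbd image ... to 1073741824" line above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '848fd033-0ebb-460a-a8a0-56583fa5f481_disk') as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
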
Feb 25 12:47:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:47:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.911 244018 DEBUG nova.objects.instance [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:13 compute-0 sudo[349746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:47:13 compute-0 sudo[349746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:47:13 compute-0 sudo[349746]: pam_unix(sudo:session): session closed for user root
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.933 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.934 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Ensure instance console log exists: /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.935 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.936 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:13 compute-0 nova_compute[244014]: 2026-02-25 12:47:13.936 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:14 compute-0 nova_compute[244014]: 2026-02-25 12:47:14.110 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully created port: 0358e18d-8ce8-43a7-a8b2-11193708891a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:47:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:47:14 compute-0 nova_compute[244014]: 2026-02-25 12:47:14.912 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully created port: ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
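
Both ports above are created by nova calling neutron's REST API on the instance's behalf. From outside nova, an equivalent minimal port create could be issued with openstacksdk; the cloud name is a hypothetical clouds.yaml entry and the network id echoes the log:

    # Sketch: create a bare port on the logged network, which is what
    # _create_port_minimal asks neutron for.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # hypothetical clouds.yaml entry
    port = conn.network.create_port(
        network_id='d0b6d114-fcb4-4a25-988c-1ee301ef0419',
        device_id='848fd033-0ebb-460a-a8a0-56583fa5f481',
        device_owner='compute:nova')
    print(port.id)
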
Feb 25 12:47:15 compute-0 nova_compute[244014]: 2026-02-25 12:47:15.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:15 compute-0 ceph-mon[76335]: pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.203 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully updated port: 0358e18d-8ce8-43a7-a8b2-11193708891a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.320 244018 DEBUG nova.compute.manager [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.321 244018 DEBUG nova.compute.manager [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:16 compute-0 nova_compute[244014]: 2026-02-25 12:47:16.585 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.157 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.176 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.324 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully updated port: ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.340 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.340 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.341 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:47:17 compute-0 nova_compute[244014]: 2026-02-25 12:47:17.485 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:47:17 compute-0 ceph-mon[76335]: pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:47:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:47:18 compute-0 nova_compute[244014]: 2026-02-25 12:47:18.475 244018 DEBUG nova.compute.manager [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:18 compute-0 nova_compute[244014]: 2026-02-25 12:47:18.476 244018 DEBUG nova.compute.manager [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-ebd67787-8af9-4a7f-8b38-6d18daff8ff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:18 compute-0 nova_compute[244014]: 2026-02-25 12:47:18.476 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:19 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa15935-d0d7-4559-86cb-5cee03ef70a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
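
The metadata agent reacts to that Port_Binding update through ovsdbapp's event framework: a RowEvent subclass declares (event types, table, conditions), and the IDL hands matching rows to run(). A bare-bones sketch of such a matcher, illustrative only:

    # Sketch: react to Port_Binding updates the way the agent's "Matched
    # UPDATE" line above does.  Column names come from the OVN southbound DB.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (event types, table, conditions); None matches every update
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed
            print('port %s updated, mac=%s' % (row.logical_port, row.mac))
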
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.884 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.911 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance network_info: |[{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.912 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.912 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.921 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start _get_guest_xml network_info=[{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.926 244018 WARNING nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.933 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.934 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.942 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.943 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
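
The two probes above try cgroups v1 first, then v2, to decide whether CPU quotas can be applied to the guest. On a unified (v2) host the check effectively reduces to whether "cpu" appears in the root controller list; a sketch under that assumption, not nova's exact probe:

    # Sketch: detect a usable "cpu" controller on a cgroups-v2 host, the
    # condition the host.py probes above are testing.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a unified-hierarchy (v2) host

    print(has_cgroupsv2_cpu_controller())
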
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.943 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.944 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.945 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.945 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.946 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.946 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.947 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.947 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.949 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
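
The topology walk above amounts to enumerating every (sockets, cores, threads) factorization of the vCPU count within the 65536 per-axis limits, then sorting by preference; with one vCPU the only candidate is 1:1:1, exactly what the log reports. A compact illustration of the enumeration, not nova's exact algorithm:

    # Sketch: list (sockets, cores, threads) triples whose product equals
    # the vCPU count, within the per-axis limits shown above.
    def possible_topologies(vcpus, limit=65536):
        topos = []
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, limit) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= limit:
                    topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] - matches the log
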
Feb 25 12:47:19 compute-0 nova_compute[244014]: 2026-02-25 12:47:19.953 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:20 compute-0 ceph-mon[76335]: pgmap v2002: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203664412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.456 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
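
This mon dump is how the libvirt driver learns the monitor addresses to embed in the guest's RBD disk definition. A sketch that runs the same command and pulls monitor addresses out of its JSON; field names follow standard mon dump output:

    # Sketch: collect monitor addresses from "ceph mon dump --format=json",
    # the command issued above while building the guest disk config.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    mons = json.loads(out)
    for mon in mons.get('mons', []):
        # 'addr' is the legacy field; v1/v2 endpoints live under public_addrs
        print(mon['name'], mon.get('addr'))
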
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.476 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.481 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838295176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.997 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.999 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:47:20 compute-0 nova_compute[244014]: 2026-02-25 12:47:20.999 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.000 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.002 244018 DEBUG nova.objects.instance [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.022 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <uuid>848fd033-0ebb-460a-a8a0-56583fa5f481</uuid>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <name>instance-00000075</name>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-289696819</nova:name>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:47:19</nova:creationTime>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:port uuid="0358e18d-8ce8-43a7-a8b2-11193708891a">
Feb 25 12:47:21 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <nova:port uuid="ebd67787-8af9-4a7f-8b38-6d18daff8ff3">
Feb 25 12:47:21 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8e:ea66" ipVersion="6"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8e:ea66" ipVersion="6"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <system>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="serial">848fd033-0ebb-460a-a8a0-56583fa5f481</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="uuid">848fd033-0ebb-460a-a8a0-56583fa5f481</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </system>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <os>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </os>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <features>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </features>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/848fd033-0ebb-460a-a8a0-56583fa5f481_disk">
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config">
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:21 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:fd:1f:00"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <target dev="tap0358e18d-8c"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8e:ea:66"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <target dev="tapebd67787-8a"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/console.log" append="off"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <video>
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </video>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:47:21 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:47:21 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:47:21 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:47:21 compute-0 nova_compute[244014]: </domain>
Feb 25 12:47:21 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
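[annotation] The `_get_guest_xml` record above closes the multi-line domain XML dump. Everything Nova recorded about the instance (flavor, owner, fixed IPs per port) sits under the `http://openstack.org/xmlns/libvirt/nova/1.1` namespace, so it can be recovered from the dump alone. A minimal stdlib sketch, assuming the XML was saved to a hypothetical `domain.xml`:

```python
# Minimal sketch: recover the Nova metadata from a saved copy of the
# domain XML dumped above. 'domain.xml' is a hypothetical filename;
# only the standard library is used.
import xml.etree.ElementTree as ET

NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.parse("domain.xml").getroot()
meta = root.find("metadata/nova:instance", NS)

flavor = meta.find("nova:flavor", NS)
print(meta.findtext("nova:name", namespaces=NS),
      flavor.get("name"),
      flavor.findtext("nova:memory", namespaces=NS) + " MiB")

# Each <nova:port> carries the fixed IPs Neutron assigned to that VIF.
for port in meta.findall("nova:ports/nova:port", NS):
    for ip in port.findall("nova:ip", NS):
        print(port.get("uuid"), ip.get("address"), "v" + ip.get("ipVersion"))
```

Run against the dump above, this would print the m1.nano flavor plus the single IPv4 and two SLAAC IPv6 fixed addresses that reappear in the OVN port-binding lines later in the log.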
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.023 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Preparing to wait for external event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Preparing to wait for external event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
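[annotation] The lockutils lines above show the event-registration pattern: before plugging either VIF, the manager registers a waiter for each expected `network-vif-plugged-<port>` event, serializing on a per-instance `<uuid>-events` lock (acquired and released in under a millisecond here). A rough sketch of that shape; the names are illustrative, not Nova's actual internals:

```python
# Rough sketch of the pattern behind the lock lines above; names are
# illustrative, not Nova's actual code. Each registration takes a named
# per-instance semaphore so concurrent preparers cannot race.
from oslo_concurrency import lockutils

_events = {}

def prepare_for_instance_event(instance_uuid, event_name):
    @lockutils.synchronized(instance_uuid + "-events")
    def _create_or_get_event():
        return _events.setdefault((instance_uuid, event_name), object())
    return _create_or_get_event()

# One waiter per expected event, e.g.:
prepare_for_instance_event(
    "848fd033-0ebb-460a-a8a0-56583fa5f481",
    "network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a")
```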
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.026 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.026 244018 DEBUG os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0358e18d-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0358e18d-8c, col_values=(('external_ids', {'iface-id': '0358e18d-8ce8-43a7-a8b2-11193708891a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:1f:00', 'vm-uuid': '848fd033-0ebb-460a-a8a0-56583fa5f481'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 NetworkManager[49836]: <info>  [1772023641.0338] manager: (tap0358e18d-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.038 244018 INFO os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c')
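[annotation] Between 12:47:21.027 and .038 the plug amounts to three ovsdbapp commands: AddBridgeCommand (a no-op here, hence "Transaction caused no change"), AddPortCommand, and a DbSetCommand writing the external_ids that let ovn-controller match the tap interface to its logical port. A hedged replay of the same transaction using ovsdbapp directly; the OVSDB socket path is an assumption, while the bridge, port, and external_ids values are copied from the log lines:

```python
# Hedged replay of the transaction os-vif logged above (AddBridgeCommand,
# AddPortCommand, DbSetCommand) via ovsdbapp. The socket path is an
# assumption; all values are copied from the log.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # may_exist=True makes both adds idempotent ("caused no change"
    # when the row already exists, as with br-int here).
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap0358e18d-8c", may_exist=True))
    # ovn-controller claims the lport by matching iface-id.
    txn.add(api.db_set(
        "Interface", "tap0358e18d-8c",
        ("external_ids", {
            "iface-id": "0358e18d-8ce8-43a7-a8b2-11193708891a",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:fd:1f:00",
            "vm-uuid": "848fd033-0ebb-460a-a8a0-56583fa5f481",
        })))
```

The iface-id written here is exactly what ovn-controller reports claiming at 12:47:22 ("Claiming lport 0358e18d-... for this chassis").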
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.039 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.040 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.041 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.041 244018 DEBUG os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.045 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebd67787-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebd67787-8a, col_values=(('external_ids', {'iface-id': 'ebd67787-8af9-4a7f-8b38-6d18daff8ff3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:ea:66', 'vm-uuid': '848fd033-0ebb-460a-a8a0-56583fa5f481'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 NetworkManager[49836]: <info>  [1772023641.0497] manager: (tapebd67787-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Feb 25 12:47:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1203664412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2838295176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.057 244018 INFO os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a')
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.121 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:fd:1f:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:8e:ea:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.123 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Using config drive
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.143 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.698 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port ebd67787-8af9-4a7f-8b38-6d18daff8ff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.699 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.716 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating config drive at /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.720 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeh83pveg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.754 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.860 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeh83pveg" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.886 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.890 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.939 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.940 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:21 compute-0 nova_compute[244014]: 2026-02-25 12:47:21.955 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.049 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.050 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deleting local config drive /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config because it was imported into RBD.
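[annotation] The config-drive step logged between 12:47:21.720 and 12:47:22.050 is two subprocesses: mkisofs builds an ISO9660 volume labelled config-2 from a rendered temp directory, then rbd import pushes it into the Ceph vms pool, after which the local file is deleted. A sketch of the same sequence; flags and paths are taken from the logged commands, and the staging directory stands in for /tmp/tmpeh83pveg:

```python
# Sketch of the config-drive sequence logged above: build the ISO with
# mkisofs, import it into the Ceph 'vms' pool, remove the local copy.
# Flags and paths are taken from the logged commands; the staging
# directory is a stand-in for the rendered /tmp/tmpeh83pveg.
import os
import subprocess

instance = "848fd033-0ebb-460a-a8a0-56583fa5f481"
iso = "/var/lib/nova/instances/%s/disk.config" % instance

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
     "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/config-drive-staging"],  # staging dir: hypothetical
    check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso,
     instance + "_disk.config", "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

os.unlink(iso)  # redundant once imported into RBD, as the INFO line notes
```

This matches the guest XML above, whose cdrom disk sources `vms/..._disk.config` over rbd rather than a local file.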
Feb 25 12:47:22 compute-0 ceph-mon[76335]: pgmap v2003: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:47:22 compute-0 kernel: tap0358e18d-8c: entered promiscuous mode
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1134] manager: (tap0358e18d-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01204|binding|INFO|Claiming lport 0358e18d-8ce8-43a7-a8b2-11193708891a for this chassis.
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01205|binding|INFO|0358e18d-8ce8-43a7-a8b2-11193708891a: Claiming fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1269] manager: (tapebd67787-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Feb 25 12:47:22 compute-0 kernel: tapebd67787-8a: entered promiscuous mode
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.130 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.131 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01206|if_status|INFO|Dropped 2 log messages in last 1195 seconds (most recently, 1195 seconds ago) due to excessive rate
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01207|if_status|INFO|Not updating pb chassis for ebd67787-8af9-4a7f-8b38-6d18daff8ff3 now as sb is readonly
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01208|binding|INFO|Claiming lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for this chassis.
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01209|binding|INFO|ebd67787-8af9-4a7f-8b38-6d18daff8ff3: Claiming fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.138 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.138 244018 INFO nova.compute.claims [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.141 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1f:00 10.100.0.11'], port_security=['fa:16:3e:fd:1f:00 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0358e18d-8ce8-43a7-a8b2-11193708891a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:22 compute-0 systemd-udevd[349930]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:22 compute-0 systemd-udevd[349931]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.143 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0358e18d-8ce8-43a7-a8b2-11193708891a in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 bound to our chassis
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.145 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.148 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], port_security=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8e:ea66/64 2001:db8::f816:3eff:fe8e:ea66/64', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ebd67787-8af9-4a7f-8b38-6d18daff8ff3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20184612-45e6-4410-80cf-10f54dc057b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.156 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0b6d114-f1 in ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1575] device (tap0358e18d-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1590] device (tap0358e18d-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1598] device (tapebd67787-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.1608] device (tapebd67787-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.159 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0b6d114-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.159 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee28af28-40c2-437e-ada7-d9eef8ca6d38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afcb8c52-76d9-47ee-a44c-e2a67b135b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.169 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[527ccde7-b1e1-4a49-9920-b231e383b558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 systemd-machined[210048]: New machine qemu-149-instance-00000075.
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01210|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a ovn-installed in OVS
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01211|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a up in Southbound
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01212|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 ovn-installed in OVS
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01213|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 up in Southbound
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[765361fd-d586-40a6-9094-732fceb27612]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.218 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ae644c35-a1bc-49ae-876a-f8e9314b834a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c81f9-eabc-496e-8626-b7978d5f99a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.2289] manager: (tapd0b6d114-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.252 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f84f6c34-20fb-490c-8771-b744c5c8439f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.255 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[21c6de20-83a2-461b-852e-ca0a4dd52ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.2722] device (tapd0b6d114-f0): carrier: link connected
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.274 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc6ba3a-35b4-4d26-ab78-9c594ffcf398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.286 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0de490e6-8ae5-47d4-8c0f-97ac069bf800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349967, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.295 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11be16ef-f294-473d-8c86-9fec263401b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:c1ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561168, 'tstamp': 561168}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349968, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.304 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb8122f-cfd5-4157-8faf-33e3a78f598d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349969, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d616ec4b-ef9e-4e75-8987-b26c5665cccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.362 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a88218c8-fcfb-46f3-8acd-ee0a61be0dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.363 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.364 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.364 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.3671] manager: (tapd0b6d114-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Feb 25 12:47:22 compute-0 kernel: tapd0b6d114-f0: entered promiscuous mode
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.370 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_controller[147040]: 2026-02-25T12:47:22Z|01214|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.373 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a132a9a9-bd67-4474-b3d0-073f4086b135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.377 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.379 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'env', 'PROCESS_TAG=haproxy-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0b6d114-fcb4-4a25-988c-1ee301ef0419.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.397 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.429 244018 DEBUG nova.compute.manager [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG nova.compute.manager [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Processing event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:47:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.464 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.703 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023642.7032177, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Started (Lifecycle Event)
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.730 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.734 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023642.7032783, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.734 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Paused (Lifecycle Event)
Feb 25 12:47:22 compute-0 podman[350064]: 2026-02-25 12:47:22.749612064 +0000 UTC m=+0.055258451 container create a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.752 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.774 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:47:22 compute-0 systemd[1]: Started libpod-conmon-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope.
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:47:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:22 compute-0 podman[350064]: 2026-02-25 12:47:22.719223416 +0000 UTC m=+0.024869873 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a06c297626a192e338cd1e2dd123eba9e16164173bf46ec6839ab1a455ed742/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:22 compute-0 podman[350064]: 2026-02-25 12:47:22.833238886 +0000 UTC m=+0.138885273 container init a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:47:22 compute-0 podman[350064]: 2026-02-25 12:47:22.837250469 +0000 UTC m=+0.142896856 container start a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:47:22 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : New worker (350083) forked
Feb 25 12:47:22 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : Loading success.
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.910 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.913 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51f1d402-83a2-4010-84d2-fe2724faf4c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.923 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c482202-81 in ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.926 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c482202-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.926 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07124876-1581-4c07-8ba5-87a3be1b38f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.928 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1da0a89-ffeb-49e4-b87c-b79d33ddd502]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.939 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[07ef7971-6888-40d1-8806-034bc7329ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7305ec96-fcca-4899-b1d9-87a2f4ab6602]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294403115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0e1898-f96b-48a2-95d9-0600c5f274d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67b27c46-0446-4c5c-9af3-51112fc990a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:22 compute-0 NetworkManager[49836]: <info>  [1772023642.9819] manager: (tap5c482202-80): new Veth device (/org/freedesktop/NetworkManager/Devices/504)
Feb 25 12:47:22 compute-0 systemd-udevd[349958]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.987 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:22 compute-0 nova_compute[244014]: 2026-02-25 12:47:22.992 244018 DEBUG nova.compute.provider_tree [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.007 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44310bfe-44d7-4117-a0a0-2532589ed728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0278bf-e9b7-45c0-bda4-bf14f8c1ead7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.023 244018 DEBUG nova.scheduler.client.report [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:47:23 compute-0 NetworkManager[49836]: <info>  [1772023643.0334] device (tap5c482202-80): carrier: link connected
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.038 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[477320fd-5993-4b1c-9d6f-2d03b4cc3f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.048 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.049 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18386e0e-00ec-4edf-b138-c28a404547ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350104, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1294403115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1358bb4-92d0-4018-b167-ea875f76a4ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:afc3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561244, 'tstamp': 561244}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350105, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb9186d-40ab-4fcb-b9de-6582f0f1ae15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350106, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35e27d57-4925-4b89-8467-e3ebe4ea9912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.118 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.118 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[179700c6-4d67-44b2-aec0-171cbac34d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.143 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.145 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.145 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.146 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:23 compute-0 NetworkManager[49836]: <info>  [1772023643.1489] manager: (tap5c482202-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:23 compute-0 kernel: tap5c482202-80: entered promiscuous mode
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.151 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:23 compute-0 ovn_controller[147040]: 2026-02-25T12:47:23Z|01215|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.154 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d55ef3a-7888-436b-a0a1-f6511dfa82dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.156 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.157 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'env', 'PROCESS_TAG=haproxy-5c482202-8994-4033-a0a9-167d92a9e301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c482202-8994-4033-a0a9-167d92a9e301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
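
[annotation] After rendering the haproxy config shown above into /var/lib/neutron/ovn-metadata-proxy/<network>.conf, the agent launches haproxy inside the ovnmeta- network namespace through rootwrap. A sketch of the same invocation with plain subprocess (run as root, skipping rootwrap); the command tokens are copied from the log line above:

    # Sketch: the haproxy launch performed at 12:47:23.157, minus rootwrap.
    import subprocess

    net = '5c482202-8994-4033-a0a9-167d92a9e301'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net}',
         'env', f'PROCESS_TAG=haproxy-{net}',
         'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
        check=True)
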
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.180 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.275 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.276 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.277 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating image(s)
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.299 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.322 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.342 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.346 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.381 244018 DEBUG nova.policy [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.426 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.433 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
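
[annotation] The "python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ..." wrapper above is oslo.concurrency's resource-limit helper, which caps the qemu-img child's address space and CPU time. A sketch of the same call through processutils, with the ProcessLimits values copied from the logged flags:

    # Sketch: the qemu-img info call above, expressed via oslo.concurrency.
    # processutils re-execs itself as `python -m oslo_concurrency.prlimit`
    # to apply the rlimits, which is exactly the CMD string in the log.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # --as=1073741824
        cpu_time=30)                # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
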
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.434 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.435 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.435 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.459 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.463 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:23 compute-0 podman[350190]: 2026-02-25 12:47:23.48607037 +0000 UTC m=+0.047272816 container create 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:47:23 compute-0 systemd[1]: Started libpod-conmon-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope.
Feb 25 12:47:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83335b7bb0badbab01818a01b3ea3886d2d30cde665b02056377fe1391514531/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:23 compute-0 podman[350190]: 2026-02-25 12:47:23.461972099 +0000 UTC m=+0.023174575 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:47:23 compute-0 podman[350190]: 2026-02-25 12:47:23.567109268 +0000 UTC m=+0.128311814 container init 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 12:47:23 compute-0 podman[350190]: 2026-02-25 12:47:23.572910092 +0000 UTC m=+0.134112588 container start 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:47:23 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : New worker (350250) forked
Feb 25 12:47:23 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : Loading success.
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.634 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.636 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f27bb-dce5-4e1c-bfbe-2510db368d62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.637 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.638 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:23 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11da65b3-26c8-4a93-a0d1-c6920d8894ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.741 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
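
[annotation] The import above is how nova pushes the locally cached base image into the Ceph "vms" pool. A standalone equivalent, with the command copied verbatim from the log (requires the 'openstack' cephx keyring to be readable):

    # Sketch: the rbd import nova ran above, wrapped in subprocess.
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', base,
         '95730650-36ac-4eee-8b22-9ea3f01d82d1_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
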
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.798 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
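
[annotation] The resize to 1073741824 bytes (1 GiB, the flavor's root_gb) can also be done with the python-rbd bindings that nova.storage.rbd_utils wraps. A sketch under that assumption; pool name, image name, and client id are from the log, error handling is omitted:

    # Sketch: resize the freshly imported image to 1 GiB using python-rbd.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '95730650-36ac-4eee-8b22-9ea3f01d82d1_disk') as img:
                img.resize(1073741824)  # matches the logged resize target
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
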
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.873 244018 DEBUG nova.objects.instance [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.886 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Ensure instance console log exists: /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:23 compute-0 nova_compute[244014]: 2026-02-25 12:47:23.888 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:24 compute-0 ceph-mon[76335]: pgmap v2004: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.143 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Successfully created port: 0660ccb7-a986-45dd-8aa8-e10ddd71144a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:47:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.548 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.548 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.549 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.549 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.550 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No event matching network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a in dict_keys([('network-vif-plugged', 'ebd67787-8af9-4a7f-8b38-6d18daff8ff3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 WARNING nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with vm_state building and task_state spawning.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.552 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.552 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.553 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Processing event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.553 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.554 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.554 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.555 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.555 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.556 244018 WARNING nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with vm_state building and task_state spawning.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.557 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
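
[annotation] The pop_instance_event / wait_for_instance_event sequence above is nova parking the build until neutron confirms VIF plugging; an event that arrives with no registered waiter is dropped, which is what the two "Received unexpected event ..." warnings record. A much-simplified illustration of that pattern (not nova's actual code, names are invented for the sketch):

    # Simplified illustration only: a per-instance event registry where
    # events with no waiter are the "unexpected event" case in the log.
    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)  # instance -> {(name, tag): Event}

        def prepare(self, instance, name, tag):
            # Called before triggering the action; waiter blocks on ev.wait().
            ev = threading.Event()
            with self._lock:
                self._events[instance][(name, tag)] = ev
            return ev

        def pop(self, instance, name, tag):
            # Called when the external event arrives from neutron.
            with self._lock:
                ev = self._events[instance].pop((name, tag), None)
            if ev is None:
                return False  # no waiter registered: log a warning and drop
            ev.set()
            return True
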
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.562 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023644.5616028, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.562 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Resumed (Lifecycle Event)
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.565 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.569 244018 INFO nova.virt.libvirt.driver [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance spawned successfully.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.570 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.586 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.594 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.600 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.601 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.601 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.602 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.603 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.604 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.630 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.670 244018 INFO nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 11.99 seconds to spawn the instance on the hypervisor.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.671 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.745 244018 INFO nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 13.27 seconds to build instance.
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:24 compute-0 nova_compute[244014]: 2026-02-25 12:47:24.940 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.058 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Successfully updated port: 0660ccb7-a986-45dd-8aa8-e10ddd71144a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.167 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.167 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.168 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.176 244018 DEBUG nova.compute.manager [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.176 244018 DEBUG nova.compute.manager [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.177 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.369 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:47:25 compute-0 nova_compute[244014]: 2026-02-25 12:47:25.903 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:26 compute-0 ceph-mon[76335]: pgmap v2005: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.081 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.124 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.186 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.186 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance network_info: |[{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
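
[annotation] The network_info payload logged between the |...| bars above is plain JSON. A sketch pulling the MAC and fixed IP out of it; the literal below is a trimmed stand-in keeping only the keys used, with values copied from the log:

    # Sketch: extract id/MAC/IP from a network_info blob like the one above.
    import json

    blob = '''[{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a",
                "address": "fa:16:3e:cb:b0:7a",
                "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                         "ips": [{"address": "10.100.0.12"}]}]}}]'''
    for vif in json.loads(blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], vif['address'], ip['address'])
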
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.187 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.187 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.189 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start _get_guest_xml network_info=[{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.193 244018 WARNING nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.198 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.198 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
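
[annotation] With no flavor or image constraints (all the 0:0:0 lines above) and 1 vCPU, the only factorization is sockets=1, cores=1, threads=1, which is the single topology the log reports. A toy re-derivation of the "possible topologies" step, not nova's implementation, using the logged per-dimension limit of 65536:

    # Toy re-derivation (not nova's code): every (sockets, cores, threads)
    # triple whose product is vcpus, capped at 65536 per dimension.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_each) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
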
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.205 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:47:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062346787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.760 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
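
The pair of oslo_concurrency.processutils lines bracketing the ceph call is its standard pattern: log the command, run it, then log the return code and wall time ("returned: 0 in 0.555s"). A rough standard-library equivalent (the real helper adds retries, output masking, and root-helper support; this sketch assumes none of that):

    import shlex
    import subprocess
    import time

    def execute(cmd):
        # Log, run, log again, the way the processutils lines above read.
        print('Running cmd (subprocess): %s' % cmd)
        start = time.monotonic()
        proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
        print('CMD "%s" returned: %d in %.3fs'
              % (cmd, proc.returncode, time.monotonic() - start))
        return proc.stdout

    # Example mirroring the log (needs a reachable Ceph cluster and keyring):
    # execute('ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf')
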
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.781 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:26 compute-0 nova_compute[244014]: 2026-02-25 12:47:26.785 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.948 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.950 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.951 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[064896a2-c26e-40cf-9c36-c952c89cc386]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
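
The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event mechanism at work: the metadata agent subscribes to Port_Binding updates in the OVN southbound database and, on each match, re-evaluates whether the network still needs its metadata namespace. In this update the IPv6 address was removed from the metadata port (revision 7 to 10), and with no valid VIF ports left on this chassis for network 1002c177 the agent opts to tear the namespace down. The subscription pattern, simplified from the agent (RowEvent and its ROW_UPDATE constant are real ovsdbapp API; the agent object and the handler body are placeholders):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked once matches() accepts the update; the agent then
            # re-provisions or tears down the datapath's metadata namespace.
            self.agent.provision_datapath(row)
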
Feb 25 12:47:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1062346787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616965702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.326 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.328 244018 DEBUG nova.virt.libvirt.vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:23Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.328 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.329 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
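
nova_to_osvif_vif converts the legacy VIF dict (type "ovs") into the typed VIFOpenVSwitch object that the generic ovs plugin consumes. A shape-for-shape illustration with a plain dataclass (os-vif's real VIFOpenVSwitch is an oslo.versionedobjects class; only the fields visible in the "Converted object" line are mirrored here):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        preserve_on_delete: bool
        active: bool

    def nova_to_osvif_vif(vif):
        # Only the "ovs" branch, and only the fields the log line shows.
        assert vif['type'] == 'ovs'
        return VIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            bridge_name=vif['details']['bridge_name'],
            vif_name=vif['devname'],
            has_traffic_filtering=vif['details']['port_filter'],
            preserve_on_delete=vif['preserve_on_delete'],
            active=vif['active'],
        )
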
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.330 244018 DEBUG nova.objects.instance [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.344 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <uuid>95730650-36ac-4eee-8b22-9ea3f01d82d1</uuid>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <name>instance-00000076</name>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-404307455</nova:name>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:47:26</nova:creationTime>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <nova:port uuid="0660ccb7-a986-45dd-8aa8-e10ddd71144a">
Feb 25 12:47:27 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <system>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="serial">95730650-36ac-4eee-8b22-9ea3f01d82d1</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="uuid">95730650-36ac-4eee-8b22-9ea3f01d82d1</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </system>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <os>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </os>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <features>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </features>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/95730650-36ac-4eee-8b22-9ea3f01d82d1_disk">
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config">
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:27 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:cb:b0:7a"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <target dev="tap0660ccb7-a9"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/console.log" append="off"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <video>
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </video>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:47:27 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:47:27 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:47:27 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:47:27 compute-0 nova_compute[244014]: </domain>
Feb 25 12:47:27 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
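
The XML block above is the complete guest definition nova hands to libvirt. What happens next is roughly equivalent to the following libvirt-python calls (nova wraps them in its Guest class and adds event handling and rollback; this is a bare sketch):

    import libvirt

    def launch(xml):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.defineXML(xml)  # persist instance-00000076's definition
            dom.create()               # boot it; systemd-machined then logs
                                       # "New machine qemu-150-instance-00000076"
            return dom.UUIDString()
        finally:
            conn.close()
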
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Preparing to wait for external event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
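
Before plugging the VIF, nova registers a waiter for the network-vif-plugged-<port id> external event under a per-instance "-events" lock, so the callback Neutron sends once the port goes active can be matched to this build. The lock lines come from oslo.concurrency; a minimal sketch of the pattern (lockutils.lock is real API; pending_events and the function body stand in for nova's _create_or_get_event):

    from oslo_concurrency import lockutils

    pending_events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Serialize registration against the callback path, as the
        # "<uuid>-events" acquire/release lines above show.
        with lockutils.lock('%s-events' % instance_uuid):
            return pending_events.setdefault((instance_uuid, event_name), [])
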
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG nova.virt.libvirt.vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:23Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.347 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.347 244018 DEBUG os_vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0660ccb7-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.351 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0660ccb7-a9, col_values=(('external_ids', {'iface-id': '0660ccb7-a986-45dd-8aa8-e10ddd71144a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:b0:7a', 'vm-uuid': '95730650-36ac-4eee-8b22-9ea3f01d82d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
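
os-vif realizes the port with one OVSDB transaction: AddPortCommand attaches the tap device to br-int, then DbSetCommand writes the external_ids that ovn-controller later matches against the logical port (the "Claiming lport" lines at 12:47:28). The same effect via the ovs-vsctl CLI, driven from Python purely for illustration (os-vif itself speaks the OVSDB protocol through ovsdbapp; all values are copied from the log):

    import subprocess

    port = 'tap0660ccb7-a9'
    iface_id = '0660ccb7-a986-45dd-8aa8-e10ddd71144a'

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
         '--', 'set', 'Interface', port,
         'external_ids:iface-id=%s' % iface_id,
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:cb:b0:7a',
         'external_ids:vm-uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1'],
        check=True)

The iface-id key carries the Neutron port UUID, which is how ovn-controller recognizes the OVS interface as logical port 0660ccb7-a986-45dd-8aa8-e10ddd71144a and claims it for this chassis.
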
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:27 compute-0 NetworkManager[49836]: <info>  [1772023647.3530] manager: (tap0660ccb7-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.358 244018 INFO os_vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9')
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:cb:b0:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Using config drive
Feb 25 12:47:27 compute-0 nova_compute[244014]: 2026-02-25 12:47:27.429 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.088 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating config drive at /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config
Feb 25 12:47:28 compute-0 ceph-mon[76335]: pgmap v2006: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 12:47:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1616965702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.094 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp69z2zc0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.232 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp69z2zc0d" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
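
The config drive is an ordinary ISO9660 image; the -V config-2 flag in the mkisofs command above sets the volume label that guests search for. Inside the guest, cloud-init (or an operator) can locate and mount it by that label, roughly as follows (the mount point is arbitrary; the metadata path shown is the standard OpenStack config-drive layout):

    import subprocess

    # Resolve the device carrying the "config-2" label, then mount it read-only.
    dev = subprocess.run(['blkid', '-L', 'config-2'],
                         capture_output=True, text=True, check=True).stdout.strip()
    subprocess.run(['mount', '-o', 'ro', dev, '/mnt/config'], check=True)
    # Instance metadata then sits at /mnt/config/openstack/latest/meta_data.json
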
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.272 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.277 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.306 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.308 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.345 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.416 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.417 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deleting local config drive /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config because it was imported into RBD.
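
Because this deployment keeps instance disks in RBD, the freshly built ISO is imported into the vms pool as <uuid>_disk.config, exactly the cdrom <source> in the domain XML above, and the local copy is then removed. The existence probe nova logged three times ("rbd image ... does not exist") looks roughly like this with the python-rados/python-rbd bindings (error handling trimmed; a sketch, not nova's rbd_utils):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = '95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config'
            try:
                image = rbd.Image(ioctx, name, read_only=True)
                image.close()
                print('image exists')
            except rbd.ImageNotFound:
                print('rbd image %s does not exist' % name)  # as nova logs above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
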
Feb 25 12:47:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:47:28 compute-0 kernel: tap0660ccb7-a9: entered promiscuous mode
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.4638] manager: (tap0660ccb7-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 ovn_controller[147040]: 2026-02-25T12:47:28Z|01216|binding|INFO|Claiming lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a for this chassis.
Feb 25 12:47:28 compute-0 ovn_controller[147040]: 2026-02-25T12:47:28Z|01217|binding|INFO|0660ccb7-a986-45dd-8aa8-e10ddd71144a: Claiming fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.480 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b0:7a 10.100.0.12'], port_security=['fa:16:3e:cb:b0:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '95730650-36ac-4eee-8b22-9ea3f01d82d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b261df4d-3b33-4344-9c3c-a73feb8773db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d838fa7-2ae5-457e-b70a-ef0e132d7e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af3ba9a4-232f-4585-95fc-f83215abb671, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0660ccb7-a986-45dd-8aa8-e10ddd71144a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.481 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0660ccb7-a986-45dd-8aa8-e10ddd71144a in datapath b261df4d-3b33-4344-9c3c-a73feb8773db bound to our chassis
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b261df4d-3b33-4344-9c3c-a73feb8773db
Feb 25 12:47:28 compute-0 ovn_controller[147040]: 2026-02-25T12:47:28Z|01218|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a ovn-installed in OVS
Feb 25 12:47:28 compute-0 ovn_controller[147040]: 2026-02-25T12:47:28Z|01219|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a up in Southbound
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf72d91-1c4d-4ae7-a3cb-fd0040303fc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.495 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb261df4d-31 in ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
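
To serve 169.254.169.254 from inside the tenant network, the agent builds one namespace per datapath: a veth pair with tapb261df4d-31 inside ovnmeta-b261df4d-... and its peer tapb261df4d-30 plugged into br-int (the DelPort/AddPort transactions below handle the bridge side). A hand-rolled iproute2 equivalent, for orientation only (the agent actually drives pyroute2 through oslo.privsep, and also assigns the metadata port's own network address, omitted here as an assumption-laden detail):

    import subprocess

    ns = 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db'
    outer, inner = 'tapb261df4d-30', 'tapb261df4d-31'

    for cmd in (
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', outer, 'type', 'veth', 'peer', 'name', inner],
        ['ip', 'link', 'set', inner, 'netns', ns],
        ['ip', '-n', ns, 'link', 'set', inner, 'up'],
        ['ip', '-n', ns, 'addr', 'add', '169.254.169.254/32', 'dev', inner],
        ['ip', 'link', 'set', outer, 'up'],
    ):
        subprocess.run(cmd, check=True)
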
Feb 25 12:47:28 compute-0 systemd-machined[210048]: New machine qemu-150-instance-00000076.
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.500 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb261df4d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.501 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8837c7-886e-4c3c-ba20-4c087ff0ffcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.502 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95e070bd-fd79-4b85-8d89-b2f0f21159a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Feb 25 12:47:28 compute-0 systemd-udevd[350469]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.514 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed8623d-6b36-476e-8b5a-ec7d02134a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.5178] device (tap0660ccb7-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.5183] device (tap0660ccb7-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f847c1-2599-4d0a-9f64-60535a52dc1b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.546 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e13ad397-3509-4d5f-9df2-40c0c9ce7937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.5512] manager: (tapb261df4d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/508)
Feb 25 12:47:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:28 compute-0 systemd-udevd[350471]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a647d1a-ed4a-4df4-a821-cfda9b22a53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.580 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3e342e28-6e01-4bee-973c-d3d02b9d5bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc82d95-59aa-40b0-867e-94ba5dbead64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.6001] device (tapb261df4d-30): carrier: link connected
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.604 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa3830-7f9f-4c24-b7ab-e26154763fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.616 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[692464a7-2c95-4067-b8f1-3cac7fd1a0f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb261df4d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:6f:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561800, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350500, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.627 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f30e8bff-9b21-4889-b083-679bc7772122]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:6f1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561800, 'tstamp': 561800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350501, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6a191b-e70c-4fca-b372-f593f5b4f1e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb261df4d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:6f:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561800, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350502, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
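[annotation] The two privsep replies above are netlink dumps (RTM_NEWADDR and RTM_NEWLINK) taken inside the ovnmeta- namespace via pyroute2. A minimal sketch that reproduces the same dumps directly, assuming pyroute2 is installed and the script runs as root:

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace named in the log.
    ns = NetNS('ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db')
    try:
        for msg in ns.get_addr(index=2):        # RTM_NEWADDR, IFA_* attributes
            print(msg.get_attr('IFA_ADDRESS'))  # e.g. fe80::f816:3eff:fed9:6f1c
        link = ns.get_links(2)[0]               # RTM_NEWLINK, IFLA_* attributes
        print(link.get_attr('IFLA_IFNAME'))     # e.g. tapb261df4d-31
    finally:
        ns.close()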
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56d4b11a-269f-4052-b210-6720797a39cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.705 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11bdabf5-3ece-49fb-aaa4-6b11d3f8ecc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb261df4d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb261df4d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 NetworkManager[49836]: <info>  [1772023648.7098] manager: (tapb261df4d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Feb 25 12:47:28 compute-0 kernel: tapb261df4d-30: entered promiscuous mode
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.714 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb261df4d-30, col_values=(('external_ids', {'iface-id': 'f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
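[annotation] The del-port/add-port/db-set sequence above is issued through ovsdbapp against the local Open vSwitch database. A minimal standalone sketch of the same transaction, assuming the default ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirror the three commands logged above for tapb261df4d-30.
        txn.add(api.del_port('tapb261df4d-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapb261df4d-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tapb261df4d-30',
                           ('external_ids',
                            {'iface-id': 'f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca'})))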
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 ovn_controller[147040]: 2026-02-25T12:47:28Z|01220|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.717 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
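[annotation] The ENOENT here is benign: no haproxy pidfile exists yet because the proxy has not been spawned for this network. Roughly, neutron's get_value_from_file helper swallows the error and reports no value (a simplified sketch, not the verbatim implementation):

    def get_value_from_file(filename, converter=None):
        # A missing or unreadable file is logged and treated as "no value yet".
        try:
            with open(filename) as f:
                data = f.read()
            return converter(data) if converter else data
        except (OSError, ValueError):
            return None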
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6db6cd-53c1-4c8d-b7af-3ee57a0ec7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.719 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-b261df4d-3b33-4344-9c3c-a73feb8773db
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID b261df4d-3b33-4344-9c3c-a73feb8773db
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
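[annotation] The generated configuration can be syntax-checked independently; haproxy's -c flag parses and validates a config without starting the proxy. A quick sketch using the config path from the command below:

    import subprocess

    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'b261df4d-3b33-4344-9c3c-a73feb8773db.conf')
    res = subprocess.run(['haproxy', '-c', '-f', conf],
                         capture_output=True, text=True)
    # Prints validation output on success, or the parse errors otherwise.
    print(res.returncode, res.stdout or res.stderr)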
Feb 25 12:47:28 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.720 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'env', 'PROCESS_TAG=haproxy-b261df4d-3b33-4344-9c3c-a73feb8773db', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b261df4d-3b33-4344-9c3c-a73feb8773db.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
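[annotation] Once haproxy is up, the metadata VIP bound in the listener section can be probed from inside the namespace (a sketch; assumes root and that curl is present on the host):

    import subprocess

    ns = 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db'
    subprocess.run(['ip', 'netns', 'exec', ns, 'curl', '-sS', '-o', '/dev/null',
                    '-w', '%{http_code}\n', 'http://169.254.169.254/'],
                   check=False)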
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG nova.compute.manager [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.735 244018 DEBUG nova.compute.manager [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Processing event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
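[annotation] The Acquiring/acquired/released triplets above come from oslo.concurrency's lock helper wrapping the per-instance event map. The same pattern in miniature, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('95730650-36ac-4eee-8b22-9ea3f01d82d1-events')
    def _pop_event():
        # Critical section: only one greenthread mutates the event map at a time.
        pass

    _pop_event()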
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.849 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.850 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8495708, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.851 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Started (Lifecycle Event)
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.854 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.857 244018 INFO nova.virt.libvirt.driver [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance spawned successfully.
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.858 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.877 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.884 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
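[annotation] The numeric states in that line follow nova.compute.power_state: the DB still holds 0 (NOSTATE) while the hypervisor already reports 1 (RUNNING). A tiny decoder for such lines:

    # Values from nova.compute.power_state.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[0], '->', POWER_STATES[1])  # NOSTATE -> RUNNING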
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.888 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.888 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.889 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.890 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.890 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.891 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
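[annotation] Registering defaults records the hypervisor's chosen bus/model for properties the image left unset, so later operations keep identical virtual hardware. A toy equivalent of that merge, using the values just logged:

    defaults = {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
                'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
                'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}
    image_props = {'hw_disk_bus': 'scsi'}    # hypothetical user-set property
    merged = {**defaults, **image_props}     # explicit image values win
    print(merged['hw_disk_bus'], merged['hw_video_model'])  # scsi virtio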
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8503447, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Paused (Lifecycle Event)
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.946 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.950 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8541086, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Resumed (Lifecycle Event)
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.994 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:28 compute-0 nova_compute[244014]: 2026-02-25 12:47:28.996 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.000 244018 INFO nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 5.72 seconds to spawn the instance on the hypervisor.
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.000 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:29 compute-0 podman[350575]: 2026-02-25 12:47:29.052241129 +0000 UTC m=+0.054742676 container create c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.062 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:47:29 compute-0 systemd[1]: Started libpod-conmon-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope.
Feb 25 12:47:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5086818508e5fa7e94bd5cba9a5524707dcb64bf7ba443e8eab59b7dad15f1a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:47:29 compute-0 podman[350575]: 2026-02-25 12:47:29.119175179 +0000 UTC m=+0.121676746 container init c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:47:29 compute-0 podman[350575]: 2026-02-25 12:47:29.024560717 +0000 UTC m=+0.027062304 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:47:29 compute-0 podman[350575]: 2026-02-25 12:47:29.126042162 +0000 UTC m=+0.128543719 container start c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
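[annotation] The proxy container just started can be looked up by its deterministic name, neutron-haproxy-ovnmeta-<network-id>. A sketch, assuming podman is on PATH and the caller is allowed to query it:

    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db'
    out = subprocess.run(['podman', 'inspect', name],
                         capture_output=True, text=True)
    if out.returncode == 0:
        print(json.loads(out.stdout)[0]['State']['Status'])  # e.g. 'running'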
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.136 244018 INFO nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 7.13 seconds to build instance.
Feb 25 12:47:29 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : New worker (350633) forked
Feb 25 12:47:29 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : Loading success.
Feb 25 12:47:29 compute-0 podman[350588]: 2026-02-25 12:47:29.157955344 +0000 UTC m=+0.066516680 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.168 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:29 compute-0 podman[350591]: 2026-02-25 12:47:29.218976447 +0000 UTC m=+0.131680470 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01221|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01222|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01223|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 12:47:29 compute-0 NetworkManager[49836]: <info>  [1772023649.8417] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Feb 25 12:47:29 compute-0 NetworkManager[49836]: <info>  [1772023649.8428] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01224|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01225|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 12:47:29 compute-0 ovn_controller[147040]: 2026-02-25T12:47:29Z|01226|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
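[annotation] Claim/release churn like this is visible in the OVN southbound database; the Port_Binding row shows which chassis, if any, currently holds the lport. A sketch using ovn-sbctl with one of the lport UUIDs above (must run where the SB database is reachable):

    import subprocess

    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca'])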
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:29 compute-0 nova_compute[244014]: 2026-02-25 12:47:29.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.901 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.903 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.906 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.907 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfdad46-c5c5-4455-8313-a1ae295133ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
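[annotation] The "Matched UPDATE" line above is ovsdbapp's row-event machinery (the matches check it cites lives in event.py). A minimal custom event in the same style, simplified from what the agent registers:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions), as in the matched repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('Port_Binding updated:', row.logical_port)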
Feb 25 12:47:30 compute-0 ceph-mon[76335]: pgmap v2007: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:47:30 compute-0 sshd-session[350649]: Received disconnect from 91.224.92.54 port 40360:11:  [preauth]
Feb 25 12:47:30 compute-0 sshd-session[350649]: Disconnected from authenticating user root 91.224.92.54 port 40360 [preauth]
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.290 244018 DEBUG nova.compute.manager [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.291 244018 DEBUG nova.compute.manager [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.291 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.620 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
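[annotation] The cached network_info is plain JSON; pulling the fixed addresses out of an entry shaped like the one above takes a couple of comprehensions (sketch):

    def fixed_ips(vif):
        # vif is one element of the network_info list logged above.
        return [ip['address']
                for subnet in vif['network']['subnets']
                for ip in subnet['ips']
                if ip['type'] == 'fixed']
    # e.g. ['10.100.0.11'] for the first VIF; two SLAAC v6 addresses for the second.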
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.644 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.645 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.646 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.648 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.671 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.673 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.827 244018 DEBUG nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:47:30 compute-0 nova_compute[244014]: 2026-02-25 12:47:30.831 244018 WARNING nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received unexpected event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with vm_state active and task_state None.
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:47:30
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:47:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858140156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.253 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
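[annotation] The resource audit shells out to ceph df and parses the JSON; the same numbers can be read directly (a sketch; top-level key names follow ceph's JSON output and may vary slightly between releases):

    import json
    import subprocess

    out = subprocess.run(['ceph', 'df', '--format=json', '--id', 'openstack',
                          '--conf', '/etc/ceph/ceph.conf'],
                         capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)['stats']
    print(stats['total_avail_bytes'] / 1024 ** 3, 'GiB free')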
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.476 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.476 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.480 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.480 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.632 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3287MB free_disk=59.945687803439796GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 95730650-36ac-4eee-8b22-9ea3f01d82d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.757 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.845 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.920 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.921 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:31 compute-0 nova_compute[244014]: 2026-02-25 12:47:31.942 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:47:32 compute-0 ceph-mon[76335]: pgmap v2008: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:47:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3858140156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705992989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.411 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.417 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.431 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:47:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.448 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.448 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.675 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.676 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:32 compute-0 nova_compute[244014]: 2026-02-25 12:47:32.676 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2705992989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:34 compute-0 ceph-mon[76335]: pgmap v2009: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 12:47:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Feb 25 12:47:34 compute-0 nova_compute[244014]: 2026-02-25 12:47:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.979 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.980 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.981 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.982 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[002ff129-e920-44a9-8b63-83ab537a167c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.051 244018 DEBUG nova.compute.manager [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.051 244018 DEBUG nova.compute.manager [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:35 compute-0 nova_compute[244014]: 2026-02-25 12:47:35.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:36 compute-0 ceph-mon[76335]: pgmap v2010: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Feb 25 12:47:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Feb 25 12:47:36 compute-0 ovn_controller[147040]: 2026-02-25T12:47:36Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 12:47:36 compute-0 ovn_controller[147040]: 2026-02-25T12:47:36Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 12:47:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.666 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.667 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.668 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce687954-e767-4681-9c0e-1aced68aede9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:36 compute-0 nova_compute[244014]: 2026-02-25 12:47:36.692 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:47:36 compute-0 nova_compute[244014]: 2026-02-25 12:47:36.693 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:36 compute-0 nova_compute[244014]: 2026-02-25 12:47:36.714 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:36 compute-0 nova_compute[244014]: 2026-02-25 12:47:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:36 compute-0 nova_compute[244014]: 2026-02-25 12:47:36.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:47:37 compute-0 nova_compute[244014]: 2026-02-25 12:47:37.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:38 compute-0 ceph-mon[76335]: pgmap v2011: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Feb 25 12:47:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Feb 25 12:47:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:40 compute-0 ceph-mon[76335]: pgmap v2012: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Feb 25 12:47:40 compute-0 nova_compute[244014]: 2026-02-25 12:47:40.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Feb 25 12:47:40 compute-0 ovn_controller[147040]: 2026-02-25T12:47:40Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 12:47:40 compute-0 ovn_controller[147040]: 2026-02-25T12:47:40Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 12:47:40 compute-0 nova_compute[244014]: 2026-02-25 12:47:40.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:42 compute-0 ceph-mon[76335]: pgmap v2013: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Feb 25 12:47:42 compute-0 nova_compute[244014]: 2026-02-25 12:47:42.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.403 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:42 compute-0 nova_compute[244014]: 2026-02-25 12:47:42.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.405 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 185 op/s
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.520 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.523 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:47:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.527 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c642377-822b-4212-bfa8-93e3f2af7d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014518223260928079 of space, bias 1.0, pg target 0.4355466978278424 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494016506867987 of space, bias 1.0, pg target 0.7482049520603962 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4046466994069785e-06 of space, bias 4.0, pg target 0.0016855760392883742 quantized to 16 (current 16)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:47:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:47:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:44 compute-0 ceph-mon[76335]: pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 185 op/s
Feb 25 12:47:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 148 op/s
Feb 25 12:47:45 compute-0 nova_compute[244014]: 2026-02-25 12:47:45.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Feb 25 12:47:46 compute-0 ceph-mon[76335]: pgmap v2015: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 148 op/s
Feb 25 12:47:47 compute-0 nova_compute[244014]: 2026-02-25 12:47:47.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:47 compute-0 nova_compute[244014]: 2026-02-25 12:47:47.691 244018 INFO nova.compute.manager [None req-70b4cf28-6122-4eed-8752-18ace07472d8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Get console output
Feb 25 12:47:47 compute-0 nova_compute[244014]: 2026-02-25 12:47:47.698 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:47:47 compute-0 ceph-mon[76335]: pgmap v2016: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Feb 25 12:47:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Feb 25 12:47:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:48 compute-0 nova_compute[244014]: 2026-02-25 12:47:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:48 compute-0 nova_compute[244014]: 2026-02-25 12:47:48.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:47:48 compute-0 nova_compute[244014]: 2026-02-25 12:47:48.926 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.126 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.126 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.243 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.393 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.394 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.403 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.403 244018 INFO nova.compute.claims [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:47:49 compute-0 nova_compute[244014]: 2026-02-25 12:47:49.616 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:49 compute-0 ceph-mon[76335]: pgmap v2017: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Feb 25 12:47:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341875472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.191 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.196 244018 DEBUG nova.compute.provider_tree [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.213 244018 DEBUG nova.scheduler.client.report [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.240 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.291 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.291 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.308 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.323 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.396 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.398 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.399 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating image(s)
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.431 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.469 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.506 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.510 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.550 244018 DEBUG nova.policy [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.591 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.592 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.594 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.594 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.628 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.633 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 271f6569-a8f6-43a3-ac98-511eff77c426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/341875472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:50 compute-0 nova_compute[244014]: 2026-02-25 12:47:50.963 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 271f6569-a8f6-43a3-ac98-511eff77c426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.042 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.144 244018 DEBUG nova.objects.instance [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.159 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.160 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Ensure instance console log exists: /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.160 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.161 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.161 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:51 compute-0 nova_compute[244014]: 2026-02-25 12:47:51.583 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully created port: 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:47:52 compute-0 ceph-mon[76335]: pgmap v2018: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 25 12:47:52 compute-0 nova_compute[244014]: 2026-02-25 12:47:52.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:52.407 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 345 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Feb 25 12:47:52 compute-0 nova_compute[244014]: 2026-02-25 12:47:52.938 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully created port: 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:47:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:54 compute-0 ceph-mon[76335]: pgmap v2019: 305 pgs: 305 active+clean; 345 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.179 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully updated port: 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.278 244018 DEBUG nova.compute.manager [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG nova.compute.manager [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.280 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.492 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.801 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.820 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.851 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully updated port: 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:47:54 compute-0 nova_compute[244014]: 2026-02-25 12:47:54.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:55 compute-0 nova_compute[244014]: 2026-02-25 12:47:55.011 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
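The "acquired by ... :: waited" and '"released" ... :: held' pair above is produced by the lockutils.synchronized decorator, whose wrapper function is literally named "inner" (hence the inner .../lockutils.py:404/409/423 suffixes). A hedged sketch of how the agent's ProcessMonitor gets that behavior:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # Held for ~1 ms in the log above: the check itself is cheap;
            # the lock only orders concurrent monitor wakeups.
            pass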
Feb 25 12:47:55 compute-0 nova_compute[244014]: 2026-02-25 12:47:55.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:56 compute-0 ceph-mon[76335]: pgmap v2020: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Feb 25 12:47:56 compute-0 nova_compute[244014]: 2026-02-25 12:47:56.354 244018 DEBUG nova.compute.manager [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:47:56 compute-0 nova_compute[244014]: 2026-02-25 12:47:56.355 244018 DEBUG nova.compute.manager [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:47:56 compute-0 nova_compute[244014]: 2026-02-25 12:47:56.356 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:47:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:47:56 compute-0 nova_compute[244014]: 2026-02-25 12:47:56.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:47:56 compute-0 nova_compute[244014]: 2026-02-25 12:47:56.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.056 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.075 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.075 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance network_info: |[{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.076 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.076 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.079 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start _get_guest_xml network_info=[{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.083 244018 WARNING nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.088 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.088 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.095 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.095 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
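The topology lines above show nova enumerating sockets:cores:threads splits for the flavor's single vCPU under the 65536/65536/65536 limits, then sorting by the (absent) preferences; only 1:1:1 fits. A rough reconstruction of the enumeration, not nova's exact _get_possible_cpu_topologies:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product equals
        # the vCPU count and which respects the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged above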
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.103 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
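The "Running cmd (subprocess)" / "CMD ... returned: 0 in ..." pairs in this log come from oslo.concurrency's processutils: nova shells out to the ceph CLI here to fetch the monitor map that later feeds the <host> elements of the rbd disks in the guest XML. Roughly equivalent standalone call (illustrative, not nova's wrapper):

    from oslo_concurrency import processutils

    # Emits the same "Running cmd (subprocess)" and "CMD ... returned" lines.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')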
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.215 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.221 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.239 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.313 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.314 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.321 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.322 244018 INFO nova.compute.claims [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.482 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826206493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.680 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.714 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
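The "rbd image ... does not exist" probe above comes from nova's rbd_utils, which uses the rbd python bindings: opening a nonexistent image raises ImageNotFound, presumably meaning here that the instance's config-drive image has not been written yet. A standalone sketch under the same client and conf as the CLI calls above:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '271f6569-a8f6-43a3-ac98-511eff77c426_disk.config').close()
    except rbd.ImageNotFound:
        print('rbd image does not exist')  # the state nova logs above
    finally:
        ioctx.close()
        cluster.shutdown()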
Feb 25 12:47:57 compute-0 nova_compute[244014]: 2026-02-25 12:47:57.721 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:47:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701022679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.035 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.040 244018 DEBUG nova.compute.provider_tree [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:47:58 compute-0 ceph-mon[76335]: pgmap v2021: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:47:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2826206493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1701022679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.096 244018 DEBUG nova.scheduler.client.report [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
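Placement derives schedulable capacity from such inventory as capacity = (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7167 MEMORY_MB and 52 DISK_GB. A quick check over the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # capacity = (total - reserved) * allocation_ratio
        print(rc, int((inv['total'] - inv['reserved']) * inv['allocation_ratio']))
    # -> VCPU 32, MEMORY_MB 7167, DISK_GB 52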
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.256 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.257 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:47:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:47:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295620193' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.285 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.287 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.287 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.288 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.289 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.289 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.290 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
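Each "Converting VIF ..." / "Converted object VIFOpenVSwitch(...)" pair above maps the neutron port dict onto an os-vif object that the libvirt vif driver then consumes. A hedged sketch of building the same object directly, with the nested Network and port-profile sub-objects omitted for brevity:

    import os_vif
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # registers the versioned object classes
    vif = osv_vif.VIFOpenVSwitch(
        id='3de24fa3-5df5-470c-98d2-1a60e0ebfc0c',
        address='fa:16:3e:c8:6b:63',
        bridge_name='br-int',
        vif_name='tap3de24fa3-5d',
        plugin='ovs',
        has_traffic_filtering=True,
        active=False,
        preserve_on_delete=False)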
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.291 244018 DEBUG nova.objects.instance [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.455 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <uuid>271f6569-a8f6-43a3-ac98-511eff77c426</uuid>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <name>instance-00000077</name>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1288096549</nova:name>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:47:57</nova:creationTime>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:port uuid="3de24fa3-5df5-470c-98d2-1a60e0ebfc0c">
Feb 25 12:47:58 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <nova:port uuid="0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934">
Feb 25 12:47:58 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe46:c29f" ipVersion="6"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe46:c29f" ipVersion="6"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <system>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="serial">271f6569-a8f6-43a3-ac98-511eff77c426</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="uuid">271f6569-a8f6-43a3-ac98-511eff77c426</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </system>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <os>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </os>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <features>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </features>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/271f6569-a8f6-43a3-ac98-511eff77c426_disk">
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/271f6569-a8f6-43a3-ac98-511eff77c426_disk.config">
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </source>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:47:58 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c8:6b:63"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <target dev="tap3de24fa3-5d"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:46:c2:9f"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <target dev="tap0dbf7aaf-2d"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/console.log" append="off"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <video>
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </video>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:47:58 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:47:58 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:47:58 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:47:58 compute-0 nova_compute[244014]: </domain>
Feb 25 12:47:58 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.457 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Preparing to wait for external event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:47:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.457 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Preparing to wait for external event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.460 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.460 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.461 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.461 244018 DEBUG os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.463 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.463 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3de24fa3-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3de24fa3-5d, col_values=(('external_ids', {'iface-id': '3de24fa3-5df5-470c-98d2-1a60e0ebfc0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:6b:63', 'vm-uuid': '271f6569-a8f6-43a3-ac98-511eff77c426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 NetworkManager[49836]: <info>  [1772023678.4687] manager: (tap3de24fa3-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.477 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.478 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.479 244018 INFO os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d')
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.480 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.480 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.481 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.481 244018 DEBUG os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.484 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0dbf7aaf-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.485 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0dbf7aaf-2d, col_values=(('external_ids', {'iface-id': '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:c2:9f', 'vm-uuid': '271f6569-a8f6-43a3-ac98-511eff77c426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 NetworkManager[49836]: <info>  [1772023678.4868] manager: (tap0dbf7aaf-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.491 244018 INFO os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d')
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.521 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:47:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.572 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:c8:6b:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.595 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:46:c2:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.595 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Using config drive
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.629 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.717 244018 DEBUG nova.policy [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.906 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.907 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.907 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating image(s)
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.929 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.954 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.982 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:58 compute-0 nova_compute[244014]: 2026-02-25 12:47:58.986 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.044 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating config drive at /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.048 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzb37u9ux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2295620193' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.088 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.089 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.090 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.090 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.109 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.112 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.190 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzb37u9ux" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.231 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.239 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.277 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.278 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.337 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.379 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.407 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.408 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deleting local config drive /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config because it was imported into RBD.
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.411 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:47:59 compute-0 kernel: tap3de24fa3-5d: entered promiscuous mode
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.4471] manager: (tap3de24fa3-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01227|binding|INFO|Claiming lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for this chassis.
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01228|binding|INFO|3de24fa3-5df5-470c-98d2-1a60e0ebfc0c: Claiming fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.4590] manager: (tap0dbf7aaf-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Feb 25 12:47:59 compute-0 kernel: tap0dbf7aaf-2d: entered promiscuous mode
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01229|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c ovn-installed in OVS
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01230|if_status|INFO|Not updating pb chassis for 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 now as sb is readonly
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01231|binding|INFO|Claiming lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for this chassis.
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01232|binding|INFO|0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934: Claiming fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01233|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c up in Southbound
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01234|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 ovn-installed in OVS
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.474 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.475 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 bound to our chassis
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.477 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:47:59 compute-0 ovn_controller[147040]: 2026-02-25T12:47:59Z|01235|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 up in Southbound
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.489 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], port_security=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe46:c29f/64 2001:db8::f816:3eff:fe46:c29f/64', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:47:59 compute-0 systemd-udevd[351217]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:59 compute-0 systemd-udevd[351218]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c41c82fb-651c-4cd0-8d30-59519985d3c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 systemd-machined[210048]: New machine qemu-151-instance-00000077.
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.5067] device (tap3de24fa3-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.5075] device (tap3de24fa3-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.5084] device (tap0dbf7aaf-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:47:59 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Feb 25 12:47:59 compute-0 NetworkManager[49836]: <info>  [1772023679.5109] device (tap0dbf7aaf-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.516 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d7253f-d6a2-4c33-9a89-098862217bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.521 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe0a436-56fb-4d3f-902b-5f651fa87dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.543 244018 DEBUG nova.objects.instance [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.546 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0e28de5f-8f9a-46a6-9142-c13728dc82b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1a060d-3259-4848-86ab-a6de57452367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351269, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
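The reply above is a pyroute2 RTM_NEWLINK dump of the veth half (tapd0b6d114-f1) inside the ovnmeta-d0b6d114-... namespace, serialized back from the agent's privileged helper. A minimal sketch of reading the same attributes with plain pyroute2, assuming only the namespace name shown in the reply's 'target' field (this is not the agent's own code):

    from pyroute2 import NetNS

    # Namespace name copied from the 'target' field in the reply above.
    with NetNS('ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419') as ns:
        for link in ns.get_links():
            # Attributes arrive as ('IFLA_*', value) pairs, the same shape
            # privsep serializes into the reply.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))

Running this requires root (or CAP_NET_ADMIN plus access to /run/netns), which is exactly why the agent routes it through privsep.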
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Ensure instance console log exists: /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.567 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.567 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
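The three lockutils lines above are the standard oslo.concurrency pattern: the mdev allocation step runs under a named in-process lock, and the acquire/release bookkeeping produces the waited/held timings. A minimal sketch of the same pattern (illustrative, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # The decorator's wrapper emits the Acquiring/acquired/released
        # DEBUG lines seen above, with the waited/held durations.
        return []  # no mediated devices requested for this flavor

Both timings here are 0.000s: nothing contended for the lock and no vGPU work was needed.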
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e85700d-1543-42d2-b020-1590d326fdd6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351273, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351273, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
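This RTM_NEWADDR pair shows the namespace-side tap carrying both the subnet address 10.100.0.2/28 and the metadata service address 169.254.169.254/32. A hedged pyroute2 equivalent of plumbing that second address (the agent does this through privsep; names and values are copied from the reply, the code itself is illustrative):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419') as ns:
        idx = ns.link_lookup(ifname='tapd0b6d114-f1')[0]
        # The well-known metadata address, configured as a /32 host address.
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)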
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 podman[351189]: 2026-02-25 12:47:59.568818953 +0000 UTC m=+0.103888174 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.571 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.571 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.572 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.573 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301
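The DelPortCommand/AddPortCommand/DbSetCommand sequence above makes sure the metadata tap sits on br-int with its iface-id stamped so ovn-controller can bind it; the add and set were logged as no-ops because the port was already wired. A sketch of the same operations through the public ovsdbapp API; the socket path and timeout are assumptions, while the port, bridge, and iface-id values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd0b6d114-f0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd0b6d114-f0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd0b6d114-f0',
            ('external_ids',
             {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'})))

The agent batches them as separate n=1 transactions, which is why each command gets its own "Running txn" line above.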
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.585 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05fc4223-ea30-41d0-985d-def68c53d3aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 podman[351193]: 2026-02-25 12:47:59.589504857 +0000 UTC m=+0.122231142 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
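The two podman lines are periodic healthcheck executions for the ovn_metadata_agent and ovn_controller containers, both reporting health_status=healthy with a zero failing streak. The same status can be read back out-of-band; a small hedged helper, with the container name taken from the log and the field names per podman's inspect schema as I understand it:

    import json
    import subprocess

    out = subprocess.check_output(['podman', 'inspect', 'ovn_metadata_agent'])
    health = json.loads(out)[0]['State']['Health']
    # Matches the health_status / health_failing_streak pair in the log line.
    print(health['Status'], health['FailingStreak'])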
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.608 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2d39ec1a-6ebc-4d92-9556-5c60e2fbdabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.610 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c90119b4-4216-4c53-a84a-1480c09fdc9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.627 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[660595bd-a11f-4b7a-8508-17d49fdfd9c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c65e96f6-db29-4ef9-8bb2-6b6958801209]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351280, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[918eae70-339d-4bc6-a900-1845833a8184]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5c482202-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561254, 'tstamp': 561254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351281, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.655 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:47:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.659 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.793 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023679.7924416, 271f6569-a8f6-43a3-ac98-511eff77c426 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Started (Lifecycle Event)
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.817 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.822 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023679.7952223, 271f6569-a8f6-43a3-ac98-511eff77c426 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.822 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Paused (Lifecycle Event)
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.850 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.855 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:47:59 compute-0 nova_compute[244014]: 2026-02-25 12:47:59.886 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] During sync_power_state the instance has a pending task (spawning). Skip.
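The sync above compares the database's power_state 0 against the hypervisor's reported 3 while the instance is still building/spawning, and correctly skips any correction. The numeric codes come from nova.compute.power_state; the mapping below is my reading of that module and is worth checking against your release:

    # nova.compute.power_state constants (hedged reproduction)
    POWER_STATES = {
        0: 'NOSTATE',    # the DB value while the instance is still spawning
        1: 'RUNNING',
        3: 'PAUSED',     # what libvirt reports mid-spawn, per the log above
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[3])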
Feb 25 12:48:00 compute-0 ceph-mon[76335]: pgmap v2022: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.135 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Successfully created port: cbf23880-21db-4752-be67-0abdb0153c2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.436 244018 DEBUG nova.compute.manager [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.438 244018 DEBUG nova.compute.manager [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Processing event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:48:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.864 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Successfully updated port: cbf23880-21db-4752-be67-0abdb0153c2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.883 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.884 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.884 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.958 244018 DEBUG nova.compute.manager [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-changed-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.958 244018 DEBUG nova.compute.manager [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Refreshing instance network info cache due to event network-changed-cbf23880-21db-4752-be67-0abdb0153c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:48:00 compute-0 nova_compute[244014]: 2026-02-25 12:48:00.959 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:01 compute-0 nova_compute[244014]: 2026-02-25 12:48:01.081 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:02 compute-0 ceph-mon[76335]: pgmap v2023: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.230 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
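The network_info payload written to the instance cache above is plain JSON. A small hedged helper for pulling the useful bits (device name, fixed IP, MTU) out of such a record; the literal below is a trimmed copy of the logged payload:

    import json

    raw = '''[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b",
               "devname": "tapcbf23880-21",
               "network": {"meta": {"mtu": 1442},
                           "subnets": [{"cidr": "10.100.0.16/28",
                                        "ips": [{"address": "10.100.0.30"}]}]}}]'''

    for vif in json.loads(raw):
        mtu = vif['network']['meta']['mtu']
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['devname'], ip['address'], subnet['cidr'], mtu)

Note the MTU of 1442 on this "tunneled": true network: 1500 minus the Geneve encapsulation overhead OVN reserves for tenant traffic.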
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.272 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.272 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance network_info: |[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.273 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.273 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Refreshing network info cache for port cbf23880-21db-4752-be67-0abdb0153c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.277 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start _get_guest_xml network_info=[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.281 244018 WARNING nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.287 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.287 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.291 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.291 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
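The pair of probes above is nova confirming that the host can do CPU time accounting: no cpu controller is available via cgroups v1 here, but the unified (v2) hierarchy has one. A hedged sketch of the v2 half of that check; nova's real probe goes through its libvirt host abstraction, this simply reads the kernel's controller list directly:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        try:
            enabled = Path('/sys/fs/cgroup/cgroup.controllers').read_text()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted at the usual root
        return 'cpu' in enabled.split()

    print(has_cgroupsv2_cpu_controller())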
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.292 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.292 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.293 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.293 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.296 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
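The topology walk above starts from all-zero flavor/image limits and preferences, clamps against the 65536 maxima, and for a single vCPU can only produce 1 socket x 1 core x 1 thread. A toy enumeration of that search space, not nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) split whose product is vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged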
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.301 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
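Nova shells out here to discover the Ceph monitor map before building the guest's RBD disk definitions. The logged command can be reproduced through the same oslo API, with the arguments copied from the log line:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')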
Feb 25 12:48:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 MiB/s wr, 58 op/s
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.544 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.545 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.545 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.546 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.546 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No event matching network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in dict_keys([('network-vif-plugged', '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 WARNING nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with vm_state building and task_state spawning.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.548 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.548 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.549 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Processing event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.549 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.551 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.551 244018 WARNING nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with vm_state building and task_state spawning.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.552 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
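The sequence above is the spawn path's external-event handshake: the virt driver plugs VIFs and blocks on network-vif-plugged, while Neutron's notifications race the waiter, so late or duplicate events for tags the waiter no longer tracks are logged as "unexpected" warnings and dropped; the wait itself still completed in 2 seconds. A toy, hedged model of that prepare/pop pattern using threading primitives (the real one is nova.compute.manager.InstanceEvents):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # mirrors the "-events" lock lines
            self._events = {}               # (name, tag) -> threading.Event

        def prepare(self, name, tag):
            with self._lock:
                return self._events.setdefault((name, tag), threading.Event())

        def pop(self, name, tag):
            with self._lock:
                event = self._events.pop((name, tag), None)
            if event is None:
                return False  # "No event matching ..." -> unexpected, dropped
            event.set()
            return True

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged',
                            '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934')
    events.pop('network-vif-plugged',
               '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934')  # event delivered
    waiter.wait(timeout=300)                            # returns immediately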
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.558 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.559 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023682.557369, 271f6569-a8f6-43a3-ac98-511eff77c426 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.559 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Resumed (Lifecycle Event)
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.567 244018 INFO nova.virt.libvirt.driver [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance spawned successfully.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.567 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.576 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.581 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.595 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.596 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.597 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.597 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.598 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.608 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] During sync_power_state the instance has a pending task (spawning). Skip.
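The power-state sync above compares the database value (0) with what libvirt reports (1). Those integers are nova.compute.power_state constants; for reference:

    # nova.compute.power_state constants (stable values in nova):
    NOSTATE   = 0x00   # DB still shows 0: no state recorded for the new guest yet
    RUNNING   = 0x01   # what libvirt reports once the domain is started
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07

Because task_state is still spawning, the handler skips the sync instead of forcing the DB to RUNNING mid-build, which is exactly what the "pending task (spawning). Skip." line says.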
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.660 244018 INFO nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 12.26 seconds to spawn the instance on the hypervisor.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.661 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.766 244018 INFO nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 13.41 seconds to build instance.
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.790 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
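The lock release above brackets the whole build: everything ran under a semaphore named by the instance UUID (held 13.664s, matching the 13.41s build time plus bookkeeping), so concurrent operations on the same instance serialize. A minimal sketch of that pattern with oslo.concurrency's public decorator, not nova's exact wrapper:

    from oslo_concurrency import lockutils

    def build_and_run_instance(instance_uuid):
        @lockutils.synchronized(instance_uuid)
        def _locked_do_build_and_run_instance():
            ...  # create the guest, plug VIFs, wait for network-vif-plugged
        _locked_do_build_and_run_instance()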
Feb 25 12:48:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:48:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557137040' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.849 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.870 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:02 compute-0 nova_compute[244014]: 2026-02-25 12:48:02.873 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2557137040' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:48:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:48:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889701299' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.445 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
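nova.storage.rbd_utils shells out to `ceph mon dump --format=json` via oslo.concurrency's processutils, as logged above, to learn the monitor addresses that later appear as the <host> elements in the disk XML below. Roughly, with the field parsing hedged since the mon-dump JSON layout varies by Ceph release:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # On this cluster this resolves to 192.168.122.100:6789, the address in the
    # libvirt <host name=... port=.../> entries; 'public_addr' is one common field.
    addrs = [m.get('public_addr', m.get('addr', '')).split('/')[0] for m in mons]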
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.447 244018 DEBUG nova.virt.libvirt.vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:58Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.447 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.448 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.450 244018 DEBUG nova.objects.instance [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.564 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <uuid>9c11f17d-5dba-4ece-8340-1c4ff0939294</uuid>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <name>instance-00000078</name>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-83970638</nova:name>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:48:02</nova:creationTime>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <nova:port uuid="cbf23880-21db-4752-be67-0abdb0153c2b">
Feb 25 12:48:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <system>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="serial">9c11f17d-5dba-4ece-8340-1c4ff0939294</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="uuid">9c11f17d-5dba-4ece-8340-1c4ff0939294</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </system>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <os>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </os>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <features>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </features>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9c11f17d-5dba-4ece-8340-1c4ff0939294_disk">
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config">
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:48:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:72:a7:94"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <target dev="tapcbf23880-21"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/console.log" append="off"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <video>
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </video>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:48:03 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:48:03 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:48:03 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:48:03 compute-0 nova_compute[244014]: </domain>
Feb 25 12:48:03 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
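The dump above is the complete domain XML nova hands to libvirt: an RBD-backed virtio root disk, a SATA config-drive CD-ROM, an ethernet-type tap interface for OVS, q35 machine type, host-model CPU. A standalone way to define and boot such XML with libvirt-python (nova itself drives this through nova.virt.libvirt rather than calling these directly; the file name here is hypothetical):

    import libvirt

    xml = open('instance-00000078.xml').read()   # hypothetical copy of the dump above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # boot it; systemd later logs "Started Virtual Machine"
    finally:
        conn.close()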
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Preparing to wait for external event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.566 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.566 244018 DEBUG nova.virt.libvirt.vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:58Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG os_vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf23880-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbf23880-21, col_values=(('external_ids', {'iface-id': 'cbf23880-21db-4752-be67-0abdb0153c2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:a7:94', 'vm-uuid': '9c11f17d-5dba-4ece-8340-1c4ff0939294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:03 compute-0 NetworkManager[49836]: <info>  [1772023683.5734] manager: (tapcbf23880-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:03 compute-0 nova_compute[244014]: 2026-02-25 12:48:03.580 244018 INFO os_vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21')
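The plug that just succeeded is the ovsdbapp transaction logged at 12:48:03.568-571: an idempotent add-bridge (which "caused no change"), an add-port, and a db-set stamping the Interface row with the Neutron port ID and MAC so ovn-controller can claim it. The same transaction written against ovsdbapp's Open_vSwitch schema API; the socket path is an assumption for this host, and os-vif manages its own IDL connection internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapcbf23880-21', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapcbf23880-21',
            ('external_ids', {
                'iface-id': 'cbf23880-21db-4752-be67-0abdb0153c2b',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:72:a7:94',
                'vm-uuid': '9c11f17d-5dba-4ece-8340-1c4ff0939294'})))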
Feb 25 12:48:04 compute-0 ceph-mon[76335]: pgmap v2024: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 MiB/s wr, 58 op/s
Feb 25 12:48:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3889701299' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.406 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:72:a7:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Using config drive
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.422 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.872 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updated VIF entry in instance network info cache for port cbf23880-21db-4752-be67-0abdb0153c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.873 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:04 compute-0 nova_compute[244014]: 2026-02-25 12:48:04.887 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
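The cache update above carries the full network_info JSON for the port. A short helper pulling the useful bits (fixed IP, MTU) out of that structure, with the keys exactly as logged:

    def fixed_ips(network_info):
        # network_info is the list-of-VIF-dicts shape shown in the log line above.
        for vif in network_info:
            mtu = vif['network']['meta']['mtu']
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['id'], ip['address'], mtu

    # For the entry above this yields:
    # ('cbf23880-21db-4752-be67-0abdb0153c2b', '10.100.0.30', 1442)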
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.098 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating config drive at /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.105 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeod7h4ha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.243 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeod7h4ha" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
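The config drive is just an ISO9660 image built from a staging directory of metadata files. The command is exactly what oslo.concurrency logged; reproduced with subprocess to make the quoting explicit, since the multi-word -publisher value is a single argv element that the log renders unquoted:

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpeod7h4ha',   # temporary staging tree holding the metadata files
    ], check=True)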
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.278 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.283 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ceph-mon[76335]: pgmap v2025: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.412 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.413 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deleting local config drive /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config because it was imported into RBD.
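Because this deployment stores disks in RBD, the locally built ISO is probed for, imported into the vms pool, then deleted, which is what the repeated "rbd image ... does not exist" lines from rbd_utils.py:80 were checking. A sketch of that probe-then-import flow: the existence check uses the Ceph python bindings (roughly what rbd_utils does), and the import reuses the exact CLI from the log:

    import subprocess
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    name = '9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config'
    try:
        rbd.Image(ioctx, name).close()
        exists = True
    except rbd.ImageNotFound:
        exists = False   # the case logged three times above

    if not exists:
        subprocess.run([
            'rbd', 'import', '--pool', 'vms',
            '/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config',
            name, '--image-format=2', '--id', 'openstack',
            '--conf', '/etc/ceph/ceph.conf'], check=True)
    ioctx.close()
    cluster.shutdown()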
Feb 25 12:48:05 compute-0 kernel: tapcbf23880-21: entered promiscuous mode
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.4835] manager: (tapcbf23880-21): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_controller[147040]: 2026-02-25T12:48:05Z|01236|binding|INFO|Claiming lport cbf23880-21db-4752-be67-0abdb0153c2b for this chassis.
Feb 25 12:48:05 compute-0 ovn_controller[147040]: 2026-02-25T12:48:05Z|01237|binding|INFO|cbf23880-21db-4752-be67-0abdb0153c2b: Claiming fa:16:3e:72:a7:94 10.100.0.30
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.500 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:a7:94 10.100.0.30'], port_security=['fa:16:3e:72:a7:94 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '9c11f17d-5dba-4ece-8340-1c4ff0939294', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3635704-b98f-49df-8dc5-9052a34d8970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98aaa3fd-2372-44ea-9de4-f4c0cf776cf4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08599aa3-75c1-4d0f-bdce-36e8497cd876, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cbf23880-21db-4752-be67-0abdb0153c2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.501 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cbf23880-21db-4752-be67-0abdb0153c2b in datapath f3635704-b98f-49df-8dc5-9052a34d8970 bound to our chassis
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.503 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3635704-b98f-49df-8dc5-9052a34d8970
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e729dd3-a5af-48f5-99f7-534f2ccc319e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_controller[147040]: 2026-02-25T12:48:05Z|01238|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b ovn-installed in OVS
Feb 25 12:48:05 compute-0 ovn_controller[147040]: 2026-02-25T12:48:05Z|01239|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b up in Southbound
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.518 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3635704-b1 in ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.520 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3635704-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38b6d0f7-3dc8-4452-890c-d0a677afe925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d475cef-b256-4b5e-acdc-4de2376e5460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
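With the port bound to this chassis, the OVN metadata agent provisions a per-network namespace (ovnmeta-f3635704-...) and a veth pair: tapf3635704-b0 stays in the root namespace (NetworkManager sees it below) and tapf3635704-b1 goes inside. Neutron does this through its privsep daemon; a rough direct equivalent with pyroute2, assuming the namespace already exists, with interface names taken from the log:

    from pyroute2 import IPRoute, NetNS

    ns_name = 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970'
    ipr = IPRoute()
    ipr.link('add', ifname='tapf3635704-b0', kind='veth',
             peer='tapf3635704-b1')
    idx = ipr.link_lookup(ifname='tapf3635704-b1')[0]
    ipr.link('set', index=idx, net_ns_fd=ns_name)   # move the peer into the namespace
    ipr.close()

    with NetNS(ns_name) as ns:
        idx = ns.link_lookup(ifname='tapf3635704-b1')[0]
        ns.link('set', index=idx, state='up')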
Feb 25 12:48:05 compute-0 systemd-udevd[351462]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:48:05 compute-0 systemd-machined[210048]: New machine qemu-152-instance-00000078.
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.537 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[206cfc65-86a5-4fbb-9b6c-6642adf2183f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.5457] device (tapcbf23880-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.5468] device (tapcbf23880-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:48:05 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94021274-fa3f-493f-b01b-d91b97b52206]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.597 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5cbd44-a2d7-4b55-8786-1329348ada3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.6044] manager: (tapf3635704-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4036167c-51c6-4d1e-9a41-dbf94e7afe27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.636 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ef6056-f264-4854-bfc7-6d66b0457884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.641 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67fdf63b-2c0b-4423-a65f-1007c80c9983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.6636] device (tapf3635704-b0): carrier: link connected
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.669 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4024c6ef-d8ea-4aaa-9068-789b9d196ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.690 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64e88a62-3f5a-46f5-9171-48561a054ecc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3635704-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:cb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565507, 'reachable_time': 18528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351494, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.708 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49c34c94-be51-42ba-be54-f80cd3237bf8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:cb09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565507, 'tstamp': 565507}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351495, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.727 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b50f16fa-14ba-43b8-9dc7-b4d6da55e265]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3635704-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:cb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565507, 'reachable_time': 18528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351496, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
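[annotation] The privsep replies above are netlink dumps (RTM_NEWLINK for the veth tap device, RTM_NEWADDR for its link-local address) that the metadata agent reads from inside the ovnmeta- namespace; neutron issues these through pyroute2 behind the privsep daemon. A minimal standalone sketch of reading the same attributes with pyroute2 (illustrative only, not the agent's actual code path; use pyroute2.NetNS('ovnmeta-...') instead of IPRoute to query inside the namespace):

    # Sketch, assuming pyroute2 is installed. Prints name, MAC and operstate
    # for tap devices, the same fields visible in the dump above.
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():                 # one RTM_NEWLINK message per interface
            name = link.get_attr('IFLA_IFNAME')
            if name and name.startswith('tap'):
                print(name,
                      link.get_attr('IFLA_ADDRESS'),    # MAC, fa:16:3e:35:cb:09 above
                      link.get_attr('IFLA_OPERSTATE'))  # 'UP' in the dump above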
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.756 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19827ac5-947e-46dd-9bec-d846bbd23be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.829 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a282e615-a29d-42d4-ac32-4630f9d03895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.830 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3635704-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3635704-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:05 compute-0 NetworkManager[49836]: <info>  [1772023685.8349] manager: (tapf3635704-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Feb 25 12:48:05 compute-0 kernel: tapf3635704-b0: entered promiscuous mode
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.841 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3635704-b0, col_values=(('external_ids', {'iface-id': 'ed1dee53-c556-4395-a0e4-d8aae80a256a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
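[annotation] The DelPortCommand/AddPortCommand/DbSetCommand transactions logged above are ovsdbapp's standard Open vSwitch API calls: remove the tap port from br-ex if it is still there, plug it into br-int, and stamp the Interface row with the Neutron port UUID, which is the iface-id that ovn-controller matches against logical ports (see the binding message that follows). A rough standalone equivalent, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock (the agent itself holds a long-lived IDL connection rather than opening one per call):

    # Sketch only: replays the three logged transactions with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ovs.del_port('tapf3635704-b0', bridge='br-ex', if_exists=True).execute(check_error=True)
    ovs.add_port('br-int', 'tapf3635704-b0', may_exist=True).execute(check_error=True)
    ovs.db_set('Interface', 'tapf3635704-b0',
               ('external_ids', {'iface-id': 'ed1dee53-c556-4395-a0e4-d8aae80a256a'})
               ).execute(check_error=True)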
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:05 compute-0 ovn_controller[147040]: 2026-02-25T12:48:05Z|01240|binding|INFO|Releasing lport ed1dee53-c556-4395-a0e4-d8aae80a256a from this chassis (sb_readonly=0)
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.854 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.856 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c1cd6f-209a-4521-8047-c1f0ae52fd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.857 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f3635704-b98f-49df-8dc5-9052a34d8970
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f3635704-b98f-49df-8dc5-9052a34d8970
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
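[annotation] The rendered haproxy config above is the whole metadata path for this network: a listener on 169.254.169.254:80 inside the ovnmeta- namespace, relaying to the neutron metadata UNIX socket at /var/lib/neutron/metadata_proxy, with an X-OVN-Network-ID header added per request so the metadata service can identify the originating network. From a guest on that network the round trip is an ordinary HTTP GET; the snippet below is illustrative, using the standard EC2-style endpoint path:

    # Illustrative only: what a guest VM on network f3635704-... would run to
    # reach the proxy configured above. The X-OVN-Network-ID header is
    # injected by haproxy, not by the client.
    import urllib.request

    with urllib.request.urlopen('http://169.254.169.254/latest/meta-data/',
                                timeout=30) as resp:
        print(resp.status, resp.read().decode())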
Feb 25 12:48:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.857 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'env', 'PROCESS_TAG=haproxy-f3635704-b98f-49df-8dc5-9052a34d8970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3635704-b98f-49df-8dc5-9052a34d8970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.860 244018 DEBUG nova.compute.manager [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.861 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.861 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.862 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.862 244018 DEBUG nova.compute.manager [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Processing event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:48:05 compute-0 nova_compute[244014]: 2026-02-25 12:48:05.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.073 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.075 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0726354, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.075 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Started (Lifecycle Event)
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.082 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.088 244018 INFO nova.virt.libvirt.driver [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance spawned successfully.
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.089 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.122 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.128 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.128 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.129 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.130 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.130 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.131 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.162 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.162 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0878763, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.163 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Paused (Lifecycle Event)
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.192 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.196 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0889897, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.196 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Resumed (Lifecycle Event)
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.207 244018 INFO nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 7.30 seconds to spawn the instance on the hypervisor.
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.208 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.220 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.223 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.249 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:48:06 compute-0 podman[351567]: 2026-02-25 12:48:06.29970965 +0000 UTC m=+0.069092212 container create 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.307 244018 INFO nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 9.02 seconds to build instance.
Feb 25 12:48:06 compute-0 nova_compute[244014]: 2026-02-25 12:48:06.334 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:06 compute-0 systemd[1]: Started libpod-conmon-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope.
Feb 25 12:48:06 compute-0 podman[351567]: 2026-02-25 12:48:06.2671242 +0000 UTC m=+0.036506852 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:48:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b72957ff8d950551abb65b325b6b882f88ff85666c7edcb84f856ea8d68c66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:06 compute-0 podman[351567]: 2026-02-25 12:48:06.382392695 +0000 UTC m=+0.151775267 container init 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:48:06 compute-0 podman[351567]: 2026-02-25 12:48:06.38788585 +0000 UTC m=+0.157268422 container start 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 12:48:06 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : New worker (351588) forked
Feb 25 12:48:06 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : Loading success.
Feb 25 12:48:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:48:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.351 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8:0:1:f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.353 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 12:48:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.355 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:48:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e871e335-d627-4ee7-8a3e-b8526220bf19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:07 compute-0 ceph-mon[76335]: pgmap v2026: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.129 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 WARNING nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received unexpected event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with vm_state active and task_state None.
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:48:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Feb 25 12:48:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:08 compute-0 nova_compute[244014]: 2026-02-25 12:48:08.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:09 compute-0 ceph-mon[76335]: pgmap v2027: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Feb 25 12:48:10 compute-0 nova_compute[244014]: 2026-02-25 12:48:10.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 12:48:11 compute-0 ceph-mon[76335]: pgmap v2028: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 12:48:12 compute-0 nova_compute[244014]: 2026-02-25 12:48:12.084 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:48:12 compute-0 nova_compute[244014]: 2026-02-25 12:48:12.085 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
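[annotation] The instance_info_cache payload above is plain JSON once sliced out of the log line, so the per-port addressing (fixed IP 10.100.0.8 with floating IP 192.168.122.195 on the first VIF, SLAAC v6 addresses on the second) can be extracted mechanically. A toy sketch against a hand-trimmed one-VIF excerpt of that structure (the literal below is a sample, not the full payload):

    import json

    # Hypothetical excerpt shaped like the network_info list logged above.
    nw_info_json = '''[{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c",
                        "network": {"subnets": [{"ips": [{"address": "10.100.0.8",
                          "floating_ips": [{"address": "192.168.122.195"}]}]}]}}]'''

    for vif in json.loads(nw_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      [f['address'] for f in ip.get('floating_ips', [])])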
Feb 25 12:48:12 compute-0 nova_compute[244014]: 2026-02-25 12:48:12.110 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:48:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 12:48:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:13 compute-0 nova_compute[244014]: 2026-02-25 12:48:13.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:13 compute-0 ceph-mon[76335]: pgmap v2029: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 12:48:13 compute-0 sudo[351597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:48:14 compute-0 sudo[351597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:14 compute-0 sudo[351597]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:14 compute-0 sudo[351622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:48:14 compute-0 sudo[351622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 911 KiB/s wr, 145 op/s
Feb 25 12:48:14 compute-0 ovn_controller[147040]: 2026-02-25T12:48:14Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 12:48:14 compute-0 ovn_controller[147040]: 2026-02-25T12:48:14Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 12:48:14 compute-0 sudo[351622]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:48:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:48:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:48:14 compute-0 sudo[351678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:48:14 compute-0 sudo[351678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:14 compute-0 sudo[351678]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:14 compute-0 sudo[351703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:48:14 compute-0 sudo[351703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.110508768 +0000 UTC m=+0.057730262 container create b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.080541851 +0000 UTC m=+0.027763425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:15 compute-0 systemd[1]: Started libpod-conmon-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope.
Feb 25 12:48:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.235465176 +0000 UTC m=+0.182686680 container init b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.242206136 +0000 UTC m=+0.189427620 container start b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.245362785 +0000 UTC m=+0.192584279 container attach b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:15 compute-0 cool_easley[351756]: 167 167
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.249168313 +0000 UTC m=+0.196389797 container died b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:48:15 compute-0 systemd[1]: libpod-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope: Deactivated successfully.
Feb 25 12:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7742335af27db4c06b3a3866e7935cc7e624cd68a84da4f9f6bad4939c606bdc-merged.mount: Deactivated successfully.
Feb 25 12:48:15 compute-0 nova_compute[244014]: 2026-02-25 12:48:15.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:15 compute-0 podman[351740]: 2026-02-25 12:48:15.297265821 +0000 UTC m=+0.244487305 container remove b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:15 compute-0 systemd[1]: libpod-conmon-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope: Deactivated successfully.
Feb 25 12:48:15 compute-0 podman[351780]: 2026-02-25 12:48:15.465751558 +0000 UTC m=+0.040643298 container create ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:48:15 compute-0 systemd[1]: Started libpod-conmon-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope.
Feb 25 12:48:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:15 compute-0 podman[351780]: 2026-02-25 12:48:15.44561533 +0000 UTC m=+0.020507100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:15 compute-0 podman[351780]: 2026-02-25 12:48:15.554268777 +0000 UTC m=+0.129160597 container init ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:48:15 compute-0 podman[351780]: 2026-02-25 12:48:15.563361064 +0000 UTC m=+0.138252814 container start ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:48:15 compute-0 podman[351780]: 2026-02-25 12:48:15.566500102 +0000 UTC m=+0.141391922 container attach ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:48:15 compute-0 ceph-mon[76335]: pgmap v2030: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 911 KiB/s wr, 145 op/s
Feb 25 12:48:16 compute-0 epic_yalow[351796]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:48:16 compute-0 epic_yalow[351796]: --> All data devices are unavailable
Feb 25 12:48:16 compute-0 systemd[1]: libpod-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope: Deactivated successfully.
Feb 25 12:48:16 compute-0 conmon[351796]: conmon ba8652c9fdc0eed3516f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope/container/memory.events
Feb 25 12:48:16 compute-0 podman[351780]: 2026-02-25 12:48:16.104843733 +0000 UTC m=+0.679735473 container died ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f-merged.mount: Deactivated successfully.
Feb 25 12:48:16 compute-0 podman[351780]: 2026-02-25 12:48:16.152929741 +0000 UTC m=+0.727821481 container remove ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:16 compute-0 systemd[1]: libpod-conmon-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope: Deactivated successfully.
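
The epic_yalow run above is cephadm evaluating its OSD spec: ceph-volume saw the three LVM data devices and reported them all unavailable, which is expected once every device is already consumed by an existing OSD (the lvm listing further down confirms OSDs 0-2 occupy them). If this were unexpected, per-device rejection reasons can be pulled from ceph-volume's inventory report; a minimal sketch, assuming `cephadm ceph-volume -- inventory --format json` works on this host as the other cephadm calls in this log suggest, and assuming the inventory JSON carries path/available/rejected_reasons fields:

    # sketch: list devices ceph-volume considers unavailable and why
    # (cephadm invocation and JSON field names are assumptions, not taken
    # verbatim from this log)
    import json, subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        if not dev.get("available"):
            print(dev.get("path"), "rejected:",
                  "; ".join(dev.get("rejected_reasons", [])))
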
Feb 25 12:48:16 compute-0 sudo[351703]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:16 compute-0 sudo[351830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:48:16 compute-0 sudo[351830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:16 compute-0 sudo[351830]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:16 compute-0 sudo[351855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:48:16 compute-0 sudo[351855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.647317201 +0000 UTC m=+0.035151033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.77475961 +0000 UTC m=+0.162593432 container create 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:48:16 compute-0 systemd[1]: Started libpod-conmon-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope.
Feb 25 12:48:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.962017727 +0000 UTC m=+0.349851609 container init 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.968359466 +0000 UTC m=+0.356193298 container start 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:48:16 compute-0 angry_hugle[351911]: 167 167
Feb 25 12:48:16 compute-0 systemd[1]: libpod-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope: Deactivated successfully.
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.993003082 +0000 UTC m=+0.380836884 container attach 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:16 compute-0 podman[351893]: 2026-02-25 12:48:16.994107693 +0000 UTC m=+0.381941505 container died 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-641b2d2e6066891ee27df8a6842fb96822d8082fc044cf68036ca46d2c71d628-merged.mount: Deactivated successfully.
Feb 25 12:48:17 compute-0 podman[351893]: 2026-02-25 12:48:17.260278259 +0000 UTC m=+0.648112071 container remove 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:48:17 compute-0 systemd[1]: libpod-conmon-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope: Deactivated successfully.
Feb 25 12:48:17 compute-0 podman[351937]: 2026-02-25 12:48:17.474822847 +0000 UTC m=+0.058671047 container create 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:48:17 compute-0 systemd[1]: Started libpod-conmon-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope.
Feb 25 12:48:17 compute-0 podman[351937]: 2026-02-25 12:48:17.446776765 +0000 UTC m=+0.030625005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:17 compute-0 podman[351937]: 2026-02-25 12:48:17.576805767 +0000 UTC m=+0.160653987 container init 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:48:17 compute-0 podman[351937]: 2026-02-25 12:48:17.582205719 +0000 UTC m=+0.166053919 container start 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:48:17 compute-0 podman[351937]: 2026-02-25 12:48:17.587829648 +0000 UTC m=+0.171677888 container attach 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.668 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.693 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.694 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.695 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 271f6569-a8f6-43a3-ac98-511eff77c426 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.696 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.696 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.698 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.738 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.741 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:17 compute-0 nova_compute[244014]: 2026-02-25 12:48:17.745 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
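
The nova-compute burst above is ComputeManager._sync_power_states fanning out over four instances: each instance UUID gets its own oslo.concurrency lock, held for roughly 40 ms while query_driver_power_state_and_sync runs, so repeated syncs of the same instance serialize while different instances proceed independently. A minimal sketch of that per-key locking pattern, assuming oslo.concurrency is installed (the worker body is a placeholder, not Nova's actual code):

    # sketch: per-instance-UUID serialization, as in the "Acquiring lock" /
    # "Lock ... acquired" / "Lock ... released" lines above
    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(uuid):
        pass  # placeholder for the real driver power-state comparison

    def sync_one(uuid):
        # in-process lock keyed by the UUID; external=True would use a
        # file lock shared across processes instead
        with lockutils.lock(uuid):
            query_driver_power_state_and_sync(uuid)

    sync_one("848fd033-0ebb-460a-a8a0-56583fa5f481")
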
Feb 25 12:48:17 compute-0 ceph-mon[76335]: pgmap v2031: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Feb 25 12:48:17 compute-0 quirky_allen[351954]: {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     "0": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "devices": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "/dev/loop3"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             ],
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_name": "ceph_lv0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_size": "21470642176",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "name": "ceph_lv0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "tags": {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_name": "ceph",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.crush_device_class": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.encrypted": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.objectstore": "bluestore",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_id": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.vdo": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.with_tpm": "0"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             },
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "vg_name": "ceph_vg0"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         }
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     ],
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     "1": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "devices": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "/dev/loop4"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             ],
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_name": "ceph_lv1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_size": "21470642176",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "name": "ceph_lv1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "tags": {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_name": "ceph",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.crush_device_class": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.encrypted": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.objectstore": "bluestore",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_id": "1",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.vdo": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.with_tpm": "0"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             },
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "vg_name": "ceph_vg1"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         }
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     ],
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     "2": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "devices": [
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "/dev/loop5"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             ],
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_name": "ceph_lv2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_size": "21470642176",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "name": "ceph_lv2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "tags": {
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.cluster_name": "ceph",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.crush_device_class": "",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.encrypted": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.objectstore": "bluestore",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osd_id": "2",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.vdo": "0",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:                 "ceph.with_tpm": "0"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             },
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "type": "block",
Feb 25 12:48:17 compute-0 quirky_allen[351954]:             "vg_name": "ceph_vg2"
Feb 25 12:48:17 compute-0 quirky_allen[351954]:         }
Feb 25 12:48:17 compute-0 quirky_allen[351954]:     ]
Feb 25 12:48:17 compute-0 quirky_allen[351954]: }
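
The quirky_allen container is the `ceph-volume ... lvm list --format json` call from the sudo COMMAND logged at 12:48:16: the JSON above maps OSD ids 0, 1 and 2 to bluestore LVs ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2 on /dev/loop3-5, all tagged with cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762. A short sketch of reducing that output to one line per OSD (lvm_list.json is a hypothetical capture of the JSON printed above):

    # sketch: summarize `ceph-volume lvm list --format json` output
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)          # {"0": [ {...} ], "1": [...], ...}

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print("osd.%s" % osd_id, lv["lv_path"],
                  ",".join(lv["devices"]),
                  tags["ceph.objectstore"], tags["ceph.osd_fsid"])
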
Feb 25 12:48:17 compute-0 systemd[1]: libpod-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope: Deactivated successfully.
Feb 25 12:48:18 compute-0 podman[351963]: 2026-02-25 12:48:18.018936551 +0000 UTC m=+0.025645435 container died 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb 25 12:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721-merged.mount: Deactivated successfully.
Feb 25 12:48:18 compute-0 podman[351963]: 2026-02-25 12:48:18.055552275 +0000 UTC m=+0.062261129 container remove 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:48:18 compute-0 systemd[1]: libpod-conmon-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope: Deactivated successfully.
Feb 25 12:48:18 compute-0 sudo[351855]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:18 compute-0 sudo[351978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:48:18 compute-0 sudo[351978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:18 compute-0 sudo[351978]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:18 compute-0 sudo[352003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:48:18 compute-0 sudo[352003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:18 compute-0 ovn_controller[147040]: 2026-02-25T12:48:18Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:a7:94 10.100.0.30
Feb 25 12:48:18 compute-0 ovn_controller[147040]: 2026-02-25T12:48:18Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:a7:94 10.100.0.30
Feb 25 12:48:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 217 op/s
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.555651897 +0000 UTC m=+0.046377291 container create 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:48:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:18 compute-0 nova_compute[244014]: 2026-02-25 12:48:18.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:18 compute-0 systemd[1]: Started libpod-conmon-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope.
Feb 25 12:48:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.53664927 +0000 UTC m=+0.027374714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.639554056 +0000 UTC m=+0.130279450 container init 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.649012313 +0000 UTC m=+0.139737717 container start 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:48:18 compute-0 systemd[1]: libpod-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope: Deactivated successfully.
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.652303346 +0000 UTC m=+0.143028740 container attach 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 25 12:48:18 compute-0 nostalgic_beaver[352056]: 167 167
Feb 25 12:48:18 compute-0 conmon[352056]: conmon 18a0c883a8d5c626f74b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope/container/memory.events
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.657850872 +0000 UTC m=+0.148576306 container died 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 12:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-44edb3642857b5e51a8e9bb1a982c975a0e094d742b5a7736e1a84e13de23853-merged.mount: Deactivated successfully.
Feb 25 12:48:18 compute-0 podman[352040]: 2026-02-25 12:48:18.699341564 +0000 UTC m=+0.190066998 container remove 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:48:18 compute-0 systemd[1]: libpod-conmon-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope: Deactivated successfully.
Feb 25 12:48:18 compute-0 podman[352080]: 2026-02-25 12:48:18.894004861 +0000 UTC m=+0.044138398 container create 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:48:18 compute-0 systemd[1]: Started libpod-conmon-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope.
Feb 25 12:48:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:48:18 compute-0 podman[352080]: 2026-02-25 12:48:18.875444986 +0000 UTC m=+0.025578583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:48:18 compute-0 podman[352080]: 2026-02-25 12:48:18.973340481 +0000 UTC m=+0.123474038 container init 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:48:18 compute-0 podman[352080]: 2026-02-25 12:48:18.978042994 +0000 UTC m=+0.128176531 container start 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:48:18 compute-0 podman[352080]: 2026-02-25 12:48:18.982106948 +0000 UTC m=+0.132240485 container attach 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 12:48:19 compute-0 lvm[352173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:48:19 compute-0 lvm[352176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:48:19 compute-0 lvm[352176]: VG ceph_vg1 finished
Feb 25 12:48:19 compute-0 lvm[352173]: VG ceph_vg0 finished
Feb 25 12:48:19 compute-0 lvm[352178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:48:19 compute-0 lvm[352178]: VG ceph_vg2 finished
Feb 25 12:48:19 compute-0 lvm[352179]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:48:19 compute-0 lvm[352179]: VG ceph_vg1 finished
Feb 25 12:48:19 compute-0 lvm[352181]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:48:19 compute-0 lvm[352181]: VG ceph_vg1 finished
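
The lvm[...] lines are pvscan autoactivation events confirming that each ceph VG (ceph_vg0-2 on /dev/loop3-5) is complete; ceph_vg1 is simply reported by several concurrent pvscan workers. The same ceph.* tags that ceph-volume printed live on the LVs themselves and can be read back without ceph-volume; a sketch, assuming an lvm2 build that supports JSON report output via --reportformat json:

    # sketch: read ceph.* LV tags straight from LVM
    import json, subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        check=True, capture_output=True, text=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print("%s/%s" % (lv["vg_name"], lv["lv_name"]), lv["lv_tags"])
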
Feb 25 12:48:19 compute-0 nervous_ramanujan[352097]: {}
Feb 25 12:48:19 compute-0 systemd[1]: libpod-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope: Deactivated successfully.
Feb 25 12:48:19 compute-0 podman[352080]: 2026-02-25 12:48:19.678951884 +0000 UTC m=+0.829085421 container died 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81-merged.mount: Deactivated successfully.
Feb 25 12:48:19 compute-0 podman[352080]: 2026-02-25 12:48:19.714989402 +0000 UTC m=+0.865122979 container remove 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:48:19 compute-0 systemd[1]: libpod-conmon-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope: Deactivated successfully.
Feb 25 12:48:19 compute-0 sudo[352003]: pam_unix(sudo:session): session closed for user root
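
That nervous_ramanujan run was the matching `ceph-volume ... raw list --format json` call logged at 12:48:18, and it printed {}: all three OSDs on this host are LVM-managed, so the raw (whole-device) listing is empty. A tiny sketch of reading the two captures side by side (file names are hypothetical; the raw listing's per-entry schema is not shown in this log, so only its emptiness is used):

    # sketch: contrast the lvm and raw listings captured above
    import json

    lvm = json.load(open("lvm_list.json"))   # {"0": [...], "1": [...], "2": [...]}
    raw = json.load(open("raw_list.json"))   # {} on this host

    print("lvm-managed OSDs:", ", ".join(sorted(lvm, key=int)) or "none")
    print("raw entries:", len(raw))          # 0 -> no raw-mode OSDs here
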
Feb 25 12:48:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:48:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:48:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:19 compute-0 ceph-mon[76335]: pgmap v2032: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 217 op/s
Feb 25 12:48:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:48:19 compute-0 sudo[352195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:48:19 compute-0 sudo[352195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:48:19 compute-0 sudo[352195]: pam_unix(sudo:session): session closed for user root
Feb 25 12:48:20 compute-0 nova_compute[244014]: 2026-02-25 12:48:20.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.2 MiB/s wr, 90 op/s
Feb 25 12:48:21 compute-0 ceph-mon[76335]: pgmap v2033: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.2 MiB/s wr, 90 op/s
Feb 25 12:48:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 951 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Feb 25 12:48:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:23 compute-0 nova_compute[244014]: 2026-02-25 12:48:23.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:23 compute-0 ceph-mon[76335]: pgmap v2034: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 951 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Feb 25 12:48:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.359 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.360 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.360 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.361 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.361 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.363 244018 INFO nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Terminating instance
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.365 244018 DEBUG nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:48:25 compute-0 kernel: tapcbf23880-21 (unregistering): left promiscuous mode
Feb 25 12:48:25 compute-0 NetworkManager[49836]: <info>  [1772023705.4145] device (tapcbf23880-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:25 compute-0 ovn_controller[147040]: 2026-02-25T12:48:25Z|01241|binding|INFO|Releasing lport cbf23880-21db-4752-be67-0abdb0153c2b from this chassis (sb_readonly=0)
Feb 25 12:48:25 compute-0 ovn_controller[147040]: 2026-02-25T12:48:25Z|01242|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b down in Southbound
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 ovn_controller[147040]: 2026-02-25T12:48:25Z|01243|binding|INFO|Removing iface tapcbf23880-21 ovn-installed in OVS
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.430 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.447 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:a7:94 10.100.0.30'], port_security=['fa:16:3e:72:a7:94 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '9c11f17d-5dba-4ece-8340-1c4ff0939294', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3635704-b98f-49df-8dc5-9052a34d8970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98aaa3fd-2372-44ea-9de4-f4c0cf776cf4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08599aa3-75c1-4d0f-bdce-36e8497cd876, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cbf23880-21db-4752-be67-0abdb0153c2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.449 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cbf23880-21db-4752-be67-0abdb0153c2b in datapath f3635704-b98f-49df-8dc5-9052a34d8970 unbound from our chassis
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.450 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3635704-b98f-49df-8dc5-9052a34d8970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacd2751-eaa0-4fba-b125-a275dba1b8ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.452 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 namespace which is not needed anymore
Feb 25 12:48:25 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Feb 25 12:48:25 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 13.511s CPU time.
Feb 25 12:48:25 compute-0 systemd-machined[210048]: Machine qemu-152-instance-00000078 terminated.
Feb 25 12:48:25 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : haproxy version is 2.8.14-c23fe91
Feb 25 12:48:25 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : path to executable is /usr/sbin/haproxy
Feb 25 12:48:25 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [ALERT]    (351586) : Current worker (351588) exited with code 143 (Terminated)
Feb 25 12:48:25 compute-0 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [WARNING]  (351586) : All workers exited. Exiting... (0)
Feb 25 12:48:25 compute-0 systemd[1]: libpod-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope: Deactivated successfully.
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.603 244018 INFO nova.virt.libvirt.driver [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance destroyed successfully.
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.603 244018 DEBUG nova.objects.instance [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:25 compute-0 podman[352243]: 2026-02-25 12:48:25.605327045 +0000 UTC m=+0.050507467 container died 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.619 244018 DEBUG nova.virt.libvirt.vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.620 244018 DEBUG nova.network.os_vif_util [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.621 244018 DEBUG nova.network.os_vif_util [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.621 244018 DEBUG os_vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf23880-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.630 244018 INFO os_vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21')
Feb 25 12:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f-userdata-shm.mount: Deactivated successfully.
Feb 25 12:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b72957ff8d950551abb65b325b6b882f88ff85666c7edcb84f856ea8d68c66-merged.mount: Deactivated successfully.
Feb 25 12:48:25 compute-0 podman[352243]: 2026-02-25 12:48:25.647085304 +0000 UTC m=+0.092265726 container cleanup 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:48:25 compute-0 systemd[1]: libpod-conmon-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope: Deactivated successfully.
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.701 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.703 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.703 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:25 compute-0 podman[352298]: 2026-02-25 12:48:25.721446754 +0000 UTC m=+0.051842155 container remove 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.727 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e361888-d00f-4337-9dd7-7ea8015056dd]: (4, ('Wed Feb 25 12:48:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 (7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f)\n7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f\nWed Feb 25 12:48:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 (7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f)\n7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.729 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a61003f7-63e6-4c42-90b4-dbb9e102c302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.731 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3635704-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 kernel: tapf3635704-b0: left promiscuous mode
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc0f11f-5b8b-4418-ad3f-2cda0a2e1e71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.763 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[699fb1e5-4d28-446e-95d0-78c991ed53f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.765 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cae5b83c-8e83-40be-8040-ce2ec3e10e05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3981ec8e-beaa-4cb2-bce6-31561a0ad479]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565499, 'reachable_time': 18596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352317, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.780 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:48:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.780 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b15ec24e-74f4-403a-a7f8-9e9a3722b41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:25 compute-0 systemd[1]: run-netns-ovnmeta\x2df3635704\x2db98f\x2d49df\x2d8dc5\x2d9052a34d8970.mount: Deactivated successfully.
Feb 25 12:48:25 compute-0 ceph-mon[76335]: pgmap v2035: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.877 244018 INFO nova.virt.libvirt.driver [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deleting instance files /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294_del
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.878 244018 INFO nova.virt.libvirt.driver [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deletion of /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294_del complete
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.953 244018 INFO nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG oslo.service.loopingcall [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:48:25 compute-0 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG nova.network.neutron [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:48:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.924 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:48:26 compute-0 nova_compute[244014]: 2026-02-25 12:48:26.924 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.223 244018 DEBUG nova.network.neutron [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.251 244018 INFO nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 1.30 seconds to deallocate network for instance.
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.292 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.292 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.293 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.293 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.294 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.295 244018 INFO nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Terminating instance
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.296 244018 DEBUG nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.304 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.304 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:27 compute-0 kernel: tap3de24fa3-5d (unregistering): left promiscuous mode
Feb 25 12:48:27 compute-0 NetworkManager[49836]: <info>  [1772023707.3418] device (tap3de24fa3-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01244|binding|INFO|Releasing lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c from this chassis (sb_readonly=0)
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01245|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c down in Southbound
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01246|binding|INFO|Removing iface tap3de24fa3-5d ovn-installed in OVS
Feb 25 12:48:27 compute-0 kernel: tap0dbf7aaf-2d (unregistering): left promiscuous mode
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 NetworkManager[49836]: <info>  [1772023707.3597] device (tap0dbf7aaf-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.364 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01247|binding|INFO|Releasing lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 from this chassis (sb_readonly=0)
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01248|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 down in Southbound
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01249|binding|INFO|Removing iface tap0dbf7aaf-2d ovn-installed in OVS
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.380 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], port_security=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe46:c29f/64 2001:db8::f816:3eff:fe46:c29f/64', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.381 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4db1f08-7af7-454e-ac3b-5807d7202e36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 25 12:48:27 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 13.402s CPU time.
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.413 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd54ba-0592-4f88-9478-878b5ef58089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 systemd-machined[210048]: Machine qemu-151-instance-00000077 terminated.
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.416 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd4bdbe-654b-4931-acb8-c7f3810e9c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[12e10e40-24ef-45c9-a890-6aae73ff1a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.454 244018 DEBUG oslo_concurrency.processutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21b04bb0-cc15-4e4f-8cad-073e60fdedf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352352, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.483 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c870ad70-921b-4dcd-8417-e1f5b4d75836]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352354, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352354, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.485 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.497 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.498 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.499 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 12:48:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047658061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:27 compute-0 kernel: tap3de24fa3-5d: entered promiscuous mode
Feb 25 12:48:27 compute-0 kernel: tap3de24fa3-5d (unregistering): left promiscuous mode
Feb 25 12:48:27 compute-0 NetworkManager[49836]: <info>  [1772023707.5170] manager: (tap3de24fa3-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01250|binding|INFO|Claiming lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for this chassis.
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01251|binding|INFO|3de24fa3-5df5-470c-98d2-1a60e0ebfc0c: Claiming fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f53eb1a-9b26-4ac5-b6a6-57dcc2d2b4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.527 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:27 compute-0 NetworkManager[49836]: <info>  [1772023707.5283] manager: (tap0dbf7aaf-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Feb 25 12:48:27 compute-0 ovn_controller[147040]: 2026-02-25T12:48:27Z|01252|binding|INFO|Releasing lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c from this chassis (sb_readonly=0)
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.545 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.548 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.553 244018 INFO nova.virt.libvirt.driver [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance destroyed successfully.
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.554 244018 DEBUG nova.objects.instance [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[07458673-4107-43db-934b-8935478b1894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.558 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f31fce2b-2f64-460f-9e3b-f31649e0076b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.569 244018 DEBUG nova.virt.libvirt.vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.570 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.571 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.571 244018 DEBUG os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3de24fa3-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.582 244018 INFO os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d')
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.583 244018 DEBUG nova.virt.libvirt.vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.584 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.584 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.585 244018 DEBUG os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.587 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dbf7aaf-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.592 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5bedc655-2f09-426d-bbc1-ef7c1383d8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.592 244018 INFO os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d')
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.606 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[283273ae-8b78-4573-bc2d-5c233dff9379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352399, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.618 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[755ab6dd-5f6d-42f2-aa95-950fe1ec3eb2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5c482202-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561254, 'tstamp': 561254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352416, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.619 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.622 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.622 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.624 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.625 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c61403-a7de-4975-a662-469e4dd8d311]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb2658b-36b2-4b2b-96ff-bd077c387e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.664 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b79788d6-0ed7-4f50-a01f-6cabcb9ad263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.670 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.670 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.675 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.675 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.681 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.681 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.692 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0ba077-5d0d-4dda-a234-67d8f88052c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2412249-1fc1-4bcf-a45d-85c21a571b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352430, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.734 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf553b1-e9df-4ca9-913a-34e0378b08af]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352431, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352431, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.736 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[545e318c-11cc-475b-b27a-3b605ccd3162]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.784 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[79c5fcb0-bb78-4ba9-9630-ea0dd53627e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.788 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee9cdf0-db9b-43a3-aef8-57be6e0accd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.813 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[85c4b549-b831-4266-bb9c-a4f23de1098a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ceph-mon[76335]: pgmap v2036: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 12:48:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2047658061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49608d58-4d5c-4434-844e-e53e48f97127]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352438, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22cd27b3-ca78-4840-b02d-10e2681e3bf5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352439, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352439, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.844 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.846 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.846 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.847 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.847 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.855 244018 INFO nova.virt.libvirt.driver [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deleting instance files /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426_del
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.856 244018 INFO nova.virt.libvirt.driver [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deletion of /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426_del complete
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.894 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.895 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3215MB free_disk=59.80528958886862GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.945 244018 INFO nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.945 244018 DEBUG oslo.service.loopingcall [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.946 244018 DEBUG nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.946 244018 DEBUG nova.network.neutron [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 WARNING nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received unexpected event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with vm_state deleted and task_state None.
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:27 compute-0 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:48:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155424539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.067 244018 DEBUG oslo_concurrency.processutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.072 244018 DEBUG nova.compute.provider_tree [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.085 244018 DEBUG nova.scheduler.client.report [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.105 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.107 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.129 244018 INFO nova.scheduler.client.report [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 9c11f17d-5dba-4ece-8340-1c4ff0939294
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.185 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 95730650-36ac-4eee-8b22-9ea3f01d82d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 271f6569-a8f6-43a3-ac98-511eff77c426 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.187 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.187 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.197 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.280 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 693 KiB/s rd, 4.3 MiB/s wr, 172 op/s
Feb 25 12:48:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.570466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708570520, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1128, "num_deletes": 250, "total_data_size": 1619031, "memory_usage": 1645000, "flush_reason": "Manual Compaction"}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708577734, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 979369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42726, "largest_seqno": 43853, "table_properties": {"data_size": 975144, "index_size": 1749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11341, "raw_average_key_size": 20, "raw_value_size": 965968, "raw_average_value_size": 1765, "num_data_blocks": 79, "num_entries": 547, "num_filter_entries": 547, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023603, "oldest_key_time": 1772023603, "file_creation_time": 1772023708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7317 microseconds, and 3830 cpu microseconds.
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.577782) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 979369 bytes OK
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.577805) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579346) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579367) EVENT_LOG_v1 {"time_micros": 1772023708579361, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579385) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1613813, prev total WAL file size 1613813, number of live WAL files 2.
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.580480) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(956KB)], [98(9811KB)]
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708580566, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11026817, "oldest_snapshot_seqno": -1}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6505 keys, 8294781 bytes, temperature: kUnknown
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708633352, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8294781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8252458, "index_size": 24933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168761, "raw_average_key_size": 25, "raw_value_size": 8137247, "raw_average_value_size": 1250, "num_data_blocks": 975, "num_entries": 6505, "num_filter_entries": 6505, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.633600) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8294781 bytes
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.635119) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.6 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.6 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(19.7) write-amplify(8.5) OK, records in: 6971, records dropped: 466 output_compression: NoCompression
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.635140) EVENT_LOG_v1 {"time_micros": 1772023708635130, "job": 58, "event": "compaction_finished", "compaction_time_micros": 52869, "compaction_time_cpu_micros": 29782, "output_level": 6, "num_output_files": 1, "total_output_size": 8294781, "num_input_records": 6971, "num_output_records": 6505, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708635456, "job": 58, "event": "table_file_deletion", "file_number": 100}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708636638, "job": 58, "event": "table_file_deletion", "file_number": 98}
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.580343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:48:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1155424539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337761892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.854 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.864 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.884 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:48:28 compute-0 nova_compute[244014]: 2026-02-25 12:48:28.929 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:29 compute-0 podman[352464]: 2026-02-25 12:48:29.723527439 +0000 UTC m=+0.059337857 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent)
Feb 25 12:48:29 compute-0 podman[352465]: 2026-02-25 12:48:29.759918466 +0000 UTC m=+0.096438864 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:48:29 compute-0 ceph-mon[76335]: pgmap v2037: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 693 KiB/s rd, 4.3 MiB/s wr, 172 op/s
Feb 25 12:48:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3337761892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:29 compute-0 nova_compute[244014]: 2026-02-25 12:48:29.926 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:29 compute-0 nova_compute[244014]: 2026-02-25 12:48:29.927 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:48:29 compute-0 nova_compute[244014]: 2026-02-25 12:48:29.927 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:48:29 compute-0 nova_compute[244014]: 2026-02-25 12:48:29.957 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.248 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.249 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.249 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.250 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.250 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.251 244018 WARNING nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with vm_state active and task_state deleting.
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.251 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.253 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.253 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.254 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.254 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.255 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.255 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 WARNING nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with vm_state active and task_state deleting.
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-deleted-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.257 244018 INFO nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Neutron deleted interface 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c; detaching it from the instance and deleting it from the info cache
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.257 244018 DEBUG nova.network.neutron [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.287 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Detach interface failed, port_id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c, reason: Instance 271f6569-a8f6-43a3-ac98-511eff77c426 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.301 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 1.1 MiB/s wr, 94 op/s
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.646 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.647 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.680 244018 DEBUG nova.network.neutron [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.683 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-deleted-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.700 244018 INFO nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 2.75 seconds to deallocate network for instance.
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.773 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.774 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:30 compute-0 nova_compute[244014]: 2026-02-25 12:48:30.895 244018 DEBUG oslo_concurrency.processutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:30 compute-0 ovn_controller[147040]: 2026-02-25T12:48:30Z|01253|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 12:48:30 compute-0 ovn_controller[147040]: 2026-02-25T12:48:30Z|01254|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 12:48:30 compute-0 ovn_controller[147040]: 2026-02-25T12:48:30Z|01255|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:48:31
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups']
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
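[editor's note] "Mode upmap, max misplaced 0.050000" and "prepared 0/10 upmap changes" mean the balancer ran in upmap mode, capped at 10 optimizations per round and at 5% of PGs misplaced at any one time; 0/10 says the PG distribution was already even, so no plan was applied. A hedged way to inspect this state ("ceph balancer status" is a standard command; the target_max_misplaced_ratio option name is assumed from upstream Ceph defaults, not from this log):

import subprocess

def ceph(*args):
    # Thin wrapper; assumes the node has a usable client keyring.
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

# Active balancer mode and last optimization result:
print(ceph("balancer", "status"))
# The 0.05 "max misplaced" cap comes from this mgr option (assumed name):
print(ceph("config", "get", "mgr", "target_max_misplaced_ratio").strip())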
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638342429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.479 244018 DEBUG oslo_concurrency.processutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
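[editor's note] This 0.584 s "ceph df" round trip is how Nova's RBD image backend sizes the DISK_GB inventory reported a few lines below. A standalone reproduction with the exact flags from the log line; the "stats" / "total_bytes" / "total_avail_bytes" keys are standard ceph df JSON fields:

import json
import subprocess

# Same command Nova ran above; requires the openstack client keyring
# referenced by /etc/ceph/ceph.conf on this node.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)["stats"]
print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1),
      "of", round(stats["total_bytes"] / 2**30, 1))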
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.486 244018 DEBUG nova.compute.provider_tree [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.504 244018 DEBUG nova.scheduler.client.report [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
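[editor's note] Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, so this 8-vCPU / 7679 MiB / 59 GiB host advertises 32 VCPU, 7167 MB of RAM, and about 52.2 GB of disk. Worked out directly from the logged dict:

# capacity = (total - reserved) * allocation_ratio, per resource class
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2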
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.529 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.585 244018 INFO nova.scheduler.client.report [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 271f6569-a8f6-43a3-ac98-511eff77c426
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:48:31 compute-0 nova_compute[244014]: 2026-02-25 12:48:31.687 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:31 compute-0 ceph-mon[76335]: pgmap v2038: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 1.1 MiB/s wr, 94 op/s
Feb 25 12:48:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2638342429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.382 244018 DEBUG nova.compute.manager [req-d2182a6b-24e2-40d6-8115-b53c410ae474 req-93be17be-6be2-45ed-8093-c442898b135d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-deleted-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.1 MiB/s wr, 109 op/s
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.615 244018 DEBUG nova.compute.manager [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.615 244018 DEBUG nova.compute.manager [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.702 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.703 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.704 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.704 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.705 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.707 244018 INFO nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Terminating instance
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.709 244018 DEBUG nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:48:32 compute-0 kernel: tap0660ccb7-a9 (unregistering): left promiscuous mode
Feb 25 12:48:32 compute-0 NetworkManager[49836]: <info>  [1772023712.7564] device (tap0660ccb7-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:32 compute-0 ovn_controller[147040]: 2026-02-25T12:48:32Z|01256|binding|INFO|Releasing lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a from this chassis (sb_readonly=0)
Feb 25 12:48:32 compute-0 ovn_controller[147040]: 2026-02-25T12:48:32Z|01257|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a down in Southbound
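[editor's note] The "Releasing lport / Setting lport down" pair is ovn-controller clearing the chassis column and flipping up to false on the port's Southbound Port_Binding row; the metadata agent matches exactly that row update a few lines below. A hedged way to inspect the row ("find Port_Binding" is a standard ovn-sbctl database command; run it wherever the SB DB is reachable):

import json
import subprocess

port = "0660ccb7-a986-45dd-8aa8-e10ddd71144a"   # logical port from the log
out = subprocess.run(
    ["ovn-sbctl", "--format=json", "find", "Port_Binding",
     f"logical_port={port}"],
    capture_output=True, text=True, check=True,
).stdout
table = json.loads(out)
# After the release, the row's "chassis" column is empty and "up" is false.
print(table["headings"])
print(table["data"])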
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 ovn_controller[147040]: 2026-02-25T12:48:32Z|01258|binding|INFO|Removing iface tap0660ccb7-a9 ovn-installed in OVS
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.778 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b0:7a 10.100.0.12'], port_security=['fa:16:3e:cb:b0:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '95730650-36ac-4eee-8b22-9ea3f01d82d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b261df4d-3b33-4344-9c3c-a73feb8773db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d838fa7-2ae5-457e-b70a-ef0e132d7e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af3ba9a4-232f-4585-95fc-f83215abb671, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0660ccb7-a986-45dd-8aa8-e10ddd71144a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.780 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0660ccb7-a986-45dd-8aa8-e10ddd71144a in datapath b261df4d-3b33-4344-9c3c-a73feb8773db unbound from our chassis
Feb 25 12:48:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.781 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b261df4d-3b33-4344-9c3c-a73feb8773db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:48:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a641000-62d9-49f2-8a16-796fadfb2c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db namespace which is not needed anymore
Feb 25 12:48:32 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Feb 25 12:48:32 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 14.218s CPU time.
Feb 25 12:48:32 compute-0 systemd-machined[210048]: Machine qemu-150-instance-00000076 terminated.
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.946 244018 INFO nova.virt.libvirt.driver [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance destroyed successfully.
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.946 244018 DEBUG nova.objects.instance [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : haproxy version is 2.8.14-c23fe91
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : path to executable is /usr/sbin/haproxy
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : Exiting Master process...
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : Exiting Master process...
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [ALERT]    (350617) : Current worker (350633) exited with code 143 (Terminated)
Feb 25 12:48:32 compute-0 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : All workers exited. Exiting... (0)
Feb 25 12:48:32 compute-0 systemd[1]: libpod-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope: Deactivated successfully.
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.963 244018 DEBUG nova.virt.libvirt.vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:29Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:32 compute-0 podman[352556]: 2026-02-25 12:48:32.966990793 +0000 UTC m=+0.053833391 container died c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.964 244018 DEBUG nova.network.os_vif_util [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.967 244018 DEBUG nova.network.os_vif_util [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.968 244018 DEBUG os_vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.971 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0660ccb7-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:32 compute-0 nova_compute[244014]: 2026-02-25 12:48:32.977 244018 INFO os_vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9')
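[editor's note] The DelPortCommand transaction at 12:48:32.971 is ovsdbapp's programmatic form of the classic CLI below; if_exists=True corresponds to --if-exists, which makes the delete a no-op when the port is already gone:

import subprocess

# CLI equivalent of the DelPortCommand Nova committed above (idempotent
# thanks to --if-exists); requires access to the local ovsdb-server.
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap0660ccb7-a9"],
    check=True,
)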
Feb 25 12:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc-userdata-shm.mount: Deactivated successfully.
Feb 25 12:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5086818508e5fa7e94bd5cba9a5524707dcb64bf7ba443e8eab59b7dad15f1a4-merged.mount: Deactivated successfully.
Feb 25 12:48:33 compute-0 podman[352556]: 2026-02-25 12:48:33.00830168 +0000 UTC m=+0.095144278 container cleanup c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:48:33 compute-0 systemd[1]: libpod-conmon-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope: Deactivated successfully.
Feb 25 12:48:33 compute-0 podman[352614]: 2026-02-25 12:48:33.100170824 +0000 UTC m=+0.058607606 container remove c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.105 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[333f4730-8b1b-429e-960b-f5b982895dbd]: (4, ('Wed Feb 25 12:48:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db (c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc)\nc4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc\nWed Feb 25 12:48:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db (c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc)\nc4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[58ce5851-70b1-49b4-908d-0c2cb9446105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.109 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb261df4d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:33 compute-0 kernel: tapb261df4d-30: left promiscuous mode
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.124 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e35b0d54-517b-4653-b359-9cd8c5deda17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15696970-b5a7-4309-9088-2be343ac09f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.144 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d19c27-5f8c-429d-a18a-9c4b069eae05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[571f6d10-302f-416b-be98-96ce44ef4488]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561795, 'reachable_time': 44386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352630, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.164 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:48:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.164 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[35b005d0-9da0-4acb-b002-5b27e568336d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:33 compute-0 systemd[1]: run-netns-ovnmeta\x2db261df4d\x2d3b33\x2d4344\x2d9c3c\x2da73feb8773db.mount: Deactivated successfully.
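[editor's note] remove_netns (logged at 12:48:33.164) runs inside Neutron's privsep daemon; functionally it boils down to a pyroute2 namespace removal like the sketch below. pyroute2.netns.remove is a real pyroute2 call, shown only to illustrate the mechanism; deleting namespaces by hand on a live node is not advised:

from pyroute2 import netns

# Namespace name taken from the log; removal unlinks /run/netns/<name>
# and requires root.
NS = "ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db"
if NS in netns.listnetns():
    netns.remove(NS)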
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.260 244018 INFO nova.virt.libvirt.driver [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deleting instance files /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1_del
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.261 244018 INFO nova.virt.libvirt.driver [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deletion of /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1_del complete
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.326 244018 INFO nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG oslo.service.loopingcall [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:48:33 compute-0 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG nova.network.neutron [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:48:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:33 compute-0 ceph-mon[76335]: pgmap v2039: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.1 MiB/s wr, 109 op/s
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.025 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.026 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.056 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.152 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.153 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.153 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.154 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.154 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.155 244018 INFO nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Terminating instance
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.157 244018 DEBUG nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:48:34 compute-0 kernel: tap0358e18d-8c (unregistering): left promiscuous mode
Feb 25 12:48:34 compute-0 NetworkManager[49836]: <info>  [1772023714.2205] device (tap0358e18d-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01259|binding|INFO|Releasing lport 0358e18d-8ce8-43a7-a8b2-11193708891a from this chassis (sb_readonly=0)
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01260|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a down in Southbound
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01261|binding|INFO|Removing iface tap0358e18d-8c ovn-installed in OVS
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.236 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1f:00 10.100.0.11'], port_security=['fa:16:3e:fd:1f:00 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0358e18d-8ce8-43a7-a8b2-11193708891a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.238 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0358e18d-8ce8-43a7-a8b2-11193708891a in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.239 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0b6d114-fcb4-4a25-988c-1ee301ef0419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f831975a-eee3-4afc-b827-492d8243d3f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.240 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 namespace which is not needed anymore
Feb 25 12:48:34 compute-0 kernel: tapebd67787-8a (unregistering): left promiscuous mode
Feb 25 12:48:34 compute-0 NetworkManager[49836]: <info>  [1772023714.2489] device (tapebd67787-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01262|binding|INFO|Releasing lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 from this chassis (sb_readonly=0)
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01263|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 down in Southbound
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_controller[147040]: 2026-02-25T12:48:34Z|01264|binding|INFO|Removing iface tapebd67787-8a ovn-installed in OVS
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.268 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], port_security=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8e:ea66/64 2001:db8::f816:3eff:fe8e:ea66/64', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ebd67787-8af9-4a7f-8b38-6d18daff8ff3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Feb 25 12:48:34 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.227s CPU time.
Feb 25 12:48:34 compute-0 systemd-machined[210048]: Machine qemu-149-instance-00000075 terminated.
Feb 25 12:48:34 compute-0 NetworkManager[49836]: <info>  [1772023714.3817] manager: (tap0358e18d-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : haproxy version is 2.8.14-c23fe91
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : path to executable is /usr/sbin/haproxy
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : Exiting Master process...
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : Exiting Master process...
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [ALERT]    (350081) : Current worker (350083) exited with code 143 (Terminated)
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : All workers exited. Exiting... (0)
Feb 25 12:48:34 compute-0 systemd[1]: libpod-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope: Deactivated successfully.
Feb 25 12:48:34 compute-0 NetworkManager[49836]: <info>  [1772023714.3961] manager: (tapebd67787-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 podman[352657]: 2026-02-25 12:48:34.400998594 +0000 UTC m=+0.069430392 container died a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.412 244018 INFO nova.virt.libvirt.driver [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance destroyed successfully.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.414 244018 DEBUG nova.objects.instance [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9-userdata-shm.mount: Deactivated successfully.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.430 244018 DEBUG nova.virt.libvirt.vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:24Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.432 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.433 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.434 244018 DEBUG os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a06c297626a192e338cd1e2dd123eba9e16164173bf46ec6839ab1a455ed742-merged.mount: Deactivated successfully.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.437 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0358e18d-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:34 compute-0 podman[352657]: 2026-02-25 12:48:34.437556086 +0000 UTC m=+0.105987884 container cleanup a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:48:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 55 KiB/s wr, 66 op/s
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.513 244018 DEBUG nova.compute.manager [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG nova.compute.manager [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.517 244018 INFO os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c')
Feb 25 12:48:34 compute-0 systemd[1]: libpod-conmon-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope: Deactivated successfully.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.518 244018 DEBUG nova.virt.libvirt.vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:24Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.519 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.520 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.520 244018 DEBUG os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.521 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebd67787-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.525 244018 INFO os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a')
Feb 25 12:48:34 compute-0 podman[352703]: 2026-02-25 12:48:34.56803336 +0000 UTC m=+0.040499824 container remove a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb4d531-eb6c-4c7c-b512-d398f86cae30]: (4, ('Wed Feb 25 12:48:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 (a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9)\na7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9\nWed Feb 25 12:48:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 (a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9)\na7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8f969d-2349-4d37-833d-509850160d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 kernel: tapd0b6d114-f0: left promiscuous mode
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.581 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3dad5-a852-488b-b003-51c70e4dcca5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.593 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86e9d39b-bc76-4f5b-963a-4bd9db834b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[642264b7-6865-4a4f-b306-1d98076acf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.609 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cc4eca-2136-48fa-a49c-6d9a9f48dd11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561162, 'reachable_time': 16301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352734, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.611 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.611 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d308adc8-76f3-41a9-b2f2-4faa74d01120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.612 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis
Feb 25 12:48:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0b6d114\x2dfcb4\x2d4a25\x2d988c\x2d1ee301ef0419.mount: Deactivated successfully.
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.614 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c482202-8994-4033-a0a9-167d92a9e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.615 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16def604-5ee2-4c71-aa89-58b7956286b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.616 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 namespace which is not needed anymore
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.732 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : haproxy version is 2.8.14-c23fe91
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : path to executable is /usr/sbin/haproxy
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [WARNING]  (350245) : Exiting Master process...
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [ALERT]    (350245) : Current worker (350250) exited with code 143 (Terminated)
Feb 25 12:48:34 compute-0 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [WARNING]  (350245) : All workers exited. Exiting... (0)
Feb 25 12:48:34 compute-0 systemd[1]: libpod-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope: Deactivated successfully.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:48:34 compute-0 podman[352753]: 2026-02-25 12:48:34.749329739 +0000 UTC m=+0.046610507 container died 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.750 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.751 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.751 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.752 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.753 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.754 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.754 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.767 244018 INFO nova.virt.libvirt.driver [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deleting instance files /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481_del
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.768 244018 INFO nova.virt.libvirt.driver [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deletion of /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481_del complete
Feb 25 12:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45-userdata-shm.mount: Deactivated successfully.
Feb 25 12:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-83335b7bb0badbab01818a01b3ea3886d2d30cde665b02056377fe1391514531-merged.mount: Deactivated successfully.
Feb 25 12:48:34 compute-0 podman[352753]: 2026-02-25 12:48:34.796201623 +0000 UTC m=+0.093482431 container cleanup 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:48:34 compute-0 systemd[1]: libpod-conmon-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope: Deactivated successfully.
Feb 25 12:48:34 compute-0 podman[352782]: 2026-02-25 12:48:34.857471363 +0000 UTC m=+0.043409157 container remove 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6319e025-3596-4491-915f-b3207c3e955a]: (4, ('Wed Feb 25 12:48:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 (20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45)\n20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45\nWed Feb 25 12:48:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 (20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45)\n20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.865 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e849ef6e-0aa4-453f-bb42-706159ca2661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.866 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:48:34 compute-0 kernel: tap5c482202-80: left promiscuous mode
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.876 244018 INFO nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG oslo.service.loopingcall [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:48:34 compute-0 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG nova.network.neutron [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.878 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1062dc-05d9-43b7-8918-1587889a5254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5778d4e8-8e4f-4f1e-8771-197bd186f8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db1e5407-d8aa-4d4e-95ff-348bbed7bc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7918bff6-d224-47aa-8b7a-609a4c69fef1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561237, 'reachable_time': 29426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352799, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.916 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:48:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.916 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e82092-6b80-45d4-bff3-953b6bf5031e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.035 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.035 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 WARNING nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received unexpected event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with vm_state active and task_state deleting.
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.053 244018 DEBUG nova.network.neutron [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.072 244018 INFO nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 1.74 seconds to deallocate network for instance.
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.126 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.127 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.207 244018 DEBUG oslo_concurrency.processutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c482202\x2d8994\x2d4033\x2da0a9\x2d167d92a9e301.mount: Deactivated successfully.
Feb 25 12:48:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663251523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.758 244018 DEBUG oslo_concurrency.processutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.765 244018 DEBUG nova.compute.provider_tree [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.787 244018 DEBUG nova.scheduler.client.report [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
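The inventory blob above is enough to reproduce what placement treats as schedulable capacity. A worked sketch; the formula (total - reserved) * allocation_ratio is placement's usable-capacity rule, and the numbers are copied from the log line:

    # usable capacity per resource class, from the inventory logged above
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2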
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.822 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:35 compute-0 ceph-mon[76335]: pgmap v2040: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 55 KiB/s wr, 66 op/s
Feb 25 12:48:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1663251523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.864 244018 INFO nova.scheduler.client.report [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 95730650-36ac-4eee-8b22-9ea3f01d82d1
Feb 25 12:48:35 compute-0 nova_compute[244014]: 2026-02-25 12:48:35.935 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 35 KiB/s wr, 60 op/s
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.653 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-deleted-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.654 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-deleted-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.654 244018 INFO nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Neutron deleted interface ebd67787-8af9-4a7f-8b38-6d18daff8ff3; detaching it from the instance and deleting it from the info cache
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.655 244018 DEBUG nova.network.neutron [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.702 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Detach interface failed, port_id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3, reason: Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:48:36 compute-0 nova_compute[244014]: 2026-02-25 12:48:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
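The periodic task exits early because reclaim_instance_interval is left at its default. A hypothetical nova.conf fragment showing the option this check reads (0 disables soft-delete reclaim; a positive value, in seconds, makes this task purge soft-deleted instances older than that):

    [DEFAULT]
    reclaim_instance_interval = 0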
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.133 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.134 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.135 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.135 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.136 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.136 244018 WARNING nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with vm_state active and task_state deleting.
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.137 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.137 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.138 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.138 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.139 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.139 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.140 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.140 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.142 244018 WARNING nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with vm_state active and task_state deleting.
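The "No waiting events found" / "Received unexpected event" pairs above reflect nova's external-event latch: a waiter must be registered before neutron's notification lands, otherwise the event is popped into nothing and merely logged. A generic sketch of that pattern (hypothetical code, not nova's implementation):

    import threading

    waiters = {}  # event name -> threading.Event

    def prepare(name):
        # registered by the driver before it expects a VIF event
        waiters[name] = threading.Event()

    def dispatch(name):
        ev = waiters.pop(name, None)
        if ev is None:
            # matches the "No waiting events found dispatching ..." lines
            print('No waiting events found dispatching', name)
        else:
            ev.set()  # wakes whoever registered the waiter and is blocking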
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.424 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.425 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:37 compute-0 nova_compute[244014]: 2026-02-25 12:48:37.452 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
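The cached network_info logged above is a list of VIF dicts with nested network/subnet/IP structures. A small hypothetical helper for walking it, using only keys visible in the log line itself:

    def addresses(network_info):
        # yields (port id, address, ip version) for every fixed IP in the cache
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address'], ip['version']

    # e.g. ('0358e18d-8ce8-43a7-a8b2-11193708891a', '10.100.0.11', 4)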
Feb 25 12:48:37 compute-0 ceph-mon[76335]: pgmap v2041: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 35 KiB/s wr, 60 op/s
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.270 244018 DEBUG nova.network.neutron [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.312 244018 INFO nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 3.43 seconds to deallocate network for instance.
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.379 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.379 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.420 244018 DEBUG oslo_concurrency.processutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 37 KiB/s wr, 115 op/s
Feb 25 12:48:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:38 compute-0 nova_compute[244014]: 2026-02-25 12:48:38.787 244018 DEBUG nova.compute.manager [req-bbf106c9-2a1c-4d8b-925e-db38292697e8 req-37fd7718-4045-4a7f-9f08-3de35016a6db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-deleted-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:48:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794443488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.016 244018 DEBUG oslo_concurrency.processutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.023 244018 DEBUG nova.compute.provider_tree [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.042 244018 DEBUG nova.scheduler.client.report [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.070 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.111 244018 INFO nova.scheduler.client.report [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 848fd033-0ebb-460a-a8a0-56583fa5f481
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.220 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
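Every "Acquiring lock ... / Lock ... acquired / Lock ... released" triple in this log comes from oslo.concurrency's lock wrapper. A minimal sketch of the public API that produces those lines (the decorator and lock name are real; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # critical section; the wrapper emits the acquire/held/released
        # DEBUG lines with the wait and hold durations seen above
        ...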
Feb 25 12:48:39 compute-0 nova_compute[244014]: 2026-02-25 12:48:39.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:39 compute-0 ceph-mon[76335]: pgmap v2042: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 37 KiB/s wr, 115 op/s
Feb 25 12:48:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/794443488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:40 compute-0 nova_compute[244014]: 2026-02-25 12:48:40.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 12:48:40 compute-0 nova_compute[244014]: 2026-02-25 12:48:40.600 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023705.5991154, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:40 compute-0 nova_compute[244014]: 2026-02-25 12:48:40.601 244018 INFO nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Stopped (Lifecycle Event)
Feb 25 12:48:40 compute-0 nova_compute[244014]: 2026-02-25 12:48:40.621 244018 DEBUG nova.compute.manager [None req-de036231-73d5-480c-8346-7062bf3b3abd - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:41 compute-0 nova_compute[244014]: 2026-02-25 12:48:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:41 compute-0 nova_compute[244014]: 2026-02-25 12:48:41.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:41 compute-0 ceph-mon[76335]: pgmap v2043: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 12:48:42 compute-0 nova_compute[244014]: 2026-02-25 12:48:42.551 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023707.548173, 271f6569-a8f6-43a3-ac98-511eff77c426 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:42 compute-0 nova_compute[244014]: 2026-02-25 12:48:42.552 244018 INFO nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Stopped (Lifecycle Event)
Feb 25 12:48:42 compute-0 nova_compute[244014]: 2026-02-25 12:48:42.577 244018 DEBUG nova.compute.manager [None req-f32260c3-2aa0-49bd-81e9-5893d12ffd7d - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.566761086646441e-05 of space, bias 1.0, pg target 0.004700283259939323 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024940534720719228 of space, bias 1.0, pg target 0.7482160416215768 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3979243418663797e-06 of space, bias 4.0, pg target 0.0016775092102396555 quantized to 16 (current 16)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:48:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
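The pg_autoscaler figures above are internally consistent: each raw pg target is the pool's usage fraction times its bias times roughly 300, i.e. the default mon_target_pg_per_osd of 100 across what is presumably 3 OSDs (an assumption; the OSD count is not in this excerpt). The raw target is then quantized to a valid pg_num and left alone unless it is about 3x off the current value. A quick check against the logged numbers:

    def raw_pg_target(usage_fraction, bias, osds=3, target_per_osd=100):
        return usage_fraction * bias * osds * target_per_osd

    print(raw_pg_target(0.0024940534720719228, 1.0))   # ~0.74822 -> 'images'
    print(raw_pg_target(1.3979243418663797e-06, 4.0))  # ~0.0016775 -> cephfs meta
    # both match the "pg target" values logged, before quantization to 32/16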
Feb 25 12:48:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:42.882 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:48:42 compute-0 nova_compute[244014]: 2026-02-25 12:48:42.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:42.885 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:48:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:43 compute-0 ceph-mon[76335]: pgmap v2044: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 12:48:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Feb 25 12:48:44 compute-0 nova_compute[244014]: 2026-02-25 12:48:44.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:45 compute-0 nova_compute[244014]: 2026-02-25 12:48:45.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:45 compute-0 ceph-mon[76335]: pgmap v2045: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Feb 25 12:48:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:48:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:48:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:48:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:48:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:48:47 compute-0 ceph-mon[76335]: pgmap v2046: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:48:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:48:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:48:47 compute-0 nova_compute[244014]: 2026-02-25 12:48:47.944 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023712.9430044, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:47 compute-0 nova_compute[244014]: 2026-02-25 12:48:47.944 244018 INFO nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Stopped (Lifecycle Event)
Feb 25 12:48:47 compute-0 nova_compute[244014]: 2026-02-25 12:48:47.970 244018 DEBUG nova.compute.manager [None req-dde8d55b-0c06-4c3b-832f-86a893457556 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:48:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:49 compute-0 nova_compute[244014]: 2026-02-25 12:48:49.412 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023714.4112868, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:48:49 compute-0 nova_compute[244014]: 2026-02-25 12:48:49.413 244018 INFO nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Stopped (Lifecycle Event)
Feb 25 12:48:49 compute-0 nova_compute[244014]: 2026-02-25 12:48:49.448 244018 DEBUG nova.compute.manager [None req-fb8eaa51-af69-410c-9d0e-dbc43380ea00 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:48:49 compute-0 nova_compute[244014]: 2026-02-25 12:48:49.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:49 compute-0 ceph-mon[76335]: pgmap v2047: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 12:48:50 compute-0 nova_compute[244014]: 2026-02-25 12:48:50.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:51 compute-0 ceph-mon[76335]: pgmap v2048: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:52.887 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
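This transaction is the metadata agent acknowledging the nb_cfg bump to 36 seen ten seconds earlier (the agent logged "Delaying updating chassis table for 10 seconds" at 12:48:42). A sketch assuming ovsdbapp's public db_set API, which is what the DbSetCommand above implements; sb_api and CHASSIS are hypothetical stand-ins for a connected southbound API instance and the agent's Chassis_Private record uuid:

    # hypothetical: stamp the acknowledged nb_cfg onto this chassis record
    sb_api.db_set(
        'Chassis_Private', CHASSIS,
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),
    ).execute(check_error=True)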
Feb 25 12:48:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:53 compute-0 ceph-mon[76335]: pgmap v2049: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:54 compute-0 nova_compute[244014]: 2026-02-25 12:48:54.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:55 compute-0 nova_compute[244014]: 2026-02-25 12:48:55.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:55 compute-0 ceph-mon[76335]: pgmap v2050: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:57 compute-0 ceph-mon[76335]: pgmap v2051: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.504 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.505 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.528 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:48:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.663 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.664 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.675 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.675 244018 INFO nova.compute.claims [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:48:58 compute-0 nova_compute[244014]: 2026-02-25 12:48:58.881 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:48:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490951547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.424 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.431 244018 DEBUG nova.compute.provider_tree [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.455 244018 DEBUG nova.scheduler.client.report [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.488 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.489 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.579 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.579 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.612 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.642 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.735 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.737 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.738 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating image(s)
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.769 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.795 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.818 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.821 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.872 244018 DEBUG nova.policy [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.894 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
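Before reusing the cached base image, nova probes it with qemu-img info under an oslo prlimit wrapper that caps address space (1 GiB) and CPU time (30 s). A minimal sketch of the same probe without the limits, assuming qemu-img's standard JSON fields:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json'],
        env={'LC_ALL': 'C', 'LANG': 'C'},
    ))
    print(info['format'], info['virtual-size'])  # e.g. 'raw' and size in bytes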
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.895 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.895 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.896 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.918 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:48:59 compute-0 nova_compute[244014]: 2026-02-25 12:48:59.921 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:48:59 compute-0 ceph-mon[76335]: pgmap v2052: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:48:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3490951547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.189 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.247 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.318 244018 DEBUG nova.objects.instance [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Ensure instance console log exists: /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.334 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.334 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:49:00 compute-0 podman[353033]: 2026-02-25 12:49:00.716822292 +0000 UTC m=+0.058874453 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:49:00 compute-0 podman[353034]: 2026-02-25 12:49:00.763342336 +0000 UTC m=+0.096458375 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:49:00 compute-0 nova_compute[244014]: 2026-02-25 12:49:00.782 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully created port: cda27370-858e-4443-819e-696576515c52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:01 compute-0 nova_compute[244014]: 2026-02-25 12:49:01.928 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully updated port: cda27370-858e-4443-819e-696576515c52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:49:01 compute-0 nova_compute[244014]: 2026-02-25 12:49:01.962 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:01 compute-0 nova_compute[244014]: 2026-02-25 12:49:01.963 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:01 compute-0 nova_compute[244014]: 2026-02-25 12:49:01.963 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:49:01 compute-0 ceph-mon[76335]: pgmap v2053: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:49:02 compute-0 nova_compute[244014]: 2026-02-25 12:49:02.066 244018 DEBUG nova.compute.manager [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:02 compute-0 nova_compute[244014]: 2026-02-25 12:49:02.067 244018 DEBUG nova.compute.manager [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:02 compute-0 nova_compute[244014]: 2026-02-25 12:49:02.067 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:02 compute-0 nova_compute[244014]: 2026-02-25 12:49:02.203 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:49:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 176 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 917 KiB/s wr, 14 op/s
Feb 25 12:49:02 compute-0 nova_compute[244014]: 2026-02-25 12:49:02.976 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.012 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.013 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance network_info: |[{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.013 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.014 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.020 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start _get_guest_xml network_info=[{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.026 244018 WARNING nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.037 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.038 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.047 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.047 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.048 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.048 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.049 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.050 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.050 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.051 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.051 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.052 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.052 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.053 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.053 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.054 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.058 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:49:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959613151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.638 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.663 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:03 compute-0 nova_compute[244014]: 2026-02-25 12:49:03.669 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:03 compute-0 ceph-mon[76335]: pgmap v2054: 305 pgs: 305 active+clean; 176 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 917 KiB/s wr, 14 op/s
Feb 25 12:49:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1959613151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:49:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757917452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.270 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.273 244018 DEBUG nova.virt.libvirt.vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:48:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.273 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.274 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.276 244018 DEBUG nova.objects.instance [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.293 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <name>instance-00000079</name>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:49:03</nova:creationTime>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 12:49:04 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <system>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="serial">37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="uuid">37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </system>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <os>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </os>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <features>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </features>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk">
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config">
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:49:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:50:74:9c"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <target dev="tapcda27370-85"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log" append="off"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <video>
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </video>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:49:04 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:49:04 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:49:04 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:49:04 compute-0 nova_compute[244014]: </domain>
Feb 25 12:49:04 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.295 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Preparing to wait for external event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.296 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.296 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.297 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.298 244018 DEBUG nova.virt.libvirt.vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:48:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.298 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.299 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.299 244018 DEBUG os_vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.301 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.302 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.307 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcda27370-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.308 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcda27370-85, col_values=(('external_ids', {'iface-id': 'cda27370-858e-4443-819e-696576515c52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:74:9c', 'vm-uuid': '37ce1876-2b57-4f84-800c-2a1b6eaa6943'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:04 compute-0 NetworkManager[49836]: <info>  [1772023744.3125] manager: (tapcda27370-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.319 244018 INFO os_vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85')
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.366 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.367 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.395 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.397 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.397 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.398 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:50:74:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.398 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Using config drive
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.423 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.879 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating config drive at /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config
Feb 25 12:49:04 compute-0 nova_compute[244014]: 2026-02-25 12:49:04.886 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa83ldsxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/757917452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.023 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa83ldsxx" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.051 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.056 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.208 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.209 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deleting local config drive /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config because it was imported into RBD.
Feb 25 12:49:05 compute-0 kernel: tapcda27370-85: entered promiscuous mode
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.2673] manager: (tapcda27370-85): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Feb 25 12:49:05 compute-0 ovn_controller[147040]: 2026-02-25T12:49:05Z|01265|binding|INFO|Claiming lport cda27370-858e-4443-819e-696576515c52 for this chassis.
Feb 25 12:49:05 compute-0 ovn_controller[147040]: 2026-02-25T12:49:05Z|01266|binding|INFO|cda27370-858e-4443-819e-696576515c52: Claiming fa:16:3e:50:74:9c 10.100.0.7
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 systemd-udevd[353211]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.298 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:74:9c 10.100.0.7'], port_security=['fa:16:3e:50:74:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b302251f-a239-4374-92d3-7686a49e9d67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9170f76a-1c5e-4379-83f2-194a69c3afae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa6e9e5e-6b6d-43d1-bea6-b8dba8200d28, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cda27370-858e-4443-819e-696576515c52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.2994] device (tapcda27370-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.2999] device (tapcda27370-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:49:05 compute-0 systemd-machined[210048]: New machine qemu-153-instance-00000079.
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.299 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cda27370-858e-4443-819e-696576515c52 in datapath b302251f-a239-4374-92d3-7686a49e9d67 bound to our chassis
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.300 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b302251f-a239-4374-92d3-7686a49e9d67
Feb 25 12:49:05 compute-0 ovn_controller[147040]: 2026-02-25T12:49:05Z|01267|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 ovn-installed in OVS
Feb 25 12:49:05 compute-0 ovn_controller[147040]: 2026-02-25T12:49:05Z|01268|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 up in Southbound
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[045a5f61-b414-4f69-b1fa-c1c165fc967d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.310 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb302251f-a1 in ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb302251f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d82f0faf-4684-4294-bfed-ed750d1c4869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79996e52-0f4a-41d4-b4c5-1430439a6403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.324 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9b0a52-c509-4049-baef-f93960fb6394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eddbd7a1-c74f-4291-8325-920cb1cbcbad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.365 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[063d4145-fd52-4731-a42b-02e5a2499505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.3707] manager: (tapb302251f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35536431-c627-43bf-85bb-018b4326af09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.398 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a78e5a5-8a82-4264-9055-f6f1309989de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.402 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63afdd50-911d-41e4-9129-5ec9217cf492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.4171] device (tapb302251f-a0): carrier: link connected
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ed1aaa-2581-4e3b-8372-e11734645c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.430 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6be21d90-acc9-4d28-8eb3-404da71fd66c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb302251f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a5:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571482, 'reachable_time': 15721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353245, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36058356-abdd-4c5e-941b-e6051d1786b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:a514'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571482, 'tstamp': 571482}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353246, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.458 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2599c3-4f99-470d-a0b3-5daab67df511]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb302251f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a5:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571482, 'reachable_time': 15721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353247, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.484 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51881249-22ec-40e2-97ac-96a62a052dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d997af6c-efbe-4670-a616-3ab0b7fc21f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb302251f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb302251f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 kernel: tapb302251f-a0: entered promiscuous mode
Feb 25 12:49:05 compute-0 NetworkManager[49836]: <info>  [1772023745.5267] manager: (tapb302251f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.530 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb302251f-a0, col_values=(('external_ids', {'iface-id': '3be3f9aa-5947-437f-9936-240be99a8dd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.531 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 ovn_controller[147040]: 2026-02-25T12:49:05Z|01269|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.534 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.535 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[83706e33-3c1a-4f0e-9bc0-8a3088bde0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.536 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-b302251f-a239-4374-92d3-7686a49e9d67
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID b302251f-a239-4374-92d3-7686a49e9d67
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.537 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'env', 'PROCESS_TAG=haproxy-b302251f-a239-4374-92d3-7686a49e9d67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b302251f-a239-4374-92d3-7686a49e9d67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.714 244018 DEBUG nova.compute.manager [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.715 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG nova.compute.manager [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Processing event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.788 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.789 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.7873554, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.789 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Started (Lifecycle Event)
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.796 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.800 244018 INFO nova.virt.libvirt.driver [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance spawned successfully.
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.801 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.841 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.842 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.843 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.844 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.845 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.845 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.854 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.889 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.7927842, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.889 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Paused (Lifecycle Event)
Feb 25 12:49:05 compute-0 podman[353321]: 2026-02-25 12:49:05.916840102 +0000 UTC m=+0.047561844 container create 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.937 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.796021, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.937 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Resumed (Lifecycle Event)
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.949 244018 INFO nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 6.21 seconds to spawn the instance on the hypervisor.
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.950 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:05 compute-0 systemd[1]: Started libpod-conmon-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope.
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:05 compute-0 nova_compute[244014]: 2026-02-25 12:49:05.977 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:49:05 compute-0 podman[353321]: 2026-02-25 12:49:05.888230925 +0000 UTC m=+0.018952687 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:49:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51fe52d468e2be311accba8895564be948c5e431928c89066ff8445171f1806f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:06 compute-0 podman[353321]: 2026-02-25 12:49:06.006189375 +0000 UTC m=+0.136911137 container init 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:49:06 compute-0 ceph-mon[76335]: pgmap v2055: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:49:06 compute-0 podman[353321]: 2026-02-25 12:49:06.010441065 +0000 UTC m=+0.141162817 container start 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:49:06 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : New worker (353344) forked
Feb 25 12:49:06 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : Loading success.
Feb 25 12:49:06 compute-0 nova_compute[244014]: 2026-02-25 12:49:06.036 244018 INFO nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 7.41 seconds to build instance.
Feb 25 12:49:06 compute-0 nova_compute[244014]: 2026-02-25 12:49:06.067 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.815 244018 DEBUG nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:07 compute-0 nova_compute[244014]: 2026-02-25 12:49:07.817 244018 WARNING nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state active and task_state None.
Feb 25 12:49:08 compute-0 ceph-mon[76335]: pgmap v2056: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:49:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:09 compute-0 nova_compute[244014]: 2026-02-25 12:49:09.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:09 compute-0 NetworkManager[49836]: <info>  [1772023749.6037] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Feb 25 12:49:09 compute-0 NetworkManager[49836]: <info>  [1772023749.6046] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Feb 25 12:49:09 compute-0 nova_compute[244014]: 2026-02-25 12:49:09.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:09 compute-0 nova_compute[244014]: 2026-02-25 12:49:09.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:09 compute-0 ovn_controller[147040]: 2026-02-25T12:49:09Z|01270|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 12:49:09 compute-0 nova_compute[244014]: 2026-02-25 12:49:09.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:10 compute-0 ceph-mon[76335]: pgmap v2057: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.150 244018 DEBUG nova.compute.manager [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.151 244018 DEBUG nova.compute.manager [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.151 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.152 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.152 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:10 compute-0 nova_compute[244014]: 2026-02-25 12:49:10.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:11 compute-0 nova_compute[244014]: 2026-02-25 12:49:11.959 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:11 compute-0 nova_compute[244014]: 2026-02-25 12:49:11.960 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:11 compute-0 nova_compute[244014]: 2026-02-25 12:49:11.987 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:12 compute-0 ceph-mon[76335]: pgmap v2058: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:49:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9637 writes, 44K keys, 9637 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9637 writes, 9637 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1511 writes, 7510 keys, 1511 commit groups, 1.0 writes per commit group, ingest: 9.35 MB, 0.02 MB/s
                                           Interval WAL: 1511 writes, 1511 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.3      0.98              0.14        29    0.034       0      0       0.0       0.0
                                             L6      1/0    7.91 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    105.1     88.3      2.59              0.64        28    0.093    158K    15K       0.0       0.0
                                            Sum      1/0    7.91 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5     76.2     78.4      3.58              0.78        57    0.063    158K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     99.1     97.7      0.90              0.24        16    0.057     55K   4095       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    105.1     88.3      2.59              0.64        28    0.093    158K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.5      0.98              0.14        28    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 3.6 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 30.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000275 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1917,29.15 MB,9.58745%) FilterBlock(58,441.98 KB,0.141982%) IndexBlock(58,771.70 KB,0.2479%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 12:49:12 compute-0 nova_compute[244014]: 2026-02-25 12:49:12.969 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:12 compute-0 nova_compute[244014]: 2026-02-25 12:49:12.970 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:12 compute-0 nova_compute[244014]: 2026-02-25 12:49:12.988 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.058 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.059 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.067 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.068 244018 INFO nova.compute.claims [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.199 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:49:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446107335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.725 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.732 244018 DEBUG nova.compute.provider_tree [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:49:13 compute-0 nova_compute[244014]: 2026-02-25 12:49:13.978 244018 DEBUG nova.scheduler.client.report [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.009 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.010 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:49:14 compute-0 ceph-mon[76335]: pgmap v2059: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:49:14 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1446107335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.116 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.116 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.135 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.152 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.272 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.274 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.275 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating image(s)
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.309 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.343 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.372 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.377 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.440 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.441 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.442 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.443 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.474 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.479 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 910 KiB/s wr, 86 op/s
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.739 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.795 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.886 244018 DEBUG nova.objects.instance [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.913 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.914 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Ensure instance console log exists: /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.914 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.915 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.915 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:14 compute-0 nova_compute[244014]: 2026-02-25 12:49:14.971 244018 DEBUG nova.policy [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:49:15 compute-0 nova_compute[244014]: 2026-02-25 12:49:15.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:16 compute-0 ceph-mon[76335]: pgmap v2060: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 910 KiB/s wr, 86 op/s
Feb 25 12:49:16 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:49:16 compute-0 nova_compute[244014]: 2026-02-25 12:49:16.330 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully created port: dee62982-d46c-4a81-b4ad-8154d7cfc7af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:49:17 compute-0 nova_compute[244014]: 2026-02-25 12:49:17.313 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully created port: 344b59b1-93f2-4e23-8c28-835e5d954630 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:17 compute-0 ovn_controller[147040]: 2026-02-25T12:49:17Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:74:9c 10.100.0.7
Feb 25 12:49:17 compute-0 ovn_controller[147040]: 2026-02-25T12:49:17Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:74:9c 10.100.0.7
Feb 25 12:49:18 compute-0 ceph-mon[76335]: pgmap v2061: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.294 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully updated port: dee62982-d46c-4a81-b4ad-8154d7cfc7af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.500 244018 DEBUG nova.compute.manager [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.501 244018 DEBUG nova.compute.manager [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Feb 25 12:49:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:18 compute-0 nova_compute[244014]: 2026-02-25 12:49:18.751 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.568 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.586 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.588 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully updated port: 344b59b1-93f2-4e23-8c28-835e5d954630 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.602 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:49:19 compute-0 nova_compute[244014]: 2026-02-25 12:49:19.834 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:49:19 compute-0 systemd[1]: Starting dnf makecache...
Feb 25 12:49:19 compute-0 sudo[353545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:49:19 compute-0 sudo[353545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:19 compute-0 sudo[353545]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:19 compute-0 sudo[353571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:49:19 compute-0 sudo[353571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: pgmap v2062: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Feb 25 12:49:20 compute-0 nova_compute[244014]: 2026-02-25 12:49:20.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:20 compute-0 dnf[353569]: Metadata cache refreshed recently.
Feb 25 12:49:20 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 25 12:49:20 compute-0 systemd[1]: Finished dnf makecache.
Feb 25 12:49:20 compute-0 sudo[353571]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:49:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:49:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:49:20 compute-0 sudo[353627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:49:20 compute-0 sudo[353627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:20 compute-0 sudo[353627]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:20 compute-0 nova_compute[244014]: 2026-02-25 12:49:20.599 244018 DEBUG nova.compute.manager [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:20 compute-0 nova_compute[244014]: 2026-02-25 12:49:20.600 244018 DEBUG nova.compute.manager [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-344b59b1-93f2-4e23-8c28-835e5d954630. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:20 compute-0 nova_compute[244014]: 2026-02-25 12:49:20.601 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:20 compute-0 sudo[353652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:49:20 compute-0 sudo[353652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:20 compute-0 podman[353688]: 2026-02-25 12:49:20.969519138 +0000 UTC m=+0.058194154 container create 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:49:21 compute-0 systemd[1]: Started libpod-conmon-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope.
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:20.947120326 +0000 UTC m=+0.035795422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:21.073105843 +0000 UTC m=+0.161780889 container init 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:49:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:21.08149183 +0000 UTC m=+0.170166846 container start 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:21.084956248 +0000 UTC m=+0.173631264 container attach 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:49:21 compute-0 competent_clarke[353704]: 167 167
Feb 25 12:49:21 compute-0 systemd[1]: libpod-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope: Deactivated successfully.
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:21.088155878 +0000 UTC m=+0.176830884 container died 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 12:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca1571c8437861220340b29e14cd90a2cf279110985ca7d90b8e7baa09995b8-merged.mount: Deactivated successfully.
Feb 25 12:49:21 compute-0 podman[353688]: 2026-02-25 12:49:21.132604133 +0000 UTC m=+0.221279139 container remove 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:49:21 compute-0 systemd[1]: libpod-conmon-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope: Deactivated successfully.
Feb 25 12:49:21 compute-0 podman[353728]: 2026-02-25 12:49:21.280413217 +0000 UTC m=+0.048969944 container create 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:49:21 compute-0 systemd[1]: Started libpod-conmon-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope.
Feb 25 12:49:21 compute-0 podman[353728]: 2026-02-25 12:49:21.258712124 +0000 UTC m=+0.027268861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:21 compute-0 podman[353728]: 2026-02-25 12:49:21.390804274 +0000 UTC m=+0.159360981 container init 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:49:21 compute-0 podman[353728]: 2026-02-25 12:49:21.406842637 +0000 UTC m=+0.175399364 container start 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:49:21 compute-0 podman[353728]: 2026-02-25 12:49:21.412580789 +0000 UTC m=+0.181137516 container attach 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:49:21 compute-0 inspiring_wing[353745]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:49:21 compute-0 inspiring_wing[353745]: --> All data devices are unavailable
Feb 25 12:49:21 compute-0 systemd[1]: libpod-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope: Deactivated successfully.
Feb 25 12:49:21 compute-0 podman[353765]: 2026-02-25 12:49:21.893448067 +0000 UTC m=+0.029376181 container died 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8-merged.mount: Deactivated successfully.
Feb 25 12:49:21 compute-0 podman[353765]: 2026-02-25 12:49:21.941628817 +0000 UTC m=+0.077556921 container remove 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:49:21 compute-0 systemd[1]: libpod-conmon-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope: Deactivated successfully.
Feb 25 12:49:22 compute-0 sudo[353652]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:22 compute-0 sudo[353780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:49:22 compute-0 sudo[353780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:22 compute-0 sudo[353780]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:22 compute-0 ceph-mon[76335]: pgmap v2063: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 25 12:49:22 compute-0 sudo[353805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:49:22 compute-0 sudo[353805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.423517314 +0000 UTC m=+0.046396561 container create 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:49:22 compute-0 systemd[1]: Started libpod-conmon-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope.
Feb 25 12:49:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.407524303 +0000 UTC m=+0.030403560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.513177646 +0000 UTC m=+0.136056953 container init 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.518409124 +0000 UTC m=+0.141288361 container start 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.522139179 +0000 UTC m=+0.145018416 container attach 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:49:22 compute-0 adoring_chaum[353858]: 167 167
Feb 25 12:49:22 compute-0 systemd[1]: libpod-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope: Deactivated successfully.
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.523464617 +0000 UTC m=+0.146343844 container died 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:49:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-619c316696d0c687b52a390d8172a0469116e0a1d8e45e3ab53f847f3904ae93-merged.mount: Deactivated successfully.
Feb 25 12:49:22 compute-0 podman[353842]: 2026-02-25 12:49:22.559787782 +0000 UTC m=+0.182667009 container remove 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 12:49:22 compute-0 systemd[1]: libpod-conmon-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope: Deactivated successfully.
Feb 25 12:49:22 compute-0 podman[353882]: 2026-02-25 12:49:22.71092097 +0000 UTC m=+0.042794479 container create 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:49:22 compute-0 systemd[1]: Started libpod-conmon-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope.
Feb 25 12:49:22 compute-0 podman[353882]: 2026-02-25 12:49:22.692307224 +0000 UTC m=+0.024180823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:22 compute-0 podman[353882]: 2026-02-25 12:49:22.809793402 +0000 UTC m=+0.141666911 container init 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:49:22 compute-0 podman[353882]: 2026-02-25 12:49:22.814727641 +0000 UTC m=+0.146601180 container start 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:49:22 compute-0 podman[353882]: 2026-02-25 12:49:22.818884348 +0000 UTC m=+0.150757877 container attach 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 12:49:23 compute-0 thirsty_moore[353898]: {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     "0": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "devices": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "/dev/loop3"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             ],
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_name": "ceph_lv0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_size": "21470642176",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "name": "ceph_lv0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "tags": {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_name": "ceph",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.crush_device_class": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.encrypted": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.objectstore": "bluestore",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_id": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.vdo": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.with_tpm": "0"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             },
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "vg_name": "ceph_vg0"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         }
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     ],
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     "1": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "devices": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "/dev/loop4"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             ],
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_name": "ceph_lv1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_size": "21470642176",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "name": "ceph_lv1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "tags": {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_name": "ceph",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.crush_device_class": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.encrypted": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.objectstore": "bluestore",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_id": "1",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.vdo": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.with_tpm": "0"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             },
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "vg_name": "ceph_vg1"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         }
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     ],
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     "2": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "devices": [
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "/dev/loop5"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             ],
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_name": "ceph_lv2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_size": "21470642176",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "name": "ceph_lv2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "tags": {
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.cluster_name": "ceph",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.crush_device_class": "",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.encrypted": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.objectstore": "bluestore",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osd_id": "2",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.vdo": "0",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:                 "ceph.with_tpm": "0"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             },
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "type": "block",
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:             "vg_name": "ceph_vg2"
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:         }
Feb 25 12:49:23 compute-0 thirsty_moore[353898]:     ]
Feb 25 12:49:23 compute-0 thirsty_moore[353898]: }
Feb 25 12:49:23 compute-0 systemd[1]: libpod-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope: Deactivated successfully.
Feb 25 12:49:23 compute-0 podman[353882]: 2026-02-25 12:49:23.12066403 +0000 UTC m=+0.452537559 container died 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90-merged.mount: Deactivated successfully.
Feb 25 12:49:23 compute-0 podman[353882]: 2026-02-25 12:49:23.158843818 +0000 UTC m=+0.490717327 container remove 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:49:23 compute-0 systemd[1]: libpod-conmon-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope: Deactivated successfully.
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.186 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:23 compute-0 sudo[353805]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.207 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance network_info: |[{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port 344b59b1-93f2-4e23-8c28-835e5d954630 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.213 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start _get_guest_xml network_info=[{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.219 244018 WARNING nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.228 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.229 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.233 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.238 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.241 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:23 compute-0 sudo[353919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:49:23 compute-0 sudo[353919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:23 compute-0 sudo[353919]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:23 compute-0 sudo[353945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:49:23 compute-0 sudo[353945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.470 244018 INFO nova.compute.manager [None req-3fcac6e8-50e0-4815-ba31-4c0f4efbf563 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Get console output
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.479 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:49:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.620237385 +0000 UTC m=+0.044712663 container create 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:49:23 compute-0 systemd[1]: Started libpod-conmon-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope.
Feb 25 12:49:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.600862398 +0000 UTC m=+0.025337706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.698585977 +0000 UTC m=+0.123061275 container init 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.704606118 +0000 UTC m=+0.129081396 container start 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:49:23 compute-0 relaxed_jones[354018]: 167 167
Feb 25 12:49:23 compute-0 systemd[1]: libpod-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope: Deactivated successfully.
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.710406261 +0000 UTC m=+0.134881539 container attach 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.710868894 +0000 UTC m=+0.135344162 container died 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 12:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2948901fdea99e55a805067cc9fa3d12e7dae76f768d0d1023b1b3a3cd17d4bd-merged.mount: Deactivated successfully.
Feb 25 12:49:23 compute-0 podman[354001]: 2026-02-25 12:49:23.755806963 +0000 UTC m=+0.180282281 container remove 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:49:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:49:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756431689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:23 compute-0 systemd[1]: libpod-conmon-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope: Deactivated successfully.
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.780 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.800 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:23 compute-0 nova_compute[244014]: 2026-02-25 12:49:23.804 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:23 compute-0 podman[354063]: 2026-02-25 12:49:23.912095686 +0000 UTC m=+0.046097712 container create 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:49:23 compute-0 systemd[1]: Started libpod-conmon-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope.
Feb 25 12:49:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:23 compute-0 podman[354063]: 2026-02-25 12:49:23.889090107 +0000 UTC m=+0.023092133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:49:23 compute-0 podman[354063]: 2026-02-25 12:49:23.999562696 +0000 UTC m=+0.133564792 container init 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:49:24 compute-0 podman[354063]: 2026-02-25 12:49:24.014161638 +0000 UTC m=+0.148163694 container start 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:49:24 compute-0 podman[354063]: 2026-02-25 12:49:24.018168442 +0000 UTC m=+0.152170498 container attach 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:49:24 compute-0 ceph-mon[76335]: pgmap v2064: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1756431689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:49:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258743471' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.360 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.363 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.364 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.365 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.366 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.367 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.368 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.369 244018 DEBUG nova.objects.instance [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.389 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <uuid>b6501baa-8bc9-4724-b4c1-8ff43faf517c</uuid>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <name>instance-0000007a</name>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1173463848</nova:name>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:49:23</nova:creationTime>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:port uuid="dee62982-d46c-4a81-b4ad-8154d7cfc7af">
Feb 25 12:49:24 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <nova:port uuid="344b59b1-93f2-4e23-8c28-835e5d954630">
Feb 25 12:49:24 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8d:1050" ipVersion="6"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <system>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="serial">b6501baa-8bc9-4724-b4c1-8ff43faf517c</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="uuid">b6501baa-8bc9-4724-b4c1-8ff43faf517c</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </system>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <os>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </os>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <features>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </features>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk">
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config">
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:49:24 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:93:cc:74"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <target dev="tapdee62982-d4"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8d:10:50"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <target dev="tap344b59b1-93"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/console.log" append="off"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <video>
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </video>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:49:24 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:49:24 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:49:24 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:49:24 compute-0 nova_compute[244014]: </domain>
Feb 25 12:49:24 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Preparing to wait for external event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Preparing to wait for external event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.392 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.392 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.393 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.393 244018 DEBUG os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.394 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.395 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.399 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdee62982-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.400 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdee62982-d4, col_values=(('external_ids', {'iface-id': 'dee62982-d46c-4a81-b4ad-8154d7cfc7af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:cc:74', 'vm-uuid': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 NetworkManager[49836]: <info>  [1772023764.4022] manager: (tapdee62982-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.406 244018 INFO os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4')
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.407 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.407 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.408 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.408 244018 DEBUG os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap344b59b1-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap344b59b1-93, col_values=(('external_ids', {'iface-id': '344b59b1-93f2-4e23-8c28-835e5d954630', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:10:50', 'vm-uuid': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 NetworkManager[49836]: <info>  [1772023764.4144] manager: (tap344b59b1-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.420 244018 INFO os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93')
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.490 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.491 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.491 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:93:cc:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.492 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:8d:10:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.492 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Using config drive
Feb 25 12:49:24 compute-0 nova_compute[244014]: 2026-02-25 12:49:24.516 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:24 compute-0 lvm[354200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:49:24 compute-0 lvm[354200]: VG ceph_vg0 finished
Feb 25 12:49:24 compute-0 lvm[354203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:49:24 compute-0 lvm[354203]: VG ceph_vg1 finished
Feb 25 12:49:24 compute-0 lvm[354205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:49:24 compute-0 lvm[354205]: VG ceph_vg2 finished
Feb 25 12:49:24 compute-0 cranky_feynman[354098]: {}
Feb 25 12:49:24 compute-0 systemd[1]: libpod-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Deactivated successfully.
Feb 25 12:49:24 compute-0 systemd[1]: libpod-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Consumed 1.055s CPU time.
Feb 25 12:49:24 compute-0 podman[354063]: 2026-02-25 12:49:24.779414967 +0000 UTC m=+0.913417023 container died 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5-merged.mount: Deactivated successfully.
Feb 25 12:49:24 compute-0 podman[354063]: 2026-02-25 12:49:24.817133942 +0000 UTC m=+0.951135958 container remove 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 12:49:24 compute-0 systemd[1]: libpod-conmon-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Deactivated successfully.
Feb 25 12:49:24 compute-0 sudo[353945]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:49:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:49:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:24 compute-0 sudo[354218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:49:24 compute-0 sudo[354218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:49:24 compute-0 sudo[354218]: pam_unix(sudo:session): session closed for user root
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.010 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating config drive at /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.017 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqj5qfh21 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.056 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port 344b59b1-93f2-4e23-8c28-835e5d954630. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.057 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.087 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4258743471' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:49:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.162 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqj5qfh21" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.201 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.204 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.329 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.330 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deleting local config drive /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config because it was imported into RBD.
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3735] manager: (tapdee62982-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Feb 25 12:49:25 compute-0 systemd-udevd[354202]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:25 compute-0 kernel: tapdee62982-d4: entered promiscuous mode
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01271|binding|INFO|Claiming lport dee62982-d46c-4a81-b4ad-8154d7cfc7af for this chassis.
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01272|binding|INFO|dee62982-d46c-4a81-b4ad-8154d7cfc7af: Claiming fa:16:3e:93:cc:74 10.100.0.9
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3827] manager: (tap344b59b1-93): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Feb 25 12:49:25 compute-0 systemd-udevd[354199]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:25 compute-0 kernel: tap344b59b1-93: entered promiscuous mode
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01273|binding|INFO|Claiming lport 344b59b1-93f2-4e23-8c28-835e5d954630 for this chassis.
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01274|binding|INFO|344b59b1-93f2-4e23-8c28-835e5d954630: Claiming fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.386 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:74 10.100.0.9'], port_security=['fa:16:3e:93:cc:74 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dee62982-d46c-4a81-b4ad-8154d7cfc7af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01275|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af ovn-installed in OVS
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01276|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af up in Southbound
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3911] device (tapdee62982-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.391 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dee62982-d46c-4a81-b4ad-8154d7cfc7af in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc bound to our chassis
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3932] device (tapdee62982-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3942] device (tap344b59b1-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01277|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 ovn-installed in OVS
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01278|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 up in Southbound
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.393 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.3962] device (tap344b59b1-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.397 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], port_security=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:1050/64', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=344b59b1-93f2-4e23-8c28-835e5d954630) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4efa9b-f3b2-495d-804a-0bc1172ad26d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.404 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap481feaf1-71 in ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.411 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap481feaf1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.411 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bbc69d-ecc3-41f6-a908-c7e990d9e2b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.412 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[355a5b19-1bf4-4231-93e9-552cf5515d62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 systemd-machined[210048]: New machine qemu-154-instance-0000007a.
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.421 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e64758fc-21cc-4fc5-a3ab-a5584c90f867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6cb856-9e79-43de-9521-34a182d00922]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.458 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8a95f0b6-6717-4c84-980b-d676726b981a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec0ad6e-5f95-415d-a28a-d26397d9ddef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.4660] manager: (tap481feaf1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.491 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[06688183-86ab-44a4-8fef-d7199f449a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.494 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1adfc4d4-0b1f-4254-b742-9a5ecee7a4a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.5122] device (tap481feaf1-70): carrier: link connected
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.516 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b6727c93-51cf-4fc4-aff9-b1595045c645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.529 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb24fc19-1370-417b-b6cf-7467930a3583]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354330, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[142a477d-cdc6-42f1-88e1-59572382a2c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:6f82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573491, 'tstamp': 573491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354331, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.555 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c83132f4-3457-472c-b4cf-4b15631f55a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354332, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e50c72c3-e5e9-4d6a-ba76-1adf2e717459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.626 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[abd154d3-9d6f-439a-af84-e72fb141eb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:25 compute-0 kernel: tap481feaf1-70: entered promiscuous mode
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 NetworkManager[49836]: <info>  [1772023765.6312] manager: (tap481feaf1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.633 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 ovn_controller[147040]: 2026-02-25T12:49:25Z|01279|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.635 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.636 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd712ec-e2fe-4c9f-a1fc-9ffc5d3e0e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.637 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:49:25 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.638 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'env', 'PROCESS_TAG=haproxy-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/481feaf1-7ff4-47be-9159-a1dd19ceebcc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.811 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023765.8112133, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Started (Lifecycle Event)
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.839 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023765.8115282, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.840 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Paused (Lifecycle Event)
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.856 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.859 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:49:25 compute-0 nova_compute[244014]: 2026-02-25 12:49:25.878 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:49:25 compute-0 podman[354407]: 2026-02-25 12:49:25.989329021 +0000 UTC m=+0.060528200 container create 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:49:26 compute-0 systemd[1]: Started libpod-conmon-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope.
Feb 25 12:49:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fa4fcab328f5a267bf20836bf596abfcd5ef7623497a8450d10dfaa2251880/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:26 compute-0 podman[354407]: 2026-02-25 12:49:25.960631071 +0000 UTC m=+0.031830330 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:49:26 compute-0 podman[354407]: 2026-02-25 12:49:26.055432407 +0000 UTC m=+0.126631676 container init 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:49:26 compute-0 podman[354407]: 2026-02-25 12:49:26.059731699 +0000 UTC m=+0.130930908 container start 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:49:26 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : New worker (354428) forked
Feb 25 12:49:26 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : Loading success.
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.105 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 344b59b1-93f2-4e23-8c28-835e5d954630 in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.108 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.115 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35591b7b-4bbb-4702-85ba-f852b1a173ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.116 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb832dde-91 in ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.119 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb832dde-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.119 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a0cd9-4269-4769-98ca-37bd28f20e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29caf8fd-3097-4d38-b999-c6e37fb28320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.131 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe10efd-ac8e-439b-b75f-f003c876a76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d99de92-0f1b-40fd-947c-0362f372b395]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ceph-mon[76335]: pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.179 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6831c47b-9741-44e3-88fd-ba12e6d77ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 NetworkManager[49836]: <info>  [1772023766.1864] manager: (tapeb832dde-90): new Veth device (/org/freedesktop/NetworkManager/Devices/536)
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9251794-64bc-4d66-a088-4b5f4ce42c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 systemd-udevd[354324]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.211 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3762ba3-61ed-4afd-872c-2b74f57d8007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.214 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e82ef732-8863-48da-b98d-5b44a0f1cec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 NetworkManager[49836]: <info>  [1772023766.2368] device (tapeb832dde-90): carrier: link connected
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.245 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1b60d522-eb2d-4595-a9f1-b4fb071941cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.247 244018 DEBUG nova.compute.manager [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG nova.compute.manager [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Processing event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c110a3d3-d24f-4277-a0f3-31840d246579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354447, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.272 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[665f69d3-55d7-42fd-8986-2df926350080]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:a537'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573564, 'tstamp': 573564}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354448, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d45f364a-f45b-4e34-8a5f-56b32086a06f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354449, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.311 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01f3fd79-d3f7-47ee-b81f-5a1aaa701e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.332 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[284b242e-8e2b-40d8-9f5c-59d9546bc580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.333 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.333 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.334 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:26 compute-0 NetworkManager[49836]: <info>  [1772023766.3371] manager: (tapeb832dde-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Feb 25 12:49:26 compute-0 kernel: tapeb832dde-90: entered promiscuous mode
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.338 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:26 compute-0 ovn_controller[147040]: 2026-02-25T12:49:26Z|01280|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:26 compute-0 nova_compute[244014]: 2026-02-25 12:49:26.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.346 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.347 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f489c13f-7c59-481a-a998-c6a0380584f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.348 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:49:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.349 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'env', 'PROCESS_TAG=haproxy-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb832dde-9848-40c5-9505-cc643b1bd0fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:49:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:26 compute-0 podman[354480]: 2026-02-25 12:49:26.662801868 +0000 UTC m=+0.038638302 container create af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:49:26 compute-0 systemd[1]: Started libpod-conmon-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope.
Feb 25 12:49:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abdf7d75bc10bb95826887c814df1a0ab66d19dd1ae55e1c4beadda3a359345/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:26 compute-0 podman[354480]: 2026-02-25 12:49:26.711913394 +0000 UTC m=+0.087749848 container init af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:49:26 compute-0 podman[354480]: 2026-02-25 12:49:26.716211116 +0000 UTC m=+0.092047550 container start af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:49:26 compute-0 podman[354480]: 2026-02-25 12:49:26.643627946 +0000 UTC m=+0.019464400 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:49:26 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : New worker (354501) forked
Feb 25 12:49:26 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : Loading success.
Feb 25 12:49:27 compute-0 nova_compute[244014]: 2026-02-25 12:49:27.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:27 compute-0 nova_compute[244014]: 2026-02-25 12:49:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:27 compute-0 nova_compute[244014]: 2026-02-25 12:49:27.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:49:27 compute-0 nova_compute[244014]: 2026-02-25 12:49:27.905 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:49:28 compute-0 ceph-mon[76335]: pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.347 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.348 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.348 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No event matching network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af in dict_keys([('network-vif-plugged', '344b59b1-93f2-4e23-8c28-835e5d954630')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 WARNING nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with vm_state building and task_state spawning.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.351 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Processing event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 WARNING nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with vm_state building and task_state spawning.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.354 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.359 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023768.3588724, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.359 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Resumed (Lifecycle Event)
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.361 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.366 244018 INFO nova.virt.libvirt.driver [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance spawned successfully.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.367 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.387 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.394 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.397 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.398 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.398 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.399 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.399 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.400 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.428 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.466 244018 INFO nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 14.19 seconds to spawn the instance on the hypervisor.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.467 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.511 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.511 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.512 244018 DEBUG nova.objects.instance [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.545 244018 INFO nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 15.51 seconds to build instance.
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.577 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.897 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.897 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.986 244018 DEBUG nova.objects.instance [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_requests' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:28 compute-0 nova_compute[244014]: 2026-02-25 12:49:28.997 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.120 244018 DEBUG nova.policy [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:49:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950405128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.485 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.561 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.562 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.565 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.565 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.619 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully created port: 887af58f-85b4-49c2-93b7-855c843c0cd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.723 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.724 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3345MB free_disk=59.920919011346996GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.724 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.725 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.818 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:49:29 compute-0 nova_compute[244014]: 2026-02-25 12:49:29.873 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:30 compute-0 ceph-mon[76335]: pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 12:49:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1950405128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.333 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully updated port: 887af58f-85b4-49c2-93b7-855c843c0cd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.347 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.347 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.348 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:49:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:49:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470822927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.376 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.381 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.397 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.418 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.419 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.491 244018 DEBUG nova.compute.manager [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.492 244018 DEBUG nova.compute.manager [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-887af58f-85b4-49c2-93b7-855c843c0cd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:30 compute-0 nova_compute[244014]: 2026-02-25 12:49:30.493 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 14 op/s
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:49:31
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'images', 'vms']
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:49:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2470822927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:49:31 compute-0 podman[354556]: 2026-02-25 12:49:31.72619827 +0000 UTC m=+0.064920904 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:49:31 compute-0 podman[354555]: 2026-02-25 12:49:31.731655914 +0000 UTC m=+0.073049054 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:49:32 compute-0 ceph-mon[76335]: pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 14 op/s
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.305 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.330 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.331 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.332 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port 887af58f-85b4-49c2-93b7-855c843c0cd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.335 244018 DEBUG nova.virt.libvirt.vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.336 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.336 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.337 244018 DEBUG os_vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.338 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.339 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.341 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap887af58f-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.342 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap887af58f-85, col_values=(('external_ids', {'iface-id': '887af58f-85b4-49c2-93b7-855c843c0cd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:00:77', 'vm-uuid': '37ce1876-2b57-4f84-800c-2a1b6eaa6943'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.3451] manager: (tap887af58f-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.352 244018 INFO os_vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85')
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.354 244018 DEBUG nova.virt.libvirt.vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.354 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.355 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.358 244018 DEBUG nova.virt.libvirt.guest [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:08:00:77"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <target dev="tap887af58f-85"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]: </interface>
Feb 25 12:49:32 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.3681] manager: (tap887af58f-85): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Feb 25 12:49:32 compute-0 kernel: tap887af58f-85: entered promiscuous mode
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 ovn_controller[147040]: 2026-02-25T12:49:32Z|01281|binding|INFO|Claiming lport 887af58f-85b4-49c2-93b7-855c843c0cd5 for this chassis.
Feb 25 12:49:32 compute-0 ovn_controller[147040]: 2026-02-25T12:49:32Z|01282|binding|INFO|887af58f-85b4-49c2-93b7-855c843c0cd5: Claiming fa:16:3e:08:00:77 10.100.0.29
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.393 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:00:77 10.100.0.29'], port_security=['fa:16:3e:08:00:77 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd570f87-4a07-4ca5-8faf-e2384fdeb2a2, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=887af58f-85b4-49c2-93b7-855c843c0cd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.395 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 in datapath f8617723-856d-4eb3-ab8d-62fd3cacab7e bound to our chassis
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.397 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8617723-856d-4eb3-ab8d-62fd3cacab7e
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 ovn_controller[147040]: 2026-02-25T12:49:32Z|01283|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 ovn-installed in OVS
Feb 25 12:49:32 compute-0 ovn_controller[147040]: 2026-02-25T12:49:32Z|01284|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 up in Southbound
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 systemd-udevd[354606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.405 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73754554-30f1-42a7-9113-290c61a51210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.407 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8617723-81 in ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.409 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8617723-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.409 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd21a12f-2ad0-4ce7-969d-a6c04e14990f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.412 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cbccb4d5-269a-46ce-bfcf-61e5710a3043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.4205] device (tap887af58f-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.4211] device (tap887af58f-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.424 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[75060272-bd2d-42df-bef6-39facf5d34ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f811d8d4-8096-42bf-8542-506605673dc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.455 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.456 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.456 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:50:74:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.457 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:08:00:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c0221642-ecc8-4b39-bb99-00e995582765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.478 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c257432-80ea-4c19-9243-43079863834c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.4794] manager: (tapf8617723-80): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Feb 25 12:49:32 compute-0 systemd-udevd[354609]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.497 244018 DEBUG nova.virt.libvirt.guest [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 12:49:32 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 12:49:32 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:49:32 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:32 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:49:32 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:49:32 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.503 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[57c19cc0-f1d0-484d-9a01-7937417f51d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.507 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[24cdd947-affc-4206-88b7-f9f80379f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.5231] device (tapf8617723-80): carrier: link connected
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.524 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.528 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[284ae3b9-0fe2-4e8d-a501-46383a38ce2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 65 KiB/s wr, 61 op/s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25cc46f3-0370-45de-a11c-b8598c3308d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8617723-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:4b:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574193, 'reachable_time': 16327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354634, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.553 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aff99c6b-5dde-4b4d-b2cf-5c52770ae957]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:4b55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574193, 'tstamp': 574193}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354635, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb07871-32c0-4a07-a795-1bd1ec25cf04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8617723-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:4b:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574193, 'reachable_time': 16327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354636, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f92e32c5-c5f7-4580-9216-5606f1201c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716c8b34-b05b-480a-b24d-88975d24da05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8617723-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8617723-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 NetworkManager[49836]: <info>  [1772023772.6432] manager: (tapf8617723-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Feb 25 12:49:32 compute-0 kernel: tapf8617723-80: entered promiscuous mode
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8617723-80, col_values=(('external_ids', {'iface-id': '256a466a-4cb8-48fa-a60d-9db8ac74f95b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 ovn_controller[147040]: 2026-02-25T12:49:32Z|01285|binding|INFO|Releasing lport 256a466a-4cb8-48fa-a60d-9db8ac74f95b from this chassis (sb_readonly=0)
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.665 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.665 244018 DEBUG nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.667 244018 WARNING nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.
Feb 25 12:49:32 compute-0 nova_compute[244014]: 2026-02-25 12:49:32.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.668 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2eb0b3-470c-4484-ba64-8708fe5911df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.669 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f8617723-856d-4eb3-ab8d-62fd3cacab7e
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f8617723-856d-4eb3-ab8d-62fd3cacab7e
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:49:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.671 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'env', 'PROCESS_TAG=haproxy-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8617723-856d-4eb3-ab8d-62fd3cacab7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:49:33 compute-0 podman[354668]: 2026-02-25 12:49:33.000332647 +0000 UTC m=+0.042262674 container create cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:49:33 compute-0 systemd[1]: Started libpod-conmon-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope.
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.055 244018 DEBUG nova.compute.manager [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG nova.compute.manager [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ae28e2e538a9b1e2fa1fd3ba97baafc40136904659ba2baf74eab449664c35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:49:33 compute-0 podman[354668]: 2026-02-25 12:49:33.071569269 +0000 UTC m=+0.113499306 container init cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:49:33 compute-0 podman[354668]: 2026-02-25 12:49:33.075408517 +0000 UTC m=+0.117338534 container start cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:49:33 compute-0 podman[354668]: 2026-02-25 12:49:32.97952147 +0000 UTC m=+0.021451517 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:49:33 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : New worker (354690) forked
Feb 25 12:49:33 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : Loading success.
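
With haproxy reporting a successful load, the proxy should now answer on 169.254.169.254:80 inside that namespace. A hedged verification sketch, assuming root and curl on the host; the /openstack path is only an illustrative probe:

    import subprocess

    # Probe the metadata proxy from inside its namespace; prints the HTTP status.
    subprocess.run(
        ["ip", "netns", "exec", "ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e",
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/openstack"],
        check=False,
    )
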
Feb 25 12:49:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.929 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port 887af58f-85b4-49c2-93b7-855c843c0cd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.930 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:33 compute-0 nova_compute[244014]: 2026-02-25 12:49:33.943 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:34 compute-0 ovn_controller[147040]: 2026-02-25T12:49:34Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:00:77 10.100.0.29
Feb 25 12:49:34 compute-0 ovn_controller[147040]: 2026-02-25T12:49:34Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:00:77 10.100.0.29
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.172 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.173 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.188 244018 DEBUG nova.objects.instance [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:34 compute-0 ceph-mon[76335]: pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 65 KiB/s wr, 61 op/s
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.213 244018 DEBUG nova.virt.libvirt.vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.215 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.216 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.221 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.223 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.226 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Attempting to detach device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.226 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:08:00:77"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <target dev="tap887af58f-85"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </interface>
Feb 25 12:49:34 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
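
The detach_device call above removes the interface from the persistent domain definition first. A minimal libvirt-python sketch of that step, using the XML and instance UUID from the log; the qemu:///system URI is an assumption:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:08:00:77"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap887af58f-85"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("37ce1876-2b57-4f84-800c-2a1b6eaa6943")
        # Persistent config only; the live domain is detached in a later
        # step, as the subsequent log records show.
        dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    finally:
        conn.close()
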
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.232 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.236 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> not found in domain: <domain type='kvm' id='153'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <name>instance-00000079</name>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <system>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='serial'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='uuid'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </system>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <os>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </os>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <features>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </features>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk' index='2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config' index='1'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:50:74:9c'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='tapcda27370-85'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:08:00:77'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='tap887af58f-85'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source path='/dev/pts/0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </target>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source path='/dev/pts/0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </console>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <video>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </video>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c59,c138</label>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c59,c138</imagelabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </domain>
Feb 25 12:49:34 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
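
The driver dumps the full domain XML above when its config-matching lookup reports no matching interface. Purely as an illustration (this is not nova's get_interface_by_cfg, which does a custom equality check on parsed device configs), a much cruder lookup by MAC and target device could be written as:

    import xml.etree.ElementTree as ET

    def has_interface(domain_xml: str, mac: str, dev: str) -> bool:
        """Return True if the domain XML has an interface with this MAC/target."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            mac_el = iface.find("mac")
            tgt_el = iface.find("target")
            if (mac_el is not None and mac_el.get("address") == mac and
                    tgt_el is not None and tgt_el.get("dev") == dev):
                return True
        return False

    # e.g. has_interface(dumped_xml, "fa:16:3e:08:00:77", "tap887af58f-85")
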
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 INFO nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the persistent domain config.
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] (1/8): Attempting to detach device tap887af58f-85 with device alias net1 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:08:00:77"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <target dev="tap887af58f-85"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </interface>
Feb 25 12:49:34 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.295 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.296 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.314 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:34 compute-0 kernel: tap887af58f-85 (unregistering): left promiscuous mode
Feb 25 12:49:34 compute-0 NetworkManager[49836]: <info>  [1772023774.3451] device (tap887af58f-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 ovn_controller[147040]: 2026-02-25T12:49:34Z|01286|binding|INFO|Releasing lport 887af58f-85b4-49c2-93b7-855c843c0cd5 from this chassis (sb_readonly=0)
Feb 25 12:49:34 compute-0 ovn_controller[147040]: 2026-02-25T12:49:34Z|01287|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 down in Southbound
Feb 25 12:49:34 compute-0 ovn_controller[147040]: 2026-02-25T12:49:34Z|01288|binding|INFO|Removing iface tap887af58f-85 ovn-installed in OVS
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.361 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:00:77 10.100.0.29'], port_security=['fa:16:3e:08:00:77 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd570f87-4a07-4ca5-8faf-e2384fdeb2a2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=887af58f-85b4-49c2-93b7-855c843c0cd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.363 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 in datapath f8617723-856d-4eb3-ab8d-62fd3cacab7e unbound from our chassis
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8617723-856d-4eb3-ab8d-62fd3cacab7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
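The metadata agent matched a PortBindingUpdatedEvent against the Southbound Port_Binding table before concluding the port is unbound. A sketch of how such an ovsdbapp RowEvent is declared; only the table name and event tuple come from the log, and the run() body is illustrative:

    # Sketch, assuming ovsdbapp: a RowEvent matching UPDATEs on Port_Binding,
    # as printed above (events=('update',), table='Port_Binding',
    # conditions=None).
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked after matches() succeeds; 'old' holds the previous
            # values of the changed columns (here up=[True] and the old
            # chassis reference).
            if getattr(old, 'chassis', None):
                print('Port %s unbound from our chassis' % row.logical_port)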
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bb231d-c552-4091-9bfb-616ef41b9d58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.366 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e namespace which is not needed anymore
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.370 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772023774.3705173, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Start waiting for the detach event from libvirt for device tap887af58f-85 with device alias net1 for instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.372 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.377 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface>not found in domain: <domain type='kvm' id='153'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <name>instance-00000079</name>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <system>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='serial'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='uuid'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </system>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <os>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </os>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <features>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </features>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk' index='2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config' index='1'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </source>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:50:74:9c'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target dev='tapcda27370-85'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source path='/dev/pts/0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       </target>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <source path='/dev/pts/0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </console>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </input>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <video>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </video>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c59,c138</label>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c59,c138</imagelabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </domain>
Feb 25 12:49:34 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.378 244018 INFO nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the live domain config.
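The driver reported the detach complete only after libvirt delivered the matching DeviceRemovedEvent for alias net1. A rough sketch of that detach-and-wait handshake with the libvirt Python bindings (connection URI, domain lookup, and the abbreviated interface XML are illustrative):

    # Sketch, assuming libvirt-python: detach a NIC from a live domain and
    # block until the DEVICE_REMOVED event for its alias arrives.
    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('37ce1876-2b57-4f84-800c-2a1b6eaa6943')

    removed = threading.Event()

    def on_device_removed(conn, dom, dev, opaque):
        if dev == 'net1':   # device alias, as in the log above
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed, None)

    iface_xml = '<interface type="ethernet">...</interface>'  # abbreviated
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Pump the default event loop until the removal event is dispatched.
    while not removed.is_set():
        libvirt.virEventRunDefaultImpl()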
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.380 244018 DEBUG nova.virt.libvirt.vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.380 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.381 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.381 244018 DEBUG os_vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.383 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap887af58f-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.387 244018 INFO os_vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85')
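os-vif's unplug boiled down to the DelPortCommand shown a few lines up. The same operation expressed directly against ovsdbapp's Open_vSwitch schema API (the OVSDB endpoint is an assumption; the port and bridge names are from the log):

    # Sketch, assuming ovsdbapp and a local OVSDB socket: remove the tap
    # port from br-int, tolerating its absence (if_exists=True), i.e. what
    # DelPortCommand(port=tap887af58f-85, bridge=br-int) did above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap887af58f-85', bridge='br-int',
                 if_exists=True).execute(check_error=True)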
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.388 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:49:34</nova:creationTime>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 12:49:34 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:49:34 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:49:34 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:49:34 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:49:34 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
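After the unplug, nova rewrote the domain's <nova:instance> metadata so only the remaining port is listed. A sketch of how such a block is pushed into libvirt (the key string and flag combination follow common usage and are assumptions; the XML is abbreviated):

    # Sketch, assuming libvirt-python: replace the nova-namespace metadata
    # element on a domain, as guest.py's set_metadata() logs above.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = '<instance xmlns="%s">...</instance>' % NOVA_NS  # abbreviated

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000079')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)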
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.421 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.421 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : haproxy version is 2.8.14-c23fe91
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : path to executable is /usr/sbin/haproxy
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [WARNING]  (354688) : Exiting Master process...
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [WARNING]  (354688) : Exiting Master process...
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [ALERT]    (354688) : Current worker (354690) exited with code 143 (Terminated)
Feb 25 12:49:34 compute-0 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [WARNING]  (354688) : All workers exited. Exiting... (0)
Feb 25 12:49:34 compute-0 systemd[1]: libpod-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope: Deactivated successfully.
Feb 25 12:49:34 compute-0 conmon[354684]: conmon cac817ebfb80f8215941 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope/container/memory.events
Feb 25 12:49:34 compute-0 podman[354721]: 2026-02-25 12:49:34.502140773 +0000 UTC m=+0.050266190 container died cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52-userdata-shm.mount: Deactivated successfully.
Feb 25 12:49:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 12:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5ae28e2e538a9b1e2fa1fd3ba97baafc40136904659ba2baf74eab449664c35-merged.mount: Deactivated successfully.
Feb 25 12:49:34 compute-0 podman[354721]: 2026-02-25 12:49:34.548561514 +0000 UTC m=+0.096686891 container cleanup cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 12:49:34 compute-0 systemd[1]: libpod-conmon-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope: Deactivated successfully.
Feb 25 12:49:34 compute-0 podman[354752]: 2026-02-25 12:49:34.61571417 +0000 UTC m=+0.046412892 container remove cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.620 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82a3780b-296a-43a1-8504-662bf67b7f36]: (4, ('Wed Feb 25 12:49:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e (cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52)\ncac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52\nWed Feb 25 12:49:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e (cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52)\ncac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
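The privsep reply above carries the output of a helper that stops and then deletes the haproxy container; podman's "container died" and "container remove" events follow. Reduced to the underlying CLI calls (the container name is from the log; driving them via subprocess is illustrative):

    # Sketch: the stop-then-delete sequence reported by the privsep helper.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e'
    subprocess.run(['podman', 'stop', name], check=True)  # worker exits 143 (SIGTERM)
    subprocess.run(['podman', 'rm', name], check=True)    # "container remove" event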
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45c9f911-6cb6-43ec-8454-8a290ab2cd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8617723-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 kernel: tapf8617723-80: left promiscuous mode
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[290186bf-c020-457f-bfc3-0220e1dd54cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55faf56c-c783-410f-9f50-ed60208fa5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8d57e5-33ab-4af8-bde3-eb733ebb4801]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.672 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec846bd4-6140-4261-ab01-a578c5704fff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574187, 'reachable_time': 30851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354767, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.674 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:49:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.675 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1a69a6-bca2-4cf4-9cd3-7b211bdf66df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:34 compute-0 systemd[1]: run-netns-ovnmeta\x2df8617723\x2d856d\x2d4eb3\x2dab8d\x2d62fd3cacab7e.mount: Deactivated successfully.
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.750 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.750 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 WARNING nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 WARNING nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.
Feb 25 12:49:34 compute-0 nova_compute[244014]: 2026-02-25 12:49:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:35 compute-0 nova_compute[244014]: 2026-02-25 12:49:35.051 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:35 compute-0 nova_compute[244014]: 2026-02-25 12:49:35.052 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:35 compute-0 nova_compute[244014]: 2026-02-25 12:49:35.052 244018 DEBUG nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:49:35 compute-0 nova_compute[244014]: 2026-02-25 12:49:35.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:35 compute-0 sshd-session[354768]: Invalid user sol from 80.94.92.186 port 44466
Feb 25 12:49:35 compute-0 sshd-session[354768]: Connection closed by invalid user sol 80.94.92.186 port 44466 [preauth]
Feb 25 12:49:36 compute-0 ceph-mon[76335]: pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.243 244018 INFO nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.243 244018 DEBUG nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.285 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.335 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:36 compute-0 ovn_controller[147040]: 2026-02-25T12:49:36Z|01289|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 12:49:36 compute-0 ovn_controller[147040]: 2026-02-25T12:49:36Z|01290|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 12:49:36 compute-0 ovn_controller[147040]: 2026-02-25T12:49:36Z|01291|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.818 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 WARNING nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-deleted-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:36 compute-0 nova_compute[244014]: 2026-02-25 12:49:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.compute.manager [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.compute.manager [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.342 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.343 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.343 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.344 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.344 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.347 244018 INFO nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Terminating instance
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.349 244018 DEBUG nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:49:37 compute-0 kernel: tapcda27370-85 (unregistering): left promiscuous mode
Feb 25 12:49:37 compute-0 NetworkManager[49836]: <info>  [1772023777.4061] device (tapcda27370-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:49:37 compute-0 ovn_controller[147040]: 2026-02-25T12:49:37Z|01292|binding|INFO|Releasing lport cda27370-858e-4443-819e-696576515c52 from this chassis (sb_readonly=0)
Feb 25 12:49:37 compute-0 ovn_controller[147040]: 2026-02-25T12:49:37Z|01293|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 down in Southbound
Feb 25 12:49:37 compute-0 ovn_controller[147040]: 2026-02-25T12:49:37Z|01294|binding|INFO|Removing iface tapcda27370-85 ovn-installed in OVS
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.425 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:74:9c 10.100.0.7'], port_security=['fa:16:3e:50:74:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b302251f-a239-4374-92d3-7686a49e9d67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9170f76a-1c5e-4379-83f2-194a69c3afae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa6e9e5e-6b6d-43d1-bea6-b8dba8200d28, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cda27370-858e-4443-819e-696576515c52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.427 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cda27370-858e-4443-819e-696576515c52 in datapath b302251f-a239-4374-92d3-7686a49e9d67 unbound from our chassis
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.428 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b302251f-a239-4374-92d3-7686a49e9d67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cf9abf-e113-4927-80d3-5e488f6fa2d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 namespace which is not needed anymore
Feb 25 12:49:37 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Feb 25 12:49:37 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 12.614s CPU time.
Feb 25 12:49:37 compute-0 systemd-machined[210048]: Machine qemu-153-instance-00000079 terminated.
Feb 25 12:49:37 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : haproxy version is 2.8.14-c23fe91
Feb 25 12:49:37 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : path to executable is /usr/sbin/haproxy
Feb 25 12:49:37 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [WARNING]  (353342) : Exiting Master process...
Feb 25 12:49:37 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [ALERT]    (353342) : Current worker (353344) exited with code 143 (Terminated)
Feb 25 12:49:37 compute-0 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [WARNING]  (353342) : All workers exited. Exiting... (0)
Feb 25 12:49:37 compute-0 systemd[1]: libpod-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope: Deactivated successfully.
Feb 25 12:49:37 compute-0 podman[354793]: 2026-02-25 12:49:37.555588613 +0000 UTC m=+0.041132093 container died 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 12:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704-userdata-shm.mount: Deactivated successfully.
Feb 25 12:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-51fe52d468e2be311accba8895564be948c5e431928c89066ff8445171f1806f-merged.mount: Deactivated successfully.
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 podman[354793]: 2026-02-25 12:49:37.599018389 +0000 UTC m=+0.084561859 container cleanup 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.598 244018 INFO nova.virt.libvirt.driver [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance destroyed successfully.
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.599 244018 DEBUG nova.objects.instance [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:37 compute-0 systemd[1]: libpod-conmon-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope: Deactivated successfully.
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.614 244018 DEBUG nova.virt.libvirt.vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.615 244018 DEBUG nova.network.os_vif_util [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.615 244018 DEBUG nova.network.os_vif_util [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.617 244018 DEBUG os_vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.620 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcda27370-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.626 244018 INFO os_vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85')
Feb 25 12:49:37 compute-0 podman[354833]: 2026-02-25 12:49:37.653909059 +0000 UTC m=+0.036381798 container remove 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4301166d-3518-432c-8554-0609133ff0a4]: (4, ('Wed Feb 25 12:49:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 (5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704)\n5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704\nWed Feb 25 12:49:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 (5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704)\n5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa4edee-0c60-4609-8b7e-f0e82007d551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb302251f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 kernel: tapb302251f-a0: left promiscuous mode
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41ac133b-723f-4f59-a99e-f049a0c8d88b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.685 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20465ee8-3ef2-4663-88a3-d5358842b775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[10b7123e-2ce1-4e0f-8c36-f00daf7ff6e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.697 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13633d11-0f51-44b5-9571-98be906cdc07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571476, 'reachable_time': 20180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354866, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 systemd[1]: run-netns-ovnmeta\x2db302251f\x2da239\x2d4374\x2d92d3\x2d7686a49e9d67.mount: Deactivated successfully.
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.700 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:49:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.700 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1e3f64-668a-4e19-af0b-4d90ceb59d6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.851 244018 INFO nova.virt.libvirt.driver [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deleting instance files /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943_del
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.851 244018 INFO nova.virt.libvirt.driver [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deletion of /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943_del complete
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.922 244018 INFO nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.923 244018 DEBUG oslo.service.loopingcall [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.923 244018 DEBUG nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:49:37 compute-0 nova_compute[244014]: 2026-02-25 12:49:37.924 244018 DEBUG nova.network.neutron [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:49:38 compute-0 ceph-mon[76335]: pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 12:49:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 80 op/s
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.577 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.577 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.610 244018 DEBUG nova.network.neutron [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.612 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.626 244018 INFO nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 0.70 seconds to deallocate network for instance.
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.665 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.665 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.721 244018 DEBUG oslo_concurrency.processutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.903 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.904 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.904 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 WARNING nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state deleted and task_state None.
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.906 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.906 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.907 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.907 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 WARNING nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state deleted and task_state None.
Feb 25 12:49:38 compute-0 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-deleted-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
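Annotation: the warning pair above ("Received unexpected event network-vif-unplugged/-plugged ... vm_state deleted") is benign. Neutron delivers port events for cda27370-858e-4443-819e-696576515c52 after instance 37ce1876 has already been torn down, so no waiter is registered under the instance's event lock and the manager just logs and drops them. The underlying pattern is a keyed one-shot event table; a toy illustration follows (hypothetical helper, not nova's InstanceEvents class):

    import threading
    from collections import defaultdict

    class ToyInstanceEvents:
        """Pop-or-warn table keyed by (instance uuid, event tag)."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)   # uuid -> tag -> Event

        def prepare(self, uuid, tag):
            ev = threading.Event()
            with self._lock:
                self._waiters[uuid][tag] = ev
            return ev                            # caller blocks on ev.wait()

        def pop(self, uuid, tag):
            with self._lock:
                ev = self._waiters.get(uuid, {}).pop(tag, None)
            if ev is None:
                # Late event for a deleted instance: log and drop, as above.
                print(f"unexpected event {tag} for {uuid}")
            else:
                ev.set()                         # wake whoever was waiting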
Feb 25 12:49:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:49:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844278572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.422 244018 DEBUG oslo_concurrency.processutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
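Annotation: the resource tracker refreshes Ceph capacity by shelling out to exactly the command logged above, ceph df as client.openstack, and parsing the JSON. A minimal sketch of that polling pattern (the stats field names below are from the ceph df JSON format and assumed stable):

    import json
    import subprocess

    def ceph_capacity_bytes(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same invocation the resource tracker logs above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            timeout=30,
        )
        stats = json.loads(out)["stats"]
        # Cluster-wide totals; nova derives its DISK_GB inventory from these.
        return stats["total_bytes"], stats["total_avail_bytes"]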
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.428 244018 DEBUG nova.compute.provider_tree [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.446 244018 DEBUG nova.scheduler.client.report [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
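Annotation: the inventory dict above is what Placement uses to bound allocations; usable capacity per resource class is (total - reserved) * allocation_ratio. With the values logged here that works out to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # Placement's usable-capacity rule: (total - reserved) * allocation_ratio
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2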
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.469 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.496 244018 INFO nova.scheduler.client.report [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943
Feb 25 12:49:39 compute-0 nova_compute[244014]: 2026-02-25 12:49:39.585 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:40 compute-0 ceph-mon[76335]: pgmap v2072: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 80 op/s
Feb 25 12:49:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/844278572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:40 compute-0 nova_compute[244014]: 2026-02-25 12:49:40.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:40 compute-0 ovn_controller[147040]: 2026-02-25T12:49:40Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:cc:74 10.100.0.9
Feb 25 12:49:40 compute-0 ovn_controller[147040]: 2026-02-25T12:49:40Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:cc:74 10.100.0.9
Feb 25 12:49:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 70 op/s
Feb 25 12:49:42 compute-0 ceph-mon[76335]: pgmap v2073: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 70 op/s
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 223 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000697193756197908 of space, bias 1.0, pg target 0.2091581268593724 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494054403576432 of space, bias 1.0, pg target 0.7482163210729297 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3979243418663797e-06 of space, bias 4.0, pg target 0.0016775092102396555 quantized to 16 (current 16)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:49:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
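Annotation: every _maybe_adjust pass above prices a pool the same way: its fraction of raw capacity, times its bias, times the cluster's PG budget, then quantized to a power of two (pg_num is only actually changed when the target is off by roughly the autoscaler's threshold factor, 3x by default, which is why all these tiny targets stay at their current values). The 64411926528 in each effective_target_ratio line is the ~60 GiB of raw space, and the logged targets reproduce exactly if the budget is 300 PGs, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (that budget is an inference from the numbers, not stated in the log):

    # Reproduce the "pg target" values logged above.
    BUDGET = 300   # assumed: 3 OSDs * mon_target_pg_per_osd (100)
    pools = {                 # pool: (fraction of raw space used, bias)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.000697193756197908,  1.0),
        "images":             (0.002494054403576432,  1.0),
        "cephfs.cephfs.meta": (1.3979243418663797e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * BUDGET)
    # .mgr 0.0021557..., vms 0.2091581..., images 0.7482163..., meta 0.0016775...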
Feb 25 12:49:42 compute-0 nova_compute[244014]: 2026-02-25 12:49:42.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:43 compute-0 nova_compute[244014]: 2026-02-25 12:49:43.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:49:44 compute-0 ovn_controller[147040]: 2026-02-25T12:49:44Z|01295|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 12:49:44 compute-0 ovn_controller[147040]: 2026-02-25T12:49:44Z|01296|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 12:49:44 compute-0 nova_compute[244014]: 2026-02-25 12:49:44.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:44 compute-0 ceph-mon[76335]: pgmap v2074: 305 pgs: 305 active+clean; 223 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 12:49:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Feb 25 12:49:45 compute-0 nova_compute[244014]: 2026-02-25 12:49:45.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:45.240 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:49:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:45.241 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:49:45 compute-0 nova_compute[244014]: 2026-02-25 12:49:45.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:46 compute-0 ceph-mon[76335]: pgmap v2075: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Feb 25 12:49:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:49:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:49:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:49:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:49:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
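Annotation: note the second client here (192.168.122.10, the controller side) pairing df with osd pool get-quota on the volumes pool. A capacity reporter has to clamp cluster-wide free space to any per-pool quota, or it advertises space the pool may not use. A sketch of the quota half, same JSON-over-CLI pattern as the ceph df call above (quota_max_bytes of 0 means no quota is set):

    import json
    import subprocess

    def pool_quota_bytes(pool, user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "osd", "pool", "get-quota", pool,
             "--format=json", "--id", user, "--conf", conf])
        q = json.loads(out)["quota_max_bytes"]
        return None if q == 0 else q   # None == unlimited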
Feb 25 12:49:47 compute-0 nova_compute[244014]: 2026-02-25 12:49:47.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:48 compute-0 nova_compute[244014]: 2026-02-25 12:49:48.063 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:48 compute-0 ceph-mon[76335]: pgmap v2076: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:49:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:49:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:49:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:49:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:50 compute-0 ceph-mon[76335]: pgmap v2077: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:49:50 compute-0 nova_compute[244014]: 2026-02-25 12:49:50.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 12:49:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:52.243 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
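Annotation: this DbSetCommand closes the handshake that started at 12:49:45, where SB_Global nb_cfg went 36 -> 37 and the agent logged "Delaying updating chassis table for 7 seconds". The agent acknowledges the new nb_cfg by writing it into its Chassis_Private external_ids, after a randomized delay so many chassis do not all hit the southbound DB in the same instant. A minimal sketch of that pattern with ovsdbapp (connection setup elided; sb_api is assumed to be an ovsdbapp southbound API handle):

    import random
    import time

    def ack_nb_cfg(sb_api, chassis_uuid, nb_cfg, max_delay=10):
        # Spread the writes out, same idea as the 7-second delay logged above.
        time.sleep(random.uniform(0, max_delay))
        sb_api.db_set(
            "Chassis_Private", chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)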
Feb 25 12:49:52 compute-0 ceph-mon[76335]: pgmap v2078: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 12:49:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 12:49:52 compute-0 nova_compute[244014]: 2026-02-25 12:49:52.595 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023777.5943918, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:49:52 compute-0 nova_compute[244014]: 2026-02-25 12:49:52.596 244018 INFO nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Stopped (Lifecycle Event)
Feb 25 12:49:52 compute-0 nova_compute[244014]: 2026-02-25 12:49:52.618 244018 DEBUG nova.compute.manager [None req-c8527e5e-5089-4cbd-8e49-f0673f82ef61 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:49:52 compute-0 nova_compute[244014]: 2026-02-25 12:49:52.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.366 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.366 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.390 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.468 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.469 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.477 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.477 244018 INFO nova.compute.claims [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:53 compute-0 nova_compute[244014]: 2026-02-25 12:49:53.609 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:49:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760296266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.205 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.212 244018 DEBUG nova.compute.provider_tree [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.239 244018 DEBUG nova.scheduler.client.report [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.283 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.284 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:49:54 compute-0 ceph-mon[76335]: pgmap v2079: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 12:49:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2760296266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.385 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.386 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.437 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.521 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:49:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 409 KiB/s wr, 20 op/s
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.742 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.744 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.744 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating image(s)
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.775 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.810 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.840 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.843 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.939 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
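Annotation: the qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap, so a malformed or hostile image cannot make qemu-img exhaust the host; --force-share lets it inspect an image another process may hold open. The same guard can be reproduced with plain resource limits in a pre-exec hook (a sketch of the idea, not nova's code):

    import json
    import os
    import resource
    import subprocess

    def limited_qemu_img_info(path, as_bytes=1 << 30, cpu_secs=30):
        def _cap():
            # Applied in the child between fork and exec, like prlimit does.
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            preexec_fn=_cap,
            env=dict(os.environ, LC_ALL="C", LANG="C"),  # stable parse locale
        )
        return json.loads(out)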
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.940 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.941 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.942 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.973 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:49:54 compute-0 nova_compute[244014]: 2026-02-25 12:49:54.978 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 828def8a-01dd-4845-98b3-1516060251a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.024 244018 DEBUG nova.policy [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.270 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 828def8a-01dd-4845-98b3-1516060251a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.335 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
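Annotation: this is nova's Ceph-backed image path in miniature: the cached base file is pushed into the vms pool as <uuid>_disk (format 2, so it supports layering and cloning), then grown to the flavor's root disk size, 1073741824 bytes = 1 GiB here. A sketch of the same two steps via the CLI (nova runs rbd import exactly as logged but performs the resize through the rbd Python bindings; the byte suffix on --size assumes a reasonably recent rbd CLI):

    import subprocess

    def import_and_resize(base, image, size_bytes,
                          pool="vms", user="openstack",
                          conf="/etc/ceph/ceph.conf"):
        common = ["--id", user, "--conf", conf]
        # Step 1: upload the flat base file as a format-2 RBD image.
        subprocess.check_call(
            ["rbd", "import", "--pool", pool, base, image,
             "--image-format=2", *common])
        # Step 2: grow it to the flavor's root disk size.
        subprocess.check_call(
            ["rbd", "resize", "--pool", pool, "--size",
             f"{size_bytes}B", image, *common])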
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.426 244018 DEBUG nova.objects.instance [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.447 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.448 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Ensure instance console log exists: /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.448 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.449 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:49:55 compute-0 nova_compute[244014]: 2026-02-25 12:49:55.449 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:49:56 compute-0 ceph-mon[76335]: pgmap v2080: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 409 KiB/s wr, 20 op/s
Feb 25 12:49:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:49:56 compute-0 nova_compute[244014]: 2026-02-25 12:49:56.575 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully created port: 99fbecbd-1815-4bf9-8e7f-6f114847f46f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:57 compute-0 nova_compute[244014]: 2026-02-25 12:49:57.190 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully created port: 94cae4f8-cc4b-443a-b22b-36ef77438ede _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:49:57 compute-0 nova_compute[244014]: 2026-02-25 12:49:57.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.094 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully updated port: 99fbecbd-1815-4bf9-8e7f-6f114847f46f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG nova.compute.manager [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG nova.compute.manager [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.222 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.222 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:49:58 compute-0 ceph-mon[76335]: pgmap v2081: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:49:58 compute-0 nova_compute[244014]: 2026-02-25 12:49:58.417 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:49:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:49:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:49:59 compute-0 nova_compute[244014]: 2026-02-25 12:49:59.079 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:49:59 compute-0 nova_compute[244014]: 2026-02-25 12:49:59.100 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:00 compute-0 ceph-mon[76335]: pgmap v2082: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.372 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully updated port: 94cae4f8-cc4b-443a-b22b-36ef77438ede _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.406 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.407 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.407 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.470 244018 DEBUG nova.compute.manager [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.471 244018 DEBUG nova.compute.manager [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-94cae4f8-cc4b-443a-b22b-36ef77438ede. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.472 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:00 compute-0 nova_compute[244014]: 2026-02-25 12:50:00.607 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:02 compute-0 ceph-mon[76335]: pgmap v2083: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.350 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.351 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.384 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.463 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.464 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.469 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.469 244018 INFO nova.compute.claims [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:50:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.636 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:02 compute-0 podman[355078]: 2026-02-25 12:50:02.746732613 +0000 UTC m=+0.078299821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 12:50:02 compute-0 podman[355080]: 2026-02-25 12:50:02.786750903 +0000 UTC m=+0.115649376 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.930 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.969 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.970 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance network_info: |[{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
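
Note: the network_info blob above is nova's cached view of the instance's two Neutron ports. Pulling the fixed addresses out of that structure is straightforward; a sketch against a pared-down copy of the logged data:

    # Sketch: extract fixed IPs from a network_info structure (trimmed from the log).
    network_info = [
        {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f",
         "network": {"subnets": [{"ips": [{"address": "10.100.0.3", "type": "fixed"}]}]}},
        {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede",
         "network": {"subnets": [{"ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed"}]}]}},
    ]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], ip["address"])
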
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.972 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.973 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 94cae4f8-cc4b-443a-b22b-36ef77438ede _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.980 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start _get_guest_xml network_info=[{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.989 244018 WARNING nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.995 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:50:02 compute-0 nova_compute[244014]: 2026-02-25 12:50:02.996 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.006 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.007 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
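
Note: the two probes above first look for a CPU controller on cgroups v1 and, failing that, fall back to cgroups v2, where the available controllers are listed in cgroup.controllers. A minimal sketch of the v2 half of that check, assuming the unified hierarchy is mounted at /sys/fs/cgroup (nova's actual probe in nova/virt/libvirt/host.py may go through other interfaces; this is an illustration, not its implementation):

    # Sketch: is the cgroups v2 "cpu" controller available on this host?
    # Assumes the unified hierarchy is mounted at /sys/fs/cgroup.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # no cgroups v2 unified mount here
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log line above
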
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.008 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.009 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.010 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.010 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.011 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.012 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.012 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.013 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.013 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.014 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.014 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.015 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
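
Note: with a 1-vCPU flavor and no topology constraints from flavor or image (all limits logged as 0:0:0), the only factorization of sockets x cores x threads that yields 1 vCPU is 1:1:1, hence the single candidate. A small sketch of the enumeration under those assumptions:

    # Sketch: enumerate CPU topologies whose product equals the vCPU count,
    # mirroring the "Build topologies ... Got 1 possible topologies" lines above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
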
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.021 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622933079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.186 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
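
Note: nova shells out to "ceph df --format=json" and reads the cluster totals from the JSON to report RBD-backed disk capacity. A minimal sketch of that round trip; the "stats" / "total_bytes" / "total_avail_bytes" key names are assumptions about the ceph df JSON layout, since the payload itself is not shown in this log:

    # Sketch: fetch and parse "ceph df --format=json" (key names assumed).
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)  # ~59 of 60 GiB, per the pgmap line
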
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.205 244018 DEBUG nova.compute.provider_tree [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.232 244018 DEBUG nova.scheduler.client.report [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
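
Note: placement treats each inventory's schedulable capacity as (total - reserved) * allocation_ratio, so the unchanged inventory above implies 32 schedulable VCPUs, 7167 MB of RAM, and 52 GB of disk. A quick check of that arithmetic (the formula is standard placement behaviour, not shown in the log itself):

    # Sketch: capacity implied by the inventory dict logged above,
    # using placement's capacity formula (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"]))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 52
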
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.279 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.280 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:50:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1622933079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.344 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.345 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.368 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.389 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.480 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.482 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.483 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating image(s)
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.514 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.547 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827062943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.582 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.587 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.612 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.645 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.650 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.700 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
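
Note: the qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, which caps the child's address space (1 GiB) and CPU time (30 s) before exec'ing qemu-img, so a hostile image can't exhaust the compute host while being inspected. A sketch of invoking the same wrapper directly, reusing the exact command and path from the log:

    # Sketch: qemu-img info under oslo.concurrency's prlimit wrapper,
    # command and image path copied from the logged invocation.
    import json
    import subprocess

    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
           "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
           "--force-share", "--output=json"]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])  # standard qemu-img info JSON keys
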
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.701 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.702 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.703 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.735 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.741 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d6cf21ec-717e-41f7-9351-2214b43ce275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:03 compute-0 nova_compute[244014]: 2026-02-25 12:50:03.913 244018 DEBUG nova.policy [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
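
Note: the failed network:attach_external_network check is an oslo.policy decision; the request credentials carry only the reader and member roles, and the rule normally requires an admin. A hedged sketch of the same decision with oslo.policy, where the "role:admin" check string is assumed for illustration (nova's real default lives in its policy definitions):

    # Sketch: reproduce the policy decision with oslo.policy ("role:admin" assumed).
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.ConfigOpts())
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "e227b91c24404ab5aed600e2fe792d32"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
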
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.069 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d6cf21ec-717e-41f7-9351-2214b43ce275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.133 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
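
Note: the resize target is simply the flavor's root disk size in bytes: m1.nano has root_gb=1, and 1 GiB = 1024**3 = 1073741824 bytes, matching the logged value.

    # Sketch: the logged resize target is flavor.root_gb converted to bytes.
    from oslo_utils import units

    root_gb = 1                # m1.nano flavor, root_gb=1
    print(root_gb * units.Gi)  # 1073741824
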
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.218 244018 DEBUG nova.objects.instance [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173344239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.231 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.231 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Ensure instance console log exists: /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.240 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.241 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.242 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.243 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
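
Note: the conversion step turns nova's network_info dict into an os-vif versioned object that the libvirt vif driver consumes. A sketch of constructing the same object directly with the os-vif library, under the assumption that instantiating it standalone like this is supported; field values are copied from the "Converted object" line above:

    # Sketch: build the os-vif object from the "Converted object" line (assumed usage).
    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads plugins and registers the object model
    vif = vif_obj.VIFOpenVSwitch(
        id="94cae4f8-cc4b-443a-b22b-36ef77438ede",
        address="fa:16:3e:ec:bf:12",
        bridge_name="br-int",
        vif_name="tap94cae4f8-cc",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    print(vif.vif_name)  # tap94cae4f8-cc
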
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.245 244018 DEBUG nova.objects.instance [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.258 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <uuid>828def8a-01dd-4845-98b3-1516060251a0</uuid>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <name>instance-0000007b</name>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1153433119</nova:name>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:50:02</nova:creationTime>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:port uuid="99fbecbd-1815-4bf9-8e7f-6f114847f46f">
Feb 25 12:50:04 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <nova:port uuid="94cae4f8-cc4b-443a-b22b-36ef77438ede">
Feb 25 12:50:04 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feec:bf12" ipVersion="6"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <system>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="serial">828def8a-01dd-4845-98b3-1516060251a0</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="uuid">828def8a-01dd-4845-98b3-1516060251a0</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </system>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <os>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </os>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <features>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </features>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/828def8a-01dd-4845-98b3-1516060251a0_disk">
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/828def8a-01dd-4845-98b3-1516060251a0_disk.config">
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:04 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:7e:94:3d"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <target dev="tap99fbecbd-18"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ec:bf:12"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <target dev="tap94cae4f8-cc"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/console.log" append="off"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <video>
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </video>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:50:04 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:50:04 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:50:04 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:50:04 compute-0 nova_compute[244014]: </domain>
Feb 25 12:50:04 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Preparing to wait for external event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Preparing to wait for external event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.262 244018 DEBUG os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99fbecbd-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99fbecbd-18, col_values=(('external_ids', {'iface-id': '99fbecbd-1815-4bf9-8e7f-6f114847f46f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:94:3d', 'vm-uuid': '828def8a-01dd-4845-98b3-1516060251a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 NetworkManager[49836]: <info>  [1772023804.2688] manager: (tap99fbecbd-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.270 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.273 244018 INFO os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18')
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94cae4f8-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.279 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94cae4f8-cc, col_values=(('external_ids', {'iface-id': '94cae4f8-cc4b-443a-b22b-36ef77438ede', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:bf:12', 'vm-uuid': '828def8a-01dd-4845-98b3-1516060251a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:04 compute-0 NetworkManager[49836]: <info>  [1772023804.2815] manager: (tap94cae4f8-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.286 244018 INFO os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc')
Feb 25 12:50:04 compute-0 ceph-mon[76335]: pgmap v2084: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2827062943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3173344239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.347 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:7e:94:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.349 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:ec:bf:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.349 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Using config drive
Feb 25 12:50:04 compute-0 nova_compute[244014]: 2026-02-25 12:50:04.370 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.084 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating config drive at /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.091 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknv5lugx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.234 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknv5lugx" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.266 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.271 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config 828def8a-01dd-4845-98b3-1516060251a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.402 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config 828def8a-01dd-4845-98b3-1516060251a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.403 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deleting local config drive /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config because it was imported into RBD.
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.4568] manager: (tap99fbecbd-18): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Feb 25 12:50:05 compute-0 kernel: tap99fbecbd-18: entered promiscuous mode
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01297|binding|INFO|Claiming lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f for this chassis.
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01298|binding|INFO|99fbecbd-1815-4bf9-8e7f-6f114847f46f: Claiming fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.470 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:94:3d 10.100.0.3'], port_security=['fa:16:3e:7e:94:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=99fbecbd-1815-4bf9-8e7f-6f114847f46f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.4718] manager: (tap94cae4f8-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.472 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbecbd-1815-4bf9-8e7f-6f114847f46f in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc bound to our chassis
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.473 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 12:50:05 compute-0 kernel: tap94cae4f8-cc: entered promiscuous mode
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01299|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f ovn-installed in OVS
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01300|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f up in Southbound
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01301|if_status|INFO|Dropped 5 log messages in last 126 seconds (most recently, 126 seconds ago) due to excessive rate
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01302|if_status|INFO|Not updating pb chassis for 94cae4f8-cc4b-443a-b22b-36ef77438ede now as sb is readonly
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01303|binding|INFO|Claiming lport 94cae4f8-cc4b-443a-b22b-36ef77438ede for this chassis.
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01304|binding|INFO|94cae4f8-cc4b-443a-b22b-36ef77438ede: Claiming fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.488 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], port_security=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:bf12/64', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=94cae4f8-cc4b-443a-b22b-36ef77438ede) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01305|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede up in Southbound
Feb 25 12:50:05 compute-0 ovn_controller[147040]: 2026-02-25T12:50:05Z|01306|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede ovn-installed in OVS
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0185f2-8c52-4e4f-afb7-d711652db354]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 systemd-udevd[355452]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:50:05 compute-0 systemd-udevd[355451]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:50:05 compute-0 systemd-machined[210048]: New machine qemu-155-instance-0000007b.
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.5085] device (tap99fbecbd-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.5099] device (tap99fbecbd-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.5112] device (tap94cae4f8-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:50:05 compute-0 NetworkManager[49836]: <info>  [1772023805.5123] device (tap94cae4f8-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:50:05 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[262ae0af-ae0d-4e83-909e-ee16d476faf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.529 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[07b26c99-65cb-481a-a598-4896a6e2555e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a20e398c-6639-4d3b-9159-822b4509754e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c83fc8-8c9d-4dea-8e26-6cd8361bc330]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355465, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bebb6b-f1a5-4f60-abf2-1e4c0922e34d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573500, 'tstamp': 573500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355467, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573502, 'tstamp': 573502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355467, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.590 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 94cae4f8-cc4b-443a-b22b-36ef77438ede in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.601 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0fb45f-3cbb-4387-bbfa-c76fedc86f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e7a98-74f4-4695-ba36-41b2a824df1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.622 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce95209-39c0-49e7-8899-7caf69d4b8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e9adfd05-cc90-4990-860a-64d9420743fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57b396d4-a789-4a6c-aa07-27d8dbb23cfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355474, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fc671d-3c0c-4b98-87fc-cbe61df84358]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeb832dde-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573573, 'tstamp': 573573}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355475, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.669 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG nova.compute.manager [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.677 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.677 244018 DEBUG nova.compute.manager [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Processing event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.727 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 94cae4f8-cc4b-443a-b22b-36ef77438ede. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.727 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.747 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.837 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023805.8363304, 828def8a-01dd-4845-98b3-1516060251a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Started (Lifecycle Event)
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.862 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.867 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023805.836876, 828def8a-01dd-4845-98b3-1516060251a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.868 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Paused (Lifecycle Event)
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.891 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.894 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.913 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Successfully created port: 33f0e898-9477-416a-9ae6-268ef8e71ee3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:50:05 compute-0 nova_compute[244014]: 2026-02-25 12:50:05.919 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:06 compute-0 ceph-mon[76335]: pgmap v2085: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.129 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Successfully updated port: 33f0e898-9477-416a-9ae6-268ef8e71ee3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.147 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.148 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.148 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.233 244018 DEBUG nova.compute.manager [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.234 244018 DEBUG nova.compute.manager [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.234 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.317 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.781 244018 DEBUG nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.782 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No event matching network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede in dict_keys([('network-vif-plugged', '99fbecbd-1815-4bf9-8e7f-6f114847f46f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 25 12:50:07 compute-0 nova_compute[244014]: 2026-02-25 12:50:07.784 244018 WARNING nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with vm_state building and task_state spawning.
Feb 25 12:50:08 compute-0 ceph-mon[76335]: pgmap v2086: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:50:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Feb 25 12:50:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.931 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.968 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.969 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance network_info: |[{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.970 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.971 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.977 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start _get_guest_xml network_info=[{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.983 244018 WARNING nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.989 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.990 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.993 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.994 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.995 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.995 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.998 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.998 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.999 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:50:08 compute-0 nova_compute[244014]: 2026-02-25 12:50:08.999 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:50:09 compute-0 nova_compute[244014]: 2026-02-25 12:50:09.004 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:09 compute-0 nova_compute[244014]: 2026-02-25 12:50:09.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861968286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:09 compute-0 nova_compute[244014]: 2026-02-25 12:50:09.565 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:09 compute-0 nova_compute[244014]: 2026-02-25 12:50:09.596 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:09 compute-0 nova_compute[244014]: 2026-02-25 12:50:09.601 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.365 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.366 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.385 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992282661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:10 compute-0 ceph-mon[76335]: pgmap v2087: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Feb 25 12:50:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1861968286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.507 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.906s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.508 244018 DEBUG nova.virt.libvirt.vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:03Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.508 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.509 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.510 244018 DEBUG nova.objects.instance [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.647 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <uuid>d6cf21ec-717e-41f7-9351-2214b43ce275</uuid>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <name>instance-0000007c</name>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-563456365</nova:name>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:50:08</nova:creationTime>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <nova:port uuid="33f0e898-9477-416a-9ae6-268ef8e71ee3">
Feb 25 12:50:10 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <system>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="serial">d6cf21ec-717e-41f7-9351-2214b43ce275</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="uuid">d6cf21ec-717e-41f7-9351-2214b43ce275</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </system>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <os>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </os>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <features>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </features>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d6cf21ec-717e-41f7-9351-2214b43ce275_disk">
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config">
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:10 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8d:16:fb"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <target dev="tap33f0e898-94"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/console.log" append="off"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <video>
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </video>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:50:10 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:50:10 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:50:10 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:50:10 compute-0 nova_compute[244014]: </domain>
Feb 25 12:50:10 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.648 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Preparing to wait for external event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.648 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.649 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.649 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.650 244018 DEBUG nova.virt.libvirt.vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:03Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.650 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.651 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.652 244018 DEBUG os_vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.653 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.656 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33f0e898-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.657 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33f0e898-94, col_values=(('external_ids', {'iface-id': '33f0e898-9477-416a-9ae6-268ef8e71ee3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:16:fb', 'vm-uuid': 'd6cf21ec-717e-41f7-9351-2214b43ce275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:10 compute-0 NetworkManager[49836]: <info>  [1772023810.6610] manager: (tap33f0e898-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.667 244018 INFO os_vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94')
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:8d:16:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.729 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Using config drive
Feb 25 12:50:10 compute-0 nova_compute[244014]: 2026-02-25 12:50:10.758 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.229 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating config drive at /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.236 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1vioq7t4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.380 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1vioq7t4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.418 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.422 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3992282661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:11 compute-0 ceph-mon[76335]: pgmap v2088: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.564 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.566 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deleting local config drive /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config because it was imported into RBD.
Feb 25 12:50:11 compute-0 kernel: tap33f0e898-94: entered promiscuous mode
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.6285] manager: (tap33f0e898-94): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Feb 25 12:50:11 compute-0 ovn_controller[147040]: 2026-02-25T12:50:11Z|01307|binding|INFO|Claiming lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 for this chassis.
Feb 25 12:50:11 compute-0 ovn_controller[147040]: 2026-02-25T12:50:11Z|01308|binding|INFO|33f0e898-9477-416a-9ae6-268ef8e71ee3: Claiming fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 ovn_controller[147040]: 2026-02-25T12:50:11Z|01309|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 ovn-installed in OVS
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 ovn_controller[147040]: 2026-02-25T12:50:11Z|01310|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 up in Southbound
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.640 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:16:fb 10.100.0.8'], port_security=['fa:16:3e:8d:16:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd6cf21ec-717e-41f7-9351-2214b43ce275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7113265a-1d97-4a63-a30a-7677c464f652', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=33f0e898-9477-416a-9ae6-268ef8e71ee3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.643 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 33f0e898-9477-416a-9ae6-268ef8e71ee3 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed bound to our chassis
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.646 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[561015e8-fd6e-4b99-a6ed-e840a0340f08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.659 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap06dcd48c-d1 in ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.662 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap06dcd48c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.662 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa0abe-f26e-433a-959c-686a5d697ef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2f2acc-8bef-47fa-9f57-1c9dbdadfc5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 systemd-udevd[355655]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:50:11 compute-0 systemd-machined[210048]: New machine qemu-156-instance-0000007c.
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.6793] device (tap33f0e898-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.6810] device (tap33f0e898-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.683 244018 DEBUG nova.compute.manager [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.683 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a87702c7-025e-40fa-bf67-6c7a62e65e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.684 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.685 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.685 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.686 244018 DEBUG nova.compute.manager [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Processing event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.687 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:50:11 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.692 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023811.692569, 828def8a-01dd-4845-98b3-1516060251a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.693 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Resumed (Lifecycle Event)
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.696 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34b8dab4-9d29-41ff-8e6d-7b3c73a12152]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.702 244018 INFO nova.virt.libvirt.driver [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance spawned successfully.
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.703 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.716 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.722 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a02b2d-ddda-4e18-8e6f-b775239c792e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.729 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.7309] manager: (tap06dcd48c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/548)
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.730 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebe9faa-3247-4e29-b275-c9b394a1ec2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.733 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.734 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.736 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:11 compute-0 systemd-udevd[355658]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.764 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c2f4d-b561-4dcb-899e-cbf897752f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.767 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4276d6da-d610-43db-99aa-2ff1c30fea5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.789 244018 INFO nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 17.05 seconds to spawn the instance on the hypervisor.
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.789 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.7932] device (tap06dcd48c-d0): carrier: link connected
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.797 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0517a302-b02d-4e20-9ca0-a15bf912c0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69783a18-2442-4084-9cdf-918fc5ba6bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355689, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.827 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[077d4418-225c-4d67-bd70-e0dd8912e1eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:35ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578120, 'tstamp': 578120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355690, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.840 244018 INFO nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 18.40 seconds to build instance.
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35657669-e886-4b69-827a-6ad7a3184da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355691, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.863 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d45c6d0d-a038-4b5e-98e1-71dafccd6fac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.928 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dead495-577d-4af4-8c4e-288404e91e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 NetworkManager[49836]: <info>  [1772023811.9345] manager: (tap06dcd48c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Feb 25 12:50:11 compute-0 kernel: tap06dcd48c-d0: entered promiscuous mode
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 ovn_controller[147040]: 2026-02-25T12:50:11Z|01311|binding|INFO|Releasing lport 06afb3ba-d963-4735-ae78-91cfa95e52ff from this chassis (sb_readonly=0)
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.940 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29560145-a240-4bac-8e57-ad3426de35e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.942 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:50:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.943 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'env', 'PROCESS_TAG=haproxy-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/06dcd48c-d26b-4718-b4c7-9c2416698bed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:50:11 compute-0 nova_compute[244014]: 2026-02-25 12:50:11.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.122 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.1222494, d6cf21ec-717e-41f7-9351-2214b43ce275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Started (Lifecycle Event)
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.176 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.12639, d6cf21ec-717e-41f7-9351-2214b43ce275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.176 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Paused (Lifecycle Event)
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.224 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.230 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.277 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:12 compute-0 podman[355765]: 2026-02-25 12:50:12.319749321 +0000 UTC m=+0.058821702 container create a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:50:12 compute-0 systemd[1]: Started libpod-conmon-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope.
Feb 25 12:50:12 compute-0 podman[355765]: 2026-02-25 12:50:12.287661165 +0000 UTC m=+0.026733606 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:50:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66261fae152732234a66a0a4cefd1a9db46debf2297b95beda372509622fae68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:12 compute-0 podman[355765]: 2026-02-25 12:50:12.41637513 +0000 UTC m=+0.155447551 container init a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:50:12 compute-0 podman[355765]: 2026-02-25 12:50:12.423657375 +0000 UTC m=+0.162729766 container start a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:12 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : New worker (355786) forked
Feb 25 12:50:12 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : Loading success.
Feb 25 12:50:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.567 244018 DEBUG nova.compute.manager [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.568 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.570 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.570 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.571 244018 DEBUG nova.compute.manager [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Processing event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.572 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.576 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.5764813, d6cf21ec-717e-41f7-9351-2214b43ce275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Resumed (Lifecycle Event)
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.580 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.583 244018 INFO nova.virt.libvirt.driver [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance spawned successfully.
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.584 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.634 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.651 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.652 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.653 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.654 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.654 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.655 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.779 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.848 244018 INFO nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 9.37 seconds to spawn the instance on the hypervisor.
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.848 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.943 244018 INFO nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 10.52 seconds to build instance.
Feb 25 12:50:12 compute-0 nova_compute[244014]: 2026-02-25 12:50:12.974 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:13 compute-0 ceph-mon[76335]: pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.874 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.874 244018 DEBUG nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:13 compute-0 nova_compute[244014]: 2026-02-25 12:50:13.875 244018 WARNING nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with vm_state active and task_state None.
Feb 25 12:50:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.689 244018 DEBUG nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.690 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.691 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.691 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.692 244018 DEBUG nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] No waiting events found dispatching network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:14 compute-0 nova_compute[244014]: 2026-02-25 12:50:14.693 244018 WARNING nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received unexpected event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 for instance with vm_state active and task_state None.
Feb 25 12:50:15 compute-0 nova_compute[244014]: 2026-02-25 12:50:15.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:50:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.79 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6246 writes, 25K keys, 6246 commit groups, 1.0 writes per commit group, ingest: 29.52 MB, 0.05 MB/s
                                           Interval WAL: 6246 writes, 2436 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:50:15 compute-0 ceph-mon[76335]: pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:50:15 compute-0 nova_compute[244014]: 2026-02-25 12:50:15.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:50:16 compute-0 nova_compute[244014]: 2026-02-25 12:50:16.813 244018 DEBUG nova.compute.manager [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:16 compute-0 nova_compute[244014]: 2026-02-25 12:50:16.815 244018 DEBUG nova.compute.manager [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:16 compute-0 nova_compute[244014]: 2026-02-25 12:50:16.816 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:16 compute-0 nova_compute[244014]: 2026-02-25 12:50:16.817 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:16 compute-0 nova_compute[244014]: 2026-02-25 12:50:16.817 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:17 compute-0 ceph-mon[76335]: pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.135 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.136 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.169 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Feb 25 12:50:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.943 244018 DEBUG nova.compute.manager [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.944 244018 DEBUG nova.compute.manager [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.945 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.945 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:18 compute-0 nova_compute[244014]: 2026-02-25 12:50:18.946 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:50:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 40K writes, 157K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.80 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6865 writes, 28K keys, 6865 commit groups, 1.0 writes per commit group, ingest: 33.19 MB, 0.06 MB/s
                                           Interval WAL: 6865 writes, 2589 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:50:19 compute-0 ceph-mon[76335]: pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Feb 25 12:50:20 compute-0 nova_compute[244014]: 2026-02-25 12:50:20.332 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:20 compute-0 nova_compute[244014]: 2026-02-25 12:50:20.334 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:20 compute-0 nova_compute[244014]: 2026-02-25 12:50:20.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:20 compute-0 nova_compute[244014]: 2026-02-25 12:50:20.358 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 138 op/s
Feb 25 12:50:20 compute-0 nova_compute[244014]: 2026-02-25 12:50:20.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:21 compute-0 ceph-mon[76335]: pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 138 op/s
Feb 25 12:50:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Feb 25 12:50:23 compute-0 ovn_controller[147040]: 2026-02-25T12:50:23Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 12:50:23 compute-0 ovn_controller[147040]: 2026-02-25T12:50:23Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 12:50:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:23 compute-0 ceph-mon[76335]: pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Feb 25 12:50:24 compute-0 ovn_controller[147040]: 2026-02-25T12:50:24Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 12:50:24 compute-0 ovn_controller[147040]: 2026-02-25T12:50:24Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 12:50:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 147 op/s
Feb 25 12:50:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:50:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.6 total, 600.0 interval
                                           Cumulative writes: 31K writes, 127K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5791 writes, 22K keys, 5791 commit groups, 1.0 writes per commit group, ingest: 24.60 MB, 0.04 MB/s
                                           Interval WAL: 5791 writes, 2335 syncs, 2.48 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 12:50:25 compute-0 sudo[355796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:50:25 compute-0 sudo[355796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:25 compute-0 sudo[355796]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:25 compute-0 sudo[355821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:50:25 compute-0 sudo[355821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:25 compute-0 nova_compute[244014]: 2026-02-25 12:50:25.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:25 compute-0 sudo[355821]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:25 compute-0 nova_compute[244014]: 2026-02-25 12:50:25.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:25 compute-0 sudo[355877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:50:25 compute-0 sudo[355877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:25 compute-0 sudo[355877]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:25 compute-0 sudo[355902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 25 12:50:25 compute-0 sudo[355902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 147 op/s
Feb 25 12:50:26 compute-0 sudo[355902]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.183146) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826183189, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1182, "num_deletes": 251, "total_data_size": 1769908, "memory_usage": 1796352, "flush_reason": "Manual Compaction"}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826222800, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1752055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43854, "largest_seqno": 45035, "table_properties": {"data_size": 1746378, "index_size": 3072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12191, "raw_average_key_size": 19, "raw_value_size": 1735000, "raw_average_value_size": 2830, "num_data_blocks": 137, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023708, "oldest_key_time": 1772023708, "file_creation_time": 1772023826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 39729 microseconds, and 6780 cpu microseconds.
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.222870) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1752055 bytes OK
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.222897) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225049) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225080) EVENT_LOG_v1 {"time_micros": 1772023826225068, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1764479, prev total WAL file size 1807731, number of live WAL files 2.
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1710KB)], [101(8100KB)]
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826225930, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10046836, "oldest_snapshot_seqno": -1}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:50:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:50:26 compute-0 sudo[355945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:50:26 compute-0 sudo[355945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:26 compute-0 sudo[355945]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:26 compute-0 sudo[355970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:50:26 compute-0 sudo[355970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6604 keys, 8365998 bytes, temperature: kUnknown
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826363830, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8365998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8322986, "index_size": 25428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 171525, "raw_average_key_size": 25, "raw_value_size": 8205944, "raw_average_value_size": 1242, "num_data_blocks": 991, "num_entries": 6604, "num_filter_entries": 6604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.364071) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8365998 bytes
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.365520) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.8 rd, 60.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 7118, records dropped: 514 output_compression: NoCompression
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.365541) EVENT_LOG_v1 {"time_micros": 1772023826365531, "job": 60, "event": "compaction_finished", "compaction_time_micros": 137971, "compaction_time_cpu_micros": 29257, "output_level": 6, "num_output_files": 1, "total_output_size": 8365998, "num_input_records": 7118, "num_output_records": 6604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826365906, "job": 60, "event": "table_file_deletion", "file_number": 103}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826366987, "job": 60, "event": "table_file_deletion", "file_number": 101}
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:50:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.674309515 +0000 UTC m=+0.092257366 container create 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.616111482 +0000 UTC m=+0.034059333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:26 compute-0 systemd[1]: Started libpod-conmon-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope.
Feb 25 12:50:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.758601045 +0000 UTC m=+0.176548876 container init 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.764516272 +0000 UTC m=+0.182464083 container start 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:50:26 compute-0 wonderful_dewdney[356023]: 167 167
Feb 25 12:50:26 compute-0 systemd[1]: libpod-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope: Deactivated successfully.
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.768830494 +0000 UTC m=+0.186778305 container attach 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.769790601 +0000 UTC m=+0.187738432 container died 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-629f68d10826f564bf274a950859b155b0faab3fbb130a72593551d8a4b85aaa-merged.mount: Deactivated successfully.
Feb 25 12:50:26 compute-0 podman[356007]: 2026-02-25 12:50:26.83595177 +0000 UTC m=+0.253899581 container remove 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:50:26 compute-0 systemd[1]: libpod-conmon-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope: Deactivated successfully.
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.017118705 +0000 UTC m=+0.037200481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.216490475 +0000 UTC m=+0.236572201 container create fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:50:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:50:27 compute-0 systemd[1]: Started libpod-conmon-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope.
Feb 25 12:50:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.456987735 +0000 UTC m=+0.477069461 container init fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.46882568 +0000 UTC m=+0.488907396 container start fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.546115482 +0000 UTC m=+0.566197248 container attach fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:50:27 compute-0 nova_compute[244014]: 2026-02-25 12:50:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:27 compute-0 nova_compute[244014]: 2026-02-25 12:50:27.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:50:27 compute-0 nova_compute[244014]: 2026-02-25 12:50:27.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:50:27 compute-0 strange_mirzakhani[356064]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:50:27 compute-0 strange_mirzakhani[356064]: --> All data devices are unavailable
Feb 25 12:50:27 compute-0 systemd[1]: libpod-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope: Deactivated successfully.
Feb 25 12:50:27 compute-0 podman[356047]: 2026-02-25 12:50:27.950204452 +0000 UTC m=+0.970286168 container died fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:50:28 compute-0 nova_compute[244014]: 2026-02-25 12:50:28.069 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:28 compute-0 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:28 compute-0 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:50:28 compute-0 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91-merged.mount: Deactivated successfully.
Feb 25 12:50:28 compute-0 podman[356047]: 2026-02-25 12:50:28.105168207 +0000 UTC m=+1.125249903 container remove fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:50:28 compute-0 systemd[1]: libpod-conmon-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope: Deactivated successfully.
Feb 25 12:50:28 compute-0 sudo[355970]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:28 compute-0 sudo[356097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:50:28 compute-0 sudo[356097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:28 compute-0 sudo[356097]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:28 compute-0 sudo[356122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:50:28 compute-0 sudo[356122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:28 compute-0 ceph-mon[76335]: pgmap v2096: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Feb 25 12:50:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Feb 25 12:50:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.647797689 +0000 UTC m=+0.087468731 container create b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.593170056 +0000 UTC m=+0.032841148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:28 compute-0 systemd[1]: Started libpod-conmon-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope.
Feb 25 12:50:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.748168803 +0000 UTC m=+0.187839925 container init b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.755803869 +0000 UTC m=+0.195474881 container start b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.759265946 +0000 UTC m=+0.198937058 container attach b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:50:28 compute-0 recursing_gauss[356178]: 167 167
Feb 25 12:50:28 compute-0 systemd[1]: libpod-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope: Deactivated successfully.
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.762484677 +0000 UTC m=+0.202155719 container died b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b01d286bf8dc91af89b4fbdbc81db13d2d232336fbd618055ce2bd401afb3a-merged.mount: Deactivated successfully.
Feb 25 12:50:28 compute-0 podman[356161]: 2026-02-25 12:50:28.866577667 +0000 UTC m=+0.306248699 container remove b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:50:28 compute-0 systemd[1]: libpod-conmon-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope: Deactivated successfully.
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.032725838 +0000 UTC m=+0.025313686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.135871031 +0000 UTC m=+0.128458859 container create 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:50:29 compute-0 systemd[1]: Started libpod-conmon-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope.
Feb 25 12:50:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.269082062 +0000 UTC m=+0.261669900 container init 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.276734068 +0000 UTC m=+0.269321896 container start 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.287655177 +0000 UTC m=+0.280243015 container attach 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]: {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     "0": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "devices": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "/dev/loop3"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             ],
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_name": "ceph_lv0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_size": "21470642176",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "name": "ceph_lv0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "tags": {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_name": "ceph",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.crush_device_class": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.encrypted": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.objectstore": "bluestore",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_id": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.vdo": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.with_tpm": "0"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             },
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "vg_name": "ceph_vg0"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         }
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     ],
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     "1": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "devices": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "/dev/loop4"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             ],
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_name": "ceph_lv1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_size": "21470642176",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "name": "ceph_lv1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "tags": {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_name": "ceph",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.crush_device_class": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.encrypted": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.objectstore": "bluestore",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_id": "1",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.vdo": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.with_tpm": "0"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             },
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "vg_name": "ceph_vg1"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         }
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     ],
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     "2": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "devices": [
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "/dev/loop5"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             ],
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_name": "ceph_lv2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_size": "21470642176",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "name": "ceph_lv2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "tags": {
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.cluster_name": "ceph",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.crush_device_class": "",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.encrypted": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.objectstore": "bluestore",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osd_id": "2",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.vdo": "0",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:                 "ceph.with_tpm": "0"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             },
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "type": "block",
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:             "vg_name": "ceph_vg2"
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:         }
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]:     ]
Feb 25 12:50:29 compute-0 priceless_meninsky[356220]: }
Feb 25 12:50:29 compute-0 systemd[1]: libpod-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope: Deactivated successfully.
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.57640103 +0000 UTC m=+0.568988888 container died 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2-merged.mount: Deactivated successfully.
Feb 25 12:50:29 compute-0 podman[356204]: 2026-02-25 12:50:29.6174755 +0000 UTC m=+0.610063308 container remove 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:50:29 compute-0 systemd[1]: libpod-conmon-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope: Deactivated successfully.
Feb 25 12:50:29 compute-0 sudo[356122]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:29 compute-0 sudo[356241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:50:29 compute-0 sudo[356241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:29 compute-0 sudo[356241]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:29 compute-0 sudo[356266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:50:29 compute-0 sudo[356266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:29 compute-0 nova_compute[244014]: 2026-02-25 12:50:29.883 244018 INFO nova.compute.manager [None req-8dc1755a-1790-4238-8bef-85fc9f0c30ff 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Get console output
Feb 25 12:50:29 compute-0 nova_compute[244014]: 2026-02-25 12:50:29.890 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.067667252 +0000 UTC m=+0.070133042 container create 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.021114917 +0000 UTC m=+0.023580677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:30 compute-0 systemd[1]: Started libpod-conmon-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope.
Feb 25 12:50:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.179268763 +0000 UTC m=+0.181734603 container init 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.187206317 +0000 UTC m=+0.189672087 container start 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:50:30 compute-0 clever_lederberg[356320]: 167 167
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.191011184 +0000 UTC m=+0.193477014 container attach 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:50:30 compute-0 systemd[1]: libpod-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope: Deactivated successfully.
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.192119586 +0000 UTC m=+0.194585386 container died 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0041eebafbf00d6964450c9f9f73828fea016b2ab9c0c7518c2707ee435da7e3-merged.mount: Deactivated successfully.
Feb 25 12:50:30 compute-0 podman[356303]: 2026-02-25 12:50:30.239966197 +0000 UTC m=+0.242431957 container remove 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:30 compute-0 systemd[1]: libpod-conmon-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope: Deactivated successfully.
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.340 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:30 compute-0 ceph-mon[76335]: pgmap v2097: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Feb 25 12:50:30 compute-0 podman[356344]: 2026-02-25 12:50:30.430933449 +0000 UTC m=+0.054570552 container create d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:50:30 compute-0 systemd[1]: Started libpod-conmon-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope.
Feb 25 12:50:30 compute-0 podman[356344]: 2026-02-25 12:50:30.40686856 +0000 UTC m=+0.030505733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:50:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:50:30 compute-0 podman[356344]: 2026-02-25 12:50:30.530635724 +0000 UTC m=+0.154272847 container init d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:50:30 compute-0 podman[356344]: 2026-02-25 12:50:30.547374857 +0000 UTC m=+0.171011970 container start d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:50:30 compute-0 podman[356344]: 2026-02-25 12:50:30.550938198 +0000 UTC m=+0.174575321 container attach d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.552 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.574 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.575 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.575 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.576 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.599 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.600 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.600 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.601 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.601 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:30 compute-0 nova_compute[244014]: 2026-02-25 12:50:30.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:50:31
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'images', 'backups', '.mgr', '.rgw.root']
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:50:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650485319' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.218 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:31 compute-0 lvm[356461]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:50:31 compute-0 lvm[356462]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:50:31 compute-0 lvm[356461]: VG ceph_vg0 finished
Feb 25 12:50:31 compute-0 lvm[356462]: VG ceph_vg1 finished
Feb 25 12:50:31 compute-0 lvm[356464]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:50:31 compute-0 lvm[356464]: VG ceph_vg2 finished
Feb 25 12:50:31 compute-0 gallant_chandrasekhar[356360]: {}
Feb 25 12:50:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/650485319' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:31 compute-0 systemd[1]: libpod-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Deactivated successfully.
Feb 25 12:50:31 compute-0 systemd[1]: libpod-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Consumed 1.248s CPU time.
Feb 25 12:50:31 compute-0 podman[356344]: 2026-02-25 12:50:31.413298818 +0000 UTC m=+1.036935901 container died d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc-merged.mount: Deactivated successfully.
Feb 25 12:50:31 compute-0 podman[356344]: 2026-02-25 12:50:31.455655904 +0000 UTC m=+1.079293017 container remove d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:50:31 compute-0 systemd[1]: libpod-conmon-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Deactivated successfully.
Feb 25 12:50:31 compute-0 sudo[356266]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:50:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:50:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:31 compute-0 sudo[356478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:50:31 compute-0 sudo[356478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:50:31 compute-0 sudo[356478]: pam_unix(sudo:session): session closed for user root
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.751 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.752 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.759 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.759 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:31 compute-0 nova_compute[244014]: 2026-02-25 12:50:31.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.043 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.044 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2905MB free_disk=59.85101382341236GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.045 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.045 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.068 244018 DEBUG nova.compute.manager [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.068 244018 DEBUG nova.compute.manager [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 828def8a-01dd-4845-98b3-1516060251a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d6cf21ec-717e-41f7-9351-2214b43ce275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.211 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.211 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.228 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.247 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.248 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.266 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.289 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.353 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:32 compute-0 ceph-mon[76335]: pgmap v2098: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Feb 25 12:50:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:50:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Feb 25 12:50:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213670914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.945 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.951 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:50:32 compute-0 nova_compute[244014]: 2026-02-25 12:50:32.990 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:50:33 compute-0 nova_compute[244014]: 2026-02-25 12:50:33.134 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:50:33 compute-0 nova_compute[244014]: 2026-02-25 12:50:33.135 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/213670914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:33 compute-0 podman[356525]: 2026-02-25 12:50:33.748074093 +0000 UTC m=+0.077676244 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:50:33 compute-0 podman[356526]: 2026-02-25 12:50:33.810592348 +0000 UTC m=+0.140809307 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 25 12:50:34 compute-0 ceph-mon[76335]: pgmap v2099: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Feb 25 12:50:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 3.1 MiB/s wr, 108 op/s
Feb 25 12:50:34 compute-0 nova_compute[244014]: 2026-02-25 12:50:34.847 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:34 compute-0 nova_compute[244014]: 2026-02-25 12:50:34.848 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:34 compute-0 nova_compute[244014]: 2026-02-25 12:50:34.915 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.130 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.131 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.131 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 ceph-mon[76335]: pgmap v2100: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 3.1 MiB/s wr, 108 op/s
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.680 244018 DEBUG nova.compute.manager [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.680 244018 DEBUG nova.compute.manager [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.913 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.913 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.914 244018 INFO nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Terminating instance
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.914 244018 DEBUG nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:50:35 compute-0 kernel: tap99fbecbd-18 (unregistering): left promiscuous mode
Feb 25 12:50:35 compute-0 NetworkManager[49836]: <info>  [1772023835.9529] device (tap99fbecbd-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01312|binding|INFO|Releasing lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f from this chassis (sb_readonly=0)
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01313|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f down in Southbound
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01314|binding|INFO|Removing iface tap99fbecbd-18 ovn-installed in OVS
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.966 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 kernel: tap94cae4f8-cc (unregistering): left promiscuous mode
Feb 25 12:50:35 compute-0 NetworkManager[49836]: <info>  [1772023835.9767] device (tap94cae4f8-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01315|binding|INFO|Releasing lport 94cae4f8-cc4b-443a-b22b-36ef77438ede from this chassis (sb_readonly=1)
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01316|binding|INFO|Removing iface tap94cae4f8-cc ovn-installed in OVS
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01317|if_status|INFO|Dropped 1 log messages in last 710 seconds (most recently, 710 seconds ago) due to excessive rate
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01318|if_status|INFO|Not setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede down as sb is readonly
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 nova_compute[244014]: 2026-02-25 12:50:35.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:35 compute-0 ovn_controller[147040]: 2026-02-25T12:50:35Z|01319|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede down in Southbound
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.001 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:94:3d 10.100.0.3'], port_security=['fa:16:3e:7e:94:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=99fbecbd-1815-4bf9-8e7f-6f114847f46f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.004 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbecbd-1815-4bf9-8e7f-6f114847f46f in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc unbound from our chassis
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.006 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.020 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], port_security=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:bf12/64', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=94cae4f8-cc4b-443a-b22b-36ef77438ede) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eff86ce5-4249-4895-8da9-8063591a42ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Feb 25 12:50:36 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 13.008s CPU time.
Feb 25 12:50:36 compute-0 systemd-machined[210048]: Machine qemu-155-instance-0000007b terminated.
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.042 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5f35dba4-0168-4cf4-9112-9bdebc861dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.047 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[71a5cdd5-c90c-4c40-a295-331ffdc1cd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.079 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c548bd7-afa0-4477-8e7c-0fd1c2f61311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da02ecab-e037-44b6-ace5-679ecbe724d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356588, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb01a8a-ca38-4bc4-b602-21ef28456ee7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573500, 'tstamp': 573500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356589, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573502, 'tstamp': 573502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356589, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.113 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.125 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.126 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.126 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.127 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.128 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 94cae4f8-cc4b-443a-b22b-36ef77438ede in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 12:50:36 compute-0 NetworkManager[49836]: <info>  [1772023836.1422] manager: (tap94cae4f8-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7626cd-a6dd-4e6e-aa0f-c2b5af79cc9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.154 244018 INFO nova.virt.libvirt.driver [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance destroyed successfully.
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.154 244018 DEBUG nova.objects.instance [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.168 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46657d19-b1de-410e-a74f-ebf36866caa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.171 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[045280a9-ce3c-45e8-a70a-bd7eb42d8c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.175 244018 DEBUG nova.virt.libvirt.vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.175 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.176 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.176 244018 DEBUG os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.177 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99fbecbd-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.184 244018 INFO os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18')
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.185 244018 DEBUG nova.virt.libvirt.vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.185 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.186 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.186 244018 DEBUG os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.187 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94cae4f8-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.191 244018 INFO os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc')
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.195 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4099ad90-bb07-47d7-b1cd-e92443b74b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c3e133-ca3c-4212-a2f1-3d826cb1a9c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356627, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.224 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c050f04b-6d90-48e2-99d8-3bf59cd13d97]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeb832dde-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573573, 'tstamp': 573573}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356636, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.226 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.229 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.229 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.429 244018 INFO nova.virt.libvirt.driver [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deleting instance files /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0_del
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.430 244018 INFO nova.virt.libvirt.driver [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deletion of /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0_del complete
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.466 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.466 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.468 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.535 244018 INFO nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG oslo.service.loopingcall [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:50:36 compute-0 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG nova.network.neutron [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:50:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 3.1 MiB/s wr, 109 op/s
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.284 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.285 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.481 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:37 compute-0 ceph-mon[76335]: pgmap v2101: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 3.1 MiB/s wr, 109 op/s
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.812 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.813 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.813 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.816 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:37 compute-0 nova_compute[244014]: 2026-02-25 12:50:37.816 244018 WARNING nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with vm_state active and task_state deleting.
Feb 25 12:50:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 1.7 MiB/s wr, 111 op/s
Feb 25 12:50:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.675 244018 DEBUG nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.676 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.676 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.677 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.677 244018 DEBUG nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.678 244018 WARNING nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with vm_state active and task_state deleting.
Feb 25 12:50:38 compute-0 nova_compute[244014]: 2026-02-25 12:50:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:39 compute-0 ceph-mon[76335]: pgmap v2102: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 1.7 MiB/s wr, 111 op/s
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.008 244018 DEBUG nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-deleted-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.009 244018 INFO nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Neutron deleted interface 99fbecbd-1815-4bf9-8e7f-6f114847f46f; detaching it from the instance and deleting it from the info cache
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.009 244018 DEBUG nova.network.neutron [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.332 244018 DEBUG nova.network.neutron [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.481 244018 DEBUG nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Detach interface failed, port_id=99fbecbd-1815-4bf9-8e7f-6f114847f46f, reason: Instance 828def8a-01dd-4845-98b3-1516060251a0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:50:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 34 KiB/s wr, 28 op/s
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.847 244018 INFO nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 4.31 seconds to deallocate network for instance.
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:50:40 compute-0 nova_compute[244014]: 2026-02-25 12:50:40.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.107 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.107 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.203 244018 DEBUG oslo_concurrency.processutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:41 compute-0 ceph-mon[76335]: pgmap v2103: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 34 KiB/s wr, 28 op/s
Feb 25 12:50:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143914118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.804 244018 DEBUG oslo_concurrency.processutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.812 244018 DEBUG nova.compute.provider_tree [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:50:41 compute-0 nova_compute[244014]: 2026-02-25 12:50:41.898 244018 DEBUG nova.scheduler.client.report [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:50:42 compute-0 nova_compute[244014]: 2026-02-25 12:50:42.069 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:42 compute-0 nova_compute[244014]: 2026-02-25 12:50:42.211 244018 DEBUG nova.compute.manager [req-039bbcff-414b-4d18-b96a-74d6330f1d77 req-edd8bacb-cfc5-49d7-bf1d-992f5c4a6cea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-deleted-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:42 compute-0 nova_compute[244014]: 2026-02-25 12:50:42.245 244018 INFO nova.scheduler.client.report [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 828def8a-01dd-4845-98b3-1516060251a0
Feb 25 12:50:42 compute-0 nova_compute[244014]: 2026-02-25 12:50:42.435 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 39 KiB/s wr, 31 op/s
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015353869746002576 of space, bias 1.0, pg target 0.4606160923800773 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494070409928913 of space, bias 1.0, pg target 0.7482211229786739 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3976293654385012e-06 of space, bias 4.0, pg target 0.0016771552385262014 quantized to 16 (current 16)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:50:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:50:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3143914118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:43 compute-0 ceph-mon[76335]: pgmap v2104: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 39 KiB/s wr, 31 op/s
Feb 25 12:50:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.322 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.322 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.324 244018 INFO nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Terminating instance
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.325 244018 DEBUG nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 kernel: tapdee62982-d4 (unregistering): left promiscuous mode
Feb 25 12:50:45 compute-0 NetworkManager[49836]: <info>  [1772023845.8028] device (tapdee62982-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01320|binding|INFO|Releasing lport dee62982-d46c-4a81-b4ad-8154d7cfc7af from this chassis (sb_readonly=0)
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01321|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af down in Southbound
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01322|binding|INFO|Removing iface tapdee62982-d4 ovn-installed in OVS
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 kernel: tap344b59b1-93 (unregistering): left promiscuous mode
Feb 25 12:50:45 compute-0 NetworkManager[49836]: <info>  [1772023845.8231] device (tap344b59b1-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01323|binding|INFO|Releasing lport 344b59b1-93f2-4e23-8c28-835e5d954630 from this chassis (sb_readonly=1)
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01324|binding|INFO|Removing iface tap344b59b1-93 ovn-installed in OVS
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:45 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Feb 25 12:50:45 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 15.271s CPU time.
Feb 25 12:50:45 compute-0 systemd-machined[210048]: Machine qemu-154-instance-0000007a terminated.
Feb 25 12:50:45 compute-0 ovn_controller[147040]: 2026-02-25T12:50:45Z|01325|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 down in Southbound
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.913 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:74 10.100.0.9'], port_security=['fa:16:3e:93:cc:74 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dee62982-d46c-4a81-b4ad-8154d7cfc7af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.915 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dee62982-d46c-4a81-b4ad-8154d7cfc7af in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc unbound from our chassis
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.918 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00066dd3-fedd-451c-857b-c87ce2922b5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.920 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc namespace which is not needed anymore
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.972 244018 INFO nova.virt.libvirt.driver [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance destroyed successfully.
Feb 25 12:50:45 compute-0 nova_compute[244014]: 2026-02-25 12:50:45.972 244018 DEBUG nova.objects.instance [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:45 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.978 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], port_security=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:1050/64', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=344b59b1-93f2-4e23-8c28-835e5d954630) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:46 compute-0 ceph-mon[76335]: pgmap v2105: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.113 244018 DEBUG nova.virt.libvirt.vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:28Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.114 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.115 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.116 244018 DEBUG os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.118 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee62982-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.127 244018 INFO os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4')
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.128 244018 DEBUG nova.virt.libvirt.vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:28Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.129 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.130 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.130 244018 DEBUG os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344b59b1-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.139 244018 INFO os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93')
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.300 244018 DEBUG nova.compute.manager [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG nova.compute.manager [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.302 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : haproxy version is 2.8.14-c23fe91
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : path to executable is /usr/sbin/haproxy
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : Exiting Master process...
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : Exiting Master process...
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [ALERT]    (354426) : Current worker (354428) exited with code 143 (Terminated)
Feb 25 12:50:46 compute-0 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : All workers exited. Exiting... (0)
Feb 25 12:50:46 compute-0 systemd[1]: libpod-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope: Deactivated successfully.
Feb 25 12:50:46 compute-0 nova_compute[244014]: 2026-02-25 12:50:46.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:46 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:46.333 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:46 compute-0 podman[356714]: 2026-02-25 12:50:46.335009741 +0000 UTC m=+0.315285743 container died 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:50:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 12:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6-userdata-shm.mount: Deactivated successfully.
Feb 25 12:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6fa4fcab328f5a267bf20836bf596abfcd5ef7623497a8450d10dfaa2251880-merged.mount: Deactivated successfully.
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.442 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.443 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.443 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.444 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.444 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.445 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:50:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:50:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:50:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:50:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:50:47 compute-0 podman[356714]: 2026-02-25 12:50:47.558291843 +0000 UTC m=+1.538567805 container cleanup 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:50:47 compute-0 systemd[1]: libpod-conmon-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope: Deactivated successfully.
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.882 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.883 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:47 compute-0 nova_compute[244014]: 2026-02-25 12:50:47.937 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.057 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.058 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.067 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.067 244018 INFO nova.compute.claims [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:50:48 compute-0 podman[356764]: 2026-02-25 12:50:48.226991226 +0000 UTC m=+0.639046166 container remove 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9c1659-95a2-4852-ae1a-ae19b4796b23]: (4, ('Wed Feb 25 12:50:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc (7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6)\n7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6\nWed Feb 25 12:50:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc (7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6)\n7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.237 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8caf3a4b-efd9-4750-b7a9-12a51495ebcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.238 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:48 compute-0 kernel: tap481feaf1-70: left promiscuous mode
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49487980-357d-4778-826d-480efcd30fb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.259 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.278 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[211c1569-55d5-4d4c-afef-2ce0fe5e0a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.279 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fce779-2873-47fe-8e33-8e95f718407e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.294 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[302c3c3a-793a-49f6-a04e-3e04315e03aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573486, 'reachable_time': 40955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356780, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.312 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.312 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e2207473-3c44-455b-8b81-f4434cc05bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.313 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 344b59b1-93f2-4e23-8c28-835e5d954630 in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis
Feb 25 12:50:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d481feaf1\x2d7ff4\x2d47be\x2d9159\x2da1dd19ceebcc.mount: Deactivated successfully.
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.314 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb832dde-9848-40c5-9505-cc643b1bd0fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.315 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4683f05e-1de2-41fe-8dfa-3fe354471fad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa namespace which is not needed anymore
Feb 25 12:50:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Feb 25 12:50:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.663 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.664 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:48 compute-0 nova_compute[244014]: 2026-02-25 12:50:48.728 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:48 compute-0 ceph-mon[76335]: pgmap v2106: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 12:50:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:50:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : haproxy version is 2.8.14-c23fe91
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : path to executable is /usr/sbin/haproxy
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : Exiting Master process...
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : Exiting Master process...
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [ALERT]    (354499) : Current worker (354501) exited with code 143 (Terminated)
Feb 25 12:50:48 compute-0 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : All workers exited. Exiting... (0)
Feb 25 12:50:48 compute-0 systemd[1]: libpod-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope: Deactivated successfully.
Feb 25 12:50:48 compute-0 podman[356798]: 2026-02-25 12:50:48.789496939 +0000 UTC m=+0.387959806 container died af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:50:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975364052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.072 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.078 244018 DEBUG nova.compute.provider_tree [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.094 244018 DEBUG nova.scheduler.client.report [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.175 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.176 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abdf7d75bc10bb95826887c814df1a0ab66d19dd1ae55e1c4beadda3a359345-merged.mount: Deactivated successfully.
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.298 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.299 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.326 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.420 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:50:49 compute-0 podman[356798]: 2026-02-25 12:50:49.538681993 +0000 UTC m=+1.137144910 container cleanup af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.541 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.543 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.544 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating image(s)
Feb 25 12:50:49 compute-0 systemd[1]: libpod-conmon-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope: Deactivated successfully.
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.579 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.607 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.632 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.636 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.662 244018 DEBUG nova.policy [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.696 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.696 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.697 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.697 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.715 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.717 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 WARNING nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with vm_state active and task_state deleting.
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:49 compute-0 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 WARNING nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with vm_state active and task_state deleting.
Feb 25 12:50:50 compute-0 podman[356859]: 2026-02-25 12:50:50.060489307 +0000 UTC m=+0.487426834 container remove af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb1ea26-2966-48d9-8dc2-e151213baa5d]: (4, ('Wed Feb 25 12:50:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa (af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c)\naf9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c\nWed Feb 25 12:50:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa (af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c)\naf9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.068 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc6550-c9b3-41f6-bc22-b395309d0ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.069 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:50 compute-0 nova_compute[244014]: 2026-02-25 12:50:50.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:50 compute-0 kernel: tapeb832dde-90: left promiscuous mode
Feb 25 12:50:50 compute-0 nova_compute[244014]: 2026-02-25 12:50:50.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.078 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6124a29-6b3b-40a4-8ca3-c3bb8b536a1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ceph-mon[76335]: pgmap v2107: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Feb 25 12:50:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1975364052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b062b9a-dec0-431d-b0e1-d522ea484f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5beb1803-7130-4d42-85b0-fccb1303ea01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.106 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d093758-3559-4d3b-9ae9-100ee1ada4fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573558, 'reachable_time': 36906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356960, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.108 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:50:50 compute-0 systemd[1]: run-netns-ovnmeta\x2deb832dde\x2d9848\x2d40c5\x2d9505\x2dcc643b1bd0fa.mount: Deactivated successfully.
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.108 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2c68ac8c-a5c9-4e43-a87b-89003f991765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.109 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:50:50 compute-0 nova_compute[244014]: 2026-02-25 12:50:50.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 5.8 KiB/s wr, 9 op/s
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.146 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Successfully created port: a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.153 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023836.1516545, 828def8a-01dd-4845-98b3-1516060251a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.153 244018 INFO nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Stopped (Lifecycle Event)
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.188 244018 DEBUG nova.compute.manager [None req-bccee992-167b-47a8-97f7-014e88b50b4e - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:51 compute-0 ceph-mon[76335]: pgmap v2108: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 5.8 KiB/s wr, 9 op/s
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.897 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Successfully updated port: a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.916 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.917 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.918 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.986 244018 DEBUG nova.compute.manager [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.986 244018 DEBUG nova.compute.manager [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing instance network info cache due to event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:51 compute-0 nova_compute[244014]: 2026-02-25 12:50:51.987 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.040 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.106 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:50:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:52.110 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.257 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:50:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.911 244018 DEBUG nova.objects.instance [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.928 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.929 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Ensure instance console log exists: /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.930 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.931 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:52 compute-0 nova_compute[244014]: 2026-02-25 12:50:52.931 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.124 244018 INFO nova.virt.libvirt.driver [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deleting instance files /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c_del
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.125 244018 INFO nova.virt.libvirt.driver [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deletion of /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c_del complete
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.202 244018 INFO nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 7.88 seconds to destroy the instance on the hypervisor.
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG oslo.service.loopingcall [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG nova.network.neutron [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.259 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.301 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.301 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance network_info: |[{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.302 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.303 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.309 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start _get_guest_xml network_info=[{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.316 244018 WARNING nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.321 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.322 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.328 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.332 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.332 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.335 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:53 compute-0 ceph-mon[76335]: pgmap v2109: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Feb 25 12:50:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407116719' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.916 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.954 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:53 compute-0 nova_compute[244014]: 2026-02-25 12:50:53.961 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.029 244018 DEBUG nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-deleted-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.030 244018 INFO nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Neutron deleted interface 344b59b1-93f2-4e23-8c28-835e5d954630; detaching it from the instance and deleting it from the info cache
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.031 244018 DEBUG nova.network.neutron [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.322 244018 DEBUG nova.network.neutron [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.326 244018 DEBUG nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Detach interface failed, port_id=344b59b1-93f2-4e23-8c28-835e5d954630, reason: Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.337 244018 INFO nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 1.13 seconds to deallocate network for instance.
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.378 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.379 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.453 244018 DEBUG oslo_concurrency.processutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:50:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020492380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.510 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.512 244018 DEBUG nova.virt.libvirt.vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:49Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.512 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.513 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.514 244018 DEBUG nova.objects.instance [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.531 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <uuid>968294d4-db1f-4cdc-822b-f7d4e382ac90</uuid>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <name>instance-0000007d</name>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-653885904</nova:name>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:50:53</nova:creationTime>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <nova:port uuid="a4ebf65d-00dd-4cf9-88b7-af09e1ff5738">
Feb 25 12:50:54 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <system>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="serial">968294d4-db1f-4cdc-822b-f7d4e382ac90</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="uuid">968294d4-db1f-4cdc-822b-f7d4e382ac90</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </system>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <os>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </os>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <features>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </features>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/968294d4-db1f-4cdc-822b-f7d4e382ac90_disk">
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config">
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </source>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:50:54 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:9f:b9:1b"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <target dev="tapa4ebf65d-00"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/console.log" append="off"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <video>
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </video>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:50:54 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:50:54 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:50:54 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:50:54 compute-0 nova_compute[244014]: </domain>
Feb 25 12:50:54 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Preparing to wait for external event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.533 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.533 244018 DEBUG nova.virt.libvirt.vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:49Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.534 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.534 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.535 244018 DEBUG os_vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.535 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4ebf65d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4ebf65d-00, col_values=(('external_ids', {'iface-id': 'a4ebf65d-00dd-4cf9-88b7-af09e1ff5738', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:b9:1b', 'vm-uuid': '968294d4-db1f-4cdc-822b-f7d4e382ac90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:54 compute-0 NetworkManager[49836]: <info>  [1772023854.5429] manager: (tapa4ebf65d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.547 244018 INFO os_vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00')
Feb 25 12:50:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.603 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.603 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.604 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:9f:b9:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.604 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Using config drive
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.634 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2407116719' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3020492380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.933 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updated VIF entry in instance network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.934 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:50:54 compute-0 nova_compute[244014]: 2026-02-25 12:50:54.951 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:50:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:50:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739611012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.039 244018 DEBUG oslo_concurrency.processutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.044 244018 DEBUG nova.compute.provider_tree [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.061 244018 DEBUG nova.scheduler.client.report [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.088 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.098 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating config drive at /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.102 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnuc3toiq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.145 244018 INFO nova.scheduler.client.report [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance b6501baa-8bc9-4724-b4c1-8ff43faf517c
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.208 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.257 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnuc3toiq" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.283 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.286 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.413 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.414 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deleting local config drive /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config because it was imported into RBD.
Feb 25 12:50:55 compute-0 kernel: tapa4ebf65d-00: entered promiscuous mode
Feb 25 12:50:55 compute-0 ovn_controller[147040]: 2026-02-25T12:50:55Z|01326|binding|INFO|Claiming lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 for this chassis.
Feb 25 12:50:55 compute-0 ovn_controller[147040]: 2026-02-25T12:50:55Z|01327|binding|INFO|a4ebf65d-00dd-4cf9-88b7-af09e1ff5738: Claiming fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 12:50:55 compute-0 NetworkManager[49836]: <info>  [1772023855.4622] manager: (tapa4ebf65d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 ovn_controller[147040]: 2026-02-25T12:50:55Z|01328|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 ovn-installed in OVS
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 ovn_controller[147040]: 2026-02-25T12:50:55Z|01329|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 up in Southbound
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.471 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:b9:1b 10.100.0.5'], port_security=['fa:16:3e:9f:b9:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '968294d4-db1f-4cdc-822b-f7d4e382ac90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46c21bd2-9635-4433-9b19-dbda2570ebd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.474 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed bound to our chassis
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.478 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.493 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2e8c16-c949-4d60-8a27-1654c1532aa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:55 compute-0 systemd-machined[210048]: New machine qemu-157-instance-0000007d.
Feb 25 12:50:55 compute-0 systemd-udevd[357196]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:50:55 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Feb 25 12:50:55 compute-0 NetworkManager[49836]: <info>  [1772023855.5208] device (tapa4ebf65d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:50:55 compute-0 NetworkManager[49836]: <info>  [1772023855.5222] device (tapa4ebf65d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.527 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0414a5-c993-4125-af99-03425189dba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.531 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0aa7ed-a9ce-4d54-99be-5767ee9606d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4e706d-babe-4f89-9ea6-56e3867122f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1e79a2-4bce-4643-b9e8-47d51da5d327]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357208, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.583 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b73d627-d7de-4836-8894-b3da78af0243]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578130, 'tstamp': 578130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357209, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578133, 'tstamp': 578133}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357209, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
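The two privsep replies above are netlink dumps (an RTM_NEWLINK, then two RTM_NEWADDR records) that the agent's privileged helper runs inside the ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed namespace (see the 'target' field in each header) to confirm the tap device and its 10.100.0.2/28 and 169.254.169.254/32 addresses. The message layout matches pyroute2, which neutron's privileged helpers use. A minimal standalone sketch of the same queries, assuming the namespace and interface names from the log; the agent itself routes these through oslo.privsep rather than running them as root:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed') as ns:
        idx = ns.link_lookup(ifname='tap06dcd48c-d1')[0]
        links = ns.get_links(idx)        # RTM_NEWLINK dump, as in the first reply
        addrs = ns.get_addr(index=idx)   # RTM_NEWADDR dump: 10.100.0.2/28, 169.254.169.254/32
        print(links[0].get_attr('IFLA_OPERSTATE'),
              [a.get_attr('IFA_ADDRESS') for a in addrs])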
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.590 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.591 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
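These three single-command transactions (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) are the metadata agent re-asserting how its tap port is plugged; the two "Transaction caused no change" lines show the port was already in the desired state. The command names are ovsdbapp's. A sketch of the same sequence against the local OVS database, assuming the default unix socket path (the agent takes its connection string from neutron configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Each execute() runs as its own transaction (txn n=1), as in the log above.
    ovs.del_port('tap06dcd48c-d0', bridge='br-ex', if_exists=True).execute(check_error=True)
    ovs.add_port('br-int', 'tap06dcd48c-d0', may_exist=True).execute(check_error=True)
    ovs.db_set('Interface', 'tap06dcd48c-d0',
               ('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'})
               ).execute(check_error=True)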
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG nova.compute.manager [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.757 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.757 244018 DEBUG nova.compute.manager [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Processing event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
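The Acquiring/acquired/released triplet around _pop_event is oslo.concurrency's named-lock decorator: every external event for an instance serializes on the "<uuid>-events" lock before the waiter table is touched, which is why the lock is only held for 0.000s here. The same primitive, sketched (the later refresh_cache-* lines use the context-manager form, lockutils.lock(), instead):

    from oslo_concurrency import lockutils

    # Emits the same "Acquiring lock ... by ..." / "released" DEBUG lines as above.
    @lockutils.synchronized('968294d4-db1f-4cdc-822b-f7d4e382ac90-events')
    def _pop_event():
        pass  # pop the pending network-vif-plugged event here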
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.823 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.82343, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.824 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Started (Lifecycle Event)
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.826 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.829 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:50:55 compute-0 ceph-mon[76335]: pgmap v2110: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Feb 25 12:50:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3739611012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.835 244018 INFO nova.virt.libvirt.driver [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance spawned successfully.
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.835 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.848 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.851 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.860 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.860 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.861 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.861 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.862 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.862 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
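The six "Found default for ..." lines are Nova recording the buses and models it actually gave the guest, so a later rebuild or migration cannot silently change the virtual hardware. The same choices could be pinned up front as Glance image properties; a sketch with openstacksdk, assuming a configured clouds.yaml entry named "default" and that the SDK forwards unknown keyword attributes as image properties:

    import openstack

    conn = openstack.connect(cloud='default')
    conn.image.update_image(
        'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',   # the image_ref logged for this instance
        hw_cdrom_bus='sata', hw_disk_bus='virtio', hw_input_bus='usb',
        hw_pointer_model='usbtablet', hw_video_model='virtio', hw_vif_model='virtio')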
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.8263345, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Paused (Lifecycle Event)
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.896 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.900 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.8303695, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.900 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Resumed (Lifecycle Event)
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.934 244018 INFO nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 6.39 seconds to spawn the instance on the hypervisor.
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.935 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:50:55 compute-0 nova_compute[244014]: 2026-02-25 12:50:55.965 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:50:56 compute-0 nova_compute[244014]: 2026-02-25 12:50:56.011 244018 INFO nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 7.98 seconds to build instance.
Feb 25 12:50:56 compute-0 nova_compute[244014]: 2026-02-25 12:50:56.025 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:56 compute-0 nova_compute[244014]: 2026-02-25 12:50:56.148 244018 DEBUG nova.compute.manager [req-502d7000-2748-45b8-a77d-e3394b48ce72 req-3f0214fa-fde4-4a2b-b2ff-f54b466e5dcd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-deleted-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:50:57 compute-0 ceph-mon[76335]: pgmap v2111: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.902 244018 DEBUG nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.902 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.903 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.904 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.904 244018 DEBUG nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] No waiting events found dispatching network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:50:57 compute-0 nova_compute[244014]: 2026-02-25 12:50:57.905 244018 WARNING nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received unexpected event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 for instance with vm_state active and task_state None.
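The WARNING is benign: the port-up notification arrives a second time at 12:50:57, but the waiter Nova registered during spawn was already consumed at 12:50:55, so the duplicate finds nothing to wake ("No waiting events found") and is dropped. A simplified analogue of that waiter table (not Nova's actual implementation):

    import threading

    waiters = {}          # (instance_uuid, event_name) -> Event
    table_lock = threading.Lock()

    def prepare(key):
        with table_lock:
            return waiters.setdefault(key, threading.Event())

    def dispatch(key):
        with table_lock:
            ev = waiters.pop(key, None)
        if ev is None:
            print(f'WARNING: unexpected event {key}')   # cf. the line above
        else:
            ev.set()      # wakes the spawn thread blocked on wait()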
Feb 25 12:50:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 12:50:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:59 compute-0 ovn_controller[147040]: 2026-02-25T12:50:59Z|01330|binding|INFO|Releasing lport 06afb3ba-d963-4735-ae78-91cfa95e52ff from this chassis (sb_readonly=0)
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.compute.manager [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.compute.manager [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing instance network info cache due to event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:50:59 compute-0 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:50:59 compute-0 ceph-mon[76335]: pgmap v2112: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 12:51:00 compute-0 nova_compute[244014]: 2026-02-25 12:51:00.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 25 12:51:00 compute-0 nova_compute[244014]: 2026-02-25 12:51:00.971 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023845.970157, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:00 compute-0 nova_compute[244014]: 2026-02-25 12:51:00.972 244018 INFO nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Stopped (Lifecycle Event)
Feb 25 12:51:00 compute-0 nova_compute[244014]: 2026-02-25 12:51:00.995 244018 DEBUG nova.compute.manager [None req-e876b3d4-4ad1-4af6-9230-33b2c405d67b - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:01 compute-0 ceph-mon[76335]: pgmap v2113: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 25 12:51:02 compute-0 nova_compute[244014]: 2026-02-25 12:51:02.214 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updated VIF entry in instance network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:51:02 compute-0 nova_compute[244014]: 2026-02-25 12:51:02.216 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:02 compute-0 nova_compute[244014]: 2026-02-25 12:51:02.253 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
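The cache entry written above is plain JSON in Nova's network_info layout: one port, whose network carries subnets, each with ips, each of which may carry floating_ips. Extracting the addresses from it, for illustration (cache_json is an assumed variable holding the list logged in the "Updating instance_info_cache" line above):

    import json

    nw_info = json.loads(cache_json)
    ip = nw_info[0]['network']['subnets'][0]['ips'][0]
    print(ip['address'])                                 # 10.100.0.5
    print([f['address'] for f in ip['floating_ips']])    # ['192.168.122.172']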
Feb 25 12:51:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb 25 12:51:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:03 compute-0 ceph-mon[76335]: pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb 25 12:51:04 compute-0 nova_compute[244014]: 2026-02-25 12:51:04.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 315 KiB/s wr, 91 op/s
Feb 25 12:51:04 compute-0 podman[357252]: 2026-02-25 12:51:04.748414144 +0000 UTC m=+0.083175590 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:51:04 compute-0 podman[357253]: 2026-02-25 12:51:04.771088894 +0000 UTC m=+0.104092990 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 12:51:05 compute-0 nova_compute[244014]: 2026-02-25 12:51:05.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:05 compute-0 ceph-mon[76335]: pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 315 KiB/s wr, 91 op/s
Feb 25 12:51:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 282 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 95 op/s
Feb 25 12:51:07 compute-0 ovn_controller[147040]: 2026-02-25T12:51:07Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 12:51:07 compute-0 ovn_controller[147040]: 2026-02-25T12:51:07Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 12:51:07 compute-0 ceph-mon[76335]: pgmap v2116: 305 pgs: 305 active+clean; 282 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 95 op/s
Feb 25 12:51:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 123 op/s
Feb 25 12:51:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:09 compute-0 nova_compute[244014]: 2026-02-25 12:51:09.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:09 compute-0 ceph-mon[76335]: pgmap v2117: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 123 op/s
Feb 25 12:51:10 compute-0 nova_compute[244014]: 2026-02-25 12:51:10.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Feb 25 12:51:11 compute-0 nova_compute[244014]: 2026-02-25 12:51:11.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:12 compute-0 ceph-mon[76335]: pgmap v2118: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Feb 25 12:51:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Feb 25 12:51:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:14 compute-0 ceph-mon[76335]: pgmap v2119: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.420 244018 INFO nova.compute.manager [None req-7b830aa4-ae46-4c5f-9df0-40341875ddcd 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Get console output
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.427 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
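"can't concat NoneType to bytes" is the ordinary TypeError raised when a read that returned None is appended to a bytes buffer: a non-blocking read on the console pty can yield None rather than b'' once the guest is going away, and nova.privsep.libvirt deliberately swallows it, as logged. The failure mode in miniature (console here is a hypothetical non-blocking file object):

    buf = b''
    chunk = console.read(4096)   # a non-blocking read may return None, not b''
    try:
        buf += chunk             # None here raises: can't concat NoneType to bytes
    except TypeError:
        pass                     # or guard up front with: buf += chunk or b''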
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.748 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.749 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.749 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.750 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.750 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.752 244018 INFO nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Terminating instance
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.754 244018 DEBUG nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:51:14 compute-0 kernel: tapa4ebf65d-00 (unregistering): left promiscuous mode
Feb 25 12:51:14 compute-0 NetworkManager[49836]: <info>  [1772023874.8551] device (tapa4ebf65d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 ovn_controller[147040]: 2026-02-25T12:51:14Z|01331|binding|INFO|Releasing lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 from this chassis (sb_readonly=0)
Feb 25 12:51:14 compute-0 ovn_controller[147040]: 2026-02-25T12:51:14Z|01332|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 down in Southbound
Feb 25 12:51:14 compute-0 ovn_controller[147040]: 2026-02-25T12:51:14Z|01333|binding|INFO|Removing iface tapa4ebf65d-00 ovn-installed in OVS
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.883 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:b9:1b 10.100.0.5'], port_security=['fa:16:3e:9f:b9:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '968294d4-db1f-4cdc-822b-f7d4e382ac90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46c21bd2-9635-4433-9b19-dbda2570ebd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.885 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed unbound from our chassis
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.887 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 12:51:14 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Feb 25 12:51:14 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 13.002s CPU time.
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7c63d7-af4a-4bec-8864-edda7af23d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 systemd-machined[210048]: Machine qemu-157-instance-0000007d terminated.
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.925 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[73452ede-6a9c-4760-89ce-4b4ef5e0a526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.929 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a20664f0-a66b-4b6b-8771-24edc426038c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.949 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c0e95d-e9f2-44bb-b374-bc63bfc7707c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.964 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa423813-991a-412c-95a2-d86a64d8bd4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357305, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ee3af-b693-4c9a-9760-284e87598c3b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578130, 'tstamp': 578130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357308, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578133, 'tstamp': 578133}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357308, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.982 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.990 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.990 244018 INFO nova.virt.libvirt.driver [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance destroyed successfully.
Feb 25 12:51:14 compute-0 nova_compute[244014]: 2026-02-25 12:51:14.991 244018 DEBUG nova.objects.instance [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.008 244018 DEBUG nova.virt.libvirt.vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:55Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.008 244018 DEBUG nova.network.os_vif_util [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.009 244018 DEBUG nova.network.os_vif_util [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.009 244018 DEBUG os_vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4ebf65d-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.017 244018 INFO os_vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00')
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.330 244018 INFO nova.virt.libvirt.driver [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deleting instance files /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90_del
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.331 244018 INFO nova.virt.libvirt.driver [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deletion of /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90_del complete
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.392 244018 INFO nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.392 244018 DEBUG oslo.service.loopingcall [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.393 244018 DEBUG nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.393 244018 DEBUG nova.network.neutron [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:51:15 compute-0 nova_compute[244014]: 2026-02-25 12:51:15.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:16 compute-0 ceph-mon[76335]: pgmap v2120: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:51:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 286 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.731 244018 DEBUG nova.network.neutron [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.757 244018 INFO nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 2.36 seconds to deallocate network for instance.
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.832 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.833 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.859 244018 DEBUG nova.compute.manager [req-ddc65e11-3e54-4b0f-a0b2-aa98cfbcf027 req-f50286e8-46ba-45f2-89a7-8682a2311e51 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-deleted-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:17 compute-0 nova_compute[244014]: 2026-02-25 12:51:17.911 244018 DEBUG oslo_concurrency.processutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:18 compute-0 ceph-mon[76335]: pgmap v2121: 305 pgs: 305 active+clean; 286 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Feb 25 12:51:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.268 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8::f816:3eff:fe6f:4ba0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=591ce053-3764-4ce0-841f-6728c8fd9491) old=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.270 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 591ce053-3764-4ce0-841f-6728c8fd9491 in datapath e19ed85e-54ee-4274-951c-ade412625983 updated
Feb 25 12:51:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.272 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:51:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.273 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c8bb10-d597-42a7-a956-ffb399a8835f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729816460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.474 244018 DEBUG oslo_concurrency.processutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.481 244018 DEBUG nova.compute.provider_tree [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.504 244018 DEBUG nova.scheduler.client.report [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.531 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.583 244018 INFO nova.scheduler.client.report [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 968294d4-db1f-4cdc-822b-f7d4e382ac90
Feb 25 12:51:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.7 MiB/s wr, 87 op/s
Feb 25 12:51:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:18 compute-0 nova_compute[244014]: 2026-02-25 12:51:18.879 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2729816460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:20 compute-0 nova_compute[244014]: 2026-02-25 12:51:20.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:20 compute-0 ceph-mon[76335]: pgmap v2122: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.7 MiB/s wr, 87 op/s
Feb 25 12:51:20 compute-0 nova_compute[244014]: 2026-02-25 12:51:20.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 578 KiB/s wr, 56 op/s
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.601 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.601 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.604 244018 INFO nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Terminating instance
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.605 244018 DEBUG nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:51:21 compute-0 kernel: tap33f0e898-94 (unregistering): left promiscuous mode
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:21 compute-0 NetworkManager[49836]: <info>  [1772023881.7896] device (tap33f0e898-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:21 compute-0 ovn_controller[147040]: 2026-02-25T12:51:21Z|01334|binding|INFO|Releasing lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 from this chassis (sb_readonly=0)
Feb 25 12:51:21 compute-0 ovn_controller[147040]: 2026-02-25T12:51:21Z|01335|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 down in Southbound
Feb 25 12:51:21 compute-0 ovn_controller[147040]: 2026-02-25T12:51:21Z|01336|binding|INFO|Removing iface tap33f0e898-94 ovn-installed in OVS
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:21 compute-0 nova_compute[244014]: 2026-02-25 12:51:21.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.814 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:16:fb 10.100.0.8'], port_security=['fa:16:3e:8d:16:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd6cf21ec-717e-41f7-9351-2214b43ce275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7113265a-1d97-4a63-a30a-7677c464f652', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=33f0e898-9477-416a-9ae6-268ef8e71ee3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.816 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 33f0e898-9477-416a-9ae6-268ef8e71ee3 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed unbound from our chassis
Feb 25 12:51:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.818 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06dcd48c-d26b-4718-b4c7-9c2416698bed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:51:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.819 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8308171b-d57f-4a40-988b-d21dbd6e719f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.820 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed namespace which is not needed anymore
Feb 25 12:51:21 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Feb 25 12:51:21 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 14.427s CPU time.
Feb 25 12:51:21 compute-0 systemd-machined[210048]: Machine qemu-156-instance-0000007c terminated.
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.040 244018 INFO nova.virt.libvirt.driver [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance destroyed successfully.
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.040 244018 DEBUG nova.objects.instance [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:22 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : haproxy version is 2.8.14-c23fe91
Feb 25 12:51:22 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : path to executable is /usr/sbin/haproxy
Feb 25 12:51:22 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [WARNING]  (355784) : Exiting Master process...
Feb 25 12:51:22 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [ALERT]    (355784) : Current worker (355786) exited with code 143 (Terminated)
Feb 25 12:51:22 compute-0 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [WARNING]  (355784) : All workers exited. Exiting... (0)
Feb 25 12:51:22 compute-0 systemd[1]: libpod-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope: Deactivated successfully.
Feb 25 12:51:22 compute-0 podman[357384]: 2026-02-25 12:51:22.057322275 +0000 UTC m=+0.128473828 container died a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.074 244018 DEBUG nova.virt.libvirt.vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:12Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.075 244018 DEBUG nova.network.os_vif_util [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.077 244018 DEBUG nova.network.os_vif_util [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.077 244018 DEBUG os_vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.081 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33f0e898-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.089 244018 INFO os_vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94')
Feb 25 12:51:22 compute-0 ceph-mon[76335]: pgmap v2123: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 578 KiB/s wr, 56 op/s
Feb 25 12:51:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed-userdata-shm.mount: Deactivated successfully.
Feb 25 12:51:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-66261fae152732234a66a0a4cefd1a9db46debf2297b95beda372509622fae68-merged.mount: Deactivated successfully.
Feb 25 12:51:22 compute-0 podman[357384]: 2026-02-25 12:51:22.280185998 +0000 UTC m=+0.351337551 container cleanup a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:51:22 compute-0 systemd[1]: libpod-conmon-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope: Deactivated successfully.
Feb 25 12:51:22 compute-0 podman[357441]: 2026-02-25 12:51:22.415220671 +0000 UTC m=+0.109127812 container remove a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.422 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e06e821f-51ea-4b36-a8a7-0356456ca892]: (4, ('Wed Feb 25 12:51:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed (a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed)\na0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed\nWed Feb 25 12:51:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed (a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed)\na0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c46cf-c5ed-4f87-b291-e9c3a541ca76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.425 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:22 compute-0 kernel: tap06dcd48c-d0: left promiscuous mode
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.441 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4da7e76f-cb69-4ee9-ae68-1793b7fd940e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.461 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d63bd2-949f-4ecf-8ed2-e302e15c1cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c045220-8c25-4d2c-9d1c-ae2f8e9bfb47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.481 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1c5342-2436-41a3-a2ca-db4a78551e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578112, 'reachable_time': 27766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357459, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.485 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:51:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.485 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[da963c7d-fd62-4d82-be7c-880c1378778e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d06dcd48c\x2dd26b\x2d4718\x2db4c7\x2d9c2416698bed.mount: Deactivated successfully.
Feb 25 12:51:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 203 KiB/s rd, 578 KiB/s wr, 144 op/s
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.649 244018 INFO nova.virt.libvirt.driver [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deleting instance files /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275_del
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.650 244018 INFO nova.virt.libvirt.driver [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deletion of /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275_del complete
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.720 244018 INFO nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 1.11 seconds to destroy the instance on the hypervisor.
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.721 244018 DEBUG oslo.service.loopingcall [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.722 244018 DEBUG nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:51:22 compute-0 nova_compute[244014]: 2026-02-25 12:51:22.722 244018 DEBUG nova.network.neutron [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:51:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.859 244018 DEBUG nova.network.neutron [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.875 244018 INFO nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 1.15 seconds to deallocate network for instance.
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.956 244018 DEBUG nova.compute.manager [req-6c992cc8-881f-4985-8153-c57d2d418265 req-89d57ae2-8ff2-4f97-87b6-c84f2f1ef9e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-deleted-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:23 compute-0 nova_compute[244014]: 2026-02-25 12:51:23.988 244018 DEBUG oslo_concurrency.processutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.018 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8:0:1:f816:3eff:fe6f:4ba0 2001:db8::f816:3eff:fe6f:4ba0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe6f:4ba0/64 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=591ce053-3764-4ce0-841f-6728c8fd9491) old=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8::f816:3eff:fe6f:4ba0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.021 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 591ce053-3764-4ce0-841f-6728c8fd9491 in datapath e19ed85e-54ee-4274-951c-ade412625983 updated
Feb 25 12:51:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.023 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:51:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.025 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[862d66ae-5108-4c7a-b26c-8b0ea25b8e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:24 compute-0 ceph-mon[76335]: pgmap v2124: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 203 KiB/s rd, 578 KiB/s wr, 144 op/s
Feb 25 12:51:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042474607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.532 244018 DEBUG oslo_concurrency.processutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.539 244018 DEBUG nova.compute.provider_tree [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.558 244018 DEBUG nova.scheduler.client.report [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:51:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 13 KiB/s wr, 117 op/s
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.591 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.618 244018 INFO nova.scheduler.client.report [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance d6cf21ec-717e-41f7-9351-2214b43ce275
Feb 25 12:51:24 compute-0 nova_compute[244014]: 2026-02-25 12:51:24.699 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1042474607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:25 compute-0 nova_compute[244014]: 2026-02-25 12:51:25.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:26 compute-0 ceph-mon[76335]: pgmap v2125: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 13 KiB/s wr, 117 op/s
Feb 25 12:51:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 210 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 14 KiB/s wr, 130 op/s
Feb 25 12:51:27 compute-0 nova_compute[244014]: 2026-02-25 12:51:27.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:27 compute-0 nova_compute[244014]: 2026-02-25 12:51:27.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:27 compute-0 nova_compute[244014]: 2026-02-25 12:51:27.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:28 compute-0 ceph-mon[76335]: pgmap v2126: 305 pgs: 305 active+clean; 210 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 14 KiB/s wr, 130 op/s
Feb 25 12:51:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 2.8 KiB/s wr, 139 op/s
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.617 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.618 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.634 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:51:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.721 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.730 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.730 244018 INFO nova.compute.claims [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.824 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:28 compute-0 nova_compute[244014]: 2026-02-25 12:51:28.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001207695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.377 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.383 244018 DEBUG nova.compute.provider_tree [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.396 244018 DEBUG nova.scheduler.client.report [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.415 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.415 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.418 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.418 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.419 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.419 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.509 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.510 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.530 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.550 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.663 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.665 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.666 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating image(s)
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.700 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.735 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.773 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.779 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.819 244018 DEBUG nova.policy [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.847 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.848 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.849 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.850 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.884 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.888 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097842149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.965 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.987 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023874.9873362, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:29 compute-0 nova_compute[244014]: 2026-02-25 12:51:29.989 244018 INFO nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Stopped (Lifecycle Event)
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.026 244018 DEBUG nova.compute.manager [None req-b89ab770-f981-4c63-a899-0c7203dd2a8b - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.171 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:30 compute-0 ceph-mon[76335]: pgmap v2127: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 2.8 KiB/s wr, 139 op/s
Feb 25 12:51:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3001207695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1097842149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.259 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.373 244018 DEBUG nova.objects.instance [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.390 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.391 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Ensure instance console log exists: /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.392 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.392 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.393 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.430 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.431 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.9873388139531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.432 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.432 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.516 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.517 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.517 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:51:30 compute-0 nova_compute[244014]: 2026-02-25 12:51:30.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.2 KiB/s wr, 116 op/s
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:51:31
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:51:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598244499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.135 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.142 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.163 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.189 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.190 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/598244499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:51:31 compute-0 sudo[357717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:31 compute-0 sudo[357717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:31 compute-0 sudo[357717]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:31 compute-0 nova_compute[244014]: 2026-02-25 12:51:31.741 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Successfully created port: 9981394d-e733-404e-85a5-e2e51877881a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:51:31 compute-0 sudo[357742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 12:51:31 compute-0 sudo[357742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:32 compute-0 sudo[357742]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:51:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:51:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:32 compute-0 sudo[357788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:32 compute-0 sudo[357788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:32 compute-0 sudo[357788]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:51:32 compute-0 sudo[357813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:51:32 compute-0 sudo[357813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.258 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.259 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:32 compute-0 ceph-mon[76335]: pgmap v2128: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.2 KiB/s wr, 116 op/s
Feb 25 12:51:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 12:51:32 compute-0 sudo[357813]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:32 compute-0 sudo[357869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:32 compute-0 sudo[357869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:32 compute-0 sudo[357869]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.891 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Successfully updated port: 9981394d-e733-404e-85a5-e2e51877881a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:51:32 compute-0 sudo[357894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- inventory --format=json-pretty --filter-for-batch
Feb 25 12:51:32 compute-0 sudo[357894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.949 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.949 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:51:32 compute-0 nova_compute[244014]: 2026-02-25 12:51:32.950 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:51:33 compute-0 nova_compute[244014]: 2026-02-25 12:51:33.030 244018 DEBUG nova.compute.manager [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:33 compute-0 nova_compute[244014]: 2026-02-25 12:51:33.031 244018 DEBUG nova.compute.manager [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:51:33 compute-0 nova_compute[244014]: 2026-02-25 12:51:33.031 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:51:33 compute-0 nova_compute[244014]: 2026-02-25 12:51:33.191 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.217894682 +0000 UTC m=+0.057835134 container create 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:51:33 compute-0 systemd[1]: Started libpod-conmon-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope.
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.19305844 +0000 UTC m=+0.032998952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.306976437 +0000 UTC m=+0.146916949 container init 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.31629055 +0000 UTC m=+0.156230972 container start 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.319865211 +0000 UTC m=+0.159805723 container attach 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:51:33 compute-0 eager_dewdney[357947]: 167 167
Feb 25 12:51:33 compute-0 systemd[1]: libpod-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope: Deactivated successfully.
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.320658323 +0000 UTC m=+0.160598785 container died 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 12:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a629305b72bdccd98a16f5220eddbdb67af4c8565cf8c6e347fd33f6304c6f5-merged.mount: Deactivated successfully.
Feb 25 12:51:33 compute-0 podman[357931]: 2026-02-25 12:51:33.364839341 +0000 UTC m=+0.204779803 container remove 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:51:33 compute-0 systemd[1]: libpod-conmon-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope: Deactivated successfully.
Feb 25 12:51:33 compute-0 podman[357971]: 2026-02-25 12:51:33.547214011 +0000 UTC m=+0.060628443 container create 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:51:33 compute-0 systemd[1]: Started libpod-conmon-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope.
Feb 25 12:51:33 compute-0 podman[357971]: 2026-02-25 12:51:33.520668001 +0000 UTC m=+0.034082453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:33 compute-0 podman[357971]: 2026-02-25 12:51:33.657843834 +0000 UTC m=+0.171258316 container init 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:51:33 compute-0 podman[357971]: 2026-02-25 12:51:33.670325937 +0000 UTC m=+0.183740379 container start 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:51:33 compute-0 podman[357971]: 2026-02-25 12:51:33.674067963 +0000 UTC m=+0.187482405 container attach 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:51:34 compute-0 confident_brown[357988]: [
Feb 25 12:51:34 compute-0 confident_brown[357988]:     {
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "available": false,
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "being_replaced": false,
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "ceph_device_lvm": false,
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "lsm_data": {},
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "lvs": [],
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "path": "/dev/sr0",
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "rejected_reasons": [
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "Has a FileSystem",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "Insufficient space (<5GB)"
Feb 25 12:51:34 compute-0 confident_brown[357988]:         ],
Feb 25 12:51:34 compute-0 confident_brown[357988]:         "sys_api": {
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "actuators": null,
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "device_nodes": [
Feb 25 12:51:34 compute-0 confident_brown[357988]:                 "sr0"
Feb 25 12:51:34 compute-0 confident_brown[357988]:             ],
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "devname": "sr0",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "human_readable_size": "482.00 KB",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "id_bus": "ata",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "model": "QEMU DVD-ROM",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "nr_requests": "2",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "parent": "/dev/sr0",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "partitions": {},
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "path": "/dev/sr0",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "removable": "1",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "rev": "2.5+",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "ro": "0",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "rotational": "1",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "sas_address": "",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "sas_device_handle": "",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "scheduler_mode": "mq-deadline",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "sectors": 0,
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "sectorsize": "2048",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "size": 493568.0,
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "support_discard": "2048",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "type": "disk",
Feb 25 12:51:34 compute-0 confident_brown[357988]:             "vendor": "QEMU"
Feb 25 12:51:34 compute-0 confident_brown[357988]:         }
Feb 25 12:51:34 compute-0 confident_brown[357988]:     }
Feb 25 12:51:34 compute-0 confident_brown[357988]: ]
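The JSON array above is the device inventory cephadm gathered by running ceph-volume inside the short-lived confident_brown container: the only device found, /dev/sr0 (the QEMU DVD-ROM), is marked unavailable because it carries a filesystem and is under 5 GB. A minimal Python sketch of filtering such an inventory by its rejected_reasons field, assuming the array has been saved to a local file (the filename inventory.json is hypothetical; in the log it is streamed to stdout):

    import json

    # Load a saved copy of the ceph-volume inventory shown above.
    # ("inventory.json" is a hypothetical path for this sketch.)
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        status = "available" if dev["available"] else "rejected"
        reasons = ", ".join(dev["rejected_reasons"]) or "-"
        print(f"{dev['path']}: {status} ({reasons})")

    # For the inventory above this prints:
    # /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB))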
Feb 25 12:51:34 compute-0 systemd[1]: libpod-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope: Deactivated successfully.
Feb 25 12:51:34 compute-0 podman[357971]: 2026-02-25 12:51:34.273334684 +0000 UTC m=+0.786749136 container died 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 12:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242-merged.mount: Deactivated successfully.
Feb 25 12:51:34 compute-0 podman[357971]: 2026-02-25 12:51:34.323009017 +0000 UTC m=+0.836423459 container remove 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:51:34 compute-0 systemd[1]: libpod-conmon-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope: Deactivated successfully.
Feb 25 12:51:34 compute-0 sudo[357894]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:51:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:51:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:51:34 compute-0 sudo[358718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:34 compute-0 sudo[358718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.511 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:34 compute-0 sudo[358718]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.568 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.568 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance network_info: |[{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.569 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.569 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.575 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start _get_guest_xml network_info=[{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:51:34 compute-0 sudo[358743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
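The sudo line above is the mgr's cephadm module driving OSD creation for the default_drive_group spec: it runs ceph-volume inside the Ceph container against three pre-built logical volumes (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) with a bluestore objectstore. A hedged sketch of previewing the same batch call from Python before committing, assuming ceph-volume is on PATH and that the installed release supports its --report dry-run flag; all other arguments are copied verbatim from the logged command:

    import subprocess

    # LV paths copied from the cephadm invocation logged above.
    lvs = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]

    # "--report" asks ceph-volume to describe what "lvm batch" would do
    # without creating OSDs; drop it to mirror the logged call exactly.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--objectstore", "bluestore", "--report"],
        check=True,
    )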
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.582 244018 WARNING nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:51:34 compute-0 sudo[358743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.588 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.589 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:51:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.593 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.594 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.595 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.595 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.596 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.598 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.598 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.600 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.605 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:34 compute-0 nova_compute[244014]: 2026-02-25 12:51:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:34 compute-0 podman[358800]: 2026-02-25 12:51:34.94962981 +0000 UTC m=+0.063160384 container create 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:34.926368694 +0000 UTC m=+0.039899338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:35 compute-0 systemd[1]: Started libpod-conmon-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope.
Feb 25 12:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:35.076516403 +0000 UTC m=+0.190047007 container init 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:51:35 compute-0 podman[358814]: 2026-02-25 12:51:35.080558598 +0000 UTC m=+0.079992090 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:35.084131418 +0000 UTC m=+0.197661972 container start 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:51:35 compute-0 bold_brown[358832]: 167 167
Feb 25 12:51:35 compute-0 systemd[1]: libpod-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope: Deactivated successfully.
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:35.092039882 +0000 UTC m=+0.205570506 container attach 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:35.092782533 +0000 UTC m=+0.206313097 container died 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:51:35 compute-0 podman[358817]: 2026-02-25 12:51:35.111041028 +0000 UTC m=+0.107985160 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb9f0771bda27cd0e290ca7fdc6fe5c32d01ed42287f57da7ab65de40f09e00-merged.mount: Deactivated successfully.
Feb 25 12:51:35 compute-0 podman[358800]: 2026-02-25 12:51:35.136792405 +0000 UTC m=+0.250322969 container remove 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:51:35 compute-0 systemd[1]: libpod-conmon-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope: Deactivated successfully.
Feb 25 12:51:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:51:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982017398' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.240 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.262 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.266 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.322148679 +0000 UTC m=+0.073876967 container create ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:51:35 compute-0 systemd[1]: Started libpod-conmon-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope.
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.287957644 +0000 UTC m=+0.039685992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:51:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3982017398' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.429767288 +0000 UTC m=+0.181495576 container init ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.447276042 +0000 UTC m=+0.199004330 container start ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.4553196 +0000 UTC m=+0.207047888 container attach ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:51:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:51:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287189214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:35 compute-0 practical_northcutt[358922]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:51:35 compute-0 practical_northcutt[358922]: --> All data devices are unavailable
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.887 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
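Both 0.6-second "ceph mon dump" round trips above (started at 12:51:34.605 and 12:51:35.266) are nova's rbd storage driver resolving monitor addresses before it builds the guest definition. A minimal sketch of the same lookup, mirroring the logged command line exactly; the key names parsed from the JSON ("mons", "name", "public_addrs") are assumptions that may vary across Ceph releases:

    import json
    import subprocess

    # The exact command nova_compute runs above via oslo.concurrency.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        # "public_addrs" holds the monitor's address vector in recent Ceph.
        print(mon.get("name"), mon.get("public_addrs"))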
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.889 244018 DEBUG nova.virt.libvirt.vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:29Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.889 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.890 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.891 244018 DEBUG nova.objects.instance [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:35 compute-0 systemd[1]: libpod-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope: Deactivated successfully.
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.906 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <uuid>dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</uuid>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <name>instance-0000007e</name>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1033485245</nova:name>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:51:34</nova:creationTime>
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.906932562 +0000 UTC m=+0.658660820 container died ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <nova:port uuid="9981394d-e733-404e-85a5-e2e51877881a">
Feb 25 12:51:35 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe2b:9299" ipVersion="6"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:9299" ipVersion="6"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <system>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="serial">dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="uuid">dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </system>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <os>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </os>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <features>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </features>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk">
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config">
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </source>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:51:35 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:2b:92:99"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <target dev="tap9981394d-e7"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/console.log" append="off"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <video>
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </video>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:51:35 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:51:35 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:51:35 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:51:35 compute-0 nova_compute[244014]: </domain>
Feb 25 12:51:35 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
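
The domain definition logged above (from <domain type="kvm"> through </domain>) is what nova hands to libvirt for instance-0000007e. Once defined, the same document can be read back from libvirt directly; a minimal sketch using the libvirt-python bindings, assuming the local qemu:///system daemon is reachable:

    import libvirt

    # Read-only connection is enough for inspection.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("dd7feae9-9d2a-41b6-9277-cbf51a2c8f23")
    print(dom.XMLDesc())  # same <domain type="kvm"> document as in the log
    conn.close()
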
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.906 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Preparing to wait for external event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
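
The three lockutils lines above are oslo.concurrency's standard acquire/release logging around the per-instance "-events" lock. A minimal sketch of the same pattern (the function body here is illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events")
    def _create_or_get_event():
        # Runs with the named lock held; oslo.concurrency emits the
        # "Acquiring" / "acquired" / "released" DEBUG lines seen in the log.
        pass
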
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG nova.virt.libvirt.vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:29Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.908 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.908 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG os_vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.910 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9981394d-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9981394d-e7, col_values=(('external_ids', {'iface-id': '9981394d-e733-404e-85a5-e2e51877881a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:92:99', 'vm-uuid': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.913 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:35 compute-0 NetworkManager[49836]: <info>  [1772023895.9148] manager: (tap9981394d-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.921 244018 INFO os_vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7')
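
The AddBridgeCommand, AddPortCommand and DbSetCommand transactions above are what os-vif issues to plug the tap device into br-int (as seen here, the tap name is "tap" plus the first 11 characters of the port UUID). Roughly the same calls can be made through ovsdbapp's public API; in this sketch the socket path and timeout are assumptions, while the bridge, port name and external_ids are taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = "tap9981394d-e7"
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", port, may_exist=True))
        txn.add(api.db_set("Interface", port, ("external_ids", {
            "iface-id": "9981394d-e733-404e-85a5-e2e51877881a",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:2b:92:99",
            "vm-uuid": "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23",
        })))

The external_ids are what let ovn-controller later match this OVS interface to the Neutron port and claim it, as the binding lines further down show.
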
Feb 25 12:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9-merged.mount: Deactivated successfully.
Feb 25 12:51:35 compute-0 podman[358887]: 2026-02-25 12:51:35.954877455 +0000 UTC m=+0.706605753 container remove ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:51:35 compute-0 systemd[1]: libpod-conmon-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope: Deactivated successfully.
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.967 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.967 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.968 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:2b:92:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.968 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Using config drive
Feb 25 12:51:35 compute-0 nova_compute[244014]: 2026-02-25 12:51:35.988 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:36 compute-0 sudo[358743]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:36 compute-0 sudo[358994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:36 compute-0 sudo[358994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:36 compute-0 sudo[358994]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:36 compute-0 sudo[359019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:51:36 compute-0 sudo[359019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:36 compute-0 ceph-mon[76335]: pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:51:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4287189214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:36 compute-0 podman[359058]: 2026-02-25 12:51:36.43138397 +0000 UTC m=+0.054259244 container create 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:51:36 compute-0 systemd[1]: Started libpod-conmon-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope.
Feb 25 12:51:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:36 compute-0 podman[359058]: 2026-02-25 12:51:36.410351646 +0000 UTC m=+0.033226810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:36 compute-0 podman[359058]: 2026-02-25 12:51:36.511549743 +0000 UTC m=+0.134424897 container init 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:51:36 compute-0 podman[359058]: 2026-02-25 12:51:36.519726474 +0000 UTC m=+0.142601608 container start 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:51:36 compute-0 focused_yalow[359075]: 167 167
Feb 25 12:51:36 compute-0 podman[359058]: 2026-02-25 12:51:36.52490261 +0000 UTC m=+0.147777744 container attach 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 12:51:36 compute-0 systemd[1]: libpod-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope: Deactivated successfully.
Feb 25 12:51:36 compute-0 podman[359080]: 2026-02-25 12:51:36.580037307 +0000 UTC m=+0.040822064 container died 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:51:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8d7c7d0ce80ecdf82e2e7671a0ed943ab803a09482b98006c3eb61018cd07f8-merged.mount: Deactivated successfully.
Feb 25 12:51:36 compute-0 podman[359080]: 2026-02-25 12:51:36.636191243 +0000 UTC m=+0.096975960 container remove 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 12:51:36 compute-0 systemd[1]: libpod-conmon-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope: Deactivated successfully.
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.773 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating config drive at /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.780 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj6sm1g_7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:36 compute-0 podman[359102]: 2026-02-25 12:51:36.86309572 +0000 UTC m=+0.090010793 container create 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:36 compute-0 podman[359102]: 2026-02-25 12:51:36.8025427 +0000 UTC m=+0.029457863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:36 compute-0 systemd[1]: Started libpod-conmon-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope.
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.927 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj6sm1g_7" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.955 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:36 compute-0 nova_compute[244014]: 2026-02-25 12:51:36.959 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:36 compute-0 podman[359102]: 2026-02-25 12:51:36.971226553 +0000 UTC m=+0.198141636 container init 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 12:51:36 compute-0 podman[359102]: 2026-02-25 12:51:36.981461892 +0000 UTC m=+0.208376995 container start 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:51:36 compute-0 podman[359102]: 2026-02-25 12:51:36.991125555 +0000 UTC m=+0.218040708 container attach 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.039 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023882.037794, d6cf21ec-717e-41f7-9351-2214b43ce275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.040 244018 INFO nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Stopped (Lifecycle Event)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.070 244018 DEBUG nova.compute.manager [None req-3e2077a3-43ad-4595-820f-2caff9af09eb - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.166 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.166 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.183 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.183 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deleting local config drive /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config because it was imported into RBD.
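
The config-drive sequence above builds an ISO9660 image with mkisofs, imports it into the vms RBD pool as <uuid>_disk.config, and deletes the local copy. Both commands run through oslo.concurrency's processutils; an equivalent sketch, with arguments taken from the log (the -publisher and -quiet mkisofs flags are omitted here for brevity):

    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23"

    # Build the config-drive ISO (volume label config-2, Joliet + Rock Ridge).
    processutils.execute(
        "/usr/bin/mkisofs", "-o", f"{base}/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-J", "-r", "-V", "config-2", "/tmp/tmpj6sm1g_7")

    # Import the ISO into the vms pool; nova then removes the local file.
    processutils.execute(
        "rbd", "import", "--pool", "vms", f"{base}/disk.config",
        "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")
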
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.186 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:51:37 compute-0 loving_panini[359122]: {
Feb 25 12:51:37 compute-0 loving_panini[359122]:     "0": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:         {
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "devices": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "/dev/loop3"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             ],
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_name": "ceph_lv0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_size": "21470642176",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "name": "ceph_lv0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "tags": {
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_name": "ceph",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.crush_device_class": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.encrypted": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.objectstore": "bluestore",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_id": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.vdo": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.with_tpm": "0"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             },
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "vg_name": "ceph_vg0"
Feb 25 12:51:37 compute-0 loving_panini[359122]:         }
Feb 25 12:51:37 compute-0 loving_panini[359122]:     ],
Feb 25 12:51:37 compute-0 loving_panini[359122]:     "1": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:         {
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "devices": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "/dev/loop4"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             ],
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_name": "ceph_lv1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_size": "21470642176",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "name": "ceph_lv1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "tags": {
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_name": "ceph",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.crush_device_class": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.encrypted": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.objectstore": "bluestore",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_id": "1",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.vdo": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.with_tpm": "0"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             },
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "vg_name": "ceph_vg1"
Feb 25 12:51:37 compute-0 loving_panini[359122]:         }
Feb 25 12:51:37 compute-0 loving_panini[359122]:     ],
Feb 25 12:51:37 compute-0 loving_panini[359122]:     "2": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:         {
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "devices": [
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "/dev/loop5"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             ],
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_name": "ceph_lv2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_size": "21470642176",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "name": "ceph_lv2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "tags": {
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.cluster_name": "ceph",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.crush_device_class": "",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.encrypted": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.objectstore": "bluestore",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osd_id": "2",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.vdo": "0",
Feb 25 12:51:37 compute-0 loving_panini[359122]:                 "ceph.with_tpm": "0"
Feb 25 12:51:37 compute-0 loving_panini[359122]:             },
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "type": "block",
Feb 25 12:51:37 compute-0 loving_panini[359122]:             "vg_name": "ceph_vg2"
Feb 25 12:51:37 compute-0 loving_panini[359122]:         }
Feb 25 12:51:37 compute-0 loving_panini[359122]:     ]
Feb 25 12:51:37 compute-0 loving_panini[359122]: }
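
The JSON emitted by the loving_panini container above is cephadm's ceph-volume lvm list --format json output: one key per OSD id, each entry carrying the backing devices, LV path and ceph.* tags. A small stdlib-only parser (the helper name is made up for illustration) that reduces it to osd_id -> (devices, LV path):

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> (devices, lv_path) from ceph-volume lvm list JSON."""
        out = {}
        for osd_id, entries in json.loads(lvm_list_json).items():
            for entry in entries:
                if entry.get("type") == "block":
                    out[int(osd_id)] = (entry["devices"], entry["lv_path"])
        return out

    # For the output above this yields:
    # {0: (['/dev/loop3'], '/dev/ceph_vg0/ceph_lv0'),
    #  1: (['/dev/loop4'], '/dev/ceph_vg1/ceph_lv1'),
    #  2: (['/dev/loop5'], '/dev/ceph_vg2/ceph_lv2')}
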
Feb 25 12:51:37 compute-0 kernel: tap9981394d-e7: entered promiscuous mode
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.2401] manager: (tap9981394d-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 ovn_controller[147040]: 2026-02-25T12:51:37Z|01337|binding|INFO|Claiming lport 9981394d-e733-404e-85a5-e2e51877881a for this chassis.
Feb 25 12:51:37 compute-0 ovn_controller[147040]: 2026-02-25T12:51:37Z|01338|binding|INFO|9981394d-e733-404e-85a5-e2e51877881a: Claiming fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], port_security=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fe2b:9299/64 2001:db8::f816:3eff:fe2b:9299/64', 'neutron:device_id': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9981394d-e733-404e-85a5-e2e51877881a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.264 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9981394d-e733-404e-85a5-e2e51877881a in datapath e19ed85e-54ee-4274-951c-ade412625983 bound to our chassis
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.265 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983
Feb 25 12:51:37 compute-0 systemd[1]: libpod-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope: Deactivated successfully.
Feb 25 12:51:37 compute-0 systemd-machined[210048]: New machine qemu-158-instance-0000007e.
Feb 25 12:51:37 compute-0 ovn_controller[147040]: 2026-02-25T12:51:37Z|01339|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a ovn-installed in OVS
Feb 25 12:51:37 compute-0 ovn_controller[147040]: 2026-02-25T12:51:37Z|01340|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a up in Southbound
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.279 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92192337-dcd5-4c1b-8b73-fe8ec1cf59e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.280 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape19ed85e-51 in ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:51:37 compute-0 systemd-udevd[359183]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:51:37 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.284 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape19ed85e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab47d5bf-f2bd-445b-bc04-081db138e0d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eff50e4b-62f2-498b-8af0-a594f386d79a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.2981] device (tap9981394d-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.2990] device (tap9981394d-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.297 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d69e979e-b3e8-4bab-855c-2bfd0c40f4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.319 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[960713b7-f467-420a-8b2d-200d549d4ad9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 podman[359181]: 2026-02-25 12:51:37.320993689 +0000 UTC m=+0.036048099 container died 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.349 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[27b7bbc6-9b07-4c21-82c0-143cb4ac7ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 systemd-udevd[359192]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.3542] manager: (tape19ed85e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/555)
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ea0d93-feba-4bd9-a25d-8ece8bc76273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.384 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[af91d82d-4358-4fc3-9a13-92b34e6650e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.387 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e3b357-7598-4ad5-b88a-d06efc4923a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.4088] device (tape19ed85e-50): carrier: link connected
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.414 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a015788-6052-4ece-8028-533dd74e54e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb45745d-44da-49cf-a212-8a0f9e71367f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359228, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34ffd3e8-e789-4a0c-818e-82778f352e20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:4ba0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586681, 'tstamp': 586681}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359229, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23-merged.mount: Deactivated successfully.
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4791bc-4ffc-4059-8c04-675b0b6d3598]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359230, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84d17b6e-1b26-42e5-90e5-96641380bfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ceph-mon[76335]: pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:51:37 compute-0 podman[359181]: 2026-02-25 12:51:37.53278898 +0000 UTC m=+0.247843390 container remove 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0e2f37-2ca8-49e2-9f03-f4dfae368c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.536 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 systemd[1]: libpod-conmon-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope: Deactivated successfully.
Feb 25 12:51:37 compute-0 kernel: tape19ed85e-50: entered promiscuous mode
Feb 25 12:51:37 compute-0 NetworkManager[49836]: <info>  [1772023897.5403] manager: (tape19ed85e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.542 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 ovn_controller[147040]: 2026-02-25T12:51:37Z|01341|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.574 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[897288d6-99f3-4c85-ba0e-2c87fd82454f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.576 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e19ed85e-54ee-4274-951c-ade412625983
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e19ed85e-54ee-4274-951c-ade412625983
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:51:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.578 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'env', 'PROCESS_TAG=haproxy-e19ed85e-54ee-4274-951c-ade412625983', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e19ed85e-54ee-4274-951c-ade412625983.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:51:37 compute-0 sudo[359019]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.605 244018 DEBUG nova.compute.manager [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.606 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.607 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.607 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.608 244018 DEBUG nova.compute.manager [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Processing event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:51:37 compute-0 sudo[359239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:51:37 compute-0 sudo[359239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:37 compute-0 sudo[359239]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:37 compute-0 sudo[359298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:51:37 compute-0 sudo[359298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.787 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.786589, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.787 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Started (Lifecycle Event)
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.789 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.792 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.794 244018 INFO nova.virt.libvirt.driver [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance spawned successfully.
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.795 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.828 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.838 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.838 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.839 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.839 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.840 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.840 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.7868466, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:37 compute-0 nova_compute[244014]: 2026-02-25 12:51:37.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Paused (Lifecycle Event)
Feb 25 12:51:37 compute-0 podman[359355]: 2026-02-25 12:51:37.940263636 +0000 UTC m=+0.066641443 container create f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:51:37 compute-0 podman[359379]: 2026-02-25 12:51:37.986642735 +0000 UTC m=+0.054438998 container create e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:51:37 compute-0 podman[359355]: 2026-02-25 12:51:37.893892236 +0000 UTC m=+0.020270043 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:51:38 compute-0 systemd[1]: Started libpod-conmon-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope.
Feb 25 12:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16cd2a2c94cf38fc347d4884920d0f50de2f7df3adcfcac47023bdece528de3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:38 compute-0 podman[359355]: 2026-02-25 12:51:38.054248704 +0000 UTC m=+0.180626501 container init f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:51:38 compute-0 podman[359355]: 2026-02-25 12:51:38.058557376 +0000 UTC m=+0.184935163 container start f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:37.97159728 +0000 UTC m=+0.039393563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:38 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : New worker (359404) forked
Feb 25 12:51:38 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : Loading success.
Feb 25 12:51:38 compute-0 systemd[1]: Started libpod-conmon-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope.
Feb 25 12:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.102 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.107 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.7913318, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.108 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Resumed (Lifecycle Event)
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:38.110344988 +0000 UTC m=+0.178141261 container init e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.114 244018 INFO nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 8.45 seconds to spawn the instance on the hypervisor.
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.115 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:38.117380936 +0000 UTC m=+0.185177199 container start e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:51:38 compute-0 fervent_cartwright[359413]: 167 167
Feb 25 12:51:38 compute-0 systemd[1]: libpod-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope: Deactivated successfully.
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:38.123126379 +0000 UTC m=+0.190922662 container attach e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:38.123727656 +0000 UTC m=+0.191523919 container died e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.136 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.162 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7f5e0a4569f1c0a4a8374dc215249e71b87684b5bda6c4616b5a2c3849fb9e-merged.mount: Deactivated successfully.
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.186 244018 INFO nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 9.48 seconds to build instance.
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.205 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:38 compute-0 podman[359379]: 2026-02-25 12:51:38.219364326 +0000 UTC m=+0.287160599 container remove e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:51:38 compute-0 systemd[1]: libpod-conmon-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope: Deactivated successfully.
Feb 25 12:51:38 compute-0 podman[359436]: 2026-02-25 12:51:38.417143041 +0000 UTC m=+0.074985748 container create 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:51:38 compute-0 systemd[1]: Started libpod-conmon-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope.
Feb 25 12:51:38 compute-0 podman[359436]: 2026-02-25 12:51:38.382497783 +0000 UTC m=+0.040340540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:38 compute-0 podman[359436]: 2026-02-25 12:51:38.563195635 +0000 UTC m=+0.221038382 container init 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:51:38 compute-0 podman[359436]: 2026-02-25 12:51:38.57046775 +0000 UTC m=+0.228310417 container start 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Feb 25 12:51:38 compute-0 podman[359436]: 2026-02-25 12:51:38.574524975 +0000 UTC m=+0.232367742 container attach 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:51:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:51:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:38 compute-0 nova_compute[244014]: 2026-02-25 12:51:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:39 compute-0 lvm[359527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:51:39 compute-0 lvm[359527]: VG ceph_vg0 finished
Feb 25 12:51:39 compute-0 lvm[359529]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:51:39 compute-0 lvm[359529]: VG ceph_vg1 finished
Feb 25 12:51:39 compute-0 lvm[359530]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:51:39 compute-0 lvm[359530]: VG ceph_vg2 finished
Feb 25 12:51:39 compute-0 lvm[359531]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:51:39 compute-0 lvm[359531]: VG ceph_vg2 finished
Feb 25 12:51:39 compute-0 lvm[359534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:51:39 compute-0 lvm[359534]: VG ceph_vg2 finished
Feb 25 12:51:39 compute-0 cool_swanson[359452]: {}
Feb 25 12:51:39 compute-0 systemd[1]: libpod-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Deactivated successfully.
Feb 25 12:51:39 compute-0 systemd[1]: libpod-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Consumed 1.030s CPU time.
Feb 25 12:51:39 compute-0 podman[359436]: 2026-02-25 12:51:39.397448422 +0000 UTC m=+1.055291109 container died 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66-merged.mount: Deactivated successfully.
Feb 25 12:51:39 compute-0 podman[359436]: 2026-02-25 12:51:39.464294689 +0000 UTC m=+1.122137366 container remove 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:51:39 compute-0 systemd[1]: libpod-conmon-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Deactivated successfully.
Feb 25 12:51:39 compute-0 sudo[359298]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:51:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:51:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:39 compute-0 sudo[359547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:51:39 compute-0 sudo[359547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:51:39 compute-0 sudo[359547]: pam_unix(sudo:session): session closed for user root
Feb 25 12:51:39 compute-0 ceph-mon[76335]: pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 12:51:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:51:39 compute-0 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 WARNING nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received unexpected event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a for instance with vm_state active and task_state None.
Feb 25 12:51:40 compute-0 nova_compute[244014]: 2026-02-25 12:51:40.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:51:40 compute-0 nova_compute[244014]: 2026-02-25 12:51:40.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:41 compute-0 ceph-mon[76335]: pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:51:41 compute-0 nova_compute[244014]: 2026-02-25 12:51:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:41 compute-0 nova_compute[244014]: 2026-02-25 12:51:41.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036411289436910166 of space, bias 1.0, pg target 0.1092338683107305 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494097439084752 of space, bias 1.0, pg target 0.7482292317254255 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3976293654385012e-06 of space, bias 4.0, pg target 0.0016771552385262014 quantized to 16 (current 16)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:51:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:51:43 compute-0 NetworkManager[49836]: <info>  [1772023903.1099] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Feb 25 12:51:43 compute-0 NetworkManager[49836]: <info>  [1772023903.1108] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:43 compute-0 ovn_controller[147040]: 2026-02-25T12:51:43Z|01342|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.667 244018 DEBUG nova.compute.manager [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.668 244018 DEBUG nova.compute.manager [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:51:43 compute-0 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:51:43 compute-0 ceph-mon[76335]: pgmap v2134: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:51:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.387 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.388 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.417 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.538 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.539 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.548 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.549 244018 INFO nova.compute.claims [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.668 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:45 compute-0 ceph-mon[76335]: pgmap v2135: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:51:45 compute-0 nova_compute[244014]: 2026-02-25 12:51:45.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:51:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869653802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.182 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.188 244018 DEBUG nova.compute.provider_tree [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.279 244018 DEBUG nova.scheduler.client.report [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.316 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.317 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.349 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.350 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.356 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.417 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.418 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.477 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.560 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:51:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.619 244018 DEBUG nova.policy [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.660 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.662 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.663 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating image(s)
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.696 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.733 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.763 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.767 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/869653802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.868 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.869 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.870 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.870 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.894 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:46 compute-0 nova_compute[244014]: 2026-02-25 12:51:46.897 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 809da994-7551-4f52-8920-b0dfaa2ef73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.186 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 809da994-7551-4f52-8920-b0dfaa2ef73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.234 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.297 244018 DEBUG nova.objects.instance [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Ensure instance console log exists: /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.311 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:51:47 compute-0 nova_compute[244014]: 2026-02-25 12:51:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:51:47 compute-0 ceph-mon[76335]: pgmap v2136: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:51:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:51:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:51:48 compute-0 nova_compute[244014]: 2026-02-25 12:51:48.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:48.273 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:48.276 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:51:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 12:51:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:48 compute-0 ovn_controller[147040]: 2026-02-25T12:51:48Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:92:99 10.100.0.7
Feb 25 12:51:48 compute-0 ovn_controller[147040]: 2026-02-25T12:51:48Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:92:99 10.100.0.7
Feb 25 12:51:49 compute-0 nova_compute[244014]: 2026-02-25 12:51:49.089 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully created port: 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:51:49 compute-0 ceph-mon[76335]: pgmap v2137: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.439 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully updated port: 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.475 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.476 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.477 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.537 244018 DEBUG nova.compute.manager [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.538 244018 DEBUG nova.compute.manager [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.538 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:51:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.631 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:51:50 compute-0 nova_compute[244014]: 2026-02-25 12:51:50.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:51 compute-0 ceph-mon[76335]: pgmap v2138: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 12:51:51 compute-0 nova_compute[244014]: 2026-02-25 12:51:51.937 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.081 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.082 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance network_info: |[{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.082 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.083 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.088 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start _get_guest_xml network_info=[{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.095 244018 WARNING nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.101 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.102 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.107 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.108 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.108 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.109 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.109 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.110 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.110 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.111 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.111 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.112 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.112 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
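The 1:1:1 choice falls out of enumerating every sockets x cores x threads factorization of the flavor's vCPU count that fits the 65536 limits; a simplified sketch of that search (Nova's real implementation in nova/virt/hardware.py also applies preferences and sort order):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield every (sockets, cores, threads) tuple whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above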
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.118 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Feb 25 12:51:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:51:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/188310535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.662 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
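This `ceph mon dump` is how the RBD backend learns the monitor endpoints that later appear as <host> elements in the guest disk XML; a hedged sketch of that step, assuming the same client.openstack credentials and that each mon entry carries a public_addr of the form 192.168.122.100:6789/0:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(out)["mons"]
    # Drop the trailing "/<nonce>" to get host:port pairs for the libvirt XML.
    endpoints = [m["public_addr"].rsplit("/", 1)[0] for m in mons]
    print(endpoints)  # e.g. ['192.168.122.100:6789']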
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.686 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:52 compute-0 nova_compute[244014]: 2026-02-25 12:51:52.689 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/188310535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:51:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225123064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.219 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.222 244018 DEBUG nova.virt.libvirt.vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:46Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.223 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.224 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.226 244018 DEBUG nova.objects.instance [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.247 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <name>instance-0000007f</name>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:51:52</nova:creationTime>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:51:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <system>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="serial">809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="uuid">809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </system>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <os>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </os>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <features>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </features>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk">
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config">
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:51:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c3:63:14"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <target dev="tap4f59e1f7-f0"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log" append="off"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <video>
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </video>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:51:53 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:51:53 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:51:53 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:51:53 compute-0 nova_compute[244014]: </domain>
Feb 25 12:51:53 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
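The finished XML is handed to libvirt next; a minimal sketch of that handoff with the libvirt Python bindings, assuming a local qemu:///system connection (Nova actually goes through nova.virt.libvirt.guest, but the effect is the same):

    import libvirt

    xml = open("domain.xml").read()  # the <domain> document logged above

    conn = libvirt.open("qemu:///system")
    try:
        # createXML boots a transient guest; defineXML would persist it first.
        dom = conn.createXML(xml, 0)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()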
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.249 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Preparing to wait for external event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.250 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.250 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.251 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.252 244018 DEBUG nova.virt.libvirt.vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:46Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.253 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.254 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.255 244018 DEBUG os_vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.257 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.258 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f59e1f7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.264 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f59e1f7-f0, col_values=(('external_ids', {'iface-id': '4f59e1f7-f07c-48a1-82b4-b6a563a7130a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:63:14', 'vm-uuid': '809da994-7551-4f52-8920-b0dfaa2ef73e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:53 compute-0 NetworkManager[49836]: <info>  [1772023913.2675] manager: (tap4f59e1f7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.276 244018 INFO os_vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0')
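The plug boils down to the two ovsdbapp commands above: add the tap port to br-int and stamp its Interface row with the Neutron port ID so OVN can claim it. The same transaction expressed via the ovs-vsctl CLI, shelled out from Python (values copied from this log; illustrative rather than os-vif's actual code path):

    import subprocess

    port = "tap4f59e1f7-f0"
    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:c3:63:14",
        "external_ids:vm-uuid=809da994-7551-4f52-8920-b0dfaa2ef73e",
    ])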
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.333 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c3:63:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Using config drive
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.354 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.685 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating config drive at /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.688 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuvt8xfhc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.829 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuvt8xfhc" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.867 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:51:53 compute-0 nova_compute[244014]: 2026-02-25 12:51:53.872 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:51:53 compute-0 ceph-mon[76335]: pgmap v2139: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Feb 25 12:51:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4225123064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.026 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.026 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deleting local config drive /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config because it was imported into RBD.
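The config-drive flow is: build an ISO9660 image locally, import it into the vms pool under the <uuid>_disk.config name the cdrom <source> above already points at, then discard the local copy. A sketch mirroring the two logged commands (paths and IDs taken verbatim from this log):

    import os
    import subprocess

    inst = "809da994-7551-4f52-8920-b0dfaa2ef73e"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Pack the staged metadata into an ISO labelled config-2.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpuvt8xfhc",  # the temp dir Nova staged for this run; ephemeral
    ])
    # 2. Import it into RBD so the guest's SATA cdrom attaches it over the network.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    # 3. The local file is redundant once the image lives in the vms pool.
    os.remove(iso)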
Feb 25 12:51:54 compute-0 kernel: tap4f59e1f7-f0: entered promiscuous mode
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.0793] manager: (tap4f59e1f7-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/560)
Feb 25 12:51:54 compute-0 ovn_controller[147040]: 2026-02-25T12:51:54Z|01343|binding|INFO|Claiming lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a for this chassis.
Feb 25 12:51:54 compute-0 ovn_controller[147040]: 2026-02-25T12:51:54Z|01344|binding|INFO|4f59e1f7-f07c-48a1-82b4-b6a563a7130a: Claiming fa:16:3e:c3:63:14 10.100.0.12
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 ovn_controller[147040]: 2026-02-25T12:51:54Z|01345|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a ovn-installed in OVS
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.092 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 systemd-machined[210048]: New machine qemu-159-instance-0000007f.
Feb 25 12:51:54 compute-0 ovn_controller[147040]: 2026-02-25T12:51:54Z|01346|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a up in Southbound
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.131 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:63:14 10.100.0.12'], port_security=['fa:16:3e:c3:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e3f0007-e379-4ef5-bc52-5669e937e826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9099fb32-6839-4ebe-bdc3-ec67cc06d180, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f59e1f7-f07c-48a1-82b4-b6a563a7130a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.134 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a in datapath 526ae63c-3640-4e70-a308-56e7a67e4cf2 bound to our chassis
Feb 25 12:51:54 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.139 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 526ae63c-3640-4e70-a308-56e7a67e4cf2
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[302073b1-c067-472d-98c3-60d2e9a1cd1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.152 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap526ae63c-31 in ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
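Provisioning here means a dedicated ovnmeta-<network> namespace holding one end of a veth pair: tap526ae63c-31 lives inside it, while its peer tap526ae63c-30 stays in the root namespace and is plugged into br-int a few lines below. A sketch of the equivalent iproute2 calls (names from this log; the agent itself drives pyroute2 through privsep rather than shelling out):

    import subprocess

    ns = "ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2"
    outer, inner = "tap526ae63c-30", "tap526ae63c-31"

    subprocess.check_call(["ip", "netns", "add", ns])
    subprocess.check_call(["ip", "link", "add", outer, "type", "veth",
                           "peer", "name", inner])
    subprocess.check_call(["ip", "link", "set", inner, "netns", ns])
    subprocess.check_call(["ip", "netns", "exec", ns,
                           "ip", "link", "set", inner, "up"])
    subprocess.check_call(["ip", "link", "set", outer, "up"])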
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.155 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap526ae63c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1455a863-1660-4656-8f19-1de7f6403898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c59de4e7-db14-48a3-aea2-43b4722b1c85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 systemd-udevd[359897]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.171 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e369750d-34f3-490d-8f67-42a1bd3924ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.1801] device (tap4f59e1f7-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.1808] device (tap4f59e1f7-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5364a42-73f9-4aa6-a09f-7ab70ec6a10d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.219 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd05c65-d051-4fed-a64b-9919584f59db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.2272] manager: (tap526ae63c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/561)
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4193afe-4cd6-49ab-b50c-92073cdcd9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.266 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[61995f98-d8fc-4e6c-8b58-84af7d2e33e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.270 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[098db765-0630-4d16-bc5c-04559da5b97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.2974] device (tap526ae63c-30): carrier: link connected
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.306 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ab3389-7d1d-4777-a12d-12a341507ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.323 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd522ff-a0ce-4015-8efd-dcb53e635f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap526ae63c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:f2:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588370, 'reachable_time': 33616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359929, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7d2e04-90f6-4a59-ba4e-5c1f22a1f317]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:f27b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588370, 'tstamp': 588370}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359930, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9dadb63-e84f-472d-8c8a-f3e72d325e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap526ae63c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:f2:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588370, 'reachable_time': 33616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359931, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
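The two privsep replies above are netlink messages (an RTM_NEWADDR for the link-local address, then an RTM_NEWLINK describing the veth tap526ae63c-31) that the privileged daemon fetched from inside the ovnmeta- namespace and relayed back to the agent. Each message is a dict whose 'attrs' list holds [name, value] pairs, nested for composite attributes such as IFLA_AF_SPEC. A minimal sketch of reading the same fields with pyroute2, which produces messages of this shape; the attribute names are taken from the log, the loop itself is illustrative:

    # Sketch: list links and pull the attributes seen in the reply above.
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for msg in ipr.get_links():
            name = msg.get_attr('IFLA_IFNAME')     # e.g. 'tap526ae63c-31'
            oper = msg.get_attr('IFLA_OPERSTATE')  # e.g. 'UP'
            mac = msg.get_attr('IFLA_ADDRESS')     # e.g. 'fa:16:3e:4d:f2:7b'
            print(name, oper, mac)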
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b392db4-caa9-462b-990b-cb281452eff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.443 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2120c06-204b-4ba8-9c49-acc51a5c5664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap526ae63c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap526ae63c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 kernel: tap526ae63c-30: entered promiscuous mode
Feb 25 12:51:54 compute-0 NetworkManager[49836]: <info>  [1772023914.4537] manager: (tap526ae63c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.464 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap526ae63c-30, col_values=(('external_ids', {'iface-id': '1599c73d-07eb-42e9-83bd-2cf546347a5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
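The three ovsdbapp transactions above perform the metadata port plug: drop tap526ae63c-30 from br-ex if it is there (a no-op here, hence "Transaction caused no change"), add it to br-int, and set external_ids:iface-id so ovn-controller can bind the logical port. A hedged sketch of the same sequence through ovsdbapp's Open vSwitch API; 'api' is assumed to be an already-connected OvsdbIdl instance, and the method names mirror the logged DelPortCommand/AddPortCommand/DbSetCommand:

    # Sketch only: connection setup elided; commands queue until commit.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap526ae63c-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap526ae63c-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap526ae63c-30',
            ('external_ids',
             {'iface-id': '1599c73d-07eb-42e9-83bd-2cf546347a5b'})))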
Feb 25 12:51:54 compute-0 ovn_controller[147040]: 2026-02-25T12:51:54Z|01347|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.470 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.471 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b050fae9-3b15-4852-9e27-beae8c1415e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.472 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-526ae63c-3640-4e70-a308-56e7a67e4cf2
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 526ae63c-3640-4e70-a308-56e7a67e4cf2
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
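The rendered config binds 169.254.169.254:80 inside the ovnmeta- namespace, proxies every request to the agent's unix socket (in haproxy a server address beginning with '/' is treated as a unix socket path), and stamps each request with X-OVN-Network-ID so the metadata agent can map it back to the network. From a guest on that network the endpoint is reached with a plain HTTP request; a minimal example, assuming the EC2-compatible path:

    # Run inside a guest: fetch metadata through the haproxy shown above.
    import urllib.request

    with urllib.request.urlopen(
            'http://169.254.169.254/latest/meta-data/instance-id',
            timeout=5) as resp:
        print(resp.read().decode())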
Feb 25 12:51:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.473 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'env', 'PROCESS_TAG=haproxy-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/526ae63c-3640-4e70-a308-56e7a67e4cf2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
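The launch goes through rootwrap and 'ip netns exec' so haproxy inherits the ovnmeta- namespace and its link-local bind never leaks onto the host network. A minimal sketch of that pattern; run_in_netns is a hypothetical helper, the namespace and config path are taken from the log line above:

    import subprocess

    def run_in_netns(netns, cmd):
        # Hypothetical helper: re-execute cmd inside the named namespace,
        # as the rootwrap invocation above does for haproxy.
        return subprocess.run(['ip', 'netns', 'exec', netns, *cmd],
                              check=True, capture_output=True, text=True)

    run_in_netns('ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2',
                 ['haproxy', '-f',
                  '/var/lib/neutron/ovn-metadata-proxy/'
                  '526ae63c-3640-4e70-a308-56e7a67e4cf2.conf'])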
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.803 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023914.8030102, 809da994-7551-4f52-8920-b0dfaa2ef73e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.805 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Started (Lifecycle Event)
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.825 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.831 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023914.8044746, 809da994-7551-4f52-8920-b0dfaa2ef73e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.832 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Paused (Lifecycle Event)
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.849 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] During sync_power_state the instance has a pending task (spawning). Skip.
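In the sync message above, "DB power_state: 0, VM power_state: 3" reads as NOSTATE recorded in the database versus PAUSED reported by libvirt, and the sync is skipped because the task_state is still spawning. The numeric codes come from nova's power_state module; a small lookup table for reading these lines, values as defined in nova/compute/power_state.py:

    # Nova power-state codes seen in handle_lifecycle_event logs.
    POWER_STATES = {
        0: 'NOSTATE',    # nothing recorded yet (the DB side here)
        1: 'RUNNING',
        3: 'PAUSED',     # libvirt pauses the domain mid-spawn
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }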
Feb 25 12:51:54 compute-0 podman[360004]: 2026-02-25 12:51:54.941320586 +0000 UTC m=+0.070453680 container create ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.979 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:51:54 compute-0 nova_compute[244014]: 2026-02-25 12:51:54.980 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:51:54 compute-0 systemd[1]: Started libpod-conmon-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope.
Feb 25 12:51:54 compute-0 podman[360004]: 2026-02-25 12:51:54.905776922 +0000 UTC m=+0.034910066 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:51:55 compute-0 nova_compute[244014]: 2026-02-25 12:51:55.002 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:51:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.030 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f39925cd20db3978a9556b803bb9da1be3764b7cb84297a2c2d11b4e6c0059d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:51:55 compute-0 podman[360004]: 2026-02-25 12:51:55.046577028 +0000 UTC m=+0.175710132 container init ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:51:55 compute-0 podman[360004]: 2026-02-25 12:51:55.054537823 +0000 UTC m=+0.183670907 container start ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:51:55 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : New worker (360025) forked
Feb 25 12:51:55 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : Loading success.
Feb 25 12:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.278 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:51:55 compute-0 nova_compute[244014]: 2026-02-25 12:51:55.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:55 compute-0 ceph-mon[76335]: pgmap v2140: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 12:51:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Feb 25 12:51:57 compute-0 ceph-mon[76335]: pgmap v2141: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Feb 25 12:51:58 compute-0 nova_compute[244014]: 2026-02-25 12:51:58.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:51:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 12:51:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:51:59 compute-0 ceph-mon[76335]: pgmap v2142: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 12:52:00 compute-0 nova_compute[244014]: 2026-02-25 12:52:00.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.258 244018 DEBUG nova.compute.manager [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.258 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.259 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.259 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.260 244018 DEBUG nova.compute.manager [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Processing event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.261 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.271 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023921.2709608, 809da994-7551-4f52-8920-b0dfaa2ef73e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.271 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Resumed (Lifecycle Event)
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.274 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.279 244018 INFO nova.virt.libvirt.driver [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance spawned successfully.
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.280 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.299 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.307 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.314 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.315 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.316 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.316 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.317 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.318 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.351 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.436 244018 INFO nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 14.78 seconds to spawn the instance on the hypervisor.
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.437 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.495 244018 INFO nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 16.00 seconds to build instance.
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.509 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.652 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.653 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.676 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.756 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.757 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.766 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.767 244018 INFO nova.compute.claims [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:52:01 compute-0 nova_compute[244014]: 2026-02-25 12:52:01.891 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:01 compute-0 ceph-mon[76335]: pgmap v2143: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Feb 25 12:52:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239587367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.459 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
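The resource tracker shells out to ceph df to size the vms pool before claiming disk; the mon audit entries at 12:52:02 show the same command arriving as client.openstack. A minimal sketch of the probe, assuming the ceph CLI and the openstack keyring are available on the host and that the JSON output carries the usual top-level 'stats' block:

    import json
    import subprocess

    # Same probe nova runs above; prints cluster-wide free bytes.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    print(json.loads(out)['stats']['total_avail_bytes'])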
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.467 244018 DEBUG nova.compute.provider_tree [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.490 244018 DEBUG nova.scheduler.client.report [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
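Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the dict above advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. A worked check:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2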
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.522 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.523 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.592 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.593 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:52:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 3.7 MiB/s wr, 110 op/s
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.613 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.645 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.749 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.750 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.751 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating image(s)
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.775 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.803 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.834 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.839 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.925 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
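The qemu-img probe runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a corrupt or hostile image cannot wedge the compute service. A hedged equivalent through the library API, mirroring --as=1073741824 --cpu=30 from the logged command line:

    from oslo_concurrency import processutils

    # Sketch of the guarded probe above; returns (stdout, stderr).
    out, _ = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))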
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.926 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.927 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.928 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.961 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:02 compute-0 nova_compute[244014]: 2026-02-25 12:52:02.966 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3239587367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.032 244018 DEBUG nova.policy [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.261 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.341 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
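The root disk is imported into the vms pool from the cached base image, then grown to 1073741824 bytes (1 GiB). A minimal sketch of the resize step through the python rbd bindings that nova's rbd_utils wraps, assuming the same conf file and client id; pool and image names are taken from the log lines above:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            # Grow the freshly imported disk to the flavor's 1 GiB.
            with rbd.Image(ioctx,
                           '739026fc-9c96-4212-9fa3-e6731e7f61f9_disk') as img:
                img.resize(1073741824)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()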
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.378 244018 DEBUG nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.380 244018 DEBUG nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.380 244018 WARNING nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state active and task_state None.
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.432 244018 DEBUG nova.objects.instance [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.447 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Ensure instance console log exists: /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:03 compute-0 nova_compute[244014]: 2026-02-25 12:52:03.449 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:04 compute-0 ceph-mon[76335]: pgmap v2144: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 3.7 MiB/s wr, 110 op/s
Feb 25 12:52:04 compute-0 nova_compute[244014]: 2026-02-25 12:52:04.502 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Successfully created port: 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:52:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 25 KiB/s wr, 21 op/s
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.630 244018 DEBUG nova.compute.manager [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.631 244018 DEBUG nova.compute.manager [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.631 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.632 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:05 compute-0 nova_compute[244014]: 2026-02-25 12:52:05.632 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:05 compute-0 podman[360222]: 2026-02-25 12:52:05.760562964 +0000 UTC m=+0.094010535 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:52:05 compute-0 podman[360223]: 2026-02-25 12:52:05.773672355 +0000 UTC m=+0.097235907 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:52:06 compute-0 ceph-mon[76335]: pgmap v2145: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 25 KiB/s wr, 21 op/s
Feb 25 12:52:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 306 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 564 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.048 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Successfully updated port: 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.065 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.066 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.066 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.264 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.451 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.451 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.470 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.710 244018 DEBUG nova.compute.manager [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.711 244018 DEBUG nova.compute.manager [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:07 compute-0 nova_compute[244014]: 2026-02-25 12:52:07.711 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:08 compute-0 ceph-mon[76335]: pgmap v2146: 305 pgs: 305 active+clean; 306 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 564 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 25 12:52:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.774 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.792 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.792 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance network_info: |[{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.793 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.793 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.800 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start _get_guest_xml network_info=[{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.806 244018 WARNING nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.811 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.812 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.816 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.816 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.820 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.820 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:52:08 compute-0 nova_compute[244014]: 2026-02-25 12:52:08.823 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:52:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882774330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.336 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.377 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.383 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:52:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968571132' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.945 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.947 244018 DEBUG nova.virt.libvirt.vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.948 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.950 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.951 244018 DEBUG nova.objects.instance [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.966 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <uuid>739026fc-9c96-4212-9fa3-e6731e7f61f9</uuid>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <name>instance-00000080</name>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1844602195</nova:name>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:52:08</nova:creationTime>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <nova:port uuid="61c3ba1d-0d4b-426e-8d04-ce56efd650ae">
Feb 25 12:52:09 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe93:52d2" ipVersion="6"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe93:52d2" ipVersion="6"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <system>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="serial">739026fc-9c96-4212-9fa3-e6731e7f61f9</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="uuid">739026fc-9c96-4212-9fa3-e6731e7f61f9</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </system>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <os>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </os>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <features>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </features>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk">
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config">
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </source>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:52:09 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:93:52:d2"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <target dev="tap61c3ba1d-0d"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/console.log" append="off"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <video>
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </video>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:52:09 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:52:09 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:52:09 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:52:09 compute-0 nova_compute[244014]: </domain>
Feb 25 12:52:09 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.972 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Preparing to wait for external event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.974 244018 DEBUG nova.virt.libvirt.vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.975 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.976 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.976 244018 DEBUG os_vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.978 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.978 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.982 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c3ba1d-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.983 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61c3ba1d-0d, col_values=(('external_ids', {'iface-id': '61c3ba1d-0d4b-426e-8d04-ce56efd650ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:52:d2', 'vm-uuid': '739026fc-9c96-4212-9fa3-e6731e7f61f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:09 compute-0 NetworkManager[49836]: <info>  [1772023929.9855] manager: (tap61c3ba1d-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:09 compute-0 nova_compute[244014]: 2026-02-25 12:52:09.991 244018 INFO os_vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d')
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.045 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.045 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.046 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:93:52:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.046 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Using config drive
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.069 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:10 compute-0 ceph-mon[76335]: pgmap v2147: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 25 12:52:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2882774330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1968571132' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.472 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.473 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.492 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.498 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating config drive at /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.502 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpryghxuhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.635 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpryghxuhx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.661 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.667 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.841 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.841 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deleting local config drive /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config because it was imported into RBD.
Feb 25 12:52:10 compute-0 NetworkManager[49836]: <info>  [1772023930.8744] manager: (tap61c3ba1d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Feb 25 12:52:10 compute-0 kernel: tap61c3ba1d-0d: entered promiscuous mode
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 ovn_controller[147040]: 2026-02-25T12:52:10Z|01348|binding|INFO|Claiming lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae for this chassis.
Feb 25 12:52:10 compute-0 ovn_controller[147040]: 2026-02-25T12:52:10Z|01349|binding|INFO|61c3ba1d-0d4b-426e-8d04-ce56efd650ae: Claiming fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.889 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], port_security=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe93:52d2/64 2001:db8::f816:3eff:fe93:52d2/64', 'neutron:device_id': '739026fc-9c96-4212-9fa3-e6731e7f61f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=61c3ba1d-0d4b-426e-8d04-ce56efd650ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.891 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae in datapath e19ed85e-54ee-4274-951c-ade412625983 bound to our chassis
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.892 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983
Feb 25 12:52:10 compute-0 ovn_controller[147040]: 2026-02-25T12:52:10Z|01350|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae ovn-installed in OVS
Feb 25 12:52:10 compute-0 ovn_controller[147040]: 2026-02-25T12:52:10Z|01351|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae up in Southbound
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 systemd-udevd[360404]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:52:10 compute-0 systemd-machined[210048]: New machine qemu-160-instance-00000080.
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.904 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2e5a07-0e69-4b47-a3fa-3656541bc7aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 NetworkManager[49836]: <info>  [1772023930.9128] device (tap61c3ba1d-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:52:10 compute-0 NetworkManager[49836]: <info>  [1772023930.9135] device (tap61c3ba1d-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:52:10 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.934 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46c1c55f-3d13-40ff-8260-e688ea29e990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.937 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[19611ef2-e4db-4089-abfc-eb44a15685dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.956 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f118893d-e6b2-4fd9-acd8-28e8bbf17c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.966 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b84927c5-f2c6-4b4b-99fe-351dac7b3dc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360419, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.973 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[789a28e3-d492-4570-a7d9-fe50c5a4e702]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586691, 'tstamp': 586691}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360420, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586693, 'tstamp': 586693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360420, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.975 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 nova_compute[244014]: 2026-02-25 12:52:10.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.977 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.977 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.978 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.978 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG nova.compute.manager [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.362 244018 DEBUG nova.compute.manager [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Processing event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:52:11 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.727 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.7268791, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.728 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Started (Lifecycle Event)
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.732 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.737 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.740 244018 INFO nova.virt.libvirt.driver [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance spawned successfully.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.750 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.756 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.762 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.763 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.763 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.727957, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Paused (Lifecycle Event)
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.803 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.806 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.7361777, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.806 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Resumed (Lifecycle Event)
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.813 244018 INFO nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 9.06 seconds to spawn the instance on the hypervisor.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.813 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.824 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.827 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.847 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.878 244018 INFO nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 10.15 seconds to build instance.
Feb 25 12:52:11 compute-0 nova_compute[244014]: 2026-02-25 12:52:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:12 compute-0 ceph-mon[76335]: pgmap v2148: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Feb 25 12:52:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 127 op/s
Feb 25 12:52:13 compute-0 ovn_controller[147040]: 2026-02-25T12:52:13Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:63:14 10.100.0.12
Feb 25 12:52:13 compute-0 ovn_controller[147040]: 2026-02-25T12:52:13Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:63:14 10.100.0.12
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.448 244018 DEBUG nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.448 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:13 compute-0 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 WARNING nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received unexpected event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with vm_state active and task_state None.
Feb 25 12:52:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:14 compute-0 ceph-mon[76335]: pgmap v2149: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 127 op/s
Feb 25 12:52:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Feb 25 12:52:14 compute-0 nova_compute[244014]: 2026-02-25 12:52:14.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:15 compute-0 nova_compute[244014]: 2026-02-25 12:52:15.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:16 compute-0 ceph-mon[76335]: pgmap v2150: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Feb 25 12:52:16 compute-0 nova_compute[244014]: 2026-02-25 12:52:16.445 244018 DEBUG nova.compute.manager [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:16 compute-0 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG nova.compute.manager [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:16 compute-0 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:16 compute-0 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:16 compute-0 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 354 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 167 op/s
Feb 25 12:52:18 compute-0 nova_compute[244014]: 2026-02-25 12:52:18.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:18 compute-0 ceph-mon[76335]: pgmap v2151: 305 pgs: 305 active+clean; 354 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 167 op/s
Feb 25 12:52:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Feb 25 12:52:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:19 compute-0 nova_compute[244014]: 2026-02-25 12:52:19.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:20 compute-0 ceph-mon[76335]: pgmap v2152: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Feb 25 12:52:20 compute-0 nova_compute[244014]: 2026-02-25 12:52:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:20 compute-0 nova_compute[244014]: 2026-02-25 12:52:20.464 244018 INFO nova.compute.manager [None req-458f70d0-6a76-433b-b06e-8cda6e0b96ab 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Get console output
Feb 25 12:52:20 compute-0 nova_compute[244014]: 2026-02-25 12:52:20.472 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:52:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:52:21 compute-0 nova_compute[244014]: 2026-02-25 12:52:21.024 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:21 compute-0 nova_compute[244014]: 2026-02-25 12:52:21.025 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:21 compute-0 nova_compute[244014]: 2026-02-25 12:52:21.069 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:21 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 25 12:52:22 compute-0 ceph-mon[76335]: pgmap v2153: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:52:22 compute-0 ovn_controller[147040]: 2026-02-25T12:52:22Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:52:d2 10.100.0.14
Feb 25 12:52:22 compute-0 ovn_controller[147040]: 2026-02-25T12:52:22Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:52:d2 10.100.0.14
Feb 25 12:52:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 170 op/s
Feb 25 12:52:23 compute-0 nova_compute[244014]: 2026-02-25 12:52:23.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:24 compute-0 ceph-mon[76335]: pgmap v2154: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 170 op/s
Feb 25 12:52:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 25 12:52:24 compute-0 nova_compute[244014]: 2026-02-25 12:52:24.775 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:24 compute-0 nova_compute[244014]: 2026-02-25 12:52:24.776 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:24 compute-0 nova_compute[244014]: 2026-02-25 12:52:24.776 244018 DEBUG nova.objects.instance [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:24 compute-0 nova_compute[244014]: 2026-02-25 12:52:24.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:25 compute-0 nova_compute[244014]: 2026-02-25 12:52:25.140 244018 DEBUG nova.objects.instance [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_requests' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:25 compute-0 nova_compute[244014]: 2026-02-25 12:52:25.154 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:52:25 compute-0 nova_compute[244014]: 2026-02-25 12:52:25.370 244018 DEBUG nova.policy [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:52:25 compute-0 nova_compute[244014]: 2026-02-25 12:52:25.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:26 compute-0 nova_compute[244014]: 2026-02-25 12:52:26.359 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully created port: 77034e66-a3e3-47d0-b467-29a045343530 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:52:26 compute-0 ceph-mon[76335]: pgmap v2155: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 25 12:52:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 157 op/s
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.252 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully updated port: 77034e66-a3e3-47d0-b467-29a045343530 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.367 244018 DEBUG nova.compute.manager [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.367 244018 DEBUG nova.compute.manager [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-77034e66-a3e3-47d0-b467-29a045343530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:27 compute-0 nova_compute[244014]: 2026-02-25 12:52:27.368 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:28 compute-0 ceph-mon[76335]: pgmap v2156: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 157 op/s
Feb 25 12:52:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Feb 25 12:52:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:29 compute-0 nova_compute[244014]: 2026-02-25 12:52:29.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:29 compute-0 nova_compute[244014]: 2026-02-25 12:52:29.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:29 compute-0 nova_compute[244014]: 2026-02-25 12:52:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:52:29 compute-0 nova_compute[244014]: 2026-02-25 12:52:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
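Annotation: the _heal_instance_info_cache run above is driven by oslo.service's periodic task machinery, which invokes decorated manager methods on a fixed spacing. A minimal sketch of how such a task is declared; the 60-second spacing is an assumption for illustration.

    # Sketch of an oslo.service periodic task like the one running above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
        def _heal_instance_info_cache(self, context):
            # refresh one instance's cached network info per pass
            pass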
Feb 25 12:52:29 compute-0 nova_compute[244014]: 2026-02-25 12:52:29.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:30 compute-0 nova_compute[244014]: 2026-02-25 12:52:30.150 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:30 compute-0 nova_compute[244014]: 2026-02-25 12:52:30.151 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:30 compute-0 nova_compute[244014]: 2026-02-25 12:52:30.151 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:52:30 compute-0 nova_compute[244014]: 2026-02-25 12:52:30.152 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:30 compute-0 nova_compute[244014]: 2026-02-25 12:52:30.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:30 compute-0 ceph-mon[76335]: pgmap v2157: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Feb 25 12:52:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:52:31
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
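Annotation: the balancer pass above runs in upmap mode, scans the listed pools, and prepares 0 of an allowed 10 upmap changes, meaning placement groups are already evenly distributed. The same state can be read back from the CLI; a sketch using the standard balancer subcommands, assuming an admin keyring on the node.

    # Query the mgr balancer module the log lines above come from.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format', 'json']))
    print(status.get('mode'))    # expected 'upmap' per the log
    print(status.get('active'))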
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.586 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
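Annotation: note in the cache entry above that the freshly attached port 77034e66-a3e3-47d0-b467-29a045343530 is still "active": false; it flips once OVN reports the binding up later in the log. A small sketch for pulling the essentials out of such a network_info blob; the file name is an assumed dump of the JSON list logged above.

    # Summarize a nova network_info blob like the one cached above.
    import json

    with open('network_info.json') as f:   # assumed dump of the blob
        network_info = json.load(f)

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips,
              'active' if vif['active'] else 'inactive')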
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.621 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.623 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.624 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 77034e66-a3e3-47d0-b467-29a045343530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.628 244018 DEBUG nova.virt.libvirt.vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.629 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.631 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.631 244018 DEBUG os_vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77034e66-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77034e66-a3, col_values=(('external_ids', {'iface-id': '77034e66-a3e3-47d0-b467-29a045343530', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:eb:19', 'vm-uuid': '809da994-7551-4f52-8920-b0dfaa2ef73e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
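Annotation: the two ovsdbapp transactions above are the idempotent equivalent of plumbing the tap device into br-int and tagging the Interface row with the neutron port id so ovn-controller can claim it. Roughly the same effect from the CLI; a sketch only, since nova itself drives this through ovsdbapp rather than ovs-vsctl.

    # ovs-vsctl equivalent of AddBridgeCommand/AddPortCommand/DbSetCommand.
    import subprocess

    subprocess.check_call(
        ['ovs-vsctl', '--may-exist', 'add-br', 'br-int',
         '--', 'set', 'bridge', 'br-int', 'datapath_type=system'])
    subprocess.check_call(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap77034e66-a3',
         '--', 'set', 'Interface', 'tap77034e66-a3',
         'external_ids:iface-id=77034e66-a3e3-47d0-b467-29a045343530',
         'external_ids:iface-status=active',
         'external_ids:attached-mac="fa:16:3e:63:eb:19"',
         'external_ids:vm-uuid=809da994-7551-4f52-8920-b0dfaa2ef73e'])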
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.6425] manager: (tap77034e66-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.652 244018 INFO os_vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.653 244018 DEBUG nova.virt.libvirt.vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.654 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.655 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.658 244018 DEBUG nova.virt.libvirt.guest [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] attach device xml: <interface type="ethernet">
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:63:eb:19"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <target dev="tap77034e66-a3"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]: </interface>
Feb 25 12:52:31 compute-0 nova_compute[244014]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
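Annotation: nova hands that <interface> XML to libvirt to hot-plug the second NIC into the running guest. The same operation through the libvirt Python binding looks roughly like the sketch below, assuming local qemu:///system access; the XML is copied from the log lines above.

    # Hot-plug the interface XML logged above into the running domain.
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:63:eb:19"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap77034e66-a3"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    # AFFECT_LIVE attaches to the running guest, as nova does here.
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)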
Feb 25 12:52:31 compute-0 kernel: tap77034e66-a3: entered promiscuous mode
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.6734] manager: (tap77034e66-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Feb 25 12:52:31 compute-0 ovn_controller[147040]: 2026-02-25T12:52:31Z|01352|binding|INFO|Claiming lport 77034e66-a3e3-47d0-b467-29a045343530 for this chassis.
Feb 25 12:52:31 compute-0 ovn_controller[147040]: 2026-02-25T12:52:31Z|01353|binding|INFO|77034e66-a3e3-47d0-b467-29a045343530: Claiming fa:16:3e:63:eb:19 10.100.0.29
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.688 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:eb:19 10.100.0.29'], port_security=['fa:16:3e:63:eb:19 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=77034e66-a3e3-47d0-b467-29a045343530) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.690 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 77034e66-a3e3-47d0-b467-29a045343530 in datapath e68b4f5a-a28d-4155-93af-2997c1302403 bound to our chassis
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.694 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 ovn_controller[147040]: 2026-02-25T12:52:31Z|01354|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 ovn-installed in OVS
Feb 25 12:52:31 compute-0 ovn_controller[147040]: 2026-02-25T12:52:31Z|01355|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 up in Southbound
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe3b404-cb54-4d11-8b57-501c499d25f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.711 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape68b4f5a-a1 in ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
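Annotation: provisioning the datapath means building a veth pair: one end (tape68b4f5a-a0) stays in the root namespace and gets plugged into br-int, while the peer (tape68b4f5a-a1) is moved into the ovnmeta- namespace where the metadata proxy will listen. A sketch of that plumbing with pyroute2, the library behind neutron's privileged ip_lib calls seen here; it requires root.

    # Recreate the veth-into-namespace step the agent logs above.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403'
    netns.create(NS)

    ipr = IPRoute()
    ipr.link('add', ifname='tape68b4f5a-a0', kind='veth',
             peer='tape68b4f5a-a1')
    idx = ipr.link_lookup(ifname='tape68b4f5a-a1')[0]
    ipr.link('set', index=idx, net_ns_fd=NS)  # move the peer into the netns
    ipr.close()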
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.712 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape68b4f5a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.712 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0432e7fc-2250-44cb-9c27-3df216c991f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a43bf88-2cce-48a3-be49-600fc21b7882]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 systemd-udevd[360472]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.724 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2213ec93-8032-465d-877e-af58505f61c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61c04112-e998-499f-a24d-9c0847d81740]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.7383] device (tap77034e66-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.7397] device (tap77034e66-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.768 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[810fe5ca-9fbd-4521-adb0-20ab3e32ca38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.774 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6428ff34-7f84-4b45-b827-d4ca096a92a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 systemd-udevd[360476]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.7761] manager: (tape68b4f5a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.780 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.780 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.781 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c3:63:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.781 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:63:eb:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.806 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[99ea369f-3b92-4352-a9b0-80b3cb148125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.806 244018 DEBUG nova.virt.libvirt.guest [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:52:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 12:52:31 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:52:31 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:52:31 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:52:31 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:52:31 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
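Annotation: the <nova:instance> document above is stored as libvirt domain metadata so tooling can map the domain back to its nova instance, flavor, and owner. A sketch of the underlying libvirt call; the reduced metadata body below is an illustration, not the full document from the log.

    # Store nova-style metadata on the domain, as guest.set_metadata()
    # does above; the XML body below is a trimmed illustration.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    BODY = ('<instance>'
            '<name>tempest-TestNetworkBasicOps-server-1151259173</name>'
            '</instance>')

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, BODY,
                    'instance', NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)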
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.810 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3060dd74-c423-4c3b-98d5-9d8b1804dbd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.8297] device (tape68b4f5a-a0): carrier: link connected
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.837 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5d18bd-f7c8-4b58-876d-7a1ece3f9359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.849 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[234435eb-b4ca-4744-856e-83bb8d4085c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360498, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.872 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[937d43ca-9568-41b7-83df-c5e5a64cfab5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:ace6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592123, 'tstamp': 592123}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360499, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d06c2ee-b0ce-4056-8f48-437958bfb92d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360500, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f409d3c-5c9c-4dd3-8ff0-8837ea5826ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.979 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5117192-4aac-415a-9139-fe2843f95d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.981 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.981 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.982 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.982 244018 DEBUG nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.982 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.985 244018 WARNING nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.
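Annotation: the warning above is benign. Nova's external event path pops a waiter registered for each expected (instance, event) pair; nothing registered a waiter for this attach, so the pop finds none and the event is merely reported as unexpected against an active instance. A toy version of that pattern:

    # Toy pop_instance_event pattern matching the log lines above.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        ev = threading.Event()
        _waiters[(instance, name)] = ev
        return ev

    def pop_instance_event(instance, name):
        ev = _waiters.pop((instance, name), None)
        if ev is None:
            print('No waiting events found dispatching %s' % name)
            return
        ev.set()

    pop_instance_event('809da994-7551-4f52-8920-b0dfaa2ef73e',
                       'network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530')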
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 NetworkManager[49836]: <info>  [1772023951.9860] manager: (tape68b4f5a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Feb 25 12:52:31 compute-0 kernel: tape68b4f5a-a0: entered promiscuous mode
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.990 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:31 compute-0 ovn_controller[147040]: 2026-02-25T12:52:31Z|01356|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 12:52:31 compute-0 nova_compute[244014]: 2026-02-25 12:52:31.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:32 compute-0 nova_compute[244014]: 2026-02-25 12:52:32.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.002 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9455aeb-f426-4094-8250-78cd18f2a78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.004 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:52:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.004 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'env', 'PROCESS_TAG=haproxy-e68b4f5a-a28d-4155-93af-2997c1302403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e68b4f5a-a28d-4155-93af-2997c1302403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:52:32 compute-0 podman[360532]: 2026-02-25 12:52:32.379625003 +0000 UTC m=+0.062789764 container create 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:52:32 compute-0 systemd[1]: Started libpod-conmon-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope.
Feb 25 12:52:32 compute-0 ceph-mon[76335]: pgmap v2158: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:52:32 compute-0 podman[360532]: 2026-02-25 12:52:32.341325452 +0000 UTC m=+0.024490273 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:52:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce54b1a0b6462e52effccd4126e7ec55dceac4038acb86f1df2038cf5f156ad6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:32 compute-0 podman[360532]: 2026-02-25 12:52:32.470854 +0000 UTC m=+0.154018821 container init 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:52:32 compute-0 podman[360532]: 2026-02-25 12:52:32.478747692 +0000 UTC m=+0.161912453 container start 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:52:32 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : New worker (360553) forked
Feb 25 12:52:32 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : Loading success.
Feb 25 12:52:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 12:52:32 compute-0 ovn_controller[147040]: 2026-02-25T12:52:32Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:eb:19 10.100.0.29
Feb 25 12:52:32 compute-0 ovn_controller[147040]: 2026-02-25T12:52:32Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:eb:19 10.100.0.29
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.475 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.495 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.495 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.496 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.496 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.521 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.523 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:52:33 compute-0 nova_compute[244014]: 2026-02-25 12:52:33.523 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913115532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.078 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.149 244018 DEBUG nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.150 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.150 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.151 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.152 244018 DEBUG nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.152 244018 WARNING nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.195 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.196 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.201 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.201 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.223 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 77034e66-a3e3-47d0-b467-29a045343530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.224 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.247 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:34 compute-0 ceph-mon[76335]: pgmap v2159: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 12:52:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/913115532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.449 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2955MB free_disk=59.85068342462182GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 173 KiB/s wr, 33 op/s
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 809da994-7551-4f52-8920-b0dfaa2ef73e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 739026fc-9c96-4212-9fa3-e6731e7f61f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:52:34 compute-0 nova_compute[244014]: 2026-02-25 12:52:34.732 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.182 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 ceph-mon[76335]: pgmap v2160: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 173 KiB/s wr, 33 op/s
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.503 244018 DEBUG nova.compute.manager [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.504 244018 DEBUG nova.compute.manager [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.505 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.506 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.506 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.555 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.556 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.557 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.558 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.559 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.561 244018 INFO nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Terminating instance
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.563 244018 DEBUG nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:52:35 compute-0 kernel: tap61c3ba1d-0d (unregistering): left promiscuous mode
Feb 25 12:52:35 compute-0 NetworkManager[49836]: <info>  [1772023955.6103] device (tap61c3ba1d-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:52:35 compute-0 ovn_controller[147040]: 2026-02-25T12:52:35Z|01357|binding|INFO|Releasing lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae from this chassis (sb_readonly=0)
Feb 25 12:52:35 compute-0 ovn_controller[147040]: 2026-02-25T12:52:35Z|01358|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae down in Southbound
Feb 25 12:52:35 compute-0 ovn_controller[147040]: 2026-02-25T12:52:35Z|01359|binding|INFO|Removing iface tap61c3ba1d-0d ovn-installed in OVS
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.639 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], port_security=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe93:52d2/64 2001:db8::f816:3eff:fe93:52d2/64', 'neutron:device_id': '739026fc-9c96-4212-9fa3-e6731e7f61f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=61c3ba1d-0d4b-426e-8d04-ce56efd650ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.642 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae in datapath e19ed85e-54ee-4274-951c-ade412625983 unbound from our chassis
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.645 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.662 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e324be3d-4e24-46b9-8f98-3b319f7a5232]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Feb 25 12:52:35 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 12.282s CPU time.
Feb 25 12:52:35 compute-0 systemd-machined[210048]: Machine qemu-160-instance-00000080 terminated.
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[eecb4ea6-f4ea-43e6-a67a-55f9e4a53bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.695 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4cacdae9-86a7-4a8c-bfb4-cf07120f1454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.720 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5c790e-761b-4f51-8aa7-9f02014ba49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05747e0d-f072-409e-ae05-642f8513b41a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 17563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360617, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dea07f12-4267-4b3a-b49c-0ce917613cc8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586691, 'tstamp': 586691}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360618, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586693, 'tstamp': 586693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360618, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.760 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.768 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.769 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.771 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.798 244018 INFO nova.virt.libvirt.driver [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance destroyed successfully.
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.799 244018 DEBUG nova.objects.instance [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399743122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.816 244018 DEBUG nova.virt.libvirt.vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.816 244018 DEBUG nova.network.os_vif_util [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.817 244018 DEBUG nova.network.os_vif_util [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.817 244018 DEBUG os_vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.819 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c3ba1d-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.825 244018 INFO os_vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d')
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.841 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.846 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.870 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.888 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:52:35 compute-0 nova_compute[244014]: 2026-02-25 12:52:35.888 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:35 compute-0 podman[360633]: 2026-02-25 12:52:35.954143325 +0000 UTC m=+0.109027829 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:52:35 compute-0 podman[360634]: 2026-02-25 12:52:35.952732296 +0000 UTC m=+0.107488287 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.131 244018 INFO nova.virt.libvirt.driver [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deleting instance files /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9_del
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.132 244018 INFO nova.virt.libvirt.driver [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deletion of /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9_del complete
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.195 244018 INFO nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.195 244018 DEBUG oslo.service.loopingcall [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.196 244018 DEBUG nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.196 244018 DEBUG nova.network.neutron [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.274 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.277 244018 WARNING nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received unexpected event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with vm_state active and task_state deleting.
Feb 25 12:52:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/399743122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 180 KiB/s wr, 40 op/s
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:36 compute-0 nova_compute[244014]: 2026-02-25 12:52:36.884 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.239 244018 DEBUG nova.network.neutron [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.271 244018 INFO nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 1.08 seconds to deallocate network for instance.
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.340 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.340 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:37 compute-0 ceph-mon[76335]: pgmap v2161: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 180 KiB/s wr, 40 op/s
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.696 244018 DEBUG oslo_concurrency.processutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.742 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.743 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.766 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:37 compute-0 nova_compute[244014]: 2026-02-25 12:52:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652785066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.231 244018 DEBUG oslo_concurrency.processutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.238 244018 DEBUG nova.compute.provider_tree [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.264 244018 DEBUG nova.scheduler.client.report [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.292 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.336 244018 INFO nova.scheduler.client.report [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 739026fc-9c96-4212-9fa3-e6731e7f61f9
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 DEBUG nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-deleted-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 INFO nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Neutron deleted interface 61c3ba1d-0d4b-426e-8d04-ce56efd650ae; detaching it from the instance and deleting it from the info cache
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 DEBUG nova.network.neutron [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.426 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.444 244018 DEBUG nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Detach interface failed, port_id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae, reason: Instance 739026fc-9c96-4212-9fa3-e6731e7f61f9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:52:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1652785066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 91 KiB/s wr, 41 op/s
Feb 25 12:52:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:38 compute-0 nova_compute[244014]: 2026-02-25 12:52:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:39 compute-0 ceph-mon[76335]: pgmap v2162: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 91 KiB/s wr, 41 op/s
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01360|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01361|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01362|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:39 compute-0 sudo[360719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:52:39 compute-0 sudo[360719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:39 compute-0 sudo[360719]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.700 244018 DEBUG nova.compute.manager [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.700 244018 DEBUG nova.compute.manager [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.701 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.701 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.702 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:52:39 compute-0 sudo[360744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:52:39 compute-0 sudo[360744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.870 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.870 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.871 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.871 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.872 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.874 244018 INFO nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Terminating instance
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.876 244018 DEBUG nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:39 compute-0 kernel: tap9981394d-e7 (unregistering): left promiscuous mode
Feb 25 12:52:39 compute-0 NetworkManager[49836]: <info>  [1772023959.9277] device (tap9981394d-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01363|binding|INFO|Releasing lport 9981394d-e733-404e-85a5-e2e51877881a from this chassis (sb_readonly=0)
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01364|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a down in Southbound
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:39 compute-0 ovn_controller[147040]: 2026-02-25T12:52:39Z|01365|binding|INFO|Removing iface tap9981394d-e7 ovn-installed in OVS
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.955 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], port_security=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fe2b:9299/64 2001:db8::f816:3eff:fe2b:9299/64', 'neutron:device_id': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9981394d-e733-404e-85a5-e2e51877881a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.956 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9981394d-e733-404e-85a5-e2e51877881a in datapath e19ed85e-54ee-4274-951c-ade412625983 unbound from our chassis
Feb 25 12:52:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.958 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:52:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e339e908-55d7-46a2-8a08-bac5e18ff50e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:39 compute-0 nova_compute[244014]: 2026-02-25 12:52:39.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:39 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.961 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 namespace which is not needed anymore
Feb 25 12:52:39 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Feb 25 12:52:39 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 14.221s CPU time.
Feb 25 12:52:39 compute-0 systemd-machined[210048]: Machine qemu-158-instance-0000007e terminated.
Feb 25 12:52:40 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : haproxy version is 2.8.14-c23fe91
Feb 25 12:52:40 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : path to executable is /usr/sbin/haproxy
Feb 25 12:52:40 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [WARNING]  (359402) : Exiting Master process...
Feb 25 12:52:40 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [ALERT]    (359402) : Current worker (359404) exited with code 143 (Terminated)
Feb 25 12:52:40 compute-0 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [WARNING]  (359402) : All workers exited. Exiting... (0)
Feb 25 12:52:40 compute-0 systemd[1]: libpod-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope: Deactivated successfully.
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.112 244018 INFO nova.virt.libvirt.driver [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance destroyed successfully.
Feb 25 12:52:40 compute-0 podman[360805]: 2026-02-25 12:52:40.113284526 +0000 UTC m=+0.060256252 container died f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.113 244018 DEBUG nova.objects.instance [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.124 244018 DEBUG nova.virt.libvirt.vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:51:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.125 244018 DEBUG nova.network.os_vif_util [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.126 244018 DEBUG nova.network.os_vif_util [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.127 244018 DEBUG os_vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.129 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.129 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9981394d-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.134 244018 INFO os_vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7')
Feb 25 12:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987-userdata-shm.mount: Deactivated successfully.
Feb 25 12:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b16cd2a2c94cf38fc347d4884920d0f50de2f7df3adcfcac47023bdece528de3-merged.mount: Deactivated successfully.
Feb 25 12:52:40 compute-0 podman[360805]: 2026-02-25 12:52:40.166631063 +0000 UTC m=+0.113602779 container cleanup f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:52:40 compute-0 systemd[1]: libpod-conmon-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope: Deactivated successfully.
Feb 25 12:52:40 compute-0 podman[360869]: 2026-02-25 12:52:40.221331167 +0000 UTC m=+0.035130633 container remove f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[698f3ff9-d96e-4768-8a0f-0338090c218a]: (4, ('Wed Feb 25 12:52:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 (f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987)\nf6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987\nWed Feb 25 12:52:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 (f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987)\nf6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dae55c2-2173-44f4-aad3-35f5450dcbd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 kernel: tape19ed85e-50: left promiscuous mode
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.246 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aef75533-2f59-4095-a311-eae6736b2598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.268 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc643520-e8bd-443b-946e-4af0476f79fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc592e5-2054-4093-a7ed-77beaac1407a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 sudo[360744]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c64953d4-e87c-4a3f-a028-86972400a968]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586675, 'reachable_time': 40052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360898, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
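
The reply above is a pyroute2-style RTM_NEWLINK message serialized across the privsep channel: a dict whose 'attrs' member is a list of [name, value] pairs, with compound attributes (IFLA_STATS64, IFLA_AF_SPEC) nested as further dicts. A minimal sketch of reading such a message, assuming only the shape shown in the log; the helper name is illustrative:

    def get_attr(msg, name, default=None):
        """Return the first attribute matching `name`, the way pyroute2
        walks its [name, value] attribute lists."""
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    # Trimmed copy of the 'lo' message from the reply above.
    link = {
        'index': 1,
        'state': 'up',
        'attrs': [
            ['IFLA_IFNAME', 'lo'],
            ['IFLA_MTU', 65536],
            ['IFLA_OPERSTATE', 'UNKNOWN'],
        ],
    }
    print(get_attr(link, 'IFLA_IFNAME'))  # lo
    print(get_attr(link, 'IFLA_MTU'))     # 65536
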
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.287 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:52:40 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.287 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[abec4e70-e3c6-4c13-adca-c147abeb38a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:40 compute-0 systemd[1]: run-netns-ovnmeta\x2de19ed85e\x2d54ee\x2d4274\x2d951c\x2dade412625983.mount: Deactivated successfully.
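
Once the agent is finished with the namespace, the privileged helper unlinks it and systemd reaps the bind mount under /run/netns (the \x2d sequences in the mount unit name are systemd's escaping of '-'). A minimal sketch of the removal step, assuming the pyroute2 library; the ENOENT handling is an assumption here, not lifted from neutron's code:

    import errno
    from pyroute2 import netns

    def remove_netns(name):
        try:
            netns.remove(name)   # unlinks the /run/netns/<name> entry
        except OSError as e:
            if e.errno != errno.ENOENT:   # tolerate "already gone"
                raise

    remove_netns('ovnmeta-e19ed85e-54ee-4274-951c-ade412625983')
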
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.325 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.325 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
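
The Acquiring/acquired/released triplets that recur through these nova lines come from oslo.concurrency's locking helpers. A minimal sketch of the pattern, with the critical-section body left illustrative:

    from oslo_concurrency import lockutils

    instance_uuid = 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6'

    with lockutils.lock(instance_uuid):
        # _locked_do_build_and_run_instance work happens here; the DEBUG
        # lines above bracket exactly this kind of critical section and
        # report how long the lock was waited on and then held.
        pass
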
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:52:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
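
Each handle_command/audit pair above is one mon_command round trip from the mgr to this monitor. A minimal sketch of issuing the same kind of command programmatically, assuming the python-rados binding and a readable /etc/ceph/ceph.conf; the entity name is illustrative:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()
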
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.344 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.365 244018 INFO nova.virt.libvirt.driver [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deleting instance files /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_del
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.366 244018 INFO nova.virt.libvirt.driver [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deletion of /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_del complete
Feb 25 12:52:40 compute-0 sudo[360899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:52:40 compute-0 sudo[360899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:40 compute-0 sudo[360899]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:40 compute-0 sudo[360924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:52:40 compute-0 sudo[360924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.443 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.444 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.444 244018 INFO nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG oslo.service.loopingcall [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG nova.network.neutron [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.451 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.452 244018 INFO nova.compute.claims [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.519 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 WARNING nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received unexpected event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a for instance with vm_state active and task_state deleting.
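
The "No waiting events found" and "Received unexpected event" lines reflect nova's external-event rendezvous: callers register a named per-instance event before an operation, and the neutron-driven notification pops and signals it. With this instance already in task_state deleting, nobody registered a waiter, so the vif-plugged event is only logged. A minimal sketch of that waiter pattern, with all names illustrative rather than nova's actual code:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}        # instance_uuid -> {event_name: Event}
            self._lock = threading.Lock()

        def prepare(self, instance_uuid, event_name):
            ev = threading.Event()
            with self._lock:
                self._events.setdefault(instance_uuid, {})[event_name] = ev
            return ev                # caller later blocks in ev.wait()

        def pop_and_signal(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.get(instance_uuid, {}).pop(event_name, None)
            if ev is None:
                print('No waiting events found dispatching', event_name)
                return False
            ev.set()
            return True

    events = InstanceEvents()
    events.pop_and_signal('dd7feae9-9d2a-41b6-9277-cbf51a2c8f23',
                          'network-vif-plugged')
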
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:52:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:52:40 compute-0 nova_compute[244014]: 2026-02-25 12:52:40.615 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 28 KiB/s wr, 31 op/s
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.681200641 +0000 UTC m=+0.034578377 container create d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:52:40 compute-0 systemd[1]: Started libpod-conmon-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope.
Feb 25 12:52:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.757199677 +0000 UTC m=+0.110577433 container init d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.664904411 +0000 UTC m=+0.018282167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.763492905 +0000 UTC m=+0.116870631 container start d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:52:40 compute-0 trusting_merkle[360980]: 167 167
Feb 25 12:52:40 compute-0 systemd[1]: libpod-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope: Deactivated successfully.
Feb 25 12:52:40 compute-0 conmon[360980]: conmon d2ba9f23f4fe6ad89c9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope/container/memory.events
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.767735645 +0000 UTC m=+0.121113371 container attach d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.77041637 +0000 UTC m=+0.123794106 container died d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c289b9884857a252bca9bf525e90c1df6a29ecde71b35f5be8bca9d5358210-merged.mount: Deactivated successfully.
Feb 25 12:52:40 compute-0 podman[360963]: 2026-02-25 12:52:40.859885817 +0000 UTC m=+0.213263533 container remove d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 12:52:40 compute-0 systemd[1]: libpod-conmon-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope: Deactivated successfully.
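
The create/init/start/attach/died/remove sequence above is cephadm running a short-lived helper container for one probe (the lone "167 167" output appears to be the container reporting the ceph uid and gid). A minimal sketch of the same throwaway lifecycle, using podman's --rm so create, run, and remove happen in one invocation; the ceph-volume command shown is illustrative:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')
    result = subprocess.run(
        ['podman', 'run', '--rm', image,
         'ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True)
    print(result.returncode, result.stdout[:200])
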
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.003466401 +0000 UTC m=+0.036099940 container create ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:52:41 compute-0 systemd[1]: Started libpod-conmon-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope.
Feb 25 12:52:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
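
These are informational y2038 notices: the XFS filesystem backing the overlay was not formatted with big timestamps, so its inode times saturate at 0x7fffffff seconds since the epoch. A quick check of what instant that limit is:

    from datetime import datetime, timezone

    limit = 0x7fffffff   # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
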
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:40.988856989 +0000 UTC m=+0.021490548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.090085457 +0000 UTC m=+0.122719046 container init ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.098433423 +0000 UTC m=+0.131066962 container start ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.101241072 +0000 UTC m=+0.133874661 container attach ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.127 244018 DEBUG nova.network.neutron [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069586792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.150 244018 INFO nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 0.70 seconds to deallocate network for instance.
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.164 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.172 244018 DEBUG nova.compute.provider_tree [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.249 244018 DEBUG nova.scheduler.client.report [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
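
A worked check of the headroom this inventory advertises, using placement's capacity formula of (total - reserved) * allocation_ratio per resource class:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
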
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.258 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.289 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.291 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.296 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.387 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.389 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.412 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.447 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.473 244018 DEBUG oslo_concurrency.processutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.558 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.561 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.561 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating image(s)
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.594 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:41 compute-0 ceph-mon[76335]: pgmap v2163: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 28 KiB/s wr, 31 op/s
Feb 25 12:52:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1069586792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:41 compute-0 bold_lumiere[361041]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:52:41 compute-0 bold_lumiere[361041]: --> All data devices are unavailable
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.642 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:41 compute-0 systemd[1]: libpod-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope: Deactivated successfully.
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.660525284 +0000 UTC m=+0.693158863 container died ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.684 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647-merged.mount: Deactivated successfully.
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.695 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:41 compute-0 podman[361023]: 2026-02-25 12:52:41.710236108 +0000 UTC m=+0.742869687 container remove ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 12:52:41 compute-0 systemd[1]: libpod-conmon-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope: Deactivated successfully.
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.736 244018 DEBUG nova.policy [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:52:41 compute-0 sudo[360924]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.777 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
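
The prlimit wrapper in that command caps the image probe at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a corrupt or hostile image cannot wedge the compute service. A minimal sketch of the same call made through oslo.concurrency, assuming its ProcessLimits support:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)
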
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.777 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.778 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.778 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
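
The lock name here is the same hex string as the _base filename because nova keys its image cache on a SHA-1 digest of the Glance image ID, so the fetch lock and the cached file coincide. A minimal sketch with a hypothetical image ID (not taken from this log):

    import hashlib

    image_id = '00000000-0000-0000-0000-000000000000'   # hypothetical
    cache_name = hashlib.sha1(image_id.encode()).hexdigest()
    # used as /var/lib/nova/instances/_base/<cache_name>
    print(cache_name)
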
Feb 25 12:52:41 compute-0 sudo[361150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:52:41 compute-0 sudo[361150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:41 compute-0 sudo[361150]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.810 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.820 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:41 compute-0 sudo[361195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:52:41 compute-0 sudo[361195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:41 compute-0 nova_compute[244014]: 2026-02-25 12:52:41.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.044 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:52:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572988115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.087 244018 DEBUG oslo_concurrency.processutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.093 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
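
The rbd import at 12:52:42.044 seeds the image from the cached base file, and this line then grows it to the flavor's 1 GiB root disk. A minimal sketch of the resize step through the python-rbd binding (the import itself has no single-call equivalent there, so only the resize is shown); the client name and pool follow the logged command:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk') as image:
            image.resize(1073741824)   # bytes; matches the logged target size
    finally:
        ioctx.close()
        cluster.shutdown()
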
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.120 244018 DEBUG nova.compute.provider_tree [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.135 244018 DEBUG nova.scheduler.client.report [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.145636192 +0000 UTC m=+0.032477488 container create 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:52:42 compute-0 systemd[1]: Started libpod-conmon-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope.
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.173 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.182 244018 DEBUG nova.objects.instance [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.194 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Ensure instance console log exists: /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.196 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.209 244018 INFO nova.scheduler.client.report [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.213261412 +0000 UTC m=+0.100102728 container init 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.218646464 +0000 UTC m=+0.105487770 container start 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.221408112 +0000 UTC m=+0.108249418 container attach 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:52:42 compute-0 gifted_khayyam[361341]: 167 167
Feb 25 12:52:42 compute-0 systemd[1]: libpod-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope: Deactivated successfully.
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.22274228 +0000 UTC m=+0.109583596 container died 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.131293817 +0000 UTC m=+0.018135143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-319505d2ff08281de9b8ba7091d18733ded986c0bb27e5d8616863039bed69d4-merged.mount: Deactivated successfully.
Feb 25 12:52:42 compute-0 podman[361304]: 2026-02-25 12:52:42.262264256 +0000 UTC m=+0.149105562 container remove 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 12:52:42 compute-0 systemd[1]: libpod-conmon-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope: Deactivated successfully.
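[The create -> init -> start -> attach -> died -> remove sequence above, all inside roughly 150 ms, is the footprint of a short-lived one-shot container; cephadm launches these constantly for probes, and the "167 167" line is the container's stdout (167 being the ceph uid/gid on these images). A sketch that reproduces the pattern, assuming podman is available; the image digest is the one logged, but the probe command itself is a guess:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# A one-shot `podman run --rm` emits the same create/init/start/attach
# events, then died/remove once the container process exits.
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # "167 167" if that path is owned by ceph:ceph
]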
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.289 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.457452577 +0000 UTC m=+0.056283660 container create 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.461 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.461 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.476 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
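[The instance_info_cache update above carries the full network_info blob for port 9981394d-e733-404e-85a5-e2e51877881a as JSON. A small standard-library sketch of pulling the fixed addresses out of a payload shaped like that one; the snippet hard-codes a trimmed copy of the logged structure:

import json

# Trimmed copy of the network_info payload logged above; only the fields
# needed to enumerate fixed IPs are kept.
network_info = json.loads("""
[{"id": "9981394d-e733-404e-85a5-e2e51877881a",
  "network": {"subnets": [
    {"cidr": "10.100.0.0/28",
     "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4}]},
    {"cidr": "2001:db8::/64",
     "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6}]}
  ]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], subnet["cidr"], ip["address"])
]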
Feb 25 12:52:42 compute-0 systemd[1]: Started libpod-conmon-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope.
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.436831925 +0000 UTC m=+0.035663098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.557187073 +0000 UTC m=+0.156018226 container init 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.571905339 +0000 UTC m=+0.170736442 container start 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.576255232 +0000 UTC m=+0.175086415 container attach 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 12:52:42 compute-0 nova_compute[244014]: 2026-02-25 12:52:42.616 244018 DEBUG nova.compute.manager [req-aaca354e-b539-4f1d-bc82-b35161663d43 req-ee9a58cc-9839-455c-99d8-f176749989b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-deleted-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2572988115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
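[The mon line above is the audit trail of client.openstack (the Cinder/Nova RBD driver) polling pool usage via the df mon command. The CLI equivalent, for comparison, assuming admin credentials are present on the host:

import json, subprocess

# `ceph df --format json` dispatches the same {"prefix": "df"} mon command
# the monitor just logged for client.openstack.
df = json.loads(subprocess.run(
    ["ceph", "df", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)
print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
]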
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 468 KiB/s wr, 70 op/s
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008594589074421668 of space, bias 1.0, pg target 0.25783767223265003 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941313303238075 of space, bias 1.0, pg target 0.7482393990971422 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3944001498069894e-06 of space, bias 4.0, pg target 0.0016732801797683873 quantized to 16 (current 16)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:52:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
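[Each pg_autoscaler pass above applies the same per-pool formula: the ideal PG count is the pool's share of raw capacity times its bias times a cluster-wide PG budget. The logged numbers are consistent with a budget of 300, i.e. the three OSDs enumerated just below times the default mon_target_pg_per_osd of 100 (an assumption; the budget is not stated in the log); the result is then held at the current pg_num unless it differs by the autoscaler's threshold factor (3x by default), which is why every pool stays put. A worked check:

# Worked check of the pg_autoscaler lines above.
# Assumption: budget = 3 OSDs * mon_target_pg_per_osd(100) = 300.
BUDGET = 3 * 100

pools = [  # (name, usage_ratio, bias, pg target as logged)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("vms",                0.0008594589074421668,  1.0, 0.25783767223265003),
    ("images",             0.0024941313303238075,  1.0, 0.7482393990971422),
    ("cephfs.cephfs.meta", 1.3944001498069894e-06, 4.0, 0.0016732801797683873),
]
for name, ratio, bias, logged in pools:
    computed = ratio * bias * BUDGET
    assert abs(computed - logged) < 1e-12, (name, computed)
    print(f"{name}: pg target {computed:.6g}")
]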
Feb 25 12:52:42 compute-0 objective_lehmann[361381]: {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     "0": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "devices": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "/dev/loop3"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             ],
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_name": "ceph_lv0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_size": "21470642176",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "name": "ceph_lv0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "tags": {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_name": "ceph",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.crush_device_class": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.encrypted": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.objectstore": "bluestore",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_id": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.vdo": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.with_tpm": "0"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             },
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "vg_name": "ceph_vg0"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         }
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     ],
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     "1": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "devices": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "/dev/loop4"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             ],
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_name": "ceph_lv1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_size": "21470642176",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "name": "ceph_lv1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "tags": {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_name": "ceph",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.crush_device_class": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.encrypted": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.objectstore": "bluestore",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_id": "1",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.vdo": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.with_tpm": "0"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             },
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "vg_name": "ceph_vg1"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         }
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     ],
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     "2": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "devices": [
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "/dev/loop5"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             ],
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_name": "ceph_lv2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_size": "21470642176",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "name": "ceph_lv2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "tags": {
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.cluster_name": "ceph",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.crush_device_class": "",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.encrypted": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.objectstore": "bluestore",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osd_id": "2",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.vdo": "0",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:                 "ceph.with_tpm": "0"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             },
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "type": "block",
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:             "vg_name": "ceph_vg2"
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:         }
Feb 25 12:52:42 compute-0 objective_lehmann[361381]:     ]
Feb 25 12:52:42 compute-0 objective_lehmann[361381]: }
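[The JSON printed by the objective_lehmann container has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to LV records whose ceph.* tags identify the cluster, OSD fsid, and objectstore. A short reduction of that output to an OSD -> device table, assuming it has been captured into a string named lvm_list_json:

import json

def osd_table(lvm_list_json: str) -> dict:
    """Map OSD id -> (backing device, LV path) from `ceph-volume lvm list` JSON."""
    data = json.loads(lvm_list_json)
    return {
        osd_id: (lv["devices"][0], lv["lv_path"])
        for osd_id, lvs in data.items()
        for lv in lvs
        if lv.get("type") == "block"
    }

# With the output above this yields:
# {'0': ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0'),
#  '1': ('/dev/loop4', '/dev/ceph_vg1/ceph_lv1'),
#  '2': ('/dev/loop5', '/dev/ceph_vg2/ceph_lv2')}
]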
Feb 25 12:52:42 compute-0 systemd[1]: libpod-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope: Deactivated successfully.
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.89809791 +0000 UTC m=+0.496929013 container died 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5-merged.mount: Deactivated successfully.
Feb 25 12:52:42 compute-0 podman[361365]: 2026-02-25 12:52:42.949740738 +0000 UTC m=+0.548571841 container remove 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:52:42 compute-0 systemd[1]: libpod-conmon-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope: Deactivated successfully.
Feb 25 12:52:43 compute-0 sudo[361195]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:43 compute-0 sudo[361402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:52:43 compute-0 sudo[361402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:43 compute-0 sudo[361402]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:43 compute-0 sudo[361427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:52:43 compute-0 sudo[361427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.444633912 +0000 UTC m=+0.043478689 container create 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 12:52:43 compute-0 systemd[1]: Started libpod-conmon-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope.
Feb 25 12:52:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.421563411 +0000 UTC m=+0.020408168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.525083724 +0000 UTC m=+0.123928451 container init 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.532500413 +0000 UTC m=+0.131345170 container start 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.535800336 +0000 UTC m=+0.134645093 container attach 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:52:43 compute-0 amazing_hofstadter[361480]: 167 167
Feb 25 12:52:43 compute-0 systemd[1]: libpod-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope: Deactivated successfully.
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.538683548 +0000 UTC m=+0.137528315 container died 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b150f6e384d5013aa7fc94e51ab4568eb208965cc2bef13ec5213768ae11185-merged.mount: Deactivated successfully.
Feb 25 12:52:43 compute-0 podman[361463]: 2026-02-25 12:52:43.577067622 +0000 UTC m=+0.175912399 container remove 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:52:43 compute-0 systemd[1]: libpod-conmon-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope: Deactivated successfully.
Feb 25 12:52:43 compute-0 ceph-mon[76335]: pgmap v2164: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 468 KiB/s wr, 70 op/s
Feb 25 12:52:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:43 compute-0 podman[361505]: 2026-02-25 12:52:43.704286684 +0000 UTC m=+0.043750847 container create 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 12:52:43 compute-0 systemd[1]: Started libpod-conmon-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope.
Feb 25 12:52:43 compute-0 podman[361505]: 2026-02-25 12:52:43.682961762 +0000 UTC m=+0.022426005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:52:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:52:43 compute-0 podman[361505]: 2026-02-25 12:52:43.808032543 +0000 UTC m=+0.147496736 container init 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:52:43 compute-0 podman[361505]: 2026-02-25 12:52:43.821496463 +0000 UTC m=+0.160960656 container start 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:52:43 compute-0 podman[361505]: 2026-02-25 12:52:43.824886779 +0000 UTC m=+0.164350962 container attach 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:52:44 compute-0 nova_compute[244014]: 2026-02-25 12:52:44.018 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Successfully created port: aadc9e87-c7c5-4cb0-8906-64017d5aa14b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:52:44 compute-0 lvm[361601]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:52:44 compute-0 lvm[361601]: VG ceph_vg1 finished
Feb 25 12:52:44 compute-0 lvm[361600]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:52:44 compute-0 lvm[361600]: VG ceph_vg0 finished
Feb 25 12:52:44 compute-0 lvm[361603]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:52:44 compute-0 lvm[361603]: VG ceph_vg2 finished
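[The lvm[] lines above are event-driven autoactivation: udev invokes pvscan as each PV appears, and once every PV of a volume group is online ("VG ... is complete") the VG is activated. A manual equivalent for one device, assuming lvm2's pvscan semantics and root privileges:

import subprocess

# Re-scan one PV and autoactivate any VG it completes, mirroring the
# udev-triggered pvscan that produced the journal lines above.
subprocess.run(["pvscan", "--cache", "--activate", "ay", "/dev/loop3"],
               check=True)
]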
Feb 25 12:52:44 compute-0 beautiful_hellman[361522]: {}
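[The empty {} from beautiful_hellman is the result of the `ceph-volume ... raw list --format json` call sudo'd at 12:52:43; plausibly it is empty because these OSDs were prepared in lvm mode (see the lvm list output above), leaving the raw-mode scan nothing to report. Re-running the logged command, expressed against the cephadm CLI and assuming the cephadm binary is on PATH:

import json, subprocess

# Same containerized call the cephadm helper script made above.
out = subprocess.run(
    ["sudo", "cephadm",
     "--image", "quay.io/ceph/ceph@sha256:"
                "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
     "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
     "--", "raw", "list", "--format", "json"],
    capture_output=True, text=True, check=True)
print(json.loads(out.stdout))  # -> {} on this host
]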
Feb 25 12:52:44 compute-0 podman[361505]: 2026-02-25 12:52:44.5757365 +0000 UTC m=+0.915200683 container died 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:52:44 compute-0 systemd[1]: libpod-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Deactivated successfully.
Feb 25 12:52:44 compute-0 systemd[1]: libpod-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Consumed 1.096s CPU time.
Feb 25 12:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369-merged.mount: Deactivated successfully.
Feb 25 12:52:44 compute-0 podman[361505]: 2026-02-25 12:52:44.613816445 +0000 UTC m=+0.953280618 container remove 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:52:44 compute-0 systemd[1]: libpod-conmon-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Deactivated successfully.
Feb 25 12:52:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 454 KiB/s wr, 70 op/s
Feb 25 12:52:44 compute-0 sudo[361427]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:52:44 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:52:44 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:44 compute-0 sudo[361617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:52:44 compute-0 sudo[361617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:52:44 compute-0 sudo[361617]: pam_unix(sudo:session): session closed for user root
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:45 compute-0 ceph-mon[76335]: pgmap v2165: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 454 KiB/s wr, 70 op/s
Feb 25 12:52:45 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:45 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.824 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Successfully updated port: aadc9e87-c7c5-4cb0-8906-64017d5aa14b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.940 244018 DEBUG nova.compute.manager [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-changed-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.940 244018 DEBUG nova.compute.manager [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Refreshing instance network info cache due to event network-changed-aadc9e87-c7c5-4cb0-8906-64017d5aa14b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.941 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:52:45 compute-0 nova_compute[244014]: 2026-02-25 12:52:45.995 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:52:46 compute-0 ovn_controller[147040]: 2026-02-25T12:52:46Z|01366|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 12:52:46 compute-0 ovn_controller[147040]: 2026-02-25T12:52:46Z|01367|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 12:52:46 compute-0 nova_compute[244014]: 2026-02-25 12:52:46.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.118 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.139 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.139 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance network_info: |[{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.140 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.140 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Refreshing network info cache for port aadc9e87-c7c5-4cb0-8906-64017d5aa14b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
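The network_info payload cached above is plain JSON, so the fields the rest of this trace keeps reusing (port id, MAC, fixed IPs, MTU, bridge) can be pulled out directly. A minimal sketch, assuming the logged payload is available as a string; summarize_vif is an illustrative helper, not nova code:

    import json

    def summarize_vif(network_info_json):
        # One entry per VIF; each carries its network, subnets and IPs inline.
        vifs = json.loads(network_info_json)
        summary = []
        for vif in vifs:
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            summary.append({
                "port_id": vif["id"],
                "mac": vif["address"],
                "ips": ips,                                   # e.g. ["10.100.0.30"]
                "mtu": vif["network"]["meta"].get("mtu"),     # 1442 for this Geneve net
                "bridge": vif["network"]["bridge"],           # "br-int"
            })
        return summary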
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.146 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start _get_guest_xml network_info=[{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.152 244018 WARNING nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.165 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.166 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.170 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.171 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
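The two probes above (cgroups v1 CPU controller missing, v2 controller found) come down to filesystem checks. A minimal sketch assuming the standard cgroup mount points; nova's real implementation lives in nova.virt.libvirt.host:

    import os

    def has_cgroupsv1_cpu_controller():
        # cgroups v1 mounts one directory per controller under /sys/fs/cgroup
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu_controller():
        # cgroups v2 lists every enabled controller in a single file
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False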
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.172 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.172 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.173 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.174 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.176 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.176 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.177 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.177 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.178 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
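The topology walk above is easy to reproduce: with no flavor or image constraints (all limits and preferences 0:0:0, caps at 65536) any sockets*cores*threads factorization of the vCPU count is admissible, and for 1 vCPU that leaves exactly one candidate, 1:1:1. A sketch of that search under those assumptions, not nova's actual sorting logic:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product is vcpus.
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)] -- matching the log above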
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.182 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:52:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:52:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:52:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:52:47 compute-0 ceph-mon[76335]: pgmap v2166: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Feb 25 12:52:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:52:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:52:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:52:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814262031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.834 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.867 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:47 compute-0 nova_compute[244014]: 2026-02-25 12:52:47.872 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:52:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650191885' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.432 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
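The repeated "ceph mon dump --format=json" subprocess calls above feed the monitor address list used later in the domain XML's <host> elements. A sketch of consuming that output, mirroring the CLI flags from the log; the JSON field names ("mons", "public_addr"/"addr") are the usual monmap layout, treat them as an assumption for your Ceph version:

    import json
    import subprocess

    def get_mon_addrs(client="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf])
        monmap = json.loads(out)
        addrs = []
        for mon in monmap.get("mons", []):
            # Older maps report addr as "ip:port/nonce"; strip the nonce.
            addr = mon.get("public_addr") or mon.get("addr", "")
            addrs.append(addr.split("/")[0])
        return addrs    # e.g. ["192.168.122.100:6789"]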
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.435 244018 DEBUG nova.virt.libvirt.vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:41Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.436 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.438 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.440 244018 DEBUG nova.objects.instance [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.464 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <uuid>f3af9615-94aa-4498-ab5c-3fadcab4a4e6</uuid>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <name>instance-00000081</name>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1698593086</nova:name>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:52:47</nova:creationTime>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <nova:port uuid="aadc9e87-c7c5-4cb0-8906-64017d5aa14b">
Feb 25 12:52:48 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <system>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="serial">f3af9615-94aa-4498-ab5c-3fadcab4a4e6</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="uuid">f3af9615-94aa-4498-ab5c-3fadcab4a4e6</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </system>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <os>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </os>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <features>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </features>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk">
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </source>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config">
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </source>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:52:48 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c5:22:67"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <target dev="tapaadc9e87-c7"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/console.log" append="off"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <video>
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </video>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:52:48 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:52:48 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:52:48 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:52:48 compute-0 nova_compute[244014]: </domain>
Feb 25 12:52:48 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
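The domain document dumped above is ordinary XML; nova assembles it via nova.virt.libvirt.config. As a sketch of the shape only (not nova's generator), the top of that document can be reproduced with the standard library; note <memory> is in KiB, hence 131072 for the 128 MiB m1.nano flavor:

    import xml.etree.ElementTree as ET

    def minimal_domain_xml(name, uuid, memory_kib, vcpus):
        dom = ET.Element("domain", type="kvm")
        ET.SubElement(dom, "uuid").text = uuid
        ET.SubElement(dom, "name").text = name
        ET.SubElement(dom, "memory").text = str(memory_kib)   # KiB
        ET.SubElement(dom, "vcpu").text = str(vcpus)
        os_el = ET.SubElement(dom, "os")
        ET.SubElement(os_el, "type", arch="x86_64", machine="q35").text = "hvm"
        ET.SubElement(os_el, "boot", dev="hd")
        return ET.tostring(dom, encoding="unicode")

    print(minimal_domain_xml("instance-00000081",
                             "f3af9615-94aa-4498-ab5c-3fadcab4a4e6", 131072, 1))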
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.466 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Preparing to wait for external event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.466 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.467 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.467 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
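The quick acquire/release of the "<uuid>-events" lock above is the prepare-then-wait pattern: a waitable event is registered under a per-instance lock before the action that will fire it (here, network-vif-plugged from Neutron). A rough sketch of that pattern with illustrative names, not nova's InstanceEvents class:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()     # the "<uuid>-events" lock in the log
            self._events = {}

        def prepare(self, instance_uuid, event_name):
            # Register (or fetch) the event *before* triggering the external action.
            with self._lock:
                key = (instance_uuid, event_name)
                return self._events.setdefault(key, threading.Event())

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._events.pop((instance_uuid, event_name), None)

    # waiter = events.prepare(uuid, "network-vif-plugged-<port>")
    # ...plug the VIF, then the notification handler calls events.pop(...).set()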
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.469 244018 DEBUG nova.virt.libvirt.vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:41Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.469 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.470 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.471 244018 DEBUG os_vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.473 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.473 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.478 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaadc9e87-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.479 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaadc9e87-c7, col_values=(('external_ids', {'iface-id': 'aadc9e87-c7c5-4cb0-8906-64017d5aa14b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:22:67', 'vm-uuid': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:48 compute-0 NetworkManager[49836]: <info>  [1772023968.4823] manager: (tapaadc9e87-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.490 244018 INFO os_vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7')
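The two ovsdb transactions above (AddBridgeCommand with may_exist, then AddPortCommand plus a DbSetCommand writing the external_ids) have a familiar CLI equivalent. A sketch only: os-vif talks to ovsdb-server through ovsdbapp rather than shelling out, but the effect matches this ovs-vsctl invocation:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # One atomic ovs-vsctl transaction: ensure bridge, ensure port,
        # then tag the Interface row so ovn-controller can bind it.
        subprocess.check_call([
            "ovs-vsctl",
            "--", "--may-exist", "add-br", bridge,
            "--", "--may-exist", "add-port", bridge, dev,
            "--", "set", "Interface", dev,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])

    # plug_ovs_port("br-int", "tapaadc9e87-c7",
    #               "aadc9e87-c7c5-4cb0-8906-64017d5aa14b",
    #               "fa:16:3e:c5:22:67", "f3af9615-94aa-4498-ab5c-3fadcab4a4e6")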
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.553 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c5:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Using config drive
Feb 25 12:52:48 compute-0 nova_compute[244014]: 2026-02-25 12:52:48.577 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Feb 25 12:52:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3814262031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/650191885' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:52:49 compute-0 ceph-mon[76335]: pgmap v2167: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.031 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating config drive at /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.037 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4hco5kjk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.126 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.177 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4hco5kjk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
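The mkisofs run above packs a staging directory of metadata files into an ISO9660 image labelled "config-2", which is what cloud-init probes for. A sketch mirroring the flags from the log; the staging path is illustrative (the log's /tmp/tmp4hco5kjk is a throwaway tempdir):

    import subprocess

    def build_config_drive(staging_dir, out_path):
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", out_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute",   # nova appends its version string here
            "-quiet", "-J", "-r",
            "-V", "config-2",                    # the volume label cloud-init looks for
            staging_dir,
        ])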
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.214 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.218 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.355 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.356 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deleting local config drive /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config because it was imported into RBD.
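With images_type=rbd the local ISO is only a staging artifact: it is imported into the vms pool as <uuid>_disk.config (matching the cdrom <source> in the domain XML above) and then deleted. A sketch of that step with the same CLI flags the log shows:

    import os
    import subprocess

    def import_config_drive(local_iso, pool, image_name,
                            client="openstack", conf="/etc/ceph/ceph.conf"):
        subprocess.check_call([
            "rbd", "import", "--pool", pool, local_iso, image_name,
            "--image-format=2", "--id", client, "--conf", conf,
        ])
        # "Deleting local config drive ... because it was imported into RBD."
        os.unlink(local_iso)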
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 kernel: tapaadc9e87-c7: entered promiscuous mode
Feb 25 12:52:50 compute-0 NetworkManager[49836]: <info>  [1772023970.4188] manager: (tapaadc9e87-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 ovn_controller[147040]: 2026-02-25T12:52:50Z|01368|binding|INFO|Claiming lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b for this chassis.
Feb 25 12:52:50 compute-0 ovn_controller[147040]: 2026-02-25T12:52:50Z|01369|binding|INFO|aadc9e87-c7c5-4cb0-8906-64017d5aa14b: Claiming fa:16:3e:c5:22:67 10.100.0.30
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.429 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:22:67 10.100.0.30'], port_security=['fa:16:3e:c5:22:67 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b63d7e28-c418-4eb5-bd68-2c67c6b10e04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=aadc9e87-c7c5-4cb0-8906-64017d5aa14b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.431 157129 INFO neutron.agent.ovn.metadata.agent [-] Port aadc9e87-c7c5-4cb0-8906-64017d5aa14b in datapath e68b4f5a-a28d-4155-93af-2997c1302403 bound to our chassis
Feb 25 12:52:50 compute-0 ovn_controller[147040]: 2026-02-25T12:52:50Z|01370|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b ovn-installed in OVS
Feb 25 12:52:50 compute-0 ovn_controller[147040]: 2026-02-25T12:52:50Z|01371|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b up in Southbound
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.433 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.454 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1894df7-488d-492a-9e8d-293fe18fba70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:50 compute-0 systemd-machined[210048]: New machine qemu-161-instance-00000081.
Feb 25 12:52:50 compute-0 systemd-udevd[361780]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:52:50 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Feb 25 12:52:50 compute-0 NetworkManager[49836]: <info>  [1772023970.4820] device (tapaadc9e87-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:52:50 compute-0 NetworkManager[49836]: <info>  [1772023970.4833] device (tapaadc9e87-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.485 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f71b434e-a89f-490d-91bb-f3e6a71b1212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.490 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[76848a39-43fe-4170-aed9-101798bd039b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[670f51ab-1afe-41c7-8293-3254298e9bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13576499-bd43-4d43-8c29-a63e14a91a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361790, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.563 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f00855c-1a58-4e52-81b7-3702f5110f5d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592135, 'tstamp': 592135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361792, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592138, 'tstamp': 592138}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361792, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
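
The two privsep replies above are netlink dumps in pyroute2's message format (an RTM_NEWLINK link dump with IFLA_* attributes, then an RTM_NEWADDR address dump with IFA_* attributes), executed inside the ovnmeta- namespace named in each message's 'target' field. A minimal sketch of reproducing the same dumps with pyroute2, reusing the namespace and interface names from the log (run as root; whether this exact call path matches neutron's internals is an assumption):

    from pyroute2 import NetNS

    NS = 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403'

    with NetNS(NS) as ns:
        # RTM_NEWLINK dump: yields the IFLA_* attribute list seen above.
        idx = ns.link_lookup(ifname='tape68b4f5a-a1')[0]
        for link in ns.get_links(idx):
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_ADDRESS'))
        # RTM_NEWADDR dump: yields the IFA_* entries (10.100.0.17/28 plus
        # the 169.254.169.254/32 metadata address).
        for addr in ns.get_addr(index=idx):
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
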
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.565 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.569 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:50 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
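
The DelPort/AddPort/DbSet sequence above is the metadata agent re-homing its veth leg onto br-int and tagging the Interface record with the Neutron port id; "Transaction caused no change" means the rows were already in the desired state, so ovsdbapp skipped the commit. A minimal ovsdbapp sketch issuing the same commands (the socket path is the stock OVS default, an assumption; port, bridge, and iface-id values are copied from the log; the agent runs each command as its own txn n=1 above, batched here only for brevity):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # The same three commands the agent logs above.
        txn.add(api.del_port('tape68b4f5a-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape68b4f5a-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape68b4f5a-a0',
            ('external_ids',
             {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'})))
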
Feb 25 12:52:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.680 244018 DEBUG nova.compute.manager [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.680 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG nova.compute.manager [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Processing event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.758 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updated VIF entry in instance network info cache for port aadc9e87-c7c5-4cb0-8906-64017d5aa14b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.759 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.781 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.796 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023955.7955625, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.796 244018 INFO nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Stopped (Lifecycle Event)
Feb 25 12:52:50 compute-0 nova_compute[244014]: 2026-02-25 12:52:50.815 244018 DEBUG nova.compute.manager [None req-b27aab78-2568-4047-9f44-d8c34f49e484 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.154 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.155 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.1537647, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.155 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Started (Lifecycle Event)
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.159 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.163 244018 INFO nova.virt.libvirt.driver [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance spawned successfully.
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.163 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.179 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.184 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.187 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.212 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.153989, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.212 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Paused (Lifecycle Event)
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.236 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.240 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.1581452, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.240 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Resumed (Lifecycle Event)
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.245 244018 INFO nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 9.69 seconds to spawn the instance on the hypervisor.
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.245 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.272 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.292 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.303 244018 INFO nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 10.88 seconds to build instance.
Feb 25 12:52:51 compute-0 nova_compute[244014]: 2026-02-25 12:52:51.319 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
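
The Acquiring/acquired/"released" DEBUG triplets through this section come from oslo.concurrency. Judging by the cited code paths, the "-events" locks go through the synchronized decorator's inner() wrapper (lockutils.py:404/409/423) while the "refresh_cache-..." locks use the lock() context manager (lockutils.py:312/315/333). A minimal sketch of both patterns, with lock names borrowed from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events')
    def _pop_event():
        # Critical section; the wrapper logs the waited/held times seen above.
        pass

    _pop_event()

    with lockutils.lock('refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e'):
        # Emits the Acquiring/Acquired/Releasing lock lines.
        pass
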
Feb 25 12:52:51 compute-0 ceph-mon[76335]: pgmap v2168: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:52:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:52.128 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:52:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 429 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.808 244018 DEBUG nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.809 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:52:52 compute-0 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 WARNING nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received unexpected event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with vm_state active and task_state None.
Feb 25 12:52:53 compute-0 nova_compute[244014]: 2026-02-25 12:52:53.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:53 compute-0 ceph-mon[76335]: pgmap v2169: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 429 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Feb 25 12:52:54 compute-0 ovn_controller[147040]: 2026-02-25T12:52:54Z|01372|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 12:52:54 compute-0 ovn_controller[147040]: 2026-02-25T12:52:54Z|01373|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 12:52:54 compute-0 nova_compute[244014]: 2026-02-25 12:52:54.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Feb 25 12:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.030 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.031 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:52:55 compute-0 nova_compute[244014]: 2026-02-25 12:52:55.110 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023960.108923, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:52:55 compute-0 nova_compute[244014]: 2026-02-25 12:52:55.110 244018 INFO nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Stopped (Lifecycle Event)
Feb 25 12:52:55 compute-0 nova_compute[244014]: 2026-02-25 12:52:55.153 244018 DEBUG nova.compute.manager [None req-2bdfcbab-26be-485b-8fdf-8ea52490f415 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:52:55 compute-0 nova_compute[244014]: 2026-02-25 12:52:55.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:55 compute-0 ceph-mon[76335]: pgmap v2170: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Feb 25 12:52:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 76 op/s
Feb 25 12:52:57 compute-0 ceph-mon[76335]: pgmap v2171: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 76 op/s
Feb 25 12:52:58 compute-0 nova_compute[244014]: 2026-02-25 12:52:58.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:52:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 88 op/s
Feb 25 12:52:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:52:59 compute-0 ceph-mon[76335]: pgmap v2172: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 88 op/s
Feb 25 12:52:59 compute-0 nova_compute[244014]: 2026-02-25 12:52:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:52:59 compute-0 nova_compute[244014]: 2026-02-25 12:52:59.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:52:59 compute-0 nova_compute[244014]: 2026-02-25 12:52:59.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:53:00 compute-0 nova_compute[244014]: 2026-02-25 12:53:00.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 25 12:53:00 compute-0 nova_compute[244014]: 2026-02-25 12:53:00.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:00 compute-0 nova_compute[244014]: 2026-02-25 12:53:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:00 compute-0 nova_compute[244014]: 2026-02-25 12:53:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
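
The _run_pending_deletes and _cleanup_incomplete_migrations entries above are oslo.service periodic tasks being driven by nova's task runner. A minimal sketch of declaring and driving one such task (the class, spacing, and run_immediately flag are illustrative assumptions, not nova's real configuration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600, run_immediately=True)
        def _run_pending_deletes(self, context):
            # Task body; nova's version scans for soft-deleted instances
            # whose on-disk resources still need cleanup.
            print('Cleaning up deleted instances')

    mgr = Manager(cfg.CONF)
    # The service loop invokes this on a timer; it emits the
    # "Running periodic task ..." DEBUG lines seen above.
    mgr.run_periodic_tasks(context=None)
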
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:01 compute-0 ceph-mon[76335]: pgmap v2173: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 25 12:53:02 compute-0 ovn_controller[147040]: 2026-02-25T12:53:02Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:22:67 10.100.0.30
Feb 25 12:53:02 compute-0 ovn_controller[147040]: 2026-02-25T12:53:02Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:22:67 10.100.0.30
Feb 25 12:53:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 101 op/s
Feb 25 12:53:03 compute-0 nova_compute[244014]: 2026-02-25 12:53:03.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:03 compute-0 sshd-session[361836]: Received disconnect from 45.148.10.157 port 64212:11:  [preauth]
Feb 25 12:53:03 compute-0 sshd-session[361836]: Disconnected from authenticating user root 45.148.10.157 port 64212 [preauth]
Feb 25 12:53:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:03 compute-0 nova_compute[244014]: 2026-02-25 12:53:03.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:03 compute-0 ceph-mon[76335]: pgmap v2174: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 101 op/s
Feb 25 12:53:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 78 op/s
Feb 25 12:53:04 compute-0 nova_compute[244014]: 2026-02-25 12:53:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:05 compute-0 nova_compute[244014]: 2026-02-25 12:53:05.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:05 compute-0 ceph-mon[76335]: pgmap v2175: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 78 op/s
Feb 25 12:53:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.954 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8::f816:3eff:fe1c:5e1e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09662fcb-392d-469a-981c-54d31225748b) old=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.956 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09662fcb-392d-469a-981c-54d31225748b in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd updated
Feb 25 12:53:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.958 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:53:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.960 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3b0f15-ecf4-49de-83cc-8095128e61a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
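
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's row-event machinery: the agent registers RowEvent subclasses against the OVN southbound IDL, and every committed Port_Binding change is tested against them. A minimal sketch of such an event class, mirroring only the constructor arguments printed in the log (neutron's real class adds further filtering):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the exact tuple shown in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            print('Metadata Port %s in datapath %s updated'
                  % (row.logical_port, row.datapath.uuid))

Registration would go through the IDL connection's notify handler (e.g. idl.notify_handler.watch_event(PortBindingUpdatedEvent())), after which each matching row change is dispatched to run().
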
Feb 25 12:53:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 301 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Feb 25 12:53:06 compute-0 podman[361838]: 2026-02-25 12:53:06.754653685 +0000 UTC m=+0.100178220 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 25 12:53:06 compute-0 podman[361839]: 2026-02-25 12:53:06.770081261 +0000 UTC m=+0.110302706 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:53:07 compute-0 ceph-mon[76335]: pgmap v2176: 305 pgs: 305 active+clean; 301 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Feb 25 12:53:08 compute-0 nova_compute[244014]: 2026-02-25 12:53:08.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Feb 25 12:53:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:09 compute-0 ceph-mon[76335]: pgmap v2177: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Feb 25 12:53:10 compute-0 nova_compute[244014]: 2026-02-25 12:53:10.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:53:11 compute-0 nova_compute[244014]: 2026-02-25 12:53:11.348 244018 DEBUG nova.compute.manager [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:11 compute-0 nova_compute[244014]: 2026-02-25 12:53:11.349 244018 DEBUG nova.compute.manager [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-77034e66-a3e3-47d0-b467-29a045343530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:53:11 compute-0 nova_compute[244014]: 2026-02-25 12:53:11.350 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:11 compute-0 nova_compute[244014]: 2026-02-25 12:53:11.350 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:11 compute-0 nova_compute[244014]: 2026-02-25 12:53:11.351 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 77034e66-a3e3-47d0-b467-29a045343530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:53:11 compute-0 ceph-mon[76335]: pgmap v2178: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:53:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:53:12 compute-0 sshd-session[361881]: Invalid user sol from 80.94.92.186 port 47644
Feb 25 12:53:13 compute-0 sshd-session[361881]: Connection closed by invalid user sol 80.94.92.186 port 47644 [preauth]
Feb 25 12:53:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.296 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8:0:1:f816:3eff:fe1c:5e1e 2001:db8::f816:3eff:fe1c:5e1e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe1c:5e1e/64 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09662fcb-392d-469a-981c-54d31225748b) old=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8::f816:3eff:fe1c:5e1e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.299 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09662fcb-392d-469a-981c-54d31225748b in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd updated
Feb 25 12:53:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.302 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:53:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49279396-c2fb-4b81-97cb-5bf4d5b9bc04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:13 compute-0 nova_compute[244014]: 2026-02-25 12:53:13.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:13 compute-0 ceph-mon[76335]: pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:53:14 compute-0 nova_compute[244014]: 2026-02-25 12:53:14.041 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 77034e66-a3e3-47d0-b467-29a045343530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:53:14 compute-0 nova_compute[244014]: 2026-02-25 12:53:14.042 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:14 compute-0 nova_compute[244014]: 2026-02-25 12:53:14.083 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.716 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.717 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.718 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.718 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.719 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.720 244018 INFO nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Terminating instance
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.722 244018 DEBUG nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:53:15 compute-0 kernel: tapaadc9e87-c7 (unregistering): left promiscuous mode
Feb 25 12:53:15 compute-0 NetworkManager[49836]: <info>  [1772023995.9365] device (tapaadc9e87-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:53:15 compute-0 ovn_controller[147040]: 2026-02-25T12:53:15Z|01374|binding|INFO|Releasing lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b from this chassis (sb_readonly=0)
Feb 25 12:53:15 compute-0 ovn_controller[147040]: 2026-02-25T12:53:15Z|01375|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b down in Southbound
Feb 25 12:53:15 compute-0 ovn_controller[147040]: 2026-02-25T12:53:15Z|01376|binding|INFO|Removing iface tapaadc9e87-c7 ovn-installed in OVS
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:15 compute-0 nova_compute[244014]: 2026-02-25 12:53:15.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.961 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:22:67 10.100.0.30'], port_security=['fa:16:3e:c5:22:67 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b63d7e28-c418-4eb5-bd68-2c67c6b10e04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=aadc9e87-c7c5-4cb0-8906-64017d5aa14b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Port aadc9e87-c7c5-4cb0-8906-64017d5aa14b in datapath e68b4f5a-a28d-4155-93af-2997c1302403 unbound from our chassis
Feb 25 12:53:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.964 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 12:53:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.983 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d39222d-4162-4850-ab9c-0ec0132e75eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:15 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Feb 25 12:53:15 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 12.650s CPU time.
Feb 25 12:53:15 compute-0 systemd-machined[210048]: Machine qemu-161-instance-00000081 terminated.
Feb 25 12:53:16 compute-0 ceph-mon[76335]: pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.021 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[84fe199c-4bfd-447e-b325-b2913c4f77bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.025 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a336a2-57a8-4244-b2b6-d5ac37510d60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.056 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[25dc83a0-fde6-45dc-9cf9-7692f7dced29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.075 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[528665e4-1588-4a1c-bc61-24242d66e0b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361894, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c44262-3746-4fe5-a454-f2f79d7d5cd7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592135, 'tstamp': 592135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361895, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592138, 'tstamp': 592138}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361895, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.106 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.116 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.168 244018 INFO nova.virt.libvirt.driver [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance destroyed successfully.
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.169 244018 DEBUG nova.objects.instance [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.186 244018 DEBUG nova.virt.libvirt.vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:51Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.186 244018 DEBUG nova.network.os_vif_util [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.187 244018 DEBUG nova.network.os_vif_util [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.188 244018 DEBUG os_vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.190 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaadc9e87-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.199 244018 INFO os_vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7')
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.276 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.278 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.278 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:16 compute-0 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:53:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.885 244018 INFO nova.virt.libvirt.driver [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deleting instance files /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_del
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.888 244018 INFO nova.virt.libvirt.driver [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deletion of /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_del complete
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.946 244018 INFO nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 2.22 seconds to destroy the instance on the hypervisor.
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.947 244018 DEBUG oslo.service.loopingcall [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.947 244018 DEBUG nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:53:17 compute-0 nova_compute[244014]: 2026-02-25 12:53:17.948 244018 DEBUG nova.network.neutron [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:53:18 compute-0 ceph-mon[76335]: pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 12:53:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 25 op/s
Feb 25 12:53:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.100 244018 DEBUG nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.101 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.102 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.102 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.103 244018 DEBUG nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:19 compute-0 nova_compute[244014]: 2026-02-25 12:53:19.103 244018 WARNING nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received unexpected event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with vm_state active and task_state deleting.
Feb 25 12:53:20 compute-0 ceph-mon[76335]: pgmap v2182: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 25 op/s
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.297 244018 DEBUG nova.network.neutron [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.314 244018 INFO nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 2.37 seconds to deallocate network for instance.
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.359 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.360 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.368 244018 DEBUG nova.compute.manager [req-c232c262-cc28-46c5-a877-d41a9a5e2796 req-2bc9ce8a-7c74-40d0-9fca-ee47ad1b6d32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-deleted-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.426 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:20 compute-0 nova_compute[244014]: 2026-02-25 12:53:20.431 244018 DEBUG oslo_concurrency.processutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 23 KiB/s wr, 10 op/s
Feb 25 12:53:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080829712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.044 244018 DEBUG oslo_concurrency.processutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.050 244018 DEBUG nova.compute.provider_tree [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2080829712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.066 244018 DEBUG nova.scheduler.client.report [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.096 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.134 244018 INFO nova.scheduler.client.report [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance f3af9615-94aa-4498-ab5c-3fadcab4a4e6
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.215 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.809 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.837 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.936 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.936 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.945 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.946 244018 INFO nova.compute.claims [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.996 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:21 compute-0 nova_compute[244014]: 2026-02-25 12:53:21.997 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.026 244018 DEBUG nova.objects.instance [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.051 244018 DEBUG nova.virt.libvirt.vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.052 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.053 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.063 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.067 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.071 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Attempting to detach device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.072 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:63:eb:19"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <target dev="tap77034e66-a3"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </interface>
Feb 25 12:53:22 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:53:22 compute-0 ceph-mon[76335]: pgmap v2183: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 23 KiB/s wr, 10 op/s
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.081 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.086 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface>not found in domain: <domain type='kvm' id='159'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <name>instance-0000007f</name>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:c3:63:14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='tap4f59e1f7-f0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:63:eb:19'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='tap77034e66-a3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='net1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </target>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </console>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:22 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.087 244018 INFO nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the persistent domain config.
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.088 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] (1/8): Attempting to detach device tap77034e66-a3 with device alias net1 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.089 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <mac address="fa:16:3e:63:eb:19"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <model type="virtio"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <mtu size="1442"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <target dev="tap77034e66-a3"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </interface>
Feb 25 12:53:22 compute-0 nova_compute[244014]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.110 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:22 compute-0 kernel: tap77034e66-a3 (unregistering): left promiscuous mode
Feb 25 12:53:22 compute-0 NetworkManager[49836]: <info>  [1772024002.2019] device (tap77034e66-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.213 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772024002.2125895, 809da994-7551-4f52-8920-b0dfaa2ef73e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 ovn_controller[147040]: 2026-02-25T12:53:22Z|01377|binding|INFO|Releasing lport 77034e66-a3e3-47d0-b467-29a045343530 from this chassis (sb_readonly=0)
Feb 25 12:53:22 compute-0 ovn_controller[147040]: 2026-02-25T12:53:22Z|01378|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 down in Southbound
Feb 25 12:53:22 compute-0 ovn_controller[147040]: 2026-02-25T12:53:22Z|01379|binding|INFO|Removing iface tap77034e66-a3 ovn-installed in OVS
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.220 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Start waiting for the detach event from libvirt for device tap77034e66-a3 with device alias net1 for instance 809da994-7551-4f52-8920-b0dfaa2ef73e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.221 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.223 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:eb:19 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=77034e66-a3e3-47d0-b467-29a045343530) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.225 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 77034e66-a3e3-47d0-b467-29a045343530 in datapath e68b4f5a-a28d-4155-93af-2997c1302403 unbound from our chassis
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.227 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e68b4f5a-a28d-4155-93af-2997c1302403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1277e46-e0e5-4b4a-a4c8-c86b10fdf3cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.228 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 namespace which is not needed anymore
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.229 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <name>instance-0000007f</name>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:c3:63:14'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target dev='tap4f59e1f7-f0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       </target>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </console>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:22 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
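The dump above is the live domain XML that nova's get_interface_by_cfg walked while detaching the port. A minimal sketch of retrieving the same XML with the libvirt-python bindings (the qemu:///system URI is an assumption; the UUID is taken from the log; this is an illustration, not nova's exact code path):

    # Fetch the live domain XML for the instance shown above.
    import libvirt

    conn = libvirt.open('qemu:///system')  # hypervisor URI: assumption
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    print(dom.XMLDesc(0))                  # 0 = current live definition
    conn.close()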
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.229 244018 INFO nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the live domain config.
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.230 244018 DEBUG nova.virt.libvirt.vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.231 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.231 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.232 244018 DEBUG os_vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.234 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77034e66-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.239 244018 INFO os_vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')
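The DelPortCommand at 12:53:22.234 is os-vif dropping the tap device from br-int through ovsdbapp. A rough stand-alone equivalent (the OVSDB socket path is an assumption for a default Open vSwitch install; os-vif wires up its own connection internally):

    # Delete an OVS port the way the logged DelPortCommand does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True makes the delete a no-op if the port is already gone.
    api.del_port('tap77034e66-a3', bridge='br-int',
                 if_exists=True).execute(check_error=True)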
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.240 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:22 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:22 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:22 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:22 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:22 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
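set_metadata above writes the rebuilt <nova:instance> element, now listing only port 4f59e1f7, into the running domain. A hedged sketch of the underlying libvirt call (the 'instance' key matches the namespace alias libvirt echoes back later in this log; the flag choice is an assumption):

    # Replace the nova metadata element on a live domain.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    xml = '<instance xmlns="%s">...</instance>' % NOVA_NS  # payload as logged above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml,
                    'instance', NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)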
Feb 25 12:53:22 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : haproxy version is 2.8.14-c23fe91
Feb 25 12:53:22 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : path to executable is /usr/sbin/haproxy
Feb 25 12:53:22 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [WARNING]  (360551) : Exiting Master process...
Feb 25 12:53:22 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [ALERT]    (360551) : Current worker (360553) exited with code 143 (Terminated)
Feb 25 12:53:22 compute-0 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [WARNING]  (360551) : All workers exited. Exiting... (0)
Feb 25 12:53:22 compute-0 systemd[1]: libpod-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope: Deactivated successfully.
Feb 25 12:53:22 compute-0 conmon[360547]: conmon 62a431ef2588502cb52e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope/container/memory.events
Feb 25 12:53:22 compute-0 podman[361990]: 2026-02-25 12:53:22.375829521 +0000 UTC m=+0.047802431 container died 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 12:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e-userdata-shm.mount: Deactivated successfully.
Feb 25 12:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce54b1a0b6462e52effccd4126e7ec55dceac4038acb86f1df2038cf5f156ad6-merged.mount: Deactivated successfully.
Feb 25 12:53:22 compute-0 podman[361990]: 2026-02-25 12:53:22.433926131 +0000 UTC m=+0.105899051 container cleanup 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:53:22 compute-0 systemd[1]: libpod-conmon-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope: Deactivated successfully.
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.492 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.492 244018 DEBUG nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.493 244018 WARNING nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.
Feb 25 12:53:22 compute-0 podman[362018]: 2026-02-25 12:53:22.502442346 +0000 UTC m=+0.049679714 container remove 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57ccbdb9-f0b1-4832-8c29-b03023480764]: (4, ('Wed Feb 25 12:53:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 (62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e)\n62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e\nWed Feb 25 12:53:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 (62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e)\n62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
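The privsep reply above captures neutron's teardown script stopping, then deleting, the per-network haproxy metadata-proxy container. The same two steps reduced to the podman CLI (container name from the log; the real script adds timestamps and error handling):

    # Stop and remove the metadata-proxy container named in the log.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)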
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.510 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a87ed3df-1413-4a0c-b6f4-e60298612f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.512 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 kernel: tape68b4f5a-a0: left promiscuous mode
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3dce6776-9039-4678-aeaf-f853fcb22adc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1edd0c4-5fa1-4338-b748-b14be5e03767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.548 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3767ecfe-4407-4d44-aaf3-6a2076f5fc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76862e2b-2194-4273-8601-5bd0fca78805]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592117, 'reachable_time': 20926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362033, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 systemd[1]: run-netns-ovnmeta\x2de68b4f5a\x2da28d\x2d4155\x2d93af\x2d2997c1302403.mount: Deactivated successfully.
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.569 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
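remove_netns above is neutron's privileged wrapper for deleting the now-empty ovnmeta namespace. A minimal stand-alone version with pyroute2, the library neutron's privileged ip_lib delegates to (must run with the same privileges as the privsep daemon):

    # Delete the network namespace named in the log.
    from pyroute2 import netns

    netns.remove('ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403')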
Feb 25 12:53:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.570 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f27c2a-68da-49f2-bf92-b11e2936855c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Feb 25 12:53:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636794886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.743 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.749 244018 DEBUG nova.compute.provider_tree [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.765 244018 DEBUG nova.scheduler.client.report [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
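The inventory dict above is what the resource tracker reports to placement; the schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the logged numbers:

    # Effective capacity from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2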
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.788 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.788 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.843 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.843 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.862 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.888 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.973 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.974 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:53:22 compute-0 nova_compute[244014]: 2026-02-25 12:53:22.974 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating image(s)
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.002 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.028 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.053 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.056 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/636794886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.138 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
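The qemu-img run above probes the cached base image: --force-share permits reading an image that may be open elsewhere, and --output=json makes the result parseable (nova additionally caps the process with prlimit). A plain-subprocess sketch of the same probe, with the path taken from the log:

    # Query image metadata the way the logged command does.
    import json
    import os
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    out = subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json'],
        env=dict(os.environ, LC_ALL='C', LANG='C'))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. qcow2 and size in bytes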
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.140 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.140 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.141 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.196 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.200 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
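The rbd import above copies the cached base image into the vms pool as the new instance's root disk. The same invocation issued through oslo.concurrency, which is where the surrounding "Running cmd" / "CMD ... returned" lines come from (arguments copied from the log):

    # Run the logged rbd import via oslo.concurrency's executor.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')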
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.239 244018 DEBUG nova.policy [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.383 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.384 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.384 244018 DEBUG nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.432 244018 DEBUG nova.compute.manager [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-deleted-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.433 244018 INFO nova.compute.manager [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Neutron deleted interface 77034e66-a3e3-47d0-b467-29a045343530; detaching it from the instance and deleting it from the info cache
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.433 244018 DEBUG nova.network.neutron [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.454 244018 DEBUG nova.objects.instance [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.487 244018 DEBUG nova.objects.instance [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.514 244018 DEBUG nova.virt.libvirt.vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.515 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.516 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.520 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.523 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <name>instance-0000007f</name>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:c3:63:14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='tap4f59e1f7-f0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </target>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </console>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:23 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.524 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.528 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <name>instance-0000007f</name>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <memory unit='KiB'>131072</memory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <vcpu placement='static'>1</vcpu>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <resource>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <partition>/machine</partition>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </resource>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <sysinfo type='smbios'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='manufacturer'>RDO</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='product'>OpenStack Compute</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <entry name='family'>Virtual Machine</entry>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <boot dev='hd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <smbios mode='sysinfo'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <vmcoreinfo state='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <cpu mode='custom' match='exact' check='full'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <vendor>AMD</vendor>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='x2apic'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc-deadline'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='hypervisor'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='tsc_adjust'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='spec-ctrl'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='stibp'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='cmp_legacy'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='overflow-recov'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='succor'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='ibrs'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='amd-ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='virt-ssbd'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='lbrv'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='tsc-scale'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='vmcb-clean'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='flushbyasid'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='pause-filter'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='pfthreshold'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='xsaves'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='svm'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='require' name='topoext'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='npt'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <feature policy='disable' name='nrip-save'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <clock offset='utc'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='pit' tickpolicy='delay'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <timer name='hpet' present='no'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_poweroff>destroy</on_poweroff>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_reboot>restart</on_reboot>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <on_crash>destroy</on_crash>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <disk type='network' device='disk'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='vda' bus='virtio'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='virtio-disk0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <disk type='network' device='cdrom'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='qemu' type='raw' cache='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <auth username='openstack'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <host name='192.168.122.100' port='6789'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='sda' bus='sata'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <readonly/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='sata0-0-0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='0' model='pcie-root'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pcie.0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='1' port='0x10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='2' port='0x11'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='3' port='0x12'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='4' port='0x13'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='5' port='0x14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='6' port='0x15'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='7' port='0x16'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='8' port='0x17'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.8'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='9' port='0x18'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.9'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='10' port='0x19'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='11' port='0x1a'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.11'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='12' port='0x1b'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.12'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='13' port='0x1c'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.13'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='14' port='0x1d'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='15' port='0x1e'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.15'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='16' port='0x1f'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.16'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='17' port='0x20'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.17'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='18' port='0x21'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.18'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='19' port='0x22'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.19'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='20' port='0x23'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.20'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='21' port='0x24'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.21'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='22' port='0x25'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.22'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='23' port='0x26'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.23'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='24' port='0x27'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.24'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-root-port'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target chassis='25' port='0x28'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.25'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model name='pcie-pci-bridge'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='pci.26'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='usb'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <controller type='sata' index='0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='ide'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </controller>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <interface type='ethernet'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <mac address='fa:16:3e:c3:63:14'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target dev='tap4f59e1f7-f0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model type='virtio'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <driver name='vhost' rx_queue_size='512'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <mtu size='1442'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='net0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <serial type='pty'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target type='isa-serial' port='0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:         <model name='isa-serial'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       </target>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <console type='pty' tty='/dev/pts/1'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <source path='/dev/pts/1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <target type='serial' port='0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='serial0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </console>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='tablet' bus='usb'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='usb' bus='0' port='1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='mouse' bus='ps2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input1'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <input type='keyboard' bus='ps2'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='input2'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </input>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <listen type='address' address='::0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </graphics>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <audio id='1' type='none'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <model type='virtio' heads='1' primary='yes'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='video0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <watchdog model='itco' action='reset'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='watchdog0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </watchdog>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <memballoon model='virtio'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <stats period='10'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='balloon0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <rng model='virtio'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <backend model='random'>/dev/urandom</backend>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <alias name='rng0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <label>+107:+107</label>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <imagelabel>+107:+107</imagelabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </seclabel>
Feb 25 12:53:23 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:23 compute-0 nova_compute[244014]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.529 244018 WARNING nova.virt.libvirt.driver [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Detaching interface fa:16:3e:63:eb:19 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap77034e66-a3' not found.
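The DeviceNotFound warning above falls out of the domain XML dump that precedes it: get_interface_by_cfg re-reads the live XML and looks for an <interface> whose MAC matches the one being detached, and fa:16:3e:63:eb:19 is no longer listed. A minimal standalone sketch of that lookup with the libvirt-python bindings, reusing the instance UUID and MAC from the log (illustrative only, not Nova's implementation):

    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    root = ET.fromstring(dom.XMLDesc(0))  # same XML as the dump above

    def find_interface_by_mac(domain_root, mac):
        # Walk <devices>/<interface> entries and match on <mac address='...'/>.
        for iface in domain_root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            if mac_el is not None and mac_el.get('address') == mac:
                return iface
        return None

    if find_interface_by_mac(root, 'fa:16:3e:63:eb:19') is None:
        # Nova maps this condition to nova.exception.DeviceNotFound.
        print('device no longer found on the guest')
    conn.close()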
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.530 244018 DEBUG nova.virt.libvirt.vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.530 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.531 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.531 244018 DEBUG os_vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.534 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77034e66-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.535 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.537 244018 INFO os_vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')
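The DelPortCommand and "Transaction caused no change" lines show why the unplug still succeeds: ovsdbapp's del_port is issued with if_exists=True, so deleting an already-removed port is a harmless no-op. A minimal sketch of the same idempotent deletion through ovsdbapp's vsctl-style API (the ovsdb-server socket path is an assumption; adjust to the local deployment):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket location
    ovsdb_conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(ovsdb_conn)

    # if_exists=True turns a delete of a missing port into a no-op,
    # which is exactly the "Transaction caused no change" case above.
    api.del_port('tap77034e66-a3', bridge='br-int',
                 if_exists=True).execute(check_error=True)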
Feb 25 12:53:23 compute-0 nova_compute[244014]: 2026-02-25 12:53:23.539 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:creationTime>2026-02-25 12:53:23</nova:creationTime>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:flavor name="m1.nano">
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:memory>128</nova:memory>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:disk>1</nova:disk>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:swap>0</nova:swap>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:flavor>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:owner>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   <nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 12:53:23 compute-0 nova_compute[244014]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 12:53:23 compute-0 nova_compute[244014]:     </nova:port>
Feb 25 12:53:23 compute-0 nova_compute[244014]:   </nova:ports>
Feb 25 12:53:23 compute-0 nova_compute[244014]: </nova:instance>
Feb 25 12:53:23 compute-0 nova_compute[244014]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
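set_metadata stores the <nova:instance> block above in the domain's metadata section. A roughly equivalent standalone call with libvirt-python (the XML body is the block just logged; the affect flags are an assumption, Nova chooses them based on domain state):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    metadata_xml = '<instance> ... </instance>'  # the nova:instance body above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('809da994-7551-4f52-8920-b0dfaa2ef73e')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                    metadata_xml,
                    'nova',     # namespace prefix, as in <nova:instance>
                    NOVA_NS,    # namespace URI from the dump above
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()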
Feb 25 12:53:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:24 compute-0 ceph-mon[76335]: pgmap v2184: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.328 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.441 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
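These two lines are the Ceph-backed image path for the new instance: the cached base image is pushed into the vms pool with rbd import, then the image is grown to the flavor's 1 GiB root disk. The resize step, sketched with the python-rbd bindings (pool, image name, and size are taken from the log; the credentials are assumed to match the logged --id openstack):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk') as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged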
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.597 244018 DEBUG nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.598 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.598 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 DEBUG nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 WARNING nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.601 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Successfully created port: f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:53:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.740 244018 DEBUG nova.objects.instance [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.753 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.753 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Ensure instance console log exists: /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.754 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.754 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:24 compute-0 nova_compute[244014]: 2026-02-25 12:53:24.755 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:25 compute-0 ovn_controller[147040]: 2026-02-25T12:53:25Z|01380|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.394 244018 INFO nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Port 77034e66-a3e3-47d0-b467-29a045343530 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.394 244018 DEBUG nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.452 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.481 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.795 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Successfully updated port: f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.882 244018 DEBUG nova.compute.manager [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.883 244018 DEBUG nova.compute.manager [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:53:25 compute-0 nova_compute[244014]: 2026-02-25 12:53:25.883 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.006 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.157 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.158 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.158 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.159 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.159 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
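The acquire/release pairs above are oslo.concurrency's locking pattern: do_terminate_instance serializes on the instance UUID, then briefly takes the per-instance '-events' lock so pending external events can be cleared atomically. The same nesting in isolation (lock names taken from the log):

    from oslo_concurrency import lockutils

    uuid = '809da994-7551-4f52-8920-b0dfaa2ef73e'
    with lockutils.lock(uuid):                 # serialize the whole teardown
        with lockutils.lock(uuid + '-events'):
            pass  # clear the instance's pending-event table, then release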
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.161 244018 INFO nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Terminating instance
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.163 244018 DEBUG nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:53:26 compute-0 kernel: tap4f59e1f7-f0 (unregistering): left promiscuous mode
Feb 25 12:53:26 compute-0 NetworkManager[49836]: <info>  [1772024006.2204] device (tap4f59e1f7-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 ovn_controller[147040]: 2026-02-25T12:53:26Z|01381|binding|INFO|Releasing lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a from this chassis (sb_readonly=0)
Feb 25 12:53:26 compute-0 ovn_controller[147040]: 2026-02-25T12:53:26Z|01382|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a down in Southbound
Feb 25 12:53:26 compute-0 ovn_controller[147040]: 2026-02-25T12:53:26Z|01383|binding|INFO|Removing iface tap4f59e1f7-f0 ovn-installed in OVS
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.239 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:63:14 10.100.0.12'], port_security=['fa:16:3e:c3:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e3f0007-e379-4ef5-bc52-5669e937e826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9099fb32-6839-4ebe-bdc3-ec67cc06d180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f59e1f7-f07c-48a1-82b4-b6a563a7130a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.241 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a in datapath 526ae63c-3640-4e70-a308-56e7a67e4cf2 unbound from our chassis
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.243 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 526ae63c-3640-4e70-a308-56e7a67e4cf2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.244 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50415736-f589-4fb8-a320-bc1d6daf85f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.244 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 namespace which is not needed anymore
Feb 25 12:53:26 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Feb 25 12:53:26 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 15.819s CPU time.
Feb 25 12:53:26 compute-0 systemd-machined[210048]: Machine qemu-159-instance-0000007f terminated.
Feb 25 12:53:26 compute-0 ceph-mon[76335]: pgmap v2185: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : haproxy version is 2.8.14-c23fe91
Feb 25 12:53:26 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : path to executable is /usr/sbin/haproxy
Feb 25 12:53:26 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [WARNING]  (360023) : Exiting Master process...
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.398 244018 INFO nova.virt.libvirt.driver [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance destroyed successfully.
Feb 25 12:53:26 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [ALERT]    (360023) : Current worker (360025) exited with code 143 (Terminated)
Feb 25 12:53:26 compute-0 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [WARNING]  (360023) : All workers exited. Exiting... (0)
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.399 244018 DEBUG nova.objects.instance [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:26 compute-0 systemd[1]: libpod-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope: Deactivated successfully.
Feb 25 12:53:26 compute-0 podman[362227]: 2026-02-25 12:53:26.40820487 +0000 UTC m=+0.058551574 container died ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.417 244018 DEBUG nova.virt.libvirt.vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.417 244018 DEBUG nova.network.os_vif_util [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.418 244018 DEBUG nova.network.os_vif_util [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.418 244018 DEBUG os_vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.421 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f59e1f7-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.428 244018 INFO os_vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0')
Feb 25 12:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552-userdata-shm.mount: Deactivated successfully.
Feb 25 12:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f39925cd20db3978a9556b803bb9da1be3764b7cb84297a2c2d11b4e6c0059d-merged.mount: Deactivated successfully.
Feb 25 12:53:26 compute-0 podman[362227]: 2026-02-25 12:53:26.465629792 +0000 UTC m=+0.115976486 container cleanup ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 12:53:26 compute-0 systemd[1]: libpod-conmon-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope: Deactivated successfully.
Feb 25 12:53:26 compute-0 podman[362283]: 2026-02-25 12:53:26.552243877 +0000 UTC m=+0.056353932 container remove ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.559 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[878c39f5-a03b-4284-b262-890e2992a8b3]: (4, ('Wed Feb 25 12:53:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 (ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552)\necfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552\nWed Feb 25 12:53:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 (ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552)\necfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
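The privsep reply above carries the stdout of the wrapper script that stops and then deletes the per-network haproxy container. The same two podman operations, sketched with subprocess (container name from the log; the real agent goes through a privileged helper rather than calling podman directly):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)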
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6d9b48-3e15-4e73-8cbf-16756de1229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.563 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap526ae63c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:26 compute-0 kernel: tap526ae63c-30: left promiscuous mode
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47b5f27b-39d9-4203-a8c8-f765c8d1c753]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b7e61e-8faa-4b2a-b1c4-9a5f2f112afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.589 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8e1bb8-401a-4d09-a076-92f39ccd3c68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.602 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b96d3336-412e-4682-9276-6a310b613b3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588362, 'reachable_time': 24275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362301, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d526ae63c\x2d3640\x2d4e70\x2da308\x2d56e7a67e4cf2.mount: Deactivated successfully.
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.605 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:53:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.605 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[68fa93cd-a89b-4d6f-bed3-0bdb23fbb130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 253 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.696 244018 INFO nova.virt.libvirt.driver [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deleting instance files /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e_del
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.697 244018 INFO nova.virt.libvirt.driver [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deletion of /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e_del complete
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.750 244018 DEBUG nova.compute.manager [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.751 244018 DEBUG nova.compute.manager [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.752 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.752 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.753 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.773 244018 INFO nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.773 244018 DEBUG oslo.service.loopingcall [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.774 244018 DEBUG nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:53:26 compute-0 nova_compute[244014]: 2026-02-25 12:53:26.775 244018 DEBUG nova.network.neutron [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:53:27 compute-0 nova_compute[244014]: 2026-02-25 12:53:27.923 244018 DEBUG nova.network.neutron [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:27 compute-0 nova_compute[244014]: 2026-02-25 12:53:27.947 244018 INFO nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 1.17 seconds to deallocate network for instance.
Feb 25 12:53:27 compute-0 nova_compute[244014]: 2026-02-25 12:53:27.992 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:27 compute-0 nova_compute[244014]: 2026-02-25 12:53:27.993 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.001 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.001 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.002 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.003 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.003 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.004 244018 WARNING nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state deleted and task_state None.
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.004 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.005 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.007 244018 WARNING nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state deleted and task_state None.
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.074 244018 DEBUG oslo_concurrency.processutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:28 compute-0 ceph-mon[76335]: pgmap v2186: 305 pgs: 305 active+clean; 253 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.350 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.369 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.370 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance network_info: |[{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.371 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.371 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.379 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start _get_guest_xml network_info=[{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.386 244018 WARNING nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.397 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.398 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.402 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.403 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.404 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.404 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.405 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.406 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.407 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.407 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.408 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.408 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.409 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.410 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.410 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.411 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.417 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.458 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.460 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.487 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940924392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.655 244018 DEBUG oslo_concurrency.processutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.663 244018 DEBUG nova.compute.provider_tree [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.681 244018 DEBUG nova.scheduler.client.report [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.709 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.734 244018 INFO nova.scheduler.client.report [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 809da994-7551-4f52-8920-b0dfaa2ef73e
Feb 25 12:53:28 compute-0 nova_compute[244014]: 2026-02-25 12:53:28.793 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:53:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461866324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.022 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.053 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.057 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2940924392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2461866324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:53:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3696651289' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.610 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.612 244018 DEBUG nova.virt.libvirt.vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.613 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.615 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.617 244018 DEBUG nova.objects.instance [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.655 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <uuid>03d948e4-e7cb-45ea-bf63-7ab363b4d46e</uuid>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <name>instance-00000082</name>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1355608326</nova:name>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:53:28</nova:creationTime>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <nova:port uuid="f66ec19f-119c-4e9b-ba57-c8e2d850f5dc">
Feb 25 12:53:29 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fed9:beae" ipVersion="6"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed9:beae" ipVersion="6"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="serial">03d948e4-e7cb-45ea-bf63-7ab363b4d46e</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="uuid">03d948e4-e7cb-45ea-bf63-7ab363b4d46e</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk">
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config">
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:53:29 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:d9:be:ae"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <target dev="tapf66ec19f-11"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/console.log" append="off"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:53:29 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:53:29 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:29 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:29 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:29 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
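The block ending above is the guest XML that _get_guest_xml logged for this instance: an RBD-backed vda disk, an RBD config drive on sda, a virtio VIF on br-int, a serial console, VNC, and a q35 PCIe topology. A minimal sketch (stdlib only, not nova code; "domain.xml" is a hypothetical local copy of the dump) for cross-checking each disk's target device against its RBD image:

    import xml.etree.ElementTree as ET

    # domain.xml: hypothetical copy of the <domain> dump logged above
    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        dev = target.get("dev") if target is not None else "-"
        image = source.get("name") if source is not None else "-"
        # e.g. "cdrom sda vms/..._disk.config" for the config drive above
        print(disk.get("device"), dev, image)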
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.656 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Preparing to wait for external event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.656 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.657 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.657 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.658 244018 DEBUG nova.virt.libvirt.vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.659 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.660 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.661 244018 DEBUG os_vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
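nova_to_osvif_vif() converts Nova's network-info dict into an os-vif object, and os_vif.plug() hands that object to the 'ovs' plugin. A minimal sketch of that public API, assuming os-vif is importable and OVS is reachable on the host; field values are copied from the converted object logged above, and fields not needed for the sketch are omitted:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the registered plugins, including 'ovs'
    my_vif = vif.VIFOpenVSwitch(
        id='f66ec19f-119c-4e9b-ba57-c8e2d850f5dc',
        address='fa:16:3e:d9:be:ae',
        vif_name='tapf66ec19f-11',
        bridge_name='br-int',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='03d948e4-e7cb-45ea-bf63-7ab363b4d46e',
        name='tempest-TestGettingAddress-server-1355608326')
    os_vif.plug(my_vif, inst)  # produces the Plugging/Successfully plugged pair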
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.662 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.663 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.667 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66ec19f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.667 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66ec19f-11, col_values=(('external_ids', {'iface-id': 'f66ec19f-119c-4e9b-ba57-c8e2d850f5dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:be:ae', 'vm-uuid': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
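This transaction (AddPortCommand followed by DbSetCommand on the Interface row) is what ovsdbapp commits to the local OVS database; the external_ids it sets are what lets ovn-controller match the tap device to its logical port. An assumed command-line equivalent (not nova code), sketched with subprocess and the values from the log:

    import subprocess

    # one ovs-vsctl transaction: add the port, then tag the interface
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapf66ec19f-11",
        "--", "set", "Interface", "tapf66ec19f-11",
        "external_ids:iface-id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d9:be:ae",
        "external_ids:vm-uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e",
    ], check=True)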
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:29 compute-0 NetworkManager[49836]: <info>  [1772024009.6708] manager: (tapf66ec19f-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.678 244018 INFO os_vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11')
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.765 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.766 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.766 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:d9:be:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.767 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Using config drive
Feb 25 12:53:29 compute-0 nova_compute[244014]: 2026-02-25 12:53:29.794 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.080 244018 DEBUG nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-deleted-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.081 244018 INFO nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Neutron deleted interface 4f59e1f7-f07c-48a1-82b4-b6a563a7130a; detaching it from the instance and deleting it from the info cache
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.081 244018 DEBUG nova.network.neutron [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.086 244018 DEBUG nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Detach interface failed, port_id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a, reason: Instance 809da994-7551-4f52-8920-b0dfaa2ef73e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.256 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating config drive at /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.262 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv4jek6l8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:30 compute-0 ceph-mon[76335]: pgmap v2187: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 25 12:53:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3696651289' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.409 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv4jek6l8" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.448 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.453 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.612 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.613 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.637 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.644 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.645 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deleting local config drive /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config because it was imported into RBD.
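The config-drive sequence above is: stage the metadata tree, build an ISO9660 volume labelled config-2 with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config (matching the cdrom source in the domain XML), then delete the local ISO. A shell-level sketch of the same flow, with paths and flags copied from the two logged commands:

    import os
    import subprocess

    src = "/tmp/tmpv4jek6l8"  # metadata tree nova staged (from the log)
    iso = ("/var/lib/nova/instances/"
           "03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config")

    subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    src], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)  # the RBD copy is now authoritative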
Feb 25 12:53:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 25 12:53:30 compute-0 kernel: tapf66ec19f-11: entered promiscuous mode
Feb 25 12:53:30 compute-0 NetworkManager[49836]: <info>  [1772024010.7067] manager: (tapf66ec19f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/572)
Feb 25 12:53:30 compute-0 ovn_controller[147040]: 2026-02-25T12:53:30Z|01384|binding|INFO|Claiming lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for this chassis.
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:30 compute-0 ovn_controller[147040]: 2026-02-25T12:53:30Z|01385|binding|INFO|f66ec19f-119c-4e9b-ba57-c8e2d850f5dc: Claiming fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.718 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], port_security=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fed9:beae/64 2001:db8::f816:3eff:fed9:beae/64', 'neutron:device_id': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.720 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd bound to our chassis
Feb 25 12:53:30 compute-0 ovn_controller[147040]: 2026-02-25T12:53:30Z|01386|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc ovn-installed in OVS
Feb 25 12:53:30 compute-0 ovn_controller[147040]: 2026-02-25T12:53:30Z|01387|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc up in Southbound
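ovn-controller has now claimed the logical port for this chassis, marked it ovn-installed in OVS, and set it up in the Southbound DB; Neutron turns that into the network-vif-plugged event Nova is waiting for. An assumed operator-side check (not agent code, and requiring access to the Southbound database) that the binding landed:

    import subprocess

    # list the Port_Binding row ovn-controller just claimed
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    "logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc"],
                   check=True)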
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.723 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.735 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c115ef29-cca5-40bb-a517-f55b416b149c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.736 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88562c34-21 in ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.739 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88562c34-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6babd599-0f28-4be9-a6b3-66a9c02ba0ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbaa2a2-e735-4f7b-9f6c-f9f502d2840e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 systemd-machined[210048]: New machine qemu-162-instance-00000082.
Feb 25 12:53:30 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.757 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ad148f8c-4968-49d5-b76f-79bc483fe068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 systemd-udevd[362463]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.776 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74863abf-c67a-47fa-9fed-dabba18d96de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 NetworkManager[49836]: <info>  [1772024010.7867] device (tapf66ec19f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:53:30 compute-0 NetworkManager[49836]: <info>  [1772024010.7876] device (tapf66ec19f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.817 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d0817f5b-7e1d-49f1-92cf-5abb0dfb73f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d385ffab-a571-41fa-a357-76698d5ba219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 NetworkManager[49836]: <info>  [1772024010.8265] manager: (tap88562c34-20): new Veth device (/org/freedesktop/NetworkManager/Devices/573)
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.858 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[00abc008-0e88-48a9-8220-51dc836e94ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.863 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[346bb885-c0af-4e4b-9684-a0d6fea94f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.884 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:30 compute-0 NetworkManager[49836]: <info>  [1772024010.8957] device (tap88562c34-20): carrier: link connected
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.903 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba8c267-b190-42bd-9136-0f61097faab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:53:30 compute-0 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93e65cd8-5eb6-4ab7-b348-0df2b776334f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362494, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.937 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[af22e56c-ef48-4b95-8381-bad17cedaadc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:5e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598029, 'tstamp': 598029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362496, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3218c053-a9fd-4561-834e-870f813f97a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362497, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:30 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f76e60a-4687-4ae7-9faf-140fff7071e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:53:31
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fba0f98-dbf4-4229-80dd-8eb668319222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:31 compute-0 NetworkManager[49836]: <info>  [1772024011.0331] manager: (tap88562c34-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 kernel: tap88562c34-20: entered promiscuous mode
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.037 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:53:31 compute-0 ovn_controller[147040]: 2026-02-25T12:53:31Z|01388|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb22e7ef-b923-4ec8-9567-58b87b19651b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.041 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:53:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.042 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'env', 'PROCESS_TAG=haproxy-88562c34-222a-439a-b444-9e6f8a6d70cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88562c34-222a-439a-b444-9e6f8a6d70cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
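The rendered haproxy config binds 169.254.169.254:80 inside the ovnmeta- namespace and forwards requests to the agent's Unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve the caller's network. An assumed smoke test (not part of the agent) once the proxy is up:

    import subprocess

    # exercise the metadata bind from inside the datapath's namespace
    subprocess.run(["ip", "netns", "exec",
                    "ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd",
                    "curl", "-s", "http://169.254.169.254/openstack"],
                   check=True)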
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.166 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023996.165597, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.167 244018 INFO nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Stopped (Lifecycle Event)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.191 244018 DEBUG nova.compute.manager [None req-78ed1a19-63a7-4f08-b0c7-bdcd0ceb2869 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.346 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024011.3457973, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.347 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Started (Lifecycle Event)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.373 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024011.3461757, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.374 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Paused (Lifecycle Event)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.389 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366131965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.395 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.408 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.422 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.481 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.481 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:53:31 compute-0 podman[362593]: 2026-02-25 12:53:31.498191623 +0000 UTC m=+0.054468019 container create fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:53:31 compute-0 systemd[1]: Started libpod-conmon-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope.
Feb 25 12:53:31 compute-0 podman[362593]: 2026-02-25 12:53:31.465406407 +0000 UTC m=+0.021682843 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:53:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a865f5da51854f8f958183be1ede03192965b68e57f97054deb9abb51f58a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:31 compute-0 podman[362593]: 2026-02-25 12:53:31.606589683 +0000 UTC m=+0.162866059 container init fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:31 compute-0 podman[362593]: 2026-02-25 12:53:31.612774898 +0000 UTC m=+0.169051284 container start fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:53:31 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : New worker (362615) forked
Feb 25 12:53:31 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : Loading success.
Feb 25 12:53:31 compute-0 ovn_controller[147040]: 2026-02-25T12:53:31Z|01389|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 ovn_controller[147040]: 2026-02-25T12:53:31Z|01390|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.736 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.95623039081693GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.809 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.809 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:53:31 compute-0 nova_compute[244014]: 2026-02-25 12:53:31.848 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.156 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.156 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.158 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.158 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.159 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Processing event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.159 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.161 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.161 244018 WARNING nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state building and task_state spawning.
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.162 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.166 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024012.1663218, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.167 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Resumed (Lifecycle Event)
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.171 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.174 244018 INFO nova.virt.libvirt.driver [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance spawned successfully.
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.175 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.193 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.202 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.202 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.268 244018 INFO nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 9.29 seconds to spawn the instance on the hypervisor.
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.268 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.332 244018 INFO nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 10.42 seconds to build instance.
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.348 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:32 compute-0 ceph-mon[76335]: pgmap v2188: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 25 12:53:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1366131965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944912620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.400 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.405 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.423 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.450 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:53:32 compute-0 nova_compute[244014]: 2026-02-25 12:53:32.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 12:53:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2944912620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:33 compute-0 nova_compute[244014]: 2026-02-25 12:53:33.442 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:33 compute-0 nova_compute[244014]: 2026-02-25 12:53:33.443 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:53:33 compute-0 nova_compute[244014]: 2026-02-25 12:53:33.490 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:53:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.681339) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013681361, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1869, "num_deletes": 256, "total_data_size": 3052952, "memory_usage": 3093928, "flush_reason": "Manual Compaction"}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013694852, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2966165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45036, "largest_seqno": 46904, "table_properties": {"data_size": 2957724, "index_size": 5127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17453, "raw_average_key_size": 19, "raw_value_size": 2940788, "raw_average_value_size": 3345, "num_data_blocks": 228, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023826, "oldest_key_time": 1772023826, "file_creation_time": 1772024013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 13543 microseconds, and 4297 cpu microseconds.
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.694881) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2966165 bytes OK
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.694894) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696366) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696376) EVENT_LOG_v1 {"time_micros": 1772024013696373, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696389) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3044965, prev total WAL file size 3044965, number of live WAL files 2.
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373633' seq:72057594037927935, type:22 .. '6C6F676D0032303135' seq:0, type:0; will stop at (end)
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2896KB)], [104(8169KB)]
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013696946, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11332163, "oldest_snapshot_seqno": -1}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6959 keys, 11211013 bytes, temperature: kUnknown
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013822051, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11211013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11162410, "index_size": 30123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 179834, "raw_average_key_size": 25, "raw_value_size": 11036082, "raw_average_value_size": 1585, "num_data_blocks": 1192, "num_entries": 6959, "num_filter_entries": 6959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.822299) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11211013 bytes
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.823755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.5 rd, 89.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7483, records dropped: 524 output_compression: NoCompression
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.823774) EVENT_LOG_v1 {"time_micros": 1772024013823764, "job": 62, "event": "compaction_finished", "compaction_time_micros": 125175, "compaction_time_cpu_micros": 51333, "output_level": 6, "num_output_files": 1, "total_output_size": 11211013, "num_input_records": 7483, "num_output_records": 6959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013824175, "job": 62, "event": "table_file_deletion", "file_number": 106}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013824831, "job": 62, "event": "table_file_deletion", "file_number": 104}
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:33 compute-0 nova_compute[244014]: 2026-02-25 12:53:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:34 compute-0 ceph-mon[76335]: pgmap v2189: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 12:53:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 12:53:34 compute-0 nova_compute[244014]: 2026-02-25 12:53:34.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:35 compute-0 nova_compute[244014]: 2026-02-25 12:53:35.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:36 compute-0 ceph-mon[76335]: pgmap v2190: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 12:53:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 25 12:53:36 compute-0 nova_compute[244014]: 2026-02-25 12:53:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:37 compute-0 NetworkManager[49836]: <info>  [1772024017.2830] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Feb 25 12:53:37 compute-0 NetworkManager[49836]: <info>  [1772024017.2844] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Feb 25 12:53:37 compute-0 ovn_controller[147040]: 2026-02-25T12:53:37Z|01391|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:37 compute-0 podman[362647]: 2026-02-25 12:53:37.733551938 +0000 UTC m=+0.074561266 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 25 12:53:37 compute-0 podman[362648]: 2026-02-25 12:53:37.761577149 +0000 UTC m=+0.099418787 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.842 244018 DEBUG nova.compute.manager [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG nova.compute.manager [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.844 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:53:37 compute-0 nova_compute[244014]: 2026-02-25 12:53:37.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:38 compute-0 ceph-mon[76335]: pgmap v2191: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 25 12:53:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 782 KiB/s wr, 114 op/s
Feb 25 12:53:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:39 compute-0 nova_compute[244014]: 2026-02-25 12:53:39.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:39 compute-0 nova_compute[244014]: 2026-02-25 12:53:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:39 compute-0 nova_compute[244014]: 2026-02-25 12:53:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:40 compute-0 nova_compute[244014]: 2026-02-25 12:53:40.285 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:53:40 compute-0 nova_compute[244014]: 2026-02-25 12:53:40.285 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:40 compute-0 nova_compute[244014]: 2026-02-25 12:53:40.307 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:40 compute-0 ceph-mon[76335]: pgmap v2192: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 782 KiB/s wr, 114 op/s
Feb 25 12:53:40 compute-0 nova_compute[244014]: 2026-02-25 12:53:40.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Feb 25 12:53:40 compute-0 nova_compute[244014]: 2026-02-25 12:53:40.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:41 compute-0 nova_compute[244014]: 2026-02-25 12:53:41.396 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024006.394624, 809da994-7551-4f52-8920-b0dfaa2ef73e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:53:41 compute-0 nova_compute[244014]: 2026-02-25 12:53:41.397 244018 INFO nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Stopped (Lifecycle Event)
Feb 25 12:53:41 compute-0 nova_compute[244014]: 2026-02-25 12:53:41.422 244018 DEBUG nova.compute.manager [None req-6b742b2f-5571-4647-9ed0-7fb8696efe2e - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:53:42 compute-0 ovn_controller[147040]: 2026-02-25T12:53:42Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:be:ae 10.100.0.3
Feb 25 12:53:42 compute-0 ovn_controller[147040]: 2026-02-25T12:53:42Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:be:ae 10.100.0.3
Feb 25 12:53:42 compute-0 ceph-mon[76335]: pgmap v2193: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007082478581069775 of space, bias 1.0, pg target 0.21247435743209325 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494152801502742 of space, bias 1.0, pg target 0.7482458404508227 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3911709341754778e-06 of space, bias 4.0, pg target 0.0016694051210105734 quantized to 16 (current 16)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:53:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
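The arithmetic in the pg_autoscaler lines above is recoverable: 64411926528 is the cluster capacity in bytes (the same ~60 GiB as the pgmap lines), and each "pg target" equals usage_ratio * bias * 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the 3 OSDs on this host; e.g. 7.185749983720779e-06 * 1.0 * 300 = 0.0021557... for '.mgr', and 1.3911709341754778e-06 * 4.0 * 300 = 0.0016694... for 'cephfs.cephfs.meta'. A simplified sketch of target-and-quantize (the real module applies more rules, including per-pool minimums and change thresholds, which is why every pool above stays at its current pg_num):

    import math

    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    def quantize(target, floor=1):
        # round up to a power of two, never below the floor
        return max(floor, 2 ** math.ceil(math.log2(max(target, 1e-9))))

    quantize(pg_target(7.185749983720779e-06, 1.0))  # -> 1, as logged for '.mgr'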
Feb 25 12:53:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:43 compute-0 nova_compute[244014]: 2026-02-25 12:53:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:43 compute-0 nova_compute[244014]: 2026-02-25 12:53:43.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
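_reclaim_queued_deletes is one of the compute manager's periodic tasks; reclaim_instance_interval is evidently <= 0 here (0 is nova's default), so it exits immediately, as logged. A paraphrase of the guard (a sketch, not nova's literal code; the helpers are hypothetical):

    def _reclaim_queued_deletes(conf):
        if conf.reclaim_instance_interval <= 0:
            # "CONF.reclaim_instance_interval <= 0, skipping..."
            return
        for instance in soft_deleted_older_than(conf.reclaim_instance_interval):
            destroy(instance)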
Feb 25 12:53:44 compute-0 ceph-mon[76335]: pgmap v2194: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 12:53:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 12:53:44 compute-0 nova_compute[244014]: 2026-02-25 12:53:44.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:44 compute-0 sudo[362689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:53:44 compute-0 sudo[362689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:44 compute-0 sudo[362689]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:44 compute-0 sudo[362714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 12:53:44 compute-0 sudo[362714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
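The mgr's cephadm module polls this host as ceph-admin: a `which python3` probe, then the copied cephadm binary running `ls`, which inventories the daemons here as JSON. A sketch of consuming that output (same binary path and timeout as the sudo line above; the --image flag from the logged invocation is omitted for brevity, and running this needs root):

    import json
    import subprocess

    CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
               'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b')

    def cephadm_ls():
        out = subprocess.run(['sudo', 'python3', CEPHADM, '--timeout', '895', 'ls'],
                             check=True, capture_output=True, text=True)
        return [d['name'] for d in json.loads(out.stdout)]  # e.g. ['mon.compute-0', ...]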
Feb 25 12:53:45 compute-0 podman[362783]: 2026-02-25 12:53:45.338818075 +0000 UTC m=+0.074495945 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:53:45 compute-0 nova_compute[244014]: 2026-02-25 12:53:45.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:45 compute-0 podman[362783]: 2026-02-25 12:53:45.451140746 +0000 UTC m=+0.186818626 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:53:46 compute-0 sudo[362714]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:53:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:53:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:46 compute-0 sudo[362973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:53:46 compute-0 sudo[362973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:46 compute-0 sudo[362973]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:46 compute-0 ceph-mon[76335]: pgmap v2195: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 12:53:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:46 compute-0 sudo[362998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:53:46 compute-0 sudo[362998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 221 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Feb 25 12:53:47 compute-0 sudo[362998]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
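Before attempting OSD creation, the cephadm module gathers its prerequisites through the mon commands audited above: a minimal client config ("config generate-minimal-conf"), the admin and bootstrap-osd keys ("auth get"), and any destroyed OSD ids it could recycle ("osd tree destroyed"). The last query is straightforward to reproduce from a shell; a sketch assuming the ceph CLI and an admin keyring on this host:

    import json
    import subprocess

    def destroyed_osds():
        out = subprocess.run(
            ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
            check=True, capture_output=True, text=True)
        tree = json.loads(out.stdout)
        return [n['id'] for n in tree.get('nodes', []) if n.get('type') == 'osd']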
Feb 25 12:53:47 compute-0 sudo[363054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:53:47 compute-0 sudo[363054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:47 compute-0 sudo[363054]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:47 compute-0 sudo[363079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:53:47 compute-0 sudo[363079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
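This is the OSD-creation path: cephadm wraps ceph-volume in a disposable container, passing the service spec name via CEPH_VOLUME_OSDSPEC_AFFINITY and piping the config/keyring on stdin ("--config-json -"). The argument list below is rebuilt verbatim from the sudo line above; config_json itself does not appear in the log:

    import subprocess

    CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
               'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b')
    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    cmd = ['sudo', 'python3', CEPHADM,
           '--env', 'CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group',
           '--image', IMAGE, '--timeout', '895',
           'ceph-volume', '--fsid', '8ac33163-6221-5d58-9a39-8b6933fe7762',
           '--config-json', '-', '--',
           'lvm', 'batch', '--no-auto',
           '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
           '/dev/ceph_vg2/ceph_lv2',
           '--objectstore', 'bluestore', '--yes', '--no-systemd']
    # subprocess.run(cmd, input=config_json, text=True, check=True)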
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.52161728 +0000 UTC m=+0.065514151 container create fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:53:47 compute-0 systemd[1]: Started libpod-conmon-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope.
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.490952604 +0000 UTC m=+0.034849535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.607190687 +0000 UTC m=+0.151087598 container init fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.615676596 +0000 UTC m=+0.159573467 container start fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 12:53:47 compute-0 nice_morse[363133]: 167 167
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.619950987 +0000 UTC m=+0.163847858 container attach fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:53:47 compute-0 systemd[1]: libpod-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope: Deactivated successfully.
Feb 25 12:53:47 compute-0 conmon[363133]: conmon fbe80fa4290d21af9e24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope/container/memory.events
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.624631019 +0000 UTC m=+0.168527900 container died fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 12:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce7cf2ceb861dbc3c7788437e2e298f0d62aefa913b557b6f5d486727fbb2c20-merged.mount: Deactivated successfully.
Feb 25 12:53:47 compute-0 podman[363116]: 2026-02-25 12:53:47.673663154 +0000 UTC m=+0.217560025 container remove fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:53:47 compute-0 systemd[1]: libpod-conmon-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope: Deactivated successfully.
Feb 25 12:53:47 compute-0 podman[363156]: 2026-02-25 12:53:47.869840473 +0000 UTC m=+0.060887430 container create baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:53:47 compute-0 nova_compute[244014]: 2026-02-25 12:53:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:53:47 compute-0 systemd[1]: Started libpod-conmon-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope.
Feb 25 12:53:47 compute-0 podman[363156]: 2026-02-25 12:53:47.845010552 +0000 UTC m=+0.036057579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:47 compute-0 podman[363156]: 2026-02-25 12:53:47.989940114 +0000 UTC m=+0.180987051 container init baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:53:48 compute-0 podman[363156]: 2026-02-25 12:53:48.003627061 +0000 UTC m=+0.194674028 container start baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:53:48 compute-0 podman[363156]: 2026-02-25 12:53:48.007944273 +0000 UTC m=+0.198991240 container attach baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 12:53:48 compute-0 ceph-mon[76335]: pgmap v2196: 305 pgs: 305 active+clean; 221 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Feb 25 12:53:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:53:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:53:48 compute-0 nifty_hopper[363172]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:53:48 compute-0 nifty_hopper[363172]: --> All data devices are unavailable
Feb 25 12:53:48 compute-0 systemd[1]: libpod-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope: Deactivated successfully.
Feb 25 12:53:48 compute-0 podman[363194]: 2026-02-25 12:53:48.563120838 +0000 UTC m=+0.041677248 container died baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad-merged.mount: Deactivated successfully.
Feb 25 12:53:48 compute-0 podman[363194]: 2026-02-25 12:53:48.653402627 +0000 UTC m=+0.131958997 container remove baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:53:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:53:48 compute-0 systemd[1]: libpod-conmon-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope: Deactivated successfully.
Feb 25 12:53:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:48 compute-0 sudo[363079]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:48 compute-0 sudo[363209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:53:48 compute-0 sudo[363209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:48 compute-0 sudo[363209]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:48 compute-0 sudo[363234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:53:48 compute-0 sudo[363234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.157851511 +0000 UTC m=+0.056754493 container create 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 12:53:49 compute-0 systemd[1]: Started libpod-conmon-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope.
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.133907545 +0000 UTC m=+0.032810577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.245712482 +0000 UTC m=+0.144615504 container init 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.254747447 +0000 UTC m=+0.153650439 container start 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.258777511 +0000 UTC m=+0.157680493 container attach 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 12:53:49 compute-0 systemd[1]: libpod-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope: Deactivated successfully.
Feb 25 12:53:49 compute-0 competent_wright[363291]: 167 167
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.260943632 +0000 UTC m=+0.159846614 container died 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd5da45afddc3415a25f7c8926ba9ba0a2b3f03a320414d48a7b91325d44858c-merged.mount: Deactivated successfully.
Feb 25 12:53:49 compute-0 podman[363274]: 2026-02-25 12:53:49.308936808 +0000 UTC m=+0.207839790 container remove 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:53:49 compute-0 systemd[1]: libpod-conmon-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope: Deactivated successfully.
Feb 25 12:53:49 compute-0 podman[363313]: 2026-02-25 12:53:49.51689708 +0000 UTC m=+0.061318643 container create 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.531 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.531 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:49 compute-0 systemd[1]: Started libpod-conmon-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope.
Feb 25 12:53:49 compute-0 podman[363313]: 2026-02-25 12:53:49.492368187 +0000 UTC m=+0.036789810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:49 compute-0 podman[363313]: 2026-02-25 12:53:49.630118817 +0000 UTC m=+0.174540400 container init 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 12:53:49 compute-0 podman[363313]: 2026-02-25 12:53:49.639994905 +0000 UTC m=+0.184416468 container start 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:53:49 compute-0 podman[363313]: 2026-02-25 12:53:49.643772172 +0000 UTC m=+0.188193725 container attach 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.642 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.746 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.747 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.758 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.758 244018 INFO nova.compute.claims [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Claim successful on node compute-0.ctlplane.example.com
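The claim above runs under the "compute_resources" lock acquired two lines earlier; the hardware.py DEBUG line means no NUMA fitting was attempted because only one side supplied a NUMA topology. A paraphrase of that guard (a sketch of the logged behavior; fit() is a hypothetical stand-in for nova's solver):

    def numa_fit_instance_to_host(host_topology, instance_topology):
        # "Require both a host and instance NUMA topology to fit
        # instance on host."
        if not (host_topology and instance_topology):
            return None
        return fit(host_topology, instance_topology)  # hypothetical solver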
Feb 25 12:53:49 compute-0 nova_compute[244014]: 2026-02-25 12:53:49.896 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
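As part of the claim, nova shells out exactly as logged, presumably to size the Ceph-backed storage: "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf". A sketch of running and reading that command:

    import json
    import subprocess

    def ceph_df(user='openstack', conf='/etc/ceph/ceph.conf'):
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf],
            check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    # e.g. ceph_df()['stats']['total_avail_bytes'] for free capacity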
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]: {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     "0": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "devices": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "/dev/loop3"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             ],
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_name": "ceph_lv0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_size": "21470642176",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "name": "ceph_lv0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "tags": {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_name": "ceph",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.crush_device_class": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.encrypted": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.objectstore": "bluestore",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_id": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.vdo": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.with_tpm": "0"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             },
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "vg_name": "ceph_vg0"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         }
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     ],
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     "1": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "devices": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "/dev/loop4"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             ],
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_name": "ceph_lv1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_size": "21470642176",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "name": "ceph_lv1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "tags": {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_name": "ceph",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.crush_device_class": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.encrypted": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.objectstore": "bluestore",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_id": "1",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.vdo": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.with_tpm": "0"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             },
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "vg_name": "ceph_vg1"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         }
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     ],
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     "2": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "devices": [
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "/dev/loop5"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             ],
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_name": "ceph_lv2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_size": "21470642176",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "name": "ceph_lv2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "tags": {
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.cluster_name": "ceph",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.crush_device_class": "",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.encrypted": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.objectstore": "bluestore",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osd_id": "2",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.vdo": "0",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:                 "ceph.with_tpm": "0"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             },
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "type": "block",
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:             "vg_name": "ceph_vg2"
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:         }
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]:     ]
Feb 25 12:53:49 compute-0 distracted_sutherland[363329]: }
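The JSON emitted above by the one-shot distracted_sutherland container is `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes, with the OSD metadata duplicated as a structured "tags" object and as a flattened "lv_tags" string. A minimal parsing sketch, assuming the syslog prefixes have been stripped into a file named lvm_list.json (the filename and helper are illustrative, not part of ceph-volume):

    import json

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split the flattened 'ceph.key=value,...' string into a dict.

        Assumes tag values contain no commas, which holds for the
        bluestore tags shown in the log above."""
        return dict(item.split("=", 1) for item in lv_tags.split(",") if "=" in item)

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, volumes in sorted(osds.items()):
        for vol in volumes:
            tags = parse_lv_tags(vol["lv_tags"])
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')} "
                  f"objectstore={tags.get('ceph.objectstore')}")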
Feb 25 12:53:50 compute-0 systemd[1]: libpod-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope: Deactivated successfully.
Feb 25 12:53:50 compute-0 podman[363313]: 2026-02-25 12:53:50.007431391 +0000 UTC m=+0.551852914 container died 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 12:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42-merged.mount: Deactivated successfully.
Feb 25 12:53:50 compute-0 podman[363313]: 2026-02-25 12:53:50.047843282 +0000 UTC m=+0.592264795 container remove 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 12:53:50 compute-0 systemd[1]: libpod-conmon-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope: Deactivated successfully.
Feb 25 12:53:50 compute-0 sudo[363234]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:50 compute-0 sudo[363369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:53:50 compute-0 sudo[363369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:50 compute-0 sudo[363369]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:50 compute-0 sudo[363394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:53:50 compute-0 sudo[363394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
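The sudo line above shows how the cephadm mgr module probes the host: it drops a checksummed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs ceph-volume inside a throwaway container. Reproducing that probe by hand is a plain subprocess call; a sketch using the exact arguments from the logged command (the long cephadm filename is the checksummed copy on this host, not something to hard-code in real tooling):

    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Same invocation as the logged sudo command: list raw devices as JSON.
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))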
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:50 compute-0 ceph-mon[76335]: pgmap v2197: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 12:53:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114764317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
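The audit entry above records client.openstack dispatching {"prefix": "df", "format": "json"} to the monitor, which the `ceph df` subprocess logged just below drives. The same monitor command can be issued directly through librados; a sketch (the printed keys come from the `ceph df` JSON schema):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        # mon_command takes a JSON-encoded command and returns (rc, outbuf, outs).
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        stats = json.loads(out)["stats"]
        print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])
    finally:
        cluster.shutdown()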
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.491 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.499 244018 DEBUG nova.compute.provider_tree [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.527 244018 DEBUG nova.scheduler.client.report [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
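Placement derives schedulable capacity from each inventory record as (total - reserved) * allocation_ratio, so the figures logged above work out to 32 schedulable vCPUs, 7167 MB of RAM, and 52.2 GB of disk. A one-screen check of that arithmetic:

    # Capacity formula used by the placement service:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2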
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.540321838 +0000 UTC m=+0.043225272 container create 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.556 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.556 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:53:50 compute-0 systemd[1]: Started libpod-conmon-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope.
Feb 25 12:53:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.602 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.602807312 +0000 UTC m=+0.105710786 container init 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.602 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.609628135 +0000 UTC m=+0.112531599 container start 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:53:50 compute-0 lucid_dirac[363447]: 167 167
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.613948817 +0000 UTC m=+0.116852291 container attach 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:53:50 compute-0 systemd[1]: libpod-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope: Deactivated successfully.
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.614380219 +0000 UTC m=+0.117283693 container died 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.528092033 +0000 UTC m=+0.030995487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.627 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbdf629afc534b1d0eb7a6af03a8a0089fcc8f5df688ebcbb496d8151ee3db20-merged.mount: Deactivated successfully.
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.649 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:53:50 compute-0 podman[363431]: 2026-02-25 12:53:50.657356753 +0000 UTC m=+0.160260227 container remove 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:53:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:53:50 compute-0 systemd[1]: libpod-conmon-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope: Deactivated successfully.
Feb 25 12:53:50 compute-0 podman[363471]: 2026-02-25 12:53:50.808368617 +0000 UTC m=+0.046908196 container create ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 25 12:53:50 compute-0 systemd[1]: Started libpod-conmon-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope.
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.871 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.875 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.876 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating image(s)
Feb 25 12:53:50 compute-0 podman[363471]: 2026-02-25 12:53:50.782531547 +0000 UTC m=+0.021071176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:53:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
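The kernel's "supports timestamps until 2038 (0x7fffffff)" notices on these bind mounts are informational, not errors: they flag the classic 32-bit time_t ceiling (typically XFS without the bigtime feature). The limit being warned about is 2^31 - 1 seconds after the epoch:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after 1970-01-01T00:00:00Z:
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00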
Feb 25 12:53:50 compute-0 podman[363471]: 2026-02-25 12:53:50.913614438 +0000 UTC m=+0.152154017 container init ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.923 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:50 compute-0 podman[363471]: 2026-02-25 12:53:50.928033216 +0000 UTC m=+0.166572765 container start ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:53:50 compute-0 podman[363471]: 2026-02-25 12:53:50.931085482 +0000 UTC m=+0.169625031 container attach ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.958 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.986 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:50 compute-0 nova_compute[244014]: 2026-02-25 12:53:50.990 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.053 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
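The qemu-img probe above is deliberately wrapped in oslo.concurrency's prlimit helper, which caps the child's address space at 1 GiB and CPU time at 30 s before exec'ing it, so a hostile image header cannot exhaust the compute host. A sketch mirroring the logged command and reading two fields from its JSON output:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        text=True,
    )
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # e.g. "qcow2" and size in bytes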
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.054 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.055 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.056 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.084 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.091 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.190 244018 DEBUG nova.policy [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.364 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.437 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
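Import and resize happen in two steps here: the CLI `rbd import` above copies the cached base image into the vms pool, then nova's rbd_utils grows it to the flavor's 1 GiB root disk. The resize half, sketched with the python-rbd bindings (pool, image name, and size taken from the log lines; error handling elided):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk") as image:
                image.resize(1073741824)  # 1 GiB, matching the log line above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()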
Feb 25 12:53:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4114764317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:51 compute-0 ceph-mon[76335]: pgmap v2198: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.506 244018 DEBUG nova.objects.instance [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.522 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.523 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Ensure instance console log exists: /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.523 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.524 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:51 compute-0 nova_compute[244014]: 2026-02-25 12:53:51.524 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:51 compute-0 lvm[363735]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:53:51 compute-0 lvm[363735]: VG ceph_vg1 finished
Feb 25 12:53:51 compute-0 lvm[363732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:53:51 compute-0 lvm[363732]: VG ceph_vg0 finished
Feb 25 12:53:51 compute-0 lvm[363737]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:53:51 compute-0 lvm[363737]: VG ceph_vg2 finished
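These lvm messages are event-based autoactivation: each fires once the last PV of a volume group comes online, and here every ceph_vg* sits on a single loop device, so one PV completes each group. To confirm the groups really are complete, vgs has a JSON report mode; a quick sketch (the structure of the report output is an assumption based on lvm2's JSON format):

    import json
    import subprocess

    report = json.loads(subprocess.check_output(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        text=True,
    ))
    for vg in report["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])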
Feb 25 12:53:51 compute-0 nervous_chatelet[363489]: {}
Feb 25 12:53:51 compute-0 systemd[1]: libpod-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Deactivated successfully.
Feb 25 12:53:51 compute-0 systemd[1]: libpod-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Consumed 1.073s CPU time.
Feb 25 12:53:51 compute-0 podman[363471]: 2026-02-25 12:53:51.698159772 +0000 UTC m=+0.936699361 container died ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 12:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97-merged.mount: Deactivated successfully.
Feb 25 12:53:51 compute-0 podman[363471]: 2026-02-25 12:53:51.920579252 +0000 UTC m=+1.159118841 container remove ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:53:51 compute-0 systemd[1]: libpod-conmon-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Deactivated successfully.
Feb 25 12:53:51 compute-0 sudo[363394]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:53:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:53:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:52 compute-0 sudo[363752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:53:52 compute-0 sudo[363752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:53:52 compute-0 sudo[363752]: pam_unix(sudo:session): session closed for user root
Feb 25 12:53:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Feb 25 12:53:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:53:52 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Feb 25 12:53:52 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:52.999809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:53:52 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Feb 25 12:53:52 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024032999848, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 456, "num_deletes": 251, "total_data_size": 404517, "memory_usage": 413032, "flush_reason": "Manual Compaction"}
Feb 25 12:53:52 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033004100, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 389601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46905, "largest_seqno": 47360, "table_properties": {"data_size": 386901, "index_size": 736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6504, "raw_average_key_size": 19, "raw_value_size": 381548, "raw_average_value_size": 1122, "num_data_blocks": 32, "num_entries": 340, "num_filter_entries": 340, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024014, "oldest_key_time": 1772024014, "file_creation_time": 1772024032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 4339 microseconds, and 1962 cpu microseconds.
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.004147) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 389601 bytes OK
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.004165) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005791) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005810) EVENT_LOG_v1 {"time_micros": 1772024033005804, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005828) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 401746, prev total WAL file size 401746, number of live WAL files 2.
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.006209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(380KB)], [107(10MB)]
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033006247, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11600614, "oldest_snapshot_seqno": -1}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6786 keys, 9892285 bytes, temperature: kUnknown
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033056989, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 9892285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9846196, "index_size": 27993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 176935, "raw_average_key_size": 26, "raw_value_size": 9724260, "raw_average_value_size": 1432, "num_data_blocks": 1094, "num_entries": 6786, "num_filter_entries": 6786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.057294) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 9892285 bytes
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.058319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.1 rd, 194.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(55.2) write-amplify(25.4) OK, records in: 7299, records dropped: 513 output_compression: NoCompression
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.058348) EVENT_LOG_v1 {"time_micros": 1772024033058335, "job": 64, "event": "compaction_finished", "compaction_time_micros": 50852, "compaction_time_cpu_micros": 16908, "output_level": 6, "num_output_files": 1, "total_output_size": 9892285, "num_input_records": 7299, "num_output_records": 6786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033058561, "job": 64, "event": "table_file_deletion", "file_number": 109}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033060118, "job": 64, "event": "table_file_deletion", "file_number": 107}
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.006134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:53:53 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
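The mon's embedded RocksDB narrates each flush and compaction twice: human-readable lines plus machine-readable EVENT_LOG_v1 JSON. The amplification figures above check out from the JSON alone: job 64 wrote 9.4 MB from a 0.37 MB L0 input, i.e. write-amplify 9.4/0.37 ≈ 25.4, and (0.37 + 10.7 + 9.4)/0.37 ≈ 55.2 read-write-amplify. A hypothetical extractor for those events from a captured journal (the input filename is an assumption):

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 ({.*})")

    with open("compute-0-journal.log") as f:
        for line in f:
            m = EVENT_RE.search(line)
            if not m:
                continue
            event = json.loads(m.group(1))
            if event.get("event") == "compaction_finished":
                print(f"job {event['job']}: level {event['output_level']}, "
                      f"{event['total_output_size']} bytes out, "
                      f"{event['compaction_time_micros']} us")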
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.150 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Successfully updated port: 205ead5a-797e-421e-87e3-ec5dac2037d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.165 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.165 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.166 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG nova.compute.manager [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG nova.compute.manager [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:53:53 compute-0 nova_compute[244014]: 2026-02-25 12:53:53.344 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:53:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:54 compute-0 ceph-mon[76335]: pgmap v2199: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Feb 25 12:53:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 25 12:53:54 compute-0 nova_compute[244014]: 2026-02-25 12:53:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.031 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.146 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.254 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance network_info: |[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.260 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start _get_guest_xml network_info=[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
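The network_info payload nova logs three times above is ordinary JSON. A trimmed-down literal (one VIF, most fields elided) shows the shape nova walks to reach the guest's fixed address and tap device:

    import json

    raw = '''[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3",
               "address": "fa:16:3e:5f:de:be",
               "devname": "tap205ead5a-79",
               "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.13"}]}]}}]'''
    for vif in json.loads(raw):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["address"], vif["devname"], ips)
    # fa:16:3e:5f:de:be tap205ead5a-79 ['10.100.0.13']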
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.266 244018 WARNING nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.273 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.274 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.278 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
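
The two probes above are nova's cgroup checks (the log cites _has_cgroupsv1_cpu_controller and _has_cgroupsv2_cpu_controller in nova/virt/libvirt/host.py): v1 lacks a CPU controller here, v2 has one. The underlying mechanism can be sketched against the conventional cgroup mount points. A minimal, hypothetical Python sketch, assuming the standard /sys/fs/cgroup layout rather than nova's actual probing code:

    # Hedged sketch: detect a CPU controller on cgroups v1 or v2.
    # Paths are the conventional mount points; nova's own probing may differ.
    import os

    def has_cgroupsv1_cpu_controller():
        # On v1 the "cpu" controller is mounted as its own hierarchy.
        return os.path.isdir('/sys/fs/cgroup/cpu')

    def has_cgroupsv2_cpu_controller():
        # On v2 the unified hierarchy lists enabled controllers in one file.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False
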
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
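
The topology lines above (preferred 0:0:0, maxima 65536 each, exactly one candidate 1:1:1 for a single vCPU) follow from enumerating factorizations of the vCPU count under the limits. A simplified, hypothetical sketch of that search; nova's real version is _get_possible_cpu_topologies in nova/virt/hardware.py and also applies preference ordering:

    # Hedged sketch: enumerate sockets*cores*threads combinations that exactly
    # cover the vCPU count while respecting the (here effectively unlimited)
    # maxima logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log
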
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.286 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26886895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.857 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
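
The `ceph mon dump --format=json` call above is how the driver learns the monitor addresses that later appear as <host> elements in the guest XML. A rough equivalent with plain subprocess (nova actually routes this through oslo_concurrency.processutils.execute); the JSON field names below are assumptions that vary by Ceph release:

    # Hedged sketch: fetch the monitor map the way the log shows, then pull
    # out the mon addresses used for the RBD disk sources.
    import json, subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        print(mon.get('name'), mon.get('addr'))
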
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.887 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:55 compute-0 nova_compute[244014]: 2026-02-25 12:53:55.892 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:56 compute-0 ceph-mon[76335]: pgmap v2200: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 25 12:53:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/26886895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.301 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.302 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:53:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338384354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.428 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.442 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.443 244018 DEBUG nova.virt.libvirt.vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:50Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.444 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.445 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.447 244018 DEBUG nova.objects.instance [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.450 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.451 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.525 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <uuid>31c909bc-0d05-4a67-83e9-b45fb2eb35a9</uuid>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <name>instance-00000083</name>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-726997753</nova:name>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:53:55</nova:creationTime>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <nova:port uuid="205ead5a-797e-421e-87e3-ec5dac2037d3">
Feb 25 12:53:56 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <system>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="serial">31c909bc-0d05-4a67-83e9-b45fb2eb35a9</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="uuid">31c909bc-0d05-4a67-83e9-b45fb2eb35a9</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </system>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <os>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </os>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <features>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </features>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk">
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config">
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:53:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:5f:de:be"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <target dev="tap205ead5a-79"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/console.log" append="off"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <video>
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </video>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:53:56 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:53:56 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:53:56 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:53:56 compute-0 nova_compute[244014]: </domain>
Feb 25 12:53:56 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
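
Once _get_guest_xml returns the <domain> document above, the driver hands it to libvirt. A minimal sketch with the libvirt Python bindings, assuming the XML has been saved to a hypothetical domain.xml; nova's real spawn path adds event waiting, power-state bookkeeping, and cleanup on failure:

    # Hedged sketch: define and boot a domain from XML like the document above.
    import libvirt

    with open('domain.xml') as f:      # hypothetical copy of the logged XML
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)      # persist the domain definition
        dom.create()                   # start the guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
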
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.527 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Preparing to wait for external event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.527 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.528 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.528 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
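
The events-lock dance above registers the network-vif-plugged event before the VIF is plugged, so a fast Neutron callback cannot be lost. The shape of that prepare-then-wait pattern, sketched with a plain threading.Event (nova uses its own eventlet-based event objects; the 300 s below is nova's vif_plugging_timeout default):

    # Hedged sketch of the prepare-then-wait pattern visible in the log.
    import threading

    _events = {}              # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def prepare(instance_uuid, name):
        # Register before plugging so the callback cannot race the wait.
        with _lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def deliver(instance_uuid, name):
        # Called when Neutron reports the external event.
        with _lock:
            ev = _events.get((instance_uuid, name))
        if ev:
            ev.set()

    ev = prepare('31c909bc-0d05-4a67-83e9-b45fb2eb35a9',
                 'network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3')
    # ... plug the VIF and define the domain here ...
    ev.wait(timeout=300)
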
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.529 244018 DEBUG nova.virt.libvirt.vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:50Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.529 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.530 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.530 244018 DEBUG os_vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.531 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.532 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.532 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.534 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.538 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205ead5a-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.539 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205ead5a-79, col_values=(('external_ids', {'iface-id': '205ead5a-797e-421e-87e3-ec5dac2037d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:de:be', 'vm-uuid': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:56 compute-0 NetworkManager[49836]: <info>  [1772024036.5420] manager: (tap205ead5a-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.549 244018 INFO os_vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')
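
The ovsdbapp transaction above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) amounts to a single ovs-vsctl invocation. A hypothetical sketch via subprocess, assuming ovs-vsctl is on PATH; the values are copied from the logged transaction:

    # Hedged sketch: CLI equivalent of the ovsdbapp commands in the log.
    import subprocess

    port = 'tap205ead5a-79'
    subprocess.check_call([
        'ovs-vsctl',
        '--', '--may-exist', 'add-br', 'br-int',
        '--', '--may-exist', 'add-port', 'br-int', port,
        '--', 'set', 'Interface', port,
        'external_ids:iface-id=205ead5a-797e-421e-87e3-ec5dac2037d3',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:5f:de:be',
        'external_ids:vm-uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9',
    ])

The external_ids:iface-id entry is what lets OVN bind the logical port to this interface and later fire the network-vif-plugged event the manager is waiting for.
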
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.568 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.569 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.576 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.576 244018 INFO nova.compute.claims [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:53:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.732 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.733 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.734 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:5f:de:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.735 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Using config drive
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.773 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:56 compute-0 nova_compute[244014]: 2026-02-25 12:53:56.921 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2338384354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:53:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:53:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403396074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.485 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.493 244018 DEBUG nova.compute.provider_tree [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.658 244018 DEBUG nova.scheduler.client.report [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
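
The inventory payload above translates into schedulable capacity as (total - reserved) * allocation_ratio, placement's usual formula. A quick arithmetic check against the logged numbers:

    # Hedged arithmetic check on the inventory above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
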
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.765 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.766 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.855 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.856 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.904 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:53:57 compute-0 nova_compute[244014]: 2026-02-25 12:53:57.937 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:53:58 compute-0 ceph-mon[76335]: pgmap v2201: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Feb 25 12:53:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3403396074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.078 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.080 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.080 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating image(s)
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.115 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.153 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.188 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.192 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.274 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
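
The prlimit wrapper above caps the qemu-img probe at 1 GiB of address space (--as) and 30 CPU seconds (--cpu) before it parses an untrusted image. oslo.concurrency exposes this as ProcessLimits; a sketch of the same call, with the keyword names written from memory and worth verifying against the installed oslo.concurrency:

    # Hedged sketch of the prlimit-guarded probe in the log.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
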
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.275 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.276 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.277 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.313 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.318 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 945e5549-40d1-4eae-8179-84ad1d751957_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.558 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 945e5549-40d1-4eae-8179-84ad1d751957_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.646 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
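
Note: the records above are Nova's libvirt image backend populating a Ceph-backed root disk. qemu-img info (run under oslo_concurrency.prlimit to cap address space and CPU time) stats the cached base image, a per-image lock serializes concurrent fetches of the same base, rbd import uploads the flat file into the vms pool as <uuid>_disk, and the image is then resized to the flavor's disk size (1073741824 bytes here). A minimal Python sketch of that import-and-resize step, assuming the rbd CLI and the cephx user "openstack" seen in the log; this is illustrative, not Nova's actual code path:

    import json
    import subprocess

    def import_and_resize(base_path, image_name, size_bytes,
                          pool="vms", ceph_id="openstack",
                          conf="/etc/ceph/ceph.conf"):
        # Inspect the cached base image the way the log's qemu-img call does.
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", base_path]))
        if size_bytes < info["virtual-size"]:
            raise ValueError("flavor disk is smaller than the base image")
        # Upload the base file into RADOS as a format-2 RBD image.
        subprocess.check_call(
            ["rbd", "import", "--pool", pool, base_path, image_name,
             "--image-format=2", "--id", ceph_id, "--conf", conf])
        # Grow the image to the requested size (rbd sizes default to MiB).
        subprocess.check_call(
            ["rbd", "resize", "--pool", pool, "--image", image_name,
             "--size", str(size_bytes // (1024 * 1024)),
             "--id", ceph_id, "--conf", conf])
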
Feb 25 12:53:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Feb 25 12:53:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.755 244018 DEBUG nova.objects.instance [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.773 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.774 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Ensure instance console log exists: /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.775 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.775 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:53:58 compute-0 nova_compute[244014]: 2026-02-25 12:53:58.776 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.087 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating config drive at /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.094 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpui_wleo6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.129 244018 DEBUG nova.policy [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.230 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpui_wleo6" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.266 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.271 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.411 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.413 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deleting local config drive /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config because it was imported into RBD.
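
Note: the config drive follows the same pattern when Nova's images_type is rbd. mkisofs packs the metadata staged in a temporary directory into an ISO 9660 image labeled config-2 (the volume label cloud-init probes for), the ISO is imported into the vms pool as <uuid>_disk.config, and the local copy is deleted because RBD is now the source of truth. A sketch of the packing step using the exact flags from the log; on some distributions mkisofs is provided by genisoimage:

    import subprocess

    def build_config_drive(metadata_dir, out_path,
                           publisher="OpenStack Compute"):
        # Same invocation shape as the log's mkisofs call (illustrative).
        subprocess.check_call(
            ["mkisofs", "-o", out_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2",   # volume label cloud-init searches for
             metadata_dir])
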
Feb 25 12:53:59 compute-0 kernel: tap205ead5a-79: entered promiscuous mode
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.4795] manager: (tap205ead5a-79): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_controller[147040]: 2026-02-25T12:53:59Z|01392|binding|INFO|Claiming lport 205ead5a-797e-421e-87e3-ec5dac2037d3 for this chassis.
Feb 25 12:53:59 compute-0 ovn_controller[147040]: 2026-02-25T12:53:59Z|01393|binding|INFO|205ead5a-797e-421e-87e3-ec5dac2037d3: Claiming fa:16:3e:5f:de:be 10.100.0.13
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.492 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_controller[147040]: 2026-02-25T12:53:59Z|01394|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 ovn-installed in OVS
Feb 25 12:53:59 compute-0 ovn_controller[147040]: 2026-02-25T12:53:59Z|01395|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 up in Southbound
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.494 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc bound to our chassis
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.496 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b512e82a-13d2-49e3-885b-3cdce915b494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.510 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02748d96-81 in ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.512 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02748d96-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.512 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6774f4-eac7-480c-9d3a-22ba10581de2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6a298c-9a0d-420a-b5ab-6445fd40cb41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 systemd-machined[210048]: New machine qemu-163-instance-00000083.
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.526 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b7097cd3-5e26-4f0d-8e05-3e902733d6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000083.
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.538 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57782821-b4c6-49dd-bee3-3fdf645846a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 systemd-udevd[364103]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.5573] device (tap205ead5a-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.5585] device (tap205ead5a-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.566 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[02c8290b-6169-4992-91f9-1bbff01de279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.570 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[669a6b03-2d0b-42e4-99d0-c7906c291334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.5723] manager: (tap02748d96-80): new Veth device (/org/freedesktop/NetworkManager/Devices/579)
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.594 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[360bff30-e5bb-4842-868a-62824cd11dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.598 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e73e7b2-0a7c-41f5-93d5-bfd27694f9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.6169] device (tap02748d96-80): carrier: link connected
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.622 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2290689f-27ce-4676-b33b-9d34fd63d6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.635 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb5d84d-5050-44b1-81dd-a086a05199b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600902, 'reachable_time': 31482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364133, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d41fa17-d9b3-47f2-b0b9-cad99506321c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5d94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600902, 'tstamp': 600902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364134, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44289261-5588-4df5-9112-9dcedb2bac4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600902, 'reachable_time': 31482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364135, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f9030c-92ff-436c-bf5b-f6ba84ca0ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.751 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2eb025-41e7-4bee-aa57-0ff376b7ba40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.753 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02748d96-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 kernel: tap02748d96-80: entered promiscuous mode
Feb 25 12:53:59 compute-0 NetworkManager[49836]: <info>  [1772024039.7579] manager: (tap02748d96-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.762 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02748d96-80, col_values=(('external_ids', {'iface-id': 'd05bda2c-299c-4a37-881d-1ed81c75bb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_controller[147040]: 2026-02-25T12:53:59Z|01396|binding|INFO|Releasing lport d05bda2c-299c-4a37-881d-1ed81c75bb47 from this chassis (sb_readonly=0)
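
Note: these transactions wire up the metadata datapath. One end of the VETH pair (tap02748d96-81) lives inside the ovnmeta-<network> namespace; the other (tap02748d96-80) is attached to br-int with external_ids:iface-id set to the metadata port's Neutron UUID so ovn-controller can claim the lport. The three ovsdbapp commands above correspond roughly to the following ovs-vsctl sequence, wrapped in Python here for consistency; the flags are standard ovs-vsctl, while the bridge and port names are taken from this specific log:

    import subprocess

    def plug_metadata_port(bridge="br-int",
                           port="tap02748d96-80",
                           iface_id="d05bda2c-299c-4a37-881d-1ed81c75bb47"):
        # DelPortCommand(if_exists=True) ~= ovs-vsctl --if-exists del-port.
        subprocess.check_call(
            ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port])
        # AddPortCommand(may_exist=True) ~= ovs-vsctl --may-exist add-port.
        subprocess.check_call(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port])
        # iface-id is how ovn-controller matches the OVS interface to its lport.
        subprocess.check_call(
            ["ovs-vsctl", "set", "Interface", port,
             "external_ids:iface-id=" + iface_id])
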
Feb 25 12:53:59 compute-0 nova_compute[244014]: 2026-02-25 12:53:59.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.774 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.775 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[710167a8-2afd-4f20-a319-d9aa1b3a4df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.776 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:53:59 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.777 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'env', 'PROCESS_TAG=haproxy-02748d96-83c0-45be-acd6-081ad673e4bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02748d96-83c0-45be-acd6-081ad673e4bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
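
Note: the rendered haproxy configuration above is the whole metadata path in miniature. Inside the namespace it binds 169.254.169.254:80, and the backend line "server metadata /var/lib/neutron/metadata_proxy" points at a Unix socket (haproxy treats a leading "/" in a server address as a socket path), with the X-OVN-Network-ID header added so the agent behind the socket knows which network each request came from; the agent then maps the client's source IP to an instance. One way to confirm the proxy is listening, assuming the namespace name from the log and that curl is installed (a request from the namespace itself usually returns 404 because its source address matches no instance port):

    import subprocess

    def probe_metadata(netns="ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc"):
        # Exercise the proxy from inside the OVN metadata namespace (needs root).
        return subprocess.run(
            ["ip", "netns", "exec", netns,
             "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
             "http://169.254.169.254/openstack"],
            check=True, capture_output=True, text=True).stdout
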
Feb 25 12:54:00 compute-0 ceph-mon[76335]: pgmap v2202: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.052 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.05061, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.053 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Started (Lifecycle Event)
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.080 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.085 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.0512712, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.085 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Paused (Lifecycle Event)
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.134 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:00 compute-0 podman[364209]: 2026-02-25 12:54:00.188518999 +0000 UTC m=+0.069737800 container create 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.214 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:54:00 compute-0 systemd[1]: Started libpod-conmon-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope.
Feb 25 12:54:00 compute-0 podman[364209]: 2026-02-25 12:54:00.156920347 +0000 UTC m=+0.038139208 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:54:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c900f458115b85a5678678f33d3ef16ca7c94ef55bdacac047617c1bfeaf0708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:00 compute-0 podman[364209]: 2026-02-25 12:54:00.281296419 +0000 UTC m=+0.162515270 container init 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:54:00 compute-0 podman[364209]: 2026-02-25 12:54:00.288460961 +0000 UTC m=+0.169679752 container start 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.303 244018 DEBUG nova.compute.manager [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.304 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.304 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.305 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.305 244018 DEBUG nova.compute.manager [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Processing event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.307 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
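
Note: this is the network-vif-plugged handshake. The guest is started paused, Nova registers a waiter for the event, Neutron fires it once OVN reports the port bound and up, and only then is the guest resumed (here the event had already arrived, so the wait completed in 0 seconds). A stripped-down sketch of the register/pop/wait pattern the lock messages trace; real Nova keys waiters per instance and bounds the wait with its vif_plugging_timeout option:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # the "...-events" lock in the log
            self._waiters = {}              # (uuid, event_name) -> Event

        def prepare(self, uuid, name):
            with self._lock:
                return self._waiters.setdefault((uuid, name),
                                                threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    events = InstanceEvents()
    uuid = "31c909bc-0d05-4a67-83e9-b45fb2eb35a9"
    waiter = events.prepare(uuid, "network-vif-plugged")
    # On the RPC thread, when the external event arrives:
    pending = events.pop(uuid, "network-vif-plugged")
    if pending is not None:
        pending.set()            # wakes the spawning thread
    waiter.wait(timeout=300)     # stand-in for the configured plug timeout
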
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.327 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.3261776, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:00 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : New worker (364230) forked
Feb 25 12:54:00 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : Loading success.
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.329 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Resumed (Lifecycle Event)
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.331 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.349 244018 INFO nova.virt.libvirt.driver [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance spawned successfully.
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.350 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.355 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.361 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.376 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.377 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.378 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.378 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.379 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.380 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.388 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.433 244018 INFO nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 9.56 seconds to spawn the instance on the hypervisor.
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.434 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.510 244018 INFO nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 10.81 seconds to build instance.
Feb 25 12:54:00 compute-0 nova_compute[244014]: 2026-02-25 12:54:00.532 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:54:01 compute-0 nova_compute[244014]: 2026-02-25 12:54:01.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:01.192 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:01.194 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:54:01 compute-0 nova_compute[244014]: 2026-02-25 12:54:01.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:01 compute-0 nova_compute[244014]: 2026-02-25 12:54:01.868 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Successfully created port: edede35b-327b-4dc5-8432-cc64bb4a290d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:54:02 compute-0 ceph-mon[76335]: pgmap v2203: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:54:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:02.196 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.903 244018 DEBUG nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:02 compute-0 nova_compute[244014]: 2026-02-25 12:54:02.905 244018 WARNING nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state None.
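
Note: the WARNING is benign. Neutron can deliver network-vif-plugged for the same port more than once (here a second copy lands about two seconds after the first), and the first delivery already consumed the registered waiter. By the time the duplicate arrives the instance is ACTIVE with no task outstanding, so the pop finds nothing and the manager just logs and drops it. Continuing the InstanceEvents sketch above, this is simply the branch where pop() returns None:

    # Duplicate delivery: the waiter was already consumed, nothing to wake.
    if events.pop(uuid, "network-vif-plugged") is None:
        print("unexpected event, dropping")   # Nova logs the WARNING here
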
Feb 25 12:54:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:04 compute-0 ceph-mon[76335]: pgmap v2204: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.106 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Successfully updated port: edede35b-327b-4dc5-8432-cc64bb4a290d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.133 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.134 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.134 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.227 244018 DEBUG nova.compute.manager [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.228 244018 DEBUG nova.compute.manager [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.228 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:04 compute-0 nova_compute[244014]: 2026-02-25 12:54:04.348 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:54:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Feb 25 12:54:05 compute-0 nova_compute[244014]: 2026-02-25 12:54:05.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:06 compute-0 ceph-mon[76335]: pgmap v2205: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 101 op/s
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.869 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.899 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.899 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance network_info: |[{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.901 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.901 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.904 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start _get_guest_xml network_info=[{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.909 244018 WARNING nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.919 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.920 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.924 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.926 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.926 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.929 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:54:06 compute-0 nova_compute[244014]: 2026-02-25 12:54:06.931 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:54:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394111518' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.482 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.523 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.530 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.651 244018 DEBUG nova.compute.manager [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.652 244018 DEBUG nova.compute.manager [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.652 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.653 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.653 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.807 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.807 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.808 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.809 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.810 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.812 244018 INFO nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Terminating instance
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.814 244018 DEBUG nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:54:07 compute-0 kernel: tap205ead5a-79 (unregistering): left promiscuous mode
Feb 25 12:54:07 compute-0 NetworkManager[49836]: <info>  [1772024047.8704] device (tap205ead5a-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:54:07 compute-0 ovn_controller[147040]: 2026-02-25T12:54:07Z|01397|binding|INFO|Releasing lport 205ead5a-797e-421e-87e3-ec5dac2037d3 from this chassis (sb_readonly=0)
Feb 25 12:54:07 compute-0 ovn_controller[147040]: 2026-02-25T12:54:07Z|01398|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 down in Southbound
Feb 25 12:54:07 compute-0 ovn_controller[147040]: 2026-02-25T12:54:07Z|01399|binding|INFO|Removing iface tap205ead5a-79 ovn-installed in OVS
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.887 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.888 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc unbound from our chassis
Feb 25 12:54:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.889 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02748d96-83c0-45be-acd6-081ad673e4bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:54:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.890 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9330ac2-2f9f-4019-ae06-078edaf97555]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.890 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace which is not needed anymore
Feb 25 12:54:07 compute-0 nova_compute[244014]: 2026-02-25 12:54:07.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:07 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Deactivated successfully.
Feb 25 12:54:07 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Consumed 8.254s CPU time.
Feb 25 12:54:07 compute-0 systemd-machined[210048]: Machine qemu-163-instance-00000083 terminated.
Feb 25 12:54:08 compute-0 podman[364299]: 2026-02-25 12:54:08.005953766 +0000 UTC m=+0.106567830 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.035 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 podman[364302]: 2026-02-25 12:54:08.03759954 +0000 UTC m=+0.133672076 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : haproxy version is 2.8.14-c23fe91
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : path to executable is /usr/sbin/haproxy
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [WARNING]  (364228) : Exiting Master process...
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [WARNING]  (364228) : Exiting Master process...
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [ALERT]    (364228) : Current worker (364230) exited with code 143 (Terminated)
Feb 25 12:54:08 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [WARNING]  (364228) : All workers exited. Exiting... (0)
Feb 25 12:54:08 compute-0 systemd[1]: libpod-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope: Deactivated successfully.
Feb 25 12:54:08 compute-0 conmon[364224]: conmon 44e3df8efac0d089b0c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope/container/memory.events
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.050 244018 INFO nova.virt.libvirt.driver [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance destroyed successfully.
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.051 244018 DEBUG nova.objects.instance [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:08 compute-0 podman[364356]: 2026-02-25 12:54:08.055049853 +0000 UTC m=+0.065162181 container died 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:54:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548444361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.064 244018 DEBUG nova.virt.libvirt.vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:00Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.065 244018 DEBUG nova.network.os_vif_util [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.066 244018 DEBUG nova.network.os_vif_util [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.066 244018 DEBUG os_vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.069 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205ead5a-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:08 compute-0 ceph-mon[76335]: pgmap v2206: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 101 op/s
Feb 25 12:54:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1394111518' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/548444361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.077 244018 INFO os_vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')
Feb 25 12:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c900f458115b85a5678678f33d3ef16ca7c94ef55bdacac047617c1bfeaf0708-merged.mount: Deactivated successfully.
Feb 25 12:54:08 compute-0 podman[364356]: 2026-02-25 12:54:08.101051962 +0000 UTC m=+0.111164290 container cleanup 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.106 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.107 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.109 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.110 244018 DEBUG nova.virt.libvirt.vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.110 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.111 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
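
The three entries above trace nova handing a port to os-vif: get_config renders the libvirt side, then nova_to_osvif_vif converts nova's raw VIF dict into a typed VIFOpenVSwitch object that the ovs plugin consumes. A minimal sketch of constructing the same object directly with the os_vif library follows; the field values are copied from the "Converted object" line, but the call itself is illustrative rather than nova's actual code path.

    # a sketch, assuming the os_vif library; values come from the
    # "Converted object VIFOpenVSwitch(...)" entry logged above
    import os_vif
    from os_vif.objects import network, vif

    os_vif.initialize()  # load the registered plugins (ovs, noop, ...)

    net = network.Network(id='88562c34-222a-439a-b444-9e6f8a6d70cd',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='edede35b-327b-4dc5-8432-cc64bb4a290d',
        address='fa:16:3e:06:b5:86',
        vif_name='tapedede35b-32',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        network=net)
    # plugging additionally needs an os_vif InstanceInfo, after which
    # os_vif.plug(ovs_vif, instance_info) performs the OVSDB work
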
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.112 244018 DEBUG nova.objects.instance [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:08 compute-0 systemd[1]: libpod-conmon-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope: Deactivated successfully.
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <uuid>945e5549-40d1-4eae-8179-84ad1d751957</uuid>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <name>instance-00000084</name>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-487150499</nova:name>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:54:06</nova:creationTime>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <nova:port uuid="edede35b-327b-4dc5-8432-cc64bb4a290d">
Feb 25 12:54:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe06:b586" ipVersion="6"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe06:b586" ipVersion="6"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="serial">945e5549-40d1-4eae-8179-84ad1d751957</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="uuid">945e5549-40d1-4eae-8179-84ad1d751957</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/945e5549-40d1-4eae-8179-84ad1d751957_disk">
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/945e5549-40d1-4eae-8179-84ad1d751957_disk.config">
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:54:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:06:b5:86"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <target dev="tapedede35b-32"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/console.log" append="off"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:54:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:54:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:54:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:54:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:54:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
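
"End _get_guest_xml" closes the domain definition: a Ceph-backed virtio root disk (vda), the config drive attached as a SATA cdrom (sda), one virtio interface targeting tapedede35b-32 with MTU 1442, and a q35 machine with a bank of pre-created pcie-root-port controllers for hotplug headroom. What the driver does next is essentially define-and-start through libvirt; a hedged sketch with libvirt-python (not nova's actual _create_guest implementation) looks like:

    # a sketch, assuming libvirt-python and qemu:///system
    import libvirt

    xml = open('domain.xml').read()  # the XML dumped above, saved to a file
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the domain definition
    # starting paused and resuming later would match the Started/Paused
    # lifecycle events seen further down, once vif plugging completes
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
    dom.resume()
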
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Preparing to wait for external event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
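
Before plugging the port, the manager registers a waiter for the network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external event, and the three lockutils lines show the per-instance "<uuid>-events" lock taken and released around that registration in under a millisecond. The same pattern with oslo.concurrency, sketched below with a hypothetical registry dict standing in for nova's event table:

    # a sketch of the synchronized pattern in the log; _events is a
    # hypothetical stand-in for nova's per-instance event registry
    from oslo_concurrency import lockutils

    _events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(instance_uuid + '-events')
        def _create_or_get_event():
            return _events.setdefault((instance_uuid, event_name), object())
        return _create_or_get_event()
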
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG nova.virt.libvirt.vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.130 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG os_vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedede35b-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.135 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedede35b-32, col_values=(('external_ids', {'iface-id': 'edede35b-327b-4dc5-8432-cc64bb4a290d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:b5:86', 'vm-uuid': '945e5549-40d1-4eae-8179-84ad1d751957'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:08 compute-0 NetworkManager[49836]: <info>  [1772024048.1373] manager: (tapedede35b-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.141 244018 INFO os_vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32')
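
"Successfully plugged vif" closes the os-vif plug() call: an idempotent AddBridgeCommand (a no-op here, since br-int already exists), then AddPortCommand plus a DbSetCommand stamping the Interface row with the iface-id, attached-mac and vm-uuid external_ids that ovn-controller keys on. Driving the same transaction directly with ovsdbapp might look like the sketch below (local socket path assumed):

    # a sketch, assuming the default local OVSDB socket; add_br/add_port/
    # db_set are the calls behind the AddBridgeCommand/AddPortCommand/
    # DbSetCommand entries logged above
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapedede35b-32', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapedede35b-32',
            ('external_ids', {
                'iface-id': 'edede35b-327b-4dc5-8432-cc64bb4a290d',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:06:b5:86',
                'vm-uuid': '945e5549-40d1-4eae-8179-84ad1d751957'})))
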
Feb 25 12:54:08 compute-0 podman[364417]: 2026-02-25 12:54:08.172447368 +0000 UTC m=+0.052111903 container remove 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73a2ae6a-2156-4c29-9e27-5b19c1fabcec]: (4, ('Wed Feb 25 12:54:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c)\n44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c\nWed Feb 25 12:54:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c)\n44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.179 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae48a71-0584-4fcf-84d9-131fd8768a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.181 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:08 compute-0 kernel: tap02748d96-80: left promiscuous mode
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:06:b5:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.198 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Using config drive
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.198 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[228991ae-5d3b-47e0-8386-3119e5ded323]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f699d88-cde1-461d-ba95-280d619d5180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d27c54-e6ba-429d-bd08-de7275a958b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.223 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b69118d-15b9-4512-b7d6-47e793b68187]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600896, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364453, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.229 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.229 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[18a6b085-904b-44fc-bb93-89227a7c1582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d02748d96\x2d83c0\x2d45be\x2dacd6\x2d081ad673e4bc.mount: Deactivated successfully.
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.491 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating config drive at /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.497 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_3xf1xn5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.641 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_3xf1xn5" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.679 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.684 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config 945e5549-40d1-4eae-8179-84ad1d751957_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.852 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config 945e5549-40d1-4eae-8179-84ad1d751957_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.853 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deleting local config drive /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config because it was imported into RBD.
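
On a Ceph-backed deployment the config drive takes three steps, all visible above: build the ISO locally with mkisofs, rbd import it into the vms pool as <uuid>_disk.config, then delete the local copy. Condensed through oslo.concurrency's processutils (paths abbreviated; the staging directory stands in for the logged /tmp/tmp_3xf1xn5):

    # a sketch of the two subprocess calls logged above
    import os
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/UUID/disk.config'
    staging = '/tmp/metadata_staging'  # dir nova fills with metadata files

    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
        '-quiet', '-J', '-r', '-V', 'config-2', staging)
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, 'UUID_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
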
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.886 244018 INFO nova.virt.libvirt.driver [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deleting instance files /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_del
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.887 244018 INFO nova.virt.libvirt.driver [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deletion of /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_del complete
Feb 25 12:54:08 compute-0 kernel: tapedede35b-32: entered promiscuous mode
Feb 25 12:54:08 compute-0 NetworkManager[49836]: <info>  [1772024048.9152] manager: (tapedede35b-32): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Feb 25 12:54:08 compute-0 systemd-udevd[364321]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 ovn_controller[147040]: 2026-02-25T12:54:08Z|01400|binding|INFO|Claiming lport edede35b-327b-4dc5-8432-cc64bb4a290d for this chassis.
Feb 25 12:54:08 compute-0 ovn_controller[147040]: 2026-02-25T12:54:08Z|01401|binding|INFO|edede35b-327b-4dc5-8432-cc64bb4a290d: Claiming fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.928 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], port_security=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe06:b586/64 2001:db8::f816:3eff:fe06:b586/64', 'neutron:device_id': '945e5549-40d1-4eae-8179-84ad1d751957', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=edede35b-327b-4dc5-8432-cc64bb4a290d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:08 compute-0 ovn_controller[147040]: 2026-02-25T12:54:08Z|01402|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d ovn-installed in OVS
Feb 25 12:54:08 compute-0 ovn_controller[147040]: 2026-02-25T12:54:08Z|01403|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d up in Southbound
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.930 157129 INFO neutron.agent.ovn.metadata.agent [-] Port edede35b-327b-4dc5-8432-cc64bb4a290d in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd bound to our chassis
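
On the OVN side, ovn-controller notices the freshly stamped iface-id, claims the lport for this chassis with its MAC and all three fixed addresses, and marks the port up in the Southbound database; the metadata agent's PortBindingUpdatedEvent fires on the same Port_Binding update and triggers metadata provisioning for the datapath. A stripped-down watcher on ovsdbapp's RowEvent, with illustrative names rather than neutron's exact classes:

    # a sketch of a Port_Binding watcher, assuming ovsdbapp's RowEvent;
    # it fires when a port gains a chassis, like the matched event above
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # old carries only changed columns; chassis=[] -> newly bound
            return (row.chassis and not getattr(old, 'chassis', None)
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)
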
Feb 25 12:54:08 compute-0 NetworkManager[49836]: <info>  [1772024048.9330] device (tapedede35b-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:54:08 compute-0 NetworkManager[49836]: <info>  [1772024048.9338] device (tapedede35b-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.933 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:08 compute-0 systemd-machined[210048]: New machine qemu-164-instance-00000084.
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2199e03-4b3b-42a9-b26f-14b5e9271b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.970 244018 INFO nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 1.16 seconds to destroy the instance on the hypervisor.
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG oslo.service.loopingcall [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:54:08 compute-0 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG nova.network.neutron [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:54:08 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000084.
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.974 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd07a80-a3e5-45e2-a103-fc21427fd3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.980 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8cec34-516a-47c0-8cf1-b83320e56b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.003 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f26254-1dd2-4fb9-b744-27b14eb476cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a262f6-14b3-4b84-a251-942219ef2edb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364518, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.033 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d79d99d-c743-4736-9be0-26307a8c6f66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598041, 'tstamp': 598041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364522, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598043, 'tstamp': 598043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364522, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.415 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.416 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.422 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.421914, 945e5549-40d1-4eae-8179-84ad1d751957 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.422 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Started (Lifecycle Event)
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.445 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.448 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.452 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.4220247, 945e5549-40d1-4eae-8179-84ad1d751957 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.453 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Paused (Lifecycle Event)
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.471 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.473 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.474 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.479 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.499 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.500 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.729 244018 DEBUG nova.compute.manager [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.731 244018 DEBUG nova.compute.manager [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Processing event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.732 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.737 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.7373264, 945e5549-40d1-4eae-8179-84ad1d751957 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Resumed (Lifecycle Event)
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.740 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.744 244018 INFO nova.virt.libvirt.driver [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance spawned successfully.
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.744 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.761 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.766 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.806 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.807 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.807 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.808 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.888 244018 INFO nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 11.81 seconds to spawn the instance on the hypervisor.
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.888 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.968 244018 INFO nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 13.43 seconds to build instance.
Feb 25 12:54:09 compute-0 nova_compute[244014]: 2026-02-25 12:54:09.991 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:10 compute-0 ceph-mon[76335]: pgmap v2207: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.183 244018 DEBUG nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.183 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.185 244018 WARNING nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state deleting.
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.440 244018 DEBUG nova.network.neutron [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.466 244018 INFO nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 1.50 seconds to deallocate network for instance.
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.560 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.560 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:10 compute-0 nova_compute[244014]: 2026-02-25 12:54:10.665 244018 DEBUG oslo_concurrency.processutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.882 244018 DEBUG nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:11 compute-0 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 WARNING nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state active and task_state None.
Feb 25 12:54:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648435004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.561 244018 DEBUG oslo_concurrency.processutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.896s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.567 244018 DEBUG nova.compute.provider_tree [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:12 compute-0 ceph-mon[76335]: pgmap v2208: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:54:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3648435004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.596 244018 DEBUG nova.scheduler.client.report [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.659 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.752 244018 INFO nova.scheduler.client.report [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 31c909bc-0d05-4a67-83e9-b45fb2eb35a9
Feb 25 12:54:12 compute-0 nova_compute[244014]: 2026-02-25 12:54:12.857 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:13 compute-0 nova_compute[244014]: 2026-02-25 12:54:13.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:13 compute-0 ceph-mon[76335]: pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Feb 25 12:54:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:14 compute-0 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG nova.compute.manager [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:14 compute-0 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG nova.compute.manager [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:14 compute-0 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:14 compute-0 nova_compute[244014]: 2026-02-25 12:54:14.005 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:14 compute-0 nova_compute[244014]: 2026-02-25 12:54:14.005 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 12:54:15 compute-0 nova_compute[244014]: 2026-02-25 12:54:15.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:15 compute-0 ceph-mon[76335]: pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 12:54:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 12:54:17 compute-0 ceph-mon[76335]: pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 12:54:18 compute-0 nova_compute[244014]: 2026-02-25 12:54:18.087 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:18 compute-0 nova_compute[244014]: 2026-02-25 12:54:18.088 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:18 compute-0 nova_compute[244014]: 2026-02-25 12:54:18.111 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:18 compute-0 nova_compute[244014]: 2026-02-25 12:54:18.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:54:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:19 compute-0 ceph-mon[76335]: pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:54:20 compute-0 nova_compute[244014]: 2026-02-25 12:54:20.453 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:54:21 compute-0 ceph-mon[76335]: pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 12:54:22 compute-0 ovn_controller[147040]: 2026-02-25T12:54:22Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:b5:86 10.100.0.13
Feb 25 12:54:22 compute-0 ovn_controller[147040]: 2026-02-25T12:54:22Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:b5:86 10.100.0.13
Feb 25 12:54:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.033 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.035 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.049 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024048.0479279, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.049 244018 INFO nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Stopped (Lifecycle Event)
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.054 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.090 244018 DEBUG nova.compute.manager [None req-8af727cb-a2b3-4027-ad75-d058f0e5b86e - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.165 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.166 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.178 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.179 244018 INFO nova.compute.claims [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:54:23 compute-0 nova_compute[244014]: 2026-02-25 12:54:23.401 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:23 compute-0 ceph-mon[76335]: pgmap v2214: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 25 12:54:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990708143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.016 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.024 244018 DEBUG nova.compute.provider_tree [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.043 244018 DEBUG nova.scheduler.client.report [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.068 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.069 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.119 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.120 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.144 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.161 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.314 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.315 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.316 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating image(s)
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.343 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.372 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.398 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.403 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.490 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.492 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.493 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.494 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.525 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.530 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.763 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/990708143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.842 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.943 244018 DEBUG nova.objects.instance [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.960 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.961 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Ensure instance console log exists: /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.962 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.962 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:24 compute-0 nova_compute[244014]: 2026-02-25 12:54:24.963 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:25 compute-0 nova_compute[244014]: 2026-02-25 12:54:25.061 244018 DEBUG nova.policy [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:54:25 compute-0 nova_compute[244014]: 2026-02-25 12:54:25.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:25 compute-0 ceph-mon[76335]: pgmap v2215: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.531 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Successfully updated port: 205ead5a-797e-421e-87e3-ec5dac2037d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.548 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.549 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.549 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.629 244018 DEBUG nova.compute.manager [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.630 244018 DEBUG nova.compute.manager [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.630 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 12:54:26 compute-0 nova_compute[244014]: 2026-02-25 12:54:26.684 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.386 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.409 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.409 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance network_info: |[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.410 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.411 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.418 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start _get_guest_xml network_info=[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.426 244018 WARNING nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.432 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.433 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.453 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.454 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.455 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.455 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.456 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.456 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.457 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.457 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.458 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.458 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.460 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:54:27 compute-0 nova_compute[244014]: 2026-02-25 12:54:27.464 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:27 compute-0 ceph-mon[76335]: pgmap v2216: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 12:54:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:54:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135331109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.058 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.078 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.082 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.146 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:54:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655311904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Feb 25 12:54:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.693 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.696 244018 DEBUG nova.virt.libvirt.vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:24Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.696 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.698 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.699 244018 DEBUG nova.objects.instance [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.720 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <uuid>3da53ea4-e42d-4bfb-b917-d9c81fd3652f</uuid>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <name>instance-00000085</name>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1213997257</nova:name>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:54:27</nova:creationTime>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <nova:port uuid="205ead5a-797e-421e-87e3-ec5dac2037d3">
Feb 25 12:54:28 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <system>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="serial">3da53ea4-e42d-4bfb-b917-d9c81fd3652f</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="uuid">3da53ea4-e42d-4bfb-b917-d9c81fd3652f</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </system>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <os>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </os>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <features>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </features>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk">
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config">
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </source>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:54:28 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:5f:de:be"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <target dev="tap205ead5a-79"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/console.log" append="off"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <video>
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </video>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:54:28 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:54:28 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:54:28 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:54:28 compute-0 nova_compute[244014]: </domain>
Feb 25 12:54:28 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.721 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Preparing to wait for external event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.723 244018 DEBUG nova.virt.libvirt.vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:24Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.723 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG os_vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205ead5a-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205ead5a-79, col_values=(('external_ids', {'iface-id': '205ead5a-797e-421e-87e3-ec5dac2037d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:de:be', 'vm-uuid': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:28 compute-0 NetworkManager[49836]: <info>  [1772024068.7317] manager: (tap205ead5a-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.738 244018 INFO os_vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:5f:de:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Using config drive
Feb 25 12:54:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4135331109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2655311904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:54:28 compute-0 nova_compute[244014]: 2026-02-25 12:54:28.821 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.179 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating config drive at /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.186 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8en4w6n2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.326 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8en4w6n2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.363 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.368 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.406 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.407 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.426 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.535 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.537 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deleting local config drive /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config because it was imported into RBD.
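
Lines .326 through .537 are the whole images_type=rbd config-drive sequence: build the ISO locally with mkisofs, rbd-import it into the vms pool under the <uuid>_disk.config name, then drop the local copy. A rough equivalent of that sequence, with the command lines copied from the log (the -publisher string omitted, error handling left out):

    import os
    import subprocess

    inst = "3da53ea4-e42d-4bfb-b917-d9c81fd3652f"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the config-drive ISO from the staged metadata directory
    #    (/tmp/tmp8en4w6n2 in the log run).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp8en4w6n2"], check=True)

    # 2. Import it into the vms pool under the name the domain XML uses.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. The local ISO is redundant once imported (the INFO line above).
    os.unlink(iso)
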
Feb 25 12:54:29 compute-0 kernel: tap205ead5a-79: entered promiscuous mode
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.5935] manager: (tap205ead5a-79): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_controller[147040]: 2026-02-25T12:54:29Z|01404|binding|INFO|Claiming lport 205ead5a-797e-421e-87e3-ec5dac2037d3 for this chassis.
Feb 25 12:54:29 compute-0 ovn_controller[147040]: 2026-02-25T12:54:29Z|01405|binding|INFO|205ead5a-797e-421e-87e3-ec5dac2037d3: Claiming fa:16:3e:5f:de:be 10.100.0.13
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.608 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:29 compute-0 ovn_controller[147040]: 2026-02-25T12:54:29Z|01406|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 ovn-installed in OVS
Feb 25 12:54:29 compute-0 ovn_controller[147040]: 2026-02-25T12:54:29Z|01407|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 up in Southbound
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.611 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc bound to our chassis
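
The "Matched UPDATE ... PortBindingUpdatedEvent" line above is ovsdbapp's row-event machinery: the metadata agent registers an event against the Port_Binding table and only reacts when the chassis column flips from empty (the old=Port_Binding(chassis=[]) in the log) to this chassis. A sketch of that shape, assuming ovsdbapp's RowEvent interface (events tuple, table name, match_fn/run) and a hypothetical our_chassis string; attribute access on the chassis row is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React when a Port_Binding row is claimed by this chassis."""

        def __init__(self, our_chassis):
            # ('update',), 'Port_Binding', conditions=None mirror the
            # repr printed in the log; match_fn does the filtering.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.our_chassis = our_chassis  # hypothetical chassis name

        def match_fn(self, event, row, old):
            # old=Port_Binding(chassis=[]) in the log: previously unbound,
            # now bound to a chassis whose name is ours.
            return (not getattr(old, 'chassis', None) and row.chassis
                    and row.chassis[0].name == self.our_chassis)

        def run(self, event, row, old):
            print('claimed port', row.logical_port)
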
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.613 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e855fed1-a291-4275-85a5-8fc4f4ad3ffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.624 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02748d96-81 in ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.627 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02748d96-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f98a9-4cb5-41cf-835e-4442fe05b6bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[597caa3d-0de6-4421-b247-aa9fed6b9ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 systemd-udevd[364916]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:54:29 compute-0 systemd-machined[210048]: New machine qemu-165-instance-00000085.
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.644 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4e031721-519e-4f8a-8019-6cd88a0ce106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.6573] device (tap205ead5a-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.6585] device (tap205ead5a-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:54:29 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000085.
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.671 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a58ec3-a539-4e4a-bbcd-1b937cabf96b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.702 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40591f7b-46bf-4d6c-8e73-4ec923335776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.7100] manager: (tap02748d96-80): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Feb 25 12:54:29 compute-0 systemd-udevd[364919]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f07a93b9-a7a1-46db-9469-2b36feaff7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.740 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6b24e7-0b33-447e-8e9e-46257061a365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.745 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a48c221-b3ed-4952-849e-631c94d70aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.7747] device (tap02748d96-80): carrier: link connected
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.783 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03d5c8f9-838a-40e7-b81e-ffa31cff7780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.803 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eec0e7-e933-437b-8a13-cc392c2f2633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603918, 'reachable_time': 36344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364947, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ceph-mon[76335]: pgmap v2217: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8447c8b1-b210-4617-a581-445921791310]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5d94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603918, 'tstamp': 603918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364948, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3033fd69-b9b4-4025-b0da-643d485878b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603918, 'reachable_time': 36344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364949, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bed870f7-a731-41ea-a5a5-7a71e2a26de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d5b4fa-c7c9-4715-a8e8-85517083e85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.955 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.956 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.956 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02748d96-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 NetworkManager[49836]: <info>  [1772024069.9613] manager: (tap02748d96-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Feb 25 12:54:29 compute-0 kernel: tap02748d96-80: entered promiscuous mode
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.964 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.965 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02748d96-80, col_values=(('external_ids', {'iface-id': 'd05bda2c-299c-4a37-881d-1ed81c75bb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_controller[147040]: 2026-02-25T12:54:29Z|01408|binding|INFO|Releasing lport d05bda2c-299c-4a37-881d-1ed81c75bb47 from this chassis (sb_readonly=0)
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.968 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.970 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb52bb3-03f2-4a81-8650-3aff3c80917d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.971 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:54:29 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.972 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'env', 'PROCESS_TAG=haproxy-02748d96-83c0-45be-acd6-081ad673e4bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02748d96-83c0-45be-acd6-081ad673e4bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
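
The haproxy_cfg dump above is rendered by create_config_file() and written under /var/lib/neutron/ovn-metadata-proxy/<network_id>.conf, which is exactly the -f argument in the rootwrap command line just logged. A sketch of the render step, using string.Template as a stand-in for neutron's own templating and trimmed to the global and listen stanzas, with values taken from this log:

    from string import Template

    # Trimmed template; neutron's real one also carries the defaults
    # block (timeouts, forwardfor, etc.) logged above.
    CFG = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$net
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/$net.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID $net
    """)

    net = "02748d96-83c0-45be-acd6-081ad673e4bc"
    with open(f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf", "w") as f:
        f.write(CFG.substitute(net=net))

The file is then consumed by haproxy inside the ovnmeta- namespace via the ip netns exec invocation shown above.
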
Feb 25 12:54:29 compute-0 nova_compute[244014]: 2026-02-25 12:54:29.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.046 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.0462513, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Started (Lifecycle Event)
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.077 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.0463872, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.077 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Paused (Lifecycle Event)
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.103 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.108 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.146 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] During sync_power_state the instance has a pending task (spawning). Skip.
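
The two integers in the sync message are nova.compute.power_state constants: the database still holds 0 because the record predates the guest, while libvirt reports 3 for a domain that is created but not yet resumed. Decoded with the constant table from nova.compute.power_state (values worth re-checking against the release in use):

    # nova.compute.power_state constant values
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_power_state, vm_power_state = 0, 3   # values from the log line
    print(POWER_STATES[db_power_state], '->', POWER_STATES[vm_power_state])
    # NOSTATE -> PAUSED
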
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.178 244018 DEBUG nova.compute.manager [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.179 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.180 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.181 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
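
The Acquiring/acquired/released triple around the "<uuid>-events" lock is oslo.concurrency's standard trace: entry, grant (with wait time), and exit (with hold time), emitted by the synchronized wrapper at lockutils.py:404/409/423. The same pattern in miniature, reusing the lock name from the log:

    from oslo_concurrency import lockutils

    inst = "3da53ea4-e42d-4bfb-b917-d9c81fd3652f"

    @lockutils.synchronized(f"{inst}-events")
    def _pop_event():
        # Runs with the in-process lock held; the decorator emits the
        # Acquiring/acquired/released DEBUG lines seen above.
        return None

    _pop_event()
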
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.182 244018 DEBUG nova.compute.manager [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Processing event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.184 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.205 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.2051318, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.208 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Resumed (Lifecycle Event)
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.210 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.213 244018 INFO nova.virt.libvirt.driver [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance spawned successfully.
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.213 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.274 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.279 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.284 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.284 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.286 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.335 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.365 244018 INFO nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 6.05 seconds to spawn the instance on the hypervisor.
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.366 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.413 244018 INFO nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 7.28 seconds to build instance.
Feb 25 12:54:30 compute-0 podman[365023]: 2026-02-25 12:54:30.419899423 +0000 UTC m=+0.089224068 container create 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.425 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:30 compute-0 podman[365023]: 2026-02-25 12:54:30.378642289 +0000 UTC m=+0.047966924 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:54:30 compute-0 systemd[1]: Started libpod-conmon-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope.
Feb 25 12:54:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094a23f1f3da701437aaad813d898e692f00c7eb9efe6f71e247ac3422f36f17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:30 compute-0 podman[365023]: 2026-02-25 12:54:30.530014009 +0000 UTC m=+0.199338694 container init 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 12:54:30 compute-0 podman[365023]: 2026-02-25 12:54:30.537643774 +0000 UTC m=+0.206968419 container start 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:54:30 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : New worker (365044) forked
Feb 25 12:54:30 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : Loading success.
Feb 25 12:54:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:54:30 compute-0 nova_compute[244014]: 2026-02-25 12:54:30.897 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:54:31
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'images', 'default.rgw.log', 'volumes', '.mgr']
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:54:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112003846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.489 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
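
For RBD-backed ephemeral storage the resource audit shells out to ceph df --format=json rather than statting a local filesystem; the free_disk figure in the resource view a few lines below is derived from this output. A sketch of the same call and the field of interest (standard ceph df JSON layout; exact keys can drift between Ceph releases):

    import json
    import subprocess

    # Same invocation as the audited command in the log.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    for pool in json.loads(out)["pools"]:
        if pool["name"] == "vms":
            # max_avail: bytes the pool can still absorb after replication
            print("vms free:", pool["stats"]["max_avail"])
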
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.582 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.583 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.587 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.592 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.593 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.793 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.795 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3029MB free_disk=59.875610740855336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.795 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.796 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:31 compute-0 ceph-mon[76335]: pgmap v2218: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb 25 12:54:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1112003846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 945e5549-40d1-4eae-8179-84ad1d751957 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 3da53ea4-e42d-4bfb-b917-d9c81fd3652f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:54:31 compute-0 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
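
The final view is the three tracked instances rolled up against the host reservation: used_ram folds the 512 MB reserved memory in with 3 x 128 MB of instance RAM, while used_disk and used_vcpus count only the per-instance allocations. As arithmetic:

    # Roll-up behind the "Final resource view" line, checked against the
    # three allocations listed above (DISK_GB=1, MEMORY_MB=128, VCPU=1).
    instances = 3
    used_ram = 512 + instances * 128   # reserved MB + instance MB -> 896
    used_disk = instances * 1          # GB -> 3
    used_vcpus = instances * 1         # -> 3
    print(used_ram, used_disk, used_vcpus)
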
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.310 244018 DEBUG nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.311 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.311 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.312 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.312 244018 DEBUG nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.313 244018 WARNING nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state None.
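The three lockutils lines above (Acquiring / acquired / released) are the standard oslo.concurrency pattern Nova uses to serialize per-instance event handling. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the lock name is copied from the log and the guarded function is a hypothetical stand-in, not Nova's actual code:

    from oslo_concurrency import lockutils

    def _pop_event():
        # Hypothetical stand-in for the _pop_event closure named in the log;
        # the real code pops a waiting event for the instance, if any.
        return None

    # Context-manager form: with oslo.log configured, this emits the same
    # "Acquiring" / "acquired" / "released" DEBUG lines seen above.
    with lockutils.lock('3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events'):
        _pop_event()

    # Decorator form, used for coarser locks such as "compute_resources":
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass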
Feb 25 12:54:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882036114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.592 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
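The two processutils lines above bracket a shelled-out ceph df, which the resource tracker uses to size RBD-backed disk inventory. A hedged sketch of issuing the same command and reading the result, assuming oslo.concurrency and a reachable cluster; the JSON key names match recent Ceph releases but are worth verifying against the deployed version:

    import json
    from oslo_concurrency import processutils

    # Same command the log shows; execute() returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    df = json.loads(out)
    # Cluster-wide totals sit under 'stats'; per-pool usage under 'pools'.
    total_gib = df['stats']['total_bytes'] / 1024 ** 3
    avail_gib = df['stats']['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gib:.0f} GiB free of {total_gib:.0f} GiB')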
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.598 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.621 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
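Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, which is why this host can report 3 of an effective 32 vCPUs allocated while owning only 8 physical ones. Worked out with the numbers from the line above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2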
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.660 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.725 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.726 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.727 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.728 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.728 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.731 244018 INFO nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Terminating instance
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.733 244018 DEBUG nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.783 244018 DEBUG nova.compute.manager [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.784 244018 DEBUG nova.compute.manager [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:32 compute-0 kernel: tapedede35b-32 (unregistering): left promiscuous mode
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.784 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.786 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.786 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:32 compute-0 NetworkManager[49836]: <info>  [1772024072.7895] device (tapedede35b-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 ovn_controller[147040]: 2026-02-25T12:54:32Z|01409|binding|INFO|Releasing lport edede35b-327b-4dc5-8432-cc64bb4a290d from this chassis (sb_readonly=0)
Feb 25 12:54:32 compute-0 ovn_controller[147040]: 2026-02-25T12:54:32Z|01410|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d down in Southbound
Feb 25 12:54:32 compute-0 ovn_controller[147040]: 2026-02-25T12:54:32Z|01411|binding|INFO|Removing iface tapedede35b-32 ovn-installed in OVS
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.808 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], port_security=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe06:b586/64 2001:db8::f816:3eff:fe06:b586/64', 'neutron:device_id': '945e5549-40d1-4eae-8179-84ad1d751957', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=edede35b-327b-4dc5-8432-cc64bb4a290d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.809 157129 INFO neutron.agent.ovn.metadata.agent [-] Port edede35b-327b-4dc5-8432-cc64bb4a290d in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd unbound from our chassis
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.810 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd
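The PortBindingUpdatedEvent above is the metadata agent reacting to a Port_Binding row changing in the OVN southbound database. To inspect such a row by hand, the southbound DB can be queried with ovn-sbctl's generic database commands; a sketch via subprocess, with the logical_port value taken from the log:

    import subprocess

    # 'find' is one of ovn-sbctl's generic OVSDB commands; this prints the
    # Port_Binding row for the port just released from this chassis.
    out = subprocess.check_output([
        'ovn-sbctl', '--format=json', 'find', 'Port_Binding',
        'logical_port=edede35b-327b-4dc5-8432-cc64bb4a290d'])
    print(out.decode())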
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1882036114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.833 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be229f7b-1aab-415d-b7a8-13c370c307c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Deactivated successfully.
Feb 25 12:54:32 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Consumed 14.241s CPU time.
Feb 25 12:54:32 compute-0 systemd-machined[210048]: Machine qemu-164-instance-00000084 terminated.
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.868 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bb00e625-9d29-4c2b-8066-072697461f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.871 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[108c8a51-a3b0-431a-a3d7-a97fac6b3f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.900 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[15f62c61-1452-4006-8a82-a63463eb4d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.912 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31d2a44e-c1a3-481d-aa63-b39edbf7f4a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365111, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.926 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a935afc-e17d-4d0f-8007-1f95f414414c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598041, 'tstamp': 598041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365112, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598043, 'tstamp': 598043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365112, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.928 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.936 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.938 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
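The ovsdbapp transactions above map one-to-one onto ovs-vsctl operations, which is handy when reproducing or debugging the agent's OVS state by hand. A sketch of the equivalents, with the port, bridge, and iface-id values copied from the log:

    import subprocess

    # DelPortCommand(if_exists=True)  -> ovs-vsctl --if-exists del-port
    # AddPortCommand(may_exist=True)  -> ovs-vsctl --may-exist add-port
    # DbSetCommand(table=Interface)   -> ovs-vsctl set Interface ...
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap88562c34-20'],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap88562c34-20'],
        ['ovs-vsctl', 'set', 'Interface', 'tap88562c34-20',
         'external_ids:iface-id=09662fcb-392d-469a-981c-54d31225748b'],
    ):
        subprocess.check_call(cmd)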
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.969 244018 INFO nova.virt.libvirt.driver [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance destroyed successfully.
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.970 244018 DEBUG nova.objects.instance [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.992 244018 DEBUG nova.virt.libvirt.vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:09Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.992 244018 DEBUG nova.network.os_vif_util [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.993 244018 DEBUG nova.network.os_vif_util [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.994 244018 DEBUG os_vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedede35b-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:32 compute-0 nova_compute[244014]: 2026-02-25 12:54:32.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.002 244018 INFO os_vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32')
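The Converting VIF / Converted object / Unplugging sequence above is Nova handing the teardown to the os-vif library. A rough sketch of driving os-vif's public plug/unplug API directly, with field values copied from the logged VIFOpenVSwitch object; the exact set of required fields should be checked against the os-vif documentation:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins, e.g. 'ovs'

    my_vif = vif.VIFOpenVSwitch(
        id='edede35b-327b-4dc5-8432-cc64bb4a290d',
        address='fa:16:3e:06:b5:86',
        vif_name='tapedede35b-32',
        bridge_name='br-int',
        network=network.Network(id='88562c34-222a-439a-b444-9e6f8a6d70cd'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='edede35b-327b-4dc5-8432-cc64bb4a290d'))

    inst = instance_info.InstanceInfo(
        uuid='945e5549-40d1-4eae-8179-84ad1d751957',
        name='instance-00000084')

    os_vif.unplug(my_vif, inst)  # removes tapedede35b-32 from br-int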
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.218 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.220 244018 INFO nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Terminating instance
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.221 244018 DEBUG nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:54:33 compute-0 kernel: tap205ead5a-79 (unregistering): left promiscuous mode
Feb 25 12:54:33 compute-0 NetworkManager[49836]: <info>  [1772024073.2508] device (tap205ead5a-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:54:33 compute-0 ovn_controller[147040]: 2026-02-25T12:54:33Z|01412|binding|INFO|Releasing lport 205ead5a-797e-421e-87e3-ec5dac2037d3 from this chassis (sb_readonly=0)
Feb 25 12:54:33 compute-0 ovn_controller[147040]: 2026-02-25T12:54:33Z|01413|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 down in Southbound
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 ovn_controller[147040]: 2026-02-25T12:54:33Z|01414|binding|INFO|Removing iface tap205ead5a-79 ovn-installed in OVS
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.264 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.265 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc unbound from our chassis
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.266 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02748d96-83c0-45be-acd6-081ad673e4bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6a8c9b-5e09-42d6-a2c4-f7d5c9f4c3c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.267 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace which is not needed anymore
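Once the last VIF on a network leaves the chassis, the agent tears down that network's ovnmeta- namespace; the surrounding privsep replies are its netlink calls doing exactly that. The same namespace can be inspected or removed with plain iproute2; a sketch, with the namespace name from the log:

    import subprocess

    ns = 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc'

    # List namespaces, then look inside the one being cleaned up (the tap
    # device and the 169.254.169.254 metadata address live here).
    print(subprocess.check_output(['ip', 'netns', 'list']).decode())
    subprocess.check_call(['ip', 'netns', 'exec', ns, 'ip', 'addr'])

    # What the cleanup ultimately amounts to once the tap port is gone:
    subprocess.check_call(['ip', 'netns', 'delete', ns])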
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.271 244018 INFO nova.virt.libvirt.driver [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deleting instance files /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957_del
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.272 244018 INFO nova.virt.libvirt.driver [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deletion of /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957_del complete
Feb 25 12:54:33 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Deactivated successfully.
Feb 25 12:54:33 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Consumed 3.409s CPU time.
Feb 25 12:54:33 compute-0 systemd-machined[210048]: Machine qemu-165-instance-00000085 terminated.
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.328 244018 INFO nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG oslo.service.loopingcall [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG nova.network.neutron [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:54:33 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : haproxy version is 2.8.14-c23fe91
Feb 25 12:54:33 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : path to executable is /usr/sbin/haproxy
Feb 25 12:54:33 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [WARNING]  (365042) : Exiting Master process...
Feb 25 12:54:33 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [ALERT]    (365042) : Current worker (365044) exited with code 143 (Terminated)
Feb 25 12:54:33 compute-0 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [WARNING]  (365042) : All workers exited. Exiting... (0)
Feb 25 12:54:33 compute-0 systemd[1]: libpod-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope: Deactivated successfully.
Feb 25 12:54:33 compute-0 podman[365165]: 2026-02-25 12:54:33.379777896 +0000 UTC m=+0.045307669 container died 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3-userdata-shm.mount: Deactivated successfully.
Feb 25 12:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-094a23f1f3da701437aaad813d898e692f00c7eb9efe6f71e247ac3422f36f17-merged.mount: Deactivated successfully.
Feb 25 12:54:33 compute-0 podman[365165]: 2026-02-25 12:54:33.42283473 +0000 UTC m=+0.088364413 container cleanup 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:33 compute-0 systemd[1]: libpod-conmon-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope: Deactivated successfully.
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.449 244018 INFO nova.virt.libvirt.driver [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance destroyed successfully.
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.450 244018 DEBUG nova.objects.instance [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:33 compute-0 podman[365194]: 2026-02-25 12:54:33.48095992 +0000 UTC m=+0.040299088 container remove 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c84055e6-79c6-462b-b83f-d86426f4d34c]: (4, ('Wed Feb 25 12:54:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3)\n0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3\nWed Feb 25 12:54:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3)\n0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
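The privsep reply above captures a wrapper script stopping and then deleting the per-network haproxy container; the container name is simply neutron-haproxy-ovnmeta-<network UUID>. Those two steps correspond to ordinary podman commands; a sketch:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc'

    # Equivalent of the "Stopping container ... / Deleting container ..."
    # transcript in the privsep reply above.
    subprocess.check_call(['podman', 'stop', name])
    subprocess.check_call(['podman', 'rm', name])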
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.488 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[597fa506-150a-4623-a12c-ef3b015fbcf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 kernel: tap02748d96-80: left promiscuous mode
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.503 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[040ee3dc-c99d-4ce9-a27a-c51f0df98da6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.505 244018 DEBUG nova.virt.libvirt.vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:30Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.507 244018 DEBUG nova.network.os_vif_util [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.508 244018 DEBUG nova.network.os_vif_util [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.509 244018 DEBUG os_vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.512 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205ead5a-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.518 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5196b88-00cf-4702-8131-ec24c9cb0278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf56cc6b-3b6d-4069-ad7f-672583368240]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.524 244018 INFO os_vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')
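
The unplug path that just completed is os_vif's public API dispatching to its 'ovs' plugin. A rough sketch of how a caller drives it, using IDs from the log; the VIF object here is trimmed for brevity (the real one, shown above, also carries a port_profile and traffic-filtering flags):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.objects.register_all()   # register the versioned VIF objects
    os_vif.initialize()             # load plugins ('ovs', ...) once per process

    v = vif.VIFOpenVSwitch(
        id='205ead5a-797e-421e-87e3-ec5dac2037d3',
        address='fa:16:3e:5f:de:be',
        vif_name='tap205ead5a-79',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='02748d96-83c0-45be-acd6-081ad673e4bc'))
    info = instance_info.InstanceInfo(
        uuid='3da53ea4-e42d-4bfb-b917-d9c81fd3652f',
        name='tempest-TestNetworkBasicOps-server-1213997257')

    os_vif.unplug(v, info)   # on success the library logs "Successfully unplugged vif ..."
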
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23d57f6f-7677-48b0-9eb1-6e3d5e53c3da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603910, 'reachable_time': 18310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365223, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d02748d96\x2d83c0\x2d45be\x2dacd6\x2d081ad673e4bc.mount: Deactivated successfully.
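
The systemd line is the namespace's bind mount going away. Mount unit names use '-' as the path separator and escape literal dashes as \x2d, so the unit decodes back to /run/netns/ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc. A quick illustration of that decoding (not how systemd implements it internally):

    import re

    unit = r'run-netns-ovnmeta\x2d02748d96\x2d83c0\x2d45be\x2dacd6\x2d081ad673e4bc.mount'
    path = '/' + unit[:-len('.mount')].replace('-', '/')   # '-' separates path components
    path = re.sub(r'\\x([0-9a-f]{2})',                     # \x2d etc. are escaped bytes
                  lambda m: chr(int(m.group(1), 16)), path)
    print(path)   # /run/netns/ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc
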
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.538 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:54:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.538 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b237c-b23c-4909-9352-f4971d1c07dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
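
remove_netns in neutron's privileged ip_lib (cited two lines up) is a privsep-wrapped thin call into pyroute2; the privsep reply lines are the daemon acknowledging it. Roughly equivalent, assuming root privileges and the pyroute2 library:

    from pyroute2 import netns

    ns = 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc'
    if ns in netns.listnetns():   # tolerate an already-removed namespace
        netns.remove(ns)          # unmounts and unlinks /var/run/netns/<ns>
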
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.656 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.808 244018 INFO nova.virt.libvirt.driver [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deleting instance files /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_del
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.808 244018 INFO nova.virt.libvirt.driver [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deletion of /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_del complete
Feb 25 12:54:33 compute-0 ceph-mon[76335]: pgmap v2219: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.867 244018 INFO nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.868 244018 DEBUG oslo.service.loopingcall [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.871 244018 DEBUG nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.871 244018 DEBUG nova.network.neutron [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:54:33 compute-0 nova_compute[244014]: 2026-02-25 12:54:33.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.060 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.378 244018 DEBUG nova.network.neutron [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.401 244018 INFO nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 1.07 seconds to deallocate network for instance.
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.419 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.419 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.421 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.421 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.423 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.423 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.424 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.424 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.425 244018 WARNING nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state deleting.
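
The acquire/release pairs around each event come from a per-instance "<uuid>-events" lock that serializes external-event dispatch. When no greenthread has registered a waiter for the event name, the pop returns nothing, which is what "No waiting events found" and the "unexpected event" warnings report: the port was already being torn down, so nothing was waiting on Neutron's notification. The pattern, sketched with oslo.concurrency (a simplification of nova.compute.manager.InstanceEvents, not its actual code):

    from oslo_concurrency import lockutils

    _waiters = {}   # event name -> registered waiter, per instance
    instance_uuid = '3da53ea4-e42d-4bfb-b917-d9c81fd3652f'

    @lockutils.synchronized(f'{instance_uuid}-events')
    def pop_event(name):
        # None here corresponds to "No waiting events found" in the log
        return _waiters.pop(name, None)

    pop_event('network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3')
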
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.460 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.461 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.582 244018 DEBUG oslo_concurrency.processutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.941 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.942 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.942 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.943 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.943 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.944 244018 WARNING nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state deleted and task_state None.
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.944 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.946 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:34 compute-0 nova_compute[244014]: 2026-02-25 12:54:34.946 244018 WARNING nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state deleted and task_state None.
Feb 25 12:54:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1009173585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.203 244018 DEBUG oslo_concurrency.processutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
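
The resource tracker shells out to ceph df because the instance disks live in RBD, so DISK_GB inventory is sized from the cluster rather than the local filesystem; the ceph-mon audit lines above are the monitor dispatching that same command. The equivalent call and the fields of interest (top-level JSON layout as in recent Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])   # feeds the DISK_GB inventory
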
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.210 244018 DEBUG nova.compute.provider_tree [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.229 244018 DEBUG nova.scheduler.client.report [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
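
Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
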
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.261 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.305 244018 INFO nova.scheduler.client.report [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 945e5549-40d1-4eae-8179-84ad1d751957
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.378 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.397 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.397 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.427 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
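
The network_info blobs written to the info cache are plain JSON, so extracting addresses from one is mechanical. A trimmed example using the shape shown above (only one subnet kept, values from the log):

    network_info = [{
        "id": "edede35b-327b-4dc5-8432-cc64bb4a290d",
        "network": {"subnets": [
            {"cidr": "10.100.0.0/28",
             "ips": [{"address": "10.100.0.13", "floating_ips": []}]},
        ]},
    }]
    for port in network_info:
        for subnet in port["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(port["id"], ip["address"],
                      [f["address"] for f in ip["floating_ips"]])
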
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.633 244018 DEBUG nova.network.neutron [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.658 244018 INFO nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 1.79 seconds to deallocate network for instance.
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.740 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.741 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:35 compute-0 nova_compute[244014]: 2026-02-25 12:54:35.799 244018 DEBUG oslo_concurrency.processutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:35 compute-0 ceph-mon[76335]: pgmap v2220: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 12:54:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1009173585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154605364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.442 244018 DEBUG oslo_concurrency.processutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.450 244018 DEBUG nova.compute.provider_tree [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.482 244018 DEBUG nova.scheduler.client.report [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.521 244018 DEBUG nova.compute.manager [req-32145513-7911-437b-9991-0b4d71fd81ea req-bfeff6a5-6c90-45dc-ae69-a8672fbc49b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-deleted-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.561 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.628 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.653 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.654 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.663 244018 INFO nova.scheduler.client.report [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 3da53ea4-e42d-4bfb-b917-d9c81fd3652f
Feb 25 12:54:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.697 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.698 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.698 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.699 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.699 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.701 244018 INFO nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Terminating instance
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.703 244018 DEBUG nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:54:36 compute-0 kernel: tapf66ec19f-11 (unregistering): left promiscuous mode
Feb 25 12:54:36 compute-0 NetworkManager[49836]: <info>  [1772024076.7581] device (tapf66ec19f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:54:36 compute-0 ovn_controller[147040]: 2026-02-25T12:54:36Z|01415|binding|INFO|Releasing lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc from this chassis (sb_readonly=0)
Feb 25 12:54:36 compute-0 ovn_controller[147040]: 2026-02-25T12:54:36Z|01416|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc down in Southbound
Feb 25 12:54:36 compute-0 ovn_controller[147040]: 2026-02-25T12:54:36Z|01417|binding|INFO|Removing iface tapf66ec19f-11 ovn-installed in OVS
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.768 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.774 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], port_security=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fed9:beae/64 2001:db8::f816:3eff:fed9:beae/64', 'neutron:device_id': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.775 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd unbound from our chassis
Feb 25 12:54:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.776 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:54:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c946b269-b95f-496f-8d61-6b252b34b636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.778 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd namespace which is not needed anymore
Feb 25 12:54:36 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Feb 25 12:54:36 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 13.551s CPU time.
Feb 25 12:54:36 compute-0 systemd-machined[210048]: Machine qemu-162-instance-00000082 terminated.
Feb 25 12:54:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1154605364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:36 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : haproxy version is 2.8.14-c23fe91
Feb 25 12:54:36 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : path to executable is /usr/sbin/haproxy
Feb 25 12:54:36 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [WARNING]  (362613) : Exiting Master process...
Feb 25 12:54:36 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [ALERT]    (362613) : Current worker (362615) exited with code 143 (Terminated)
Feb 25 12:54:36 compute-0 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [WARNING]  (362613) : All workers exited. Exiting... (0)
Feb 25 12:54:36 compute-0 systemd[1]: libpod-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope: Deactivated successfully.
Feb 25 12:54:36 compute-0 podman[365311]: 2026-02-25 12:54:36.925590636 +0000 UTC m=+0.048250532 container died fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.948 244018 INFO nova.virt.libvirt.driver [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance destroyed successfully.
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.949 244018 DEBUG nova.objects.instance [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57-userdata-shm.mount: Deactivated successfully.
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.964 244018 DEBUG nova.virt.libvirt.vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:53:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:53:32Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.965 244018 DEBUG nova.network.os_vif_util [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a865f5da51854f8f958183be1ede03192965b68e57f97054deb9abb51f58a8-merged.mount: Deactivated successfully.
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.966 244018 DEBUG nova.network.os_vif_util [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.969 244018 DEBUG os_vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
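Annotation: the unplug entry point logged above is the public os-vif API. A minimal sketch of the same call, rebuilt from the VIFOpenVSwitch repr in the log; any field not shown there (and the InstanceInfo contents beyond the instance UUID and hostname) is an illustrative assumption.

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers versioned objects and loads plugins, including 'ovs'

    net = network.Network(id='88562c34-222a-439a-b444-9e6f8a6d70cd', bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='f66ec19f-119c-4e9b-ba57-c8e2d850f5dc',
        address='fa:16:3e:d9:be:ae',
        bridge_name='br-int',
        vif_name='tapf66ec19f-11',
        plugin='ovs',  # selects which os-vif plugin handles the port
        network=net,
    )
    inst = instance_info.InstanceInfo(
        uuid='03d948e4-e7cb-45ea-bf63-7ab363b4d46e',
        name='tempest-testgettingaddress-server-1355608326',  # hostname from the log
    )
    os_vif.unplug(ovs_vif, inst)  # the call at os_vif/__init__.py:109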
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.972 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66ec19f-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
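Annotation: the DelPortCommand above is ovsdbapp's Open_vSwitch-schema command, committed as a one-command transaction against the local OVS database. A standalone sketch; the socket path and timeout are assumptions, since the log does not show how the IDL connection was created.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local ovsdb-server endpoint
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # equivalent of DelPortCommand(port=tapf66ec19f-11, bridge=br-int, if_exists=True)
    api.del_port('tapf66ec19f-11', bridge='br-int', if_exists=True).execute(
        check_error=True)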
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:36 compute-0 nova_compute[244014]: 2026-02-25 12:54:36.977 244018 INFO os_vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11')
Feb 25 12:54:36 compute-0 podman[365311]: 2026-02-25 12:54:36.978760056 +0000 UTC m=+0.101419932 container cleanup fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:54:36 compute-0 systemd[1]: libpod-conmon-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope: Deactivated successfully.
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.033 244018 DEBUG nova.compute.manager [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.034 244018 DEBUG nova.compute.manager [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.034 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.035 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.035 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:54:37 compute-0 podman[365356]: 2026-02-25 12:54:37.064413132 +0000 UTC m=+0.061984739 container remove fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[faf2f217-4147-44c5-ba85-fc86f170242a]: (4, ('Wed Feb 25 12:54:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd (fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57)\nfea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57\nWed Feb 25 12:54:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd (fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57)\nfea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.070 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3a510b-df9a-46ca-9b51-d7a9e25d1a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
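Annotation: the "privsep: reply[...]" lines are oslo.privsep round-trips: the unprivileged agent sends a call to its root daemon, which ships the return value back over the channel. A sketch of the pattern under the usual neutron-style setup; the context name, capabilities, and function body are assumptions, and in real code the context lives in the agent's own package so the daemon can re-import it.

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        __name__, cfg_section='privsep', pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN])

    @default.entrypoint
    def stop_metadata_proxy(network_id):
        # executes inside the root privsep daemon; the return value is what
        # the agent logs as "privsep: reply[...]: (4, ...)" above
        return True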
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.071 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:37 compute-0 kernel: tap88562c34-20: left promiscuous mode
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.078 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[978ea447-d7c3-422a-b867-7d8a87f2fd68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8507d36e-fabd-47ed-bee8-bc14b4b2f275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.097 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0baf10bd-ea9a-4fbc-9bed-b98b3ea6f156]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.112 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bba14a-50ad-497d-a178-b37e80f96ccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598021, 'reachable_time': 26883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365383, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.114 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:54:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d88562c34\x2d222a\x2d439a\x2db444\x2d9e6f8a6d70cd.mount: Deactivated successfully.
Feb 25 12:54:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.115 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d125e578-9eaa-4216-9eb9-a8c80ec7200c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
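Annotation: with the haproxy container gone, the agent deletes the ovnmeta- namespace through its privsep daemon. What remove_netns boils down to, sketched directly with pyroute2 (the library neutron's ip_lib wraps); running it needs the same privileges the daemon holds.

    from pyroute2 import netns

    NS = 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd'
    if NS in netns.listnetns():
        # unlinks /run/netns/<NS>; systemd then reports the bind mount
        # as "Deactivated successfully", as in the line above
        netns.remove(NS)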
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.243 244018 INFO nova.virt.libvirt.driver [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deleting instance files /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_del
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.244 244018 INFO nova.virt.libvirt.driver [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deletion of /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_del complete
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.313 244018 INFO nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.313 244018 DEBUG oslo.service.loopingcall [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.314 244018 DEBUG nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:54:37 compute-0 nova_compute[244014]: 2026-02-25 12:54:37.314 244018 DEBUG nova.network.neutron [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:54:37 compute-0 ceph-mon[76335]: pgmap v2221: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 12:54:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 199 op/s
Feb 25 12:54:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:38 compute-0 podman[365385]: 2026-02-25 12:54:38.73090492 +0000 UTC m=+0.074312577 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 25 12:54:38 compute-0 podman[365386]: 2026-02-25 12:54:38.769658363 +0000 UTC m=+0.110462297 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 25 12:54:38 compute-0 nova_compute[244014]: 2026-02-25 12:54:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:38 compute-0 nova_compute[244014]: 2026-02-25 12:54:38.921 244018 DEBUG nova.network.neutron [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.016 244018 INFO nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 1.70 seconds to deallocate network for instance.
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.170 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
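Annotation: the Acquiring/acquired/released triplets come from oslo.concurrency's lockutils, which nova uses to serialize event handling per instance. A minimal sketch of the same pattern; the function body is illustrative only.

    from oslo_concurrency import lockutils

    @lockutils.synchronized('03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events')
    def _pop_event():
        # pop a pending network-vif event for this instance, if any
        return None

    _pop_event()  # each call emits the acquire/release DEBUG pair seen above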
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 WARNING nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state deleted and task_state None.
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 WARNING nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state deleted and task_state None.
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-deleted-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.178 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.178 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.217 244018 DEBUG oslo_concurrency.processutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344282572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.711 244018 DEBUG oslo_concurrency.processutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
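Annotation: the resource tracker shells out to `ceph df` through oslo.concurrency's processutils, which produces the "Running cmd"/"returned" pair above. A sketch of the same call; reading the 'stats' key follows the standard `ceph df --format=json` layout.

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    cluster_stats = json.loads(out)['stats']  # total_bytes, total_avail_bytes, ...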
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.718 244018 DEBUG nova.compute.provider_tree [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.741 244018 DEBUG nova.scheduler.client.report [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
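Annotation: Placement capacity per resource class is (total - reserved) * allocation_ratio, so the inventory above advertises 32 vCPUs, 7167 MB of RAM, and about 52 GB of disk. Worked arithmetic:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2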
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.775 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.832 244018 INFO nova.scheduler.client.report [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e
Feb 25 12:54:39 compute-0 ceph-mon[76335]: pgmap v2222: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 199 op/s
Feb 25 12:54:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2344282572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:39 compute-0 nova_compute[244014]: 2026-02-25 12:54:39.953 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:40 compute-0 nova_compute[244014]: 2026-02-25 12:54:40.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:40 compute-0 nova_compute[244014]: 2026-02-25 12:54:40.473 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:54:40 compute-0 nova_compute[244014]: 2026-02-25 12:54:40.474 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:54:40 compute-0 nova_compute[244014]: 2026-02-25 12:54:40.496 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:54:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 154 op/s
Feb 25 12:54:40 compute-0 nova_compute[244014]: 2026-02-25 12:54:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:41 compute-0 ceph-mon[76335]: pgmap v2223: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 154 op/s
Feb 25 12:54:41 compute-0 nova_compute[244014]: 2026-02-25 12:54:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:41 compute-0 nova_compute[244014]: 2026-02-25 12:54:41.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 157 op/s
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6317816538884318e-05 of space, bias 1.0, pg target 0.004895344961665295 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941881055236368 of space, bias 1.0, pg target 0.748256431657091 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.383268670923365e-06 of space, bias 4.0, pg target 0.0016599224051080381 quantized to 16 (current 16)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:54:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
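Annotation: the pg_autoscaler targets above reconstruct as space_ratio x bias x (OSD count x mon_target_pg_per_osd). The factor 300 below is an assumption: 3 OSDs (consistent with the 60 GiB cluster in the pgmap lines) times the default of 100 PGs per OSD. The result is then rounded to a power of two, and pg_num is left alone while target and current are within the autoscaler's 3x threshold, hence "quantized to 32 (current 32)".

    pools = {
        'images': (0.0024941881055236368, 1.0),
        'cephfs.cephfs.meta': (1.383268670923365e-06, 4.0),
    }
    for name, (space_ratio, bias) in pools.items():
        print(name, space_ratio * bias * 300)
    # reproduces the logged pg targets: 0.748256431657091 for 'images'
    # and 0.0016599224051080381 for 'cephfs.cephfs.meta', before quantization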
Feb 25 12:54:43 compute-0 nova_compute[244014]: 2026-02-25 12:54:43.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:43 compute-0 nova_compute[244014]: 2026-02-25 12:54:43.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:43 compute-0 nova_compute[244014]: 2026-02-25 12:54:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:54:43 compute-0 nova_compute[244014]: 2026-02-25 12:54:43.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:54:43 compute-0 ceph-mon[76335]: pgmap v2224: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 157 op/s
Feb 25 12:54:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:45 compute-0 nova_compute[244014]: 2026-02-25 12:54:45.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:45 compute-0 ceph-mon[76335]: pgmap v2225: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:46 compute-0 nova_compute[244014]: 2026-02-25 12:54:46.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:54:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:54:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:54:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:54:47 compute-0 ceph-mon[76335]: pgmap v2226: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:54:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:54:47 compute-0 nova_compute[244014]: 2026-02-25 12:54:47.968 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024072.9669049, 945e5549-40d1-4eae-8179-84ad1d751957 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:47 compute-0 nova_compute[244014]: 2026-02-25 12:54:47.969 244018 INFO nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Stopped (Lifecycle Event)
Feb 25 12:54:47 compute-0 nova_compute[244014]: 2026-02-25 12:54:47.993 244018 DEBUG nova.compute.manager [None req-ebc94e2d-99f0-402b-9e96-7868500dfe3f - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:48 compute-0 nova_compute[244014]: 2026-02-25 12:54:48.449 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024073.4479399, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:48 compute-0 nova_compute[244014]: 2026-02-25 12:54:48.450 244018 INFO nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Stopped (Lifecycle Event)
Feb 25 12:54:48 compute-0 nova_compute[244014]: 2026-02-25 12:54:48.474 244018 DEBUG nova.compute.manager [None req-f428ee07-986c-424d-815d-035e8ce1c2f6 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:49 compute-0 ceph-mon[76335]: pgmap v2227: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 12:54:50 compute-0 nova_compute[244014]: 2026-02-25 12:54:50.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 12:54:51 compute-0 ceph-mon[76335]: pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 12:54:51 compute-0 nova_compute[244014]: 2026-02-25 12:54:51.946 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024076.9453082, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:54:51 compute-0 nova_compute[244014]: 2026-02-25 12:54:51.947 244018 INFO nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Stopped (Lifecycle Event)
Feb 25 12:54:51 compute-0 nova_compute[244014]: 2026-02-25 12:54:51.973 244018 DEBUG nova.compute.manager [None req-c8f4a569-4f5e-4ca4-925d-6e2565d76284 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:54:51 compute-0 nova_compute[244014]: 2026-02-25 12:54:51.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:52 compute-0 sudo[365454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:54:52 compute-0 sudo[365454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:52 compute-0 sudo[365454]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:52 compute-0 sudo[365479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:54:52 compute-0 sudo[365479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 12:54:52 compute-0 sudo[365479]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:54:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:54:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:54:53 compute-0 sudo[365535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:54:53 compute-0 sudo[365535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:53 compute-0 sudo[365535]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:53 compute-0 sudo[365560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:54:53 compute-0 sudo[365560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.413155612 +0000 UTC m=+0.058761099 container create b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:54:53 compute-0 systemd[1]: Started libpod-conmon-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope.
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.390084371 +0000 UTC m=+0.035689948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.498438557 +0000 UTC m=+0.144044064 container init b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.508281375 +0000 UTC m=+0.153886882 container start b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.511973709 +0000 UTC m=+0.157579286 container attach b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 12:54:53 compute-0 stoic_mcnulty[365614]: 167 167
Feb 25 12:54:53 compute-0 systemd[1]: libpod-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope: Deactivated successfully.
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.51660177 +0000 UTC m=+0.162207247 container died b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48dc76d065f033bce1c657b79cf08cb6b54423ed454a606ff2d04b4bd7281f9f-merged.mount: Deactivated successfully.
Feb 25 12:54:53 compute-0 podman[365597]: 2026-02-25 12:54:53.563469142 +0000 UTC m=+0.209074649 container remove b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:54:53 compute-0 systemd[1]: libpod-conmon-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope: Deactivated successfully.
Feb 25 12:54:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:53 compute-0 podman[365638]: 2026-02-25 12:54:53.728069225 +0000 UTC m=+0.046957806 container create 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:54:53 compute-0 systemd[1]: Started libpod-conmon-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope.
Feb 25 12:54:53 compute-0 podman[365638]: 2026-02-25 12:54:53.707302659 +0000 UTC m=+0.026191240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:53 compute-0 podman[365638]: 2026-02-25 12:54:53.86792685 +0000 UTC m=+0.186815511 container init 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 12:54:53 compute-0 podman[365638]: 2026-02-25 12:54:53.877172821 +0000 UTC m=+0.196061422 container start 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:54:53 compute-0 podman[365638]: 2026-02-25 12:54:53.881855983 +0000 UTC m=+0.200744614 container attach 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:54:53 compute-0 ceph-mon[76335]: pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:54:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:54:54 compute-0 quirky_pasteur[365652]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:54:54 compute-0 quirky_pasteur[365652]: --> All data devices are unavailable
Feb 25 12:54:54 compute-0 systemd[1]: libpod-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope: Deactivated successfully.
Feb 25 12:54:54 compute-0 podman[365638]: 2026-02-25 12:54:54.354147846 +0000 UTC m=+0.673036427 container died 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 12:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179-merged.mount: Deactivated successfully.
Feb 25 12:54:54 compute-0 podman[365638]: 2026-02-25 12:54:54.40359287 +0000 UTC m=+0.722481481 container remove 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:54:54 compute-0 systemd[1]: libpod-conmon-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope: Deactivated successfully.
Feb 25 12:54:54 compute-0 sudo[365560]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:54 compute-0 sudo[365685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:54:54 compute-0 sudo[365685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:54 compute-0 sudo[365685]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:54 compute-0 sudo[365710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:54:54 compute-0 sudo[365710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:54 compute-0 podman[365749]: 2026-02-25 12:54:54.914274226 +0000 UTC m=+0.055272120 container create b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:54:54 compute-0 systemd[1]: Started libpod-conmon-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope.
Feb 25 12:54:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:54 compute-0 podman[365749]: 2026-02-25 12:54:54.891930636 +0000 UTC m=+0.032928540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:55 compute-0 podman[365749]: 2026-02-25 12:54:55.00232016 +0000 UTC m=+0.143318114 container init b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 12:54:55 compute-0 podman[365749]: 2026-02-25 12:54:55.010299655 +0000 UTC m=+0.151297549 container start b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 12:54:55 compute-0 cool_shaw[365765]: 167 167
Feb 25 12:54:55 compute-0 systemd[1]: libpod-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope: Deactivated successfully.
Feb 25 12:54:55 compute-0 podman[365749]: 2026-02-25 12:54:55.01615455 +0000 UTC m=+0.157152494 container attach b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:54:55 compute-0 podman[365749]: 2026-02-25 12:54:55.017008484 +0000 UTC m=+0.158006398 container died b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-da148ca0941bbe5b12272c8290d136616501c01c2dbf42759bc1d5927570850d-merged.mount: Deactivated successfully.
Feb 25 12:54:55 compute-0 podman[365749]: 2026-02-25 12:54:55.062895758 +0000 UTC m=+0.203893642 container remove b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:54:55 compute-0 systemd[1]: libpod-conmon-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope: Deactivated successfully.
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.271845043 +0000 UTC m=+0.071039765 container create 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:55 compute-0 systemd[1]: Started libpod-conmon-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope.
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.243554125 +0000 UTC m=+0.042748887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.373285894 +0000 UTC m=+0.172480596 container init 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.387768113 +0000 UTC m=+0.186962815 container start 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.391813667 +0000 UTC m=+0.191008339 container attach 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:54:55 compute-0 nova_compute[244014]: 2026-02-25 12:54:55.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:55 compute-0 affectionate_tu[365806]: {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     "0": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "devices": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "/dev/loop3"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             ],
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_name": "ceph_lv0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_size": "21470642176",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "name": "ceph_lv0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "tags": {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_name": "ceph",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.crush_device_class": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.encrypted": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.objectstore": "bluestore",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_id": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.vdo": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.with_tpm": "0"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             },
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "vg_name": "ceph_vg0"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         }
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     ],
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     "1": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "devices": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "/dev/loop4"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             ],
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_name": "ceph_lv1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_size": "21470642176",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "name": "ceph_lv1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "tags": {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_name": "ceph",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.crush_device_class": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.encrypted": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.objectstore": "bluestore",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_id": "1",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.vdo": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.with_tpm": "0"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             },
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "vg_name": "ceph_vg1"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         }
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     ],
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     "2": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "devices": [
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "/dev/loop5"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             ],
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_name": "ceph_lv2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_size": "21470642176",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "name": "ceph_lv2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "tags": {
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.cluster_name": "ceph",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.crush_device_class": "",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.encrypted": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.objectstore": "bluestore",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osd_id": "2",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.vdo": "0",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:                 "ceph.with_tpm": "0"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             },
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "type": "block",
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:             "vg_name": "ceph_vg2"
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:         }
Feb 25 12:54:55 compute-0 affectionate_tu[365806]:     ]
Feb 25 12:54:55 compute-0 affectionate_tu[365806]: }
Feb 25 12:54:55 compute-0 systemd[1]: libpod-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope: Deactivated successfully.
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.71132806 +0000 UTC m=+0.510522772 container died 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 12:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125-merged.mount: Deactivated successfully.
Feb 25 12:54:55 compute-0 podman[365789]: 2026-02-25 12:54:55.770455138 +0000 UTC m=+0.569649860 container remove 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:54:55 compute-0 systemd[1]: libpod-conmon-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope: Deactivated successfully.
Feb 25 12:54:55 compute-0 sudo[365710]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:55 compute-0 sudo[365826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:54:55 compute-0 sudo[365826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:55 compute-0 sudo[365826]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:55 compute-0 ceph-mon[76335]: pgmap v2230: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:55 compute-0 sudo[365851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:54:55 compute-0 sudo[365851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.25479763 +0000 UTC m=+0.046603235 container create d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:54:56 compute-0 systemd[1]: Started libpod-conmon-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope.
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.230860005 +0000 UTC m=+0.022665650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.345214241 +0000 UTC m=+0.137019876 container init d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.35333143 +0000 UTC m=+0.145137035 container start d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:54:56 compute-0 confident_jang[365903]: 167 167
Feb 25 12:54:56 compute-0 systemd[1]: libpod-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope: Deactivated successfully.
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.358178027 +0000 UTC m=+0.149983632 container attach d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.358676261 +0000 UTC m=+0.150481866 container died d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ca54d39be380edf7bf8d692a879278a68bcf1c992bcb3100f77c9462e000f2-merged.mount: Deactivated successfully.
Feb 25 12:54:56 compute-0 podman[365887]: 2026-02-25 12:54:56.406134869 +0000 UTC m=+0.197940464 container remove d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:54:56 compute-0 systemd[1]: libpod-conmon-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope: Deactivated successfully.
Feb 25 12:54:56 compute-0 podman[365926]: 2026-02-25 12:54:56.594830391 +0000 UTC m=+0.056166985 container create 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 12:54:56 compute-0 systemd[1]: Started libpod-conmon-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope.
Feb 25 12:54:56 compute-0 podman[365926]: 2026-02-25 12:54:56.570559267 +0000 UTC m=+0.031895911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:54:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:54:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:56 compute-0 podman[365926]: 2026-02-25 12:54:56.697855627 +0000 UTC m=+0.159192281 container init 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:54:56 compute-0 podman[365926]: 2026-02-25 12:54:56.711103891 +0000 UTC m=+0.172440465 container start 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:54:56 compute-0 podman[365926]: 2026-02-25 12:54:56.714913239 +0000 UTC m=+0.176249913 container attach 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:54:56 compute-0 nova_compute[244014]: 2026-02-25 12:54:56.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:54:57 compute-0 lvm[366021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:54:57 compute-0 lvm[366021]: VG ceph_vg0 finished
Feb 25 12:54:57 compute-0 lvm[366022]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:54:57 compute-0 lvm[366022]: VG ceph_vg1 finished
Feb 25 12:54:57 compute-0 lvm[366024]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:54:57 compute-0 lvm[366024]: VG ceph_vg2 finished
Feb 25 12:54:57 compute-0 cool_herschel[365943]: {}
Feb 25 12:54:57 compute-0 systemd[1]: libpod-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Deactivated successfully.
Feb 25 12:54:57 compute-0 podman[365926]: 2026-02-25 12:54:57.508306669 +0000 UTC m=+0.969643263 container died 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:54:57 compute-0 systemd[1]: libpod-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Consumed 1.125s CPU time.
Feb 25 12:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324-merged.mount: Deactivated successfully.
Feb 25 12:54:57 compute-0 podman[365926]: 2026-02-25 12:54:57.56079876 +0000 UTC m=+1.022135354 container remove 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:54:57 compute-0 systemd[1]: libpod-conmon-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Deactivated successfully.
Feb 25 12:54:57 compute-0 sudo[365851]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:54:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:54:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:57 compute-0 sudo[366040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:54:57 compute-0 sudo[366040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:54:57 compute-0 sudo[366040]: pam_unix(sudo:session): session closed for user root
Feb 25 12:54:57 compute-0 ceph-mon[76335]: pgmap v2231: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.460 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.461 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.489 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.578 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.579 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
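
The "Acquiring lock" / "Lock ... acquired" / "Lock ... released" DEBUG triplets in these entries are all emitted by oslo.concurrency's lockutils helper. A minimal sketch of that pattern, assuming only that oslo.concurrency is installed (illustrative; not Nova's actual ResourceTracker code):

    from oslo_concurrency import lockutils

    def instance_claim(instance_uuid):
        # lockutils.lock() is a context manager; the acquire/release DEBUG
        # lines seen in this log are logged from inside lockutils itself.
        with lockutils.lock("compute_resources"):
            # placeholder for claiming CPU/RAM/disk on the resource tracker
            print(f"claiming resources for {instance_uuid}")

    instance_claim("4e93a158-a4bd-41a0-8fe0-52b2a069c409")
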
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.588 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.589 244018 INFO nova.compute.claims [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:54:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:54:58 compute-0 nova_compute[244014]: 2026-02-25 12:54:58.722 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:54:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1161396759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.306 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
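
The "Running cmd (subprocess)" / "returned: 0 in 0.583s" pair above is oslo.concurrency's processutils wrapper around a child process. A sketch of making the same `ceph df` call and consuming its JSON, assuming a reachable cluster and the client.openstack keyring (key names follow the `ceph df --format=json` output):

    import json
    from oslo_concurrency import processutils

    # processutils.execute returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit status.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'])  # free bytes cluster-wide
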
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.314 244018 DEBUG nova.compute.provider_tree [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.332 244018 DEBUG nova.scheduler.client.report [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.374 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.374 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.444 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.445 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.469 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.514 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.629 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.631 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.632 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating image(s)
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.661 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.693 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.722 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.727 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.796 244018 DEBUG nova.policy [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.808 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
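
The qemu-img probe above runs under oslo.concurrency's prlimit wrapper (`--as=1073741824 --cpu=30`) so that inspecting a malformed image cannot consume unbounded memory or CPU. A sketch of the same guard via the processutils API, assuming oslo.concurrency; the limits mirror the flags in the logged command:

    import json
    from oslo_concurrency import processutils

    # Cap the probe at 1 GiB of address space and 30 s of CPU time.
    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                        cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['format'])  # e.g. 'raw' for a flattened base image
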
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.809 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.841 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:54:59 compute-0 nova_compute[244014]: 2026-02-25 12:54:59.846 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:54:59 compute-0 ceph-mon[76335]: pgmap v2232: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:54:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1161396759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.144 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.225 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
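
The sequence just logged imports the cached base image into the vms pool and then grows it to the flavor's 1 GiB root disk (1073741824 bytes). The same steps can be reproduced by hand with the rbd CLI; a sketch, assuming the client.openstack keyring and the vms pool exist (Nova itself resizes through librbd rather than the CLI):

    import subprocess

    BASE = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    IMAGE = '4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk'
    AUTH = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Upload the base file as a format-2 RBD image (mirrors the logged cmd).
    subprocess.run(['rbd', 'import', '--pool', 'vms', BASE, IMAGE,
                    '--image-format=2', *AUTH], check=True)
    # Grow it to 1 GiB, the m1.nano root_gb seen later in the guest XML.
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1G',
                    IMAGE, *AUTH], check=True)
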
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.324 244018 DEBUG nova.objects.instance [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.346 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.346 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Ensure instance console log exists: /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.347 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.347 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.348 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:00 compute-0 nova_compute[244014]: 2026-02-25 12:55:00.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:01 compute-0 ceph-mon[76335]: pgmap v2233: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 12:55:01 compute-0 nova_compute[244014]: 2026-02-25 12:55:01.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:02.077 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:02 compute-0 nova_compute[244014]: 2026-02-25 12:55:02.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:02.080 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:55:02 compute-0 nova_compute[244014]: 2026-02-25 12:55:02.613 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Successfully created port: 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:55:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:03 compute-0 ceph-mon[76335]: pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:05 compute-0 nova_compute[244014]: 2026-02-25 12:55:05.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:05 compute-0 ceph-mon[76335]: pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.170 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Successfully updated port: 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.290 244018 DEBUG nova.compute.manager [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.290 244018 DEBUG nova.compute.manager [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.291 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.442 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:55:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.526 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:91:36 10.100.0.2 2001:db8::f816:3eff:fe31:9136'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:9136/64', 'neutron:device_id': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d6ceb7cb-a48f-47c3-acb6-58d61268f8c7) old=Port_Binding(mac=['fa:16:3e:31:91:36 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.527 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b updated
Feb 25 12:55:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.529 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:55:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9838d597-5dda-452f-b79e-505935356654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:06 compute-0 nova_compute[244014]: 2026-02-25 12:55:06.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.525 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.545 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.545 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance network_info: |[{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.546 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.547 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.552 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start _get_guest_xml network_info=[{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.557 244018 WARNING nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.563 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.564 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.573 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.573 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.574 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.574 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.575 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.577 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.577 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.579 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:55:07 compute-0 nova_compute[244014]: 2026-02-25 12:55:07.584 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:07 compute-0 ceph-mon[76335]: pgmap v2236: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196340221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.185 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.221 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.226 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885824017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.793 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
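
These `ceph mon dump` calls are how the driver learns the monitor endpoints that appear as `<host>` elements inside the rbd `<disk>` sources of the guest XML below. A sketch of pulling those endpoints out of the JSON, assuming the legacy "addr" field ("host:port/nonce") present in `ceph mon dump --format=json`:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    # Strip the trailing "/nonce" to leave host:port pairs for libvirt.
    endpoints = [m['addr'].split('/')[0] for m in json.loads(out)['mons']]
    print(endpoints)  # e.g. ['192.168.122.100:6789']
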
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.795 244018 DEBUG nova.virt.libvirt.vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.796 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.797 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.798 244018 DEBUG nova.objects.instance [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.817 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <uuid>4e93a158-a4bd-41a0-8fe0-52b2a069c409</uuid>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <name>instance-00000086</name>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-308904214</nova:name>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:55:07</nova:creationTime>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <nova:port uuid="8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d">
Feb 25 12:55:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <system>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="serial">4e93a158-a4bd-41a0-8fe0-52b2a069c409</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="uuid">4e93a158-a4bd-41a0-8fe0-52b2a069c409</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </system>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <os>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </os>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <features>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </features>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk">
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config">
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e1:08:7c"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <target dev="tap8d4ab3e5-d9"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/console.log" append="off"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <video>
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </video>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:55:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:55:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:55:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:55:08 compute-0 nova_compute[244014]: </domain>
Feb 25 12:55:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
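[annotation] An aside for anyone tracing this spawn: the domain XML that _get_guest_xml just emitted is easy to inspect offline. A minimal sketch, Python stdlib only; it assumes the <domain> document above was saved to guest.xml (a filename chosen here for illustration), and the nova: namespace URI is taken verbatim from the <nova:instance> element in the log:

```python
# Sketch: pull a few fields out of the Nova-generated libvirt XML above.
import xml.etree.ElementTree as ET

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.parse("guest.xml").getroot()
print("uuid:  ", root.findtext("uuid"))
print("name:  ", root.findtext("name"))
print("memory:", root.findtext("memory"), "KiB")  # 131072 KiB == the 128 MiB flavor
print("flavor:", root.find(".//nova:flavor", NOVA_NS).get("name"))
for disk in root.iter("disk"):
    src, tgt = disk.find("source"), disk.find("target")
    print(f"disk {tgt.get('dev')} ({disk.get('device')}): "
          f"{src.get('protocol')}:{src.get('name')}")
```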
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.818 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Preparing to wait for external event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
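[annotation] The acquire/release pair above is oslo.concurrency's named-lock pattern: Nova registers the event waiter under the "<instance-uuid>-events" lock before plugging the VIF, so the network-vif-plugged notification cannot race past it. A rough sketch of the same pattern; only the lock-name convention is taken from the log, the body is illustrative:

```python
# Sketch of the named-lock pattern seen above (oslo.concurrency).
from oslo_concurrency import lockutils

instance_uuid = "4e93a158-a4bd-41a0-8fe0-52b2a069c409"

with lockutils.lock(f"{instance_uuid}-events"):
    # Critical section: record the pending network-vif-plugged event
    # before os-vif plugs the port, mirroring the ordering in this log.
    pass
```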
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.820 244018 DEBUG nova.virt.libvirt.vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.821 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.821 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.822 244018 DEBUG os_vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.824 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.827 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d4ab3e5-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d4ab3e5-d9, col_values=(('external_ids', {'iface-id': '8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:08:7c', 'vm-uuid': '4e93a158-a4bd-41a0-8fe0-52b2a069c409'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
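[annotation] For reference, the two-command transaction above (AddPortCommand plus DbSetCommand) is what os-vif drives through ovsdbapp. The roughly equivalent ovs-vsctl invocation, shown as a sketch with the port name and external_ids values copied from the logged transaction:

```python
# Sketch: CLI equivalent of the AddPortCommand + DbSetCommand transaction.
import subprocess

port = "tap8d4ab3e5-d9"
subprocess.run([
    "ovs-vsctl",
    "--", "--may-exist", "add-port", "br-int", port,
    "--", "set", "Interface", port,
    "external_ids:iface-id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:e1:08:7c",
    "external_ids:vm-uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409",
], check=True)
```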
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:08 compute-0 NetworkManager[49836]: <info>  [1772024108.8318] manager: (tap8d4ab3e5-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.839 244018 INFO os_vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9')
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.902 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:e1:08:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Using config drive
Feb 25 12:55:08 compute-0 nova_compute[244014]: 2026-02-25 12:55:08.930 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:08 compute-0 podman[366318]: 2026-02-25 12:55:08.938796775 +0000 UTC m=+0.056611878 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:55:08 compute-0 podman[366319]: 2026-02-25 12:55:08.967528746 +0000 UTC m=+0.084394952 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 12:55:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3196340221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1885824017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.081 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.510 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating config drive at /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.515 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h4jzot2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.653 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h4jzot2" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.692 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.697 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.740 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.741 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.777 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.860 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.861 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deleting local config drive /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config because it was imported into RBD.
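[annotation] The config-drive sequence above is self-contained: mkisofs builds a config-2 ISO from a staging directory, rbd import copies it into the vms pool, and the local file is then removed because the RBD copy is authoritative. A condensed sketch of that flow; the commands mirror the logged invocations, while the staging directory name is illustrative and the -publisher/-quiet flags are omitted for brevity:

```python
# Sketch: config drive build -> RBD import -> local cleanup, as logged above.
import os
import subprocess

uuid = "4e93a158-a4bd-41a0-8fe0-52b2a069c409"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"

subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                "/tmp/configdrive-staging"],  # staging dir is illustrative
               check=True)
subprocess.run(["rbd", "import", "--pool", "vms", iso,
                f"{uuid}_disk.config", "--image-format=2",
                "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
               check=True)
os.remove(iso)  # the RBD copy is authoritative from here on
```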
Feb 25 12:55:09 compute-0 kernel: tap8d4ab3e5-d9: entered promiscuous mode
Feb 25 12:55:09 compute-0 NetworkManager[49836]: <info>  [1772024109.9244] manager: (tap8d4ab3e5-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:09 compute-0 ovn_controller[147040]: 2026-02-25T12:55:09Z|01418|binding|INFO|Claiming lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for this chassis.
Feb 25 12:55:09 compute-0 ovn_controller[147040]: 2026-02-25T12:55:09Z|01419|binding|INFO|8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d: Claiming fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.952 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:08:7c 10.100.0.9'], port_security=['fa:16:3e:e1:08:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e93a158-a4bd-41a0-8fe0-52b2a069c409', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0c08d4b-4169-4e97-8a55-a9553174491d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14ad45b1-188a-47f0-9d37-00198e9d57fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.953 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d in datapath 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf bound to our chassis
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.955 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf
Feb 25 12:55:09 compute-0 systemd-machined[210048]: New machine qemu-166-instance-00000086.
Feb 25 12:55:09 compute-0 systemd-udevd[366434]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0811b6ee-78ee-45ed-af02-c9759abf7cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.968 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77884df7-11 in ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
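[annotation] The provisioning step above creates a veth pair with the -11 end moved into the ovnmeta- namespace (the agent does this through pyroute2 under privsep, which is what the surrounding privsep replies are). The iproute2 equivalent, as an illustrative sketch with the interface and namespace names copied from the log:

```python
# Sketch: iproute2 equivalent of the logged veth provisioning.
import subprocess

ns = "ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf"
outer, inner = "tap77884df7-10", "tap77884df7-11"

def run(*cmd):
    subprocess.run(cmd, check=True)

run("ip", "netns", "add", ns)  # errors if the namespace already exists
run("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
run("ip", "link", "set", inner, "netns", ns)  # inner end into the namespace
run("ip", "link", "set", outer, "up")
run("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
```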
Feb 25 12:55:09 compute-0 NetworkManager[49836]: <info>  [1772024109.9740] device (tap8d4ab3e5-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:55:09 compute-0 NetworkManager[49836]: <info>  [1772024109.9750] device (tap8d4ab3e5-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:55:09 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000086.
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.971 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77884df7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.971 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f19be0bb-0839-45ce-875c-10e6a9974c43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.974 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[62904fb7-7757-4a48-af90-268bf7bb420f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:09 compute-0 ovn_controller[147040]: 2026-02-25T12:55:09Z|01420|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d ovn-installed in OVS
Feb 25 12:55:09 compute-0 ovn_controller[147040]: 2026-02-25T12:55:09Z|01421|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d up in Southbound
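[annotation] To confirm what ovn-controller just logged, the southbound Port_Binding row can be queried directly. A sketch; it assumes ovn-sbctl is installed where the SB database is reachable, and output parsing is left out since formatting varies by version:

```python
# Sketch: verify the southbound binding for the lport claimed above.
import subprocess

lport = "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d"
out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={lport}"],
    check=True, capture_output=True, text=True).stdout
print(out)  # expect the compute-0 chassis row and up=true after this log line
```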
Feb 25 12:55:09 compute-0 nova_compute[244014]: 2026-02-25 12:55:09.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.986 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5801bdb8-cb8f-4012-91aa-0e469afafd16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:09 compute-0 ceph-mon[76335]: pgmap v2237: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[725d8f68-194c-44f3-94bf-4187cad5d686]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.023 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d598f095-8fd3-4dd7-a1df-9d66453ecebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 systemd-udevd[366437]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:55:10 compute-0 NetworkManager[49836]: <info>  [1772024110.0303] manager: (tap77884df7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8749b5-b2b0-4b4e-a932-af304d71e390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.053 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4f06a8c9-7330-47a8-b9bb-4d177dacbabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.058 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8db995-90d3-43b1-94f8-4eddba65eb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 NetworkManager[49836]: <info>  [1772024110.0797] device (tap77884df7-10): carrier: link connected
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.083 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb29783-b098-472e-995b-47367109c8b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.101 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdd3612-f568-4bd5-91e3-f87e703e83e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77884df7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:34:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607948, 'reachable_time': 32603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366467, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24d64c7e-de7e-42fa-925f-4196bc0c2d59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:3411'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607948, 'tstamp': 607948}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366468, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5927d-7931-4512-99a1-495dd26082f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77884df7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:34:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607948, 'reachable_time': 32603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366469, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.169 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02450337-5c4f-4be8-be51-38cd3c4aaa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9fede634-ccea-447a-bb55-bdefa35db250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.226 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77884df7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.227 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.228 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77884df7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:10 compute-0 kernel: tap77884df7-10: entered promiscuous mode
Feb 25 12:55:10 compute-0 NetworkManager[49836]: <info>  [1772024110.2322] manager: (tap77884df7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.235 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77884df7-10, col_values=(('external_ids', {'iface-id': '99924042-340f-4cf2-b7e0-cbd906fe3c45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:10 compute-0 ovn_controller[147040]: 2026-02-25T12:55:10Z|01422|binding|INFO|Releasing lport 99924042-340f-4cf2-b7e0-cbd906fe3c45 from this chassis (sb_readonly=0)
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.238 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.238 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5309c82f-1711-4e4c-9ebf-ea2a686e9113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.239 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:55:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.240 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'env', 'PROCESS_TAG=haproxy-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
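[editor's note] The rootwrap command above is what actually starts the per-network metadata proxy with the config just rendered. A stripped-down sketch of the same invocation, assuming root privileges so the neutron-rootwrap indirection can be dropped (namespace and config path are taken verbatim from the log line):

    import subprocess

    netns = "ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf"
    conf = "/var/lib/neutron/ovn-metadata-proxy/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.conf"

    # Launch haproxy inside the per-network namespace; haproxy daemonizes
    # itself (the rendered config above contains the "daemon" directive).
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", conf],
                   check=True)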
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.264 244018 DEBUG nova.compute.manager [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.264 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.265 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.265 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.266 244018 DEBUG nova.compute.manager [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Processing event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
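[editor's note] The acquire/release pairs above are oslo.concurrency's in-process lock serializing the per-instance event queue ("<uuid>-events") against concurrent external events. A minimal sketch of the same pattern with lockutils — only the lock-name scheme comes from the log; the body is a hypothetical placeholder:

    from oslo_concurrency import lockutils

    instance_uuid = "4e93a158-a4bd-41a0-8fe0-52b2a069c409"

    # Matches the "Acquiring lock ... acquired ... released" triplet logged
    # above; lockutils.lock() is an in-process lock by default.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # pop the waiting network-vif-plugged event for this instance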
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:10 compute-0 podman[366501]: 2026-02-25 12:55:10.600006226 +0000 UTC m=+0.070556991 container create 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:55:10 compute-0 systemd[1]: Started libpod-conmon-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope.
Feb 25 12:55:10 compute-0 podman[366501]: 2026-02-25 12:55:10.567657063 +0000 UTC m=+0.038207888 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:55:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d579fc02c3db25950a35ca5e2bc6443af220c9434e5f70315a2a4e00da493a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:10 compute-0 podman[366501]: 2026-02-25 12:55:10.693552574 +0000 UTC m=+0.164103359 container init 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:55:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:10 compute-0 podman[366501]: 2026-02-25 12:55:10.700990004 +0000 UTC m=+0.171540749 container start 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.710 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.712 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7094576, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.713 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Started (Lifecycle Event)
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.723 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.728 244018 INFO nova.virt.libvirt.driver [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance spawned successfully.
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:55:10 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : New worker (366564) forked
Feb 25 12:55:10 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : Loading success.
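[editor's note] Once haproxy reports "Loading success", the link-local metadata address is reachable from inside the ovnmeta namespace. A hedged smoke test (assumes curl is installed; /openstack is one of the standard metadata roots; expect an error body unless the request's source IP maps to a known port, since the agent resolves instances by IP and the X-OVN-Network-ID header):

    import subprocess

    netns = "ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf"
    # Hit the proxy the way a guest would; haproxy adds X-OVN-Network-ID
    # before forwarding to the unix socket at /var/lib/neutron/metadata_proxy.
    subprocess.run(["ip", "netns", "exec", netns,
                    "curl", "-s", "http://169.254.169.254/openstack"],
                   check=True)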
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.781 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7108655, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.849 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Paused (Lifecycle Event)
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.853 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.853 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.855 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.904 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.908 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7154756, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Resumed (Lifecycle Event)
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.936 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.940 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.961 244018 INFO nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 11.33 seconds to spawn the instance on the hypervisor.
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.961 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:10 compute-0 nova_compute[244014]: 2026-02-25 12:55:10.969 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:11 compute-0 nova_compute[244014]: 2026-02-25 12:55:11.041 244018 INFO nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 12.49 seconds to build instance.
Feb 25 12:55:11 compute-0 nova_compute[244014]: 2026-02-25 12:55:11.061 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:12 compute-0 ceph-mon[76335]: pgmap v2238: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.185 244018 DEBUG nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.187 244018 DEBUG nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.187 244018 WARNING nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received unexpected event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with vm_state active and task_state None.
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.511 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.512 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.538 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.614 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.615 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.622 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.623 244018 INFO nova.compute.claims [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:55:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:13 compute-0 nova_compute[244014]: 2026-02-25 12:55:13.868 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:14 compute-0 ceph-mon[76335]: pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 12:55:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944023126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.414 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
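[editor's note] Nova's disk-pool sizing call is visible verbatim above, and the mon audits it two lines earlier. A sketch reproducing the same command and reading its JSON (assumes the client.openstack keyring is readable; the exact key layout varies by Ceph release):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)
    # Cluster-wide totals; nova derives its DISK_GB inventory from these.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])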
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.420 244018 DEBUG nova.compute.provider_tree [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.444 244018 DEBUG nova.scheduler.client.report [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
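[editor's note] Worked numbers for the inventory dict above, using placement's standard capacity formula, capacity = (total - reserved) * allocation_ratio (the formula is stock placement behaviour; the figures are the ones logged):

    # VCPU:      (8    - 0)   * 4.0 = 32   schedulable vCPUs
    # MEMORY_MB: (7679 - 512) * 1.0 = 7167 MB
    # DISK_GB:   (59   - 1)   * 0.9 = 52.2 GB (RBD-backed, hence the ceph df call)
    for total, reserved, ratio in ((8, 0, 4.0), (7679, 512, 1.0), (59, 1, 0.9)):
        print((total - reserved) * ratio)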
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.479 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.479 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.538 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.539 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.627 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.646 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:55:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.736 244018 DEBUG nova.policy [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.743 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.745 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.746 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating image(s)
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.784 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.820 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.846 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.849 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.940 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
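[editor's note] The qemu-img probe above runs under oslo's prlimit wrapper so a malformed image cannot exhaust the compute service (1 GiB address space, 30 s CPU). The same invocation, reproduced from the log line:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    out = subprocess.run(
        ["python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", base, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)  # keys include "format" and "virtual-size"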
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.941 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.942 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.942 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.974 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:14 compute-0 nova_compute[244014]: 2026-02-25 12:55:14.982 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dfb7287a-5448-4579-8938-fe909fbf54c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2944023126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:15 compute-0 NetworkManager[49836]: <info>  [1772024115.0383] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:15 compute-0 NetworkManager[49836]: <info>  [1772024115.0395] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:15 compute-0 ovn_controller[147040]: 2026-02-25T12:55:15Z|01423|binding|INFO|Releasing lport 99924042-340f-4cf2-b7e0-cbd906fe3c45 from this chassis (sb_readonly=0)
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.233 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dfb7287a-5448-4579-8938-fe909fbf54c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.305 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
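[editor's note] The base image is imported into the vms pool and then grown to the flavor's 1 GiB root disk. Nova performs the resize through librbd (rbd_utils.py:288); the `rbd resize` CLI below is a hedged stand-in for that call, while the import command is verbatim from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "dfb7287a-5448-4579-8938-fe909fbf54c6_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base image as an RBD v2 image in the vms pool.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    # Grow it to 1 GiB (1073741824 bytes), matching the logged resize.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1G", *auth], check=True)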
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.396 244018 DEBUG nova.compute.manager [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG nova.compute.manager [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.398 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.404 244018 DEBUG nova.objects.instance [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.436 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.437 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Ensure instance console log exists: /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:15 compute-0 nova_compute[244014]: 2026-02-25 12:55:15.655 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Successfully created port: 44908825-991d-42d4-9bad-18d2a1f5fe9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:55:16 compute-0 ceph-mon[76335]: pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 12:55:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.767 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Successfully updated port: 44908825-991d-42d4-9bad-18d2a1f5fe9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.837 244018 DEBUG nova.compute.manager [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.838 244018 DEBUG nova.compute.manager [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.839 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.851 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.852 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.918 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:16 compute-0 nova_compute[244014]: 2026-02-25 12:55:16.940 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:55:18 compute-0 ceph-mon[76335]: pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 12:55:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:55:18 compute-0 nova_compute[244014]: 2026-02-25 12:55:18.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.094 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.133 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance network_info: |[{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.137 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start _get_guest_xml network_info=[{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.143 244018 WARNING nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.149 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.150 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.156 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.156 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
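[annotation] The two probes above show the host has no cgroups v1 CPU controller but does expose one via cgroups v2. On a unified-hierarchy host the available controllers are listed in /sys/fs/cgroup/cgroup.controllers; a minimal standalone check (a sketch, not Nova's actual code path in host.py) is:

    # Detect the cgroups v2 'cpu' controller: the unified hierarchy
    # advertises all available controllers in a single file.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 (unified hierarchy) host

    print(has_cgroupsv2_cpu_controller())
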
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
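[annotation] With every flavor and image preference left at 0:0:0, one vCPU admits exactly one topology: 1 socket x 1 core x 1 thread. The enumeration logged above can be approximated by iterating socket/core/thread counts whose product equals the vCPU count, capped by the 65536 limits; a simplified sketch, not the hardware.py implementation:

    # Enumerate candidate guest CPU topologies for a vCPU count.
    from collections import namedtuple

    VirtCPUTopology = namedtuple('VirtCPUTopology', 'sockets cores threads')

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [VirtCPUTopology(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]
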
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.162 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712710689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.698 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
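[annotation] The `ceph mon dump --format=json` call above is how Nova learns the monitor addresses that later appear as <host> elements in the RBD disk XML. A sketch of running the same command and parsing the standard mon dump JSON; the client id and conf path are copied from the log.

    # Fetch the Ceph monitor list the same way Nova's RBD driver does.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('public_addr'))
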
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.730 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:19 compute-0 nova_compute[244014]: 2026-02-25 12:55:19.734 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:20 compute-0 ceph-mon[76335]: pgmap v2242: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:55:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/712710689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690628935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.357 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.360 244018 DEBUG nova.virt.libvirt.vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.361 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.363 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.365 244018 DEBUG nova.objects.instance [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.386 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <uuid>dfb7287a-5448-4579-8938-fe909fbf54c6</uuid>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <name>instance-00000087</name>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-730348204</nova:name>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:55:19</nova:creationTime>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <nova:port uuid="44908825-991d-42d4-9bad-18d2a1f5fe9c">
Feb 25 12:55:20 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee4:4521" ipVersion="6"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <system>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="serial">dfb7287a-5448-4579-8938-fe909fbf54c6</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="uuid">dfb7287a-5448-4579-8938-fe909fbf54c6</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </system>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <os>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </os>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <features>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </features>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dfb7287a-5448-4579-8938-fe909fbf54c6_disk">
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config">
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:20 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:e4:45:21"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <target dev="tap44908825-99"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/console.log" append="off"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <video>
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </video>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:55:20 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:55:20 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:55:20 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:55:20 compute-0 nova_compute[244014]: </domain>
Feb 25 12:55:20 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
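[annotation] The XML above is the complete transient domain definition handed to libvirt. A minimal sketch of launching a guest from such XML with the libvirt Python bindings; the file name is an assumption (in the real flow the XML exists only in memory), and error handling is omitted.

    # Start a transient domain from generated XML, roughly what the
    # libvirt driver does after _get_guest_xml.
    import libvirt

    xml = open('instance-00000087.xml').read()  # hypothetical saved copy
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.createXML(xml, 0)  # transient domain, started immediately
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
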
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Preparing to wait for external event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.389 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
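[annotation] The acquire/release pair above is oslo.concurrency's named lock, the same primitive behind the earlier "refresh_cache-..." lock. Nova reaches it through its synchronized decorator; the bare context-manager form is sketched here with the UUID copied from the log.

    # Serialize access to the per-instance event list with an
    # oslo.concurrency named lock, as the '...-events' lock above does.
    from oslo_concurrency import lockutils

    instance_uuid = 'dfb7287a-5448-4579-8938-fe909fbf54c6'  # from the log
    with lockutils.lock('%s-events' % instance_uuid):
        pass  # critical section: create or fetch the pending event entry
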
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.389 244018 DEBUG nova.virt.libvirt.vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.390 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.390 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.391 244018 DEBUG os_vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44908825-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.396 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44908825-99, col_values=(('external_ids', {'iface-id': '44908825-991d-42d4-9bad-18d2a1f5fe9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:45:21', 'vm-uuid': 'dfb7287a-5448-4579-8938-fe909fbf54c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
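[annotation] The three commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) form one ovsdbapp transaction against the local OVS database. A sketch of issuing the same transaction with python-ovsdbapp; the OVSDB endpoint and timeout are assumptions, while the names and external_ids are copied from the log.

    # Reproduce os-vif's plug transaction: ensure br-int exists, add the
    # tap port, and set the Neutron binding keys on its Interface row.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': '44908825-991d-42d4-9bad-18d2a1f5fe9c',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:e4:45:21',
        'vm-uuid': 'dfb7287a-5448-4579-8938-fe909fbf54c6',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap44908825-99', may_exist=True))
        txn.add(api.db_set('Interface', 'tap44908825-99',
                           ('external_ids', external_ids)))
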
Feb 25 12:55:20 compute-0 NetworkManager[49836]: <info>  [1772024120.4000] manager: (tap44908825-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.407 244018 INFO os_vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99')
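[annotation] "Successfully plugged vif" is logged by os-vif's public plug() entry point. A sketch of driving the same plug directly through os-vif objects, assuming the 'ovs' plugin is installed and the caller can touch OVS; field values mirror the converted VIFOpenVSwitch above.

    # Plug a VIFOpenVSwitch with os-vif, as nova.virt.libvirt.vif does.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the installed plugins (ovs, linux_bridge, ...)
    my_vif = vif.VIFOpenVSwitch(
        id='44908825-991d-42d4-9bad-18d2a1f5fe9c',
        address='fa:16:3e:e4:45:21',
        network=network.Network(id='9f82a56f-a5c3-446e-9b75-7dc69c93c56b'),
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tap44908825-99',
    )
    instance = instance_info.InstanceInfo(
        uuid='dfb7287a-5448-4579-8938-fe909fbf54c6',
        name='instance-00000087',
    )
    os_vif.plug(my_vif, instance)
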
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.463 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.464 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.465 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:e4:45:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.466 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Using config drive
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.506 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:20 compute-0 nova_compute[244014]: 2026-02-25 12:55:20.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:55:21 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1690628935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.105 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating config drive at /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.108 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1faxc0a9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.143 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.144 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.164 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.251 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1faxc0a9" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.283 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.289 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.444 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.446 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deleting local config drive /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config because it was imported into RBD.
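[annotation] The lines above show the whole config-drive lifecycle: build an ISO9660 image with mkisofs, import it into the vms pool as <uuid>_disk.config (the cdrom RBD source in the domain XML), then delete the local file. A sketch of the same three steps with the commands copied from the log; the /tmp path is the transient staging directory mkisofs was given.

    # Build a config-drive ISO and import it into RBD, mirroring the
    # mkisofs + 'rbd import' + local-delete sequence logged above.
    import os
    import subprocess

    uuid = 'dfb7287a-5448-4579-8938-fe909fbf54c6'
    iso = '/var/lib/nova/instances/%s/disk.config' % uuid
    subprocess.check_call([
        'mkisofs', '-o', iso, '-ldots', '-allow-lowercase', '-allow-multidot',
        '-l', '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp1faxc0a9',
    ])  # processutils logs argv joined by spaces, hence the unquoted publisher above
    subprocess.check_call([
        'rbd', 'import', '--pool', 'vms', iso, '%s_disk.config' % uuid,
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    os.remove(iso)  # the local copy is redundant once imported into RBD
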
Feb 25 12:55:21 compute-0 kernel: tap44908825-99: entered promiscuous mode
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.4969] manager: (tap44908825-99): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Feb 25 12:55:21 compute-0 ovn_controller[147040]: 2026-02-25T12:55:21Z|01424|binding|INFO|Claiming lport 44908825-991d-42d4-9bad-18d2a1f5fe9c for this chassis.
Feb 25 12:55:21 compute-0 ovn_controller[147040]: 2026-02-25T12:55:21Z|01425|binding|INFO|44908825-991d-42d4-9bad-18d2a1f5fe9c: Claiming fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.509 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], port_security=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fee4:4521/64', 'neutron:device_id': 'dfb7287a-5448-4579-8938-fe909fbf54c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=44908825-991d-42d4-9bad-18d2a1f5fe9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.513 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 44908825-991d-42d4-9bad-18d2a1f5fe9c in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b bound to our chassis
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 ovn_controller[147040]: 2026-02-25T12:55:21Z|01426|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c up in Southbound
Feb 25 12:55:21 compute-0 ovn_controller[147040]: 2026-02-25T12:55:21Z|01427|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c ovn-installed in OVS
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.516 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.528 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[549270e8-73d0-46ee-9754-65ac5c447f93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.529 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f82a56f-a1 in ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
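[annotation] The metadata agent is building a veth pair with one end inside the ovnmeta-<network uuid> namespace; the surrounding privsep reply lines are those netlink calls crossing the privilege boundary. A standalone sketch with pyroute2 (interface and namespace names copied from the log; requires root).

    # Create the veth pair and move the inner end into the ovnmeta
    # namespace, approximating the agent's provision_datapath step.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b'
    try:
        netns.create(ns)
    except FileExistsError:
        pass  # the agent ensures the namespace exists before this point
    ipr = IPRoute()
    ipr.link('add', ifname='tap9f82a56f-a0', kind='veth', peer='tap9f82a56f-a1')
    idx = ipr.link_lookup(ifname='tap9f82a56f-a1')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)  # move inner end into the namespace
    ipr.close()
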
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.532 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f82a56f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[891719ac-a7c2-484c-8d30-4455970ff947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 systemd-machined[210048]: New machine qemu-167-instance-00000087.
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a46de76-c2f2-48eb-afb0-7cc00e678677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.546 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[00c73720-85b7-469d-b0d1-92a0aab3afc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 systemd-udevd[366902]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[caa50fbe-a252-43b0-a00b-6f19aaab1e2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.5789] device (tap44908825-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.5801] device (tap44908825-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.605 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbb7c6d-3eed-4929-8049-0db4f00c888f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.612 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c1a1d0-5a96-41ad-8072-73d2a164d575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.6167] manager: (tap9f82a56f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/595)
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.645 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7982c7-7f52-460e-939d-e0dc793ed2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.648 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5fbf23-d4ae-435c-8220-7f5ff3c10bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.6695] device (tap9f82a56f-a0): carrier: link connected
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[afdda326-42e1-4355-ac1b-c75789013fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[007b4908-f86d-4c97-8338-64c7738dd1b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366932, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.707 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb7aa6a-c9f2-40cf-a1ea-5b7b26a58cdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:9136'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609107, 'tstamp': 609107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366933, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98b8cb06-b990-4fbc-945e-6a88a16d9e59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366934, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
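The two RTM_NEWLINK dumps above are pyroute2-style netlink messages: every attribute is a [name, value] pair nested under 'attrs'. A small helper to pull named attributes out of such a message (illustrative, not neutron's parser):

    # Sketch: extract a named attribute from a pyroute2-style netlink
    # message like the RTM_NEWLINK replies above. Illustrative only.
    def nl_attr(msg, name, default=None):
        """Return the value of the first ['name', value] pair in 'attrs'."""
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return default

    # With `link` bound to the first dict in the reply above:
    #   nl_attr(link, "IFLA_IFNAME")  -> 'tap9f82a56f-a1'
    #   nl_attr(link, "IFLA_ADDRESS") -> 'fa:16:3e:31:91:36'
    #   nl_attr(link, "IFLA_MTU")     -> 1500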
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.734 244018 DEBUG nova.compute.manager [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.735 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.735 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.736 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.736 244018 DEBUG nova.compute.manager [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Processing event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
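The acquire/release pairs above show how nova serializes external events per instance: each event pop runs under a lock named "<instance-uuid>-events". A minimal sketch of that pattern with oslo.concurrency (the event table and helper names here are hypothetical):

    # Sketch of the per-instance event lock seen above: every pop of a
    # pending external event runs under a "<uuid>-events" named lock.
    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            # mutate the shared table only while holding the named lock
            return events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()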
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5d7958-1ada-4f7f-a21c-0273037e649c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.789 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbc59e3-d648-44db-b747-d2a669b64169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.791 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.791 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.792 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 NetworkManager[49836]: <info>  [1772024121.7941] manager: (tap9f82a56f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Feb 25 12:55:21 compute-0 kernel: tap9f82a56f-a0: entered promiscuous mode
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.796 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
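The three ovsdbapp commands above replug the metadata tap: drop it from br-ex if present, add it to br-int, and stamp the Interface row with the Neutron port's iface-id so ovn-controller can bind it. A sketch of an equivalent transaction against a local ovsdb-server, assuming ovsdbapp and the default socket path; the port name and iface-id are copied from the log:

    # Sketch: the same three-command transaction with ovsdbapp's
    # Open_vSwitch API. Assumes a local ovsdb-server at the default socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port("tap9f82a56f-a0", bridge="br-ex", if_exists=True))
        txn.add(ovs.add_port("br-int", "tap9f82a56f-a0", may_exist=True))
        txn.add(ovs.db_set(
            "Interface", "tap9f82a56f-a0",
            ("external_ids",
             {"iface-id": "d6ceb7cb-a48f-47c3-acb6-58d61268f8c7"})))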
Feb 25 12:55:21 compute-0 ovn_controller[147040]: 2026-02-25T12:55:21Z|01428|binding|INFO|Releasing lport d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 from this chassis (sb_readonly=0)
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.798 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
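The "Unable to access ... .pid.haproxy" message is the agent probing for an existing proxy pid before spawning one; a missing file is the normal first-provisioning case, not a failure. Roughly, the tolerant read looks like this (a sketch, not neutron's exact code):

    # Sketch of the tolerant pid-file read behind the message above: a
    # missing or unparseable file means "no proxy running yet" -> None.
    def get_value_from_file(path, converter=int):
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None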
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a86cf020-9cd8-4f84-a0a8-90ea2471412d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.800 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:55:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.801 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'env', 'PROCESS_TAG=haproxy-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
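The rootwrap invocation above boils down to `ip netns exec <ns> haproxy -f <conf>`; once haproxy forks (the NOTICE lines a little further down), 169.254.169.254:80 answers inside the namespace. A sketch of the same launch plus an in-namespace probe, reusing the paths from the log (requires root and the usual iproute2/curl binaries):

    # Sketch: launch haproxy inside the ovnmeta namespace and probe the
    # metadata address from within it. Paths are from the log above.
    import subprocess

    NS = "ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b"
    CONF = ("/var/lib/neutron/ovn-metadata-proxy/"
            "9f82a56f-a5c3-446e-9b75-7dc69c93c56b.conf")

    # haproxy daemonizes itself (the config's 'daemon' directive), so
    # run() returns as soon as the master process has forked.
    subprocess.run(["ip", "netns", "exec", NS, "haproxy", "-f", CONF],
                   check=True)

    # Any HTTP status code proves the 169.254.169.254:80 listener is up.
    probe = subprocess.run(
        ["ip", "netns", "exec", NS, "curl", "-s", "-o", "/dev/null",
         "-w", "%{http_code}", "http://169.254.169.254/"],
        capture_output=True, text=True)
    print("metadata proxy answered:", probe.stdout)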
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.8870604, dfb7287a-5448-4579-8938-fe909fbf54c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Started (Lifecycle Event)
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.891 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.895 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.899 244018 INFO nova.virt.libvirt.driver [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance spawned successfully.
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.900 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.915 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.922 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.930 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.931 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.932 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.933 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.934 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.935 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.947 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.887162, dfb7287a-5448-4579-8938-fe909fbf54c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.947 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Paused (Lifecycle Event)
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.979 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.983 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.893982, dfb7287a-5448-4579-8938-fe909fbf54c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.984 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Resumed (Lifecycle Event)
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.994 244018 INFO nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 7.25 seconds to spawn the instance on the hypervisor.
Feb 25 12:55:21 compute-0 nova_compute[244014]: 2026-02-25 12:55:21.995 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:22 compute-0 ovn_controller[147040]: 2026-02-25T12:55:22Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 12:55:22 compute-0 ovn_controller[147040]: 2026-02-25T12:55:22Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 12:55:22 compute-0 nova_compute[244014]: 2026-02-25 12:55:22.004 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:22 compute-0 nova_compute[244014]: 2026-02-25 12:55:22.006 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:22 compute-0 nova_compute[244014]: 2026-02-25 12:55:22.038 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:22 compute-0 nova_compute[244014]: 2026-02-25 12:55:22.069 244018 INFO nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 8.49 seconds to build instance.
Feb 25 12:55:22 compute-0 ceph-mon[76335]: pgmap v2243: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:55:22 compute-0 nova_compute[244014]: 2026-02-25 12:55:22.089 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:22 compute-0 podman[367008]: 2026-02-25 12:55:22.181124719 +0000 UTC m=+0.065777006 container create 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:55:22 compute-0 systemd[1]: Started libpod-conmon-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope.
Feb 25 12:55:22 compute-0 podman[367008]: 2026-02-25 12:55:22.152404449 +0000 UTC m=+0.037056746 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:55:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb1392c95fbb33162d7879fcd33b904283c8f0341d2d03795af7b704d65fdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:22 compute-0 podman[367008]: 2026-02-25 12:55:22.287925132 +0000 UTC m=+0.172577459 container init 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:55:22 compute-0 podman[367008]: 2026-02-25 12:55:22.292761538 +0000 UTC m=+0.177413825 container start 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 12:55:22 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : New worker (367029) forked
Feb 25 12:55:22 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : Loading success.
Feb 25 12:55:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Feb 25 12:55:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.849 244018 DEBUG nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.850 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.851 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.851 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.852 244018 DEBUG nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:55:23 compute-0 nova_compute[244014]: 2026-02-25 12:55:23.852 244018 WARNING nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received unexpected event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with vm_state active and task_state None.
Feb 25 12:55:24 compute-0 ceph-mon[76335]: pgmap v2244: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Feb 25 12:55:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 12:55:25 compute-0 nova_compute[244014]: 2026-02-25 12:55:25.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:25 compute-0 nova_compute[244014]: 2026-02-25 12:55:25.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:26 compute-0 ceph-mon[76335]: pgmap v2245: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 12:55:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 12:55:27 compute-0 nova_compute[244014]: 2026-02-25 12:55:27.160 244018 DEBUG nova.compute.manager [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:27 compute-0 nova_compute[244014]: 2026-02-25 12:55:27.161 244018 DEBUG nova.compute.manager [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:27 compute-0 nova_compute[244014]: 2026-02-25 12:55:27.162 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:27 compute-0 nova_compute[244014]: 2026-02-25 12:55:27.162 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:27 compute-0 nova_compute[244014]: 2026-02-25 12:55:27.163 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:28 compute-0 ceph-mon[76335]: pgmap v2246: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 12:55:28 compute-0 nova_compute[244014]: 2026-02-25 12:55:28.116 244018 INFO nova.compute.manager [None req-72ed04ef-a0f6-4b1a-8dda-1613dfa6e021 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Get console output
Feb 25 12:55:28 compute-0 nova_compute[244014]: 2026-02-25 12:55:28.122 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:55:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Feb 25 12:55:29 compute-0 nova_compute[244014]: 2026-02-25 12:55:29.549 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:29 compute-0 nova_compute[244014]: 2026-02-25 12:55:29.549 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:29 compute-0 nova_compute[244014]: 2026-02-25 12:55:29.585 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
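The cache update above stores the full VIF model as JSON: per-VIF subnets, each with fixed IPs and any attached floating IPs. Walking it is straightforward; a sketch, assuming network_info is the parsed list from that log line:

    # Sketch: list fixed IPs and attached floating IPs from the
    # network_info structure logged above. Illustrative only.
    def list_addresses(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield vif["id"], ip["address"], floats

    # For the entry above this yields, among others:
    #   ('44908825-991d-42d4-9bad-18d2a1f5fe9c', '10.100.0.3',
    #    ['192.168.122.233'])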
Feb 25 12:55:30 compute-0 ovn_controller[147040]: 2026-02-25T12:55:30Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 12:55:30 compute-0 ceph-mon[76335]: pgmap v2247: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Feb 25 12:55:30 compute-0 nova_compute[244014]: 2026-02-25 12:55:30.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:30 compute-0 nova_compute[244014]: 2026-02-25 12:55:30.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:55:31
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
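The balancer pass above found nothing to move: with all 305 PGs active+clean, 0 of the allowed 10 upmap changes were prepared. The same summary is available from the CLI; a sketch via subprocess (assuming a reachable cluster and keyring; the JSON fields differ between Ceph releases):

    # Sketch: query the balancer module for the state logged above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format=json"],
        capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))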
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:55:32 compute-0 ceph-mon[76335]: pgmap v2248: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 12:55:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.933 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:55:32 compute-0 nova_compute[244014]: 2026-02-25 12:55:32.935 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627345700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.587 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
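The resource tracker shells out to the exact `ceph df` command above (0.651s here) to size its RBD-backed storage. Parsing the JSON for per-pool headroom looks roughly like this, reusing the same client id and conf path:

    # Sketch: the `ceph df` call the resource tracker runs above,
    # parsed for per-pool available bytes.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    for pool in json.loads(out)["pools"]:
        # max_avail: bytes still writable in the pool at its replication
        print(pool["name"], pool["stats"]["max_avail"])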
Feb 25 12:55:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.706 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.707 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.713 244018 DEBUG nova.compute.manager [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.713 244018 DEBUG nova.compute.manager [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.721 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.722 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.786 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.787 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.790 244018 INFO nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Terminating instance
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.792 244018 DEBUG nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:55:33 compute-0 kernel: tap8d4ab3e5-d9 (unregistering): left promiscuous mode
Feb 25 12:55:33 compute-0 NetworkManager[49836]: <info>  [1772024133.9221] device (tap8d4ab3e5-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:33 compute-0 ovn_controller[147040]: 2026-02-25T12:55:33Z|01429|binding|INFO|Releasing lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d from this chassis (sb_readonly=0)
Feb 25 12:55:33 compute-0 ovn_controller[147040]: 2026-02-25T12:55:33Z|01430|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d down in Southbound
Feb 25 12:55:33 compute-0 ovn_controller[147040]: 2026-02-25T12:55:33Z|01431|binding|INFO|Removing iface tap8d4ab3e5-d9 ovn-installed in OVS
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:33 compute-0 nova_compute[244014]: 2026-02-25 12:55:33.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.942 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:08:7c 10.100.0.9'], port_security=['fa:16:3e:e1:08:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e93a158-a4bd-41a0-8fe0-52b2a069c409', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0c08d4b-4169-4e97-8a55-a9553174491d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14ad45b1-188a-47f0-9d37-00198e9d57fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.946 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d in datapath 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf unbound from our chassis
Feb 25 12:55:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.948 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:55:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc0b6fd-4786-4e38-822b-f3ddc04c006a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.951 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf namespace which is not needed anymore
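
The agent picks up the Port_Binding change through ovsdbapp's row-event machinery: an event object declares (events, table, conditions), exactly as printed in the "Matched UPDATE" record above, and the IDL invokes it with the new and old rows. A rough sketch of that shape, with the chassis check and the teardown reduced to placeholders:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) as in the repr logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire when the binding goes down (up [True] -> [False]).
            return getattr(old, 'up', None) == [True] and row.up == [False]

        def run(self, event, row, old):
            # Placeholder for the unbind/namespace-teardown handling.
            print('Port %s unbound from this chassis' % row.logical_port)
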
Feb 25 12:55:33 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Deactivated successfully.
Feb 25 12:55:33 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Consumed 12.483s CPU time.
Feb 25 12:55:33 compute-0 systemd-machined[210048]: Machine qemu-166-instance-00000086 terminated.
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.039 244018 INFO nova.virt.libvirt.driver [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance destroyed successfully.
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.039 244018 DEBUG nova.objects.instance [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.057 244018 DEBUG nova.virt.libvirt.vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:11Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.057 244018 DEBUG nova.network.os_vif_util [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.058 244018 DEBUG nova.network.os_vif_util [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.058 244018 DEBUG os_vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.061 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d4ab3e5-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
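
DelPortCommand is ovsdbapp's transactional wrapper for dropping a port row, here driven by os-vif during unplug. Something like the following issues the same idempotent delete; the ovsdb-server endpoint and timeout below are assumptions, not values from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint is illustrative; the agents here talk to a local ovsdb-server.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True makes this a no-op if the port is already gone.
    api.del_port('tap8d4ab3e5-d9', bridge='br-int',
                 if_exists=True).execute(check_error=True)
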
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.068 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.070 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.90699964296073GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.073 244018 INFO os_vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9')
Feb 25 12:55:34 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : haproxy version is 2.8.14-c23fe91
Feb 25 12:55:34 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : path to executable is /usr/sbin/haproxy
Feb 25 12:55:34 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [WARNING]  (366562) : Exiting Master process...
Feb 25 12:55:34 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [ALERT]    (366562) : Current worker (366564) exited with code 143 (Terminated)
Feb 25 12:55:34 compute-0 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [WARNING]  (366562) : All workers exited. Exiting... (0)
Feb 25 12:55:34 compute-0 systemd[1]: libpod-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope: Deactivated successfully.
Feb 25 12:55:34 compute-0 podman[367093]: 2026-02-25 12:55:34.126068137 +0000 UTC m=+0.067187876 container died 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141-userdata-shm.mount: Deactivated successfully.
Feb 25 12:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-36d579fc02c3db25950a35ca5e2bc6443af220c9434e5f70315a2a4e00da493a-merged.mount: Deactivated successfully.
Feb 25 12:55:34 compute-0 podman[367093]: 2026-02-25 12:55:34.176620234 +0000 UTC m=+0.117739953 container cleanup 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.180 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4e93a158-a4bd-41a0-8fe0-52b2a069c409 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.181 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dfb7287a-5448-4579-8938-fe909fbf54c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.182 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.182 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:55:34 compute-0 systemd[1]: libpod-conmon-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope: Deactivated successfully.
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.203 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.221 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.221 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.241 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
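
The inventory dicts above feed placement's capacity rule: for each resource class the scheduler may allocate up to (total - reserved) * allocation_ratio. Worked through for the reported data:

    # Effective schedulable capacity from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
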
Feb 25 12:55:34 compute-0 ceph-mon[76335]: pgmap v2249: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 25 12:55:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2627345700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:34 compute-0 podman[367141]: 2026-02-25 12:55:34.280447552 +0000 UTC m=+0.076452987 container remove 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.287 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6b605807-c2c5-4b00-8da3-ac55738ffbfe]: (4, ('Wed Feb 25 12:55:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf (403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141)\n403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141\nWed Feb 25 12:55:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf (403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141)\n403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
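
That privsep reply carries the stdout of the wrapper that stops and deletes the per-network haproxy container, matching the podman died/cleanup/remove records around it. Driven from Python, the same two steps look roughly like this; the SIGTERM sent by podman stop is what produced the worker's exit code 143 earlier:

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf'

    subprocess.run(['podman', 'stop', NAME], check=True)  # SIGTERM -> 143
    subprocess.run(['podman', 'rm', NAME], check=True)    # "container remove"
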
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c19d9c2-85f8-417e-b7a9-8f1b6050c072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77884df7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:34 compute-0 kernel: tap77884df7-10: left promiscuous mode
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.298 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.301 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0ab20b-e8c6-4166-9112-ba18730ce947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.311 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.311 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.313 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46384604-ad53-411c-ab6b-c522ceec5b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd378779-ebc1-44a3-8cac-be700ad3be4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:34 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ec76c-3c3c-411f-900b-445834899d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607942, 'reachable_time': 42050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367155, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d77884df7\x2d14d9\x2d4fbd\x2d8fa6\x2d5eb17d0a82bf.mount: Deactivated successfully.
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.333 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:55:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.334 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a4b785-14d7-4678-8b36-860d4f6058b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
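
remove_netns at ip_lib.py:607 runs inside the privsep daemon, hence the reply records bracketing it. Stripped of the privsep plumbing, deleting the metadata namespace comes down to a pyroute2 netns call; a minimal sketch, assuming pyroute2's netns helpers:

    from pyroute2 import netns

    NS = 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf'
    if NS in netns.listnetns():  # idempotent: skip if already gone
        netns.remove(NS)
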
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.370 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.432 244018 INFO nova.virt.libvirt.driver [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deleting instance files /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409_del
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.433 244018 INFO nova.virt.libvirt.driver [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deletion of /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409_del complete
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.548 244018 INFO nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG oslo.service.loopingcall [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG nova.network.neutron [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:55:34 compute-0 ovn_controller[147040]: 2026-02-25T12:55:34Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:45:21 10.100.0.3
Feb 25 12:55:34 compute-0 ovn_controller[147040]: 2026-02-25T12:55:34Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:45:21 10.100.0.3
Feb 25 12:55:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 12:55:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737742452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
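
The resource tracker sizes the RBD-backed disk pool by shelling out to ceph df --format=json, as the Running cmd / returned: 0 pair shows (0.553s of wall time here, which also explains the ceph-mon dispatch records in between). A sketch of the same call with a minimal parse; the key names under stats are assumptions based on current Ceph JSON output:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)['stats']
    print('%.1f GiB avail' % (stats['total_avail_bytes'] / 1024 ** 3))
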
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.930 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.944 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.979 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:55:34 compute-0 nova_compute[244014]: 2026-02-25 12:55:34.980 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/737742452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:55:35 compute-0 nova_compute[244014]: 2026-02-25 12:55:35.994 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.134 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.135 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.149 244018 DEBUG nova.network.neutron [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.153 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.168 244018 INFO nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 1.62 seconds to deallocate network for instance.
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.175 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.176 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.176 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.177 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.241 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.242 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:36 compute-0 ceph-mon[76335]: pgmap v2250: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.338 244018 DEBUG oslo_concurrency.processutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.383 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.384 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.385 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.386 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.386 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.387 244018 WARNING nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received unexpected event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with vm_state deleted and task_state None.
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.388 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-deleted-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 12:55:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85658916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.890 244018 DEBUG oslo_concurrency.processutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.898 244018 DEBUG nova.compute.provider_tree [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.913 244018 DEBUG nova.scheduler.client.report [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.937 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:36 compute-0 nova_compute[244014]: 2026-02-25 12:55:36.968 244018 INFO nova.scheduler.client.report [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 4e93a158-a4bd-41a0-8fe0-52b2a069c409
Feb 25 12:55:37 compute-0 nova_compute[244014]: 2026-02-25 12:55:37.035 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/85658916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:37 compute-0 nova_compute[244014]: 2026-02-25 12:55:37.865 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:37 compute-0 nova_compute[244014]: 2026-02-25 12:55:37.893 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:37 compute-0 nova_compute[244014]: 2026-02-25 12:55:37.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:55:37 compute-0 nova_compute[244014]: 2026-02-25 12:55:37.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:38 compute-0 ceph-mon[76335]: pgmap v2251: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 12:55:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Feb 25 12:55:39 compute-0 nova_compute[244014]: 2026-02-25 12:55:39.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:39 compute-0 podman[367201]: 2026-02-25 12:55:39.761935536 +0000 UTC m=+0.096341879 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 12:55:39 compute-0 podman[367200]: 2026-02-25 12:55:39.762262035 +0000 UTC m=+0.101541225 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 25 12:55:39 compute-0 nova_compute[244014]: 2026-02-25 12:55:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:40 compute-0 ceph-mon[76335]: pgmap v2252: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Feb 25 12:55:40 compute-0 nova_compute[244014]: 2026-02-25 12:55:40.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:55:41 compute-0 ovn_controller[147040]: 2026-02-25T12:55:41Z|01432|binding|INFO|Releasing lport d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 from this chassis (sb_readonly=0)
Feb 25 12:55:41 compute-0 nova_compute[244014]: 2026-02-25 12:55:41.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:41 compute-0 nova_compute[244014]: 2026-02-25 12:55:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:41 compute-0 nova_compute[244014]: 2026-02-25 12:55:41.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:42 compute-0 ceph-mon[76335]: pgmap v2253: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007749709516653071 of space, bias 1.0, pg target 0.23249128549959214 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494192933822009 of space, bias 1.0, pg target 0.7482578801466027 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.377850419695492e-06 of space, bias 4.0, pg target 0.0016534205036345905 quantized to 16 (current 16)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:55:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:55:42 compute-0 nova_compute[244014]: 2026-02-25 12:55:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:44 compute-0 nova_compute[244014]: 2026-02-25 12:55:44.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:44 compute-0 ceph-mon[76335]: pgmap v2254: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Feb 25 12:55:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 12:55:44 compute-0 nova_compute[244014]: 2026-02-25 12:55:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:44 compute-0 nova_compute[244014]: 2026-02-25 12:55:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:55:45 compute-0 nova_compute[244014]: 2026-02-25 12:55:45.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:45 compute-0 nova_compute[244014]: 2026-02-25 12:55:45.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:46 compute-0 ceph-mon[76335]: pgmap v2255: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.438 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.439 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.468 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.544 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.545 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.553 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.553 244018 INFO nova.compute.claims [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:55:46 compute-0 nova_compute[244014]: 2026-02-25 12:55:46.658 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 12:55:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657928784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.183 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.191 244018 DEBUG nova.compute.provider_tree [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.211 244018 DEBUG nova.scheduler.client.report [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.260 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.260 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:55:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1657928784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.359 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.360 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.395 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.415 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.507 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.510 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.510 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating image(s)
Feb 25 12:55:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:55:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:55:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:55:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.542 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.581 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.616 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.622 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.662 244018 DEBUG nova.policy [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.713 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.715 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.716 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.717 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.753 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:47 compute-0 nova_compute[244014]: 2026-02-25 12:55:47.759 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7102d0db-32cc-4a5e-8282-cf5266710872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.056 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7102d0db-32cc-4a5e-8282-cf5266710872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.140 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.221 244018 DEBUG nova.objects.instance [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.252 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.253 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Ensure instance console log exists: /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.254 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.254 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:48 compute-0 nova_compute[244014]: 2026-02-25 12:55:48.255 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:48 compute-0 ceph-mon[76335]: pgmap v2256: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 12:55:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:55:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:55:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1019 KiB/s wr, 77 op/s
Feb 25 12:55:49 compute-0 nova_compute[244014]: 2026-02-25 12:55:49.037 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024134.0357685, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:49 compute-0 nova_compute[244014]: 2026-02-25 12:55:49.037 244018 INFO nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Stopped (Lifecycle Event)
Feb 25 12:55:49 compute-0 nova_compute[244014]: 2026-02-25 12:55:49.059 244018 DEBUG nova.compute.manager [None req-a36ce2df-2508-44ea-882d-36cc01b5b686 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:49 compute-0 nova_compute[244014]: 2026-02-25 12:55:49.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:49 compute-0 nova_compute[244014]: 2026-02-25 12:55:49.666 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Successfully created port: dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:55:50 compute-0 ceph-mon[76335]: pgmap v2257: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1019 KiB/s wr, 77 op/s
Feb 25 12:55:50 compute-0 nova_compute[244014]: 2026-02-25 12:55:50.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:50 compute-0 nova_compute[244014]: 2026-02-25 12:55:50.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.255 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Successfully updated port: dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.277 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.277 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.278 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.359 244018 DEBUG nova.compute.manager [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.360 244018 DEBUG nova.compute.manager [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.360 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.462 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:55:51 compute-0 nova_compute[244014]: 2026-02-25 12:55:51.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:55:52 compute-0 ceph-mon[76335]: pgmap v2258: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.596 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.615 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.616 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance network_info: |[{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.616 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.617 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.622 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start _get_guest_xml network_info=[{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.628 244018 WARNING nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.637 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.640 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.644 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.645 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.646 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.646 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.647 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.647 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.648 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.648 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.649 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.649 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.650 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.650 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.651 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.651 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
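
The nova.virt.hardware lines above are the guest CPU topology negotiation: the flavor and image set no limits or preferences (0 means "no constraint"), so the limits fall back to 65536 per dimension and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of the enumeration idea, using a hypothetical helper rather than nova's actual _get_possible_cpu_topologies:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single topology logged above
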
Feb 25 12:55:52 compute-0 nova_compute[244014]: 2026-02-25 12:55:52.656 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410031636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.223 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.254 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.259 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:53 compute-0 ceph-mon[76335]: pgmap v2259: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/410031636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:55:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77170465' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.846 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
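
Both subprocess round-trips above are nova's RBD backend discovering the Ceph monitor map before it embeds monitor addresses in the disk XML: it shells out to ceph mon dump with the openstack client id and parses the JSON. A rough equivalent as a sketch (the exact JSON field names vary across Ceph releases, so treat them as assumptions):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out).get("mons", []):
        # "addr" exists on older releases; newer ones also expose
        # "public_addrs" -- the exact keys are an assumption here.
        print(mon.get("name"), mon.get("addr"))
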
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.849 244018 DEBUG nova.virt.libvirt.vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:47Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.850 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.851 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.853 244018 DEBUG nova.objects.instance [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.869 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <uuid>7102d0db-32cc-4a5e-8282-cf5266710872</uuid>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <name>instance-00000088</name>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:name>tempest-TestGettingAddress-server-1911565712</nova:name>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:55:52</nova:creationTime>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <nova:port uuid="dd7b2b79-55bd-4df0-a79a-3344d8c79c92">
Feb 25 12:55:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec1:35f1" ipVersion="6"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <system>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="serial">7102d0db-32cc-4a5e-8282-cf5266710872</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="uuid">7102d0db-32cc-4a5e-8282-cf5266710872</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </system>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <os>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </os>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <features>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </features>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk">
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk.config">
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:55:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:c1:35:f1"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <target dev="tapdd7b2b79-55"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/console.log" append="off"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <video>
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </video>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:55:53 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:55:53 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:55:53 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:55:53 compute-0 nova_compute[244014]: </domain>
Feb 25 12:55:53 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
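
Two details of the domain XML above are worth noting: <memory>131072</memory> is in KiB (libvirt's default unit), i.e. the flavor's 128 MiB, and the RBD <source>/<auth> elements carry the monitor address and cephx secret gathered by the mon dump calls earlier. Nova hands this document to libvirt; the bare-bones equivalent with libvirt-python, as a sketch (nova's real call adds flags and error handling, and it starts the guest paused while VIF plugging completes, which is consistent with the Paused lifecycle event further down):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # xml = the <domain> document logged above
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
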
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.870 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Preparing to wait for external event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.871 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.871 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.872 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
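
The acquire/release pair around _create_or_get_event is oslo.concurrency serializing access to the per-instance event table under the "<uuid>-events" lock, so the waiter is registered before the external network-vif-plugged event can race in. The same primitive, sketched directly:

    from oslo_concurrency import lockutils

    instance_uuid = "7102d0db-32cc-4a5e-8282-cf5266710872"
    with lockutils.lock(f"{instance_uuid}-events"):
        # Critical section: record interest in network-vif-plugged.
        # (Sketch only; nova's real bookkeeping lives in
        # nova.compute.manager.InstanceEvents.)
        pass
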
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.873 244018 DEBUG nova.virt.libvirt.vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:47Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.874 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.875 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.876 244018 DEBUG os_vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.878 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.879 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.883 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd7b2b79-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.884 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd7b2b79-55, col_values=(('external_ids', {'iface-id': 'dd7b2b79-55bd-4df0-a79a-3344d8c79c92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:35:f1', 'vm-uuid': '7102d0db-32cc-4a5e-8282-cf5266710872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:53 compute-0 NetworkManager[49836]: <info>  [1772024153.8879] manager: (tapdd7b2b79-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.896 244018 INFO os_vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55')
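
The plug boils down to the two OVSDB transactions above: add the tap port to br-int, then set external_ids on the Interface row. The iface-id is what lets ovn-controller match the OVS port to the Neutron logical port it should claim. The ovs-vsctl equivalent, sketched in Python with the values copied from the log:

    import subprocess

    dev, bridge = "tapdd7b2b79-55", "br-int"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
         "--", "set", "Interface", dev,
         "external_ids:iface-id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:c1:35:f1",
         "external_ids:vm-uuid=7102d0db-32cc-4a5e-8282-cf5266710872"],
        check=True)
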
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.981 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.982 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.982 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:c1:35:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:55:53 compute-0 nova_compute[244014]: 2026-02-25 12:55:53.984 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Using config drive
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.017 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.431 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating config drive at /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.436 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_yqx6vlk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.580 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_yqx6vlk" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.618 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.623 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/77170465' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:55:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.757 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.758 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.813 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.815 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deleting local config drive /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config because it was imported into RBD.
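
On an RBD-backed host the config drive takes three steps, all visible above: build the ISO locally with mkisofs under the config-2 volume label (the label cloud-init probes for), rbd import it into the vms pool as <uuid>_disk.config, then remove the local copy. Condensed into a sketch (the staging directory name here is hypothetical; nova used a throwaway tempdir):

    import subprocess

    uuid = "7102d0db-32cc-4a5e-8282-cf5266710872"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/configdrive_staging"],  # hypothetical staging dir
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
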
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.863 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:55:54 compute-0 kernel: tapdd7b2b79-55: entered promiscuous mode
Feb 25 12:55:54 compute-0 NetworkManager[49836]: <info>  [1772024154.8762] manager: (tapdd7b2b79-55): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Feb 25 12:55:54 compute-0 ovn_controller[147040]: 2026-02-25T12:55:54Z|01433|binding|INFO|Claiming lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for this chassis.
Feb 25 12:55:54 compute-0 ovn_controller[147040]: 2026-02-25T12:55:54Z|01434|binding|INFO|dd7b2b79-55bd-4df0-a79a-3344d8c79c92: Claiming fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:54 compute-0 ovn_controller[147040]: 2026-02-25T12:55:54Z|01435|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 ovn-installed in OVS
Feb 25 12:55:54 compute-0 nova_compute[244014]: 2026-02-25 12:55:54.891 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:54 compute-0 systemd-udevd[367563]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:55:54 compute-0 ovn_controller[147040]: 2026-02-25T12:55:54Z|01436|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 up in Southbound
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], port_security=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec1:35f1/64', 'neutron:device_id': '7102d0db-32cc-4a5e-8282-cf5266710872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd7b2b79-55bd-4df0-a79a-3344d8c79c92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.927 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b bound to our chassis
Feb 25 12:55:54 compute-0 NetworkManager[49836]: <info>  [1772024154.9291] device (tapdd7b2b79-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:55:54 compute-0 NetworkManager[49836]: <info>  [1772024154.9303] device (tapdd7b2b79-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.931 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b
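
ovn-controller has now claimed the logical port for this chassis, flagged it ovn-installed in OVS and up in the Southbound database, and the metadata agent reacts by provisioning the ovnmeta- namespace for the network. When a port never comes up, the same binding can be inspected from the Southbound DB; a sketch:

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=dd7b2b79-55bd-4df0-a79a-3344d8c79c92"],
        check=True, capture_output=True, text=True).stdout
    print(out)  # expect the chassis row for compute-0 and up: [true]
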
Feb 25 12:55:54 compute-0 systemd-machined[210048]: New machine qemu-168-instance-00000088.
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.948 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6bab90-3088-405d-85dd-d010ca959f72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:54 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000088.
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.977 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3167656b-afde-4dc4-bbe8-b36e3b90df5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[39656ab0-c161-4cd4-b409-ee06debca756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.012 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7f09559a-5db8-4892-8e38-c4ec4ff18c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7116da-5e7e-462d-8a4f-c3db5f982a63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367579, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.047 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0227d22d-8dba-4ceb-b3b6-273b4b021efd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609116, 'tstamp': 609116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367580, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609119, 'tstamp': 609119}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367580, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.049 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.053 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.053 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.054 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.319 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3182516, 7102d0db-32cc-4a5e-8282-cf5266710872 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.320 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Started (Lifecycle Event)
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.343 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3185399, 7102d0db-32cc-4a5e-8282-cf5266710872 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.349 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Paused (Lifecycle Event)
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.376 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.388 244018 DEBUG nova.compute.manager [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.390 244018 DEBUG nova.compute.manager [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Processing event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.390 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.395 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3939693, 7102d0db-32cc-4a5e-8282-cf5266710872 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Resumed (Lifecycle Event)
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.402 244018 INFO nova.virt.libvirt.driver [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance spawned successfully.
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.402 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.428 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.433 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.437 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.437 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.439 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.481 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.520 244018 INFO nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 8.01 seconds to spawn the instance on the hypervisor.
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.520 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.613 244018 INFO nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 9.10 seconds to build instance.
Feb 25 12:55:55 compute-0 nova_compute[244014]: 2026-02-25 12:55:55.651 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:55 compute-0 ceph-mon[76335]: pgmap v2260: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.410 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.412 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.431 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.525 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.526 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.534 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.535 244018 INFO nova.compute.claims [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:55:56 compute-0 nova_compute[244014]: 2026-02-25 12:55:56.687 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:55:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050084465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.204 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.213 244018 DEBUG nova.compute.provider_tree [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.236 244018 DEBUG nova.scheduler.client.report [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.272 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.272 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.344 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.344 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.366 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.382 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.496 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.497 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.498 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating image(s)
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.530 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.562 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.594 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.598 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.636 244018 DEBUG nova.policy [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.642 244018 DEBUG nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.642 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.644 244018 WARNING nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received unexpected event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with vm_state active and task_state None.
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.665 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.666 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.667 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.668 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.699 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:55:57 compute-0 nova_compute[244014]: 2026-02-25 12:55:57.705 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:55:57 compute-0 ceph-mon[76335]: pgmap v2261: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:55:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2050084465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:55:57 compute-0 sudo[367722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:55:57 compute-0 sudo[367722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:55:57 compute-0 sudo[367722]: pam_unix(sudo:session): session closed for user root
Feb 25 12:55:57 compute-0 sudo[367754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:55:57 compute-0 sudo[367754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.016 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.110 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.207 244018 DEBUG nova.objects.instance [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.223 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.224 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Ensure instance console log exists: /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.224 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.225 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.225 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:55:58 compute-0 sudo[367754]: pam_unix(sudo:session): session closed for user root
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:55:58 compute-0 sudo[367891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:55:58 compute-0 sudo[367891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:55:58 compute-0 sudo[367891]: pam_unix(sudo:session): session closed for user root
Feb 25 12:55:58 compute-0 sudo[367916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:55:58 compute-0 sudo[367916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:55:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:55:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:55:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:55:58 compute-0 nova_compute[244014]: 2026-02-25 12:55:58.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:55:58 compute-0 podman[367952]: 2026-02-25 12:55:58.91565627 +0000 UTC m=+0.056872325 container create b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:55:58 compute-0 systemd[1]: Started libpod-conmon-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope.
Feb 25 12:55:58 compute-0 podman[367952]: 2026-02-25 12:55:58.891491549 +0000 UTC m=+0.032707654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:55:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:55:59 compute-0 podman[367952]: 2026-02-25 12:55:59.010020202 +0000 UTC m=+0.151236327 container init b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:55:59 compute-0 podman[367952]: 2026-02-25 12:55:59.01952125 +0000 UTC m=+0.160737275 container start b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:55:59 compute-0 podman[367952]: 2026-02-25 12:55:59.023162573 +0000 UTC m=+0.164378688 container attach b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:55:59 compute-0 charming_colden[367968]: 167 167
Feb 25 12:55:59 compute-0 systemd[1]: libpod-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope: Deactivated successfully.
Feb 25 12:55:59 compute-0 podman[367952]: 2026-02-25 12:55:59.026651011 +0000 UTC m=+0.167867086 container died b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.055 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Successfully created port: bde15f84-edfb-445b-b129-ec33331763f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-de27f618d86a2df5e1713aa4926795056f92ed99ff00a334ae45733434243c24-merged.mount: Deactivated successfully.
Feb 25 12:55:59 compute-0 podman[367952]: 2026-02-25 12:55:59.081815358 +0000 UTC m=+0.223031423 container remove b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:55:59 compute-0 systemd[1]: libpod-conmon-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope: Deactivated successfully.
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.279460153 +0000 UTC m=+0.058330506 container create 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 12:55:59 compute-0 systemd[1]: Started libpod-conmon-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope.
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.254220261 +0000 UTC m=+0.033090614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:55:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.386521833 +0000 UTC m=+0.165392176 container init 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.396727961 +0000 UTC m=+0.175598314 container start 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.400997511 +0000 UTC m=+0.179867904 container attach 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.645 244018 DEBUG nova.compute.manager [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.646 244018 DEBUG nova.compute.manager [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:55:59 compute-0 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:55:59 compute-0 ceph-mon[76335]: pgmap v2262: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 12:55:59 compute-0 jovial_easley[368009]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:55:59 compute-0 jovial_easley[368009]: --> All data devices are unavailable
Feb 25 12:55:59 compute-0 systemd[1]: libpod-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope: Deactivated successfully.
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.940364286 +0000 UTC m=+0.719234649 container died 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2-merged.mount: Deactivated successfully.
Feb 25 12:55:59 compute-0 podman[367993]: 2026-02-25 12:55:59.991487358 +0000 UTC m=+0.770357681 container remove 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 12:56:00 compute-0 systemd[1]: libpod-conmon-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope: Deactivated successfully.
Feb 25 12:56:00 compute-0 sudo[367916]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:00 compute-0 sudo[368040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:56:00 compute-0 sudo[368040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:56:00 compute-0 sudo[368040]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:00 compute-0 sudo[368065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:56:00 compute-0 sudo[368065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:56:00 compute-0 nova_compute[244014]: 2026-02-25 12:56:00.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.526638374 +0000 UTC m=+0.046337628 container create 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:56:00 compute-0 systemd[1]: Started libpod-conmon-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope.
Feb 25 12:56:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.506957949 +0000 UTC m=+0.026657203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.611676223 +0000 UTC m=+0.131375467 container init 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.619331619 +0000 UTC m=+0.139030873 container start 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.623160657 +0000 UTC m=+0.142859901 container attach 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:56:00 compute-0 friendly_vaughan[368118]: 167 167
Feb 25 12:56:00 compute-0 systemd[1]: libpod-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope: Deactivated successfully.
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.624488554 +0000 UTC m=+0.144187798 container died 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4114e1d5785febfe884b89f30ee687f8a8c8881d393b35e3cb67eb3d39ca0fb8-merged.mount: Deactivated successfully.
Feb 25 12:56:00 compute-0 podman[368102]: 2026-02-25 12:56:00.670876243 +0000 UTC m=+0.190575477 container remove 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 12:56:00 compute-0 systemd[1]: libpod-conmon-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope: Deactivated successfully.
Feb 25 12:56:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:00 compute-0 podman[368141]: 2026-02-25 12:56:00.867033095 +0000 UTC m=+0.066407353 container create ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 12:56:00 compute-0 systemd[1]: Started libpod-conmon-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope.
Feb 25 12:56:00 compute-0 podman[368141]: 2026-02-25 12:56:00.840425525 +0000 UTC m=+0.039799823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:56:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:00 compute-0 podman[368141]: 2026-02-25 12:56:00.999888073 +0000 UTC m=+0.199262371 container init ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:56:01 compute-0 podman[368141]: 2026-02-25 12:56:01.01079137 +0000 UTC m=+0.210165628 container start ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 12:56:01 compute-0 podman[368141]: 2026-02-25 12:56:01.015255136 +0000 UTC m=+0.214629444 container attach ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.188 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Successfully updated port: bde15f84-edfb-445b-b129-ec33331763f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.208 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.209 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.209 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:56:01 compute-0 happy_fermi[368157]: {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     "0": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "devices": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "/dev/loop3"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             ],
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_name": "ceph_lv0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_size": "21470642176",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "name": "ceph_lv0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "tags": {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_name": "ceph",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.crush_device_class": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.encrypted": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.objectstore": "bluestore",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_id": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.vdo": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.with_tpm": "0"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             },
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "vg_name": "ceph_vg0"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         }
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     ],
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     "1": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "devices": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "/dev/loop4"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             ],
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_name": "ceph_lv1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_size": "21470642176",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "name": "ceph_lv1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "tags": {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_name": "ceph",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.crush_device_class": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.encrypted": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.objectstore": "bluestore",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_id": "1",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.vdo": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.with_tpm": "0"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             },
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "vg_name": "ceph_vg1"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         }
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     ],
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     "2": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "devices": [
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "/dev/loop5"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             ],
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_name": "ceph_lv2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_size": "21470642176",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "name": "ceph_lv2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "tags": {
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.cluster_name": "ceph",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.crush_device_class": "",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.encrypted": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.objectstore": "bluestore",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osd_id": "2",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.vdo": "0",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:                 "ceph.with_tpm": "0"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             },
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "type": "block",
Feb 25 12:56:01 compute-0 happy_fermi[368157]:             "vg_name": "ceph_vg2"
Feb 25 12:56:01 compute-0 happy_fermi[368157]:         }
Feb 25 12:56:01 compute-0 happy_fermi[368157]:     ]
Feb 25 12:56:01 compute-0 happy_fermi[368157]: }
Feb 25 12:56:01 compute-0 systemd[1]: libpod-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope: Deactivated successfully.
Feb 25 12:56:01 compute-0 podman[368141]: 2026-02-25 12:56:01.32877411 +0000 UTC m=+0.528148368 container died ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416-merged.mount: Deactivated successfully.
Feb 25 12:56:01 compute-0 podman[368141]: 2026-02-25 12:56:01.383097943 +0000 UTC m=+0.582472191 container remove ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 12:56:01 compute-0 systemd[1]: libpod-conmon-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope: Deactivated successfully.
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.421 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:56:01 compute-0 sudo[368065]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:01 compute-0 sudo[368176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:56:01 compute-0 sudo[368176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:56:01 compute-0 sudo[368176]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:01 compute-0 sudo[368201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:56:01 compute-0 sudo[368201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG nova.compute.manager [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG nova.compute.manager [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:01 compute-0 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:01 compute-0 ceph-mon[76335]: pgmap v2263: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.850745064 +0000 UTC m=+0.056379781 container create 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:56:01 compute-0 systemd[1]: Started libpod-conmon-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope.
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.831467731 +0000 UTC m=+0.037102518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:56:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.945140187 +0000 UTC m=+0.150774904 container init 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.952573997 +0000 UTC m=+0.158208704 container start 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:56:01 compute-0 amazing_sanderson[368254]: 167 167
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.956677863 +0000 UTC m=+0.162312570 container attach 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 12:56:01 compute-0 systemd[1]: libpod-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope: Deactivated successfully.
Feb 25 12:56:01 compute-0 podman[368238]: 2026-02-25 12:56:01.957385763 +0000 UTC m=+0.163020480 container died 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 12:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a2bf63b41ad4728c354ceae0ea302dbce268672c0f80f34fd0278db1c2c5273-merged.mount: Deactivated successfully.
Feb 25 12:56:02 compute-0 podman[368238]: 2026-02-25 12:56:02.004005728 +0000 UTC m=+0.209640435 container remove 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:56:02 compute-0 systemd[1]: libpod-conmon-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope: Deactivated successfully.
Feb 25 12:56:02 compute-0 podman[368280]: 2026-02-25 12:56:02.190366675 +0000 UTC m=+0.044185098 container create 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:56:02 compute-0 systemd[1]: Started libpod-conmon-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope.
Feb 25 12:56:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:02 compute-0 podman[368280]: 2026-02-25 12:56:02.173032486 +0000 UTC m=+0.026850929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:02 compute-0 podman[368280]: 2026-02-25 12:56:02.30220592 +0000 UTC m=+0.156024423 container init 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:56:02 compute-0 podman[368280]: 2026-02-25 12:56:02.310769791 +0000 UTC m=+0.164588244 container start 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 12:56:02 compute-0 podman[368280]: 2026-02-25 12:56:02.31390565 +0000 UTC m=+0.167724143 container attach 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.353 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.355 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.379 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.541 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.557 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance network_info: |[{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.563 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start _get_guest_xml network_info=[{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.570 244018 WARNING nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.579 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.581 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.585 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.585 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
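[annotation] The 0:0:0 limits above mean "unconstrained", which nova widens to maxima of 65536 sockets/cores/threads; for a 1-vCPU guest the only factorization is 1:1:1, hence the single possible topology. A toy re-derivation of that enumeration (illustrative sketch, not nova's implementation):

    from itertools import product

    def possible_topologies(vcpus, maxima=(65536, 65536, 65536)):
        # Keep every (sockets, cores, threads) triple whose product is the vCPU count.
        bounds = [min(vcpus, m) for m in maxima]
        return [(s, c, t)
                for s, c, t in product(*(range(1, b + 1) for b in bounds))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"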
Feb 25 12:56:02 compute-0 nova_compute[244014]: 2026-02-25 12:56:02.591 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:56:02 compute-0 lvm[368396]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:56:02 compute-0 lvm[368396]: VG ceph_vg1 finished
Feb 25 12:56:02 compute-0 lvm[368395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:56:02 compute-0 lvm[368395]: VG ceph_vg0 finished
Feb 25 12:56:02 compute-0 lvm[368398]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:56:02 compute-0 lvm[368398]: VG ceph_vg2 finished
Feb 25 12:56:03 compute-0 dazzling_goldwasser[368297]: {}
Feb 25 12:56:03 compute-0 systemd[1]: libpod-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Deactivated successfully.
Feb 25 12:56:03 compute-0 podman[368280]: 2026-02-25 12:56:03.092579255 +0000 UTC m=+0.946397678 container died 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:56:03 compute-0 systemd[1]: libpod-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Consumed 1.123s CPU time.
Feb 25 12:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2-merged.mount: Deactivated successfully.
Feb 25 12:56:03 compute-0 podman[368280]: 2026-02-25 12:56:03.138625304 +0000 UTC m=+0.992443767 container remove 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
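[annotation] The short-lived ceph container above (started, printed {}, died, removed) is consistent with cephadm's periodic device scan, whose result the mgr stores under mgr/cephadm/host.compute-0.devices.0 a few lines below. Assuming the scan was a ceph-volume inventory (an assumption; the log does not show the container's command line), the manual equivalent would be:

    import json, subprocess

    # Run ceph-volume inside the same image cephadm used; --privileged so the
    # container can see host block devices (also an assumption, not in the log).
    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
         "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out))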
Feb 25 12:56:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:56:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259164080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:03 compute-0 systemd[1]: libpod-conmon-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Deactivated successfully.
Feb 25 12:56:03 compute-0 sudo[368201]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:56:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:56:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.193 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
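[annotation] The mon-dump round trip above is how nova's RBD backend discovers monitor addresses before opening an RBD connection. Reproducing it outside nova is a thin wrapper around the same CLI call that was logged:

    import json, subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout)
    # One entry per monitor in the map, e.g. [('compute-0', ...)]
    print([(m["name"], m.get("public_addr")) for m in dump["mons"]])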
Feb 25 12:56:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.227 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.235 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:03 compute-0 sudo[368415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:56:03 compute-0 sudo[368415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:56:03 compute-0 sudo[368415]: pam_unix(sudo:session): session closed for user root
Feb 25 12:56:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:56:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671392504' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.759 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.761 244018 DEBUG nova.virt.libvirt.vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:57Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.762 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.763 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
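[annotation] The Converting/Converted pair above maps nova's VIF dict onto an os-vif object; every field of the converted object can be read straight out of the dict. A schematic of that mapping using a stand-in dataclass (the real class lives in os_vif's objects module; this one is purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif_sketch(vif: dict) -> VIFOpenVSwitchSketch:
        d = vif["details"]
        return VIFOpenVSwitchSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=d["bridge_name"],        # 'br-int'
            vif_name=vif["devname"],             # 'tapbde15f84-ed'
            has_traffic_filtering=d["port_filter"],
            active=vif["active"],
        )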
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.764 244018 DEBUG nova.objects.instance [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.778 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <uuid>e728a9dc-bb04-4a25-bcad-b787a044bc0b</uuid>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <name>instance-00000089</name>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-563089963</nova:name>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:56:02</nova:creationTime>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <nova:port uuid="bde15f84-edfb-445b-b129-ec33331763f0">
Feb 25 12:56:03 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <system>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="serial">e728a9dc-bb04-4a25-bcad-b787a044bc0b</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="uuid">e728a9dc-bb04-4a25-bcad-b787a044bc0b</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </system>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <os>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </os>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <features>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </features>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk">
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config">
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </source>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:56:03 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:99:9e:1f"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <target dev="tapbde15f84-ed"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/console.log" append="off"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <video>
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </video>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:56:03 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:56:03 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:56:03 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:56:03 compute-0 nova_compute[244014]: </domain>
Feb 25 12:56:03 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
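[annotation] With the full domain XML in hand, the interesting invariants (q35 machine type, rbd-backed root disk) can be checked mechanically. A stdlib-only spot check, assuming the XML above has been saved to a local file (the filename below is hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-00000089.xml").getroot()
    assert dom.find("./os/type").get("machine") == "q35"
    src = dom.find("./devices/disk[@device='disk']/source")
    assert src.get("protocol") == "rbd"
    assert src.get("name") == "vms/e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk"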
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Preparing to wait for external event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG nova.virt.libvirt.vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:57Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.781 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.781 244018 DEBUG os_vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbde15f84-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbde15f84-ed, col_values=(('external_ids', {'iface-id': 'bde15f84-edfb-445b-b129-ec33331763f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:9e:1f', 'vm-uuid': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:03 compute-0 NetworkManager[49836]: <info>  [1772024163.7946] manager: (tapbde15f84-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.800 244018 INFO os_vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed')
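[annotation] The AddPortCommand/DbSetCommand transaction that just succeeded is equivalent to a single ovs-vsctl invocation; spelled out with the values from the log, it would be:

    import subprocess

    port = "tapbde15f84-ed"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=bde15f84-edfb-445b-b129-ec33331763f0",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:99:9e:1f",
         "external_ids:vm-uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b"],
        check=True)  # ovn-controller reacts to the iface-id below, claiming the lport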
Feb 25 12:56:03 compute-0 ceph-mon[76335]: pgmap v2264: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 12:56:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2259164080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:56:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:56:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/671392504' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.846 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.846 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.847 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:99:9e:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.847 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Using config drive
Feb 25 12:56:03 compute-0 nova_compute[244014]: 2026-02-25 12:56:03.871 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.285 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating config drive at /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.293 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3kiflx1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.437 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3kiflx1d" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.479 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.486 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.525 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.527 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.547 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.661 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.662 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deleting local config drive /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config because it was imported into RBD.
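[annotation] The config-drive sequence above has three steps: mkisofs builds a config-2 ISO from a temporary metadata directory, rbd import copies it into the vms pool, and the local file is deleted. Condensed into a script (the staging directory is per-boot; /tmp/metadata-dir below is a placeholder for the tmp name in the log):

    import os, subprocess

    inst = "e728a9dc-bb04-4a25-bcad-b787a044bc0b"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/metadata-dir"], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)  # the guest reads it from RBD from now on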
Feb 25 12:56:04 compute-0 kernel: tapbde15f84-ed: entered promiscuous mode
Feb 25 12:56:04 compute-0 NetworkManager[49836]: <info>  [1772024164.7115] manager: (tapbde15f84-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:04 compute-0 ovn_controller[147040]: 2026-02-25T12:56:04Z|01437|binding|INFO|Claiming lport bde15f84-edfb-445b-b129-ec33331763f0 for this chassis.
Feb 25 12:56:04 compute-0 ovn_controller[147040]: 2026-02-25T12:56:04Z|01438|binding|INFO|bde15f84-edfb-445b-b129-ec33331763f0: Claiming fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 12:56:04 compute-0 systemd-udevd[368393]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:56:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:04 compute-0 ovn_controller[147040]: 2026-02-25T12:56:04Z|01439|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 ovn-installed in OVS
Feb 25 12:56:04 compute-0 NetworkManager[49836]: <info>  [1772024164.7398] device (tapbde15f84-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:04 compute-0 NetworkManager[49836]: <info>  [1772024164.7406] device (tapbde15f84-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:04 compute-0 nova_compute[244014]: 2026-02-25 12:56:04.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:04 compute-0 systemd-machined[210048]: New machine qemu-169-instance-00000089.
Feb 25 12:56:04 compute-0 ovn_controller[147040]: 2026-02-25T12:56:04Z|01440|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 up in Southbound
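[annotation] Once ovn-controller reports the lport claimed and up, the binding is visible in the OVN Southbound DB. A read-only confirmation (values copied from the log; requires access to the SB DB socket):

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=up,chassis", "find", "Port_Binding",
         "logical_port=bde15f84-edfb-445b-b129-ec33331763f0"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # expect 'true' plus the chassis UUID for compute-0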
Feb 25 12:56:04 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000089.
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.775 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:9e:1f 10.100.0.11'], port_security=['fa:16:3e:99:9e:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d8168d2-f5f8-4f41-af80-56661b6a1e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=bde15f84-edfb-445b-b129-ec33331763f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.777 157129 INFO neutron.agent.ovn.metadata.agent [-] Port bde15f84-edfb-445b-b129-ec33331763f0 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 bound to our chassis
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.779 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.789 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfecc4e-65ab-44ae-b3f0-50aa91617c90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.790 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74cacb3c-01 in ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
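[annotation] The VETH step above boils down to a veth pair with one end moved into the ovnmeta- namespace, which is what the privsep/netlink replies that follow show being configured. An iproute2 rendition, with interface and namespace names copied from the log (the agent may create the namespace differently; the sequence below is a sketch):

    import subprocess

    ns = "ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349"
    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", "tap74cacb3c-00", "type", "veth",
         "peer", "name", "tap74cacb3c-01"],
        ["ip", "link", "set", "tap74cacb3c-01", "netns", ns],
        ["ip", "link", "set", "tap74cacb3c-00", "up"],
        ["ip", "-n", ns, "link", "set", "tap74cacb3c-01", "up"],
    ):
        subprocess.run(cmd, check=True)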
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.793 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74cacb3c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.793 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34a83d7d-bdce-4b4a-9057-e0feb8686942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[224810bc-deed-4ec0-b4a0-cb59ac1f8535]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.807 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a31d9a2-5c99-45d9-8226-a351cfb5f610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc999532-59e8-438f-ad75-7418a27ba054]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.870 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[61c7e0ea-0ecc-4589-97c5-f49a4f03f7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.876 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dedd3e63-f6b0-47a6-9e2f-c8a894efac7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 NetworkManager[49836]: <info>  [1772024164.8780] manager: (tap74cacb3c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d8459c6c-0d8a-48bd-8e81-6d483f10a8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.918 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[449a6dd0-454c-48c9-b0bd-bdfdd57937d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 NetworkManager[49836]: <info>  [1772024164.9453] device (tap74cacb3c-00): carrier: link connected
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.957 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[83af35b7-2bc6-4373-84bc-83114ace39b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.972 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[086531c9-07ce-4690-8fe6-adaa99b4b627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368583, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:04 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e038546b-4b47-46d7-8ffc-ba6991799041]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:f194'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613435, 'tstamp': 613435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368584, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.007 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9abb27-b272-4f5b-af54-274a27918556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368585, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d23ecf72-7995-45a8-b770-f4f553c90807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.105 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[467d45f4-1387-4b03-8668-66d4b4a650fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.108 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:05 compute-0 NetworkManager[49836]: <info>  [1772024165.1112] manager: (tap74cacb3c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Feb 25 12:56:05 compute-0 kernel: tap74cacb3c-00: entered promiscuous mode
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.113 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:05 compute-0 ovn_controller[147040]: 2026-02-25T12:56:05Z|01441|binding|INFO|Releasing lport b1eb3633-3950-4cbb-8a36-6968fb223904 from this chassis (sb_readonly=0)
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.116 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f10f70e4-f0bb-43da-9016-5228888c2b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.118 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:56:05 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.119 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'env', 'PROCESS_TAG=haproxy-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74cacb3c-0135-4e0b-9776-478b5f7a3349.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.385 244018 DEBUG nova.compute.manager [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.386 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.386 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.387 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.387 244018 DEBUG nova.compute.manager [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Processing event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:56:05 compute-0 podman[368655]: 2026-02-25 12:56:05.485592118 +0000 UTC m=+0.061047733 container create 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.510 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.511 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5100322, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.512 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Started (Lifecycle Event)
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.518 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.523 244018 INFO nova.virt.libvirt.driver [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance spawned successfully.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.523 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:56:05 compute-0 systemd[1]: Started libpod-conmon-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope.
Feb 25 12:56:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e045d31172de33fbbfdc9cfbf5c4682002fbe9f3cdd63cf75b833a745b1cae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:56:05 compute-0 podman[368655]: 2026-02-25 12:56:05.457378432 +0000 UTC m=+0.032834047 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.559 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:05 compute-0 podman[368655]: 2026-02-25 12:56:05.561432897 +0000 UTC m=+0.136888522 container init 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:56:05 compute-0 podman[368655]: 2026-02-25 12:56:05.566168711 +0000 UTC m=+0.141624296 container start 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.569 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.576 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.577 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.579 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.579 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:05 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : New worker (368677) forked
Feb 25 12:56:05 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : Loading success.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.601 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.601 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5184069, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.602 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Paused (Lifecycle Event)
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.645 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.652 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5190613, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.653 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Resumed (Lifecycle Event)
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.676 244018 INFO nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 8.18 seconds to spawn the instance on the hypervisor.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.676 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.677 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.684 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.711 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.737 244018 INFO nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 9.25 seconds to build instance.
Feb 25 12:56:05 compute-0 nova_compute[244014]: 2026-02-25 12:56:05.759 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:05 compute-0 ceph-mon[76335]: pgmap v2265: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:05 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 12:56:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.464 244018 DEBUG nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.464 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:07 compute-0 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 WARNING nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.
Feb 25 12:56:07 compute-0 ceph-mon[76335]: pgmap v2266: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:56:07 compute-0 ovn_controller[147040]: 2026-02-25T12:56:07Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:35:f1 10.100.0.4
Feb 25 12:56:07 compute-0 ovn_controller[147040]: 2026-02-25T12:56:07Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:35:f1 10.100.0.4
Feb 25 12:56:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 12:56:08 compute-0 nova_compute[244014]: 2026-02-25 12:56:08.795 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:09 compute-0 nova_compute[244014]: 2026-02-25 12:56:09.708 244018 DEBUG nova.compute.manager [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:09 compute-0 nova_compute[244014]: 2026-02-25 12:56:09.708 244018 DEBUG nova.compute.manager [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:09 compute-0 nova_compute[244014]: 2026-02-25 12:56:09.709 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:09 compute-0 nova_compute[244014]: 2026-02-25 12:56:09.709 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:09 compute-0 nova_compute[244014]: 2026-02-25 12:56:09.710 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:09 compute-0 ceph-mon[76335]: pgmap v2267: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 12:56:10 compute-0 nova_compute[244014]: 2026-02-25 12:56:10.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:10.168 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:10.170 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:56:10 compute-0 nova_compute[244014]: 2026-02-25 12:56:10.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:10 compute-0 podman[368686]: 2026-02-25 12:56:10.72451754 +0000 UTC m=+0.058206593 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 12:56:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 12:56:10 compute-0 podman[368687]: 2026-02-25 12:56:10.752643163 +0000 UTC m=+0.086939193 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:56:11 compute-0 nova_compute[244014]: 2026-02-25 12:56:11.134 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:11 compute-0 nova_compute[244014]: 2026-02-25 12:56:11.135 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:11 compute-0 nova_compute[244014]: 2026-02-25 12:56:11.171 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:11 compute-0 ceph-mon[76335]: pgmap v2268: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 12:56:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 12:56:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:13 compute-0 nova_compute[244014]: 2026-02-25 12:56:13.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:13 compute-0 ceph-mon[76335]: pgmap v2269: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 12:56:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:56:15 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:15.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:15 compute-0 nova_compute[244014]: 2026-02-25 12:56:15.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:15 compute-0 ceph-mon[76335]: pgmap v2270: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:56:16 compute-0 ovn_controller[147040]: 2026-02-25T12:56:16Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 12:56:16 compute-0 ovn_controller[147040]: 2026-02-25T12:56:16Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.241 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.302 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.554 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.555 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.565 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.565 244018 INFO nova.compute.claims [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:56:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.794 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.839 244018 DEBUG nova.compute.manager [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.840 244018 DEBUG nova.compute.manager [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.840 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.841 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.841 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.953 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.953 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.954 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.954 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.955 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.957 244018 INFO nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Terminating instance
Feb 25 12:56:16 compute-0 nova_compute[244014]: 2026-02-25 12:56:16.959 244018 DEBUG nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:56:17 compute-0 kernel: tapdd7b2b79-55 (unregistering): left promiscuous mode
Feb 25 12:56:17 compute-0 NetworkManager[49836]: <info>  [1772024177.0959] device (tapdd7b2b79-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:56:17 compute-0 ovn_controller[147040]: 2026-02-25T12:56:17Z|01442|binding|INFO|Releasing lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 from this chassis (sb_readonly=0)
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 ovn_controller[147040]: 2026-02-25T12:56:17Z|01443|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 down in Southbound
Feb 25 12:56:17 compute-0 ovn_controller[147040]: 2026-02-25T12:56:17Z|01444|binding|INFO|Removing iface tapdd7b2b79-55 ovn-installed in OVS
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], port_security=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec1:35f1/64', 'neutron:device_id': '7102d0db-32cc-4a5e-8282-cf5266710872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd7b2b79-55bd-4df0-a79a-3344d8c79c92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.126 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b unbound from our chassis
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9add1fe0-1e66-4c23-921c-5a9d82411ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Deactivated successfully.
Feb 25 12:56:17 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Consumed 12.339s CPU time.
Feb 25 12:56:17 compute-0 systemd-machined[210048]: Machine qemu-168-instance-00000088 terminated.
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.202 244018 INFO nova.virt.libvirt.driver [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance destroyed successfully.
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.202 244018 DEBUG nova.objects.instance [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.204 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ca698616-f2b2-403d-b10d-25396718701c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.212 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e99cf-aefc-4f3e-8cc9-9286db0ec482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.218 244018 DEBUG nova.virt.libvirt.vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:55Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.219 244018 DEBUG nova.network.os_vif_util [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.221 244018 DEBUG nova.network.os_vif_util [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.221 244018 DEBUG os_vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.224 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd7b2b79-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.232 244018 INFO os_vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55')
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.240 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ada6ff47-f71e-4f5a-ad9b-0b57db3d2ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.255 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88a5e857-8fcf-4b43-bf93-c2333394c06d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368780, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f80c3b-c06a-456c-8b93-64e272ab440e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609116, 'tstamp': 609116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609119, 'tstamp': 609119}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.272 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.277 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067882301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.404 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.412 244018 DEBUG nova.compute.provider_tree [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.432 244018 DEBUG nova.scheduler.client.report [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.452 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.453 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.497 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.498 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.517 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.532 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.610 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.611 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.612 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating image(s)
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.642 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.681 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.716 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.722 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.826 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.828 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.830 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.830 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.864 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:17 compute-0 nova_compute[244014]: 2026-02-25 12:56:17.870 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:17 compute-0 ceph-mon[76335]: pgmap v2271: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 12:56:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3067882301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:18 compute-0 nova_compute[244014]: 2026-02-25 12:56:18.219 244018 DEBUG nova.policy [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:56:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.015 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.108 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.159 244018 INFO nova.virt.libvirt.driver [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deleting instance files /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872_del
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.160 244018 INFO nova.virt.libvirt.driver [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deletion of /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872_del complete
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.214 244018 DEBUG nova.objects.instance [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.230 244018 INFO nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 2.27 seconds to destroy the instance on the hypervisor.
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG oslo.service.loopingcall [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG nova.network.neutron [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.238 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.239 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Ensure instance console log exists: /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.239 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:19 compute-0 nova_compute[244014]: 2026-02-25 12:56:19.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:20 compute-0 ceph-mon[76335]: pgmap v2272: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.324 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.326 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.326 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:56:20 compute-0 nova_compute[244014]: 2026-02-25 12:56:20.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 25 12:56:22 compute-0 ceph-mon[76335]: pgmap v2273: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.232 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Successfully created port: 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.433 244018 DEBUG nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.435 244018 DEBUG nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.435 244018 WARNING nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received unexpected event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with vm_state active and task_state deleting.
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.437 244018 DEBUG nova.network.neutron [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.473 244018 INFO nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 3.24 seconds to deallocate network for instance.
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.521 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.522 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.551 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.551 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.576 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.618 244018 DEBUG oslo_concurrency.processutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.980 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Successfully updated port: 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.998 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.999 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:22 compute-0 nova_compute[244014]: 2026-02-25 12:56:22.999 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:56:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147388156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.185 244018 DEBUG oslo_concurrency.processutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.191 244018 DEBUG nova.compute.provider_tree [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:56:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3147388156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.227 244018 DEBUG nova.scheduler.client.report [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.237 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.312 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.358 244018 INFO nova.scheduler.client.report [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 7102d0db-32cc-4a5e-8282-cf5266710872
Feb 25 12:56:23 compute-0 nova_compute[244014]: 2026-02-25 12:56:23.436 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:24 compute-0 ceph-mon[76335]: pgmap v2274: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.521 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-deleted-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.521 244018 INFO nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Neutron deleted interface dd7b2b79-55bd-4df0-a79a-3344d8c79c92; detaching it from the instance and deleting it from the info cache
Feb 25 12:56:24 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.522 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 25 12:56:24 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.526 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Detach interface failed, port_id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92, reason: Instance 7102d0db-32cc-4a5e-8282-cf5266710872 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.526 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.527 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.527 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.561 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.585 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.586 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance network_info: |[{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.586 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.587 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.592 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start _get_guest_xml network_info=[{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.599 244018 WARNING nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.610 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.611 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.615 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.616 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
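[editor's note] The two probes above look for a usable cpu controller first under cgroups v1, then v2; the v2 check amounts to reading the unified hierarchy's controller list (a sketch, not Nova's exact code):

    # cgroups v2 exposes enabled controllers as a space-separated list.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)   # True on this host, per the log line above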
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.617 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.618 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.618 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.620 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.620 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.621 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.621 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.622 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
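[editor's note] The topology search logged above reduces to enumerating sockets x cores x threads factorizations of the vCPU count under the 65536 per-axis cap; a toy equivalent showing why one vCPU yields exactly the 1:1:1 result printed in the "Possible topologies" line:

    vcpus, limit = 1, 65536
    topologies = [(s, c, t)
                  for s in range(1, min(vcpus, limit) + 1)
                  for c in range(1, min(vcpus, limit) + 1)
                  for t in range(1, min(vcpus, limit) + 1)
                  if s * c * t == vcpus]
    print(topologies)   # [(1, 1, 1)]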
Feb 25 12:56:24 compute-0 nova_compute[244014]: 2026-02-25 12:56:24.627 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:56:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112457902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.195 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2112457902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.321 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.326 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:56:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646772582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.950 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.952 244018 DEBUG nova.virt.libvirt.vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:17Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.953 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.954 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.955 244018 DEBUG nova.objects.instance [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.974 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <uuid>859fd309-32ea-4025-8312-ddecfa0d6a7f</uuid>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <name>instance-0000008a</name>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1835970958</nova:name>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:56:24</nova:creationTime>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <nova:port uuid="2a99d3aa-08e4-4712-a8b3-984de83b4b60">
Feb 25 12:56:25 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <system>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="serial">859fd309-32ea-4025-8312-ddecfa0d6a7f</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="uuid">859fd309-32ea-4025-8312-ddecfa0d6a7f</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </system>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <os>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </os>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <features>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </features>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/859fd309-32ea-4025-8312-ddecfa0d6a7f_disk">
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config">
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </source>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:56:25 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:7f:bf:de"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <target dev="tap2a99d3aa-08"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/console.log" append="off"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <video>
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </video>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:56:25 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:56:25 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:56:25 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:56:25 compute-0 nova_compute[244014]: </domain>
Feb 25 12:56:25 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
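[editor's note] Once _get_guest_xml returns, the driver hands this document to libvirt; in terms of the plain libvirt-python binding the final step is roughly as follows (a sketch: Nova goes through its own host/guest wrapper layer, and the file path here is hypothetical):

    import libvirt

    # xml holds the <domain> document printed above, saved to a file.
    xml = open('/tmp/domain.xml').read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # boot it (instance-0000008a)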
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.977 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Preparing to wait for external event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.978 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.979 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.979 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
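[editor's note] prepare_for_instance_event registers a latch that the later network-vif-plugged notification from Neutron will release; stripped of Nova's plumbing, the pattern is just a keyed event object (toy version, with the key values copied from the log and the 300 s deadline mirroring Nova's default vif_plugging_timeout):

    import threading

    # Latch created before the guest is defined, released when the external
    # event arrives, waited on before the instance is declared ACTIVE.
    events = {}
    key = ('859fd309-32ea-4025-8312-ddecfa0d6a7f',
           'network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60')
    events[key] = threading.Event()
    # ... define and launch the domain here ...
    events[key].wait(timeout=300)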
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.981 244018 DEBUG nova.virt.libvirt.vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:17Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.981 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.982 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.983 244018 DEBUG os_vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.985 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.985 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.990 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a99d3aa-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.990 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a99d3aa-08, col_values=(('external_ids', {'iface-id': '2a99d3aa-08e4-4712-a8b3-984de83b4b60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:bf:de', 'vm-uuid': '859fd309-32ea-4025-8312-ddecfa0d6a7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
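[editor's note] The AddPortCommand/DbSetCommand pair above is an ovsdbapp transaction against the local Open vSwitch database; a minimal sketch of the same two commands (the db.sock path is an assumption, all other values come from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    # One transaction, two commands, mirroring txn n=1 idx=0/idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap2a99d3aa-08', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2a99d3aa-08',
            ('external_ids', {
                'iface-id': '2a99d3aa-08e4-4712-a8b3-984de83b4b60',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:7f:bf:de',
                'vm-uuid': '859fd309-32ea-4025-8312-ddecfa0d6a7f'})))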
Feb 25 12:56:25 compute-0 NetworkManager[49836]: <info>  [1772024185.9938] manager: (tap2a99d3aa-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:56:25 compute-0 nova_compute[244014]: 2026-02-25 12:56:25.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.000 244018 INFO os_vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08')
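[editor's note] The plug itself goes through the os-vif library; the public surface the log reflects is roughly the following sketch, where the VIF, network, and instance objects are hand-built here with field values copied from the VIFOpenVSwitch repr above:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # discover and load the os-vif plugins

    port_profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='2a99d3aa-08e4-4712-a8b3-984de83b4b60')
    my_vif = vif.VIFOpenVSwitch(
        id='2a99d3aa-08e4-4712-a8b3-984de83b4b60',
        address='fa:16:3e:7f:bf:de',
        plugin='ovs',
        vif_name='tap2a99d3aa-08',
        bridge_name='br-int',
        port_profile=port_profile,
        network=network.Network(id='74cacb3c-0135-4e0b-9776-478b5f7a3349'))
    inst = instance_info.InstanceInfo(
        uuid='859fd309-32ea-4025-8312-ddecfa0d6a7f',
        name='instance-0000008a')
    os_vif.plug(my_vif, inst)   # -> "Successfully plugged vif ..."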
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.075 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.076 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.077 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:7f:bf:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.077 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Using config drive
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.111 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
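[editor's note] rbd_utils logs this miss by attempting to open the image; with the python-rbd bindings directly, the existence check looks roughly like this (the 'vms' pool name is taken from the disk XML above; the real code wraps this in Nova's RBDDriver):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config').close()
        print('image exists')
    except rbd.ImageNotFound:
        print('rbd image does not exist')   # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()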
Feb 25 12:56:26 compute-0 ceph-mon[76335]: pgmap v2275: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/646772582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.387 244018 DEBUG nova.compute.manager [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.388 244018 DEBUG nova.compute.manager [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.388 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.389 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.389 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.476 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.479 244018 INFO nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Terminating instance
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.480 244018 DEBUG nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:56:26 compute-0 kernel: tap44908825-99 (unregistering): left promiscuous mode
Feb 25 12:56:26 compute-0 NetworkManager[49836]: <info>  [1772024186.5729] device (tap44908825-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:56:26 compute-0 ovn_controller[147040]: 2026-02-25T12:56:26Z|01445|binding|INFO|Releasing lport 44908825-991d-42d4-9bad-18d2a1f5fe9c from this chassis (sb_readonly=0)
Feb 25 12:56:26 compute-0 ovn_controller[147040]: 2026-02-25T12:56:26Z|01446|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c down in Southbound
Feb 25 12:56:26 compute-0 ovn_controller[147040]: 2026-02-25T12:56:26Z|01447|binding|INFO|Removing iface tap44908825-99 ovn-installed in OVS
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.594 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], port_security=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fee4:4521/64', 'neutron:device_id': 'dfb7287a-5448-4579-8938-fe909fbf54c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=44908825-991d-42d4-9bad-18d2a1f5fe9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.597 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 44908825-991d-42d4-9bad-18d2a1f5fe9c in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b unbound from our chassis
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.600 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.601 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6f79fe-3713-488f-b06c-2db9db7888e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.602 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b namespace which is not needed anymore
Feb 25 12:56:26 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Feb 25 12:56:26 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 14.386s CPU time.
Feb 25 12:56:26 compute-0 systemd-machined[210048]: Machine qemu-167-instance-00000087 terminated.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.635 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating config drive at /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.642 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4clsxbcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.722 244018 INFO nova.virt.libvirt.driver [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance destroyed successfully.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.724 244018 DEBUG nova.objects.instance [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:26 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : haproxy version is 2.8.14-c23fe91
Feb 25 12:56:26 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : path to executable is /usr/sbin/haproxy
Feb 25 12:56:26 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [WARNING]  (367027) : Exiting Master process...
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.740 244018 DEBUG nova.virt.libvirt.vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:56:26 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [ALERT]    (367027) : Current worker (367029) exited with code 143 (Terminated)
Feb 25 12:56:26 compute-0 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [WARNING]  (367027) : All workers exited. Exiting... (0)
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.741 244018 DEBUG nova.network.os_vif_util [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.743 244018 DEBUG nova.network.os_vif_util [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:26 compute-0 systemd[1]: libpod-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope: Deactivated successfully.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.743 244018 DEBUG os_vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.748 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44908825-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:26 compute-0 podman[369095]: 2026-02-25 12:56:26.751122223 +0000 UTC m=+0.057905865 container died 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.759 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.760 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.761 244018 INFO os_vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99')
Feb 25 12:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635-userdata-shm.mount: Deactivated successfully.
Feb 25 12:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-36eb1392c95fbb33162d7879fcd33b904283c8f0341d2d03795af7b704d65fdf-merged.mount: Deactivated successfully.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.782 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4clsxbcm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:26 compute-0 podman[369095]: 2026-02-25 12:56:26.798039526 +0000 UTC m=+0.104823198 container cleanup 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:56:26 compute-0 systemd[1]: libpod-conmon-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope: Deactivated successfully.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.810 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.816 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.849 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:26 compute-0 podman[369168]: 2026-02-25 12:56:26.893437177 +0000 UTC m=+0.065081727 container remove 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8250f6b-1dea-42e6-9ea2-a06b944b15b2]: (4, ('Wed Feb 25 12:56:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b (428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635)\n428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635\nWed Feb 25 12:56:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b (428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635)\n428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08012f68-bc99-4d0c-8251-5813a636e125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.901 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:26 compute-0 kernel: tap9f82a56f-a0: left promiscuous mode
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[403f8d05-412e-4cde-a5ae-e97cf8c39c52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.930 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33778e97-d5d1-4904-92d4-c788e9116bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5958a241-ddd2-44d3-8350-469efd669e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.946 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64eb8071-a9f3-4acf-a6ce-572da1bb431f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609100, 'reachable_time': 15579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369206, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.948 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:56:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.948 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[be7efc2b-36a2-490f-9e50-e7428e97052a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f82a56f\x2da5c3\x2d446e\x2d9b75\x2d7dc69c93c56b.mount: Deactivated successfully.
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.989 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:26 compute-0 nova_compute[244014]: 2026-02-25 12:56:26.990 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deleting local config drive /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config because it was imported into RBD.
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.029 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.031 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.031 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:56:27 compute-0 kernel: tap2a99d3aa-08: entered promiscuous mode
Feb 25 12:56:27 compute-0 systemd-udevd[369073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:56:27 compute-0 NetworkManager[49836]: <info>  [1772024187.0365] manager: (tap2a99d3aa-08): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Feb 25 12:56:27 compute-0 ovn_controller[147040]: 2026-02-25T12:56:27Z|01448|binding|INFO|Claiming lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 for this chassis.
Feb 25 12:56:27 compute-0 ovn_controller[147040]: 2026-02-25T12:56:27Z|01449|binding|INFO|2a99d3aa-08e4-4712-a8b3-984de83b4b60: Claiming fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:27 compute-0 ovn_controller[147040]: 2026-02-25T12:56:27Z|01450|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 ovn-installed in OVS
Feb 25 12:56:27 compute-0 ovn_controller[147040]: 2026-02-25T12:56:27Z|01451|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 up in Southbound
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.043 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:bf:de 10.100.0.4'], port_security=['fa:16:3e:7f:bf:de 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '859fd309-32ea-4025-8312-ddecfa0d6a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf564118-674f-4f5e-bcfa-9616c32835a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2a99d3aa-08e4-4712-a8b3-984de83b4b60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.044 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 bound to our chassis
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.045 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 12:56:27 compute-0 NetworkManager[49836]: <info>  [1772024187.0527] device (tap2a99d3aa-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:56:27 compute-0 NetworkManager[49836]: <info>  [1772024187.0532] device (tap2a99d3aa-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.056 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[860b12b6-4311-4e48-967a-760fd6c2cb90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 systemd-machined[210048]: New machine qemu-170-instance-0000008a.
Feb 25 12:56:27 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.084 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8af97a65-23a1-414d-a3cf-b4a54009d54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.087 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f03a71f8-6880-45b3-b913-ee2b86be302a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.116 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46811807-6095-46dd-9ac7-457381d14bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.136 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb31767-83dd-49d1-8c7f-70d84336e5d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369236, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.152 244018 INFO nova.virt.libvirt.driver [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deleting instance files /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6_del
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.154 244018 INFO nova.virt.libvirt.driver [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deletion of /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6_del complete
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64986f35-c74d-46b2-be01-0423b93ca252]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613447, 'tstamp': 613447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369237, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613450, 'tstamp': 613450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369237, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.156 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.159 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.161 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.239 244018 INFO nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.240 244018 DEBUG oslo.service.loopingcall [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.240 244018 DEBUG nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.241 244018 DEBUG nova.network.neutron [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.468 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024187.4673276, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.469 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Started (Lifecycle Event)
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.493 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.504 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024187.4674957, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.504 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Paused (Lifecycle Event)
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.527 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.532 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:56:27 compute-0 nova_compute[244014]: 2026-02-25 12:56:27.569 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.129 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.130 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.164 244018 DEBUG nova.network.neutron [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.167 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.190 244018 INFO nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 0.95 seconds to deallocate network for instance.
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.234 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.234 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.318 244018 DEBUG oslo_concurrency.processutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:28 compute-0 ceph-mon[76335]: pgmap v2276: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 12:56:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:56:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940670755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.879 244018 DEBUG oslo_concurrency.processutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
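Nova's RBD image backend sizes its DISK_GB inventory from the `ceph df` call that just returned above. A hedged sketch of the same probe, assuming only the stock JSON layout of `ceph df --format=json` (a top-level "stats" object with byte counters):

    import json
    import subprocess

    def ceph_cluster_free_gib():
        # Same command line the resource tracker runs in the log above.
        raw = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        stats = json.loads(raw)["stats"]
        return stats["total_avail_bytes"] / 1024 ** 3   # ~59 GiB per the pgmap lines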
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.885 244018 DEBUG nova.compute.provider_tree [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.904 244018 DEBUG nova.scheduler.client.report [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.930 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
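The Acquiring / acquired / released triplets around "compute_resources" are oslo.concurrency's named-lock decorator at work; a minimal sketch of the pattern, assuming nothing beyond the public lockutils API (the waited/held timings it logs are exactly the ones shown above):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Body runs with the in-process "compute_resources" lock held, so
        # concurrent resource-tracker updates on this host are serialized.
        pass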
Feb 25 12:56:28 compute-0 nova_compute[244014]: 2026-02-25 12:56:28.956 244018 INFO nova.scheduler.client.report [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dfb7287a-5448-4579-8938-fe909fbf54c6
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.026 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.124 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.125 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.125 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.126 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.126 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 WARNING nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received unexpected event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with vm_state deleted and task_state None.
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Processing event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.129 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.129 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.131 244018 WARNING nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received unexpected event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with vm_state building and task_state spawning.
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.131 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-deleted-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.132 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
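The network-vif-plugged / network-vif-deleted traffic above is Neutron notifying Nova through the os-server-external-events API: when no waiter is registered the event is dropped with the "unexpected event" warning (the dfb7287a instance is already deleted), while for 859fd309 it releases the wait completed at 12:56:29.132. A hedged sketch of the sender's call shape (auth and endpoint discovery omitted; names mirror the logged port and instance):

    import requests

    def send_vif_plugged(nova_url, token, server_uuid, port_id):
        body = {"events": [{
            "name": "network-vif-plugged",
            "server_uuid": server_uuid,
            "tag": port_id,          # Neutron port UUID, e.g. 2a99d3aa-...
            "status": "completed",
        }]}
        return requests.post(nova_url + "/os-server-external-events",
                             json=body, headers={"X-Auth-Token": token})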
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.137 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024189.1365752, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.137 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Resumed (Lifecycle Event)
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.140 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.145 244018 INFO nova.virt.libvirt.driver [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance spawned successfully.
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.145 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.186 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.190 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.190 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.191 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.191 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.192 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.192 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.215 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] During sync_power_state the instance has a pending task (spawning). Skip.
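The numeric states in the sync message above follow Nova's power_state module: the database still says 0 (NOSTATE) while libvirt reports 1 (RUNNING), and because task_state is still "spawning" the sync is skipped rather than forced. A quick decoding table, with values per nova.compute.power_state:

    POWER_STATES = {
        0: "NOSTATE",    # DB value during spawn, as logged above
        1: "RUNNING",    # what libvirt reports once the guest is resumed
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }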
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.317 244018 INFO nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 11.71 seconds to spawn the instance on the hypervisor.
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.318 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2940670755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.402 244018 INFO nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 12.88 seconds to build instance.
Feb 25 12:56:29 compute-0 nova_compute[244014]: 2026-02-25 12:56:29.426 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:30 compute-0 ceph-mon[76335]: pgmap v2277: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 12:56:30 compute-0 nova_compute[244014]: 2026-02-25 12:56:30.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:56:31
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'vms']
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:56:31 compute-0 nova_compute[244014]: 2026-02-25 12:56:31.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:56:32 compute-0 nova_compute[244014]: 2026-02-25 12:56:32.196 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024177.195008, 7102d0db-32cc-4a5e-8282-cf5266710872 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:32 compute-0 nova_compute[244014]: 2026-02-25 12:56:32.197 244018 INFO nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Stopped (Lifecycle Event)
Feb 25 12:56:32 compute-0 nova_compute[244014]: 2026-02-25 12:56:32.222 244018 DEBUG nova.compute.manager [None req-fd89bb00-9117-4391-9ec6-9e26ee3cc708 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:32 compute-0 ceph-mon[76335]: pgmap v2278: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 12:56:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.514 244018 DEBUG nova.compute.manager [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.514 244018 DEBUG nova.compute.manager [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.515 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.515 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.516 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:56:33 compute-0 nova_compute[244014]: 2026-02-25 12:56:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:56:34 compute-0 nova_compute[244014]: 2026-02-25 12:56:34.165 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:34 compute-0 nova_compute[244014]: 2026-02-25 12:56:34.166 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:34 compute-0 nova_compute[244014]: 2026-02-25 12:56:34.166 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:56:34 compute-0 nova_compute[244014]: 2026-02-25 12:56:34.167 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:34 compute-0 ovn_controller[147040]: 2026-02-25T12:56:34Z|01452|binding|INFO|Releasing lport b1eb3633-3950-4cbb-8a36-6968fb223904 from this chassis (sb_readonly=0)
Feb 25 12:56:34 compute-0 nova_compute[244014]: 2026-02-25 12:56:34.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:34 compute-0 ceph-mon[76335]: pgmap v2279: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Feb 25 12:56:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:56:35 compute-0 nova_compute[244014]: 2026-02-25 12:56:35.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:35 compute-0 nova_compute[244014]: 2026-02-25 12:56:35.659 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:35 compute-0 nova_compute[244014]: 2026-02-25 12:56:35.660 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:35 compute-0 nova_compute[244014]: 2026-02-25 12:56:35.799 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.153 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.190 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.192 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.243 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.244 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.244 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.245 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.246 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:36 compute-0 ceph-mon[76335]: pgmap v2280: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:56:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611800794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.853 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.964 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.965 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.969 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:56:36 compute-0 nova_compute[244014]: 2026-02-25 12:56:36.970 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.168 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.169 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3213MB free_disk=59.92087952699512GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.170 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.170 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2611800794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.433 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e728a9dc-bb04-4a25-bcad-b787a044bc0b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.434 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 859fd309-32ea-4025-8312-ddecfa0d6a7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.435 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.435 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:56:37 compute-0 nova_compute[244014]: 2026-02-25 12:56:37.481 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826527813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:38 compute-0 nova_compute[244014]: 2026-02-25 12:56:38.058 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:38 compute-0 nova_compute[244014]: 2026-02-25 12:56:38.067 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:56:38 compute-0 nova_compute[244014]: 2026-02-25 12:56:38.091 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
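Placement turns that inventory dict into schedulable capacity as (total - reserved) * allocation_ratio per resource class; a worked sketch using the exact values logged above:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2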
Feb 25 12:56:38 compute-0 nova_compute[244014]: 2026-02-25 12:56:38.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:56:38 compute-0 nova_compute[244014]: 2026-02-25 12:56:38.187 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:38 compute-0 ceph-mon[76335]: pgmap v2281: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:56:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3826527813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Feb 25 12:56:39 compute-0 nova_compute[244014]: 2026-02-25 12:56:39.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:40 compute-0 ovn_controller[147040]: 2026-02-25T12:56:40Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 12:56:40 compute-0 ovn_controller[147040]: 2026-02-25T12:56:40Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 12:56:40 compute-0 ceph-mon[76335]: pgmap v2282: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Feb 25 12:56:40 compute-0 nova_compute[244014]: 2026-02-25 12:56:40.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 68 op/s
Feb 25 12:56:41 compute-0 nova_compute[244014]: 2026-02-25 12:56:41.720 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024186.7188518, dfb7287a-5448-4579-8938-fe909fbf54c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:56:41 compute-0 nova_compute[244014]: 2026-02-25 12:56:41.720 244018 INFO nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Stopped (Lifecycle Event)
Feb 25 12:56:41 compute-0 podman[369348]: 2026-02-25 12:56:41.737576266 +0000 UTC m=+0.074110201 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
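The health_status=healthy entries are podman's healthcheck timer running the probe declared in config_data ('test': '/openstack/healthcheck'); the same check can be triggered by hand, sketched here with the container name from the log:

    import subprocess

    # Runs the container's declared healthcheck once; exit code 0 == healthy
    # (the same probe the timer fires periodically in the entries above).
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=False)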
Feb 25 12:56:41 compute-0 nova_compute[244014]: 2026-02-25 12:56:41.741 244018 DEBUG nova.compute.manager [None req-f72da2e3-b062-4b8a-bee2-1d1de480d0d6 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:56:41 compute-0 nova_compute[244014]: 2026-02-25 12:56:41.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:41 compute-0 podman[369349]: 2026-02-25 12:56:41.766592025 +0000 UTC m=+0.102918675 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 12:56:41 compute-0 nova_compute[244014]: 2026-02-25 12:56:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:42 compute-0 ceph-mon[76335]: pgmap v2283: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 68 op/s
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011237722717163936 of space, bias 1.0, pg target 0.33713168151491807 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941998890556766 of space, bias 1.0, pg target 0.748259966716703 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3756613840991307e-06 of space, bias 4.0, pg target 0.001650793660918957 quantized to 16 (current 16)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 12:56:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:56:42 compute-0 nova_compute[244014]: 2026-02-25 12:56:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:42 compute-0 nova_compute[244014]: 2026-02-25 12:56:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:43 compute-0 nova_compute[244014]: 2026-02-25 12:56:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:44 compute-0 ceph-mon[76335]: pgmap v2284: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:56:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:56:44 compute-0 nova_compute[244014]: 2026-02-25 12:56:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:56:44 compute-0 nova_compute[244014]: 2026-02-25 12:56:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:56:45 compute-0 nova_compute[244014]: 2026-02-25 12:56:45.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:46 compute-0 ceph-mon[76335]: pgmap v2285: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:56:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:56:46 compute-0 nova_compute[244014]: 2026-02-25 12:56:46.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:56:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:56:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:56:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:56:48 compute-0 ceph-mon[76335]: pgmap v2286: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:56:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:56:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:56:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.721163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208721197, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1686, "num_deletes": 250, "total_data_size": 2637898, "memory_usage": 2669248, "flush_reason": "Manual Compaction"}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208730323, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1522098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47361, "largest_seqno": 49046, "table_properties": {"data_size": 1516493, "index_size": 2681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15126, "raw_average_key_size": 20, "raw_value_size": 1504016, "raw_average_value_size": 2071, "num_data_blocks": 122, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024034, "oldest_key_time": 1772024034, "file_creation_time": 1772024208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 9211 microseconds, and 4061 cpu microseconds.
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.730371) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1522098 bytes OK
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.730394) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.732974) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.732988) EVENT_LOG_v1 {"time_micros": 1772024208732983, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733007) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2630622, prev total WAL file size 2630622, number of live WAL files 2.
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1486KB)], [110(9660KB)]
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208733673, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11414383, "oldest_snapshot_seqno": -1}
Feb 25 12:56:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7081 keys, 9139339 bytes, temperature: kUnknown
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208813860, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9139339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9093884, "index_size": 26655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183288, "raw_average_key_size": 25, "raw_value_size": 8969409, "raw_average_value_size": 1266, "num_data_blocks": 1045, "num_entries": 7081, "num_filter_entries": 7081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.814174) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9139339 bytes
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.815464) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(13.5) write-amplify(6.0) OK, records in: 7512, records dropped: 431 output_compression: NoCompression
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.815495) EVENT_LOG_v1 {"time_micros": 1772024208815480, "job": 66, "event": "compaction_finished", "compaction_time_micros": 80246, "compaction_time_cpu_micros": 32818, "output_level": 6, "num_output_files": 1, "total_output_size": 9139339, "num_input_records": 7512, "num_output_records": 7081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208815906, "job": 66, "event": "table_file_deletion", "file_number": 112}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208817333, "job": 66, "event": "table_file_deletion", "file_number": 110}
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:56:49 compute-0 nova_compute[244014]: 2026-02-25 12:56:49.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:49 compute-0 nova_compute[244014]: 2026-02-25 12:56:49.387 244018 INFO nova.compute.manager [None req-20bd1fec-e1be-4f22-9377-03f383e4b505 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output
Feb 25 12:56:49 compute-0 nova_compute[244014]: 2026-02-25 12:56:49.396 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:56:49 compute-0 ceph-mon[76335]: pgmap v2287: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.631 244018 DEBUG nova.compute.manager [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG nova.compute.manager [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:50 compute-0 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.653 244018 INFO nova.compute.manager [None req-f6443d22-817e-4990-b9a1-1217a191288f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.657 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:51 compute-0 ceph-mon[76335]: pgmap v2288: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.910 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.911 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:51 compute-0 nova_compute[244014]: 2026-02-25 12:56:51.931 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.717 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.717 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.718 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.718 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.719 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.719 244018 WARNING nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.720 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.720 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:52 compute-0 nova_compute[244014]: 2026-02-25 12:56:52.722 244018 WARNING nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.
Feb 25 12:56:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:53 compute-0 nova_compute[244014]: 2026-02-25 12:56:53.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:53 compute-0 ceph-mon[76335]: pgmap v2289: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 12:56:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 12:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.083 244018 DEBUG nova.compute.manager [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.083 244018 DEBUG nova.compute.manager [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.084 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.085 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.085 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.211 244018 INFO nova.compute.manager [None req-a502e8ad-9a3e-40e0-b8ba-508ca0897766 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.216 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:56:55 compute-0 nova_compute[244014]: 2026-02-25 12:56:55.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:55 compute-0 ceph-mon[76335]: pgmap v2290: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 12:56:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 12:56:56 compute-0 nova_compute[244014]: 2026-02-25 12:56:56.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.367 244018 DEBUG nova.compute.manager [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.368 244018 DEBUG nova.compute.manager [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.369 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.369 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.370 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.376 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.377 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.422 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.423 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.423 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.424 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.424 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.425 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.427 244018 INFO nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Terminating instance
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.428 244018 DEBUG nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:56:57 compute-0 kernel: tap2a99d3aa-08 (unregistering): left promiscuous mode
Feb 25 12:56:57 compute-0 NetworkManager[49836]: <info>  [1772024217.4853] device (tap2a99d3aa-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:56:57 compute-0 ovn_controller[147040]: 2026-02-25T12:56:57Z|01453|binding|INFO|Releasing lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 from this chassis (sb_readonly=0)
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 ovn_controller[147040]: 2026-02-25T12:56:57Z|01454|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 down in Southbound
Feb 25 12:56:57 compute-0 ovn_controller[147040]: 2026-02-25T12:56:57Z|01455|binding|INFO|Removing iface tap2a99d3aa-08 ovn-installed in OVS
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.501 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:bf:de 10.100.0.4'], port_security=['fa:16:3e:7f:bf:de 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '859fd309-32ea-4025-8312-ddecfa0d6a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf564118-674f-4f5e-bcfa-9616c32835a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2a99d3aa-08e4-4712-a8b3-984de83b4b60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.503 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 unbound from our chassis
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.505 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.524 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f946f2-fd94-4f75-adc3-6c9b4c1e302a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 sshd-session[369392]: Invalid user ubuntu from 80.94.92.186 port 50808
Feb 25 12:56:57 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Feb 25 12:56:57 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 12.407s CPU time.
Feb 25 12:56:57 compute-0 systemd-machined[210048]: Machine qemu-170-instance-0000008a terminated.
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.560 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7f4ebd-eb99-4393-b57c-8898cc9935a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.564 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f7c5e2-11a1-4913-9038-63198eb28eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.594 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[88019a38-083d-469a-b48f-e7b51f128288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.613 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad3c8b-0498-4dcc-a376-a1acb065e9ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369405, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3049742a-bb1f-4874-a4bb-67e81bf84068]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613447, 'tstamp': 613447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369406, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613450, 'tstamp': 613450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369406, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.630 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:57 compute-0 sshd-session[369392]: Connection closed by invalid user ubuntu 80.94.92.186 port 50808 [preauth]
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:57 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.643 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.669 244018 INFO nova.virt.libvirt.driver [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance destroyed successfully.
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.670 244018 DEBUG nova.objects.instance [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.684 244018 DEBUG nova.virt.libvirt.vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:56:29Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.685 244018 DEBUG nova.network.os_vif_util [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.686 244018 DEBUG nova.network.os_vif_util [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.687 244018 DEBUG os_vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.690 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a99d3aa-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:56:57 compute-0 nova_compute[244014]: 2026-02-25 12:56:57.697 244018 INFO os_vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08')
Feb 25 12:56:57 compute-0 ceph-mon[76335]: pgmap v2291: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.235 244018 INFO nova.virt.libvirt.driver [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deleting instance files /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f_del
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.237 244018 INFO nova.virt.libvirt.driver [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deletion of /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f_del complete
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.300 244018 INFO nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 0.87 seconds to destroy the instance on the hypervisor.
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.301 244018 DEBUG oslo.service.loopingcall [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.302 244018 DEBUG nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.302 244018 DEBUG nova.network.neutron [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.341 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.341 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.343 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.575 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.575 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.603 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.709 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.709 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.721 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.722 244018 INFO nova.compute.claims [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:56:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 1 op/s
Feb 25 12:56:58 compute-0 nova_compute[244014]: 2026-02-25 12:56:58.869 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.071 244018 DEBUG nova.network.neutron [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.087 244018 INFO nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 0.78 seconds to deallocate network for instance.
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.135 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.268 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.268 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.284 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:56:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:56:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465264861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.453 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.461 244018 DEBUG nova.compute.provider_tree [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.478 244018 DEBUG nova.scheduler.client.report [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.512 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.513 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.518 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.556 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.557 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.575 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.593 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.603 244018 DEBUG oslo_concurrency.processutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.703 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.707 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.707 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating image(s)
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.734 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.763 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.801 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.804 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:56:59 compute-0 ceph-mon[76335]: pgmap v2292: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 1 op/s
Feb 25 12:56:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3465264861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.890 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.891 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.891 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.892 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.923 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:56:59 compute-0 nova_compute[244014]: 2026-02-25 12:56:59.927 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6185e497-8422-4a5f-a98a-865484d53d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267808107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.179 244018 DEBUG nova.policy [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.187 244018 DEBUG oslo_concurrency.processutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.194 244018 DEBUG nova.compute.provider_tree [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.210 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6185e497-8422-4a5f-a98a-865484d53d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.249 244018 DEBUG nova.scheduler.client.report [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.297 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.304 244018 DEBUG nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.304 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.306 244018 WARNING nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.312 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.344 244018 INFO nova.scheduler.client.report [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 859fd309-32ea-4025-8312-ddecfa0d6a7f
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.397 244018 DEBUG nova.objects.instance [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.423 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.423 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Ensure instance console log exists: /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.427 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.428 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.428 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.464 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.470 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.471 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.471 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 WARNING nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received unexpected event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with vm_state deleted and task_state None.
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-deleted-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.473 244018 INFO nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Neutron deleted interface 2a99d3aa-08e4-4712-a8b3-984de83b4b60; detaching it from the instance and deleting it from the info cache
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.473 244018 DEBUG nova.network.neutron [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.505 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Detach interface failed, port_id=2a99d3aa-08e4-4712-a8b3-984de83b4b60, reason: Instance 859fd309-32ea-4025-8312-ddecfa0d6a7f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:57:00 compute-0 nova_compute[244014]: 2026-02-25 12:57:00.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 4.3 KiB/s wr, 1 op/s
Feb 25 12:57:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1267808107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:01 compute-0 nova_compute[244014]: 2026-02-25 12:57:01.524 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Successfully created port: a4d9181f-fefa-4cde-81ea-c5e59433606c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:01 compute-0 ceph-mon[76335]: pgmap v2293: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 4.3 KiB/s wr, 1 op/s
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.417 244018 DEBUG nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.417 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.418 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.418 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.419 244018 DEBUG nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.419 244018 WARNING nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.717 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Successfully updated port: a4d9181f-fefa-4cde-81ea-c5e59433606c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.721 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.722 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.722 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.723 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.723 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.724 244018 INFO nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Terminating instance
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.726 244018 DEBUG nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.729 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.730 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.730 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:57:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 12:57:02 compute-0 kernel: tapbde15f84-ed (unregistering): left promiscuous mode
Feb 25 12:57:02 compute-0 NetworkManager[49836]: <info>  [1772024222.7856] device (tapbde15f84-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:02 compute-0 ovn_controller[147040]: 2026-02-25T12:57:02Z|01456|binding|INFO|Releasing lport bde15f84-edfb-445b-b129-ec33331763f0 from this chassis (sb_readonly=0)
Feb 25 12:57:02 compute-0 ovn_controller[147040]: 2026-02-25T12:57:02Z|01457|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 down in Southbound
Feb 25 12:57:02 compute-0 ovn_controller[147040]: 2026-02-25T12:57:02Z|01458|binding|INFO|Removing iface tapbde15f84-ed ovn-installed in OVS
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.801 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:9e:1f 10.100.0.11'], port_security=['fa:16:3e:99:9e:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1d8168d2-f5f8-4f41-af80-56661b6a1e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=bde15f84-edfb-445b-b129-ec33331763f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.804 157129 INFO neutron.agent.ovn.metadata.agent [-] Port bde15f84-edfb-445b-b129-ec33331763f0 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 unbound from our chassis
Feb 25 12:57:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.806 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74cacb3c-0135-4e0b-9776-478b5f7a3349, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:57:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03c3fe58-41c3-46f3-8fc0-ff6ece663c03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.807 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 namespace which is not needed anymore
Feb 25 12:57:02 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Deactivated successfully.
Feb 25 12:57:02 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Consumed 13.440s CPU time.
Feb 25 12:57:02 compute-0 systemd-machined[210048]: Machine qemu-169-instance-00000089 terminated.
Feb 25 12:57:02 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : haproxy version is 2.8.14-c23fe91
Feb 25 12:57:02 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : path to executable is /usr/sbin/haproxy
Feb 25 12:57:02 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [WARNING]  (368675) : Exiting Master process...
Feb 25 12:57:02 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [ALERT]    (368675) : Current worker (368677) exited with code 143 (Terminated)
Feb 25 12:57:02 compute-0 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [WARNING]  (368675) : All workers exited. Exiting... (0)
Feb 25 12:57:02 compute-0 systemd[1]: libpod-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope: Deactivated successfully.
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.981 244018 INFO nova.virt.libvirt.driver [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance destroyed successfully.
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.983 244018 DEBUG nova.objects.instance [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:02 compute-0 podman[369673]: 2026-02-25 12:57:02.985528104 +0000 UTC m=+0.064554512 container died 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.997 244018 DEBUG nova.virt.libvirt.vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:56:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:56:05Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:57:02 compute-0 nova_compute[244014]: 2026-02-25 12:57:02.998 244018 DEBUG nova.network.os_vif_util [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.000 244018 DEBUG nova.network.os_vif_util [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.001 244018 DEBUG os_vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.004 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.004 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbde15f84-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.012 244018 INFO os_vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed')
Feb 25 12:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab-userdata-shm.mount: Deactivated successfully.
Feb 25 12:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e045d31172de33fbbfdc9cfbf5c4682002fbe9f3cdd63cf75b833a745b1cae3-merged.mount: Deactivated successfully.
Feb 25 12:57:03 compute-0 podman[369673]: 2026-02-25 12:57:03.036630096 +0000 UTC m=+0.115656484 container cleanup 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:57:03 compute-0 systemd[1]: libpod-conmon-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope: Deactivated successfully.
Feb 25 12:57:03 compute-0 podman[369730]: 2026-02-25 12:57:03.12011096 +0000 UTC m=+0.061120985 container remove 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa136b9-cbdb-4b4f-8615-0996439531f1]: (4, ('Wed Feb 25 12:57:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 (0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab)\n0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab\nWed Feb 25 12:57:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 (0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab)\n0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bba77c70-b1ba-40b9-a945-810592c138a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.130 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:03 compute-0 kernel: tap74cacb3c-00: left promiscuous mode
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5598ec58-438f-4dcd-a971-3816d76005a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2278683b-c093-4dd0-9a73-cabccd4f6baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45295995-90eb-41ee-a119-20da672216d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.171 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac23ba40-fbb0-4cbf-8af8-71df2e84f622]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613427, 'reachable_time': 34629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369746, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d74cacb3c\x2d0135\x2d4e0b\x2d9776\x2d478b5f7a3349.mount: Deactivated successfully.
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.185 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:57:03 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.185 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0427499c-3fa4-4cc8-ab6e-68a7e3c45da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.316 244018 INFO nova.virt.libvirt.driver [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deleting instance files /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b_del
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.317 244018 INFO nova.virt.libvirt.driver [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deletion of /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b_del complete
Feb 25 12:57:03 compute-0 sudo[369749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:57:03 compute-0 sudo[369749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:03 compute-0 sudo[369749]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.375 244018 INFO nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG oslo.service.loopingcall [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:57:03 compute-0 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG nova.network.neutron [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:57:03 compute-0 sudo[369774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:57:03 compute-0 sudo[369774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:03 compute-0 ceph-mon[76335]: pgmap v2294: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 12:57:03 compute-0 sudo[369774]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:57:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:57:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:57:04 compute-0 sudo[369831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:57:04 compute-0 sudo[369831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:04 compute-0 sudo[369831]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:04 compute-0 sudo[369856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:57:04 compute-0 sudo[369856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.442023219 +0000 UTC m=+0.058515861 container create 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:57:04 compute-0 systemd[1]: Started libpod-conmon-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope.
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.485 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.508 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.509 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance network_info: |[{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.419776922 +0000 UTC m=+0.036269544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.513 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start _get_guest_xml network_info=[{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:57:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.522 244018 WARNING nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.527 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.528 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.532 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.533 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.533 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.534 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.537 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.537 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.538 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.539121998 +0000 UTC m=+0.155614630 container init 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.543 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.547635759 +0000 UTC m=+0.164128371 container start 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.551318252 +0000 UTC m=+0.167810864 container attach 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:57:04 compute-0 gallant_moore[369910]: 167 167
Feb 25 12:57:04 compute-0 systemd[1]: libpod-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope: Deactivated successfully.
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.556775996 +0000 UTC m=+0.173268638 container died 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-edbb969fb18fc278e6b4947c177e7c8d4b69b26dff864a732fc82a8c3477940a-merged.mount: Deactivated successfully.
Feb 25 12:57:04 compute-0 podman[369893]: 2026-02-25 12:57:04.609110513 +0000 UTC m=+0.225603145 container remove 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:57:04 compute-0 systemd[1]: libpod-conmon-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope: Deactivated successfully.
Feb 25 12:57:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:57:04 compute-0 podman[369954]: 2026-02-25 12:57:04.78519031 +0000 UTC m=+0.047541092 container create 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 12:57:04 compute-0 systemd[1]: Started libpod-conmon-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope.
Feb 25 12:57:04 compute-0 podman[369954]: 2026-02-25 12:57:04.764254619 +0000 UTC m=+0.026605421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.864 244018 DEBUG nova.network.neutron [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:57:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:04 compute-0 podman[369954]: 2026-02-25 12:57:04.908065896 +0000 UTC m=+0.170416758 container init 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 12:57:04 compute-0 nova_compute[244014]: 2026-02-25 12:57:04.913 244018 INFO nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 1.54 seconds to deallocate network for instance.
Feb 25 12:57:04 compute-0 podman[369954]: 2026-02-25 12:57:04.917378608 +0000 UTC m=+0.179729370 container start 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:57:04 compute-0 podman[369954]: 2026-02-25 12:57:04.920902128 +0000 UTC m=+0.183252930 container attach 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.005 244018 DEBUG nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-deleted-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.005 244018 INFO nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Neutron deleted interface bde15f84-edfb-445b-b129-ec33331763f0; detaching it from the instance and deleting it from the info cache
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.006 244018 DEBUG nova.network.neutron [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.009 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.009 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.010 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.011 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.011 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.066 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.067 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.070 244018 DEBUG nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Detach interface failed, port_id=bde15f84-edfb-445b-b129-ec33331763f0, reason: Instance e728a9dc-bb04-4a25-bcad-b787a044bc0b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.137 244018 DEBUG oslo_concurrency.processutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882747267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.184 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.190 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.227 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.235 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:05 compute-0 cool_curran[369971]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:57:05 compute-0 cool_curran[369971]: --> All data devices are unavailable
Feb 25 12:57:05 compute-0 systemd[1]: libpod-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope: Deactivated successfully.
Feb 25 12:57:05 compute-0 podman[369954]: 2026-02-25 12:57:05.442010637 +0000 UTC m=+0.704361439 container died 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219-merged.mount: Deactivated successfully.
Feb 25 12:57:05 compute-0 podman[369954]: 2026-02-25 12:57:05.491478902 +0000 UTC m=+0.753829694 container remove 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 12:57:05 compute-0 systemd[1]: libpod-conmon-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope: Deactivated successfully.
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.521 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.540 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.541 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.541 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:05 compute-0 sudo[369856]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:05 compute-0 sudo[370062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:57:05 compute-0 sudo[370062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:05 compute-0 sudo[370062]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:05 compute-0 sudo[370087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:57:05 compute-0 sudo[370087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279281827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.725 244018 DEBUG oslo_concurrency.processutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.733 244018 DEBUG nova.compute.provider_tree [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.755 244018 DEBUG nova.scheduler.client.report [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.785 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251817825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.805 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.808 244018 DEBUG nova.virt.libvirt.vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:59Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.808 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.810 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.812 244018 DEBUG nova.objects.instance [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.829 244018 INFO nova.scheduler.client.report [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance e728a9dc-bb04-4a25-bcad-b787a044bc0b
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.844 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <uuid>6185e497-8422-4a5f-a98a-865484d53d4f</uuid>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <name>instance-0000008b</name>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789</nova:name>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:57:04</nova:creationTime>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <nova:port uuid="a4d9181f-fefa-4cde-81ea-c5e59433606c">
Feb 25 12:57:05 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <system>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="serial">6185e497-8422-4a5f-a98a-865484d53d4f</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="uuid">6185e497-8422-4a5f-a98a-865484d53d4f</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </system>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <os>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </os>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <features>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </features>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6185e497-8422-4a5f-a98a-865484d53d4f_disk">
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/6185e497-8422-4a5f-a98a-865484d53d4f_disk.config">
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:05 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:ef:c2:bb"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <target dev="tapa4d9181f-fe"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/console.log" append="off"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <video>
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </video>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:57:05 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:57:05 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:57:05 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:57:05 compute-0 nova_compute[244014]: </domain>
Feb 25 12:57:05 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Preparing to wait for external event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.846 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.847 244018 DEBUG nova.virt.libvirt.vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:59Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.847 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.848 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.849 244018 DEBUG os_vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.850 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.850 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.859 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4d9181f-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.860 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4d9181f-fe, col_values=(('external_ids', {'iface-id': 'a4d9181f-fefa-4cde-81ea-c5e59433606c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:c2:bb', 'vm-uuid': '6185e497-8422-4a5f-a98a-865484d53d4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:05 compute-0 NetworkManager[49836]: <info>  [1772024225.8641] manager: (tapa4d9181f-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:05 compute-0 nova_compute[244014]: 2026-02-25 12:57:05.874 244018 INFO os_vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe')
Feb 25 12:57:05 compute-0 podman[370129]: 2026-02-25 12:57:05.960921724 +0000 UTC m=+0.063402189 container create f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 12:57:05 compute-0 ceph-mon[76335]: pgmap v2295: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:57:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1882747267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4279281827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4251817825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:06 compute-0 systemd[1]: Started libpod-conmon-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope.
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.022 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:05.935261041 +0000 UTC m=+0.037741546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:06.057082537 +0000 UTC m=+0.159563062 container init f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:06.06356487 +0000 UTC m=+0.166045335 container start f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:06.068983833 +0000 UTC m=+0.171464348 container attach f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:57:06 compute-0 trusting_shtern[370147]: 167 167
Feb 25 12:57:06 compute-0 systemd[1]: libpod-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope: Deactivated successfully.
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:06.070825105 +0000 UTC m=+0.173305630 container died f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.080 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.080 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.081 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:ef:c2:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.082 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Using config drive
Feb 25 12:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fd4d580a93400a8974fb39a4257d5a104621e0b26cf17119c53cd399167e1b-merged.mount: Deactivated successfully.
Feb 25 12:57:06 compute-0 podman[370129]: 2026-02-25 12:57:06.115113414 +0000 UTC m=+0.217593879 container remove f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.120 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:06 compute-0 systemd[1]: libpod-conmon-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope: Deactivated successfully.
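The trusting_shtern container above is one of cephadm's short-lived helper containers: create, init, start and attach all land within ~170 ms, the container prints "167 167" (consistent with a probe of the ceph user's uid/gid baked into the image), exits, and is removed. A minimal sketch of the same probe, assuming podman on PATH and the image digest from the log; the exact stat invocation is an assumption, since the log only shows the output:

    # Sketch: run an ephemeral container the way the log's helper containers behave.
    # ASSUMPTION: the probed path and the stat command; the log only shows "167 167".
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected "167 167": the ceph uid/gid inside the image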
Feb 25 12:57:06 compute-0 podman[370189]: 2026-02-25 12:57:06.266805933 +0000 UTC m=+0.042525361 container create 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:57:06 compute-0 systemd[1]: Started libpod-conmon-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope.
Feb 25 12:57:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
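The four kernel messages above are informational: the XFS filesystem backing these overlay mounts was formatted without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff seconds). One way to verify on the host, assuming xfsprogs is installed (an operator-side sketch, not something the log runs):

    # Sketch: check whether an XFS filesystem supports post-2038 timestamps.
    import subprocess

    info = subprocess.run(["xfs_info", "/var"], capture_output=True, text=True, check=True)
    # "bigtime=1" extends timestamps to 2486; "bigtime=0" matches the
    # "supports timestamps until 2038" warnings seen above.
    print("bigtime=1" in info.stdout)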
Feb 25 12:57:06 compute-0 podman[370189]: 2026-02-25 12:57:06.248799225 +0000 UTC m=+0.024518663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:06 compute-0 podman[370189]: 2026-02-25 12:57:06.362078571 +0000 UTC m=+0.137798049 container init 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 12:57:06 compute-0 podman[370189]: 2026-02-25 12:57:06.378586816 +0000 UTC m=+0.154306224 container start 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 12:57:06 compute-0 podman[370189]: 2026-02-25 12:57:06.382387623 +0000 UTC m=+0.158107091 container attach 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.440 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating config drive at /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.445 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4fd8f1cp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.583 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4fd8f1cp" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.617 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.622 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]: {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     "0": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "devices": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "/dev/loop3"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             ],
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_name": "ceph_lv0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_size": "21470642176",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "name": "ceph_lv0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "tags": {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_name": "ceph",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.crush_device_class": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.encrypted": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.objectstore": "bluestore",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_id": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.vdo": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.with_tpm": "0"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             },
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "vg_name": "ceph_vg0"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         }
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     ],
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     "1": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "devices": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "/dev/loop4"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             ],
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_name": "ceph_lv1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_size": "21470642176",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "name": "ceph_lv1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "tags": {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_name": "ceph",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.crush_device_class": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.encrypted": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.objectstore": "bluestore",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_id": "1",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.vdo": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.with_tpm": "0"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             },
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "vg_name": "ceph_vg1"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         }
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     ],
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     "2": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "devices": [
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "/dev/loop5"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             ],
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_name": "ceph_lv2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_size": "21470642176",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "name": "ceph_lv2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "tags": {
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.cluster_name": "ceph",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.crush_device_class": "",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.encrypted": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.objectstore": "bluestore",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osd_id": "2",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.vdo": "0",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:                 "ceph.with_tpm": "0"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             },
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "type": "block",
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:             "vg_name": "ceph_vg2"
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:         }
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]:     ]
Feb 25 12:57:06 compute-0 upbeat_shockley[370205]: }
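The upbeat_shockley container's output above is consistent with cephadm running `ceph-volume lvm list --format json` (the matching `raw list` call appears a moment later at 12:57:07.037): OSD ids 0-2 map to LVs ceph_lv0-2 on /dev/loop3-5, each ~21.5 GB, bluestore, unencrypted, all in cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762. A minimal sketch of consuming that JSON, assuming it has been captured into a string named payload:

    # Sketch: reduce `ceph-volume lvm list --format json` output to osd_id -> device info.
    import json

    def summarize(payload: str) -> dict:
        data = json.loads(payload)
        return {
            osd_id: {
                "lv_path": lv["lv_path"],
                "devices": lv["devices"],
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                "size_bytes": int(lv["lv_size"]),
            }
            for osd_id, lvs in data.items()
            for lv in lvs  # each OSD id maps to a list of LV records (one each here)
        }

    # summarize(payload) -> {"0": {"lv_path": "/dev/ceph_vg0/ceph_lv0", ...}, ...}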
Feb 25 12:57:06 compute-0 systemd[1]: libpod-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope: Deactivated successfully.
Feb 25 12:57:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:57:06 compute-0 podman[370251]: 2026-02-25 12:57:06.76230167 +0000 UTC m=+0.032467767 container died 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.785 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.786 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deleting local config drive /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config because it was imported into RBD.
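The nova_compute lines from 12:57:06.440 to 12:57:06.786 show the config-drive path for RBD-backed instances: Nova builds an ISO9660 config drive with mkisofs, imports it into the vms pool as <instance>_disk.config, then deletes the local file. A condensed sketch of the same two subprocess calls, with paths and names taken from the log (publisher flag and error handling elided):

    # Sketch: build a config drive ISO and import it into RBD, as in the log above.
    import subprocess

    instance = "6185e497-8422-4a5f-a98a-865484d53d4f"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. mkisofs renders the staged metadata directory into a "config-2" ISO.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", "/tmp/tmp4fd8f1cp"],
        check=True,
    )

    # 2. rbd import copies the ISO into the vms pool; Nova then removes the local file.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )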
Feb 25 12:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a-merged.mount: Deactivated successfully.
Feb 25 12:57:06 compute-0 podman[370251]: 2026-02-25 12:57:06.808285767 +0000 UTC m=+0.078451814 container remove 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 12:57:06 compute-0 systemd[1]: libpod-conmon-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope: Deactivated successfully.
Feb 25 12:57:06 compute-0 kernel: tapa4d9181f-fe: entered promiscuous mode
Feb 25 12:57:06 compute-0 NetworkManager[49836]: <info>  [1772024226.8417] manager: (tapa4d9181f-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:06 compute-0 ovn_controller[147040]: 2026-02-25T12:57:06Z|01459|binding|INFO|Claiming lport a4d9181f-fefa-4cde-81ea-c5e59433606c for this chassis.
Feb 25 12:57:06 compute-0 ovn_controller[147040]: 2026-02-25T12:57:06Z|01460|binding|INFO|a4d9181f-fefa-4cde-81ea-c5e59433606c: Claiming fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.851 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c2:bb 10.100.0.11'], port_security=['fa:16:3e:ef:c2:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6185e497-8422-4a5f-a98a-865484d53d4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f23f1675-b7ff-4265-a011-0912c637d746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87d37268-c5dd-4380-a676-5b9940f82b8f 979c759f-0c66-4e81-a7f1-5a970907b9e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5ffac6-eab3-4eca-a87d-9bf8335b5de6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d9181f-fefa-4cde-81ea-c5e59433606c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.853 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d9181f-fefa-4cde-81ea-c5e59433606c in datapath f23f1675-b7ff-4265-a011-0912c637d746 bound to our chassis
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.855 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f23f1675-b7ff-4265-a011-0912c637d746
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:06 compute-0 ovn_controller[147040]: 2026-02-25T12:57:06Z|01461|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c ovn-installed in OVS
Feb 25 12:57:06 compute-0 ovn_controller[147040]: 2026-02-25T12:57:06Z|01462|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c up in Southbound
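The ovn_controller lines trace the standard port-binding handshake: once the tap interface appears with a matching iface-id, the chassis claims lport a4d9181f-fefa-4cde-81ea-c5e59433606c, marks it ovn-installed in OVS, and sets it up in the Southbound DB; Nova then receives the network-vif-plugged event (12:57:07.386 below). To inspect such a binding by hand, one option (an operator workflow assumption, not something the log runs) is:

    # Sketch: query the OVN Southbound DB for the binding claimed above.
    import subprocess

    lport = "a4d9181f-fefa-4cde-81ea-c5e59433606c"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)  # shows chassis, mac "fa:16:3e:ef:c2:bb 10.100.0.11", up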
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:06 compute-0 nova_compute[244014]: 2026-02-25 12:57:06.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.866 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4536e72-ab57-43c2-a12a-3df9be9672d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.868 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf23f1675-b1 in ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.871 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf23f1675-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.871 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[447afe6e-5aca-4e9f-b308-0653d8c63f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4be4a40b-bba3-40c7-af6e-977addef1b5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 sudo[370087]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.884 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[06da8670-a162-4924-a873-445d4b8a4086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 systemd-machined[210048]: New machine qemu-171-instance-0000008b.
Feb 25 12:57:06 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008b.
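libvirt registers each running guest with systemd-machined, which is why the new qemu process appears both as machine qemu-171-instance-0000008b and as a "Virtual Machine" systemd unit. It can be inspected afterwards with machinectl (an operator-side sketch):

    # Sketch: inspect the machine systemd-machined just registered.
    import subprocess

    subprocess.run(["machinectl", "status", "qemu-171-instance-0000008b"], check=True)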
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.906 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d1edd-58a9-4c37-99a4-892b50e67856]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 systemd-udevd[370307]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.932 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cbba6b0e-a19a-4d72-8288-4ccbd09b040f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 systemd-udevd[370312]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d62e5cab-f93f-44aa-8f16-1ef7255fa0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 NetworkManager[49836]: <info>  [1772024226.9462] manager: (tapf23f1675-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/607)
Feb 25 12:57:06 compute-0 NetworkManager[49836]: <info>  [1772024226.9483] device (tapa4d9181f-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:57:06 compute-0 sudo[370285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:57:06 compute-0 NetworkManager[49836]: <info>  [1772024226.9501] device (tapa4d9181f-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:57:06 compute-0 sudo[370285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:06 compute-0 sudo[370285]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[59f7fbbf-e34d-41bc-ae16-c5b1f1a86ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:06 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.978 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[65997da9-34c0-44b2-a997-be9b4c32e892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 NetworkManager[49836]: <info>  [1772024227.0015] device (tapf23f1675-b0): carrier: link connected
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.005 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a860d562-d185-4fdf-94ab-3ee3d7bf44a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 sudo[370337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd836392-c4c5-42f6-bda6-e0545d622349]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf23f1675-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619640, 'reachable_time': 34313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370363, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 sudo[370337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48222472-b827-4068-bc3b-22af36be4592]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:4e9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619640, 'tstamp': 619640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370366, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.054 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73287dcf-b5c9-4341-803f-11ed6f0bc929]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf23f1675-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619640, 'reachable_time': 34313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370367, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.082 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5152cef-8e4b-46bb-a17b-8b95dca5d29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82bed630-2ea3-482e-af31-f9b0c79a81f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.133 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23f1675-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.134 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.135 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf23f1675-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:07 compute-0 kernel: tapf23f1675-b0: entered promiscuous mode
Feb 25 12:57:07 compute-0 NetworkManager[49836]: <info>  [1772024227.1378] manager: (tapf23f1675-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.140 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf23f1675-b0, col_values=(('external_ids', {'iface-id': '93d7aa8d-1845-4103-8fd2-aed7a8c4298a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:07 compute-0 ovn_controller[147040]: 2026-02-25T12:57:07Z|01463|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.153 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34ac1809-8aef-49ae-88fd-3416c3d0d54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.154 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f23f1675-b7ff-4265-a011-0912c637d746
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f23f1675-b7ff-4265-a011-0912c637d746
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:57:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.155 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'env', 'PROCESS_TAG=haproxy-f23f1675-b7ff-4265-a011-0912c637d746', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f23f1675-b7ff-4265-a011-0912c637d746.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
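The haproxy configuration dumped above implements OVN's per-network metadata proxy: inside namespace ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, haproxy binds 169.254.169.254:80, tags each request with X-OVN-Network-ID, and forwards it to the metadata agent over the UNIX socket /var/lib/neutron/metadata_proxy (in haproxy, a server address that is a filesystem path is a UNIX socket). From a guest on that network the endpoint needs no extra routing; a minimal guest-side sketch (run inside the instance, not on compute-0):

    # Sketch: fetch instance metadata through the proxy configured above.
    import json
    import urllib.request

    with urllib.request.urlopen(
        "http://169.254.169.254/openstack/latest/meta_data.json", timeout=10
    ) as resp:
        meta = json.load(resp)
    print(meta["uuid"])  # should match the instance UUID, 6185e497-...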
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.299243977 +0000 UTC m=+0.052421530 container create 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:57:07 compute-0 systemd[1]: Started libpod-conmon-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope.
Feb 25 12:57:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.265006941 +0000 UTC m=+0.018184534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.363498869 +0000 UTC m=+0.116676472 container init 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.372984727 +0000 UTC m=+0.126162310 container start 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:57:07 compute-0 naughty_varahamihira[370405]: 167 167
Feb 25 12:57:07 compute-0 systemd[1]: libpod-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope: Deactivated successfully.
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.377194996 +0000 UTC m=+0.130372559 container attach 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.377528205 +0000 UTC m=+0.130705768 container died 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.386 244018 DEBUG nova.compute.manager [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.389 244018 DEBUG nova.compute.manager [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Processing event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-58b44eeb6950c252e84e50f6336893559c59935c688db34c83a4dc9cd1a74489-merged.mount: Deactivated successfully.
Feb 25 12:57:07 compute-0 podman[370389]: 2026-02-25 12:57:07.419461628 +0000 UTC m=+0.172639171 container remove 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:57:07 compute-0 systemd[1]: libpod-conmon-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope: Deactivated successfully.
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.533 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.534 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.549 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.551 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.551 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.554 244018 WARNING nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state deleting.
Feb 25 12:57:07 compute-0 podman[370444]: 2026-02-25 12:57:07.586909081 +0000 UTC m=+0.062895345 container create cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:57:07 compute-0 podman[370455]: 2026-02-25 12:57:07.595702069 +0000 UTC m=+0.053196801 container create 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:57:07 compute-0 systemd[1]: Started libpod-conmon-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope.
Feb 25 12:57:07 compute-0 systemd[1]: Started libpod-conmon-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope.
Feb 25 12:57:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:07 compute-0 podman[370444]: 2026-02-25 12:57:07.556610087 +0000 UTC m=+0.032596361 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc100cb099adcbdbcce587f0f6d881ddf87a2202a85f400719149b8997f75aa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:07 compute-0 podman[370455]: 2026-02-25 12:57:07.678501225 +0000 UTC m=+0.135995957 container init 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:57:07 compute-0 podman[370455]: 2026-02-25 12:57:07.576765425 +0000 UTC m=+0.034260177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:57:07 compute-0 podman[370455]: 2026-02-25 12:57:07.690788902 +0000 UTC m=+0.148283634 container start 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:57:07 compute-0 podman[370455]: 2026-02-25 12:57:07.694104855 +0000 UTC m=+0.151599587 container attach 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:57:07 compute-0 podman[370444]: 2026-02-25 12:57:07.699280421 +0000 UTC m=+0.175266665 container init cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:57:07 compute-0 podman[370444]: 2026-02-25 12:57:07.706009521 +0000 UTC m=+0.181995745 container start cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:57:07 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : New worker (370491) forked
Feb 25 12:57:07 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : Loading success.
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.976 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9753835, 6185e497-8422-4a5f-a98a-865484d53d4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.977 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Started (Lifecycle Event)
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.980 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.984 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.987 244018 INFO nova.virt.libvirt.driver [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance spawned successfully.
Feb 25 12:57:07 compute-0 nova_compute[244014]: 2026-02-25 12:57:07.988 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.002 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:08 compute-0 ceph-mon[76335]: pgmap v2296: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.011 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.018 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.019 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.020 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.020 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.021 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.021 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.044 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9768565, 6185e497-8422-4a5f-a98a-865484d53d4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.044 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Paused (Lifecycle Event)
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.075 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.079 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9838955, 6185e497-8422-4a5f-a98a-865484d53d4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.079 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Resumed (Lifecycle Event)
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.089 244018 INFO nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 8.39 seconds to spawn the instance on the hypervisor.
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.090 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.098 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.105 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.127 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.161 244018 INFO nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 9.49 seconds to build instance.
Feb 25 12:57:08 compute-0 nova_compute[244014]: 2026-02-25 12:57:08.176 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:08 compute-0 lvm[370612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:57:08 compute-0 lvm[370612]: VG ceph_vg0 finished
Feb 25 12:57:08 compute-0 lvm[370614]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:57:08 compute-0 lvm[370614]: VG ceph_vg1 finished
Feb 25 12:57:08 compute-0 lvm[370615]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:57:08 compute-0 lvm[370615]: VG ceph_vg2 finished
Feb 25 12:57:08 compute-0 unruffled_payne[370481]: {}
Feb 25 12:57:08 compute-0 systemd[1]: libpod-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Deactivated successfully.
Feb 25 12:57:08 compute-0 systemd[1]: libpod-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Consumed 1.112s CPU time.
Feb 25 12:57:08 compute-0 podman[370455]: 2026-02-25 12:57:08.446076567 +0000 UTC m=+0.903571339 container died 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 12:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1-merged.mount: Deactivated successfully.
Feb 25 12:57:08 compute-0 podman[370455]: 2026-02-25 12:57:08.494624317 +0000 UTC m=+0.952119049 container remove 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:57:08 compute-0 systemd[1]: libpod-conmon-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Deactivated successfully.
Feb 25 12:57:08 compute-0 sudo[370337]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:57:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:57:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:08 compute-0 sudo[370632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:57:08 compute-0 sudo[370632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:57:08 compute-0 sudo[370632]: pam_unix(sudo:session): session closed for user root
Feb 25 12:57:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.464 244018 DEBUG nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.466 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.467 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.467 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.468 244018 DEBUG nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.468 244018 WARNING nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received unexpected event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with vm_state active and task_state None.
Feb 25 12:57:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:57:09 compute-0 ceph-mon[76335]: pgmap v2297: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 12:57:09 compute-0 ovn_controller[147040]: 2026-02-25T12:57:09Z|01464|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:09 compute-0 ovn_controller[147040]: 2026-02-25T12:57:09Z|01465|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:57:09 compute-0 nova_compute[244014]: 2026-02-25 12:57:09.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:10 compute-0 nova_compute[244014]: 2026-02-25 12:57:10.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:10.420 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:57:10 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:10.424 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:57:10 compute-0 nova_compute[244014]: 2026-02-25 12:57:10.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Feb 25 12:57:10 compute-0 nova_compute[244014]: 2026-02-25 12:57:10.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:11 compute-0 NetworkManager[49836]: <info>  [1772024231.6225] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Feb 25 12:57:11 compute-0 NetworkManager[49836]: <info>  [1772024231.6241] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Feb 25 12:57:11 compute-0 ovn_controller[147040]: 2026-02-25T12:57:11Z|01466|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:57:11 compute-0 nova_compute[244014]: 2026-02-25 12:57:11.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:11 compute-0 ovn_controller[147040]: 2026-02-25T12:57:11Z|01467|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:57:11 compute-0 nova_compute[244014]: 2026-02-25 12:57:11.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:11 compute-0 nova_compute[244014]: 2026-02-25 12:57:11.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:11 compute-0 ceph-mon[76335]: pgmap v2298: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.084 244018 DEBUG nova.compute.manager [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.084 244018 DEBUG nova.compute.manager [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.667 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024217.6667678, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.668 244018 INFO nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Stopped (Lifecycle Event)
Feb 25 12:57:12 compute-0 nova_compute[244014]: 2026-02-25 12:57:12.694 244018 DEBUG nova.compute.manager [None req-1d0094c3-7f6d-42bb-b768-b3d5ca7c89a8 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:12 compute-0 podman[370659]: 2026-02-25 12:57:12.737510021 +0000 UTC m=+0.078004362 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 12:57:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 12:57:12 compute-0 podman[370660]: 2026-02-25 12:57:12.771402737 +0000 UTC m=+0.111879717 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:57:13 compute-0 nova_compute[244014]: 2026-02-25 12:57:13.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:13 compute-0 ceph-mon[76335]: pgmap v2299: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 12:57:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:57:15 compute-0 nova_compute[244014]: 2026-02-25 12:57:15.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:15 compute-0 nova_compute[244014]: 2026-02-25 12:57:15.581 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:57:15 compute-0 nova_compute[244014]: 2026-02-25 12:57:15.582 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:15 compute-0 nova_compute[244014]: 2026-02-25 12:57:15.606 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:15 compute-0 ceph-mon[76335]: pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:57:15 compute-0 nova_compute[244014]: 2026-02-25 12:57:15.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:16.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:57:17 compute-0 ceph-mon[76335]: pgmap v2301: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 12:57:17 compute-0 nova_compute[244014]: 2026-02-25 12:57:17.974 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024222.973289, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:17 compute-0 nova_compute[244014]: 2026-02-25 12:57:17.975 244018 INFO nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Stopped (Lifecycle Event)
Feb 25 12:57:18 compute-0 nova_compute[244014]: 2026-02-25 12:57:18.012 244018 DEBUG nova.compute.manager [None req-7f8b5dbb-f109-416e-9e2d-95bd8d0312d8 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Feb 25 12:57:19 compute-0 ovn_controller[147040]: 2026-02-25T12:57:19Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 12:57:19 compute-0 ovn_controller[147040]: 2026-02-25T12:57:19Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 12:57:19 compute-0 ceph-mon[76335]: pgmap v2302: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Feb 25 12:57:20 compute-0 nova_compute[244014]: 2026-02-25 12:57:20.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 25 12:57:20 compute-0 nova_compute[244014]: 2026-02-25 12:57:20.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:21 compute-0 ceph-mon[76335]: pgmap v2303: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 25 12:57:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:57:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:23 compute-0 ceph-mon[76335]: pgmap v2304: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 12:57:24 compute-0 nova_compute[244014]: 2026-02-25 12:57:24.560 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:25 compute-0 nova_compute[244014]: 2026-02-25 12:57:25.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:25 compute-0 nova_compute[244014]: 2026-02-25 12:57:25.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:25 compute-0 ceph-mon[76335]: pgmap v2305: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:27 compute-0 ceph-mon[76335]: pgmap v2306: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:29 compute-0 ceph-mon[76335]: pgmap v2307: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:30 compute-0 nova_compute[244014]: 2026-02-25 12:57:30.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 25 12:57:30 compute-0 nova_compute[244014]: 2026-02-25 12:57:30.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:57:31
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.365 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.366 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.412 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.690 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.700 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:57:31 compute-0 nova_compute[244014]: 2026-02-25 12:57:31.701 244018 INFO nova.compute.claims [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:57:31 compute-0 ceph-mon[76335]: pgmap v2308: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.094 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224785933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.662 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.668 244018 DEBUG nova.compute.provider_tree [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.700 244018 DEBUG nova.scheduler.client.report [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.724 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.725 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:57:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.777 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.778 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.799 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.821 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.905 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.907 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.908 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating image(s)
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.933 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.961 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.987 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2224785933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:32 compute-0 nova_compute[244014]: 2026-02-25 12:57:32.992 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.007282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253007314, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 626, "num_deletes": 251, "total_data_size": 726796, "memory_usage": 739496, "flush_reason": "Manual Compaction"}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253016552, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 720263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49047, "largest_seqno": 49672, "table_properties": {"data_size": 716874, "index_size": 1297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7810, "raw_average_key_size": 19, "raw_value_size": 710097, "raw_average_value_size": 1762, "num_data_blocks": 58, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024209, "oldest_key_time": 1772024209, "file_creation_time": 1772024253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 9326 microseconds, and 2445 cpu microseconds.
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.016604) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 720263 bytes OK
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.016623) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022028) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022051) EVENT_LOG_v1 {"time_micros": 1772024253022044, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 723411, prev total WAL file size 723411, number of live WAL files 2.
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(703KB)], [113(8925KB)]
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253022637, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 9859602, "oldest_snapshot_seqno": -1}
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.057 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.058 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.058 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.059 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.083 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.086 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6970 keys, 8143845 bytes, temperature: kUnknown
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253170628, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8143845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8099996, "index_size": 25305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 181671, "raw_average_key_size": 26, "raw_value_size": 7978292, "raw_average_value_size": 1144, "num_data_blocks": 981, "num_entries": 6970, "num_filter_entries": 6970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.170875) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8143845 bytes
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.178202) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.6 rd, 55.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(25.0) write-amplify(11.3) OK, records in: 7484, records dropped: 514 output_compression: NoCompression
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.178234) EVENT_LOG_v1 {"time_micros": 1772024253178220, "job": 68, "event": "compaction_finished", "compaction_time_micros": 148077, "compaction_time_cpu_micros": 26575, "output_level": 6, "num_output_files": 1, "total_output_size": 8143845, "num_input_records": 7484, "num_output_records": 6970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253178491, "job": 68, "event": "table_file_deletion", "file_number": 115}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253180077, "job": 68, "event": "table_file_deletion", "file_number": 113}
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.237 244018 DEBUG nova.policy [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:57:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:33 compute-0 nova_compute[244014]: 2026-02-25 12:57:33.954 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Successfully created port: dcb845b3-f7fb-449b-b027-65efdcdcf6ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.011 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:34 compute-0 ceph-mon[76335]: pgmap v2309: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.079 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.160 244018 DEBUG nova.objects.instance [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Ensure instance console log exists: /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.655 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Successfully updated port: dcb845b3-f7fb-449b-b027-65efdcdcf6ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.690 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.745 244018 DEBUG nova.compute.manager [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.746 244018 DEBUG nova.compute.manager [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.746 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.907 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:57:34 compute-0 nova_compute[244014]: 2026-02-25 12:57:34.907 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.158 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:57:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722102778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.480 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.553 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.554 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.791 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.793 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3366MB free_disk=59.94174979068339GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.794 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.794 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.932 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6185e497-8422-4a5f-a98a-865484d53d4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:57:35 compute-0 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:57:36 compute-0 ceph-mon[76335]: pgmap v2310: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 12:57:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2722102778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.108 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.273 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.294 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.295 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance network_info: |[{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.296 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.297 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.302 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start _get_guest_xml network_info=[{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.310 244018 WARNING nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.316 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.317 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.322 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.323 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.324 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.324 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.325 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.325 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.327 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.327 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
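With no topology constraints from the flavor or image (all three limits left at the 65536 default), the search amounts to factoring the vCPU count, and for 1 vCPU the only candidate is sockets=1, cores=1, threads=1. An illustrative re-creation of the enumeration (nova's own version additionally orders candidates against the preferred topology):

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # Every (sockets, cores, threads) triple whose product equals the
        # vCPU count and respects the per-dimension limits is a candidate.
        out = []
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        out.append((s, c, t))
        return out

    assert possible_topologies(1) == [(1, 1, 1)]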
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.334 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3958285174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.685 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.691 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.711 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
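Placement derives schedulable capacity as (total - reserved) * allocation_ratio, so the inventory reported above works out to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk on this 8-core host:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        capacity = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2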
Feb 25 12:57:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 12:57:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621989641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.904 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
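The driver shells out to the ceph CLI for the monitor map; the address it gets back (192.168.122.100:6789) is what later lands in the <host> elements of the RBD disks in the guest XML. Roughly the same call, stand-alone:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    mons = json.loads(out)["mons"]
    print([m["public_addr"] for m in mons])   # e.g. ['192.168.122.100:6789/0']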
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.928 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.933 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:57:36 compute-0 nova_compute[244014]: 2026-02-25 12:57:36.961 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3958285174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2621989641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/429612525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.452 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.453 244018 DEBUG nova.virt.libvirt.vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:32Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.454 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.454 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
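The Converting/Converted pair above maps the Neutron-shaped dict onto an os-vif VIFOpenVSwitch object. A stand-in dataclass showing which fields travel where (illustrative only, not the real os_vif class):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        vif_name: str
        bridge_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif(vif):
        return VIFOpenVSwitch(
            id=vif["id"],                               # Neutron port UUID
            address=vif["address"],                     # MAC
            vif_name=vif["devname"],                    # tap device name
            bridge_name=vif["details"]["bridge_name"],  # br-int
            has_traffic_filtering=vif["details"]["port_filter"],
            active=vif["active"],
        )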
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.455 244018 DEBUG nova.objects.instance [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.509 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <uuid>0389cfa0-085b-4e4e-8d61-95d0b91c413e</uuid>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <name>instance-0000008c</name>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:name>tempest-TestNetworkBasicOps-server-1742784697</nova:name>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:57:36</nova:creationTime>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <nova:port uuid="dcb845b3-f7fb-449b-b027-65efdcdcf6ec">
Feb 25 12:57:37 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <system>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="serial">0389cfa0-085b-4e4e-8d61-95d0b91c413e</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="uuid">0389cfa0-085b-4e4e-8d61-95d0b91c413e</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </system>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <os>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </os>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <features>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </features>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk">
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config">
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:37 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:89:15:55"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <target dev="tapdcb845b3-f7"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/console.log" append="off"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <video>
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </video>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:57:37 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:57:37 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:57:37 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:57:37 compute-0 nova_compute[244014]: </domain>
Feb 25 12:57:37 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
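Two details of the dump worth calling out: libvirt's <memory> element defaults to KiB, so 131072 is exactly the flavor's 128 MiB, and both disks point at RBD volumes rather than local files. A quick sanity pass over a saved copy of such a dump (the domain.xml path is hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse("domain.xml").getroot()
    print(dom.findtext("name"))                        # instance-0000008c
    print(int(dom.findtext("memory")) // 1024, "MiB")  # 128 MiB
    for src in dom.iter("source"):
        if src.get("protocol") == "rbd":
            print("rbd volume:", src.get("name"))      # vms/<uuid>_disk[...]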
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Preparing to wait for external event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.511 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
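prepare_for_instance_event registers a waiter for network-vif-plugged before the VIF is actually plugged, so Neutron's notification cannot race past it; spawn later blocks on that event with a timeout before resuming the guest. The core pattern, reduced to an illustrative threading sketch:

    import threading

    events = {}

    def prepare(instance_uuid, tag):
        # Register (or fetch) the waiter; the real code does this under a lock.
        return events.setdefault((instance_uuid, tag), threading.Event())

    ev = prepare("0389cfa0-085b-4e4e-8d61-95d0b91c413e", "network-vif-plugged")
    # ... plug the VIF and define the domain here ...
    if not ev.wait(timeout=300):
        raise TimeoutError("network-vif-plugged never arrived")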
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.511 244018 DEBUG nova.virt.libvirt.vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:32Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG os_vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcb845b3-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcb845b3-f7, col_values=(('external_ids', {'iface-id': 'dcb845b3-f7fb-449b-b027-65efdcdcf6ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:15:55', 'vm-uuid': '0389cfa0-085b-4e4e-8d61-95d0b91c413e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
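Those two transaction commands are the whole plug: ensure the tap port exists on br-int, then stamp the Interface with the external_ids that ovn-controller matches on (iface-id is the Neutron port UUID), which triggers the lport claim seen further down. The ovs-vsctl equivalent, for reference:

    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapdcb845b3-f7",
        "--", "set", "Interface", "tapdcb845b3-f7",
        "external_ids:iface-id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:89:15:55",
        "external_ids:vm-uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e",
    ], check=True)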
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:37 compute-0 NetworkManager[49836]: <info>  [1772024257.5202] manager: (tapdcb845b3-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.528 244018 INFO os_vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7')
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:89:15:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.579 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Using config drive
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.599 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.958 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.958 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.959 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.982 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:57:37 compute-0 nova_compute[244014]: 2026-02-25 12:57:37.995 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating config drive at /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.000 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpclqw_7j4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
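The mkisofs call packages a config-2 labelled ISO9660 image from a temporary tree laid out per the OpenStack config-drive convention (openstack/latest/meta_data.json and friends), which cloud-init in the guest mounts by label. A sketch with placeholder metadata content:

    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        latest = pathlib.Path(tmp, "openstack", "latest")
        latest.mkdir(parents=True)
        (latest / "meta_data.json").write_text('{"uuid": "0389cfa0-..."}')
        subprocess.run(
            ["mkisofs", "-o", "disk.config", "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", tmp],
            check=True)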
Feb 25 12:57:38 compute-0 ceph-mon[76335]: pgmap v2311: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 12:57:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/429612525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.046 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.047 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.068 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.133 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpclqw_7j4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.163 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.167 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.305 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.306 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deleting local config drive /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config because it was imported into RBD.
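Because this deployment keeps instance disks in Ceph, the freshly built ISO is imported as a format-2 RBD image into the vms pool and the local copy is removed; that is what makes the cdrom's <source protocol="rbd" ..._disk.config> in the domain XML resolvable. The same two steps as plain calls:

    import os
    import subprocess

    subprocess.run(
        ["rbd", "import", "--pool", "vms", "disk.config",
         "0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove("disk.config")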
Feb 25 12:57:38 compute-0 kernel: tapdcb845b3-f7: entered promiscuous mode
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.3587] manager: (tapdcb845b3-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Feb 25 12:57:38 compute-0 ovn_controller[147040]: 2026-02-25T12:57:38Z|01468|binding|INFO|Claiming lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec for this chassis.
Feb 25 12:57:38 compute-0 ovn_controller[147040]: 2026-02-25T12:57:38Z|01469|binding|INFO|dcb845b3-f7fb-449b-b027-65efdcdcf6ec: Claiming fa:16:3e:89:15:55 10.100.0.8
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.370 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:57:38 compute-0 ovn_controller[147040]: 2026-02-25T12:57:38Z|01470|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec up in Southbound
Feb 25 12:57:38 compute-0 ovn_controller[147040]: 2026-02-25T12:57:38Z|01471|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec ovn-installed in OVS
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 bound to our chassis
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c261236-9e75-404c-ae2b-04691f3dc670
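On first use of a network on a chassis, the metadata agent provisions one ovnmeta-<network-uuid> namespace and wires a veth pair into br-int (tap6c261236-90 on the host side, -91 inside the namespace, per the lines below); within that namespace it then serves the 169.254.169.254 metadata endpoint, typically via a per-network haproxy. Commands to inspect the result, using the names from this log:

    import subprocess

    ns = "ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "addr", "show"], check=True)
    subprocess.run(["ip", "netns", "exec", ns, "ip", "route"], check=True)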
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.387 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1e0706-1745-4dea-ba3a-24906a867e21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.388 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c261236-91 in ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.393 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c261236-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.393 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5b63a1-7322-4a7a-87ba-b3d646bbbc7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 systemd-machined[210048]: New machine qemu-172-instance-0000008c.
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.394 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d826250-43e2-42f6-8f69-c9c43d195def]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.407 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9f8eb4-3220-4eac-b0fa-292bf482f9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008c.
Feb 25 12:57:38 compute-0 systemd-udevd[371073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.423 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b56026a5-a0ed-4bbe-85bd-3a7e85c35a21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.4369] device (tapdcb845b3-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.4374] device (tapdcb845b3-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.457 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff49975-e861-4945-81c0-fa5b857d6591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.4657] manager: (tap6c261236-90): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35679de3-49e9-4cee-a2f7-c1f7b9db9a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.500 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9a0019-08a0-43f8-9280-1792fbb757d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.504 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c199634b-b8b0-47b8-8154-2eb0c83105ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.5264] device (tap6c261236-90): carrier: link connected
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.531 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a0920e65-1d11-4220-bb66-0a6b99abade1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f017312-89ae-4167-8888-6b6e5de785fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c261236-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4c:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622793, 'reachable_time': 28503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371103, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa084513-65bf-4afe-ae68-58dd7c3f43e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4ce3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622793, 'tstamp': 622793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371104, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26f85744-7700-4467-9dce-2f0229385b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c261236-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4c:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622793, 'reachable_time': 28503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371105, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
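[Editor's note] The large privsep replies above are pyroute2 netlink messages: RTM_NEWLINK link dumps and an RTM_NEWADDR address dump for tap6c261236-91, fetched inside the ovnmeta-* namespace (the 'target' field in each header). Each message carries an 'attrs' list of [NAME, value] pairs that pyroute2 exposes via get_attr(). A minimal sketch of the same query, assuming pyroute2 is installed and the namespace from the log exists on the host:

```python
# Sketch: read link and address state for the tap device inside its
# network namespace, roughly what the agent's privsep helper returns.
# Namespace and interface names are copied from the log above and will
# differ on another host; requires pyroute2 and root privileges.
from pyroute2 import NetNS

ns = NetNS('ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670')
try:
    for link in ns.get_links():
        if link.get_attr('IFLA_IFNAME') == 'tap6c261236-91':
            print(link.get_attr('IFLA_OPERSTATE'),   # e.g. 'UP'
                  link.get_attr('IFLA_ADDRESS'))     # e.g. 'fa:16:3e:dc:4c:e3'
    for addr in ns.get_addr(index=2):                # ifindex 2, as in the log
        print(addr.get_attr('IFA_ADDRESS'))          # the fe80:: link-local
finally:
    ns.close()
```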
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae70cec1-36fc-436c-b50c-3bc62b64b4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4624cc1b-d6bf-4f27-8b6c-60c3fc04dd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c261236-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c261236-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 kernel: tap6c261236-90: entered promiscuous mode
Feb 25 12:57:38 compute-0 NetworkManager[49836]: <info>  [1772024258.6471] manager: (tap6c261236-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c261236-90, col_values=(('external_ids', {'iface-id': 'f9b1a684-77d3-4856-bba2-cff43e072272'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
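[Editor's note] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) detach the tap port from br-ex if it is there (here a no-op, hence "Transaction caused no change"), plug it into br-int, and point external_ids:iface-id at the metadata port UUID so ovn-controller can bind it. A minimal sketch of the same transaction issued directly through ovsdbapp, assuming the default local ovsdb-server socket path:

```python
# Sketch of the ovsdbapp calls behind the DelPort/AddPort/DbSet commands
# logged above. The socket path is an assumption (default local
# ovsdb-server); port, bridge and iface-id values are from the log.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tap6c261236-90', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tap6c261236-90', may_exist=True))
    txn.add(ovs.db_set('Interface', 'tap6c261236-90',
                       ('external_ids',
                        {'iface-id': 'f9b1a684-77d3-4856-bba2-cff43e072272'})))
```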
Feb 25 12:57:38 compute-0 ovn_controller[147040]: 2026-02-25T12:57:38Z|01472|binding|INFO|Releasing lport f9b1a684-77d3-4856-bba2-cff43e072272 from this chassis (sb_readonly=0)
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.652 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d3c747-a914-490e-8707-5b202ce8c7e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.654 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-6c261236-9e75-404c-ae2b-04691f3dc670
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 6c261236-9e75-404c-ae2b-04691f3dc670
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
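[Editor's note] The rendered configuration binds the proxy to 169.254.169.254:80 inside the namespace, forwards to a backend address starting with '/' (which haproxy treats as a UNIX socket, here the shared /var/lib/neutron/metadata_proxy socket), and stamps each request with X-OVN-Network-ID so the metadata service can resolve the requesting network. A hedged way to verify the proxy from the host once the launch below succeeds (namespace name from the log; requires root plus the ip and curl binaries):

```python
# Sketch: probe the metadata proxy inside its namespace, roughly what a
# manual smoke test would do. 169.254.169.254 is the standard metadata
# address; the namespace name is copied from the log above.
import subprocess

ns = 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670'
out = subprocess.run(
    ['ip', 'netns', 'exec', ns,
     'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
     'http://169.254.169.254/'],
    capture_output=True, text=True, check=True)
print('metadata proxy answered HTTP', out.stdout)
```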
Feb 25 12:57:38 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.655 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'env', 'PROCESS_TAG=haproxy-6c261236-9e75-404c-ae2b-04691f3dc670', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c261236-9e75-404c-ae2b-04691f3dc670.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:57:38 compute-0 nova_compute[244014]: 2026-02-25 12:57:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:38 compute-0 podman[371138]: 2026-02-25 12:57:38.976070717 +0000 UTC m=+0.056110823 container create 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:57:39 compute-0 systemd[1]: Started libpod-conmon-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope.
Feb 25 12:57:39 compute-0 podman[371138]: 2026-02-25 12:57:38.942034917 +0000 UTC m=+0.022075033 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e765a2b026bd301f1d6a49133c1f0c94503a7d34a089d5f615c08ee5f0bc83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:39 compute-0 podman[371138]: 2026-02-25 12:57:39.059752098 +0000 UTC m=+0.139792224 container init 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 12:57:39 compute-0 podman[371138]: 2026-02-25 12:57:39.065004076 +0000 UTC m=+0.145044182 container start 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:57:39 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : New worker (371201) forked
Feb 25 12:57:39 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : Loading success.
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.136 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024259.1354146, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.136 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Started (Lifecycle Event)
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.155 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.160 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024259.135805, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.160 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Paused (Lifecycle Event)
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.209 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.213 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:57:39 compute-0 nova_compute[244014]: 2026-02-25 12:57:39.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] During sync_power_state the instance has a pending task (spawning). Skip.
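[Editor's note] The numeric states in the "Synchronizing instance power state" lines come from nova.compute.power_state: the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED), which is normal mid-spawn, so the sync is skipped; after the Resumed event below the VM power_state becomes 1 (RUNNING). For reference, the mapping as defined in nova (abridged from nova/compute/power_state.py):

```python
# Power-state integers used by the sync_power_state log lines.
NOSTATE = 0     # DB power_state before the guest first reports in
RUNNING = 1     # reported once the guest resumes
PAUSED = 3      # reported while the guest is still being built
SHUTDOWN = 4
CRASHED = 6
SUSPENDED = 7
```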
Feb 25 12:57:40 compute-0 ceph-mon[76335]: pgmap v2312: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:57:40 compute-0 nova_compute[244014]: 2026-02-25 12:57:40.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:57:42 compute-0 ceph-mon[76335]: pgmap v2313: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.333 244018 DEBUG nova.compute.manager [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.333 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG nova.compute.manager [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Processing event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.335 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.338 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024262.337872, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Resumed (Lifecycle Event)
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.339 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.342 244018 INFO nova.virt.libvirt.driver [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance spawned successfully.
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.342 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.360 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.364 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.369 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.369 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.370 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.394 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.421 244018 INFO nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 9.52 seconds to spawn the instance on the hypervisor.
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.422 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.665 244018 INFO nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 11.21 seconds to build instance.
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.695 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215693877529972 of space, bias 1.0, pg target 0.33647081632589915 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942067356138187 of space, bias 1.0, pg target 0.7482620206841456 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3702431328712579e-06 of space, bias 4.0, pg target 0.0016442917594455095 quantized to 16 (current 16)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
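[Editor's note] The pg_autoscaler lines above are self-consistent arithmetic: pg target = capacity ratio x bias x a per-cluster PG budget, then quantized to a power of two; the final pg_num only changes when the target diverges from the current value by the autoscaler's threshold, which is why these mostly-empty pools stay at 1, 16 or 32. Every line fits a budget of 300, which matches the assumption of mon_target_pg_per_osd = 100 and 3 OSDs in this 60 GiB cluster. A worked check under that assumption:

```python
# Reproduce the "pg target" arithmetic from the log. The factor 300 is
# an assumption (mon_target_pg_per_osd=100 x 3 OSDs) that matches every
# autoscaler line above to full precision.
FACTOR = 100 * 3

for pool, ratio, bias in [('.mgr', 7.185749983720779e-06, 1.0),
                          ('vms', 0.0011215693877529972, 1.0),
                          ('cephfs.cephfs.meta', 1.3702431328712579e-06, 4.0)]:
    print(pool, ratio * bias * FACTOR)
# .mgr               -> 0.0021557249951162337 (matches the log)
# vms                -> 0.33647081632589915   (matches the log)
# cephfs.cephfs.meta -> 0.0016442917594455095 (matches the log)
```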
Feb 25 12:57:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:42 compute-0 nova_compute[244014]: 2026-02-25 12:57:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.529 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.529 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.550 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.635 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.635 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.644 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.644 244018 INFO nova.compute.claims [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:57:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:43 compute-0 podman[371210]: 2026-02-25 12:57:43.780567136 +0000 UTC m=+0.110149828 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 12:57:43 compute-0 nova_compute[244014]: 2026-02-25 12:57:43.798 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:43 compute-0 podman[371211]: 2026-02-25 12:57:43.80340247 +0000 UTC m=+0.135521874 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
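[Editor's note] The two health_status=healthy events above come from podman's periodic healthchecks, which run the /openstack/healthcheck script mounted into each container (see the 'healthcheck' key in the config_data). The same check can be triggered by hand; a small sketch, assuming podman is on the host and the container names from the log:

```python
# Sketch: run the same container healthcheck podman executes on its
# timer. Exit code 0 means healthy; container names are from the log.
import subprocess

for name in ('ovn_metadata_agent', 'ovn_controller'):
    rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
    print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
```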
Feb 25 12:57:44 compute-0 ceph-mon[76335]: pgmap v2314: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:57:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513331914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.355 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.363 244018 DEBUG nova.compute.provider_tree [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.384 244018 DEBUG nova.scheduler.client.report [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
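[Editor's note] The inventory dict above shows how placement capacity is derived for this node: schedulable units are (total - reserved) x allocation_ratio per resource class, so the node offers 8 x 4.0 = 32 VCPU, (7679 - 512) x 1.0 = 7167 MB of RAM, and (59 - 1) x 0.9 = 52.2 GB of disk (the DISK_GB total of 59 itself comes from the ceph df call a few lines earlier). Worked sketch:

```python
# Effective capacity placement derives from the logged inventory:
# (total - reserved) * allocation_ratio for each resource class.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```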
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.406 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.407 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.472 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.473 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.505 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.519 244018 DEBUG nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.519 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.520 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.520 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.521 244018 DEBUG nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.521 244018 WARNING nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state active and task_state None.
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.526 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.650 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.652 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.653 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating image(s)
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.687 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.721 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.749 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.753 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.840 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
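[Editor's note] The prlimit wrapper above caps the probe at 1 GiB of address space (--as) and 30 s of CPU (--cpu) so a malformed image cannot wedge the compute service; the JSON that qemu-img emits is what nova parses for the image format and virtual size. A standalone sketch of the same probe, minus the oslo prlimit guard, with the cached base-image path from the log:

```python
# Sketch: the qemu-img probe nova runs. --force-share allows inspecting
# an image that may be open elsewhere; the path is from the log above.
import json
import subprocess

path = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
out = subprocess.check_output(
    ['qemu-img', 'info', path, '--force-share', '--output=json'], text=True)
info = json.loads(out)
print(info['format'], info['virtual-size'])
```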
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.842 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.843 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.843 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.870 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.874 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:44 compute-0 nova_compute[244014]: 2026-02-25 12:57:44.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3513331914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.173 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.256 244018 DEBUG nova.policy [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2248dda8be6e4a51ace44a9e66dc4b45', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '089b8f0ee9684ec69cbdc13d24262170', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.266 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] resizing rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.361 244018 DEBUG nova.objects.instance [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'migration_context' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.403 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.404 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Ensure instance console log exists: /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.405 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.405 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.406 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:45 compute-0 nova_compute[244014]: 2026-02-25 12:57:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:57:46 compute-0 ceph-mon[76335]: pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.849 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Successfully created port: 13dff95c-b96c-4657-9807-58964669e78a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.900 244018 DEBUG nova.compute.manager [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.901 244018 DEBUG nova.compute.manager [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.901 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.902 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:46 compute-0 nova_compute[244014]: 2026-02-25 12:57:46.902 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:47 compute-0 nova_compute[244014]: 2026-02-25 12:57:47.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:57:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:57:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:57:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:57:48 compute-0 ceph-mon[76335]: pgmap v2316: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 12:57:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:57:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.677 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Successfully updated port: 13dff95c-b96c-4657-9807-58964669e78a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.691 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.691 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.692 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:57:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.831 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.989 244018 DEBUG nova.compute.manager [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.990 244018 DEBUG nova.compute.manager [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:48 compute-0 nova_compute[244014]: 2026-02-25 12:57:48.991 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.021 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.022 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.040 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.443 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.461 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.462 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance network_info: |[{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.463 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.463 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.468 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start _get_guest_xml network_info=[{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.473 244018 WARNING nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.481 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.482 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.486 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.486 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.487 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.488 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.489 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.489 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.490 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.490 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.491 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.492 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.492 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.493 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.493 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.494 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:57:49 compute-0 nova_compute[244014]: 2026-02-25 12:57:49.499 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456607593' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.055 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.100 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.108 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:50 compute-0 ceph-mon[76335]: pgmap v2317: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 12:57:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1456607593' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:57:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692681946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.620 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.622 244018 DEBUG nova.virt.libvirt.vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:44Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.623 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.624 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.626 244018 DEBUG nova.objects.instance [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.664 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <uuid>621d2b1a-0d06-4a98-b252-2acafee3ba02</uuid>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <name>instance-0000008d</name>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043</nova:name>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:57:49</nova:creationTime>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:user uuid="2248dda8be6e4a51ace44a9e66dc4b45">tempest-TestSecurityGroupsBasicOps-370800561-project-member</nova:user>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:project uuid="089b8f0ee9684ec69cbdc13d24262170">tempest-TestSecurityGroupsBasicOps-370800561</nova:project>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <nova:port uuid="13dff95c-b96c-4657-9807-58964669e78a">
Feb 25 12:57:50 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <system>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="serial">621d2b1a-0d06-4a98-b252-2acafee3ba02</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="uuid">621d2b1a-0d06-4a98-b252-2acafee3ba02</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </system>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <os>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </os>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <features>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </features>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/621d2b1a-0d06-4a98-b252-2acafee3ba02_disk">
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config">
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </source>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:57:50 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:42:9b:25"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <target dev="tap13dff95c-b9"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/console.log" append="off"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <video>
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </video>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:57:50 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:57:50 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:57:50 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:57:50 compute-0 nova_compute[244014]: </domain>
Feb 25 12:57:50 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.667 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Preparing to wait for external event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.668 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.668 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.669 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.670 244018 DEBUG nova.virt.libvirt.vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:44Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.670 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.671 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.672 244018 DEBUG os_vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.673 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.674 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.684 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13dff95c-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13dff95c-b9, col_values=(('external_ids', {'iface-id': '13dff95c-b96c-4657-9807-58964669e78a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:9b:25', 'vm-uuid': '621d2b1a-0d06-4a98-b252-2acafee3ba02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
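
The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand, all with may_exist=True) are idempotent OVSDB writes. A minimal sketch of the equivalent manual steps via ovs-vsctl, driven from Python since that is what os-vif itself runs in; the tap name, iface-id, MAC and VM UUID are copied from the records above, and reproducing this by hand is an assumption, not part of the logged code path:

    import subprocess

    def vsctl(*args):
        # Thin wrapper; ovs-vsctl exits non-zero on failure.
        subprocess.run(("ovs-vsctl",) + args, check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand(bridge=br-int, port=tap13dff95c-b9, may_exist=True)
    vsctl("--may-exist", "add-port", "br-int", "tap13dff95c-b9")
    # DbSetCommand(table=Interface, record=tap13dff95c-b9, external_ids=...)
    vsctl("set", "Interface", "tap13dff95c-b9",
          'external_ids:iface-id="13dff95c-b96c-4657-9807-58964669e78a"',
          'external_ids:iface-status="active"',
          'external_ids:attached-mac="fa:16:3e:42:9b:25"',
          'external_ids:vm-uuid="621d2b1a-0d06-4a98-b252-2acafee3ba02"')

The external_ids:iface-id key is what lets ovn-controller match the OVS interface to its logical port, which is exactly the "Claiming lport" transition logged by ovn_controller a second later below.
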
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:50 compute-0 NetworkManager[49836]: <info>  [1772024270.6904] manager: (tap13dff95c-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.698 244018 INFO os_vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9')
Feb 25 12:57:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No VIF found with MAC fa:16:3e:42:9b:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.835 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Using config drive
Feb 25 12:57:50 compute-0 nova_compute[244014]: 2026-02-25 12:57:50.863 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2692681946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.338 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating config drive at /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.342 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7c74x7i6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.487 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7c74x7i6" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
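
The config drive is built as an ISO 9660 image labeled config-2, the volume label cloud-init probes for. A sketch of the same invocation with the flags from the record above; the source directory here is a hypothetical stand-in for Nova's temporary metadata tree, and the publisher string is passed as the single argument it actually is (processutils simply renders it without quotes in the log):

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "/tmp/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",      # volume label cloud-init searches for
         "/tmp/cfgdrive"],      # hypothetical pre-populated metadata tree
        check=True)
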
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.512 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.516 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.630 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.632 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.641 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.641 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deleting local config drive /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config because it was imported into RBD.
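
Because this deployment backs ephemeral storage with Ceph, the freshly built ISO is pushed into the vms pool and the local copy is deleted; the guest then attaches the RBD image instead of a local file. A sketch of those two steps, with every argument copied from the records above:

    import os
    import subprocess

    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config",
         "621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    # Mirrors "Deleting local config drive ... because it was imported into RBD."
    os.unlink("/var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config")
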
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.681 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:57:51 compute-0 kernel: tap13dff95c-b9: entered promiscuous mode
Feb 25 12:57:51 compute-0 NetworkManager[49836]: <info>  [1772024271.6917] manager: (tap13dff95c-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:51 compute-0 ovn_controller[147040]: 2026-02-25T12:57:51Z|01473|binding|INFO|Claiming lport 13dff95c-b96c-4657-9807-58964669e78a for this chassis.
Feb 25 12:57:51 compute-0 ovn_controller[147040]: 2026-02-25T12:57:51Z|01474|binding|INFO|13dff95c-b96c-4657-9807-58964669e78a: Claiming fa:16:3e:42:9b:25 10.100.0.8
Feb 25 12:57:51 compute-0 ovn_controller[147040]: 2026-02-25T12:57:51Z|01475|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a ovn-installed in OVS
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:51 compute-0 nova_compute[244014]: 2026-02-25 12:57:51.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:51 compute-0 systemd-udevd[371578]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:57:51 compute-0 ovn_controller[147040]: 2026-02-25T12:57:51Z|01476|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a up in Southbound
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.727 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:9b:25 10.100.0.8'], port_security=['fa:16:3e:42:9b:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '621d2b1a-0d06-4a98-b252-2acafee3ba02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '089b8f0ee9684ec69cbdc13d24262170', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0892ce4b-109e-43d1-a441-11b1be3535ea 1dfe9b6c-62fb-43ca-b23e-3d57cce238fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916a48b0-3c8c-48cf-bf68-7fcb782233f1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=13dff95c-b96c-4657-9807-58964669e78a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.729 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 13dff95c-b96c-4657-9807-58964669e78a in datapath f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd bound to our chassis
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.730 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd
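
The PortBindingUpdatedEvent repr above shows the shape of an ovsdbapp row event: a tuple of event types, a table name, and optional match conditions. A minimal sketch of such a watcher, assuming the surrounding agent wiring (IDL connection, event loop) already exists; the constructor arguments mirror the fields in the logged repr, and the run body is illustrative only:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React when a Port_Binding row gets claimed by a chassis."""

        def __init__(self):
            # (events, table, conditions) -- the fields shown in the log repr.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old carries only the changed columns; chassis going from [] to a
            # row is the "bound to our chassis" transition logged above.
            print('port %s bound' % row.logical_port)
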
Feb 25 12:57:51 compute-0 NetworkManager[49836]: <info>  [1772024271.7385] device (tap13dff95c-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:57:51 compute-0 NetworkManager[49836]: <info>  [1772024271.7393] device (tap13dff95c-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:57:51 compute-0 systemd-machined[210048]: New machine qemu-173-instance-0000008d.
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.742 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61223646-48d3-41c3-b9f2-86e09541c3db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.744 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1fe6aa3-21 in ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.747 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1fe6aa3-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.747 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f607099-1509-430c-9d8e-8130007da305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.748 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd849e52-4fc6-406e-a7c6-5b2f7a9cfb7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008d.
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.759 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c681f1-e547-465b-b838-e42432237e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25237499-6720-497c-aa3c-8c7da38e0a85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.804 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3839b749-9332-476b-b60f-7d197d25ce96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.809 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d230859-7eb6-4c3b-bcfa-eef989466922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 NetworkManager[49836]: <info>  [1772024271.8105] manager: (tapf1fe6aa3-20): new Veth device (/org/freedesktop/NetworkManager/Devices/617)
Feb 25 12:57:51 compute-0 systemd-udevd[371582]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.834 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c69ca2e7-4130-4ca3-9f40-e37c8e01bbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.837 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad3315-8d27-4fb3-a687-a349c3dbea92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 NetworkManager[49836]: <info>  [1772024271.8552] device (tapf1fe6aa3-20): carrier: link connected
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.861 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f504eb04-32e5-4d18-a561-a8b33aa3b097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.880 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f325f6-3d20-4c90-b85f-90ffabc1cf6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1fe6aa3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:c1:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624126, 'reachable_time': 42465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371612, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.897 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e58651b-e383-4afe-ae1f-3932eda1ea3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:c1b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624126, 'tstamp': 624126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371613, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9671ad0f-bf8e-4d2f-8563-2f63a8a7a36d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1fe6aa3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:c1:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624126, 'reachable_time': 42465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371614, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:51 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30486d14-e526-4385-ad06-9aa37fe44357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.002 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5a4e7e-007a-4124-9219-5adbff17d205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
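
Provisioning the metadata datapath comes down to a veth pair: one end (tapf1fe6aa3-21) moved into the ovnmeta-<network-uuid> namespace, the other (tapf1fe6aa3-20) left in the root namespace to be plugged into br-int, which is what the OVSDB transactions below do. A sketch of that plumbing in plain iproute2 terms, assuming root; the final address step is illustrative only, since the agent derives the real addressing from the port data rather than hardcoding it:

    import subprocess

    NS = "ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd"

    def run(*args):
        subprocess.run(args, check=True)

    run("ip", "netns", "add", NS)
    run("ip", "link", "add", "tapf1fe6aa3-20", "type", "veth",
        "peer", "name", "tapf1fe6aa3-21")
    run("ip", "link", "set", "tapf1fe6aa3-21", "netns", NS)
    run("ip", "link", "set", "tapf1fe6aa3-20", "up")
    run("ip", "-n", NS, "link", "set", "tapf1fe6aa3-21", "up")
    # Illustrative: the metadata endpoint address that haproxy later binds.
    run("ip", "-n", NS, "addr", "add", "169.254.169.254/32",
        "dev", "tapf1fe6aa3-21")
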
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1fe6aa3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1fe6aa3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:52 compute-0 NetworkManager[49836]: <info>  [1772024272.0069] manager: (tapf1fe6aa3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Feb 25 12:57:52 compute-0 kernel: tapf1fe6aa3-20: entered promiscuous mode
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.009 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1fe6aa3-20, col_values=(('external_ids', {'iface-id': 'd5efe0f3-2e55-4d47-9b3f-ed6e541466bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:52 compute-0 ovn_controller[147040]: 2026-02-25T12:57:52Z|01477|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.018 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6113e6-0501-48ef-9fe2-eb8a118b19ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.019 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:57:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.020 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'env', 'PROCESS_TAG=haproxy-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
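
The generated configuration makes haproxy the in-namespace face of the metadata service: it binds 169.254.169.254:80, tags each request with X-OVN-Network-ID so the agent can resolve the caller's network, and forwards to the Unix socket at /var/lib/neutron/metadata_proxy. Stripped of the rootwrap and PROCESS_TAG wrapping, the launch reduces to running haproxy inside the namespace; a sketch, assuming root and the config file already written as logged above:

    import subprocess

    NS = "ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd"
    CFG = ("/var/lib/neutron/ovn-metadata-proxy/"
           "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.conf")

    # haproxy daemonizes itself ("daemon" in the global section) and writes
    # the pidfile that the agent was polling for a moment earlier.
    subprocess.run(["ip", "netns", "exec", NS, "haproxy", "-f", CFG],
                   check=True)
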
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.117 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.11736, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.118 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Started (Lifecycle Event)
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG nova.compute.manager [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.202 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.202 244018 DEBUG nova.compute.manager [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Processing event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.203 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.208 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.212 244018 INFO nova.virt.libvirt.driver [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance spawned successfully.
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.212 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:57:52 compute-0 ceph-mon[76335]: pgmap v2318: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.284 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.289 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
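
The power-state synchronization above compares the integer cached in the database with what libvirt just reported; the 0 and 1 are the NOSTATE and RUNNING constants. For reference, the values as defined in nova.compute.power_state:

    # nova.compute.power_state constants (hex values as in the source tree)
    NOSTATE   = 0x00   # DB value before the first successful sync
    RUNNING   = 0x01   # what the "Started" lifecycle event maps to
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07
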
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.464 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.465 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.466 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.467 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.467 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.468 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:57:52 compute-0 podman[371684]: 2026-02-25 12:57:52.511722587 +0000 UTC m=+0.072082614 container create 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 12:57:52 compute-0 systemd[1]: Started libpod-conmon-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope.
Feb 25 12:57:52 compute-0 podman[371684]: 2026-02-25 12:57:52.467803248 +0000 UTC m=+0.028163275 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:57:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:57:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f481ca962f4dc0356ecf15f122766f43015783bde0babd55e306f3282801e38/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.599 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.599 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.1198149, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.600 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Paused (Lifecycle Event)
Feb 25 12:57:52 compute-0 podman[371684]: 2026-02-25 12:57:52.605537884 +0000 UTC m=+0.165897931 container init 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 12:57:52 compute-0 podman[371684]: 2026-02-25 12:57:52.610539645 +0000 UTC m=+0.170899662 container start 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 12:57:52 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : New worker (371705) forked
Feb 25 12:57:52 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : Loading success.
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.659 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.665 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.208287, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.665 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Resumed (Lifecycle Event)
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.709 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.726 244018 INFO nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 8.07 seconds to spawn the instance on the hypervisor.
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.726 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.734 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:57:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.834 244018 INFO nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 9.24 seconds to build instance.
Feb 25 12:57:52 compute-0 nova_compute[244014]: 2026-02-25 12:57:52.950 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:53 compute-0 ovn_controller[147040]: 2026-02-25T12:57:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:15:55 10.100.0.8
Feb 25 12:57:53 compute-0 ovn_controller[147040]: 2026-02-25T12:57:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:15:55 10.100.0.8
Feb 25 12:57:54 compute-0 ceph-mon[76335]: pgmap v2319: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.374 244018 DEBUG nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.376 244018 WARNING nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state active and task_state None.
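The pop_instance_event sequence above is a pop-or-warn dispatch: a waiter registered under the event name is woken if one exists; otherwise the event is logged as unexpected, which is what happens here because the instance is already active with no task in flight. A rough sketch of that shape, illustrative only, not nova.compute.manager itself:

    # Pop-or-warn dispatch for external instance events: the waiter
    # registers before the VIF is plugged; a late event finds no waiter
    # and is only logged, matching the WARNING line above.
    import threading

    _waiters = {}          # event name -> threading.Event
    _lock = threading.Lock()

    def prepare_for_event(name):
        with _lock:
            _waiters[name] = threading.Event()

    def dispatch(name):
        with _lock:
            waiter = _waiters.pop(name, None)
        if waiter is None:
            print(f"WARNING: received unexpected event {name}")
            return
        waiter.set()       # unblocks whoever is waiting on it

    dispatch("network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a")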
Feb 25 12:57:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 12:57:54 compute-0 nova_compute[244014]: 2026-02-25 12:57:54.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:57:55 compute-0 nova_compute[244014]: 2026-02-25 12:57:55.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:55 compute-0 nova_compute[244014]: 2026-02-25 12:57:55.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:57:56 compute-0 ceph-mon[76335]: pgmap v2320: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 12:57:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 12:57:56 compute-0 nova_compute[244014]: 2026-02-25 12:57:56.921 244018 DEBUG nova.compute.manager [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:57:56 compute-0 nova_compute[244014]: 2026-02-25 12:57:56.922 244018 DEBUG nova.compute.manager [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:57:56 compute-0 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:57:56 compute-0 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:57:56 compute-0 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:57:58 compute-0 ceph-mon[76335]: pgmap v2321: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 12:57:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:57:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Feb 25 12:58:00 compute-0 ceph-mon[76335]: pgmap v2322: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Feb 25 12:58:00 compute-0 nova_compute[244014]: 2026-02-25 12:58:00.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:00 compute-0 nova_compute[244014]: 2026-02-25 12:58:00.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 12:58:01 compute-0 nova_compute[244014]: 2026-02-25 12:58:01.498 244018 INFO nova.compute.manager [None req-57e40573-7566-492d-be2f-04f4c988e21f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 12:58:01 compute-0 nova_compute[244014]: 2026-02-25 12:58:01.507 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
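"can't concat NoneType to bytes" is Python's TypeError text for appending a None read result to a bytes buffer; nova.privsep.libvirt catches it and logs it as ignored. A defensive sketch of such a console-read loop, with read_chunk as a hypothetical reader, not Nova's actual code:

    # Guarding the bytes-accumulation step that produced the ignored
    # TypeError above: treat a None (failed) read the same as EOF.
    def drain_console(read_chunk):
        data = b""
        while True:
            chunk = read_chunk()   # may return bytes, b"" on EOF, or None on error
            if not chunk:          # treat None and b"" as end-of-stream
                break
            data += chunk          # safe: chunk is guaranteed bytes here
        return data

    print(drain_console(iter([b"login: ", None]).__next__))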
Feb 25 12:58:01 compute-0 anacron[164977]: Job `cron.monthly' started
Feb 25 12:58:01 compute-0 anacron[164977]: Job `cron.monthly' terminated
Feb 25 12:58:01 compute-0 anacron[164977]: Normal exit (3 jobs run)
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:02 compute-0 ceph-mon[76335]: pgmap v2323: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 12:58:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 25 12:58:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:04 compute-0 ovn_controller[147040]: 2026-02-25T12:58:04Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:9b:25 10.100.0.8
Feb 25 12:58:04 compute-0 ovn_controller[147040]: 2026-02-25T12:58:04Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:9b:25 10.100.0.8
Feb 25 12:58:04 compute-0 ceph-mon[76335]: pgmap v2324: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 25 12:58:04 compute-0 ovn_controller[147040]: 2026-02-25T12:58:04Z|01478|binding|INFO|Releasing lport f9b1a684-77d3-4856-bba2-cff43e072272 from this chassis (sb_readonly=0)
Feb 25 12:58:04 compute-0 ovn_controller[147040]: 2026-02-25T12:58:04Z|01479|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:58:04 compute-0 ovn_controller[147040]: 2026-02-25T12:58:04Z|01480|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 12:58:04 compute-0 nova_compute[244014]: 2026-02-25 12:58:04.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.184 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.185 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.213 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
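The network_info blob written to the instance cache at 12:58:05.185 is plain JSON. A short sketch walking the same structure to pull the fixed and floating addresses, trimmed to just the fields it reads:

    # Walking the instance_info_cache network_info shape from the log
    # entry above (same nesting, fewer keys).
    import json

    network_info = json.loads("""[{
      "id": "13dff95c-b96c-4657-9807-58964669e78a",
      "network": {"subnets": [{
          "cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.8",
                   "floating_ips": [{"address": "192.168.122.240"}]}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)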
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.651 244018 INFO nova.compute.manager [None req-135e1180-0535-491c-8f05-fdda71f2b806 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.658 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:58:05 compute-0 nova_compute[244014]: 2026-02-25 12:58:05.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:06 compute-0 ceph-mon[76335]: pgmap v2325: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 12:58:06 compute-0 nova_compute[244014]: 2026-02-25 12:58:06.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 12:58:07 compute-0 nova_compute[244014]: 2026-02-25 12:58:07.153 244018 INFO nova.compute.manager [None req-85fd1b40-bcc6-4c4d-bd13-3e6ba5d5e427 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 12:58:07 compute-0 nova_compute[244014]: 2026-02-25 12:58:07.159 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 12:58:08 compute-0 ceph-mon[76335]: pgmap v2326: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.559 244018 DEBUG nova.compute.manager [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.561 244018 DEBUG nova.compute.manager [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.561 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.562 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.562 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.603 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.604 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.605 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.605 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.606 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.607 244018 INFO nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Terminating instance
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.608 244018 DEBUG nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:58:08 compute-0 kernel: tapdcb845b3-f7 (unregistering): left promiscuous mode
Feb 25 12:58:08 compute-0 NetworkManager[49836]: <info>  [1772024288.6748] device (tapdcb845b3-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01481|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=0)
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01482|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down in Southbound
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01483|binding|INFO|Removing iface tapdcb845b3-f7 ovn-installed in OVS
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.694 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
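The "Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', ...)" line is ovsdbapp's row-event machinery comparing a southbound row change against registered hooks. The shape of such a hook as a sketch; the real PortBindingUpdatedEvent lives in neutron.agent.ovn.metadata.agent:

    # Minimal ovsdbapp RowEvent subclass watching Port_Binding updates,
    # mirroring the events/table/conditions repr in the log line above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # called with the new row and the old column values, as in the
            # old=Port_Binding(up=[True], chassis=[...]) tail of the log line
            print('port', row.logical_port, 'changed')

    handler = PortBindingUpdatedEvent()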
Feb 25 12:58:08 compute-0 sudo[371717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.697 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.702 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.704 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb901bd1-a520-4d95-a849-36e52f2ea172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.704 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 namespace which is not needed anymore
Feb 25 12:58:08 compute-0 sudo[371717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:08 compute-0 sudo[371717]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:08 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Feb 25 12:58:08 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Consumed 12.167s CPU time.
Feb 25 12:58:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:08 compute-0 systemd-machined[210048]: Machine qemu-172-instance-0000008c terminated.
Feb 25 12:58:08 compute-0 sudo[371750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:58:08 compute-0 sudo[371750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Feb 25 12:58:08 compute-0 kernel: tapdcb845b3-f7: entered promiscuous mode
Feb 25 12:58:08 compute-0 systemd-udevd[371744]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:58:08 compute-0 NetworkManager[49836]: <info>  [1772024288.8341] manager: (tapdcb845b3-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Feb 25 12:58:08 compute-0 kernel: tapdcb845b3-f7 (unregistering): left promiscuous mode
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01484|binding|INFO|Claiming lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec for this chassis.
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01485|binding|INFO|dcb845b3-f7fb-449b-b027-65efdcdcf6ec: Claiming fa:16:3e:89:15:55 10.100.0.8
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.846 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01486|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec ovn-installed in OVS
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01487|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec up in Southbound
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01488|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=1)
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01489|if_status|INFO|Dropped 9 log messages in last 453 seconds (most recently, 443 seconds ago) due to excessive rate
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01490|if_status|INFO|Not setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down as sb is readonly
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01491|binding|INFO|Removing iface tapdcb845b3-f7 ovn-installed in OVS
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01492|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=0)
Feb 25 12:58:08 compute-0 ovn_controller[147040]: 2026-02-25T12:58:08Z|01493|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down in Southbound
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.868 244018 INFO nova.virt.libvirt.driver [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance destroyed successfully.
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.868 244018 DEBUG nova.objects.instance [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:08 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : haproxy version is 2.8.14-c23fe91
Feb 25 12:58:08 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : path to executable is /usr/sbin/haproxy
Feb 25 12:58:08 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [WARNING]  (371198) : Exiting Master process...
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.872 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:08 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [ALERT]    (371198) : Current worker (371201) exited with code 143 (Terminated)
Feb 25 12:58:08 compute-0 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [WARNING]  (371198) : All workers exited. Exiting... (0)
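Worker exit code 143 in the haproxy ALERT above is the conventional 128 + signal-number encoding, i.e. SIGTERM (15) from the container being stopped, not a crash:

    # Exit code 143 = 128 + SIGTERM(15): the worker was terminated cleanly.
    import signal
    assert 128 + signal.SIGTERM == 143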
Feb 25 12:58:08 compute-0 systemd[1]: libpod-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope: Deactivated successfully.
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 12:58:08 compute-0 podman[371790]: 2026-02-25 12:58:08.88138246 +0000 UTC m=+0.073351590 container died 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.889 244018 DEBUG nova.virt.libvirt.vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:42Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.889 244018 DEBUG nova.network.os_vif_util [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.890 244018 DEBUG nova.network.os_vif_util [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.891 244018 DEBUG os_vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.894 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb845b3-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.901 244018 INFO os_vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7')
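The DelPortCommand transaction above is ovsdbapp's Open_vSwitch del_port being committed on os-vif's behalf. A sketch issuing the same command directly, with the db.sock path assumed to be the stock Open vSwitch default:

    # Same arguments as the logged command: port, bridge, if_exists.
    # Sketch only; requires a reachable local ovsdb-server socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))
    api.del_port('tapdcb845b3-f7', bridge='br-int', if_exists=True).execute(
        check_error=True)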
Feb 25 12:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c-userdata-shm.mount: Deactivated successfully.
Feb 25 12:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-57e765a2b026bd301f1d6a49133c1f0c94503a7d34a089d5f615c08ee5f0bc83-merged.mount: Deactivated successfully.
Feb 25 12:58:08 compute-0 podman[371790]: 2026-02-25 12:58:08.92500148 +0000 UTC m=+0.116970600 container cleanup 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 12:58:08 compute-0 systemd[1]: libpod-conmon-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope: Deactivated successfully.
Feb 25 12:58:08 compute-0 podman[371838]: 2026-02-25 12:58:08.989309134 +0000 UTC m=+0.044501266 container remove 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.993 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e01ec0bb-f73c-4994-b9f0-101480cd451d]: (4, ('Wed Feb 25 12:58:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 (3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c)\n3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c\nWed Feb 25 12:58:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 (3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c)\n3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a39c086-fe2a-4529-b153-6e2b1a403bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:08 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.996 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c261236-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:08 compute-0 nova_compute[244014]: 2026-02-25 12:58:08.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:08 compute-0 kernel: tap6c261236-90: left promiscuous mode
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.005 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2099a9f7-fcb4-46e3-9f56-ec7eab0efb18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51e4f6f6-f1a2-4441-a45e-0abbc77bbcb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.019 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49948001-0d20-4f6e-9523-2c3f85cdd22f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4feffb9d-edf4-4a16-a0d9-bfda378b1937]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622785, 'reachable_time': 22640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371871, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c261236\x2d9e75\x2d404c\x2dae2b\x2d04691f3dc670.mount: Deactivated successfully.
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[34860cb0-7a2a-484d-8313-cdcab1d0e9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.042 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.043 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d2dfa-8172-416f-ad73-47a181407889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.044 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.045 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:58:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.045 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8197ec67-fd18-4dae-bd32-cf304fbe0683]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
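The teardown sequence above ends with neutron's privileged remove_netns deleting the ovnmeta-6c261236-... namespace (the "Namespace ... deleted" line); under neutron's ip_lib this is pyroute2's netns helper. An equivalent standalone sketch, assuming root and that the namespace exists, not neutron's actual code:

    # Remove a named network namespace the way the privileged helper does.
    from pyroute2 import netns

    NS = 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670'
    if NS in netns.listnetns():
        netns.remove(NS)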
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.165 244018 INFO nova.virt.libvirt.driver [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deleting instance files /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e_del
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.166 244018 INFO nova.virt.libvirt.driver [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deletion of /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e_del complete
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.214 244018 INFO nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.216 244018 DEBUG oslo.service.loopingcall [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.217 244018 DEBUG nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:58:09 compute-0 nova_compute[244014]: 2026-02-25 12:58:09.217 244018 DEBUG nova.network.neutron [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:58:09 compute-0 sudo[371750]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:58:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:58:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:58:09 compute-0 sudo[371890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:58:09 compute-0 sudo[371890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:09 compute-0 sudo[371890]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:09 compute-0 sudo[371915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:58:09 compute-0 sudo[371915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.752831621 +0000 UTC m=+0.082858967 container create 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.704340183 +0000 UTC m=+0.034367569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:09 compute-0 systemd[1]: Started libpod-conmon-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope.
Feb 25 12:58:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.88464792 +0000 UTC m=+0.214675286 container init 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.894631471 +0000 UTC m=+0.224658817 container start 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.898315385 +0000 UTC m=+0.228342741 container attach 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:58:09 compute-0 zen_chaplygin[371968]: 167 167
Feb 25 12:58:09 compute-0 systemd[1]: libpod-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope: Deactivated successfully.
Feb 25 12:58:09 compute-0 conmon[371968]: conmon 57f794b9fc21a9bbca87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope/container/memory.events
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.903235734 +0000 UTC m=+0.233263110 container died 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db8b6cffbde6cf203ee21fb63f37ee113920fd5572a3489c13a7271172f4e07-merged.mount: Deactivated successfully.
Feb 25 12:58:09 compute-0 podman[371952]: 2026-02-25 12:58:09.948798799 +0000 UTC m=+0.278826175 container remove 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:58:09 compute-0 systemd[1]: libpod-conmon-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope: Deactivated successfully.
Feb 25 12:58:10 compute-0 podman[371994]: 2026-02-25 12:58:10.148085041 +0000 UTC m=+0.051475533 container create 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:58:10 compute-0 systemd[1]: Started libpod-conmon-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope.
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.208 244018 DEBUG nova.network.neutron [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:10 compute-0 podman[371994]: 2026-02-25 12:58:10.124402123 +0000 UTC m=+0.027792695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.235 244018 INFO nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 1.02 seconds to deallocate network for instance.
Feb 25 12:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.286 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.287 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.361 244018 DEBUG oslo_concurrency.processutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:10 compute-0 ceph-mon[76335]: pgmap v2327: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Feb 25 12:58:10 compute-0 podman[371994]: 2026-02-25 12:58:10.385513758 +0000 UTC m=+0.288904280 container init 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:58:10 compute-0 podman[371994]: 2026-02-25 12:58:10.397066194 +0000 UTC m=+0.300456676 container start 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.399 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.400 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:10 compute-0 podman[371994]: 2026-02-25 12:58:10.403930578 +0000 UTC m=+0.307321060 container attach 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.421 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.637 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.638 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.639 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.640 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.640 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.641 244018 WARNING nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state deleted and task_state None.
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.641 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.642 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.643 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.643 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.644 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.644 244018 WARNING nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state deleted and task_state None.
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.645 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-deleted-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.646 244018 INFO nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Neutron deleted interface dcb845b3-f7fb-449b-b027-65efdcdcf6ec; detaching it from the instance and deleting it from the info cache
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.646 244018 DEBUG nova.network.neutron [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.669 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Detach interface failed, port_id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec, reason: Instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 12:58:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:58:10 compute-0 awesome_gould[372010]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:58:10 compute-0 awesome_gould[372010]: --> All data devices are unavailable
Feb 25 12:58:10 compute-0 systemd[1]: libpod-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope: Deactivated successfully.
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 12:58:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894236462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.906 244018 DEBUG oslo_concurrency.processutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:10 compute-0 podman[372050]: 2026-02-25 12:58:10.907349888 +0000 UTC m=+0.034562726 container died 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.914 244018 DEBUG nova.compute.provider_tree [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990-merged.mount: Deactivated successfully.
Feb 25 12:58:10 compute-0 podman[372050]: 2026-02-25 12:58:10.945518205 +0000 UTC m=+0.072730983 container remove 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.949 244018 DEBUG nova.scheduler.client.report [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:58:10 compute-0 systemd[1]: libpod-conmon-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope: Deactivated successfully.
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.969 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:10 compute-0 sudo[371915]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:10 compute-0 nova_compute[244014]: 2026-02-25 12:58:10.995 244018 INFO nova.scheduler.client.report [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e
Feb 25 12:58:11 compute-0 sudo[372069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:58:11 compute-0 sudo[372069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:11 compute-0 sudo[372069]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:11 compute-0 nova_compute[244014]: 2026-02-25 12:58:11.089 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:11 compute-0 sudo[372094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:58:11 compute-0 sudo[372094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2894236462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.437815262 +0000 UTC m=+0.051907975 container create d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:58:11 compute-0 systemd[1]: Started libpod-conmon-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope.
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.415341138 +0000 UTC m=+0.029433881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.534786018 +0000 UTC m=+0.148878831 container init d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.544918553 +0000 UTC m=+0.159011266 container start d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.548258938 +0000 UTC m=+0.162351751 container attach d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:58:11 compute-0 cool_cohen[372148]: 167 167
Feb 25 12:58:11 compute-0 systemd[1]: libpod-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope: Deactivated successfully.
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.551306904 +0000 UTC m=+0.165399657 container died d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 12:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8df13f63c09405e79b335bf2f9bacf9906caefa28da0850fa60efc304e2aabcd-merged.mount: Deactivated successfully.
Feb 25 12:58:11 compute-0 podman[372132]: 2026-02-25 12:58:11.59337266 +0000 UTC m=+0.207465383 container remove d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:58:11 compute-0 systemd[1]: libpod-conmon-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope: Deactivated successfully.
Feb 25 12:58:11 compute-0 podman[372171]: 2026-02-25 12:58:11.78692491 +0000 UTC m=+0.060405005 container create a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:58:11 compute-0 systemd[1]: Started libpod-conmon-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope.
Feb 25 12:58:11 compute-0 podman[372171]: 2026-02-25 12:58:11.763305214 +0000 UTC m=+0.036785329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:11 compute-0 podman[372171]: 2026-02-25 12:58:11.887221479 +0000 UTC m=+0.160701644 container init a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 12:58:11 compute-0 podman[372171]: 2026-02-25 12:58:11.899786864 +0000 UTC m=+0.173266959 container start a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 12:58:11 compute-0 podman[372171]: 2026-02-25 12:58:11.903097737 +0000 UTC m=+0.176577832 container attach a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:58:12 compute-0 bold_shockley[372187]: {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     "0": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "devices": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "/dev/loop3"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             ],
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_name": "ceph_lv0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_size": "21470642176",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "name": "ceph_lv0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "tags": {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_name": "ceph",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.crush_device_class": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.encrypted": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.objectstore": "bluestore",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_id": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.vdo": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.with_tpm": "0"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             },
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "vg_name": "ceph_vg0"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         }
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     ],
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     "1": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "devices": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "/dev/loop4"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             ],
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_name": "ceph_lv1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_size": "21470642176",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "name": "ceph_lv1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "tags": {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_name": "ceph",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.crush_device_class": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.encrypted": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.objectstore": "bluestore",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_id": "1",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.vdo": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.with_tpm": "0"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             },
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "vg_name": "ceph_vg1"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         }
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     ],
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     "2": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "devices": [
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "/dev/loop5"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             ],
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_name": "ceph_lv2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_size": "21470642176",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "name": "ceph_lv2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "tags": {
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.cluster_name": "ceph",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.crush_device_class": "",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.encrypted": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.objectstore": "bluestore",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osd_id": "2",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.vdo": "0",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:                 "ceph.with_tpm": "0"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             },
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "type": "block",
Feb 25 12:58:12 compute-0 bold_shockley[372187]:             "vg_name": "ceph_vg2"
Feb 25 12:58:12 compute-0 bold_shockley[372187]:         }
Feb 25 12:58:12 compute-0 bold_shockley[372187]:     ]
Feb 25 12:58:12 compute-0 bold_shockley[372187]: }
Feb 25 12:58:12 compute-0 systemd[1]: libpod-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope: Deactivated successfully.
Feb 25 12:58:12 compute-0 podman[372171]: 2026-02-25 12:58:12.211052614 +0000 UTC m=+0.484532679 container died a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 12:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af-merged.mount: Deactivated successfully.
Feb 25 12:58:12 compute-0 podman[372171]: 2026-02-25 12:58:12.252564645 +0000 UTC m=+0.526044720 container remove a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 12:58:12 compute-0 systemd[1]: libpod-conmon-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope: Deactivated successfully.
Feb 25 12:58:12 compute-0 sudo[372094]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:12 compute-0 sudo[372208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:58:12 compute-0 sudo[372208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:12 compute-0 sudo[372208]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:12 compute-0 ceph-mon[76335]: pgmap v2328: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:58:12 compute-0 sudo[372233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:58:12 compute-0 sudo[372233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.732475943 +0000 UTC m=+0.056423903 container create 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:58:12 compute-0 systemd[1]: Started libpod-conmon-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope.
Feb 25 12:58:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.707846268 +0000 UTC m=+0.031794308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.82097804 +0000 UTC m=+0.144926060 container init 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.831236969 +0000 UTC m=+0.155184929 container start 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.835390576 +0000 UTC m=+0.159338546 container attach 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:58:12 compute-0 mystifying_sutherland[372286]: 167 167
Feb 25 12:58:12 compute-0 systemd[1]: libpod-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope: Deactivated successfully.
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.838545155 +0000 UTC m=+0.162493115 container died 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eef602e4ae2c43315a453425731352c2c38e36207eb0178e9ef676268ad32977-merged.mount: Deactivated successfully.
Feb 25 12:58:12 compute-0 nova_compute[244014]: 2026-02-25 12:58:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:12 compute-0 podman[372270]: 2026-02-25 12:58:12.896747467 +0000 UTC m=+0.220695437 container remove 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 12:58:12 compute-0 systemd[1]: libpod-conmon-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope: Deactivated successfully.
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.086445718 +0000 UTC m=+0.055440945 container create 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 12:58:13 compute-0 systemd[1]: Started libpod-conmon-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope.
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.062325458 +0000 UTC m=+0.031320705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:58:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.192127689 +0000 UTC m=+0.161122906 container init 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.206231567 +0000 UTC m=+0.175226804 container start 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.21273387 +0000 UTC m=+0.181729087 container attach 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:58:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:13 compute-0 lvm[372417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:58:13 compute-0 lvm[372423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:58:13 compute-0 lvm[372423]: VG ceph_vg1 finished
Feb 25 12:58:13 compute-0 lvm[372424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:58:13 compute-0 lvm[372424]: VG ceph_vg2 finished
Feb 25 12:58:13 compute-0 lvm[372417]: VG ceph_vg0 finished
Feb 25 12:58:13 compute-0 podman[372401]: 2026-02-25 12:58:13.891122506 +0000 UTC m=+0.069544033 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 12:58:13 compute-0 nova_compute[244014]: 2026-02-25 12:58:13.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:13 compute-0 lvm[372446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:58:13 compute-0 lvm[372446]: VG ceph_vg0 finished
Feb 25 12:58:13 compute-0 zealous_sanderson[372326]: {}
Feb 25 12:58:13 compute-0 podman[372403]: 2026-02-25 12:58:13.909868225 +0000 UTC m=+0.084699721 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:58:13 compute-0 systemd[1]: libpod-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Deactivated successfully.
Feb 25 12:58:13 compute-0 systemd[1]: libpod-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Consumed 1.126s CPU time.
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.943964276 +0000 UTC m=+0.912959493 container died 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 12:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e-merged.mount: Deactivated successfully.
Feb 25 12:58:13 compute-0 podman[372310]: 2026-02-25 12:58:13.983817661 +0000 UTC m=+0.952812858 container remove 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:58:13 compute-0 systemd[1]: libpod-conmon-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Deactivated successfully.
Feb 25 12:58:14 compute-0 sudo[372233]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:58:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:58:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:14 compute-0 sudo[372466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:58:14 compute-0 sudo[372466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:58:14 compute-0 sudo[372466]: pam_unix(sudo:session): session closed for user root
Feb 25 12:58:14 compute-0 ceph-mon[76335]: pgmap v2329: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 25 12:58:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:58:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:58:15 compute-0 nova_compute[244014]: 2026-02-25 12:58:15.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:15 compute-0 ovn_controller[147040]: 2026-02-25T12:58:15Z|01494|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:58:15 compute-0 ovn_controller[147040]: 2026-02-25T12:58:15Z|01495|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 12:58:15 compute-0 nova_compute[244014]: 2026-02-25 12:58:15.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:16.271 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:16 compute-0 nova_compute[244014]: 2026-02-25 12:58:16.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:16.273 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:58:16 compute-0 ceph-mon[76335]: pgmap v2330: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:58:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:58:18 compute-0 ceph-mon[76335]: pgmap v2331: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 12:58:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:58:18 compute-0 nova_compute[244014]: 2026-02-25 12:58:18.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:20 compute-0 ceph-mon[76335]: pgmap v2332: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 12:58:20 compute-0 nova_compute[244014]: 2026-02-25 12:58:20.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 28 op/s
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.compute.manager [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.compute.manager [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.837 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.837 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.839 244018 INFO nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Terminating instance
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.839 244018 DEBUG nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:58:21 compute-0 kernel: tap13dff95c-b9 (unregistering): left promiscuous mode
Feb 25 12:58:21 compute-0 NetworkManager[49836]: <info>  [1772024301.8892] device (tap13dff95c-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:58:21 compute-0 ovn_controller[147040]: 2026-02-25T12:58:21Z|01496|binding|INFO|Releasing lport 13dff95c-b96c-4657-9807-58964669e78a from this chassis (sb_readonly=0)
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:21 compute-0 ovn_controller[147040]: 2026-02-25T12:58:21Z|01497|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a down in Southbound
Feb 25 12:58:21 compute-0 ovn_controller[147040]: 2026-02-25T12:58:21Z|01498|binding|INFO|Removing iface tap13dff95c-b9 ovn-installed in OVS
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.901 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:21 compute-0 nova_compute[244014]: 2026-02-25 12:58:21.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:9b:25 10.100.0.8'], port_security=['fa:16:3e:42:9b:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '621d2b1a-0d06-4a98-b252-2acafee3ba02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '089b8f0ee9684ec69cbdc13d24262170', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0892ce4b-109e-43d1-a441-11b1be3535ea 1dfe9b6c-62fb-43ca-b23e-3d57cce238fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916a48b0-3c8c-48cf-bf68-7fcb782233f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=13dff95c-b96c-4657-9807-58964669e78a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 13dff95c-b96c-4657-9807-58964669e78a in datapath f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd unbound from our chassis
Feb 25 12:58:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.923 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:58:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.924 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6057bba7-51ba-4891-bfee-1ea69542b976]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:21 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.925 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd namespace which is not needed anymore
Feb 25 12:58:21 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Feb 25 12:58:21 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Consumed 12.776s CPU time.
Feb 25 12:58:21 compute-0 systemd-machined[210048]: Machine qemu-173-instance-0000008d terminated.
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : haproxy version is 2.8.14-c23fe91
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : path to executable is /usr/sbin/haproxy
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : Exiting Master process...
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : Exiting Master process...
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [ALERT]    (371703) : Current worker (371705) exited with code 143 (Terminated)
Feb 25 12:58:22 compute-0 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : All workers exited. Exiting... (0)
Feb 25 12:58:22 compute-0 systemd[1]: libpod-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope: Deactivated successfully.
Feb 25 12:58:22 compute-0 podman[372516]: 2026-02-25 12:58:22.064093312 +0000 UTC m=+0.048500829 container died 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.083 244018 INFO nova.virt.libvirt.driver [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance destroyed successfully.
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.084 244018 DEBUG nova.objects.instance [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'resources' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.103 244018 DEBUG nova.virt.libvirt.vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:52Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.103 244018 DEBUG nova.network.os_vif_util [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.104 244018 DEBUG nova.network.os_vif_util [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.105 244018 DEBUG os_vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f-userdata-shm.mount: Deactivated successfully.
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13dff95c-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f481ca962f4dc0356ecf15f122766f43015783bde0babd55e306f3282801e38-merged.mount: Deactivated successfully.
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.117 244018 INFO os_vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9')
Feb 25 12:58:22 compute-0 podman[372516]: 2026-02-25 12:58:22.123478327 +0000 UTC m=+0.107885814 container cleanup 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 12:58:22 compute-0 systemd[1]: libpod-conmon-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope: Deactivated successfully.
Feb 25 12:58:22 compute-0 podman[372563]: 2026-02-25 12:58:22.191872756 +0000 UTC m=+0.047742618 container remove 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[492e6f8e-16f8-4f9b-8252-7585c9ec961a]: (4, ('Wed Feb 25 12:58:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd (977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f)\n977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f\nWed Feb 25 12:58:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd (977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f)\n977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.199 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6db8123d-b289-4eb8-a458-67ada0063bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.201 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1fe6aa3-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:22 compute-0 kernel: tapf1fe6aa3-20: left promiscuous mode
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.211 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e67ba772-59e9-47ee-afce-cd36000c3a10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e923f7a1-ae12-4d49-bf5e-df514002d986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.227 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16f20823-2061-45f9-a018-a09142631096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.243 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7280a4d-e75a-491c-bb49-4f03aa6f1763]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624120, 'reachable_time': 43000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372588, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 systemd[1]: run-netns-ovnmeta\x2df1fe6aa3\x2d20a1\x2d49f5\x2d80e1\x2dd26b81a3a5cd.mount: Deactivated successfully.
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.248 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.248 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3806571b-e4b2-40fc-a44d-9bbf0f8f07db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.410 244018 INFO nova.virt.libvirt.driver [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deleting instance files /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02_del
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.411 244018 INFO nova.virt.libvirt.driver [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deletion of /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02_del complete
Feb 25 12:58:22 compute-0 ceph-mon[76335]: pgmap v2333: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 28 op/s
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.503 244018 INFO nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.504 244018 DEBUG oslo.service.loopingcall [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.505 244018 DEBUG nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.505 244018 DEBUG nova.network.neutron [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:58:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 29 KiB/s wr, 49 op/s
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.922 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.923 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:22 compute-0 nova_compute[244014]: 2026-02-25 12:58:22.965 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.130 244018 DEBUG nova.network.neutron [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.169 244018 INFO nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 0.66 seconds to deallocate network for instance.
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.214 244018 DEBUG nova.compute.manager [req-9e29473a-36d8-4d6f-8202-b721c47b5979 req-e688aa20-415d-4bba-bee5-6233f17e9688 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-deleted-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.225 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.226 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.324 244018 DEBUG oslo_concurrency.processutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.855 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024288.8528855, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.856 244018 INFO nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Stopped (Lifecycle Event)
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.860 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.861 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.861 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 WARNING nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state deleted and task_state None.
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.864 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.864 244018 WARNING nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state deleted and task_state None.
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.897 244018 DEBUG nova.compute.manager [None req-e28a871d-9205-4f24-97f5-b7ac8334e9e8 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565057763' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.962 244018 DEBUG oslo_concurrency.processutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.968 244018 DEBUG nova.compute.provider_tree [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:58:23 compute-0 nova_compute[244014]: 2026-02-25 12:58:23.993 244018 DEBUG nova.scheduler.client.report [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:58:24 compute-0 nova_compute[244014]: 2026-02-25 12:58:24.049 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:24 compute-0 nova_compute[244014]: 2026-02-25 12:58:24.085 244018 INFO nova.scheduler.client.report [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Deleted allocations for instance 621d2b1a-0d06-4a98-b252-2acafee3ba02
Feb 25 12:58:24 compute-0 nova_compute[244014]: 2026-02-25 12:58:24.183 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:24 compute-0 ceph-mon[76335]: pgmap v2334: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 29 KiB/s wr, 49 op/s
Feb 25 12:58:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/565057763' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 12:58:25 compute-0 nova_compute[244014]: 2026-02-25 12:58:25.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:26 compute-0 ceph-mon[76335]: pgmap v2335: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 12:58:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 12:58:27 compute-0 nova_compute[244014]: 2026-02-25 12:58:27.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:28 compute-0 ceph-mon[76335]: pgmap v2336: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 12:58:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 12:58:30 compute-0 ceph-mon[76335]: pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.763 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.787 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 6185e497-8422-4a5f-a98a-865484d53d4f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.788 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.788 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 12:58:30 compute-0 nova_compute[244014]: 2026-02-25 12:58:30.812 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:58:31
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:58:31 compute-0 ceph-mon[76335]: pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:32 compute-0 ovn_controller[147040]: 2026-02-25T12:58:32Z|01499|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.655 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.655 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.674 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.772 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.774 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.789 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.789 244018 INFO nova.compute.claims [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:58:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 12:58:32 compute-0 nova_compute[244014]: 2026-02-25 12:58:32.955 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250003891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.512 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.518 244018 DEBUG nova.compute.provider_tree [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.595 244018 DEBUG nova.scheduler.client.report [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.715 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.716 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:58:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.894 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.895 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:58:33 compute-0 ceph-mon[76335]: pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 12:58:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2250003891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.946 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:58:33 compute-0 nova_compute[244014]: 2026-02-25 12:58:33.991 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.250 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.252 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.252 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating image(s)
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.280 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.307 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.330 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.336 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.407 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.408 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.409 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.409 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.433 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.438 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 eadab8d0-1f24-4ace-b7b4-10329df53f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.596 244018 DEBUG nova.policy [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2236154a16ea4715a79c2e6f36e22e83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.720 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 eadab8d0-1f24-4ace-b7b4-10329df53f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.794 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] resizing rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:58:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.905 244018 DEBUG nova.objects.instance [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'migration_context' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Ensure instance console log exists: /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.939 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:34 compute-0 nova_compute[244014]: 2026-02-25 12:58:34.939 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.317 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Successfully created port: 98c45ccf-8678-4765-bf66-7a9e3dae8cac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.604 244018 DEBUG nova.compute.manager [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.605 244018 DEBUG nova.compute.manager [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.606 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.606 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.607 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.881 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.882 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.883 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.883 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.884 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.886 244018 INFO nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Terminating instance
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.888 244018 DEBUG nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.929 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.929 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.930 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 12:58:35 compute-0 kernel: tapa4d9181f-fe (unregistering): left promiscuous mode
Feb 25 12:58:35 compute-0 NetworkManager[49836]: <info>  [1772024315.9474] device (tapa4d9181f-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:58:35 compute-0 ceph-mon[76335]: pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:35 compute-0 ovn_controller[147040]: 2026-02-25T12:58:35Z|01500|binding|INFO|Releasing lport a4d9181f-fefa-4cde-81ea-c5e59433606c from this chassis (sb_readonly=0)
Feb 25 12:58:35 compute-0 ovn_controller[147040]: 2026-02-25T12:58:35Z|01501|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c down in Southbound
Feb 25 12:58:35 compute-0 ovn_controller[147040]: 2026-02-25T12:58:35Z|01502|binding|INFO|Removing iface tapa4d9181f-fe ovn-installed in OVS
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:35 compute-0 nova_compute[244014]: 2026-02-25 12:58:35.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.988 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c2:bb 10.100.0.11'], port_security=['fa:16:3e:ef:c2:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6185e497-8422-4a5f-a98a-865484d53d4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f23f1675-b7ff-4265-a011-0912c637d746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87d37268-c5dd-4380-a676-5b9940f82b8f 979c759f-0c66-4e81-a7f1-5a970907b9e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5ffac6-eab3-4eca-a87d-9bf8335b5de6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d9181f-fefa-4cde-81ea-c5e59433606c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.991 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d9181f-fefa-4cde-81ea-c5e59433606c in datapath f23f1675-b7ff-4265-a011-0912c637d746 unbound from our chassis
Feb 25 12:58:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.993 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f23f1675-b7ff-4265-a011-0912c637d746, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 12:58:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbed3e2-5708-4f87-b13f-07974deeb25e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.995 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 namespace which is not needed anymore
Feb 25 12:58:36 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Feb 25 12:58:36 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Consumed 15.708s CPU time.
Feb 25 12:58:36 compute-0 systemd-machined[210048]: Machine qemu-171-instance-0000008b terminated.
Feb 25 12:58:36 compute-0 kernel: tapa4d9181f-fe: entered promiscuous mode
Feb 25 12:58:36 compute-0 kernel: tapa4d9181f-fe (unregistering): left promiscuous mode
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.127 244018 INFO nova.virt.libvirt.driver [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance destroyed successfully.
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.128 244018 DEBUG nova.objects.instance [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:36 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : haproxy version is 2.8.14-c23fe91
Feb 25 12:58:36 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : path to executable is /usr/sbin/haproxy
Feb 25 12:58:36 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [WARNING]  (370489) : Exiting Master process...
Feb 25 12:58:36 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [ALERT]    (370489) : Current worker (370491) exited with code 143 (Terminated)
Feb 25 12:58:36 compute-0 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [WARNING]  (370489) : All workers exited. Exiting... (0)
Feb 25 12:58:36 compute-0 systemd[1]: libpod-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope: Deactivated successfully.
Feb 25 12:58:36 compute-0 podman[372823]: 2026-02-25 12:58:36.167897145 +0000 UTC m=+0.063308637 container died cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 12:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7-userdata-shm.mount: Deactivated successfully.
Feb 25 12:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc100cb099adcbdbcce587f0f6d881ddf87a2202a85f400719149b8997f75aa8-merged.mount: Deactivated successfully.
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.204 244018 DEBUG nova.virt.libvirt.vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:08Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.206 244018 DEBUG nova.network.os_vif_util [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.208 244018 DEBUG nova.network.os_vif_util [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.209 244018 DEBUG os_vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.212 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4d9181f-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:36 compute-0 podman[372823]: 2026-02-25 12:58:36.216464575 +0000 UTC m=+0.111876057 container cleanup cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.220 244018 INFO os_vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe')
Feb 25 12:58:36 compute-0 systemd[1]: libpod-conmon-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope: Deactivated successfully.
Feb 25 12:58:36 compute-0 podman[372863]: 2026-02-25 12:58:36.312169104 +0000 UTC m=+0.067170655 container remove cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.319 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1290c45d-9b1c-4227-930b-1d57c33d9ffd]: (4, ('Wed Feb 25 12:58:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 (cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7)\ncdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7\nWed Feb 25 12:58:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 (cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7)\ncdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16ce3014-bdba-48cc-9e53-71ede35735f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.324 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23f1675-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:36 compute-0 kernel: tapf23f1675-b0: left promiscuous mode
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00fdb6cc-2370-4e1c-a139-138ceaa09fc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[071e39dd-dbac-4cc4-8b83-85691fe77e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.359 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b62d8fdc-159c-45aa-a57b-4c3d5b02feca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d427080-94b8-4b5b-9dca-af2e4712f3cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619633, 'reachable_time': 26442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372896, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 systemd[1]: run-netns-ovnmeta\x2df23f1675\x2db7ff\x2d4265\x2da011\x2d0912c637d746.mount: Deactivated successfully.
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.383 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 12:58:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.384 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0d5778-973f-40f8-a6d9-efd58868e268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.496 244018 INFO nova.virt.libvirt.driver [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deleting instance files /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f_del
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.497 244018 INFO nova.virt.libvirt.driver [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deletion of /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f_del complete
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.702 244018 INFO nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 0.81 seconds to destroy the instance on the hypervisor.
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.703 244018 DEBUG oslo.service.loopingcall [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.704 244018 DEBUG nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.704 244018 DEBUG nova.network.neutron [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:58:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.950 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.951 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.951 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.952 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:58:36 compute-0 nova_compute[244014]: 2026-02-25 12:58:36.952 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.081 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024302.0802164, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.082 244018 INFO nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Stopped (Lifecycle Event)
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.114 244018 DEBUG nova.compute.manager [None req-db822f53-7f67-4dfd-9372-21c4548b3fac - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476929791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.709 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.941710148938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.711 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.759 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.760 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.760 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.761 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.761 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.762 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.799 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6185e497-8422-4a5f-a98a-865484d53d4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.799 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance eadab8d0-1f24-4ace-b7b4-10329df53f12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.800 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.800 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:58:37 compute-0 nova_compute[244014]: 2026-02-25 12:58:37.875 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:37 compute-0 ceph-mon[76335]: pgmap v2341: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 12:58:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3476929791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244823206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.476 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.501 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.535 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.535 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.614 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.615 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:38 compute-0 nova_compute[244014]: 2026-02-25 12:58:38.643 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Feb 25 12:58:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4244823206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.531 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.715 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Successfully updated port: 98c45ccf-8678-4765-bf66-7a9e3dae8cac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.735 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.736 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.736 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.898 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.898 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.899 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.899 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.900 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.900 244018 WARNING nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received unexpected event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with vm_state active and task_state deleting.
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.901 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-changed-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.901 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Refreshing instance network info cache due to event network-changed-98c45ccf-8678-4765-bf66-7a9e3dae8cac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.902 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.903 244018 DEBUG nova.network.neutron [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.931 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.938 244018 INFO nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 3.23 seconds to deallocate network for instance.
Feb 25 12:58:39 compute-0 ceph-mon[76335]: pgmap v2342: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.995 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:39 compute-0 nova_compute[244014]: 2026-02-25 12:58:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.070 244018 DEBUG oslo_concurrency.processutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:58:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461465610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.655 244018 DEBUG oslo_concurrency.processutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.671 244018 DEBUG nova.compute.provider_tree [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.693 244018 DEBUG nova.scheduler.client.report [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.735 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.774 244018 INFO nova.scheduler.client.report [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 6185e497-8422-4a5f-a98a-865484d53d4f
Feb 25 12:58:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.847 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:40 compute-0 nova_compute[244014]: 2026-02-25 12:58:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2461465610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.186 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.211 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.212 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance network_info: |[{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.213 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.213 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Refreshing network info cache for port 98c45ccf-8678-4765-bf66-7a9e3dae8cac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.217 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start _get_guest_xml network_info=[{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.223 244018 WARNING nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.228 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.229 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.237 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.238 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.239 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.239 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
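The run of nova.virt.hardware lines above is the CPU topology solver: the 0:0:0 limits and preferences mean neither the m1.nano flavor nor the image constrained sockets, cores, or threads, so nova enumerates every factorization of the vCPU count within the 65536 caps, and with a single vCPU the only factorization is 1:1:1. A minimal sketch of that enumeration (illustrative, not nova's actual code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is exactly
        vcpus, within the logged limits (the no-preference case above)."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                threads, rem = divmod(vcpus, sockets * cores)
                if rem == 0 and threads <= max_threads:
                    yield (sockets, cores, threads)

    # m1.nano has vcpus=1, so list(possible_topologies(1)) == [(1, 1, 1)],
    # matching the single VirtCPUTopology(cores=1,sockets=1,threads=1) logged.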
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.246 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:58:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680388769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.911 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.949 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:41 compute-0 nova_compute[244014]: 2026-02-25 12:58:41.955 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.011 244018 DEBUG nova.compute.manager [req-e7d23be8-e956-43ef-8d00-f25e0d830351 req-f954cad6-38d0-4b87-9148-da0c887d2bbe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-deleted-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:42 compute-0 ceph-mon[76335]: pgmap v2343: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2680388769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:58:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:58:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110053394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.576 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
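These paired `ceph mon dump --format=json` calls are how nova's RBD image backend discovers the monitor endpoints that later appear as the <host name="192.168.122.100" port="6789"/> elements in the guest XML below. A hedged sketch of the lookup (the mons/public_addr field names come from the standard monmap JSON output, not from this log):

    import json
    import subprocess

    def rbd_monitor_addrs(user='openstack', conf='/etc/ceph/ceph.conf'):
        # Same CLI invocation as the CMD lines above.
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', user, '--conf', conf])
        monmap = json.loads(out)
        # Each entry looks like {"name": ..., "public_addr": "ip:port/nonce"}.
        return [m['public_addr'] for m in monmap.get('mons', [])]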
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.579 244018 DEBUG nova.virt.libvirt.vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:58:34Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.580 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.581 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.583 244018 DEBUG nova.objects.instance [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.605 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <uuid>eadab8d0-1f24-4ace-b7b4-10329df53f12</uuid>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <name>instance-0000008e</name>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:name>tempest-TestServerAdvancedOps-server-1391903363</nova:name>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:58:41</nova:creationTime>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:user uuid="2236154a16ea4715a79c2e6f36e22e83">tempest-TestServerAdvancedOps-400846090-project-member</nova:user>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:project uuid="e20a44a5f2664c25a996daf9d0d14b97">tempest-TestServerAdvancedOps-400846090</nova:project>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <nova:port uuid="98c45ccf-8678-4765-bf66-7a9e3dae8cac">
Feb 25 12:58:42 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <system>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="serial">eadab8d0-1f24-4ace-b7b4-10329df53f12</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="uuid">eadab8d0-1f24-4ace-b7b4-10329df53f12</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </system>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <os>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </os>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <features>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </features>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/eadab8d0-1f24-4ace-b7b4-10329df53f12_disk">
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config">
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </source>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:58:42 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:25:62:35"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <target dev="tap98c45ccf-86"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/console.log" append="off"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <video>
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </video>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:58:42 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:58:42 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:58:42 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:58:42 compute-0 nova_compute[244014]: </domain>
Feb 25 12:58:42 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
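With the XML rendered, the libvirt driver hands it to libvirtd, which is what produces the qemu-174-instance-0000008e machine that systemd-machined registers at 12:58:44 below. In libvirt-python terms the handoff is roughly the following sketch (nova's real call path defines the domain and launches it with additional flags; this is only the condensed equivalent):

    import libvirt

    def launch_guest(xml):
        conn = libvirt.open('qemu:///system')
        try:
            # createXML boots a transient guest directly from the XML string.
            return conn.createXML(xml, 0)
        finally:
            conn.close()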
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.607 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Preparing to wait for external event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.607 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.608 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.608 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
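The prepare_for_instance_event lock dance above happens before the VIF is plugged on purpose: the expectation for network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac is registered first, so a fast callback from Neutron cannot arrive before anyone is waiting for it. The pattern, reduced to plain threading (names illustrative, not nova's):

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, name):
        # Register the expectation *before* triggering the action.
        with _lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def external_instance_event(instance_uuid, name):
        # Called when the network-vif-plugged notification arrives.
        with _lock:
            event = _events.get((instance_uuid, name))
        if event:
            event.set()

    # Compute-manager side: event = prepare_for_instance_event(uuid, name),
    # then plug the VIF, then event.wait(timeout=...) until Neutron confirms.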
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.609 244018 DEBUG nova.virt.libvirt.vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:58:34Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.610 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.611 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.612 244018 DEBUG os_vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.613 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.614 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.619 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:42 compute-0 NetworkManager[49836]: <info>  [1772024322.6225] manager: (tap98c45ccf-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.628 244018 INFO os_vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')
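The two ovsdbapp transactions above are the whole of "plugging" an OVS VIF: make sure br-int exists, attach the tap device, and stamp the Neutron port UUID into the Interface's external_ids so ovn-controller can match it (which it does at 12:58:44 below). An ovs-vsctl equivalent, wrapped in Python for illustration (assumption: the CLI produces the same OVSDB state as the IDL commands):

    import subprocess

    def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
        subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-br', bridge])
        subprocess.check_call(
            ['ovs-vsctl', '--may-exist', 'add-port', bridge, dev, '--',
             'set', 'Interface', dev,
             'external_ids:iface-id=%s' % iface_id,
             'external_ids:iface-status=active',
             'external_ids:attached-mac=%s' % mac,
             'external_ids:vm-uuid=%s' % vm_uuid])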
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036222554203308425 of space, bias 1.0, pg target 0.10866766260992528 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942606386747445 of space, bias 1.0, pg target 0.7482781916024234 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3700257818191368e-06 of space, bias 4.0, pg target 0.0016440309381829641 quantized to 16 (current 16)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
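Every pg_autoscaler row above obeys one linear rule before quantization: pg target = (fraction of space used) × bias × 300, where 300 is consistent with three OSDs at the default mon_target_pg_per_osd of 100 (an inference from the numbers; the log states neither value). The quantization step then rounds toward a power of two and only moves pg_num when the ideal value is far enough from the current one, which is why targets near 0.1 still read "quantized to 32 (current 32)". A quick check:

    import math

    def ideal_pg_target(usage_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # Linear sizing only; quantization and thresholds come afterwards.
        return usage_ratio * bias * num_osds * target_pg_per_osd

    # '.mgr' row:           7.185749983720779e-06 * 1.0 * 300
    assert math.isclose(ideal_pg_target(7.185749983720779e-06, 1.0),
                        0.0021557249951162337)
    # 'cephfs.cephfs.meta': 1.3700257818191368e-06 * 4.0 * 300
    assert math.isclose(ideal_pg_target(1.3700257818191368e-06, 4.0),
                        0.0016440309381829641)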
Feb 25 12:58:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.837 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.837 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.838 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No VIF found with MAC fa:16:3e:25:62:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.838 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Using config drive
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.869 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:42 compute-0 nova_compute[244014]: 2026-02-25 12:58:42.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2110053394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.638 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updated VIF entry in instance network info cache for port 98c45ccf-8678-4765-bf66-7a9e3dae8cac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.639 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.688 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.751 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating config drive at /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.757 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstx3eitn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.901 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstx3eitn" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.941 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:58:43 compute-0 nova_compute[244014]: 2026-02-25 12:58:43.948 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.125 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.126 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deleting local config drive /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config because it was imported into RBD.
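The config-drive sequence from 12:58:43.751 to 12:58:44.126 is self-contained: render the metadata into a temporary directory, build an ISO9660 image labelled config-2 with mkisofs, rbd-import it into the vms pool (where the <disk device="cdrom"> in the XML above attaches it), and delete the local copy. Condensed into one sketch (arguments copied from the logged commands, minus the -publisher string; the function wrapper is invented):

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid, staging_dir,
                                      pool='vms', user='openstack',
                                      conf='/etc/ceph/ceph.conf'):
        iso = '/var/lib/nova/instances/%s/disk.config' % instance_uuid
        subprocess.check_call(
            ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
             '-allow-multidot', '-l', '-quiet', '-J', '-r',
             '-V', 'config-2', staging_dir])
        subprocess.check_call(
            ['rbd', 'import', '--pool', pool, iso,
             '%s_disk.config' % instance_uuid, '--image-format=2',
             '--id', user, '--conf', conf])
        os.unlink(iso)  # the RBD copy is authoritative from here on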
Feb 25 12:58:44 compute-0 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 12:58:44 compute-0 NetworkManager[49836]: <info>  [1772024324.1965] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Feb 25 12:58:44 compute-0 ovn_controller[147040]: 2026-02-25T12:58:44Z|01503|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 12:58:44 compute-0 ovn_controller[147040]: 2026-02-25T12:58:44Z|01504|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:44 compute-0 ceph-mon[76335]: pgmap v2344: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:44 compute-0 ovn_controller[147040]: 2026-02-25T12:58:44Z|01505|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:44 compute-0 systemd-machined[210048]: New machine qemu-174-instance-0000008e.
Feb 25 12:58:44 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008e.
Feb 25 12:58:44 compute-0 systemd-udevd[373124]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:58:44 compute-0 ovn_controller[147040]: 2026-02-25T12:58:44Z|01506|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
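Note: the binding|INFO sequence above is ovn-controller's standard claim: claim the lport, mark it ovn-installed in OVS, then set it up in the Southbound DB. A hedged way to verify the result from this chassis, assuming ovn-sbctl is installed and can reach the SB database:

    # Sketch: read back the Port_Binding row ovn-controller just claimed.
    import subprocess

    lport = "98c45ccf-8678-4765-bf66-7a9e3dae8cac"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "list", "Port_Binding", lport],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # expect up : [true] once the claim above completes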
Feb 25 12:58:44 compute-0 NetworkManager[49836]: <info>  [1772024324.3001] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:58:44 compute-0 NetworkManager[49836]: <info>  [1772024324.3007] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:58:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.296 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.298 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis
Feb 25 12:58:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.299 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:58:44 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e76ffb16-da0b-4d14-b57c-4c7441041a21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:44 compute-0 podman[373097]: 2026-02-25 12:58:44.324941041 +0000 UTC m=+0.087621743 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 12:58:44 compute-0 podman[373098]: 2026-02-25 12:58:44.379952513 +0000 UTC m=+0.140548786 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 12:58:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.830 244018 DEBUG nova.compute.manager [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.831 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.832 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.832 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.833 244018 DEBUG nova.compute.manager [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Processing event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:58:44 compute-0 nova_compute[244014]: 2026-02-25 12:58:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.157 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1570914, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.158 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.162 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.167 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.171 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance spawned successfully.
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.172 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:58:45 compute-0 sshd-session[373153]: Received disconnect from 45.148.10.147 port 17444:11:  [preauth]
Feb 25 12:58:45 compute-0 sshd-session[373153]: Disconnected from authenticating user root 45.148.10.147 port 17444 [preauth]
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.359 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
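Note: the power_state integers in the sync message (DB 0, VM 1; later 3 during the suspend) are nova's power-state constants; a quick check, assuming nova is importable on the host:

    # Sketch: the nova.compute.power_state constants behind the integers
    # logged by handle_lifecycle_event above.
    from nova.compute import power_state

    assert power_state.NOSTATE == 0  # DB value while the VM is still building
    assert power_state.RUNNING == 1  # what libvirt reports after the spawn
    assert power_state.PAUSED == 3   # seen below once the suspend starts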
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.366 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.367 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.368 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.369 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.370 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.371 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
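Note: collected in one place, the defaults the driver registered for this instance, taken verbatim from the six "Found default" lines above:

    # Image-property defaults registered for eadab8d0-1f24-4ace-b7b4-10329df53f12,
    # as logged by _register_undefined_instance_details.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }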
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.409 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1572623, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.473 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.478 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1663496, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.479 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.539 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.543 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.553 244018 INFO nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 11.30 seconds to spawn the instance on the hypervisor.
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.554 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.684 244018 INFO nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 12.95 seconds to build instance.
Feb 25 12:58:45 compute-0 nova_compute[244014]: 2026-02-25 12:58:45.794 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
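Note: the Acquiring/acquired/released triplets that recur through this trace are oslo.concurrency's standard lock logging. A minimal sketch of the same primitive via the public lockutils API (the decorated function is illustrative, not nova's code):

    # Sketch: serialize work on one instance's event queue the way
    # InstanceEvents.pop_instance_event does, using oslo.concurrency.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("eadab8d0-1f24-4ace-b7b4-10329df53f12-events")
    def pop_event():
        # critical section: at most one thread pops events for this instance
        pass

    pop_event()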
Feb 25 12:58:46 compute-0 ceph-mon[76335]: pgmap v2345: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.933 244018 DEBUG nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.933 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:46 compute-0 nova_compute[244014]: 2026-02-25 12:58:46.935 244018 WARNING nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.
Feb 25 12:58:47 compute-0 nova_compute[244014]: 2026-02-25 12:58:47.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:58:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:58:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:58:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
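Note: the audited "df" and "osd pool get-quota" commands are the OpenStack Ceph client polling pool capacity. The same two calls through the python-rados bindings, assuming the client.openstack keyring seen in the audit lines:

    # Sketch: issue the mon commands recorded in the audit log via rados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "-> rc", ret, len(out), "bytes")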
Feb 25 12:58:47 compute-0 nova_compute[244014]: 2026-02-25 12:58:47.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:47 compute-0 nova_compute[244014]: 2026-02-25 12:58:47.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:47 compute-0 nova_compute[244014]: 2026-02-25 12:58:47.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.195 244018 DEBUG nova.objects.instance [None req-962210e9-8602-41fe-a8bb-68938c7a4b0c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.225 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024328.2256804, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.226 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)
Feb 25 12:58:48 compute-0 ceph-mon[76335]: pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 12:58:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:58:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.259 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.263 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.286 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:58:48 compute-0 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 12:58:48 compute-0 NetworkManager[49836]: <info>  [1772024328.5660] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:48 compute-0 ovn_controller[147040]: 2026-02-25T12:58:48Z|01507|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 12:58:48 compute-0 ovn_controller[147040]: 2026-02-25T12:58:48Z|01508|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 12:58:48 compute-0 ovn_controller[147040]: 2026-02-25T12:58:48Z|01509|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.584 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.586 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis
Feb 25 12:58:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.587 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:58:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.587 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ede2cd0-2c97-4b94-990d-130087a6cbc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:48 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 12:58:48 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Consumed 4.056s CPU time.
Feb 25 12:58:48 compute-0 systemd-machined[210048]: Machine qemu-174-instance-0000008e terminated.
Feb 25 12:58:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:48 compute-0 nova_compute[244014]: 2026-02-25 12:58:48.761 244018 DEBUG nova.compute.manager [None req-962210e9-8602-41fe-a8bb-68938c7a4b0c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.003 244018 DEBUG nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.004 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.004 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.005 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.005 244018 DEBUG nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:49 compute-0 nova_compute[244014]: 2026-02-25 12:58:49.006 244018 WARNING nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state None.
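Note: this WARNING is benign here: the vif-unplugged event from neutron arrived after the suspend had already finished, so no task was waiting on it. A toy model of the dispatch logic (not nova's actual code) that produces the same outcome:

    # Toy model: events with no registered waiter are logged and dropped.
    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(instance_uuid, event_name):
        waiter = waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
            return
        waiter.set()  # wakes the task blocked in wait_for_instance_event

    pop_instance_event("eadab8d0-1f24-4ace-b7b4-10329df53f12",
                       "network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac")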
Feb 25 12:58:50 compute-0 ceph-mon[76335]: pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.376 244018 INFO nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Resuming
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.378 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'flavor' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.425 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.426 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.426 244018 DEBUG nova.network.neutron [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:58:50 compute-0 nova_compute[244014]: 2026-02-25 12:58:50.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 DEBUG nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 WARNING nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state resuming.
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.126 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024316.1253798, 6185e497-8422-4a5f-a98a-865484d53d4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.126 244018 INFO nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Stopped (Lifecycle Event)
Feb 25 12:58:51 compute-0 nova_compute[244014]: 2026-02-25 12:58:51.235 244018 DEBUG nova.compute.manager [None req-8e7fd769-c213-42ab-ab50-7d3d1d969a95 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:52 compute-0 ceph-mon[76335]: pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.589 244018 DEBUG nova.network.neutron [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.609 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.616 244018 DEBUG nova.virt.libvirt.vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:58:48Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.617 244018 DEBUG nova.network.os_vif_util [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.618 244018 DEBUG nova.network.os_vif_util [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.619 244018 DEBUG os_vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
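Note: the plug above goes through os-vif's public entry point with the typed VIFOpenVSwitch object produced by nova_to_osvif_vif. A trimmed sketch of driving that API directly; the field values are copied from the log, and the field set is reduced to what the sketch needs:

    # Sketch: plug an OVS VIF via the public os-vif API.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id="98c45ccf-8678-4765-bf66-7a9e3dae8cac",
        address="fa:16:3e:25:62:35",
        vif_name="tap98c45ccf-86",
        bridge_name="br-int",
        network=network.Network(id="bcd4fda6-15a9-498e-a842-640e098deac1"),
    )
    inst = instance_info.InstanceInfo(
        uuid="eadab8d0-1f24-4ace-b7b4-10329df53f12",
        name="instance-0000008e",
    )
    os_vif.plug(my_vif, inst)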
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.620 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
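Note: the AddBridgeCommand/AddPortCommand/DbSetCommand trio above is one idempotent OVSDB transaction (both commits report "no change" because the resume re-plugs an existing port). A rough equivalent with ovsdbapp's Open_vSwitch schema API, assuming the default local socket:

    # Sketch: the same three-command transaction os-vif queues, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap98c45ccf-86", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap98c45ccf-86",
            ("external_ids", {
                "iface-id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:25:62:35",
                "vm-uuid": "eadab8d0-1f24-4ace-b7b4-10329df53f12",
            })))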
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.627 244018 INFO os_vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.651 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'numa_topology' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:52 compute-0 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 12:58:52 compute-0 ovn_controller[147040]: 2026-02-25T12:58:52Z|01510|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 12:58:52 compute-0 ovn_controller[147040]: 2026-02-25T12:58:52Z|01511|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 NetworkManager[49836]: <info>  [1772024332.7281] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 ovn_controller[147040]: 2026-02-25T12:58:52Z|01512|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
Feb 25 12:58:52 compute-0 ovn_controller[147040]: 2026-02-25T12:58:52Z|01513|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.737 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 nova_compute[244014]: 2026-02-25 12:58:52.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.741 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis
Feb 25 12:58:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.743 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:58:52 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92abb6e4-a9fc-4ae9-84a4-daf541d8fdc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:52 compute-0 systemd-machined[210048]: New machine qemu-175-instance-0000008e.
Feb 25 12:58:52 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Feb 25 12:58:52 compute-0 systemd-udevd[373231]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:58:52 compute-0 NetworkManager[49836]: <info>  [1772024332.8109] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:58:52 compute-0 NetworkManager[49836]: <info>  [1772024332.8120] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:58:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.209 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for eadab8d0-1f24-4ace-b7b4-10329df53f12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.211 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024333.2091112, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.220 244018 DEBUG nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.220 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.242 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.244 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance running successfully.
Feb 25 12:58:53 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.246 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.248 244018 DEBUG nova.virt.libvirt.guest [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.248 244018 DEBUG nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.269 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.270 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024333.2121406, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.270 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.302 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:53 compute-0 nova_compute[244014]: 2026-02-25 12:58:53.305 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:58:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:54 compute-0 ceph-mon[76335]: pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.706 244018 DEBUG nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:54 compute-0 nova_compute[244014]: 2026-02-25 12:58:54.708 244018 WARNING nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.
Feb 25 12:58:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:55 compute-0 nova_compute[244014]: 2026-02-25 12:58:55.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:56 compute-0 ceph-mon[76335]: pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.777 244018 DEBUG nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.778 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.779 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.779 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.780 244018 DEBUG nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:56 compute-0 nova_compute[244014]: 2026-02-25 12:58:56.780 244018 WARNING nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.
Feb 25 12:58:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.083 244018 DEBUG nova.objects.instance [None req-f8190f48-00cb-4861-8fbc-73bbf802b55c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.109 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024337.1094124, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.110 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.126 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.147 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 ceph-mon[76335]: pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 12:58:57 compute-0 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 12:58:57 compute-0 NetworkManager[49836]: <info>  [1772024337.6787] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 ovn_controller[147040]: 2026-02-25T12:58:57Z|01514|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 12:58:57 compute-0 ovn_controller[147040]: 2026-02-25T12:58:57Z|01515|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 ovn_controller[147040]: 2026-02-25T12:58:57Z|01516|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 12:58:57 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 4.431s CPU time.
Feb 25 12:58:57 compute-0 systemd-machined[210048]: Machine qemu-175-instance-0000008e terminated.
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:57 compute-0 nova_compute[244014]: 2026-02-25 12:58:57.856 244018 DEBUG nova.compute.manager [None req-f8190f48-00cb-4861-8fbc-73bbf802b55c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:58:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:58:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.125 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis
Feb 25 12:58:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.126 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:58:58 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[951b08cc-00f1-4ebd-aaa5-ce18d1380ec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:58:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:58:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 DEBUG nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:58:58 compute-0 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 WARNING nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state None.
Feb 25 12:58:59 compute-0 ceph-mon[76335]: pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.694 244018 INFO nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Resuming
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.696 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'flavor' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.741 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.741 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:00 compute-0 nova_compute[244014]: 2026-02-25 12:59:00.742 244018 DEBUG nova.network.neutron [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:59:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:01 compute-0 ceph-mon[76335]: pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:02 compute-0 nova_compute[244014]: 2026-02-25 12:59:02.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.375 244018 DEBUG nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.376 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.376 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 DEBUG nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:03 compute-0 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 WARNING nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state resuming.
Feb 25 12:59:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:03 compute-0 ceph-mon[76335]: pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:05 compute-0 nova_compute[244014]: 2026-02-25 12:59:05.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:05 compute-0 ceph-mon[76335]: pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.126 244018 DEBUG nova.network.neutron [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.165 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.173 244018 DEBUG nova.virt.libvirt.vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:58:58Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.174 244018 DEBUG nova.network.os_vif_util [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.176 244018 DEBUG nova.network.os_vif_util [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.177 244018 DEBUG os_vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.178 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.178 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.183 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.183 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.184 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.185 244018 INFO os_vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.203 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'numa_topology' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:07 compute-0 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 12:59:07 compute-0 NetworkManager[49836]: <info>  [1772024347.3347] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 ovn_controller[147040]: 2026-02-25T12:59:07Z|01517|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 12:59:07 compute-0 ovn_controller[147040]: 2026-02-25T12:59:07Z|01518|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 ovn_controller[147040]: 2026-02-25T12:59:07Z|01519|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 systemd-machined[210048]: New machine qemu-176-instance-0000008e.
Feb 25 12:59:07 compute-0 systemd-udevd[373317]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:59:07 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Feb 25 12:59:07 compute-0 NetworkManager[49836]: <info>  [1772024347.3891] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:59:07 compute-0 NetworkManager[49836]: <info>  [1772024347.3903] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:59:07 compute-0 ovn_controller[147040]: 2026-02-25T12:59:07Z|01520|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
Feb 25 12:59:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.400 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:59:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.402 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis
Feb 25 12:59:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.403 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:59:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.403 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b38d1deb-e2e3-48bd-9b59-b31dc57b8dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:07 compute-0 nova_compute[244014]: 2026-02-25 12:59:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:07 compute-0 ceph-mon[76335]: pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.094 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for eadab8d0-1f24-4ace-b7b4-10329df53f12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.095 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024348.0937846, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.095 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.106 244018 DEBUG nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.107 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.112 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.123 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance running successfully.
Feb 25 12:59:08 compute-0 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.125 244018 DEBUG nova.virt.libvirt.guest [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.125 244018 DEBUG nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.140 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.141 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024348.0974534, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.175 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.181 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.497 244018 DEBUG nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.497 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.498 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.498 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.499 244018 DEBUG nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.499 244018 WARNING nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.636 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.636 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.658 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.737 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.738 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.745 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.746 244018 INFO nova.compute.claims [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:59:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:08 compute-0 nova_compute[244014]: 2026-02-25 12:59:08.868 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707919021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.422 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.428 244018 DEBUG nova.compute.provider_tree [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.455 244018 DEBUG nova.scheduler.client.report [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.481 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.544 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.544 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.575 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.598 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.709 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.710 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.710 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating image(s)
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.732 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.753 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.772 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.776 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.843 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.845 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.846 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.846 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.877 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:09 compute-0 nova_compute[244014]: 2026-02-25 12:59:09.880 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fc7d7f86-eb7c-476c-840e-98c97329de34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.292 244018 DEBUG nova.policy [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.611 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.613 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.614 244018 INFO nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Terminating instance
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.615 244018 DEBUG nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.698 244018 DEBUG nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:10 compute-0 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 WARNING nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state deleting.
Feb 25 12:59:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 12:59:11 compute-0 ceph-mon[76335]: pgmap v2357: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 12:59:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3707919021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:11 compute-0 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.375 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Successfully created port: 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:59:11 compute-0 NetworkManager[49836]: <info>  [1772024351.3775] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:11 compute-0 ovn_controller[147040]: 2026-02-25T12:59:11Z|01521|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 12:59:11 compute-0 ovn_controller[147040]: 2026-02-25T12:59:11Z|01522|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 12:59:11 compute-0 ovn_controller[147040]: 2026-02-25T12:59:11Z|01523|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.390 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.394 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis
Feb 25 12:59:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.395 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 12:59:11 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3535caae-2a18-400c-99c8-b3ac9f7a3c61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:11 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 12:59:11 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 3.396s CPU time.
Feb 25 12:59:11 compute-0 systemd-machined[210048]: Machine qemu-176-instance-0000008e terminated.
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.547 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fc7d7f86-eb7c-476c-840e-98c97329de34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.630 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.665 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance destroyed successfully.
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.666 244018 DEBUG nova.objects.instance [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'resources' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.680 244018 DEBUG nova.virt.libvirt.vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:08Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.680 244018 DEBUG nova.network.os_vif_util [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.681 244018 DEBUG nova.network.os_vif_util [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.682 244018 DEBUG os_vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98c45ccf-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.693 244018 INFO os_vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.864 244018 DEBUG nova.objects.instance [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.876 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Ensure instance console log exists: /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:11 compute-0 nova_compute[244014]: 2026-02-25 12:59:11.878 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:12 compute-0 ceph-mon[76335]: pgmap v2358: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.432 244018 INFO nova.virt.libvirt.driver [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deleting instance files /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12_del
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.433 244018 INFO nova.virt.libvirt.driver [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deletion of /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12_del complete
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.465 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Successfully updated port: 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.482 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.487 244018 INFO nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 1.87 seconds to destroy the instance on the hypervisor.
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.487 244018 DEBUG oslo.service.loopingcall [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.488 244018 DEBUG nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.488 244018 DEBUG nova.network.neutron [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 12:59:12 compute-0 nova_compute[244014]: 2026-02-25 12:59:12.647 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:59:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 12:59:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 50K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1390 writes, 6279 keys, 1390 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s
                                           Interval WAL: 1390 writes, 1390 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.2      1.06              0.16        34    0.031       0      0       0.0       0.0
                                             L6      1/0    7.77 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.7    103.5     87.3      3.13              0.80        33    0.095    195K    17K       0.0       0.0
                                            Sum      1/0    7.77 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.7     77.3     79.2      4.19              0.96        67    0.063    195K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4     83.7     83.4      0.62              0.18        10    0.062     36K   2496       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    103.5     87.3      3.13              0.80        33    0.095    195K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.4      1.05              0.16        33    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.057, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 4.2 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 37.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000341 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2361,35.88 MB,11.8042%) FilterBlock(68,550.42 KB,0.176816%) IndexBlock(68,934.36 KB,0.300151%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 12:59:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 WARNING nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state deleting.
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.855 244018 DEBUG nova.network.neutron [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.874 244018 INFO nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 1.39 seconds to deallocate network for instance.
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.952 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:13 compute-0 nova_compute[244014]: 2026-02-25 12:59:13.953 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.009 244018 DEBUG oslo_concurrency.processutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:14 compute-0 sudo[373595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:59:14 compute-0 sudo[373595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:14 compute-0 sudo[373595]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:14 compute-0 sudo[373630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 12:59:14 compute-0 sudo[373630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.242 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.263 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.263 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance network_info: |[{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.264 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.264 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.267 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start _get_guest_xml network_info=[{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.274 244018 WARNING nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.290 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.291 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.294 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.295 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.295 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.302 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:14 compute-0 ceph-mon[76335]: pgmap v2359: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172366935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.621 244018 DEBUG oslo_concurrency.processutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.650 244018 DEBUG nova.compute.provider_tree [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:59:14 compute-0 sudo[373630]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.668 244018 DEBUG nova.scheduler.client.report [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.696 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:14 compute-0 podman[373717]: 2026-02-25 12:59:14.713250183 +0000 UTC m=+0.056792133 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:59:14 compute-0 podman[373718]: 2026-02-25 12:59:14.748043975 +0000 UTC m=+0.087259973 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.757 244018 INFO nova.scheduler.client.report [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Deleted allocations for instance eadab8d0-1f24-4ace-b7b4-10329df53f12
Feb 25 12:59:14 compute-0 sudo[373756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:59:14 compute-0 sudo[373756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:14 compute-0 sudo[373756]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:14 compute-0 sudo[373784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 12:59:14 compute-0 sudo[373784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593479441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.894 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.922 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.930 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:14 compute-0 nova_compute[244014]: 2026-02-25 12:59:14.976 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.109392858 +0000 UTC m=+0.041198923 container create d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 12:59:15 compute-0 systemd[1]: Started libpod-conmon-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope.
Feb 25 12:59:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.089758764 +0000 UTC m=+0.021564839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.200195139 +0000 UTC m=+0.132001234 container init d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.2087173 +0000 UTC m=+0.140523345 container start d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.211893369 +0000 UTC m=+0.143699424 container attach d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 12:59:15 compute-0 laughing_agnesi[373878]: 167 167
Feb 25 12:59:15 compute-0 systemd[1]: libpod-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope: Deactivated successfully.
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.215251104 +0000 UTC m=+0.147057199 container died d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-40e970e8c8ab26343596feba3e9548a787c2eb69e8233a85e7287f2dbffb4f46-merged.mount: Deactivated successfully.
Feb 25 12:59:15 compute-0 podman[373862]: 2026-02-25 12:59:15.261104368 +0000 UTC m=+0.192910453 container remove d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 12:59:15 compute-0 systemd[1]: libpod-conmon-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope: Deactivated successfully.
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1172366935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 12:59:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2593479441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:15 compute-0 podman[373902]: 2026-02-25 12:59:15.432809141 +0000 UTC m=+0.051834303 container create 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:59:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594549115' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.455 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.456 244018 DEBUG nova.virt.libvirt.vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:09Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.456 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.457 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.458 244018 DEBUG nova.objects.instance [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:15 compute-0 systemd[1]: Started libpod-conmon-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope.
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <uuid>fc7d7f86-eb7c-476c-840e-98c97329de34</uuid>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <name>instance-0000008f</name>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617</nova:name>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:59:14</nova:creationTime>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <nova:port uuid="5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3">
Feb 25 12:59:15 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <system>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="serial">fc7d7f86-eb7c-476c-840e-98c97329de34</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="uuid">fc7d7f86-eb7c-476c-840e-98c97329de34</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </system>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <os>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </os>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <features>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </features>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/fc7d7f86-eb7c-476c-840e-98c97329de34_disk">
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config">
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:15 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:db:fb:af"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <target dev="tap5b9c40d0-fe"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/console.log" append="off"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <video>
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </video>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:59:15 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:59:15 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:59:15 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:59:15 compute-0 nova_compute[244014]: </domain>
Feb 25 12:59:15 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Preparing to wait for external event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG nova.virt.libvirt.vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:09Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.484 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.484 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.485 244018 DEBUG os_vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.486 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.486 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.490 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b9c40d0-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.491 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b9c40d0-fe, col_values=(('external_ids', {'iface-id': '5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:fb:af', 'vm-uuid': 'fc7d7f86-eb7c-476c-840e-98c97329de34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.493 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:15 compute-0 NetworkManager[49836]: <info>  [1772024355.4948] manager: (tap5b9c40d0-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.500 244018 INFO os_vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe')
Feb 25 12:59:15 compute-0 podman[373902]: 2026-02-25 12:59:15.407786675 +0000 UTC m=+0.026811947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:15 compute-0 podman[373902]: 2026-02-25 12:59:15.532557295 +0000 UTC m=+0.151582537 container init 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:15 compute-0 podman[373902]: 2026-02-25 12:59:15.541824426 +0000 UTC m=+0.160849598 container start 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 12:59:15 compute-0 podman[373902]: 2026-02-25 12:59:15.545316795 +0000 UTC m=+0.164341977 container attach 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.581 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.582 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.583 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:db:fb:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.584 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Using config drive
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.616 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.626 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.627 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.633 244018 DEBUG nova.compute.manager [req-448ad05e-c3e7-4b75-9c49-7dcbeeea5c59 req-b96ce751-8170-4732-9b8e-31483f679f29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-deleted-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:15 compute-0 nova_compute[244014]: 2026-02-25 12:59:15.680 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:15 compute-0 practical_matsumoto[373923]: --> passed data devices: 0 physical, 3 LVM
Feb 25 12:59:15 compute-0 practical_matsumoto[373923]: --> All data devices are unavailable
Feb 25 12:59:16 compute-0 systemd[1]: libpod-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope: Deactivated successfully.
Feb 25 12:59:16 compute-0 podman[373902]: 2026-02-25 12:59:16.01912544 +0000 UTC m=+0.638150632 container died 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6-merged.mount: Deactivated successfully.
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.057 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating config drive at /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.064 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpawuak_3v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:16 compute-0 podman[373902]: 2026-02-25 12:59:16.070543441 +0000 UTC m=+0.689568633 container remove 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 12:59:16 compute-0 systemd[1]: libpod-conmon-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope: Deactivated successfully.
Feb 25 12:59:16 compute-0 sudo[373784]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:16 compute-0 sudo[373978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:59:16 compute-0 sudo[373978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:16 compute-0 sudo[373978]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.210 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpawuak_3v" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:16 compute-0 sudo[374003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 12:59:16 compute-0 sudo[374003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.246 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.251 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:16 compute-0 ceph-mon[76335]: pgmap v2360: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1594549115' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.475 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.477 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.521 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.522 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deleting local config drive /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config because it was imported into RBD.
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.533188371 +0000 UTC m=+0.051955156 container create abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:16 compute-0 systemd[1]: Started libpod-conmon-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope.
Feb 25 12:59:16 compute-0 kernel: tap5b9c40d0-fe: entered promiscuous mode
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.5892] manager: (tap5b9c40d0-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/625)
Feb 25 12:59:16 compute-0 ovn_controller[147040]: 2026-02-25T12:59:16Z|01524|binding|INFO|Claiming lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for this chassis.
Feb 25 12:59:16 compute-0 ovn_controller[147040]: 2026-02-25T12:59:16Z|01525|binding|INFO|5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3: Claiming fa:16:3e:db:fb:af 10.100.0.14
Feb 25 12:59:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.506065886 +0000 UTC m=+0.024832731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.615133323 +0000 UTC m=+0.133900078 container init abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 12:59:16 compute-0 ovn_controller[147040]: 2026-02-25T12:59:16Z|01526|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 ovn-installed in OVS
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 ovn_controller[147040]: 2026-02-25T12:59:16Z|01527|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 up in Southbound
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.622944843 +0000 UTC m=+0.141711588 container start abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:16 compute-0 systemd-udevd[374110]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.626572275 +0000 UTC m=+0.145339020 container attach abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 25 12:59:16 compute-0 magical_stonebraker[374098]: 167 167
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.624 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:fb:af 10.100.0.14'], port_security=['fa:16:3e:db:fb:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fc7d7f86-eb7c-476c-840e-98c97329de34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1134355f-9f11-4363-a935-f82cb22e83bd 1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.627 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 bound to our chassis
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.629966151 +0000 UTC m=+0.148732896 container died abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:16 compute-0 systemd-machined[210048]: New machine qemu-177-instance-0000008f.
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.634 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 12:59:16 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Feb 25 12:59:16 compute-0 systemd[1]: libpod-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope: Deactivated successfully.
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.6418] device (tap5b9c40d0-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.6430] device (tap5b9c40d0-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d27bb-5fba-49ad-b041-ccf066b56b34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.649 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62bbaf1c-51 in ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.652 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62bbaf1c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f207faa0-757b-403b-b863-50c79a0da0fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2a592b-d9ba-4f21-a49a-8707be145567]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2910d1336720f321e2d4f71f810206892fdf3047074ede6af3673d00ceb00274-merged.mount: Deactivated successfully.
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.668 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[84efd376-b9e8-4ca3-9689-bffd46146796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 podman[374077]: 2026-02-25 12:59:16.677512152 +0000 UTC m=+0.196278897 container remove abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:59:16 compute-0 systemd[1]: libpod-conmon-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope: Deactivated successfully.
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.691 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd64af8-05aa-44bc-820c-0b26a372cad2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.714 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a65e31c-2e44-489a-841d-20969d94e311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[601263d8-f602-4745-9870-9933ebfaca64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.7190] manager: (tap62bbaf1c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/626)
Feb 25 12:59:16 compute-0 systemd-udevd[374121]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.745 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45487196-d23f-4206-9eb6-7e85d28e7a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.748 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[217d59c9-8b7c-4f57-af25-1648095b4881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.7631] device (tap62bbaf1c-50): carrier: link connected
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.768 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2f74b1-8e9e-46aa-809e-fdccb921489b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5280d508-d930-4a78-8742-d3c5c250dee1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374170, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.795 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[796b7aca-f418-4c06-94d4-710746940fc5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:dac2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632617, 'tstamp': 632617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374177, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 podman[374162]: 2026-02-25 12:59:16.802009304 +0000 UTC m=+0.038009863 container create 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacfaa7b-082c-41ce-b890-fd3254cef60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374179, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:16 compute-0 systemd[1]: Started libpod-conmon-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope.
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.832 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dff4d2b6-eddd-4df5-b9a9-8b8135f37d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:16 compute-0 podman[374162]: 2026-02-25 12:59:16.866268747 +0000 UTC m=+0.102269396 container init 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[756bd72d-7f1b-463e-a808-22ed46693c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.874 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.875 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.875 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:16 compute-0 podman[374162]: 2026-02-25 12:59:16.875539629 +0000 UTC m=+0.111540188 container start 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 NetworkManager[49836]: <info>  [1772024356.8781] manager: (tap62bbaf1c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Feb 25 12:59:16 compute-0 kernel: tap62bbaf1c-50: entered promiscuous mode
Feb 25 12:59:16 compute-0 podman[374162]: 2026-02-25 12:59:16.784959163 +0000 UTC m=+0.020959742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.882 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:16 compute-0 ovn_controller[147040]: 2026-02-25T12:59:16Z|01528|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 podman[374162]: 2026-02-25 12:59:16.891788817 +0000 UTC m=+0.127789416 container attach 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.895 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 12:59:16 compute-0 nova_compute[244014]: 2026-02-25 12:59:16.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b742dd20-d271-48c5-a79d-ae45e5aa8873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.897 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: global
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 12:59:16 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.898 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'env', 'PROCESS_TAG=haproxy-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62bbaf1c-560e-4f11-b053-43d27fe35ba2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]: {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     "0": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "devices": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "/dev/loop3"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             ],
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_name": "ceph_lv0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_size": "21470642176",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "name": "ceph_lv0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "tags": {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_name": "ceph",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.crush_device_class": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.encrypted": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.objectstore": "bluestore",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_id": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.vdo": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.with_tpm": "0"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             },
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "vg_name": "ceph_vg0"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         }
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     ],
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     "1": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "devices": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "/dev/loop4"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             ],
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_name": "ceph_lv1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_size": "21470642176",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "name": "ceph_lv1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "tags": {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_name": "ceph",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.crush_device_class": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.encrypted": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.objectstore": "bluestore",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_id": "1",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.vdo": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.with_tpm": "0"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             },
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "vg_name": "ceph_vg1"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         }
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     ],
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     "2": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "devices": [
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "/dev/loop5"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             ],
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_name": "ceph_lv2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_size": "21470642176",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "name": "ceph_lv2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "tags": {
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.cluster_name": "ceph",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.crush_device_class": "",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.encrypted": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.objectstore": "bluestore",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osd_id": "2",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.vdo": "0",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:                 "ceph.with_tpm": "0"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             },
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "type": "block",
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:             "vg_name": "ceph_vg2"
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:         }
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]:     ]
Feb 25 12:59:17 compute-0 beautiful_mccarthy[374184]: }
Feb 25 12:59:17 compute-0 systemd[1]: libpod-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope: Deactivated successfully.
Feb 25 12:59:17 compute-0 podman[374162]: 2026-02-25 12:59:17.1598858 +0000 UTC m=+0.395886359 container died 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 12:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8-merged.mount: Deactivated successfully.
Feb 25 12:59:17 compute-0 podman[374162]: 2026-02-25 12:59:17.201907955 +0000 UTC m=+0.437908514 container remove 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 12:59:17 compute-0 systemd[1]: libpod-conmon-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope: Deactivated successfully.
Feb 25 12:59:17 compute-0 sudo[374003]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:17 compute-0 podman[374227]: 2026-02-25 12:59:17.242948683 +0000 UTC m=+0.058309276 container create 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 12:59:17 compute-0 systemd[1]: Started libpod-conmon-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope.
Feb 25 12:59:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:17 compute-0 sudo[374244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 12:59:17 compute-0 sudo[374244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e350bf0228b37d39a08274b06ef58a344901c5a3e3eee456d5980226c76505/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:17 compute-0 sudo[374244]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:17 compute-0 podman[374227]: 2026-02-25 12:59:17.213210274 +0000 UTC m=+0.028570887 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 12:59:17 compute-0 podman[374227]: 2026-02-25 12:59:17.322911558 +0000 UTC m=+0.138272221 container init 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 12:59:17 compute-0 podman[374227]: 2026-02-25 12:59:17.326654234 +0000 UTC m=+0.142014867 container start 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 12:59:17 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : New worker (374300) forked
Feb 25 12:59:17 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : Loading success.
Feb 25 12:59:17 compute-0 sudo[374274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 12:59:17 compute-0 sudo[374274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.659 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.658368, fc7d7f86-eb7c-476c-840e-98c97329de34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.659 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Started (Lifecycle Event)
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.684886059 +0000 UTC m=+0.109204971 container create 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.693 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.599100789 +0000 UTC m=+0.023419701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.698 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.661661, fc7d7f86-eb7c-476c-840e-98c97329de34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.699 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Paused (Lifecycle Event)
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.732 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.732 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Processing event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.735 244018 WARNING nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received unexpected event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with vm_state building and task_state spawning.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.735 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.739 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.743 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.744 244018 INFO nova.virt.libvirt.driver [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance spawned successfully.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.744 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.748 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.7388053, fc7d7f86-eb7c-476c-840e-98c97329de34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.748 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Resumed (Lifecycle Event)
Feb 25 12:59:17 compute-0 systemd[1]: Started libpod-conmon-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.780 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.780 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.781 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.781 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.782 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.782 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.819 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.837469852 +0000 UTC m=+0.261788824 container init 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.846656571 +0000 UTC m=+0.270975493 container start 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:17 compute-0 jovial_wiles[374379]: 167 167
Feb 25 12:59:17 compute-0 systemd[1]: libpod-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope: Deactivated successfully.
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.880551227 +0000 UTC m=+0.304870109 container attach 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:17 compute-0 podman[374357]: 2026-02-25 12:59:17.888830211 +0000 UTC m=+0.313149093 container died 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.909 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.935 244018 INFO nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 8.23 seconds to spawn the instance on the hypervisor.
Feb 25 12:59:17 compute-0 nova_compute[244014]: 2026-02-25 12:59:17.935 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-83d07b114da5ebf2cec124c9c87dd82391c5ba1bd9bddd60afd8797dc5bbf4f7-merged.mount: Deactivated successfully.
Feb 25 12:59:18 compute-0 nova_compute[244014]: 2026-02-25 12:59:18.048 244018 INFO nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 9.34 seconds to build instance.
Feb 25 12:59:18 compute-0 podman[374357]: 2026-02-25 12:59:18.136109007 +0000 UTC m=+0.560427929 container remove 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 12:59:18 compute-0 systemd[1]: libpod-conmon-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope: Deactivated successfully.
Feb 25 12:59:18 compute-0 nova_compute[244014]: 2026-02-25 12:59:18.155 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:18 compute-0 podman[374402]: 2026-02-25 12:59:18.316382562 +0000 UTC m=+0.031163060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 12:59:18 compute-0 podman[374402]: 2026-02-25 12:59:18.440650307 +0000 UTC m=+0.155430815 container create 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 12:59:18 compute-0 ceph-mon[76335]: pgmap v2361: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 12:59:18 compute-0 systemd[1]: Started libpod-conmon-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope.
Feb 25 12:59:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 12:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 12:59:18 compute-0 podman[374402]: 2026-02-25 12:59:18.63004558 +0000 UTC m=+0.344826108 container init 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:59:18 compute-0 podman[374402]: 2026-02-25 12:59:18.639180298 +0000 UTC m=+0.353960806 container start 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 12:59:18 compute-0 podman[374402]: 2026-02-25 12:59:18.65380891 +0000 UTC m=+0.368589428 container attach 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 12:59:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 12:59:19 compute-0 lvm[374493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 12:59:19 compute-0 lvm[374493]: VG ceph_vg0 finished
Feb 25 12:59:19 compute-0 lvm[374495]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 12:59:19 compute-0 lvm[374495]: VG ceph_vg1 finished
Feb 25 12:59:19 compute-0 lvm[374496]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 12:59:19 compute-0 lvm[374496]: VG ceph_vg2 finished
Feb 25 12:59:19 compute-0 kind_noether[374418]: {}
Feb 25 12:59:19 compute-0 systemd[1]: libpod-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Deactivated successfully.
Feb 25 12:59:19 compute-0 systemd[1]: libpod-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Consumed 1.074s CPU time.
Feb 25 12:59:19 compute-0 podman[374402]: 2026-02-25 12:59:19.520410395 +0000 UTC m=+1.235190883 container died 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 12:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef-merged.mount: Deactivated successfully.
Feb 25 12:59:19 compute-0 podman[374402]: 2026-02-25 12:59:19.656739811 +0000 UTC m=+1.371520289 container remove 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 12:59:19 compute-0 systemd[1]: libpod-conmon-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Deactivated successfully.
Feb 25 12:59:19 compute-0 sudo[374274]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 12:59:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 12:59:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:19 compute-0 sudo[374512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 12:59:19 compute-0 sudo[374512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 12:59:19 compute-0 sudo[374512]: pam_unix(sudo:session): session closed for user root
Feb 25 12:59:20 compute-0 nova_compute[244014]: 2026-02-25 12:59:20.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:20 compute-0 ceph-mon[76335]: pgmap v2362: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 12:59:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 12:59:20 compute-0 nova_compute[244014]: 2026-02-25 12:59:20.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 12:59:21 compute-0 ceph-mon[76335]: pgmap v2363: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 12:59:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:59:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:23 compute-0 ceph-mon[76335]: pgmap v2364: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 12:59:24 compute-0 NetworkManager[49836]: <info>  [1772024364.5051] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Feb 25 12:59:24 compute-0 ovn_controller[147040]: 2026-02-25T12:59:24Z|01529|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 12:59:24 compute-0 NetworkManager[49836]: <info>  [1772024364.5066] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Feb 25 12:59:24 compute-0 nova_compute[244014]: 2026-02-25 12:59:24.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:24 compute-0 ovn_controller[147040]: 2026-02-25T12:59:24Z|01530|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 12:59:24 compute-0 nova_compute[244014]: 2026-02-25 12:59:24.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:24 compute-0 nova_compute[244014]: 2026-02-25 12:59:24.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG nova.compute.manager [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG nova.compute.manager [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.338 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:25 compute-0 nova_compute[244014]: 2026-02-25 12:59:25.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:26 compute-0 ceph-mon[76335]: pgmap v2365: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 12:59:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:26.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:26 compute-0 nova_compute[244014]: 2026-02-25 12:59:26.588 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024351.5640838, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:26 compute-0 nova_compute[244014]: 2026-02-25 12:59:26.588 244018 INFO nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Stopped (Lifecycle Event)
Feb 25 12:59:26 compute-0 nova_compute[244014]: 2026-02-25 12:59:26.611 244018 DEBUG nova.compute.manager [None req-1bf328c5-1afa-4c98-8bd3-1e31115f465f - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 12:59:27 compute-0 nova_compute[244014]: 2026-02-25 12:59:27.490 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:59:27 compute-0 nova_compute[244014]: 2026-02-25 12:59:27.491 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:27 compute-0 nova_compute[244014]: 2026-02-25 12:59:27.515 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:28 compute-0 ceph-mon[76335]: pgmap v2366: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 12:59:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Feb 25 12:59:29 compute-0 nova_compute[244014]: 2026-02-25 12:59:29.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:30 compute-0 ceph-mon[76335]: pgmap v2367: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Feb 25 12:59:30 compute-0 nova_compute[244014]: 2026-02-25 12:59:30.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:30 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 25 12:59:30 compute-0 ovn_controller[147040]: 2026-02-25T12:59:30Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:fb:af 10.100.0.14
Feb 25 12:59:30 compute-0 ovn_controller[147040]: 2026-02-25T12:59:30Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:fb:af 10.100.0.14
Feb 25 12:59:30 compute-0 nova_compute[244014]: 2026-02-25 12:59:30.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:59:31
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes']
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 12:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 12:59:31 compute-0 nova_compute[244014]: 2026-02-25 12:59:31.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 12:59:32 compute-0 ceph-mon[76335]: pgmap v2368: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 25 12:59:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Feb 25 12:59:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:34 compute-0 ceph-mon[76335]: pgmap v2369: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Feb 25 12:59:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:35 compute-0 nova_compute[244014]: 2026-02-25 12:59:35.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:35 compute-0 nova_compute[244014]: 2026-02-25 12:59:35.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:36 compute-0 ceph-mon[76335]: pgmap v2370: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:36 compute-0 nova_compute[244014]: 2026-02-25 12:59:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:36 compute-0 nova_compute[244014]: 2026-02-25 12:59:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 12:59:36 compute-0 nova_compute[244014]: 2026-02-25 12:59:36.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 12:59:37 compute-0 nova_compute[244014]: 2026-02-25 12:59:37.252 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:37 compute-0 nova_compute[244014]: 2026-02-25 12:59:37.253 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:37 compute-0 nova_compute[244014]: 2026-02-25 12:59:37.253 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 12:59:37 compute-0 nova_compute[244014]: 2026-02-25 12:59:37.254 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:38 compute-0 ceph-mon[76335]: pgmap v2371: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:59:38 compute-0 nova_compute[244014]: 2026-02-25 12:59:38.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.353 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.400 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
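The heal pass above rewrites the instance's network_info cache entry verbatim, and the payload is ordinary JSON, so the addresses can be pulled out directly. A minimal sketch, using a trimmed excerpt of the logged payload (only the fields the loop touches are kept; everything else is dropped for brevity):

    # Trimmed excerpt of the network_info payload logged by the heal task above.
    vif_payload = [{
        "id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.14",
                     "floating_ips": [{"address": "192.168.122.196"}]}],
        }]},
    }]
    for subnet in vif_payload[0]["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])          # -> 10.100.0.14
            for fip in ip["floating_ips"]:
                print("floating:", fip["address"])  # -> 192.168.122.196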
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.402 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.532 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.532 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.533 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.533 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 12:59:39 compute-0 nova_compute[244014]: 2026-02-25 12:59:39.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157476487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.189 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
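The resource audit above shells out to ceph for capacity data. A sketch that reproduces the probe and summarizes the totals; the command is copied verbatim from the log, but the "stats"/total_bytes field names are assumptions based on the usual `ceph df --format=json` layout, so verify them against the running Ceph release:

    import json
    import subprocess

    # Same invocation the resource tracker logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # assumed key, standard ceph df JSON
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)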
Feb 25 12:59:40 compute-0 ceph-mon[76335]: pgmap v2372: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 12:59:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2157476487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.353 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.354 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.557 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.559 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3370MB free_disk=59.94178275205195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:40 compute-0 nova_compute[244014]: 2026-02-25 12:59:40.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.104 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance fc7d7f86-eb7c-476c-840e-98c97329de34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.105 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.105 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.144 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508777480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.719 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.727 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.765 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
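The inventory dict above reconciles with the "Final resource view" a few lines earlier once placement's capacity rule is applied; a worked check, assuming the documented (total - reserved) * allocation_ratio formula:

    # Schedulable capacity implied by the logged inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

The used_ram=640MB figure in the final view also checks out: 512 MB reserved plus the single instance's MEMORY_MB: 128 allocation shown in the "actively managed" line above.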
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.839 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 12:59:41 compute-0 nova_compute[244014]: 2026-02-25 12:59:41.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:42 compute-0 ceph-mon[76335]: pgmap v2373: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1508777480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:42 compute-0 nova_compute[244014]: 2026-02-25 12:59:42.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007751263576675736 of space, bias 1.0, pg target 0.23253790730027207 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942605610493687 of space, bias 1.0, pg target 0.7482781683148106 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3624184949949025e-06 of space, bias 4.0, pg target 0.001634902193993883 quantized to 16 (current 16)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
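The pg_autoscaler lines above all fit one formula: pg target = used_ratio * bias * 300, where the factor 300 would be consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster (the OSD count is an inference from the three ceph_vg volume groups, not stated in these lines). Checking three of the logged pools:

    # pg target = used_ratio * bias * 300; values copied from the log lines above.
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0007751263576675736, 1.0),
        ("cephfs.cephfs.meta", 1.3624184949949025e-06, 4.0),
    ]
    for name, ratio, bias in pools:
        print(name, ratio * bias * 300)
    # .mgr               0.0021557249951162337 -> quantized to 1
    # vms                0.23253790730027207   -> quantized to 32
    # cephfs.cephfs.meta 0.001634902193993883  -> quantized to 16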
Feb 25 12:59:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:43 compute-0 nova_compute[244014]: 2026-02-25 12:59:43.835 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:43 compute-0 nova_compute[244014]: 2026-02-25 12:59:43.835 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:43 compute-0 nova_compute[244014]: 2026-02-25 12:59:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:44 compute-0 ceph-mon[76335]: pgmap v2374: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 12:59:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:45 compute-0 nova_compute[244014]: 2026-02-25 12:59:45.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:45 compute-0 nova_compute[244014]: 2026-02-25 12:59:45.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:45 compute-0 podman[374583]: 2026-02-25 12:59:45.736099067 +0000 UTC m=+0.071986802 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 12:59:45 compute-0 podman[374584]: 2026-02-25 12:59:45.769030466 +0000 UTC m=+0.104899350 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 12:59:45 compute-0 nova_compute[244014]: 2026-02-25 12:59:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:46 compute-0 ceph-mon[76335]: pgmap v2375: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.667 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.667 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.681 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.765 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.765 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.774 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.775 244018 INFO nova.compute.claims [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:59:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:46 compute-0 nova_compute[244014]: 2026-02-25 12:59:46.902 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74252487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.437 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.445 244018 DEBUG nova.compute.provider_tree [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.464 244018 DEBUG nova.scheduler.client.report [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:59:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/74252487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.484 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.486 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:59:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 12:59:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:59:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 12:59:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.541 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.542 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.563 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.584 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.663 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.664 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.665 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating image(s)
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.691 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.727 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.760 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.765 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.812 244018 DEBUG nova.policy [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.845 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.846 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.847 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.847 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.871 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.876 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b0014972-9433-4648-951c-bd9a210b6a69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:47 compute-0 nova_compute[244014]: 2026-02-25 12:59:47.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.418 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b0014972-9433-4648-951c-bd9a210b6a69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:48 compute-0 ceph-mon[76335]: pgmap v2376: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 12:59:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.537 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.736 244018 DEBUG nova.objects.instance [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.755 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.757 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Ensure instance console log exists: /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.758 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.759 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:48 compute-0 nova_compute[244014]: 2026-02-25 12:59:48.759 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:49 compute-0 nova_compute[244014]: 2026-02-25 12:59:49.408 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Successfully created port: 54f5b116-49ad-43de-8259-78cb20b778c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:59:49 compute-0 nova_compute[244014]: 2026-02-25 12:59:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.048 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.049 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.066 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.123 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.124 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.132 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.132 244018 INFO nova.compute.claims [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Claim successful on node compute-0.ctlplane.example.com
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.287 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.416 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Successfully updated port: 54f5b116-49ad-43de-8259-78cb20b778c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.436 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.437 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.437 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG nova.compute.manager [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-changed-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG nova.compute.manager [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Refreshing instance network info cache due to event network-changed-54f5b116-49ad-43de-8259-78cb20b778c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:50 compute-0 ceph-mon[76335]: pgmap v2377: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.600 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 12:59:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336040633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.821 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.828 244018 DEBUG nova.compute.provider_tree [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 12:59:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.845 244018 DEBUG nova.scheduler.client.report [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.869 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.870 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.945 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.946 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.970 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 12:59:50 compute-0 nova_compute[244014]: 2026-02-25 12:59:50.991 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.114 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.116 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.117 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating image(s)
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.151 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.189 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.222 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.227 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.297 244018 DEBUG nova.policy [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7592542cdf7f423c86332695423dbe79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.314 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.315 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.315 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.316 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.349 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.355 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8d338640-2b5f-4571-8f76-b523064ee129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3336040633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 12:59:51 compute-0 nova_compute[244014]: 2026-02-25 12:59:51.947 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8d338640-2b5f-4571-8f76-b523064ee129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.031 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] resizing rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.064 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.087 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.088 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance network_info: |[{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.089 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.090 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Refreshing network info cache for port 54f5b116-49ad-43de-8259-78cb20b778c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.096 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start _get_guest_xml network_info=[{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.101 244018 WARNING nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.106 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.106 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.111 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.112 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.113 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.114 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.114 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.115 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.115 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.117 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.117 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.118 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.118 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.122 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.155 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Successfully created port: e11a4d69-8666-420b-b13c-69a6427567a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.196 244018 DEBUG nova.objects.instance [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'migration_context' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.215 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.215 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Ensure instance console log exists: /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.216 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.216 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.217 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:52 compute-0 ceph-mon[76335]: pgmap v2378: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Feb 25 12:59:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145816303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.720 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.743 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:52 compute-0 nova_compute[244014]: 2026-02-25 12:59:52.749 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876705242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.262 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.264 244018 DEBUG nova.virt.libvirt.vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:47Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.264 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.265 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.266 244018 DEBUG nova.objects.instance [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.281 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Successfully updated port: e11a4d69-8666-420b-b13c-69a6427567a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.290 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <uuid>b0014972-9433-4648-951c-bd9a210b6a69</uuid>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <name>instance-00000090</name>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765</nova:name>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:59:52</nova:creationTime>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <nova:port uuid="54f5b116-49ad-43de-8259-78cb20b778c1">
Feb 25 12:59:53 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <system>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="serial">b0014972-9433-4648-951c-bd9a210b6a69</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="uuid">b0014972-9433-4648-951c-bd9a210b6a69</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </system>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <os>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </os>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <features>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </features>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b0014972-9433-4648-951c-bd9a210b6a69_disk">
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/b0014972-9433-4648-951c-bd9a210b6a69_disk.config">
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:53 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:33:21:fa"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <target dev="tap54f5b116-49"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/console.log" append="off"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <video>
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </video>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:59:53 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:59:53 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:59:53 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:59:53 compute-0 nova_compute[244014]: </domain>
Feb 25 12:59:53 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.292 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Preparing to wait for external event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.293 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.293 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.294 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.295 244018 DEBUG nova.virt.libvirt.vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:47Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.295 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.296 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.296 244018 DEBUG os_vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.297 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.298 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.299 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.306 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f5b116-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.307 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f5b116-49, col_values=(('external_ids', {'iface-id': '54f5b116-49ad-43de-8259-78cb20b778c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:21:fa', 'vm-uuid': 'b0014972-9433-4648-951c-bd9a210b6a69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:53 compute-0 NetworkManager[49836]: <info>  [1772024393.3097] manager: (tap54f5b116-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.321 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.323 244018 INFO os_vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49')
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.377 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.377 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.378 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:33:21:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.378 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Using config drive
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.402 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.410 244018 DEBUG nova.compute.manager [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.410 244018 DEBUG nova.compute.manager [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.411 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.468 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.680 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updated VIF entry in instance network info cache for port 54f5b116-49ad-43de-8259-78cb20b778c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.681 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.702 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/145816303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:53 compute-0 ceph-mon[76335]: pgmap v2379: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/876705242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.752 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating config drive at /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config
Feb 25 12:59:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.757 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpda0ick6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.900 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpda0ick6j" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.941 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:53 compute-0 nova_compute[244014]: 2026-02-25 12:59:53.946 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config b0014972-9433-4648-951c-bd9a210b6a69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.239 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config b0014972-9433-4648-951c-bd9a210b6a69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.240 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deleting local config drive /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config because it was imported into RBD.
Feb 25 12:59:54 compute-0 NetworkManager[49836]: <info>  [1772024394.2966] manager: (tap54f5b116-49): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Feb 25 12:59:54 compute-0 kernel: tap54f5b116-49: entered promiscuous mode
Feb 25 12:59:54 compute-0 ovn_controller[147040]: 2026-02-25T12:59:54Z|01531|binding|INFO|Claiming lport 54f5b116-49ad-43de-8259-78cb20b778c1 for this chassis.
Feb 25 12:59:54 compute-0 ovn_controller[147040]: 2026-02-25T12:59:54Z|01532|binding|INFO|54f5b116-49ad-43de-8259-78cb20b778c1: Claiming fa:16:3e:33:21:fa 10.100.0.4
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:54 compute-0 ovn_controller[147040]: 2026-02-25T12:59:54Z|01533|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 ovn-installed in OVS
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:54 compute-0 systemd-udevd[375134]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:59:54 compute-0 NetworkManager[49836]: <info>  [1772024394.3553] device (tap54f5b116-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:59:54 compute-0 NetworkManager[49836]: <info>  [1772024394.3567] device (tap54f5b116-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.628 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:21:fa 10.100.0.4'], port_security=['fa:16:3e:33:21:fa 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b0014972-9433-4648-951c-bd9a210b6a69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=54f5b116-49ad-43de-8259-78cb20b778c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.630 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 54f5b116-49ad-43de-8259-78cb20b778c1 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 bound to our chassis
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.633 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27899da6-db4e-4328-b09a-cd025a0c14e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.678 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bb77d0-097e-4579-8650-8284c1b90a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63a885e4-52d1-48f3-ae14-99d4c1395a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 ovn_controller[147040]: 2026-02-25T12:59:54Z|01534|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 up in Southbound
Feb 25 12:59:54 compute-0 systemd-machined[210048]: New machine qemu-178-instance-00000090.
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.712 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7a8743-7864-4c19-bb8c-73538554aea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.731 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b127624-b731-4a8b-b308-5dcc6437ea99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375145, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f775a67-aa05-44ad-b3bc-7e9a12dd03b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632625, 'tstamp': 632625}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375146, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632627, 'tstamp': 632627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375146, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.746 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:54 compute-0 nova_compute[244014]: 2026-02-25 12:59:54.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:54 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.037 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.399 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.3990338, b0014972-9433-4648-951c-bd9a210b6a69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.399 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Started (Lifecycle Event)
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.427 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.4016118, b0014972-9433-4648-951c-bd9a210b6a69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.427 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Paused (Lifecycle Event)
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.445 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.447 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.470 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.590 244018 DEBUG nova.compute.manager [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.591 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.592 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.592 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.593 244018 DEBUG nova.compute.manager [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Processing event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.593 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.597 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.597441, b0014972-9433-4648-951c-bd9a210b6a69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.598 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Resumed (Lifecycle Event)
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.601 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.607 244018 INFO nova.virt.libvirt.driver [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance spawned successfully.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.608 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.617 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.621 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.629 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.637 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.638 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.638 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.639 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.640 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.641 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
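The six "Found default for ..." lines above record nova persisting, per instance, the hypervisor defaults it actually applied for image properties the image never set, so the same buses survive later changes to the host defaults. A minimal sketch of that pattern (illustrative only, not nova's implementation; the dict literal mirrors the values logged above):

    # Sketch: register defaults only for properties the image left undefined.
    hypervisor_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props):
        """Return the defaults that apply because the image did not set them."""
        return {key: value for key, value in hypervisor_defaults.items()
                if key not in image_props}

    print(register_undefined_details({"hw_disk_bus": "scsi"}))
    # -> every default except hw_disk_bus, which the image pinned itself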
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.646 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.666 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.666 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance network_info: |[{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
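The network_info blob above (logged first during the cache update, then echoed by _allocate_network_async) is plain JSON once the surrounding |...| is stripped, so the addressing details can be pulled out with a few lines of Python when reading these logs. The field names below are exactly those in the log entry; network_info.json is a hypothetical file holding the pasted list:

    # Sketch: summarize a pasted network_info list.
    import json

    with open("network_info.json") as f:
        vifs = json.load(f)

    for vif in vifs:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips,
              "mtu", vif["network"]["meta"]["mtu"])
    # -> e11a4d69-... fa:16:3e:15:04:04 ['10.100.0.9'] mtu 1442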
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.667 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.668 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.673 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start _get_guest_xml network_info=[{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.679 244018 WARNING nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.686 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.688 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.692 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.693 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
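The two probes above first look for a CPU controller via cgroups v1 (missing on this host) and then via cgroups v2 (found). On a v2 host the check reduces to reading the unified hierarchy's controller list; a sketch under that assumption, using the standard v2 mount point rather than anything taken from nova:

    # Sketch: detect a usable CPU controller on a cgroups-v2 host.
    # /sys/fs/cgroup/cgroup.controllers lists the controllers offered by the
    # unified hierarchy, e.g. "cpuset cpu io memory hugetlb pids".
    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        if not controllers.exists():
            return False  # not a v2 (unified) host
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())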
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.693 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.694 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.695 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.695 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.697 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.697 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.698 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.698 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.699 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
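The topology walk above (flavor and image limits all 0:0:0, caps of 65536 per level, exactly one possible topology for a single vCPU) follows from enumerating every sockets/cores/threads triple whose product equals the vCPU count and which stays within the caps. A sketch of that enumeration, not nova's exact code:

    # Sketch: enumerate sockets*cores*threads triples for a given vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (4, 1, 1), ...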
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.704 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.747 244018 INFO nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 8.08 seconds to spawn the instance on the hypervisor.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.748 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.813 244018 INFO nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 9.08 seconds to build instance.
Feb 25 12:59:55 compute-0 nova_compute[244014]: 2026-02-25 12:59:55.829 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
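The Acquiring/acquired/released triples scattered through these logs, including the build lock just released after 9.162s, come from oslo.concurrency's named locks. For reference, the pattern in application code is a single context manager; the lock name below is the instance UUID purely for illustration:

    # Sketch: a named oslo.concurrency lock like the one bracketing the build.
    from oslo_concurrency import lockutils

    with lockutils.lock("b0014972-9433-4648-951c-bd9a210b6a69",
                        external=False):
        # critical section: only one worker builds this instance at a time
        pass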
Feb 25 12:59:55 compute-0 ceph-mon[76335]: pgmap v2380: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1818604780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.289 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
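`ceph mon dump --format=json` (invoked here and again shortly after) is how the driver learns the monitor endpoints that later appear in the <host> elements of the guest XML. A sketch of extracting them, assuming the usual mon-map layout in which each entry of "mons" carries a public_addrs.addrvec list:

    # Sketch: extract v1 monitor endpoints from `ceph mon dump --format=json`.
    # Assumed layout: {"mons": [{"public_addrs": {"addrvec":
    #   [{"type": "v1", "addr": "192.168.122.100:6789", ...}, ...]}}, ...]}
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    for mon in json.loads(out)["mons"]:
        for ep in mon["public_addrs"]["addrvec"]:
            if ep["type"] == "v1":
                host, port = ep["addr"].rsplit(":", 1)
                print(host, port)    # e.g. 192.168.122.100 6789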
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.329 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.334 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.771 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.772 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.794 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 12:59:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 12:59:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711484078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.859 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.861 244018 DEBUG nova.virt.libvirt.vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:51Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.861 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.862 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.863 244018 DEBUG nova.objects.instance [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.879 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] End _get_guest_xml xml=<domain type="kvm">
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <uuid>8d338640-2b5f-4571-8f76-b523064ee129</uuid>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <name>instance-00000091</name>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <metadata>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:name>tempest-TestSnapshotPattern-server-1793809408</nova:name>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 12:59:55</nova:creationTime>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:user uuid="7592542cdf7f423c86332695423dbe79">tempest-TestSnapshotPattern-1122231160-project-member</nova:user>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:project uuid="cf5eb89ba0424237a313b1f369bcb92b">tempest-TestSnapshotPattern-1122231160</nova:project>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <nova:port uuid="e11a4d69-8666-420b-b13c-69a6427567a4">
Feb 25 12:59:56 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </metadata>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <system>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="serial">8d338640-2b5f-4571-8f76-b523064ee129</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="uuid">8d338640-2b5f-4571-8f76-b523064ee129</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </system>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <os>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </os>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <features>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <apic/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </features>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </clock>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </cpu>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   <devices>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8d338640-2b5f-4571-8f76-b523064ee129_disk">
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8d338640-2b5f-4571-8f76-b523064ee129_disk.config">
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </source>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 12:59:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:15:04:04"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <target dev="tape11a4d69-86"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </interface>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/console.log" append="off"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </serial>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <video>
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </video>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </rng>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 12:59:56 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 12:59:56 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 12:59:56 compute-0 nova_compute[244014]:   </devices>
Feb 25 12:59:56 compute-0 nova_compute[244014]: </domain>
Feb 25 12:59:56 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
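The domain XML dumped above is ordinary libvirt XML, so when debugging a spawn it can be picked apart with the standard library; a sketch that lists the RBD-backed disks and their monitor endpoint from a saved copy (domain.xml is a hypothetical file holding the dump above):

    # Sketch: list the RBD-backed disks from a saved copy of the domain XML.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        host = source.find("host")
        print(disk.get("device"),        # disk / cdrom
              source.get("name"),        # vms/<uuid>_disk[.config]
              host.get("name"), host.get("port"))
    # -> disk  vms/8d338640-..._disk         192.168.122.100 6789
    # -> cdrom vms/8d338640-..._disk.config  192.168.122.100 6789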
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.880 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Preparing to wait for external event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
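prepare_for_instance_event registers a waiter for the network-vif-plugged event before the VIF is actually plugged, so a neutron notification that races ahead of the wait cannot be lost; the "<uuid>-events" lock above guards that registry. A minimal sketch of the same shape with threading primitives (illustrative; nova's version is eventlet-based):

    # Sketch: register-then-wait for an external event so a notification
    # that arrives before the wait starts is still delivered.
    import threading

    _events = {}                 # (instance_uuid, event_name) -> Event
    _events_lock = threading.Lock()

    def prepare(instance_uuid, event_name):
        with _events_lock:       # mirrors the "<uuid>-events" lock above
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver(instance_uuid, event_name):
        with _events_lock:
            event = _events.pop((instance_uuid, event_name), None)
        if event is not None:
            event.set()

    waiter = prepare("8d338640", "network-vif-plugged")   # before plugging
    deliver("8d338640", "network-vif-plugged")            # neutron callback
    print(waiter.wait(timeout=300))                       # True: not lost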
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.882 244018 DEBUG nova.virt.libvirt.vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:51Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.882 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.883 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.883 244018 DEBUG os_vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.884 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.885 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape11a4d69-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape11a4d69-86, col_values=(('external_ids', {'iface-id': 'e11a4d69-8666-420b-b13c-69a6427567a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:04:04', 'vm-uuid': '8d338640-2b5f-4571-8f76-b523064ee129'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:56 compute-0 NetworkManager[49836]: <info>  [1772024396.8911] manager: (tape11a4d69-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:56 compute-0 nova_compute[244014]: 2026-02-25 12:59:56.895 244018 INFO os_vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86')
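The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are what os-vif pushes through the OVSDB IDL to plug the port. A rough equivalent through ovsdbapp's public OVS-schema API is sketched below; the database socket path is the conventional one and an assumption, not a value taken from this host:

    # Sketch: plug a port the way os-vif does, via ovsdbapp's OVS-schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "e11a4d69-8666-420b-b13c-69a6427567a4",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:15:04:04",
        "vm-uuid": "8d338640-2b5f-4571-8f76-b523064ee129",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tape11a4d69-86", may_exist=True))
        txn.add(api.db_set("Interface", "tape11a4d69-86",
                           ("external_ids", external_ids)))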
Feb 25 12:59:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1818604780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1711484078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No VIF found with MAC fa:16:3e:15:04:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.238 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Using config drive
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.262 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.662 244018 DEBUG nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.663 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.663 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.664 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.665 244018 DEBUG nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 12:59:57 compute-0 nova_compute[244014]: 2026-02-25 12:59:57.665 244018 WARNING nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received unexpected event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with vm_state active and task_state None.
Feb 25 12:59:57 compute-0 ceph-mon[76335]: pgmap v2381: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 12:59:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 12:59:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.424 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating config drive at /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.427 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbz3_su6e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.554 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbz3_su6e" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.578 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.584 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config 8d338640-2b5f-4571-8f76-b523064ee129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.730 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config 8d338640-2b5f-4571-8f76-b523064ee129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.731 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deleting local config drive /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config because it was imported into RBD.
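Taken together, the driver lines above are the whole RBD-backed config-drive path: confirm that <uuid>_disk.config does not yet exist, build the ISO locally with mkisofs, import it into the vms pool, then delete the local file. A hedged sketch of the same three steps, with the command arguments lifted from the log (illustrative, not Nova's actual implementation; requires mkisofs and the rbd CLI on PATH):

    import os
    import subprocess

    def make_config_drive_rbd(instance_uuid, content_dir, pool='vms',
                              ceph_id='openstack', conf='/etc/ceph/ceph.conf'):
        iso = f'/var/lib/nova/instances/{instance_uuid}/disk.config'
        # 1. Build the ISO9660 config drive (volume label config-2).
        subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                        '-allow-multidot', '-l', '-quiet', '-J', '-r',
                        '-V', 'config-2', content_dir], check=True)
        # 2. Import it into the Ceph pool as <uuid>_disk.config, format 2.
        subprocess.run(['rbd', 'import', '--pool', pool, iso,
                        f'{instance_uuid}_disk.config', '--image-format=2',
                        '--id', ceph_id, '--conf', conf], check=True)
        # 3. Remove the local copy, as the driver logs above.
        os.unlink(iso)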
Feb 25 12:59:59 compute-0 NetworkManager[49836]: <info>  [1772024399.7734] manager: (tape11a4d69-86): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Feb 25 12:59:59 compute-0 kernel: tape11a4d69-86: entered promiscuous mode
Feb 25 12:59:59 compute-0 ovn_controller[147040]: 2026-02-25T12:59:59Z|01535|binding|INFO|Claiming lport e11a4d69-8666-420b-b13c-69a6427567a4 for this chassis.
Feb 25 12:59:59 compute-0 ovn_controller[147040]: 2026-02-25T12:59:59Z|01536|binding|INFO|e11a4d69-8666-420b-b13c-69a6427567a4: Claiming fa:16:3e:15:04:04 10.100.0.9
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:59 compute-0 ovn_controller[147040]: 2026-02-25T12:59:59Z|01537|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 ovn-installed in OVS
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 12:59:59 compute-0 systemd-udevd[375328]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 12:59:59 compute-0 NetworkManager[49836]: <info>  [1772024399.8052] device (tape11a4d69-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 12:59:59 compute-0 NetworkManager[49836]: <info>  [1772024399.8064] device (tape11a4d69-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 12:59:59 compute-0 nova_compute[244014]: 2026-02-25 12:59:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 12:59:59 compute-0 ceph-mon[76335]: pgmap v2382: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 13:00:00 compute-0 ovn_controller[147040]: 2026-02-25T13:00:00Z|01538|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 up in Southbound
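The four ovn_controller binding messages above are the standard chassis claim handshake for the new port: claim the lport, bind its MAC and IP, mark the OVS interface ovn-installed, then flip the Southbound Port_Binding up. If the resulting SB record needs checking by hand, something like the following should work (standard ovn-sbctl CLI; assumes access to the OVN Southbound database):

    import subprocess

    # Logical port UUID taken from the log lines above.
    LPORT = 'e11a4d69-8666-420b-b13c-69a6427567a4'
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding', f'logical_port={LPORT}'],
        check=True)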
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.403 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:04:04 10.100.0.9'], port_security=['fa:16:3e:15:04:04 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d338640-2b5f-4571-8f76-b523064ee129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e11a4d69-8666-420b-b13c-69a6427567a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.405 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e11a4d69-8666-420b-b13c-69a6427567a4 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 bound to our chassis
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fa6213-887a-4453-8e64-d73e22933e14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.421 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4273798e-51 in ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.423 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4273798e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.423 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21e01010-3d8d-4d51-afe7-71886a68433b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.426 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94b925e5-60b3-4b3f-83b7-e6be851c7724]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 systemd-machined[210048]: New machine qemu-179-instance-00000091.
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.444 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8c95307b-88a4-4e73-b33a-15e126268296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.467 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7da352eb-af29-45a8-82d3-f7b219593101]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.492 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cffddcb6-533a-41d2-853c-fb9e4026b0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 NetworkManager[49836]: <info>  [1772024400.4989] manager: (tap4273798e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/634)
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfcada9-d4c2-4c95-a6de-522c34de8556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.530 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bab68de0-291e-4800-a172-bfae83f85b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.533 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c73d0eb8-76ab-41d6-b059-888cdd03ec7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 NetworkManager[49836]: <info>  [1772024400.5504] device (tap4273798e-50): carrier: link connected
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ab88fa72-a3b2-4cb1-b568-38b08582c2c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9a8601-97ea-4f1b-b95f-95980437021e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375364, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.583 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[296ca9d1-e013-4827-bf13-672727c6d902]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636995, 'tstamp': 636995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375365, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf4718c-07a0-4aa9-911f-c503ccea4806]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375366, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
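The two privsep replies above are raw RTM_NEWLINK dumps (pyroute2-style dicts) for the veth end tap4273798e-51 inside the ovnmeta namespace; the useful facts are buried in the attrs list. A small hedged helper that summarizes a message of that shape:

    def link_summary(msg):
        # Collapse the [[name, value], ...] attrs list to a dict, dropping
        # the 'UNKNOWN' placeholders that appear in the dump above.
        attrs = {k: v for k, v in msg['attrs'] if k != 'UNKNOWN'}
        return {
            'name': attrs.get('IFLA_IFNAME'),      # tap4273798e-51
            'mac': attrs.get('IFLA_ADDRESS'),      # fa:16:3e:d8:a3:41
            'mtu': attrs.get('IFLA_MTU'),          # 1500
            'carrier': attrs.get('IFLA_CARRIER'),  # 1 = link up
            'state': msg.get('state'),             # 'up'
        }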
Feb 25 13:00:00 compute-0 nova_compute[244014]: 2026-02-25 13:00:00.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.624 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a989c8c3-b338-4b5c-bfdc-5a293d218518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[281c252c-4750-4b19-a53a-39469868fb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.682 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:00 compute-0 NetworkManager[49836]: <info>  [1772024400.6862] manager: (tap4273798e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Feb 25 13:00:00 compute-0 nova_compute[244014]: 2026-02-25 13:00:00.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:00 compute-0 kernel: tap4273798e-50: entered promiscuous mode
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.690 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:00 compute-0 nova_compute[244014]: 2026-02-25 13:00:00.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:00 compute-0 ovn_controller[147040]: 2026-02-25T13:00:00Z|01539|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=1)
Feb 25 13:00:00 compute-0 nova_compute[244014]: 2026-02-25 13:00:00.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.699 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90be80e9-2a39-419c-8817-750fd2538959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.700 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: global
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
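The dumped haproxy_cfg above is the per-network metadata proxy: it binds 169.254.169.254:80 inside the ovnmeta namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the network. Assuming root and the namespace name from the log, the proxy can be exercised with a plain HTTP request from inside the namespace (illustrative only; the URL is the standard OpenStack metadata endpoint):

    import subprocess

    NS = 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776'
    # 169.254.169.254:80 is only bound inside the namespace, so enter it first.
    result = subprocess.run(
        ['ip', 'netns', 'exec', NS, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=True)
    print(result.stdout)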
Feb 25 13:00:00 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.701 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'env', 'PROCESS_TAG=haproxy-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4273798e-5f22-4d98-8d00-a22d1ea2c776.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 13:00:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.012 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.0114937, 8d338640-2b5f-4571-8f76-b523064ee129 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.012 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Started (Lifecycle Event)
Feb 25 13:00:01 compute-0 podman[375437]: 2026-02-25 13:00:01.076167105 +0000 UTC m=+0.033414914 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.218 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.224 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.01498, 8d338640-2b5f-4571-8f76-b523064ee129 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Paused (Lifecycle Event)
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.246 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.250 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.273 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] During sync_power_state the instance has a pending task (spawning). Skip.
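The power-state sync above compares integer codes: the DB still holds 0 while libvirt reports 3 during the Paused lifecycle event (and 1 once Resumed, further down). For reference, the relevant constants from nova/compute/power_state.py:

    # Nova power_state codes appearing in the sync lines above.
    NOSTATE = 0   # DB power_state before the first sync completes
    RUNNING = 1   # VM power_state reported after the Resumed event
    PAUSED  = 3   # VM power_state reported during the Paused event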
Feb 25 13:00:01 compute-0 podman[375437]: 2026-02-25 13:00:01.339242976 +0000 UTC m=+0.296490765 container create 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 13:00:01 compute-0 systemd[1]: Started libpod-conmon-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope.
Feb 25 13:00:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c064d32c5a8bc9b32dbc5298e999e73ca9ab9f090eb5ef23c399a7752b4fd87b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:01 compute-0 podman[375437]: 2026-02-25 13:00:01.50280519 +0000 UTC m=+0.460053049 container init 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 25 13:00:01 compute-0 podman[375437]: 2026-02-25 13:00:01.510563619 +0000 UTC m=+0.467811438 container start 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 13:00:01 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : New worker (375459) forked
Feb 25 13:00:01 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : Loading success.
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.729 244018 DEBUG nova.compute.manager [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.729 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG nova.compute.manager [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Processing event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.731 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.736 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.736 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.7359138, 8d338640-2b5f-4571-8f76-b523064ee129 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Resumed (Lifecycle Event)
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.741 244018 INFO nova.virt.libvirt.driver [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance spawned successfully.
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.742 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.765 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.770 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.774 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.775 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.775 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.776 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.776 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.777 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
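The six "Found default for ..." lines record the bus/model defaults libvirt actually chose for this guest, which Nova persists so later operations (hot-plug, rebuild) keep using the same virtual hardware even if driver defaults change. Collected from the log, the registered values are (dict shape is illustrative only):

    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }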
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.807 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.845 244018 INFO nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 10.73 seconds to spawn the instance on the hypervisor.
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.845 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.938 244018 INFO nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 11.83 seconds to build instance.
Feb 25 13:00:01 compute-0 nova_compute[244014]: 2026-02-25 13:00:01.975 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:02 compute-0 ceph-mon[76335]: pgmap v2383: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 13:00:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Feb 25 13:00:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.812 244018 DEBUG nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] No waiting events found dispatching network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:03 compute-0 nova_compute[244014]: 2026-02-25 13:00:03.814 244018 WARNING nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received unexpected event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 for instance with vm_state active and task_state None.
Feb 25 13:00:04 compute-0 ceph-mon[76335]: pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Feb 25 13:00:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.933 244018 DEBUG nova.compute.manager [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.933 244018 DEBUG nova.compute.manager [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.934 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.934 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:05 compute-0 nova_compute[244014]: 2026-02-25 13:00:05.935 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:06 compute-0 ceph-mon[76335]: pgmap v2385: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 13:00:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 13:00:06 compute-0 nova_compute[244014]: 2026-02-25 13:00:06.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:07 compute-0 nova_compute[244014]: 2026-02-25 13:00:07.124 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:07 compute-0 nova_compute[244014]: 2026-02-25 13:00:07.125 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
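The instance_info_cache update above carries the full network_info model as JSON; addresses nest as network → subnets → ips → floating_ips. A hedged snippet for flattening one VIF entry of that shape (vif is one element of the logged list):

    def vif_addresses(vif):
        fixed, floating = [], []
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fixed.append(ip['address'])                       # 10.100.0.9
                floating.extend(f['address'] for f in ip['floating_ips'])
        return fixed, floating        # (['10.100.0.9'], ['192.168.122.237'])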
Feb 25 13:00:07 compute-0 nova_compute[244014]: 2026-02-25 13:00:07.154 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:07 compute-0 ovn_controller[147040]: 2026-02-25T13:00:07Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:21:fa 10.100.0.4
Feb 25 13:00:07 compute-0 ovn_controller[147040]: 2026-02-25T13:00:07Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:21:fa 10.100.0.4
Feb 25 13:00:08 compute-0 ceph-mon[76335]: pgmap v2386: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 13:00:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Feb 25 13:00:10 compute-0 ceph-mon[76335]: pgmap v2387: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Feb 25 13:00:10 compute-0 nova_compute[244014]: 2026-02-25 13:00:10.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Feb 25 13:00:11 compute-0 nova_compute[244014]: 2026-02-25 13:00:11.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:12 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 25 13:00:12 compute-0 ceph-mon[76335]: pgmap v2388: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Feb 25 13:00:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 13:00:13 compute-0 ovn_controller[147040]: 2026-02-25T13:00:13Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:04:04 10.100.0.9
Feb 25 13:00:13 compute-0 ovn_controller[147040]: 2026-02-25T13:00:13Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:04:04 10.100.0.9
Feb 25 13:00:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:14 compute-0 ceph-mon[76335]: pgmap v2389: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.706 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.707 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.707 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.708 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.709 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.711 244018 INFO nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Terminating instance
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.712 244018 DEBUG nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:00:14 compute-0 kernel: tap54f5b116-49 (unregistering): left promiscuous mode
Feb 25 13:00:14 compute-0 NetworkManager[49836]: <info>  [1772024414.7740] device (tap54f5b116-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:00:14 compute-0 ovn_controller[147040]: 2026-02-25T13:00:14Z|01540|binding|INFO|Releasing lport 54f5b116-49ad-43de-8259-78cb20b778c1 from this chassis (sb_readonly=0)
Feb 25 13:00:14 compute-0 ovn_controller[147040]: 2026-02-25T13:00:14Z|01541|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 down in Southbound
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.781 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 ovn_controller[147040]: 2026-02-25T13:00:14Z|01542|binding|INFO|Removing iface tap54f5b116-49 ovn-installed in OVS
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.789 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:21:fa 10.100.0.4'], port_security=['fa:16:3e:33:21:fa 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b0014972-9433-4648-951c-bd9a210b6a69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=54f5b116-49ad-43de-8259-78cb20b778c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.790 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 54f5b116-49ad-43de-8259-78cb20b778c1 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 unbound from our chassis
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.792 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a06369-a45f-4d59-b850-4f4b8fb49c5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Feb 25 13:00:14 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 12.085s CPU time.
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.835 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0c3230-62ac-43df-b856-e525ca2be2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 systemd-machined[210048]: Machine qemu-178-instance-00000090 terminated.
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.839 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f96ed993-bdb4-4fb5-9c7a-88f11913b990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.865 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce25ee47-b4c3-429c-8470-8dcb08ec3d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[565db2e2-c7d0-4ab4-ab2d-dd640be29a26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375479, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c05db7-08b2-4a3f-9740-67ff06335121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632625, 'tstamp': 632625}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375480, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632627, 'tstamp': 632627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375480, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.903 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.905 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.910 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.911 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.911 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:14 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.912 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.951 244018 INFO nova.virt.libvirt.driver [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance destroyed successfully.
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.951 244018 DEBUG nova.objects.instance [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.991 244018 DEBUG nova.virt.libvirt.vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:59:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:55Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.991 244018 DEBUG nova.network.os_vif_util [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.992 244018 DEBUG nova.network.os_vif_util [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.993 244018 DEBUG os_vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f5b116-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:14 compute-0 nova_compute[244014]: 2026-02-25 13:00:14.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.003 244018 INFO os_vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49')
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.342 244018 INFO nova.virt.libvirt.driver [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deleting instance files /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69_del
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.343 244018 INFO nova.virt.libvirt.driver [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deletion of /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69_del complete
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.395 244018 INFO nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.395 244018 DEBUG oslo.service.loopingcall [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.396 244018 DEBUG nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.396 244018 DEBUG nova.network.neutron [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:00:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:00:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 173K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.76 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5670 writes, 23K keys, 5670 commit groups, 1.0 writes per commit group, ingest: 27.83 MB, 0.05 MB/s
                                           Interval WAL: 5670 writes, 2215 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:00:15 compute-0 nova_compute[244014]: 2026-02-25 13:00:15.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:16 compute-0 ceph-mon[76335]: pgmap v2390: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 13:00:16 compute-0 podman[375512]: 2026-02-25 13:00:16.74616305 +0000 UTC m=+0.078902147 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 13:00:16 compute-0 podman[375513]: 2026-02-25 13:00:16.784850671 +0000 UTC m=+0.117555107 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 25 13:00:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 DEBUG nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 WARNING nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received unexpected event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with vm_state active and task_state deleting.
Feb 25 13:00:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:17.638 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:17 compute-0 nova_compute[244014]: 2026-02-25 13:00:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:17.639 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:00:18 compute-0 ceph-mon[76335]: pgmap v2391: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.618 244018 DEBUG nova.network.neutron [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.642 244018 INFO nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 3.25 seconds to deallocate network for instance.
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.710 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.710 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.738 244018 DEBUG nova.compute.manager [req-76fdac8d-b780-40c8-9798-ffcaf17babe9 req-ed683fe8-323c-4d38-975c-b07d6c2d43d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-deleted-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:18 compute-0 nova_compute[244014]: 2026-02-25 13:00:18.803 244018 DEBUG oslo_concurrency.processutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Feb 25 13:00:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035139781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.347 244018 DEBUG oslo_concurrency.processutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.353 244018 DEBUG nova.compute.provider_tree [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.372 244018 DEBUG nova.scheduler.client.report [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.393 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:00:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5581 writes, 24K keys, 5581 commit groups, 1.0 writes per commit group, ingest: 27.22 MB, 0.05 MB/s
                                           Interval WAL: 5581 writes, 2111 syncs, 2.64 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.421 244018 INFO nova.scheduler.client.report [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance b0014972-9433-4648-951c-bd9a210b6a69
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.493 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:19 compute-0 sudo[375580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:00:19 compute-0 sudo[375580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:19 compute-0 sudo[375580]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:19 compute-0 sudo[375605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:00:19 compute-0 sudo[375605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:19 compute-0 nova_compute[244014]: 2026-02-25 13:00:19.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.247 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.248 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.248 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.249 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.249 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.251 244018 INFO nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Terminating instance
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.253 244018 DEBUG nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:00:20 compute-0 kernel: tap5b9c40d0-fe (unregistering): left promiscuous mode
Feb 25 13:00:20 compute-0 NetworkManager[49836]: <info>  [1772024420.3259] device (tap5b9c40d0-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:00:20 compute-0 ovn_controller[147040]: 2026-02-25T13:00:20Z|01543|binding|INFO|Releasing lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 from this chassis (sb_readonly=0)
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 ovn_controller[147040]: 2026-02-25T13:00:20Z|01544|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 down in Southbound
Feb 25 13:00:20 compute-0 ovn_controller[147040]: 2026-02-25T13:00:20Z|01545|binding|INFO|Removing iface tap5b9c40d0-fe ovn-installed in OVS
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 ceph-mon[76335]: pgmap v2392: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Feb 25 13:00:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4035139781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.340 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:fb:af 10.100.0.14'], port_security=['fa:16:3e:db:fb:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fc7d7f86-eb7c-476c-840e-98c97329de34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1134355f-9f11-4363-a935-f82cb22e83bd 1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 unbound from our chassis
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.343 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
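
The "Matched UPDATE: PortBindingUpdatedEvent" entry above is ovsdbapp's row-event machinery at work: the OVN metadata agent watches the Southbound Port_Binding table and reacts when a port it hosts goes down. A minimal sketch of that pattern, assuming ovsdbapp is installed; the handler body below is illustrative, not Neutron's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire whenever a Port_Binding row is updated."""

        def __init__(self):
            # events/table/conditions mirror what the log line above
            # printed: (('update',), 'Port_Binding', None).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Illustrative handler: the real agent tears the metadata
            # namespace down once the port is unbound from its chassis.
            print('Port_Binding %s changed, up=%s' % (row.logical_port,
                                                      row.up))

An agent registers such an event with its IDL's row-event handler; matches() is then evaluated against every update and run() fires on a hit, which is exactly the "Matched UPDATE ... matches" line logged here.
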
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eba56fa9-ed88-4cbb-a546-2ec83f2f41e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.345 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 namespace which is not needed anymore
Feb 25 13:00:20 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Feb 25 13:00:20 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 14.723s CPU time.
Feb 25 13:00:20 compute-0 systemd-machined[210048]: Machine qemu-177-instance-0000008f terminated.
Feb 25 13:00:20 compute-0 sudo[375605]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.480 244018 INFO nova.virt.libvirt.driver [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance destroyed successfully.
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.481 244018 DEBUG nova.objects.instance [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.498 244018 DEBUG nova.virt.libvirt.vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:59:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:18Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.498 244018 DEBUG nova.network.os_vif_util [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.499 244018 DEBUG nova.network.os_vif_util [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.499 244018 DEBUG os_vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.501 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b9c40d0-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
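
The DelPortCommand transaction above is os-vif unplugging the instance's tap device from br-int through ovsdbapp. A rough sketch of the same operation against a local ovsdb-server, assuming ovsdbapp is installed and the default OVS socket path; the port and bridge names are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch'),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # Equivalent to the logged DelPortCommand(port=tap5b9c40d0-fe,
    # bridge=br-int, if_exists=True): drop the port, tolerating absence.
    ovs.del_port('tap5b9c40d0-fe', bridge='br-int',
                 if_exists=True).execute(check_error=True)

From a shell, the same removal would be: ovs-vsctl --if-exists del-port br-int tap5b9c40d0-fe
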
Feb 25 13:00:20 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : haproxy version is 2.8.14-c23fe91
Feb 25 13:00:20 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : path to executable is /usr/sbin/haproxy
Feb 25 13:00:20 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [WARNING]  (374288) : Exiting Master process...
Feb 25 13:00:20 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [ALERT]    (374288) : Current worker (374300) exited with code 143 (Terminated)
Feb 25 13:00:20 compute-0 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [WARNING]  (374288) : All workers exited. Exiting... (0)
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.508 244018 INFO os_vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe')
Feb 25 13:00:20 compute-0 systemd[1]: libpod-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope: Deactivated successfully.
Feb 25 13:00:20 compute-0 podman[375684]: 2026-02-25 13:00:20.516857424 +0000 UTC m=+0.088244180 container died 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:00:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:00:20 compute-0 sudo[375731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:00:20 compute-0 sudo[375731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:20 compute-0 sudo[375731]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea-userdata-shm.mount: Deactivated successfully.
Feb 25 13:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-00e350bf0228b37d39a08274b06ef58a344901c5a3e3eee456d5980226c76505-merged.mount: Deactivated successfully.
Feb 25 13:00:20 compute-0 sudo[375764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:00:20 compute-0 sudo[375764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:20 compute-0 podman[375684]: 2026-02-25 13:00:20.640497082 +0000 UTC m=+0.211883838 container cleanup 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:00:20 compute-0 systemd[1]: libpod-conmon-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope: Deactivated successfully.
Feb 25 13:00:20 compute-0 podman[375794]: 2026-02-25 13:00:20.748960481 +0000 UTC m=+0.085419800 container remove 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.759 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[054abd8e-7a3e-4546-ab63-6a6fef539af9]: (4, ('Wed Feb 25 01:00:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 (2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea)\n2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea\nWed Feb 25 01:00:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 (2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea)\n2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a393f8b-7885-4b8c-a684-49464afde31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.761 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 kernel: tap62bbaf1c-50: left promiscuous mode
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59f6d8a5-f273-4469-9bdc-991ef7655b18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3daea00c-c24e-4759-a890-dbd5916ef82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.791 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6dc0fda-cf9c-468f-92d7-e49d9ff78b2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d14eefa-1516-4ccc-a286-44bf29e4c3f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632611, 'reachable_time': 43680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375809, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.808 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:00:20 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.809 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa9d40-afe7-4398-9d71-7508c55a6a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d62bbaf1c\x2d560e\x2d4f11\x2db053\x2d43d27fe35ba2.mount: Deactivated successfully.
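
The remove_netns entry above is Neutron's privsep-wrapped namespace deletion, and the systemd line confirms the corresponding /run/netns bind mount was unmounted. A rough equivalent using pyroute2 (the library behind neutron's privileged ip_lib), with the namespace name taken from the log:

    import errno

    from pyroute2 import netns

    try:
        # Removes the named namespace and its /run/netns bind mount,
        # mirroring the logged remove_netns call.
        netns.remove('ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2')
    except OSError as e:
        if e.errno != errno.ENOENT:  # already gone: nothing to do
            raise
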
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.851 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.852 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.852 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.853 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:20 compute-0 nova_compute[244014]: 2026-02-25 13:00:20.853 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 25 13:00:20 compute-0 podman[375822]: 2026-02-25 13:00:20.963904205 +0000 UTC m=+0.067064113 container create 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:20.9226152 +0000 UTC m=+0.025775178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:21 compute-0 systemd[1]: Started libpod-conmon-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope.
Feb 25 13:00:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:21.089010094 +0000 UTC m=+0.192170062 container init 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:21.096901666 +0000 UTC m=+0.200061584 container start 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:00:21 compute-0 jovial_pike[375838]: 167 167
Feb 25 13:00:21 compute-0 systemd[1]: libpod-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope: Deactivated successfully.
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:21.129136736 +0000 UTC m=+0.232296834 container attach 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:21.129552868 +0000 UTC m=+0.232712776 container died 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e75caac92e819f6daf5aa44940c49249d3387b5f8b1672fc941ba3212bda929-merged.mount: Deactivated successfully.
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.223 244018 INFO nova.virt.libvirt.driver [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deleting instance files /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34_del
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.226 244018 INFO nova.virt.libvirt.driver [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deletion of /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34_del complete
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.288 244018 INFO nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 1.03 seconds to destroy the instance on the hypervisor.
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.289 244018 DEBUG oslo.service.loopingcall [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.290 244018 DEBUG nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.291 244018 DEBUG nova.network.neutron [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:00:21 compute-0 podman[375822]: 2026-02-25 13:00:21.294055898 +0000 UTC m=+0.397215816 container remove 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:00:21 compute-0 systemd[1]: libpod-conmon-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope: Deactivated successfully.
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:00:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:00:21 compute-0 podman[375863]: 2026-02-25 13:00:21.501571412 +0000 UTC m=+0.059318205 container create a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:00:21 compute-0 systemd[1]: Started libpod-conmon-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope.
Feb 25 13:00:21 compute-0 podman[375863]: 2026-02-25 13:00:21.470368091 +0000 UTC m=+0.028114934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:21 compute-0 podman[375863]: 2026-02-25 13:00:21.616891115 +0000 UTC m=+0.174637928 container init a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:00:21 compute-0 podman[375863]: 2026-02-25 13:00:21.630451057 +0000 UTC m=+0.188197860 container start a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:00:21 compute-0 podman[375863]: 2026-02-25 13:00:21.635294334 +0000 UTC m=+0.193041137 container attach a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.775 244018 DEBUG nova.network.neutron [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.795 244018 INFO nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 0.50 seconds to deallocate network for instance.
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.842 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.843 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.846 244018 DEBUG nova.compute.manager [req-ca1b2299-21d4-40e7-966b-37b12b17162e req-e026ea22-2325-4378-a373-9537c6283bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-deleted-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.908 244018 DEBUG oslo_concurrency.processutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.989 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:21 compute-0 nova_compute[244014]: 2026-02-25 13:00:21.990 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.012 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.014 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.014 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.086 244018 DEBUG nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:22 compute-0 jovial_proskuriakova[375880]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:00:22 compute-0 jovial_proskuriakova[375880]: --> All data devices are unavailable
Feb 25 13:00:22 compute-0 systemd[1]: libpod-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope: Deactivated successfully.
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.138 244018 INFO nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] instance snapshotting
Feb 25 13:00:22 compute-0 podman[375920]: 2026-02-25 13:00:22.162252418 +0000 UTC m=+0.024880743 container died a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 13:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293-merged.mount: Deactivated successfully.
Feb 25 13:00:22 compute-0 podman[375920]: 2026-02-25 13:00:22.257076362 +0000 UTC m=+0.119704637 container remove a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 25 13:00:22 compute-0 systemd[1]: libpod-conmon-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope: Deactivated successfully.
Feb 25 13:00:22 compute-0 sudo[375764]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.350 244018 INFO nova.virt.libvirt.driver [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Beginning live snapshot process
Feb 25 13:00:22 compute-0 sudo[375936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:00:22 compute-0 sudo[375936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:22 compute-0 sudo[375936]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:22 compute-0 sudo[375961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:00:22 compute-0 sudo[375961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.484 244018 DEBUG nova.virt.libvirt.imagebackend [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 13:00:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626962576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.512 244018 DEBUG oslo_concurrency.processutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
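
The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.604s" pair above is oslo.concurrency's processutils wrapper, through which Nova's RBD storage backend shells out to poll pool capacity. A minimal sketch of the same call, assuming a reachable cluster and the client.openstack identity shown in the log:

    import json

    from oslo_concurrency import processutils

    # Mirrors the logged command; raises ProcessExecutionError if the
    # exit code is non-zero.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], 'bytes in the cluster')
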
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.518 244018 DEBUG nova.compute.provider_tree [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.540 244018 DEBUG nova.scheduler.client.report [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
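The inventory dict in the preceding line is what Nova reports to Placement: for each resource class, the schedulable capacity is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the exact values from the log line above (effective() is a hypothetical helper, not Nova code):

    # Sketch only: how the Placement inventory dict above translates into
    # schedulable capacity per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    def effective(rc):
        inv = inventory[rc]
        return (inv['total'] - inv['reserved']) * inv['allocation_ratio']

    for rc in inventory:
        print(rc, effective(rc))   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2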
Feb 25 13:00:22 compute-0 ceph-mon[76335]: pgmap v2393: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 25 13:00:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/626962576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.559 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.584 244018 INFO nova.scheduler.client.report [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance fc7d7f86-eb7c-476c-840e-98c97329de34
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.640 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:22.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.662 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(0548b3c2c0a14dbf963fcd1dadaa89d0) on rbd image(8d338640-2b5f-4571-8f76-b523064ee129_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.758794585 +0000 UTC m=+0.055337622 container create b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:00:22 compute-0 systemd[1]: Started libpod-conmon-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope.
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.732619237 +0000 UTC m=+0.029162294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.853644681 +0000 UTC m=+0.150187768 container init b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 13:00:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.861665137 +0000 UTC m=+0.158208174 container start b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:00:22 compute-0 romantic_dubinsky[376068]: 167 167
Feb 25 13:00:22 compute-0 systemd[1]: libpod-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope: Deactivated successfully.
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.87419077 +0000 UTC m=+0.170733887 container attach b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.875013194 +0000 UTC m=+0.171556261 container died b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18761e552bef6bf6bf4fec2bd65caed3de9b2dcbe435fdb036a4e852829d2c2-merged.mount: Deactivated successfully.
Feb 25 13:00:22 compute-0 podman[376051]: 2026-02-25 13:00:22.93478429 +0000 UTC m=+0.231327347 container remove b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:00:22 compute-0 systemd[1]: libpod-conmon-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope: Deactivated successfully.
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.946 244018 DEBUG nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.948 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.948 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.949 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.949 244018 DEBUG nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:22 compute-0 nova_compute[244014]: 2026-02-25 13:00:22.950 244018 WARNING nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received unexpected event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with vm_state deleted and task_state None.
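The lockutils/pop_instance_event lines above show Nova's external-event latch at work: network-vif-plugged events from Neutron are matched against waiters registered per (instance, event name), and an event that finds no waiter, here because the instance has vm_state deleted, is logged as unexpected and dropped. A minimal sketch of the pattern, assuming nothing beyond the standard library (this InstanceEvents is a toy illustration, not Nova's class):

    # Toy sketch of the per-instance event latch described by the log lines
    # above: waiters register an Event keyed by (instance_uuid, event_name);
    # a pop with no registered waiter is the "unexpected event" warning path.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}              # (uuid, name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev is None:
                return False               # no waiting events found
            ev.set()                       # wake the waiter
            return True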
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.114844629 +0000 UTC m=+0.061268959 container create 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:00:23 compute-0 systemd[1]: Started libpod-conmon-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope.
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.086684405 +0000 UTC m=+0.033108785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.210491507 +0000 UTC m=+0.156915917 container init 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.221444426 +0000 UTC m=+0.167868756 container start 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.224513833 +0000 UTC m=+0.170938193 container attach 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 13:00:23 compute-0 nervous_yonath[376110]: {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     "0": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "devices": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "/dev/loop3"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             ],
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_name": "ceph_lv0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_size": "21470642176",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "name": "ceph_lv0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "tags": {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_name": "ceph",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.crush_device_class": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.encrypted": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.objectstore": "bluestore",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_id": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.vdo": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.with_tpm": "0"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             },
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "vg_name": "ceph_vg0"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         }
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     ],
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     "1": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "devices": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "/dev/loop4"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             ],
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_name": "ceph_lv1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_size": "21470642176",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "name": "ceph_lv1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "tags": {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_name": "ceph",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.crush_device_class": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.encrypted": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.objectstore": "bluestore",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_id": "1",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.vdo": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.with_tpm": "0"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             },
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "vg_name": "ceph_vg1"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         }
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     ],
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     "2": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "devices": [
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "/dev/loop5"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             ],
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_name": "ceph_lv2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_size": "21470642176",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "name": "ceph_lv2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "tags": {
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.cluster_name": "ceph",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.crush_device_class": "",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.encrypted": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.objectstore": "bluestore",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osd_id": "2",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.vdo": "0",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:                 "ceph.with_tpm": "0"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             },
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "type": "block",
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:             "vg_name": "ceph_vg2"
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:         }
Feb 25 13:00:23 compute-0 nervous_yonath[376110]:     ]
Feb 25 13:00:23 compute-0 nervous_yonath[376110]: }
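The JSON block emitted by container nervous_yonath is the output of the cephadm-wrapped `ceph-volume lvm list --format json` command launched at 13:00:22: a map from OSD id to a list of LV records carrying the ceph.* tags. A sketch of how such a payload can be consumed, assuming `payload` holds the JSON text captured above:

    # Sketch only: reduce the `ceph-volume lvm list --format json` payload
    # shown above to {osd_id: (lv_path, osd_fsid)} for the block devices.
    import json

    def osd_map(payload: str) -> dict:
        listing = json.loads(payload)
        out = {}
        for osd_id, lvs in listing.items():
            for lv in lvs:
                if lv.get('type') == 'block':
                    out[int(osd_id)] = (lv['lv_path'], lv['tags']['ceph.osd_fsid'])
        return out

    # For the log above this yields:
    # {0: ('/dev/ceph_vg0/ceph_lv0', 'd19afe3c-7923-4776-bcc2-88886150b441'),
    #  1: ('/dev/ceph_vg1/ceph_lv1', 'a25b4fc6-1504-44d3-aca7-62c5ef316350'),
    #  2: ('/dev/ceph_vg2/ceph_lv2', 'f84d59d3-cae3-44c8-8bca-9fa4643cfc60')}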
Feb 25 13:00:23 compute-0 systemd[1]: libpod-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope: Deactivated successfully.
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.540508806 +0000 UTC m=+0.486933136 container died 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:00:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb 25 13:00:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb 25 13:00:23 compute-0 ceph-mon[76335]: pgmap v2394: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Feb 25 13:00:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb 25 13:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f-merged.mount: Deactivated successfully.
Feb 25 13:00:23 compute-0 podman[376094]: 2026-02-25 13:00:23.668329482 +0000 UTC m=+0.614753842 container remove 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:00:23 compute-0 systemd[1]: libpod-conmon-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope: Deactivated successfully.
Feb 25 13:00:23 compute-0 nova_compute[244014]: 2026-02-25 13:00:23.683 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning vms/8d338640-2b5f-4571-8f76-b523064ee129_disk@0548b3c2c0a14dbf963fcd1dadaa89d0 to images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 13:00:23 compute-0 sudo[375961]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:23 compute-0 sudo[376163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:00:23 compute-0 sudo[376163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:23 compute-0 sudo[376163]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:23 compute-0 nova_compute[244014]: 2026-02-25 13:00:23.840 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] flattening images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 13:00:23 compute-0 sudo[376194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:00:23 compute-0 sudo[376194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.229348388 +0000 UTC m=+0.081064768 container create ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.193487116 +0000 UTC m=+0.045203546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:24 compute-0 systemd[1]: Started libpod-conmon-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope.
Feb 25 13:00:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.514834281 +0000 UTC m=+0.366550671 container init ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.522319042 +0000 UTC m=+0.374035422 container start ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:00:24 compute-0 pedantic_gagarin[376265]: 167 167
Feb 25 13:00:24 compute-0 systemd[1]: libpod-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope: Deactivated successfully.
Feb 25 13:00:24 compute-0 ceph-mon[76335]: osdmap e279: 3 total, 3 up, 3 in
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.701463755 +0000 UTC m=+0.553180105 container attach ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.70232424 +0000 UTC m=+0.554040610 container died ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea83933d114ee582d4f9ade2ec8c154f26eba861f13f9c07a8afec710fdfd25b-merged.mount: Deactivated successfully.
Feb 25 13:00:24 compute-0 podman[376248]: 2026-02-25 13:00:24.811928361 +0000 UTC m=+0.663644721 container remove ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:00:24 compute-0 systemd[1]: libpod-conmon-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope: Deactivated successfully.
Feb 25 13:00:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 458 KiB/s wr, 110 op/s
Feb 25 13:00:24 compute-0 nova_compute[244014]: 2026-02-25 13:00:24.862 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] removing snapshot(0548b3c2c0a14dbf963fcd1dadaa89d0) on rbd image(8d338640-2b5f-4571-8f76-b523064ee129_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 13:00:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:00:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.6 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.81 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3752 writes, 14K keys, 3752 commit groups, 1.0 writes per commit group, ingest: 17.80 MB, 0.03 MB/s
                                           Interval WAL: 3752 writes, 1485 syncs, 2.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
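The interval WAL counters in the RocksDB dump above are self-consistent: 3752 writes over 1485 syncs gives the 2.53 writes per sync the dump prints. As a one-line check:

    # Consistency check on the interval WAL counters printed above.
    writes, syncs = 3752, 1485
    print(round(writes / syncs, 2))   # 2.53, matching "2.53 writes per sync"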
Feb 25 13:00:25 compute-0 podman[376308]: 2026-02-25 13:00:25.027443291 +0000 UTC m=+0.072098525 container create a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 13:00:25 compute-0 podman[376308]: 2026-02-25 13:00:24.979880329 +0000 UTC m=+0.024535563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:00:25 compute-0 systemd[1]: Started libpod-conmon-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope.
Feb 25 13:00:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:25 compute-0 podman[376308]: 2026-02-25 13:00:25.236630392 +0000 UTC m=+0.281285616 container init a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:00:25 compute-0 podman[376308]: 2026-02-25 13:00:25.245499622 +0000 UTC m=+0.290154846 container start a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:00:25 compute-0 podman[376308]: 2026-02-25 13:00:25.252644593 +0000 UTC m=+0.297299797 container attach a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:00:25 compute-0 nova_compute[244014]: 2026-02-25 13:00:25.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:25 compute-0 nova_compute[244014]: 2026-02-25 13:00:25.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb 25 13:00:25 compute-0 ceph-mon[76335]: pgmap v2396: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 458 KiB/s wr, 110 op/s
Feb 25 13:00:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb 25 13:00:25 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb 25 13:00:25 compute-0 nova_compute[244014]: 2026-02-25 13:00:25.796 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(snap) on rbd image(7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 13:00:25 compute-0 ovn_controller[147040]: 2026-02-25T13:00:25Z|01546|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=0)
Feb 25 13:00:25 compute-0 nova_compute[244014]: 2026-02-25 13:00:25.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:25 compute-0 lvm[376424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:00:25 compute-0 lvm[376423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:00:25 compute-0 lvm[376420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:00:25 compute-0 lvm[376420]: VG ceph_vg0 finished
Feb 25 13:00:25 compute-0 lvm[376424]: VG ceph_vg2 finished
Feb 25 13:00:25 compute-0 lvm[376423]: VG ceph_vg1 finished
Feb 25 13:00:26 compute-0 brave_heyrovsky[376325]: {}
Feb 25 13:00:26 compute-0 systemd[1]: libpod-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Deactivated successfully.
Feb 25 13:00:26 compute-0 systemd[1]: libpod-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Consumed 1.122s CPU time.
Feb 25 13:00:26 compute-0 podman[376308]: 2026-02-25 13:00:26.084087836 +0000 UTC m=+1.128743070 container died a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 13:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5-merged.mount: Deactivated successfully.
Feb 25 13:00:26 compute-0 podman[376308]: 2026-02-25 13:00:26.140969181 +0000 UTC m=+1.185624405 container remove a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:00:26 compute-0 systemd[1]: libpod-conmon-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Deactivated successfully.
Feb 25 13:00:26 compute-0 sudo[376194]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:00:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:00:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:26 compute-0 sudo[376441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:00:26 compute-0 sudo[376441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:00:26 compute-0 sudo[376441]: pam_unix(sudo:session): session closed for user root
Feb 25 13:00:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb 25 13:00:26 compute-0 ceph-mon[76335]: osdmap e280: 3 total, 3 up, 3 in
Feb 25 13:00:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:00:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb 25 13:00:26 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb 25 13:00:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 64 KiB/s wr, 66 op/s
Feb 25 13:00:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:00:27 compute-0 ceph-mon[76335]: osdmap e281: 3 total, 3 up, 3 in
Feb 25 13:00:27 compute-0 ceph-mon[76335]: pgmap v2399: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 64 KiB/s wr, 66 op/s
Feb 25 13:00:28 compute-0 nova_compute[244014]: 2026-02-25 13:00:28.016 244018 INFO nova.virt.libvirt.driver [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Snapshot image upload complete
Feb 25 13:00:28 compute-0 nova_compute[244014]: 2026-02-25 13:00:28.017 244018 INFO nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 5.88 seconds to snapshot the instance on the hypervisor.
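The nova.storage.rbd_utils lines between 13:00:22 and 13:00:25 trace the storage side of this live snapshot end to end: create snapshot 0548b3c2c0a14dbf963fcd1dadaa89d0 on vms/8d338640-2b5f-4571-8f76-b523064ee129_disk, clone it into images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170, flatten the clone, remove the source snapshot, then snap the uploaded image as 'snap'. A sketch of the same sequence with the python-rbd bindings; the conffile path and the client.openstack id are assumptions carried over from the `ceph df` command logged at 13:00:22, and this is an illustration, not Nova's actual code:

    # Sketch of the snapshot -> clone -> flatten -> cleanup sequence the log
    # records, using the python-rados/python-rbd bindings. Pool, image and
    # snapshot names are taken from the log; connection details are assumed.
    import rados, rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        src  = '8d338640-2b5f-4571-8f76-b523064ee129_disk'
        snap = '0548b3c2c0a14dbf963fcd1dadaa89d0'
        dst  = '7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170'

        with rbd.Image(vms, src) as img:
            img.create_snap(snap)
            img.protect_snap(snap)           # clones need a protected snapshot
        rbd.RBD().clone(vms, src, snap, images, dst)
        with rbd.Image(images, dst) as clone:
            clone.flatten()                  # detach the clone from its parent
        with rbd.Image(vms, src) as img:
            img.unprotect_snap(snap)
            img.remove_snap(snap)            # drop the temporary source snapshot
        with rbd.Image(images, dst) as clone:
            clone.create_snap('snap')        # the final snap seen at 13:00:25
        vms.close()
        images.close()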
Feb 25 13:00:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Feb 25 13:00:28 compute-0 nova_compute[244014]: 2026-02-25 13:00:28.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:29 compute-0 ceph-mon[76335]: pgmap v2400: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Feb 25 13:00:29 compute-0 nova_compute[244014]: 2026-02-25 13:00:29.949 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024414.947953, b0014972-9433-4648-951c-bd9a210b6a69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:29 compute-0 nova_compute[244014]: 2026-02-25 13:00:29.949 244018 INFO nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Stopped (Lifecycle Event)
Feb 25 13:00:29 compute-0 nova_compute[244014]: 2026-02-25 13:00:29.983 244018 DEBUG nova.compute.manager [None req-bc5bbe2d-2591-4c66-b566-dddcb6bb4606 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:30 compute-0 nova_compute[244014]: 2026-02-25 13:00:30.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:30 compute-0 nova_compute[244014]: 2026-02-25 13:00:30.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 126 op/s
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:00:31
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'backups']
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
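At 13:00:31 the mgr balancer evaluated an upmap plan across all eleven pools and prepared 0 of a possible 10 changes, meaning the PG distribution already sits within the 0.05 max-misplaced target. A sketch of checking the balancer state from the command line via subprocess; "ceph balancer status" is the standard query, and the --id/--conf flags mirror the ones nova uses elsewhere in this log:

import json, subprocess

out = subprocess.check_output(
    ["ceph", "balancer", "status", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
status = json.loads(out)
# On this cluster one would expect active=true and mode "upmap".
print(status.get("active"), status.get("mode"))
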
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:00:32 compute-0 ceph-mon[76335]: pgmap v2401: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 126 op/s
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
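The rbd_support module reloads its mirror-snapshot and trash-purge schedules per RBD pool; the empty start_after just means the listing starts from the beginning, and each pool appears twice because both handlers load independently. A sketch of listing those schedules with the stock rbd CLI (flags as in current Ceph releases; treat the exact options as an assumption):

import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    # Lists mirror-snapshot schedules for the pool, i.e. the state the
    # MirrorSnapshotScheduleHandler above just reloaded.
    res = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", pool, "--recursive"],
        capture_output=True, text=True)
    print(pool, res.stdout.strip() or "<none>")
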
Feb 25 13:00:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Feb 25 13:00:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb 25 13:00:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb 25 13:00:33 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb 25 13:00:33 compute-0 nova_compute[244014]: 2026-02-25 13:00:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:33 compute-0 nova_compute[244014]: 2026-02-25 13:00:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:33 compute-0 nova_compute[244014]: 2026-02-25 13:00:33.927 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.002 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.002 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
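The Acquiring/acquired/released triples around the build come from oslo.concurrency: nova serializes _do_build_and_run_instance on the instance UUID and resource claims on a process-wide "compute_resources" lock, and the library itself emits the DEBUG lines with the waited/held timings. A minimal sketch of the same two primitives (function bodies are placeholders):

from oslo_concurrency import lockutils

# Decorator form: one named lock per call site, as with "compute_resources".
@lockutils.synchronized("compute_resources")
def instance_claim():
    pass  # critical section

# Context-manager form, keyed per instance UUID as in the build path above.
with lockutils.lock("0e4f3bd8-9df3-4329-afdc-27d91b98810d"):
    pass  # build the instance
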
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.012 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.012 244018 INFO nova.compute.claims [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:00:34 compute-0 ceph-mon[76335]: pgmap v2402: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Feb 25 13:00:34 compute-0 ceph-mon[76335]: osdmap e282: 3 total, 3 up, 3 in
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.144 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440491008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.675 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
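nova shells out for Ceph pool statistics rather than using the bindings here; the "Running cmd" / "returned: 0 in 0.531s" pair is emitted by oslo.concurrency's processutils wrapper. The equivalent direct call, sketched:

from oslo_concurrency import processutils

# Same command as logged above; returns (stdout, stderr) and raises
# ProcessExecutionError on a non-zero exit code.
stdout, stderr = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
print(stdout[:120])
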
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.682 244018 DEBUG nova.compute.provider_tree [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.714 244018 DEBUG nova.scheduler.client.report [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
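The inventory dictionary above is what this node reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio, so the claim for the new instance was checked against 32 schedulable vCPUs, 7167 MB of RAM and 52.2 GB of disk. The arithmetic, spelled out:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inventory.items():
    capacity = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(rc, round(capacity, 1))  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
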
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.744 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.746 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.824 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.825 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.843 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:00:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.8 MiB/s wr, 127 op/s
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.874 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.981 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.983 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:00:34 compute-0 nova_compute[244014]: 2026-02-25 13:00:34.984 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating image(s)
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.013 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.036 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3440491008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.065 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.068 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "fa283b84f237165ed51c00dab3d16371bfe8f671" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.069 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "fa283b84f237165ed51c00dab3d16371bfe8f671" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.295 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.362 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.363 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170@snap to None/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
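The clone line is the heart of Ceph-backed image handling: instead of downloading the Glance image, nova makes a copy-on-write RBD clone of its protected snapshot (the "None" in the message is just an unset pool attribute in the log formatting; the child image lands in nova's configured pool, vms here). A sketch of the same clone through the python rbd bindings; the client id and pool names match this log, everything else is an assumption:

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    src = cluster.open_ioctx("images")   # parent: Glance pool
    dst = cluster.open_ioctx("vms")      # child: nova ephemeral pool
    # Copy-on-write clone of the protected image snapshot, as logged above.
    rbd.RBD().clone(src, "7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170", "snap",
                    dst, "0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk")
    src.close()
    dst.close()
finally:
    cluster.shutdown()
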
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.420 244018 DEBUG nova.policy [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7592542cdf7f423c86332695423dbe79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.473 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "fa283b84f237165ed51c00dab3d16371bfe8f671" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.527 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024420.4781866, fc7d7f86-eb7c-476c-840e-98c97329de34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.528 244018 INFO nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Stopped (Lifecycle Event)
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.586 244018 DEBUG nova.compute.manager [None req-87618ec9-fd58-470b-88f3-4cee70c77b38 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.649 244018 DEBUG nova.objects.instance [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'migration_context' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.673 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.674 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Ensure instance console log exists: /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:35 compute-0 nova_compute[244014]: 2026-02-25 13:00:35.676 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:36 compute-0 ceph-mon[76335]: pgmap v2404: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.8 MiB/s wr, 127 op/s
Feb 25 13:00:36 compute-0 nova_compute[244014]: 2026-02-25 13:00:36.158 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Successfully created port: 4f9273d8-a479-491a-bbac-087a11ac1a08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:00:36 compute-0 nova_compute[244014]: 2026-02-25 13:00:36.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.641 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Successfully updated port: 4f9273d8-a479-491a-bbac-087a11ac1a08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
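Port 4f9273d8 is created minimally and then updated once the binding details are known; the network-changed event a few lines below is neutron notifying nova of that update. For comparison, a direct-API analogue of _create_port_minimal using openstacksdk; the cloud name is hypothetical and the network UUID is taken from the info-cache dump further below:

import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry
port = conn.network.create_port(
    network_id="4273798e-5f22-4d98-8d00-a22d1ea2c776",
    device_id="0e4f3bd8-9df3-4329-afdc-27d91b98810d",
    device_owner="compute:nova")
print(port.id, port.status)
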
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.662 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.663 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.663 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG nova.compute.manager [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG nova.compute.manager [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.832 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:00:37 compute-0 nova_compute[244014]: 2026-02-25 13:00:37.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 13:00:38 compute-0 ceph-mon[76335]: pgmap v2405: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Feb 25 13:00:38 compute-0 nova_compute[244014]: 2026-02-25 13:00:38.497 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:38 compute-0 nova_compute[244014]: 2026-02-25 13:00:38.497 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:38 compute-0 nova_compute[244014]: 2026-02-25 13:00:38.498 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 13:00:38 compute-0 nova_compute[244014]: 2026-02-25 13:00:38.498 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.796824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438796851, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1818, "num_deletes": 257, "total_data_size": 2889250, "memory_usage": 2934024, "flush_reason": "Manual Compaction"}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438851291, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2837246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49673, "largest_seqno": 51490, "table_properties": {"data_size": 2828935, "index_size": 5061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17301, "raw_average_key_size": 20, "raw_value_size": 2812148, "raw_average_value_size": 3251, "num_data_blocks": 225, "num_entries": 865, "num_filter_entries": 865, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024254, "oldest_key_time": 1772024254, "file_creation_time": 1772024438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 54550 microseconds, and 4439 cpu microseconds.
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.851365) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2837246 bytes OK
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.851391) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859757) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859782) EVENT_LOG_v1 {"time_micros": 1772024438859774, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859804) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2881427, prev total WAL file size 2881427, number of live WAL files 2.
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.860712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303134' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2770KB)], [116(7952KB)]
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438860754, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10981091, "oldest_snapshot_seqno": -1}
Feb 25 13:00:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7305 keys, 10857741 bytes, temperature: kUnknown
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438935816, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10857741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10808436, "index_size": 29929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 189626, "raw_average_key_size": 25, "raw_value_size": 10677768, "raw_average_value_size": 1461, "num_data_blocks": 1178, "num_entries": 7305, "num_filter_entries": 7305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.936230) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10857741 bytes
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.937510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.1 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.8) OK, records in: 7835, records dropped: 530 output_compression: NoCompression
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.937611) EVENT_LOG_v1 {"time_micros": 1772024438937595, "job": 70, "event": "compaction_finished", "compaction_time_micros": 75161, "compaction_time_cpu_micros": 27607, "output_level": 6, "num_output_files": 1, "total_output_size": 10857741, "num_input_records": 7835, "num_output_records": 7305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
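The compaction summary is internally consistent: JOB 70 merged the 2,837,246-byte L0 flush with the existing 7.8 MB L6 file into one 10,857,741-byte L6 table in 75,161 microseconds. Re-deriving RocksDB's quoted figures from those raw numbers (the definitions are inferred, but they reconcile exactly):

l0_in    = 2837246      # bytes, flushed table #118
total_in = 10981091     # bytes, input_data_size (L0 + L6)
out      = 10857741     # bytes, output table #119
usec     = 75161        # compaction_time_micros

print(round(out / l0_in, 1))               # write-amplify      -> 3.8
print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 7.7
print(round(total_in / usec, 1))           # MB/s read          -> 146.1
print(round(out / usec, 1))                # MB/s write         -> 144.5
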
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438938380, "job": 70, "event": "table_file_deletion", "file_number": 118}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438940124, "job": 70, "event": "table_file_deletion", "file_number": 116}
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.860555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:38 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:00:39 compute-0 sshd-session[376665]: Invalid user solana from 80.94.92.186 port 54000
Feb 25 13:00:39 compute-0 sshd-session[376665]: Connection closed by invalid user solana 80.94.92.186 port 54000 [preauth]
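Interleaved with the build, sshd records a password-guessing probe for the nonexistent user "solana" from 80.94.92.186 that disconnected before authentication. Harmless in isolation, but worth counting before deciding on a firewall or fail2ban rule; a sketch that tallies such probes per source address from a syslog file (the path is a placeholder):

import re
from collections import Counter

pattern = re.compile(r"Invalid user \S+ from (\d+\.\d+\.\d+\.\d+) port \d+")
hits = Counter()
with open("/var/log/messages") as fh:      # placeholder path
    for line in fh:
        m = pattern.search(line)
        if m:
            hits[m.group(1)] += 1
print(hits.most_common(5))                 # top probing source IPs
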
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.467 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.490 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.491 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance network_info: |[{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.491 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.492 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.497 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start _get_guest_xml network_info=[{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T13:00:21Z,direct_url=<?>,disk_format='raw',id=7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1341513254',owner='cf5eb89ba0424237a313b1f369bcb92b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T13:00:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': '7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.503 244018 WARNING nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.512 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.513 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.524 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.525 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
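Before generating the guest XML, the driver probes for a CPU controller first under cgroups v1 and then v2, finding it only under v2: this host runs the unified hierarchy, so CPU weight/quota support comes from there. The v2 probe amounts to reading one standard kernel interface file; a sketch in that spirit:

from pathlib import Path

# On a cgroups-v2 host the enabled controllers are listed in this file;
# the "cpu" token must be present for CPU weight/quota support.
controllers = Path("/sys/fs/cgroup/cgroup.controllers")
has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
print("cgroups v2 CPU controller:", "found" if has_cpu else "missing")
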
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.526 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.526 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T13:00:21Z,direct_url=<?>,disk_format='raw',id=7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1341513254',owner='cf5eb89ba0424237a313b1f369bcb92b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T13:00:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.527 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.528 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.529 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.529 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.530 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.530 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.531 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.532 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.532 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.533 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
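With a 1-vCPU m1.nano flavor and no flavor or image topology constraints (hence all the 0:0:0 lines above), the only factorization is 1 socket x 1 core x 1 thread, which is exactly the single topology nova reports. A simplified enumeration in the spirit of _get_possible_cpu_topologies (nova's real version also applies the flavor/image limits shown above):

def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
    # Yield every (sockets, cores, threads) triple whose product is vcpus.
    for s in range(1, min(vcpus, max_s) + 1):
        for c in range(1, min(vcpus, max_c) + 1):
            for t in range(1, min(vcpus, max_t) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))   # [(1, 1, 1)]
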
Feb 25 13:00:39 compute-0 nova_compute[244014]: 2026-02-25 13:00:39.538 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:39 compute-0 ceph-mon[76335]: pgmap v2406: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 13:00:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:00:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019647448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.071 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.106 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.111 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.511 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.540 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.540 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.542 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.566 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.568 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.568 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:00:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455738330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.634 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.637 244018 DEBUG nova.virt.libvirt.vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:34Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.638 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.639 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.641 244018 DEBUG nova.objects.instance [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.670 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <uuid>0e4f3bd8-9df3-4329-afdc-27d91b98810d</uuid>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <name>instance-00000092</name>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:name>tempest-TestSnapshotPattern-server-1321825921</nova:name>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:00:39</nova:creationTime>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:user uuid="7592542cdf7f423c86332695423dbe79">tempest-TestSnapshotPattern-1122231160-project-member</nova:user>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:project uuid="cf5eb89ba0424237a313b1f369bcb92b">tempest-TestSnapshotPattern-1122231160</nova:project>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <nova:port uuid="4f9273d8-a479-491a-bbac-087a11ac1a08">
Feb 25 13:00:40 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <system>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="serial">0e4f3bd8-9df3-4329-afdc-27d91b98810d</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="uuid">0e4f3bd8-9df3-4329-afdc-27d91b98810d</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </system>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <os>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </os>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <features>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </features>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk">
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </source>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config">
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </source>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:00:40 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:f0:48:d9"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <target dev="tap4f9273d8-a4"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/console.log" append="off"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <video>
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </video>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <input type="keyboard" bus="usb"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:00:40 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:00:40 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:00:40 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:00:40 compute-0 nova_compute[244014]: </domain>
Feb 25 13:00:40 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Preparing to wait for external event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG nova.virt.libvirt.vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:34Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.680 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.680 244018 DEBUG os_vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.681 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.682 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9273d8-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f9273d8-a4, col_values=(('external_ids', {'iface-id': '4f9273d8-a479-491a-bbac-087a11ac1a08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:48:d9', 'vm-uuid': '0e4f3bd8-9df3-4329-afdc-27d91b98810d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 NetworkManager[49836]: <info>  [1772024440.6881] manager: (tap4f9273d8-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.697 244018 INFO os_vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4')
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.762 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.763 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.763 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No VIF found with MAC fa:16:3e:f0:48:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.764 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Using config drive
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.783 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1019647448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2455738330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.983 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:40 compute-0 nova_compute[244014]: 2026-02-25 13:00:40.984 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.005 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602947456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.115 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.173 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.173 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.178 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.178 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.220 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating config drive at /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.226 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprbjh_j77 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.266 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.267 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.297 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.367 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprbjh_j77" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.392 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.396 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.430 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.431 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.439 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.439 244018 INFO nova.compute.claims [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.497 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3308MB free_disk=59.941679435782135GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.498 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.533 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.550 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.551 244018 DEBUG nova.compute.provider_tree [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.552 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.553 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deleting local config drive /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config because it was imported into RBD.
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.570 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.588 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:00:41 compute-0 kernel: tap4f9273d8-a4: entered promiscuous mode
Feb 25 13:00:41 compute-0 NetworkManager[49836]: <info>  [1772024441.5949] manager: (tap4f9273d8-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/637)
Feb 25 13:00:41 compute-0 ovn_controller[147040]: 2026-02-25T13:00:41Z|01547|binding|INFO|Claiming lport 4f9273d8-a479-491a-bbac-087a11ac1a08 for this chassis.
Feb 25 13:00:41 compute-0 ovn_controller[147040]: 2026-02-25T13:00:41Z|01548|binding|INFO|4f9273d8-a479-491a-bbac-087a11ac1a08: Claiming fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:41 compute-0 ovn_controller[147040]: 2026-02-25T13:00:41Z|01549|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 ovn-installed in OVS
Feb 25 13:00:41 compute-0 ovn_controller[147040]: 2026-02-25T13:00:41Z|01550|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 up in Southbound
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.607 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:48:d9 10.100.0.10'], port_security=['fa:16:3e:f0:48:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0e4f3bd8-9df3-4329-afdc-27d91b98810d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f9273d8-a479-491a-bbac-087a11ac1a08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.610 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f9273d8-a479-491a-bbac-087a11ac1a08 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 bound to our chassis
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.613 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.627 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[767f5ca0-8e81-4474-a041-13f338a6f040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 systemd-machined[210048]: New machine qemu-180-instance-00000092.
Feb 25 13:00:41 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.655 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f4362663-1d5c-4871-92c1-3938a270f9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.661 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9dfa8d-1fcc-458d-a3eb-7d772e73d1e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 systemd-udevd[376827]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:00:41 compute-0 NetworkManager[49836]: <info>  [1772024441.6832] device (tap4f9273d8-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:00:41 compute-0 NetworkManager[49836]: <info>  [1772024441.6840] device (tap4f9273d8-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[97ef81ac-1ac3-460d-87eb-c8e46abd9787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.694 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.715 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8013257f-c0e3-4168-8527-d0fae538d0eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376834, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0c001e-67d2-43f1-a084-849d503f9fb6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637005, 'tstamp': 637005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376839, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637008, 'tstamp': 637008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376839, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.734 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.739 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:41 compute-0 ceph-mon[76335]: pgmap v2407: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 13:00:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/602947456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.871 244018 DEBUG nova.compute.manager [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.872 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.873 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.873 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:41 compute-0 nova_compute[244014]: 2026-02-25 13:00:41.874 244018 DEBUG nova.compute.manager [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Processing event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.015 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.0145745, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.015 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Started (Lifecycle Event)
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.023 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.028 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.031 244018 INFO nova.virt.libvirt.driver [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance spawned successfully.
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.032 244018 INFO nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 7.05 seconds to spawn the instance on the hypervisor.
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.032 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.045 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.053 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.111 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.111 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.01482, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.112 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Paused (Lifecycle Event)
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.146 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.149 244018 INFO nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 8.18 seconds to build instance.
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.154 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.0263085, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.155 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Resumed (Lifecycle Event)
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.190 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.194 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:00:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098004259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.296 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.301 244018 DEBUG nova.compute.provider_tree [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.327 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.353 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.353 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.356 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.419 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.419 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.597 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.651 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8d338640-2b5f-4571-8f76-b523064ee129 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0e4f3bd8-9df3-4329-afdc-27d91b98810d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e53615dc-61ae-4f86-a246-c36739b4d2ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.736 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.771 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.773 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.774 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating image(s)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007768486318795051 of space, bias 1.0, pg target 0.23305458956385153 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00325201735596192 of space, bias 1.0, pg target 0.975605206788576 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3623253445439937e-06 of space, bias 4.0, pg target 0.0016347904134527923 quantized to 16 (current 16)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.805 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.831 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3098004259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 16 KiB/s wr, 51 op/s
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.865 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.876 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.912 244018 DEBUG nova.policy [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.970 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.992 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:42 compute-0 nova_compute[244014]: 2026-02-25 13:00:42.996 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e53615dc-61ae-4f86-a246-c36739b4d2ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.280 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e53615dc-61ae-4f86-a246-c36739b4d2ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:00:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475209737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.342 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.374 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.380 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.422 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.429 244018 DEBUG nova.objects.instance [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Ensure instance console log exists: /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.461 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.461 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.470 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.470 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.775 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Successfully created port: 614ca81e-067a-4af2-852a-f3155f0f177c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:00:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:43 compute-0 ceph-mon[76335]: pgmap v2408: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 16 KiB/s wr, 51 op/s
Feb 25 13:00:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2475209737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.974 244018 DEBUG nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] No waiting events found dispatching network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:43 compute-0 nova_compute[244014]: 2026-02-25 13:00:43.974 244018 WARNING nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received unexpected event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 for instance with vm_state active and task_state None.
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.466 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.466 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.847 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Successfully updated port: 614ca81e-067a-4af2-852a-f3155f0f177c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:00:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 15 KiB/s wr, 46 op/s
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:44 compute-0 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:00:45 compute-0 nova_compute[244014]: 2026-02-25 13:00:45.107 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:00:45 compute-0 nova_compute[244014]: 2026-02-25 13:00:45.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:45 compute-0 nova_compute[244014]: 2026-02-25 13:00:45.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.006 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:46 compute-0 ceph-mon[76335]: pgmap v2409: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 15 KiB/s wr, 46 op/s
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.068 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.069 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance network_info: |[{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.071 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start _get_guest_xml network_info=[{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.075 244018 WARNING nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.079 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.079 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.081 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.082 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.088 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.090 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:00:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415538947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.685 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
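The "ceph mon dump" subprocess above (0.596s round trip) is how nova's RBD storage driver discovers the monitor addresses it will later write into the <host> elements of the disk XML. A minimal sketch of the same query, assuming the client.openstack keyring and the /etc/ceph/ceph.conf from the log are in place; the command line is copied from the log, while the JSON field names are as found in recent Ceph releases and should be treated as an assumption:

    import json
    import subprocess

    # Reproduce the exact query nova_compute runs above.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)

    # "mons" / "public_addr" are the field names in recent Ceph releases
    # (assumption; verify against your cluster's output).
    for mon in monmap.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))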
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.708 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.713 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 13 KiB/s wr, 42 op/s
Feb 25 13:00:46 compute-0 nova_compute[244014]: 2026-02-25 13:00:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1415538947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:00:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843307968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.232 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.234 244018 DEBUG nova.virt.libvirt.vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:42Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.234 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.235 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.236 244018 DEBUG nova.objects.instance [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.254 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.254 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.277 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <uuid>e53615dc-61ae-4f86-a246-c36739b4d2ae</uuid>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <name>instance-00000093</name>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436</nova:name>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:00:46</nova:creationTime>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <nova:port uuid="614ca81e-067a-4af2-852a-f3155f0f177c">
Feb 25 13:00:47 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <system>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="serial">e53615dc-61ae-4f86-a246-c36739b4d2ae</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="uuid">e53615dc-61ae-4f86-a246-c36739b4d2ae</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </system>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <os>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </os>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <features>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </features>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e53615dc-61ae-4f86-a246-c36739b4d2ae_disk">
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </source>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config">
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </source>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:00:47 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:d5:16:22"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <target dev="tap614ca81e-06"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/console.log" append="off"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <video>
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </video>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:00:47 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:00:47 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:00:47 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:00:47 compute-0 nova_compute[244014]: </domain>
Feb 25 13:00:47 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
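The domain definition above is the complete XML that _get_guest_xml hands to libvirt: an RBD-backed virtio root disk plus a config-drive CDROM, one OVS-bound virtio interface, and the m1.nano topology chosen earlier (1 socket, 1 core, 1 thread). A short sketch for inspecting a saved copy of such a dump with Python's stdlib ElementTree, assuming it was written to /tmp/instance-00000093.xml (the path is hypothetical):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML dumped above.
    dom = ET.parse("/tmp/instance-00000093.xml").getroot()

    print(dom.findtext("name"))                  # instance-00000093
    print(dom.find("os/type").get("machine"))    # q35

    # Both disks are RBD-backed: the root disk and the config drive.
    for disk in dom.findall("devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"), tgt.get("dev"),
              src.get("protocol"), src.get("name"))

    # MAC of the single interface, matching the neutron port above.
    print(dom.find("devices/interface/mac").get("address"))  # fa:16:3e:d5:16:22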
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Preparing to wait for external event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.280 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.281 244018 DEBUG nova.virt.libvirt.vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:42Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.281 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.282 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.282 244018 DEBUG os_vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.284 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.285 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.286 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.288 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.290 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap614ca81e-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.291 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap614ca81e-06, col_values=(('external_ids', {'iface-id': '614ca81e-067a-4af2-852a-f3155f0f177c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:16:22', 'vm-uuid': 'e53615dc-61ae-4f86-a246-c36739b4d2ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
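The two ovsdbapp transactions above (AddPortCommand followed by DbSetCommand on the Interface row) are the programmatic equivalent of a single ovs-vsctl invocation: add the tap device to br-int if it is not already there, then stamp the external_ids that let ovn-controller match the OVS interface to its logical port. A sketch of the same operation through the standard CLI, with all values copied from the log (assumes ovs-vsctl is on PATH and the caller has access to the OVSDB socket):

    import subprocess

    # Equivalent of the AddPortCommand + DbSetCommand transactions above,
    # expressed as one standard ovs-vsctl call.
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap614ca81e-06",
        "--", "set", "Interface", "tap614ca81e-06",
        "external_ids:iface-id=614ca81e-067a-4af2-852a-f3155f0f177c",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d5:16:22",
        "external_ids:vm-uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae",
    ])

Once the iface-id lands in external_ids, ovn-controller claims the lport, as the binding messages a few lines below show.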
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:47 compute-0 NetworkManager[49836]: <info>  [1772024447.2939] manager: (tap614ca81e-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.301 244018 INFO os_vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06')
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.367 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.367 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.368 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:d5:16:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.369 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Using config drive
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.403 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:47 compute-0 podman[377156]: 2026-02-25 13:00:47.405045766 +0000 UTC m=+0.063272526 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:00:47 compute-0 podman[377158]: 2026-02-25 13:00:47.465283265 +0000 UTC m=+0.122767104 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 13:00:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:00:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:00:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:00:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.684 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating config drive at /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.688 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwius9tw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.823 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwius9tw" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.844 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.848 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.960 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.962 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deleting local config drive /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config because it was imported into RBD.
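[The four nova_compute lines above are the whole config-drive round trip when images are RBD-backed: build the ISO 9660 image locally with mkisofs (volume label config-2, Joliet and Rock Ridge extensions), confirm no stale RBD image exists, import it into the vms pool, then delete the local copy. Condensed into a shell sketch of the same commands (UUID parameterized; /tmp/tmpkwius9tw was a throwaway metadata directory):

    UUID=e53615dc-61ae-4f86-a246-c36739b4d2ae
    mkisofs -o /var/lib/nova/instances/$UUID/disk.config \
        -ldots -allow-lowercase -allow-multidot -l \
        -publisher "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9" \
        -quiet -J -r -V config-2 /tmp/tmpkwius9tw
    rbd import --pool vms /var/lib/nova/instances/$UUID/disk.config \
        ${UUID}_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf
    rm /var/lib/nova/instances/$UUID/disk.config  # safe once the rbd import returned 0
]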
Feb 25 13:00:47 compute-0 kernel: tap614ca81e-06: entered promiscuous mode
Feb 25 13:00:47 compute-0 ovn_controller[147040]: 2026-02-25T13:00:47Z|01551|binding|INFO|Claiming lport 614ca81e-067a-4af2-852a-f3155f0f177c for this chassis.
Feb 25 13:00:47 compute-0 ovn_controller[147040]: 2026-02-25T13:00:47Z|01552|binding|INFO|614ca81e-067a-4af2-852a-f3155f0f177c: Claiming fa:16:3e:d5:16:22 10.100.0.13
Feb 25 13:00:47 compute-0 nova_compute[244014]: 2026-02-25 13:00:47.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.0032] manager: (tap614ca81e-06): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Feb 25 13:00:48 compute-0 ovn_controller[147040]: 2026-02-25T13:00:48Z|01553|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c ovn-installed in OVS
Feb 25 13:00:48 compute-0 ovn_controller[147040]: 2026-02-25T13:00:48Z|01554|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c up in Southbound
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.003 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:16:22 10.100.0.13'], port_security=['fa:16:3e:d5:16:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e53615dc-61ae-4f86-a246-c36739b4d2ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09018a8e-2978-4907-9f5b-7472a21cdef8 7177af96-a132-4797-a5d1-f9104ca018eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6a1380-5e0c-4178-ac05-c458e549fab7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=614ca81e-067a-4af2-852a-f3155f0f177c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.004 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 614ca81e-067a-4af2-852a-f3155f0f177c in datapath c6c760d4-99a5-4930-b051-8fdaa331a4d4 bound to our chassis
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.005 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6c760d4-99a5-4930-b051-8fdaa331a4d4
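[The preceding burst is the standard OVN vif-plug sequence: the tap device goes promiscuous, ovn-controller claims the logical port for this chassis and records the fa:16:3e:d5:16:22 / 10.100.0.13 binding, marks the OVS interface ovn-installed, flips the port up in the Southbound DB, and the metadata agent's Port_Binding watcher reacts by provisioning the network's metadata path. Two hypothetical spot checks against the names above:

    ovs-vsctl get Interface tap614ca81e-06 external_ids:iface-id
    ovn-sbctl find Port_Binding logical_port=614ca81e-067a-4af2-852a-f3155f0f177c
]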
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6cc0f5-792c-449d-829c-98045038193e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.019 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6c760d4-91 in ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.021 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6c760d4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[497ba391-cb60-4b24-b50d-3e309bd9a7b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 systemd-machined[210048]: New machine qemu-181-instance-00000093.
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[691ddc7d-5e87-453b-8f6f-442dce3bd95d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.034 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[26d47f7e-7fa2-4fc9-8508-f6aedfb068c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
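[libvirt registers each running guest with systemd-machined, so the qemu process tree lands in its own machine unit; that is all these two systemd lines record. Hypothetical inspection commands:

    machinectl status qemu-181-instance-00000093
    virsh -c qemu:///system domstate instance-00000093
]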
Feb 25 13:00:48 compute-0 ceph-mon[76335]: pgmap v2410: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 13 KiB/s wr, 42 op/s
Feb 25 13:00:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/843307968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:00:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:00:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:00:48 compute-0 systemd-udevd[377275]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.055 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28779397-193d-467c-8bfa-86ce6464b221]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.0678] device (tap614ca81e-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.0686] device (tap614ca81e-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.088 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ed66c-60a4-4df3-84d5-bcc8ff07d908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05005577-b981-41b0-b0d7-6abd0ee6ee04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 systemd-udevd[377281]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.0982] manager: (tapc6c760d4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/640)
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.129 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40c80016-ae06-4ca6-a147-f97e236e8029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.133 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e1112e-6733-4db3-8121-7131bd07cc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.1554] device (tapc6c760d4-90): carrier: link connected
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.158 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4d16c7-053d-437a-916d-3d1012a2322f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.170 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3996f83a-7c43-4631-93cf-652f90ef72fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6c760d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:23:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 458], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641756, 'reachable_time': 19140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377305, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54ed1ffb-7bd6-4684-b68a-541dae8bd2a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:23d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641756, 'tstamp': 641756}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377306, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8a9549-78a9-4fef-aba9-9805ac75b264]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6c760d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:23:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 458], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641756, 'reachable_time': 19140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377307, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
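[The two large privsep replies above are raw pyroute2 RTM_NEWLINK dumps: the agent asks its privileged helper for the link state of the veth end tapc6c760d4-91 inside the ovnmeta namespace and gets back the full netlink attribute set (operstate UP, carrier 1, MTU 1500, MAC fa:16:3e:a8:23:d1), with an RTM_NEWADDR reply in between confirming the link-local address. The equivalent check from a shell would be roughly:

    ip netns exec ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 ip -d link show tapc6c760d4-91
]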
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[febfcb53-3d78-48b1-9f22-866b6b16713d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[609d643b-99f5-420a-85f6-b1b775024bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.291 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6c760d4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.291 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.292 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6c760d4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:48 compute-0 kernel: tapc6c760d4-90: entered promiscuous mode
Feb 25 13:00:48 compute-0 NetworkManager[49836]: <info>  [1772024448.2953] manager: (tapc6c760d4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.301 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6c760d4-90, col_values=(('external_ids', {'iface-id': '1b4f0006-e243-4f86-aa8d-a93d1170014d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
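[These three ovsdbapp transactions wire the host side of the metadata veth pair into OVS: the DelPortCommand against br-ex is an idempotent cleanup (it changed nothing, per the "Transaction caused no change" line), then tapc6c760d4-90 is added to br-int and its Interface record is tagged with iface-id 1b4f0006-e243-4f86-aa8d-a93d1170014d, the key ovn-controller uses to bind the logical metadata port. Hypothetical verification:

    ovs-vsctl port-to-br tapc6c760d4-90          # expected: br-int
    ovs-vsctl get Interface tapc6c760d4-90 external_ids:iface-id
]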
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:48 compute-0 ovn_controller[147040]: 2026-02-25T13:00:48Z|01555|binding|INFO|Releasing lport 1b4f0006-e243-4f86-aa8d-a93d1170014d from this chassis (sb_readonly=0)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.309 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad52d5b-6e74-4c57-aadf-c3aaa56f2816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.312 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: global
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-c6c760d4-99a5-4930-b051-8fdaa331a4d4
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID c6c760d4-99a5-4930-b051-8fdaa331a4d4
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 13:00:48 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.313 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'env', 'PROCESS_TAG=haproxy-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6c760d4-99a5-4930-b051-8fdaa331a4d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
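[The configuration dumped above runs haproxy inside the ovnmeta namespace, bound to the link-local metadata address 169.254.169.254:80; the bare-path server line is haproxy's UNIX-socket address form, so every request is relayed to the agent's socket at /var/lib/neutron/metadata_proxy with an X-OVN-Network-ID header added, which the agent uses to identify the requesting network. From a guest on this network, the classic smoke test would be (hypothetical, not from this log):

    curl http://169.254.169.254/openstack/latest/meta_data.json
]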
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.3756309, e53615dc-61ae-4f86-a246-c36739b4d2ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.376 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Started (Lifecycle Event)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.397 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.401 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.3758302, e53615dc-61ae-4f86-a246-c36739b4d2ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.402 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Paused (Lifecycle Event)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.443 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.446 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
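[The numeric states here are Nova's power-state enum: 0 is NOSTATE (no record yet), 1 RUNNING, 3 PAUSED. Nova's libvirt driver starts the guest paused and resumes it once vif plugging completes, so a Paused lifecycle event mid-spawn is expected, and sync_power_state skips it a few lines down because task_state is still spawning. A hypothetical spot check at this instant:

    virsh -c qemu:///system domstate instance-00000093   # paused, until the resume a few lines down
]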
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG nova.compute.manager [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.460 244018 DEBUG nova.compute.manager [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Processing event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.460 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.472 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.474 244018 INFO nova.virt.libvirt.driver [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance spawned successfully.
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.475 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.465376, e53615dc-61ae-4f86-a246-c36739b4d2ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Resumed (Lifecycle Event)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.493 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.493 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.501 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.575 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.615 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.622 244018 INFO nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 5.85 seconds to spawn the instance on the hypervisor.
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.623 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:00:48 compute-0 podman[377381]: 2026-02-25 13:00:48.657672391 +0000 UTC m=+0.060847167 container create 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.680 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.681 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:48 compute-0 systemd[1]: Started libpod-conmon-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope.
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.712 244018 INFO nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 7.36 seconds to build instance.
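[Timing summary from the two INFO lines: 5.85 s of the 7.36 s total build was hypervisor spawn; the remaining ~1.5 s went to pre-spawn work such as resource claims and network allocation. The API-side view of the finished instance (hypothetical invocation):

    openstack server show e53615dc-61ae-4f86-a246-c36739b4d2ae -c status -c 'OS-EXT-STS:power_state'
]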
Feb 25 13:00:48 compute-0 podman[377381]: 2026-02-25 13:00:48.627123469 +0000 UTC m=+0.030298325 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.732 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:00:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3d2482bb7304f1dfd8f2b101c4a80fae58fc8d8a42e192363e9767cd83f9d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 13:00:48 compute-0 nova_compute[244014]: 2026-02-25 13:00:48.743 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:48 compute-0 podman[377381]: 2026-02-25 13:00:48.763831256 +0000 UTC m=+0.167006112 container init 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:00:48 compute-0 podman[377381]: 2026-02-25 13:00:48.767938972 +0000 UTC m=+0.171113778 container start 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:00:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:48 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : New worker (377402) forked
Feb 25 13:00:48 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : Loading success.
Feb 25 13:00:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Feb 25 13:00:49 compute-0 nova_compute[244014]: 2026-02-25 13:00:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:49 compute-0 nova_compute[244014]: 2026-02-25 13:00:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:49 compute-0 nova_compute[244014]: 2026-02-25 13:00:49.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:00:50 compute-0 ceph-mon[76335]: pgmap v2411: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.652 244018 DEBUG nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.654 244018 DEBUG nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:00:50 compute-0 nova_compute[244014]: 2026-02-25 13:00:50.654 244018 WARNING nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received unexpected event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with vm_state active and task_state None.
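[This WARNING is benign: the first network-vif-plugged event was consumed by the spawn-time waiter at 13:00:48.460, so when Neutron emits a second copy after the instance is already active there is no registered waiter left and Nova can only log it as unexpected. The instance's event history could be reviewed with (hypothetical):

    openstack server event list e53615dc-61ae-4f86-a246-c36739b4d2ae
]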
Feb 25 13:00:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 13:00:51 compute-0 nova_compute[244014]: 2026-02-25 13:00:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:00:52 compute-0 ceph-mon[76335]: pgmap v2412: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 13:00:52 compute-0 nova_compute[244014]: 2026-02-25 13:00:52.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 178 op/s
Feb 25 13:00:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:53 compute-0 ovn_controller[147040]: 2026-02-25T13:00:53Z|00196|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Feb 25 13:00:53 compute-0 ovn_controller[147040]: 2026-02-25T13:00:53Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:48:d9 10.100.0.10
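[A DHCP renewal conflict, not a fault in this boot: the guest at fa:16:3e:f0:48:d9 (port 4f9273d8-…, on the TestSnapshotPattern network seen earlier, so likely a server recreated from a snapshot) keeps REQUESTing its remembered lease 10.100.0.9 while its port binding says 10.100.0.10. OVN's native DHCP responder NAKs each attempt until the client falls back to DISCOVER, after which the OFFER/ACK for 10.100.0.10 at 13:00:58 settle it. The configured options could be inspected with (hypothetical):

    ovn-nbctl list DHCP_Options
    ovn-nbctl list Logical_Switch_Port 4f9273d8-a479-491a-bbac-087a11ac1a08
]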
Feb 25 13:00:54 compute-0 ceph-mon[76335]: pgmap v2413: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 178 op/s
Feb 25 13:00:54 compute-0 nova_compute[244014]: 2026-02-25 13:00:54.104 244018 DEBUG nova.compute.manager [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:00:54 compute-0 nova_compute[244014]: 2026-02-25 13:00:54.105 244018 DEBUG nova.compute.manager [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:00:54 compute-0 nova_compute[244014]: 2026-02-25 13:00:54.106 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:00:54 compute-0 nova_compute[244014]: 2026-02-25 13:00:54.106 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:00:54 compute-0 nova_compute[244014]: 2026-02-25 13:00:54.107 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:00:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 13:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.037 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.038 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:00:55 compute-0 nova_compute[244014]: 2026-02-25 13:00:55.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:55 compute-0 nova_compute[244014]: 2026-02-25 13:00:55.811 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:00:55 compute-0 nova_compute[244014]: 2026-02-25 13:00:55.812 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:00:55 compute-0 nova_compute[244014]: 2026-02-25 13:00:55.861 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
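[The refreshed cache entry now pins port 614ca81e-… to fixed IP 10.100.0.13 with floating IP 192.168.122.173 on the 10.100.0.0/28 subnet, MTU 1442 over the tunnel. The same facts from the API side (hypothetical invocations):

    openstack port show 614ca81e-067a-4af2-852a-f3155f0f177c -c fixed_ips -c status
    openstack floating ip list --port 614ca81e-067a-4af2-852a-f3155f0f177c
]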
Feb 25 13:00:56 compute-0 ceph-mon[76335]: pgmap v2414: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 13:00:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 13:00:57 compute-0 nova_compute[244014]: 2026-02-25 13:00:57.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:00:57 compute-0 ovn_controller[147040]: 2026-02-25T13:00:57Z|00198|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Feb 25 13:00:57 compute-0 ovn_controller[147040]: 2026-02-25T13:00:57Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 13:00:58 compute-0 ceph-mon[76335]: pgmap v2415: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 13:00:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:00:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.3 MiB/s wr, 213 op/s
Feb 25 13:00:58 compute-0 ovn_controller[147040]: 2026-02-25T13:00:58Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 13:00:58 compute-0 ovn_controller[147040]: 2026-02-25T13:00:58Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:48:d9 10.100.0.10
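
The exchange above shows OVN's native DHCP responder NAKing a DHCPREQUEST whose requested address (10.100.0.9) does not match the offered lease (10.100.0.10); the client then restarts the handshake and receives a fresh OFFER/ACK for 10.100.0.10. A toy illustration of that decision rule (a simplification, not OVN's actual pinctrl implementation):

    def dhcp_respond(requested_ip: str, offered_ip: str) -> str:
        # NAK a REQUEST for an address other than the offered lease,
        # forcing the client back to DISCOVER; otherwise ACK.
        return "DHCPNAK" if requested_ip != offered_ip else "DHCPACK"

    assert dhcp_respond("10.100.0.9", "10.100.0.10") == "DHCPNAK"
    assert dhcp_respond("10.100.0.10", "10.100.0.10") == "DHCPACK"
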
Feb 25 13:01:00 compute-0 ovn_controller[147040]: 2026-02-25T13:01:00Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:16:22 10.100.0.13
Feb 25 13:01:00 compute-0 ovn_controller[147040]: 2026-02-25T13:01:00Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:16:22 10.100.0.13
Feb 25 13:01:00 compute-0 ceph-mon[76335]: pgmap v2416: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.3 MiB/s wr, 213 op/s
Feb 25 13:01:00 compute-0 nova_compute[244014]: 2026-02-25 13:01:00.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 125 op/s
Feb 25 13:01:01 compute-0 CROND[377412]: (root) CMD (run-parts /etc/cron.hourly)
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:01 compute-0 run-parts[377415]: (/etc/cron.hourly) starting 0anacron
Feb 25 13:01:01 compute-0 run-parts[377421]: (/etc/cron.hourly) finished 0anacron
Feb 25 13:01:01 compute-0 CROND[377411]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 25 13:01:02 compute-0 nova_compute[244014]: 2026-02-25 13:01:02.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:02 compute-0 ceph-mon[76335]: pgmap v2417: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 125 op/s
Feb 25 13:01:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 191 op/s
Feb 25 13:01:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:04 compute-0 ceph-mon[76335]: pgmap v2418: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 191 op/s
Feb 25 13:01:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 13:01:05 compute-0 nova_compute[244014]: 2026-02-25 13:01:05.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:06 compute-0 ceph-mon[76335]: pgmap v2419: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 13:01:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 13:01:07 compute-0 nova_compute[244014]: 2026-02-25 13:01:07.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:08 compute-0 ceph-mon[76335]: pgmap v2420: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 13:01:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 25 13:01:10 compute-0 ceph-mon[76335]: pgmap v2421: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 25 13:01:10 compute-0 nova_compute[244014]: 2026-02-25 13:01:10.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 13:01:12 compute-0 nova_compute[244014]: 2026-02-25 13:01:12.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:12 compute-0 ceph-mon[76335]: pgmap v2422: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.427472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472427500, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 521, "num_deletes": 251, "total_data_size": 503407, "memory_usage": 514232, "flush_reason": "Manual Compaction"}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472431795, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 498590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51491, "largest_seqno": 52011, "table_properties": {"data_size": 495686, "index_size": 876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6801, "raw_average_key_size": 18, "raw_value_size": 490015, "raw_average_value_size": 1364, "num_data_blocks": 39, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024439, "oldest_key_time": 1772024439, "file_creation_time": 1772024472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4360 microseconds, and 2024 cpu microseconds.
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.431832) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 498590 bytes OK
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.431846) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433231) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433246) EVENT_LOG_v1 {"time_micros": 1772024472433241, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 500421, prev total WAL file size 500421, number of live WAL files 2.
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433596) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(486KB)], [119(10MB)]
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472433652, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11356331, "oldest_snapshot_seqno": -1}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7155 keys, 9618085 bytes, temperature: kUnknown
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472495751, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 9618085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9570840, "index_size": 28253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 187208, "raw_average_key_size": 26, "raw_value_size": 9443785, "raw_average_value_size": 1319, "num_data_blocks": 1099, "num_entries": 7155, "num_filter_entries": 7155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.496039) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 9618085 bytes
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.497525) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(42.1) write-amplify(19.3) OK, records in: 7664, records dropped: 509 output_compression: NoCompression
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.497554) EVENT_LOG_v1 {"time_micros": 1772024472497541, "job": 72, "event": "compaction_finished", "compaction_time_micros": 62174, "compaction_time_cpu_micros": 33163, "output_level": 6, "num_output_files": 1, "total_output_size": 9618085, "num_input_records": 7664, "num_output_records": 7155, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472497783, "job": 72, "event": "table_file_deletion", "file_number": 121}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472499269, "job": 72, "event": "table_file_deletion", "file_number": 119}
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:01:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
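
The ceph-mon store is a RocksDB database, and the EVENT_LOG_v1 entries above are machine-readable JSON, so flush and compaction statistics (e.g. the read-write-amplify(42.1) reported for job 72) can be extracted directly from the journal. A rough parser, assuming lines shaped like the ones above in a file such as /var/log/messages:

    import json, re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

    def iter_events(lines):
        # Yield the JSON payload of each RocksDB EVENT_LOG_v1 entry.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    with open("/var/log/messages") as fh:  # assumed log location
        for ev in iter_events(fh):
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["output_level"], ev["total_output_size"])
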
Feb 25 13:01:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.213 244018 DEBUG nova.compute.manager [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.214 244018 DEBUG nova.compute.manager [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.214 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.215 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.215 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.307 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.307 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.308 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.308 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.309 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.310 244018 INFO nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Terminating instance
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.312 244018 DEBUG nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
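
The sequence above (per-instance lock, event queue cleared, "Terminating instance", destroy on the hypervisor) is the compute-side fan-out of a single server delete request. A sketch of the client call that triggers it, using openstacksdk with a placeholder cloud name and the instance UUID from the log:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder clouds.yaml entry
    conn.compute.delete_server('e53615dc-61ae-4f86-a246-c36739b4d2ae')
    # Nova then destroys the libvirt domain, unplugs the VIF and
    # deallocates networking, as the following entries show.
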
Feb 25 13:01:13 compute-0 kernel: tap614ca81e-06 (unregistering): left promiscuous mode
Feb 25 13:01:13 compute-0 NetworkManager[49836]: <info>  [1772024473.3752] device (tap614ca81e-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:01:13 compute-0 ovn_controller[147040]: 2026-02-25T13:01:13Z|01556|binding|INFO|Releasing lport 614ca81e-067a-4af2-852a-f3155f0f177c from this chassis (sb_readonly=0)
Feb 25 13:01:13 compute-0 ovn_controller[147040]: 2026-02-25T13:01:13Z|01557|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c down in Southbound
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 ovn_controller[147040]: 2026-02-25T13:01:13Z|01558|binding|INFO|Removing iface tap614ca81e-06 ovn-installed in OVS
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.390 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:16:22 10.100.0.13'], port_security=['fa:16:3e:d5:16:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e53615dc-61ae-4f86-a246-c36739b4d2ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09018a8e-2978-4907-9f5b-7472a21cdef8 7177af96-a132-4797-a5d1-f9104ca018eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6a1380-5e0c-4178-ac05-c458e549fab7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=614ca81e-067a-4af2-852a-f3155f0f177c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.392 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 614ca81e-067a-4af2-852a-f3155f0f177c in datapath c6c760d4-99a5-4930-b051-8fdaa331a4d4 unbound from our chassis
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.394 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6c760d4-99a5-4930-b051-8fdaa331a4d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
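
The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event framework matching a Port_Binding row change against a registered watcher; seeing the port go down, the metadata agent decides the network namespace can be torn down. A minimal sketch of such a watcher, assuming an already-connected OVN southbound IDL and an illustrative handler body:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to the Port_Binding table (no conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries only the columns that changed (here: up, chassis).
            print('lport %s up=%s' % (row.logical_port, row.up))

    # Registration (sketch): idl.notify_handler.watch_event(PortBindingUpdatedEvent())
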
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.399 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7065c938-492d-4e4c-ab6c-7744c1bcf21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.399 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 namespace which is not needed anymore
Feb 25 13:01:13 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Feb 25 13:01:13 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 12.696s CPU time.
Feb 25 13:01:13 compute-0 systemd-machined[210048]: Machine qemu-181-instance-00000093 terminated.
Feb 25 13:01:13 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : haproxy version is 2.8.14-c23fe91
Feb 25 13:01:13 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : path to executable is /usr/sbin/haproxy
Feb 25 13:01:13 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [WARNING]  (377400) : Exiting Master process...
Feb 25 13:01:13 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [ALERT]    (377400) : Current worker (377402) exited with code 143 (Terminated)
Feb 25 13:01:13 compute-0 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [WARNING]  (377400) : All workers exited. Exiting... (0)
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.554 244018 INFO nova.virt.libvirt.driver [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance destroyed successfully.
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.555 244018 DEBUG nova.objects.instance [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:01:13 compute-0 systemd[1]: libpod-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope: Deactivated successfully.
Feb 25 13:01:13 compute-0 podman[377445]: 2026-02-25 13:01:13.562248806 +0000 UTC m=+0.050711872 container died 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.568 244018 DEBUG nova.virt.libvirt.vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:00:48Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.569 244018 DEBUG nova.network.os_vif_util [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.570 244018 DEBUG nova.network.os_vif_util [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.570 244018 DEBUG os_vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap614ca81e-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.579 244018 INFO os_vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06')
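
The DelPortCommand transaction at 13:01:13.573 is ovsdbapp's programmatic form of an ovs-vsctl port deletion. Its command-line equivalent, wrapped in subprocess for illustration, with the tap name taken from the log:

    import subprocess

    # Equivalent of the DelPortCommand(if_exists=True) txn logged above.
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap614ca81e-06'],
        check=True)
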
Feb 25 13:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51-userdata-shm.mount: Deactivated successfully.
Feb 25 13:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3d2482bb7304f1dfd8f2b101c4a80fae58fc8d8a42e192363e9767cd83f9d9-merged.mount: Deactivated successfully.
Feb 25 13:01:13 compute-0 podman[377445]: 2026-02-25 13:01:13.611783883 +0000 UTC m=+0.100246939 container cleanup 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:01:13 compute-0 systemd[1]: libpod-conmon-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope: Deactivated successfully.
Feb 25 13:01:13 compute-0 podman[377501]: 2026-02-25 13:01:13.676632812 +0000 UTC m=+0.044529007 container remove 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ddcdba-42f1-4497-8ebd-db590339b2ef]: (4, ('Wed Feb 25 01:01:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 (992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51)\n992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51\nWed Feb 25 01:01:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 (992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51)\n992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
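
The privsep reply above carries the wrapper script's output: the per-network haproxy container is stopped, then deleted. Reduced to the two underlying podman operations (container name from the log; a sketch, not the agent's actual code path):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4'
    subprocess.run(['podman', 'stop', name], check=True)  # worker exits 143
    subprocess.run(['podman', 'rm', name], check=True)    # container removed
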
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63fade-9685-48a7-8c4e-78790c3f696e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6c760d4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:13 compute-0 kernel: tapc6c760d4-90: left promiscuous mode
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[212795aa-30cb-4f07-8979-6d05b3256fd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.703 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff4b619-4d1c-43f5-90ff-1e57bc01e7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.704 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb30413-ebd7-4a8a-8b02-252147708030]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa2d891-94fd-46b0-9562-1a119ad280e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641749, 'reachable_time': 29797, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377519, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6c760d4\x2d99a5\x2d4930\x2db051\x2d8fdaa331a4d4.mount: Deactivated successfully.
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.721 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:01:13 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.721 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9669f32f-c833-4495-93ee-c16b59e2a93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
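
remove_netns in neutron's privileged ip_lib deletes the kernel network namespace once no VIFs remain on the datapath; neutron drives this through pyroute2. The same operation in isolation (a sketch; requires root, namespace name from the log):

    from pyroute2 import netns

    # Delete the now-empty OVN metadata namespace (needs CAP_SYS_ADMIN).
    netns.remove('ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4')
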
Feb 25 13:01:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.880 244018 INFO nova.virt.libvirt.driver [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deleting instance files /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae_del
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.882 244018 INFO nova.virt.libvirt.driver [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deletion of /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae_del complete
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.959 244018 INFO nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.960 244018 DEBUG oslo.service.loopingcall [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.961 244018 DEBUG nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:01:13 compute-0 nova_compute[244014]: 2026-02-25 13:01:13.961 244018 DEBUG nova.network.neutron [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:01:14 compute-0 ceph-mon[76335]: pgmap v2423: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Feb 25 13:01:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.299 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.299 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.302 244018 WARNING nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received unexpected event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with vm_state active and task_state deleting.
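
The network-vif-unplugged / network-vif-plugged entries arrive through Nova's os-server-external-events API, which Neutron calls on port status changes; with no waiter registered, the unplug is simply logged and the late plug event is flagged as unexpected. A sketch of that notification (endpoint URL and token are placeholders; the body follows the os-server-external-events request format):

    import requests

    body = {"events": [{
        "name": "network-vif-unplugged",
        "server_uuid": "e53615dc-61ae-4f86-a246-c36739b4d2ae",
        "tag": "614ca81e-067a-4af2-852a-f3155f0f177c",
    }]}
    requests.post(
        "http://nova-api:8774/v2.1/os-server-external-events",  # placeholder
        json=body, headers={"X-Auth-Token": "<token>"})
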
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.675 244018 DEBUG nova.network.neutron [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.693 244018 INFO nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 1.73 seconds to deallocate network for instance.
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.745 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.746 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.841 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.842 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
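The network_info blob cached above is a list of VIF dicts; everything nova later needs for unplug (MAC, bridge, devname, fixed IPs) is read back out of this cache. A short sketch of pulling the fixed addresses out of that structure, trimmed to just the fields it reads:

    import json

    # Shape copied from the instance_info_cache entry above, trimmed to the
    # fields this sketch actually reads.
    network_info = json.loads('''
    [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c",
      "address": "fa:16:3e:d5:16:22",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.13", "type": "fixed", "floating_ips": []}]}]}}]
    ''')

    for vif in network_info:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed)  # port id, MAC, ['10.100.0.13']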
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.858 244018 DEBUG oslo_concurrency.processutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:15 compute-0 nova_compute[244014]: 2026-02-25 13:01:15.903 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703727400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.430 244018 DEBUG oslo_concurrency.processutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
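For RBD-backed ephemeral storage the resource tracker sizes its disk inventory by shelling out to ceph df, exactly the command logged above. A hedged sketch of the same call; the JSON field names under "stats" (total_bytes, total_avail_bytes) are what recent Ceph releases emit, so treat them as an assumption for your version:

    import json
    import subprocess

    # Same command the log shows oslo.processutils running.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out).get("stats", {})  # field names: assumption, see above
    gib = 1024 ** 3
    print("total %d GiB, avail %d GiB" % (stats.get("total_bytes", 0) // gib,
                                          stats.get("total_avail_bytes", 0) // gib))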
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.437 244018 DEBUG nova.compute.provider_tree [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:01:16 compute-0 ceph-mon[76335]: pgmap v2424: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 13:01:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/703727400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.460 244018 DEBUG nova.scheduler.client.report [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
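The inventory above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 schedulable VCPUs, 7167 MB of RAM, and roughly 52 GB of disk. Worked out:

    # usable = (total - reserved) * allocation_ratio, per the inventory above
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2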
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.490 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.521 244018 INFO nova.scheduler.client.report [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance e53615dc-61ae-4f86-a246-c36739b4d2ae
Feb 25 13:01:16 compute-0 nova_compute[244014]: 2026-02-25 13:01:16.584 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
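The lockutils lines that bracket this teardown (waited 0.001s, held 0.744s, held 3.276s) are oslo.concurrency's standard acquire/release instrumentation. A minimal stdlib equivalent of that pattern, for reading the numbers rather than reproducing oslo's implementation:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}
    _registry_lock = threading.Lock()

    @contextmanager
    def timed_lock(name):
        """Log how long we waited to acquire and how long we held a named
        lock, in the spirit of the oslo_concurrency.lockutils lines above."""
        with _registry_lock:
            lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        pass  # critical section, e.g. ResourceTracker.update_usage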
Feb 25 13:01:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 13:01:17 compute-0 nova_compute[244014]: 2026-02-25 13:01:17.399 244018 DEBUG nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-deleted-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:17 compute-0 nova_compute[244014]: 2026-02-25 13:01:17.399 244018 INFO nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Neutron deleted interface 614ca81e-067a-4af2-852a-f3155f0f177c; detaching it from the instance and deleting it from the info cache
Feb 25 13:01:17 compute-0 nova_compute[244014]: 2026-02-25 13:01:17.400 244018 DEBUG nova.network.neutron [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 25 13:01:17 compute-0 nova_compute[244014]: 2026-02-25 13:01:17.403 244018 DEBUG nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Detach interface failed, port_id=614ca81e-067a-4af2-852a-f3155f0f177c, reason: Instance e53615dc-61ae-4f86-a246-c36739b4d2ae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 25 13:01:17 compute-0 podman[377544]: 2026-02-25 13:01:17.72751534 +0000 UTC m=+0.064433688 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:01:17 compute-0 podman[377545]: 2026-02-25 13:01:17.760419889 +0000 UTC m=+0.094798525 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
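The two podman health_status records are the periodic healthchecks declared in config_data: each container mounts a script at /openstack/healthcheck and podman runs it as the test, reporting health_status=healthy while it exits 0. The same check can be run on demand (assuming the podman CLI on the host):

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` executes the container's configured test
        # command; exit status 0 matches health_status=healthy above.
        return subprocess.run(
            ["podman", "healthcheck", "run", container]).returncode == 0

    print(is_healthy("ovn_metadata_agent"), is_healthy("ovn_controller"))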
Feb 25 13:01:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:18.011 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:01:18 compute-0 nova_compute[244014]: 2026-02-25 13:01:18.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:18 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:18.013 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:01:18 compute-0 ceph-mon[76335]: pgmap v2425: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 13:01:18 compute-0 nova_compute[244014]: 2026-02-25 13:01:18.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:18 compute-0 nova_compute[244014]: 2026-02-25 13:01:18.598 244018 DEBUG nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:18 compute-0 nova_compute[244014]: 2026-02-25 13:01:18.638 244018 INFO nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] instance snapshotting
Feb 25 13:01:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 213 KiB/s wr, 34 op/s
Feb 25 13:01:18 compute-0 nova_compute[244014]: 2026-02-25 13:01:18.912 244018 INFO nova.virt.libvirt.driver [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Beginning live snapshot process
Feb 25 13:01:19 compute-0 nova_compute[244014]: 2026-02-25 13:01:19.083 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(dfc9a21f590643fdaf313ec9d01a20d1) on rbd image(0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 13:01:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb 25 13:01:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb 25 13:01:19 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb 25 13:01:19 compute-0 ovn_controller[147040]: 2026-02-25T13:01:19Z|01559|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=0)
Feb 25 13:01:19 compute-0 nova_compute[244014]: 2026-02-25 13:01:19.514 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk@dfc9a21f590643fdaf313ec9d01a20d1 to images/b3595e75-97c7-4173-a1f7-d933f93b52fe clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 13:01:19 compute-0 nova_compute[244014]: 2026-02-25 13:01:19.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:19 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 25 13:01:19 compute-0 nova_compute[244014]: 2026-02-25 13:01:19.656 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] flattening images/b3595e75-97c7-4173-a1f7-d933f93b52fe flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 13:01:20 compute-0 nova_compute[244014]: 2026-02-25 13:01:20.409 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] removing snapshot(dfc9a21f590643fdaf313ec9d01a20d1) on rbd image(0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 13:01:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb 25 13:01:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb 25 13:01:20 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb 25 13:01:20 compute-0 ceph-mon[76335]: pgmap v2426: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 213 KiB/s wr, 34 op/s
Feb 25 13:01:20 compute-0 ceph-mon[76335]: osdmap e283: 3 total, 3 up, 3 in
Feb 25 13:01:20 compute-0 nova_compute[244014]: 2026-02-25 13:01:20.514 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(snap) on rbd image(b3595e75-97c7-4173-a1f7-d933f93b52fe) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 13:01:20 compute-0 nova_compute[244014]: 2026-02-25 13:01:20.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb 25 13:01:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb 25 13:01:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb 25 13:01:21 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb 25 13:01:21 compute-0 ceph-mon[76335]: osdmap e284: 3 total, 3 up, 3 in
Feb 25 13:01:22 compute-0 ceph-mon[76335]: pgmap v2429: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb 25 13:01:22 compute-0 ceph-mon[76335]: osdmap e285: 3 total, 3 up, 3 in
Feb 25 13:01:22 compute-0 nova_compute[244014]: 2026-02-25 13:01:22.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 323 op/s
Feb 25 13:01:23 compute-0 nova_compute[244014]: 2026-02-25 13:01:23.364 244018 INFO nova.virt.libvirt.driver [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Snapshot image upload complete
Feb 25 13:01:23 compute-0 nova_compute[244014]: 2026-02-25 13:01:23.365 244018 INFO nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 4.73 seconds to snapshot the instance on the hypervisor.
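The rbd_utils lines between 13:01:19 and 13:01:20 spell out nova's RBD live-snapshot sequence: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it no longer references the parent, drop the temporary snapshot, and finally create the "snap" snapshot on the Glance image. An illustrative CLI equivalent via subprocess (nova goes through librbd rather than the CLI, and the clone v1 path shown here needs the parent snapshot protected; the snapshot name "livesnap" is made up, the log used a generated UUID):

    import subprocess

    def rbd(*args):
        subprocess.run(["rbd", "--id", "openstack",
                        "--conf", "/etc/ceph/ceph.conf", *args], check=True)

    disk = "vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk"
    image = "images/b3595e75-97c7-4173-a1f7-d933f93b52fe"
    tmp = disk + "@livesnap"            # hypothetical name; see note above

    rbd("snap", "create", tmp)
    rbd("snap", "protect", tmp)         # clone v1 requires a protected parent
    rbd("clone", tmp, image)
    rbd("flatten", image)               # detach the clone from its parent
    rbd("snap", "unprotect", tmp)
    rbd("snap", "rm", tmp)
    rbd("snap", "create", image + "@snap")  # mirrors "creating snapshot(snap)" above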
Feb 25 13:01:23 compute-0 nova_compute[244014]: 2026-02-25 13:01:23.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:23 compute-0 ceph-mon[76335]: pgmap v2431: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 323 op/s
Feb 25 13:01:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb 25 13:01:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb 25 13:01:23 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb 25 13:01:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb 25 13:01:24 compute-0 ceph-mon[76335]: osdmap e286: 3 total, 3 up, 3 in
Feb 25 13:01:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb 25 13:01:24 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb 25 13:01:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 364 op/s
Feb 25 13:01:25 compute-0 nova_compute[244014]: 2026-02-25 13:01:25.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:25 compute-0 ceph-mon[76335]: osdmap e287: 3 total, 3 up, 3 in
Feb 25 13:01:25 compute-0 ceph-mon[76335]: pgmap v2434: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 364 op/s
Feb 25 13:01:26 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:26.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
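This DbSetCommand closes the loop opened at 13:01:18: SB_Global.nb_cfg ticked 47 -> 48, the agent logged "Delaying updating chassis table for 8 seconds", and now it acknowledges cfg 48 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row, which is how neutron tracks agent liveness. The delay coalesces bursts of nb_cfg bumps into a single write; a toy debounce in that spirit (not neutron's implementation):

    import threading

    class DelayedAck:
        """Debounced acknowledgement: remember the newest value, write once
        after a quiet delay (toy model of the agent's 8-second hold-off)."""

        def __init__(self, write_cb, delay=8.0):
            self._write_cb = write_cb
            self._delay = delay
            self._timer = None
            self._latest = None
            self._lock = threading.Lock()

        def nb_cfg_updated(self, nb_cfg):
            with self._lock:
                self._latest = nb_cfg
                if self._timer is None:  # coalesce a burst into one write
                    self._timer = threading.Timer(self._delay, self._flush)
                    self._timer.start()

        def _flush(self):
            with self._lock:
                nb_cfg, self._timer = self._latest, None
            self._write_cb(nb_cfg)      # e.g. the DbSetCommand above

    ack = DelayedAck(lambda v: print(f"neutron:ovn-metadata-sb-cfg = {v}"), delay=0.1)
    ack.nb_cfg_updated(48)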
Feb 25 13:01:26 compute-0 sudo[377730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:01:26 compute-0 sudo[377730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:26 compute-0 sudo[377730]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:26 compute-0 sudo[377755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:01:26 compute-0 sudo[377755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 267 op/s
Feb 25 13:01:26 compute-0 sudo[377755]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:01:27 compute-0 sudo[377811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:01:27 compute-0 sudo[377811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:27 compute-0 sudo[377811]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:27 compute-0 sudo[377836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:01:27 compute-0 sudo[377836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.201 244018 DEBUG nova.compute.manager [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.203 244018 DEBUG nova.compute.manager [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.203 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.204 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.204 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.253 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.254 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.254 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.255 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.255 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.257 244018 INFO nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Terminating instance
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.258 244018 DEBUG nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:01:27 compute-0 kernel: tap4f9273d8-a4 (unregistering): left promiscuous mode
Feb 25 13:01:27 compute-0 NetworkManager[49836]: <info>  [1772024487.3395] device (tap4f9273d8-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 ovn_controller[147040]: 2026-02-25T13:01:27Z|01560|binding|INFO|Releasing lport 4f9273d8-a479-491a-bbac-087a11ac1a08 from this chassis (sb_readonly=0)
Feb 25 13:01:27 compute-0 ovn_controller[147040]: 2026-02-25T13:01:27Z|01561|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 down in Southbound
Feb 25 13:01:27 compute-0 ovn_controller[147040]: 2026-02-25T13:01:27Z|01562|binding|INFO|Removing iface tap4f9273d8-a4 ovn-installed in OVS
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.371 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:48:d9 10.100.0.10'], port_security=['fa:16:3e:f0:48:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0e4f3bd8-9df3-4329-afdc-27d91b98810d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f9273d8-a479-491a-bbac-087a11ac1a08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f9273d8-a479-491a-bbac-087a11ac1a08 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 unbound from our chassis
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b99137-2e28-401d-90c9-ba3fe874f6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Feb 25 13:01:27 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 13.779s CPU time.
Feb 25 13:01:27 compute-0 systemd-machined[210048]: Machine qemu-180-instance-00000092 terminated.
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea6d775-33da-43c1-9bcc-1399281d599d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1a75c592-7c82-4529-af06-7b4f0e37aed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[39e44ec2-ad42-4b1c-99d5-57517b608398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.462 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c391b481-9908-4440-81ae-9699c7615ae1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377893, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.469178844 +0000 UTC m=+0.040932556 container create 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.477 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7680cc-3dfd-4f3e-ae55-17190d1b66b1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637005, 'tstamp': 637005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377898, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637008, 'tstamp': 637008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377898, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.482 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.492 244018 INFO nova.virt.libvirt.driver [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance destroyed successfully.
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.492 244018 DEBUG nova.objects.instance [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'resources' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.491 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.494 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.508 244018 DEBUG nova.virt.libvirt.vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:01:23Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.508 244018 DEBUG nova.network.os_vif_util [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.509 244018 DEBUG nova.network.os_vif_util [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.509 244018 DEBUG os_vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:01:27 compute-0 systemd[1]: Started libpod-conmon-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope.
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.511 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9273d8-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.515 244018 INFO os_vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4')
Feb 25 13:01:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.54382585 +0000 UTC m=+0.115579632 container init 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.450992861 +0000 UTC m=+0.022746603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.552849804 +0000 UTC m=+0.124603526 container start 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.55696807 +0000 UTC m=+0.128721872 container attach 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:01:27 compute-0 jovial_golick[377912]: 167 167
Feb 25 13:01:27 compute-0 systemd[1]: libpod-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope: Deactivated successfully.
Feb 25 13:01:27 compute-0 conmon[377912]: conmon 6b6d59fe69737c921fa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope/container/memory.events
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.561754495 +0000 UTC m=+0.133508247 container died 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e21cb2129aa2121b0823bf71888bd5e2b579095797723a43d262d3dc7e8980-merged.mount: Deactivated successfully.
Feb 25 13:01:27 compute-0 podman[377881]: 2026-02-25 13:01:27.613392422 +0000 UTC m=+0.185146174 container remove 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:01:27 compute-0 systemd[1]: libpod-conmon-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope: Deactivated successfully.
Feb 25 13:01:27 compute-0 podman[377956]: 2026-02-25 13:01:27.778919131 +0000 UTC m=+0.050266079 container create 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.788 244018 INFO nova.virt.libvirt.driver [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deleting instance files /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d_del
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.790 244018 INFO nova.virt.libvirt.driver [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deletion of /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d_del complete
Feb 25 13:01:27 compute-0 systemd[1]: Started libpod-conmon-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope.
Feb 25 13:01:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:27 compute-0 podman[377956]: 2026-02-25 13:01:27.849478792 +0000 UTC m=+0.120825760 container init 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:01:27 compute-0 podman[377956]: 2026-02-25 13:01:27.759978717 +0000 UTC m=+0.031325695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:27 compute-0 podman[377956]: 2026-02-25 13:01:27.857426426 +0000 UTC m=+0.128773384 container start 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.857 244018 INFO nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG oslo.service.loopingcall [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:01:27 compute-0 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG nova.network.neutron [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:01:27 compute-0 podman[377956]: 2026-02-25 13:01:27.863275801 +0000 UTC m=+0.134622769 container attach 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:01:27 compute-0 ceph-mon[76335]: pgmap v2435: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 267 op/s
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:01:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:01:28 compute-0 interesting_heyrovsky[377972]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:01:28 compute-0 interesting_heyrovsky[377972]: --> All data devices are unavailable
Feb 25 13:01:28 compute-0 systemd[1]: libpod-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope: Deactivated successfully.
Feb 25 13:01:28 compute-0 podman[377956]: 2026-02-25 13:01:28.385285816 +0000 UTC m=+0.656632814 container died 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521-merged.mount: Deactivated successfully.
Feb 25 13:01:28 compute-0 podman[377956]: 2026-02-25 13:01:28.449536288 +0000 UTC m=+0.720883216 container remove 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:01:28 compute-0 systemd[1]: libpod-conmon-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope: Deactivated successfully.
Feb 25 13:01:28 compute-0 sudo[377836]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:28 compute-0 nova_compute[244014]: 2026-02-25 13:01:28.550 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024473.5488775, e53615dc-61ae-4f86-a246-c36739b4d2ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:28 compute-0 nova_compute[244014]: 2026-02-25 13:01:28.551 244018 INFO nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Stopped (Lifecycle Event)
Feb 25 13:01:28 compute-0 sudo[378006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:01:28 compute-0 nova_compute[244014]: 2026-02-25 13:01:28.568 244018 DEBUG nova.compute.manager [None req-9cebc55d-66e9-4f50-bc3a-4f6d131c1044 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:28 compute-0 sudo[378006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:28 compute-0 sudo[378006]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:28 compute-0 sudo[378031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:01:28 compute-0 sudo[378031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 12 MiB/s wr, 311 op/s
Feb 25 13:01:28 compute-0 podman[378069]: 2026-02-25 13:01:28.949151122 +0000 UTC m=+0.062411252 container create f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:01:28 compute-0 systemd[1]: Started libpod-conmon-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope.
Feb 25 13:01:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:28.92781518 +0000 UTC m=+0.041075290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:29.03415072 +0000 UTC m=+0.147410860 container init f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:29.042907157 +0000 UTC m=+0.156167257 container start f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:01:29 compute-0 quirky_morse[378085]: 167 167
Feb 25 13:01:29 compute-0 systemd[1]: libpod-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope: Deactivated successfully.
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:29.04728467 +0000 UTC m=+0.160544820 container attach f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:01:29 compute-0 conmon[378085]: conmon f6352c3daed4c3e46962 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope/container/memory.events
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:29.048549716 +0000 UTC m=+0.161809816 container died f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70267c61fb7bba96c773f397444fcd3bac8342c256ec3a520ec56280e5dd7a2-merged.mount: Deactivated successfully.
Feb 25 13:01:29 compute-0 podman[378069]: 2026-02-25 13:01:29.096405846 +0000 UTC m=+0.209665976 container remove f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:01:29 compute-0 systemd[1]: libpod-conmon-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope: Deactivated successfully.
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.233787071 +0000 UTC m=+0.043225460 container create b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:01:29 compute-0 systemd[1]: Started libpod-conmon-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope.
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.275 244018 DEBUG nova.network.neutron [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.294 244018 INFO nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 1.44 seconds to deallocate network for instance.
Feb 25 13:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.212430569 +0000 UTC m=+0.021869008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.340867412 +0000 UTC m=+0.150305841 container init b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.348406574 +0000 UTC m=+0.157844973 container start b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.351622945 +0000 UTC m=+0.161061334 container attach b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.357 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.357 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.374 244018 DEBUG nova.compute.manager [req-b9ab88b9-1ab1-4788-a59c-da3f449c85c0 req-30821dd9-e55e-4641-8dc1-3691f55309af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-deleted-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.451 244018 DEBUG oslo_concurrency.processutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.628 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.629 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:29 compute-0 charming_beaver[378125]: {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     "0": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "devices": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "/dev/loop3"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             ],
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_name": "ceph_lv0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_size": "21470642176",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "name": "ceph_lv0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "tags": {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.crush_device_class": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.encrypted": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_id": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.vdo": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.with_tpm": "0"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             },
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "vg_name": "ceph_vg0"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         }
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     ],
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     "1": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "devices": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "/dev/loop4"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             ],
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_name": "ceph_lv1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_size": "21470642176",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "name": "ceph_lv1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "tags": {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.crush_device_class": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.encrypted": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_id": "1",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.vdo": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.with_tpm": "0"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             },
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "vg_name": "ceph_vg1"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         }
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     ],
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     "2": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "devices": [
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "/dev/loop5"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             ],
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_name": "ceph_lv2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_size": "21470642176",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "name": "ceph_lv2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "tags": {
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.crush_device_class": "",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.encrypted": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osd_id": "2",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.vdo": "0",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:                 "ceph.with_tpm": "0"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             },
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "type": "block",
Feb 25 13:01:29 compute-0 charming_beaver[378125]:             "vg_name": "ceph_vg2"
Feb 25 13:01:29 compute-0 charming_beaver[378125]:         }
Feb 25 13:01:29 compute-0 charming_beaver[378125]:     ]
Feb 25 13:01:29 compute-0 charming_beaver[378125]: }
Feb 25 13:01:29 compute-0 nova_compute[244014]: 2026-02-25 13:01:29.650 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:29 compute-0 systemd[1]: libpod-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope: Deactivated successfully.
Feb 25 13:01:29 compute-0 conmon[378125]: conmon b9486fe477e6b42f22dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope/container/memory.events
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.694868098 +0000 UTC m=+0.504306527 container died b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6-merged.mount: Deactivated successfully.
Feb 25 13:01:29 compute-0 podman[378109]: 2026-02-25 13:01:29.744371104 +0000 UTC m=+0.553809493 container remove b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:01:29 compute-0 systemd[1]: libpod-conmon-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope: Deactivated successfully.
Feb 25 13:01:29 compute-0 sudo[378031]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:29 compute-0 sudo[378165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:01:29 compute-0 sudo[378165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:29 compute-0 sudo[378165]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:29 compute-0 ceph-mon[76335]: pgmap v2436: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 12 MiB/s wr, 311 op/s
Feb 25 13:01:29 compute-0 sudo[378190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:01:29 compute-0 sudo[378190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363102777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.099 244018 DEBUG oslo_concurrency.processutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.106 244018 DEBUG nova.compute.provider_tree [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.128 244018 DEBUG nova.scheduler.client.report [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.145 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.170 244018 INFO nova.scheduler.client.report [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Deleted allocations for instance 0e4f3bd8-9df3-4329-afdc-27d91b98810d
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.237 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.275103714 +0000 UTC m=+0.056958967 container create 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.24056383 +0000 UTC m=+0.022418933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:30 compute-0 systemd[1]: Started libpod-conmon-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope.
Feb 25 13:01:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.40045003 +0000 UTC m=+0.182305073 container init 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.406716967 +0000 UTC m=+0.188571990 container start 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:01:30 compute-0 systemd[1]: libpod-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope: Deactivated successfully.
Feb 25 13:01:30 compute-0 hungry_tharp[378247]: 167 167
Feb 25 13:01:30 compute-0 conmon[378247]: conmon 6b6bf59faa1e24f2a53e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope/container/memory.events
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.418676364 +0000 UTC m=+0.200531387 container attach 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.419293592 +0000 UTC m=+0.201148655 container died 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0dc72c781d0f92e64c51eb95fbce108f654edae2a52ca075191d203a902764-merged.mount: Deactivated successfully.
Feb 25 13:01:30 compute-0 podman[378231]: 2026-02-25 13:01:30.462915052 +0000 UTC m=+0.244770085 container remove 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:01:30 compute-0 systemd[1]: libpod-conmon-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope: Deactivated successfully.
Feb 25 13:01:30 compute-0 nova_compute[244014]: 2026-02-25 13:01:30.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:30 compute-0 podman[378271]: 2026-02-25 13:01:30.665566969 +0000 UTC m=+0.065559540 container create 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:01:30 compute-0 systemd[1]: Started libpod-conmon-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope.
Feb 25 13:01:30 compute-0 podman[378271]: 2026-02-25 13:01:30.63405245 +0000 UTC m=+0.034045101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:01:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:30 compute-0 podman[378271]: 2026-02-25 13:01:30.781362355 +0000 UTC m=+0.181355006 container init 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:01:30 compute-0 podman[378271]: 2026-02-25 13:01:30.789807934 +0000 UTC m=+0.189800525 container start 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:01:30 compute-0 podman[378271]: 2026-02-25 13:01:30.804875778 +0000 UTC m=+0.204868389 container attach 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:01:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 85 op/s
Feb 25 13:01:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb 25 13:01:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb 25 13:01:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2363102777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:30 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:01:31
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:01:31 compute-0 lvm[378365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:01:31 compute-0 lvm[378365]: VG ceph_vg0 finished
Feb 25 13:01:31 compute-0 lvm[378368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:01:31 compute-0 lvm[378368]: VG ceph_vg1 finished
Feb 25 13:01:31 compute-0 lvm[378370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:01:31 compute-0 lvm[378370]: VG ceph_vg2 finished
Feb 25 13:01:31 compute-0 lvm[378371]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:01:31 compute-0 lvm[378371]: VG ceph_vg0 finished
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:01:31 compute-0 vigilant_robinson[378288]: {}
Feb 25 13:01:31 compute-0 systemd[1]: libpod-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Deactivated successfully.
Feb 25 13:01:31 compute-0 podman[378271]: 2026-02-25 13:01:31.697150918 +0000 UTC m=+1.097143479 container died 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:01:31 compute-0 systemd[1]: libpod-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Consumed 1.301s CPU time.
Feb 25 13:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4-merged.mount: Deactivated successfully.
Feb 25 13:01:31 compute-0 podman[378271]: 2026-02-25 13:01:31.738508905 +0000 UTC m=+1.138501476 container remove 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:01:31 compute-0 systemd[1]: libpod-conmon-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Deactivated successfully.
Feb 25 13:01:31 compute-0 sudo[378190]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:01:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:01:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:31 compute-0 sudo[378386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:01:31 compute-0 sudo[378386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:01:31 compute-0 sudo[378386]: pam_unix(sudo:session): session closed for user root
Feb 25 13:01:31 compute-0 ceph-mon[76335]: pgmap v2437: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 85 op/s
Feb 25 13:01:31 compute-0 ceph-mon[76335]: osdmap e288: 3 total, 3 up, 3 in
Feb 25 13:01:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.310 244018 DEBUG nova.compute.manager [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.311 244018 DEBUG nova.compute.manager [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.312 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.312 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.313 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.508 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.511 244018 INFO nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Terminating instance
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.512 244018 DEBUG nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 kernel: tape11a4d69-86 (unregistering): left promiscuous mode
Feb 25 13:01:32 compute-0 NetworkManager[49836]: <info>  [1772024492.5690] device (tape11a4d69-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:01:32 compute-0 ovn_controller[147040]: 2026-02-25T13:01:32Z|01563|binding|INFO|Releasing lport e11a4d69-8666-420b-b13c-69a6427567a4 from this chassis (sb_readonly=0)
Feb 25 13:01:32 compute-0 ovn_controller[147040]: 2026-02-25T13:01:32Z|01564|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 down in Southbound
Feb 25 13:01:32 compute-0 ovn_controller[147040]: 2026-02-25T13:01:32Z|01565|binding|INFO|Removing iface tape11a4d69-86 ovn-installed in OVS
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.583 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:04:04 10.100.0.9'], port_security=['fa:16:3e:15:04:04 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d338640-2b5f-4571-8f76-b523064ee129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e11a4d69-8666-420b-b13c-69a6427567a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.585 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e11a4d69-8666-420b-b13c-69a6427567a4 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 unbound from our chassis
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.586 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4273798e-5f22-4d98-8d00-a22d1ea2c776, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[392ff96d-7173-439c-9d72-79057b35e443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.589 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 namespace which is not needed anymore
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Feb 25 13:01:32 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 15.438s CPU time.
Feb 25 13:01:32 compute-0 systemd-machined[210048]: Machine qemu-179-instance-00000091 terminated.
Feb 25 13:01:32 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : haproxy version is 2.8.14-c23fe91
Feb 25 13:01:32 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : path to executable is /usr/sbin/haproxy
Feb 25 13:01:32 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [WARNING]  (375457) : Exiting Master process...
Feb 25 13:01:32 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [ALERT]    (375457) : Current worker (375459) exited with code 143 (Terminated)
Feb 25 13:01:32 compute-0 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [WARNING]  (375457) : All workers exited. Exiting... (0)
Feb 25 13:01:32 compute-0 systemd[1]: libpod-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope: Deactivated successfully.
Feb 25 13:01:32 compute-0 NetworkManager[49836]: <info>  [1772024492.7342] manager: (tape11a4d69-86): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Feb 25 13:01:32 compute-0 podman[378433]: 2026-02-25 13:01:32.735331334 +0000 UTC m=+0.046919154 container died 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.749 244018 INFO nova.virt.libvirt.driver [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance destroyed successfully.
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.749 244018 DEBUG nova.objects.instance [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'resources' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d-userdata-shm.mount: Deactivated successfully.
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.775 244018 DEBUG nova.virt.libvirt.vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:00:28Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c064d32c5a8bc9b32dbc5298e999e73ca9ab9f090eb5ef23c399a7752b4fd87b-merged.mount: Deactivated successfully.
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.775 244018 DEBUG nova.network.os_vif_util [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.778 244018 DEBUG nova.network.os_vif_util [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.778 244018 DEBUG os_vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.784 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape11a4d69-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:01:32 compute-0 podman[378433]: 2026-02-25 13:01:32.794120673 +0000 UTC m=+0.105708513 container cleanup 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.794 244018 INFO os_vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86')
Feb 25 13:01:32 compute-0 systemd[1]: libpod-conmon-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope: Deactivated successfully.
Feb 25 13:01:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 150 op/s
Feb 25 13:01:32 compute-0 podman[378478]: 2026-02-25 13:01:32.892007004 +0000 UTC m=+0.070850190 container remove 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[220f47c8-1b28-4c7b-8718-5de859b2a336]: (4, ('Wed Feb 25 01:01:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 (563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d)\n563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d\nWed Feb 25 01:01:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 (563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d)\n563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30915a4e-bbeb-4095-9ac8-a9f733fab509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.904 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:32 compute-0 kernel: tap4273798e-50: left promiscuous mode
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.907 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 nova_compute[244014]: 2026-02-25 13:01:32.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41ea2a7a-aa4c-4f5b-a6e4-1312f3e0131f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.935 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f31444e-fc59-4993-8365-8c0467ce18b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.936 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98fea3c1-fa33-455e-ad14-b501494bded6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86d5053a-8c80-480f-a768-1f10a5956f04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636989, 'reachable_time': 21227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378506, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.954 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:01:32 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.955 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[721419f8-9542-4084-9ba0-2b0ba8e623ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d4273798e\x2d5f22\x2d4d98\x2d8d00\x2da22d1ea2c776.mount: Deactivated successfully.
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.209 244018 INFO nova.virt.libvirt.driver [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deleting instance files /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129_del
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.210 244018 INFO nova.virt.libvirt.driver [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deletion of /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129_del complete
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.292 244018 INFO nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.292 244018 DEBUG oslo.service.loopingcall [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.293 244018 DEBUG nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.293 244018 DEBUG nova.network.neutron [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.570 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.570 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.571 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.572 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.596 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.597 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.667 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.668 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb 25 13:01:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb 25 13:01:33 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.827 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.828 244018 INFO nova.compute.claims [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.908 244018 DEBUG nova.network.neutron [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.926 244018 INFO nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 0.63 seconds to deallocate network for instance.
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.951 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:33 compute-0 ceph-mon[76335]: pgmap v2439: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 150 op/s
Feb 25 13:01:33 compute-0 ceph-mon[76335]: osdmap e289: 3 total, 3 up, 3 in
Feb 25 13:01:33 compute-0 nova_compute[244014]: 2026-02-25 13:01:33.989 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.426 244018 DEBUG nova.compute.manager [req-316772cd-74e0-42fb-b4b9-159cb72dc279 req-93f69feb-be7d-45b3-bd95-c389b52eeb4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-deleted-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573217387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.515 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.523 244018 DEBUG nova.compute.provider_tree [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.545 244018 DEBUG nova.scheduler.client.report [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.577 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.577 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.580 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.648 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.648 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.655 244018 DEBUG oslo_concurrency.processutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.715 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.738 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.831 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.834 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.834 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating image(s)
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.872 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 151 op/s
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.905 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.933 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:34 compute-0 nova_compute[244014]: 2026-02-25 13:01:34.936 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/573217387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.008 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.009 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.010 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.011 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.042 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.046 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ea4e69b0-bf04-4637-847b-9807c884d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.083 244018 DEBUG nova.policy [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817129133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.280 244018 DEBUG oslo_concurrency.processutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.286 244018 DEBUG nova.compute.provider_tree [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.305 244018 DEBUG nova.scheduler.client.report [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.333 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.372 244018 INFO nova.scheduler.client.report [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Deleted allocations for instance 8d338640-2b5f-4571-8f76-b523064ee129
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.403 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ea4e69b0-bf04-4637-847b-9807c884d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.445 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.487 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.593 244018 DEBUG nova.objects.instance [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.606 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.606 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Ensure instance console log exists: /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.607 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.607 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.608 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:35 compute-0 nova_compute[244014]: 2026-02-25 13:01:35.974 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Successfully created port: 5a5e6571-f930-49df-8399-0abb9daf7f4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:01:36 compute-0 ceph-mon[76335]: pgmap v2441: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 151 op/s
Feb 25 13:01:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/817129133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.790 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Successfully updated port: 5a5e6571-f930-49df-8399-0abb9daf7f4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.813 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.814 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.814 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.878 244018 DEBUG nova.compute.manager [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.879 244018 DEBUG nova.compute.manager [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:01:36 compute-0 nova_compute[244014]: 2026-02-25 13:01:36.879 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 65 op/s
Feb 25 13:01:37 compute-0 nova_compute[244014]: 2026-02-25 13:01:37.192 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:01:37 compute-0 nova_compute[244014]: 2026-02-25 13:01:37.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:38 compute-0 ceph-mon[76335]: pgmap v2442: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 65 op/s
Feb 25 13:01:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:38 compute-0 nova_compute[244014]: 2026-02-25 13:01:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:38 compute-0 nova_compute[244014]: 2026-02-25 13:01:38.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:01:38 compute-0 nova_compute[244014]: 2026-02-25 13:01:38.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:01:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 2.7 MiB/s wr, 147 op/s
Feb 25 13:01:38 compute-0 nova_compute[244014]: 2026-02-25 13:01:38.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 13:01:38 compute-0 nova_compute[244014]: 2026-02-25 13:01:38.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.937 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:01:39 compute-0 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.041 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.071 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.071 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance network_info: |[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.072 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.072 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.074 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start _get_guest_xml network_info=[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.078 244018 WARNING nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.082 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.082 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:01:40 compute-0 ceph-mon[76335]: pgmap v2443: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 2.7 MiB/s wr, 147 op/s
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.089 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.093 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/257235082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.439 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.566 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.567 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3434MB free_disk=59.96653531957418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.568 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:01:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383724056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.642 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.667 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.671 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.697 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.701 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ea4e69b0-bf04-4637-847b-9807c884d103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.702 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.702 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:01:40 compute-0 nova_compute[244014]: 2026-02-25 13:01:40.741 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 13:01:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/257235082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1383724056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:01:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:01:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332419605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.214 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.216 244018 DEBUG nova.virt.libvirt.vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:01:34Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.217 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.218 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.220 244018 DEBUG nova.objects.instance [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.238 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <uuid>ea4e69b0-bf04-4637-847b-9807c884d103</uuid>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <name>instance-00000094</name>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561</nova:name>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:01:40</nova:creationTime>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <nova:port uuid="5a5e6571-f930-49df-8399-0abb9daf7f4f">
Feb 25 13:01:41 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <system>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="serial">ea4e69b0-bf04-4637-847b-9807c884d103</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="uuid">ea4e69b0-bf04-4637-847b-9807c884d103</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </system>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <os>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </os>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <features>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </features>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ea4e69b0-bf04-4637-847b-9807c884d103_disk">
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </source>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/ea4e69b0-bf04-4637-847b-9807c884d103_disk.config">
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </source>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:01:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8b:fe:07"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <target dev="tap5a5e6571-f9"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/console.log" append="off"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <video>
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </video>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:01:41 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:01:41 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:01:41 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:01:41 compute-0 nova_compute[244014]: </domain>
Feb 25 13:01:41 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
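
The XML dump above is the document nova hands to libvirt to define the guest. Once systemd reports the machine started (the "Started Virtual Machine qemu-182-instance-00000094" record further down), the same document can be read back with the libvirt Python bindings; a minimal sketch:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('ea4e69b0-bf04-4637-847b-9807c884d103')
    print(dom.XMLDesc(0))   # the <domain type="kvm"> document built above
    conn.close()
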
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.240 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Preparing to wait for external event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.240 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.241 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.241 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
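
The three lockutils records above are the standard oslo.concurrency acquire/release pattern around the per-instance event registry; roughly (an assumed shape, not nova's exact code):

    from oslo_concurrency import lockutils

    with lockutils.lock('ea4e69b0-bf04-4637-847b-9807c884d103-events'):
        # create or fetch the event object that spawn() will later wait on
        # (network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f)
        ...
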
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.242 244018 DEBUG nova.virt.libvirt.vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:01:34Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.243 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.243 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.244 244018 DEBUG os_vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.246 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.246 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.249 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a5e6571-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.250 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a5e6571-f9, col_values=(('external_ids', {'iface-id': '5a5e6571-f930-49df-8399-0abb9daf7f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:fe:07', 'vm-uuid': 'ea4e69b0-bf04-4637-847b-9807c884d103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
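
The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) run as one transaction against the local OVS database. A hedged reconstruction through ovsdbapp's public Open vSwitch API (socket path and timeout are assumptions, not from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same two commands the log shows, batched into a single commit.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap5a5e6571-f9', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5a5e6571-f9',
            ('external_ids', {
                'iface-id': '5a5e6571-f930-49df-8399-0abb9daf7f4f',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:8b:fe:07',
                'vm-uuid': 'ea4e69b0-bf04-4637-847b-9807c884d103'})))

The external_ids:iface-id value is what ovn-controller later matches against a Port_Binding logical_port when it claims the port.
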
Feb 25 13:01:41 compute-0 NetworkManager[49836]: <info>  [1772024501.2536] manager: (tap5a5e6571-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.262 244018 INFO os_vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9')
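
"Successfully plugged vif" is logged by os-vif's public entry point (os_vif/__init__.py in the paths above). Calling it directly looks roughly like this, reusing the VIFOpenVSwitch object from the earlier sketch:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()   # loads the 'ovs' plugin among others
    info = InstanceInfo(uuid='ea4e69b0-bf04-4637-847b-9807c884d103',
                        name='instance-00000094')
    os_vif.plug(vif, info)   # 'vif' is the VIFOpenVSwitch sketched earlier
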
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.312 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.313 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.313 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:8b:fe:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.314 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Using config drive
Feb 25 13:01:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:01:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489639036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.344 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.350 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.355 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.373 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
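
For reference, placement derives usable capacity from the inventory record above as (total - reserved) * allocation_ratio per resource class; worked out:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
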
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.394 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:01:41 compute-0 nova_compute[244014]: 2026-02-25 13:01:41.394 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:42 compute-0 ceph-mon[76335]: pgmap v2444: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 13:01:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3332419605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:01:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2489639036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.309 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating config drive at /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.312 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5nbp24b_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.375 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.377 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.390 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.393 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.453 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5nbp24b_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.482 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.486 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config ea4e69b0-bf04-4637-847b-9807c884d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.519 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024487.4897103, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.519 244018 INFO nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Stopped (Lifecycle Event)
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.541 244018 DEBUG nova.compute.manager [None req-05d1c5b1-8943-4255-805c-369cf7ebc239 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.695 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config ea4e69b0-bf04-4637-847b-9807c884d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.696 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deleting local config drive /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config because it was imported into RBD.
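
The config-drive sequence above builds the ISO locally with mkisofs, imports it into the Ceph 'vms' pool as <uuid>_disk.config, then deletes the local copy. The two commands, exactly as logged (the /tmp path is the agent's ephemeral staging directory for the metadata files), replayed via subprocess:

    import subprocess

    inst = 'ea4e69b0-bf04-4637-847b-9807c884d103'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the config-2 ISO from the staged metadata directory.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp5nbp24b_'],
        check=True)

    # 2. Import it into RBD next to the instance's root disk; nova then
    #    removes the local file, as the record above confirms.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
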
Feb 25 13:01:42 compute-0 kernel: tap5a5e6571-f9: entered promiscuous mode
Feb 25 13:01:42 compute-0 NetworkManager[49836]: <info>  [1772024502.7560] manager: (tap5a5e6571-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:42 compute-0 ovn_controller[147040]: 2026-02-25T13:01:42Z|01566|binding|INFO|Claiming lport 5a5e6571-f930-49df-8399-0abb9daf7f4f for this chassis.
Feb 25 13:01:42 compute-0 ovn_controller[147040]: 2026-02-25T13:01:42Z|01567|binding|INFO|5a5e6571-f930-49df-8399-0abb9daf7f4f: Claiming fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.768 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:fe:07 10.100.0.5'], port_security=['fa:16:3e:8b:fe:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ea4e69b0-bf04-4637-847b-9807c884d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068 266cd610-5f0f-4f4d-8bfb-2ebdf5d189b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5a5e6571-f930-49df-8399-0abb9daf7f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.770 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5a5e6571-f930-49df-8399-0abb9daf7f4f in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e bound to our chassis
Feb 25 13:01:42 compute-0 ovn_controller[147040]: 2026-02-25T13:01:42Z|01568|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f ovn-installed in OVS
Feb 25 13:01:42 compute-0 ovn_controller[147040]: 2026-02-25T13:01:42Z|01569|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f up in Southbound
Feb 25 13:01:42 compute-0 nova_compute[244014]: 2026-02-25 13:01:42.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.775 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.786 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a6493f-5d59-4043-a262-80a1df048c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.787 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81e387f6-41 in ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.790 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81e387f6-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[334cec34-a7e8-4e91-8128-3044137894da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.791 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ff807221-a95e-4eb4-a683-88b6bfa03b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 systemd-udevd[378899]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:01:42 compute-0 systemd-machined[210048]: New machine qemu-182-instance-00000094.
Feb 25 13:01:42 compute-0 NetworkManager[49836]: <info>  [1772024502.8088] device (tap5a5e6571-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:01:42 compute-0 NetworkManager[49836]: <info>  [1772024502.8096] device (tap5a5e6571-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.810 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a013265f-cdb8-4e3d-ade8-35ca3486c24a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036250297512604155 of space, bias 1.0, pg target 0.10875089253781246 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494450541393998 of space, bias 1.0, pg target 0.7483351624181993 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
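
The pg_autoscaler figures above are consistent with pg_target = usage_ratio * bias * K before power-of-two quantization, where K works out to 300 for this cluster (plausibly mon_target_pg_per_osd=100 across 3 OSDs; an inference from the numbers, not stated in the log):

    pools = {
        'vms':                (0.00036250297512604155, 1.0),
        'images':             (0.002494450541393998,   1.0),
        'cephfs.cephfs.meta': (1.4133407414918176e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 300)
    # vms ~0.1087509, images ~0.7483352, cephfs.cephfs.meta ~0.0016960,
    # matching the logged "pg target" values.
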
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0551de6b-68a6-485d-bf1b-020897e41d8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.848 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[383df195-9c83-4139-aab0-646c4f606062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.854 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c93cae-3909-45de-9ddb-1d5a87c0f590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 NetworkManager[49836]: <info>  [1772024502.8546] manager: (tap81e387f6-40): new Veth device (/org/freedesktop/NetworkManager/Devices/645)
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.881 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5a05cf-8040-4db8-ba28-ba79797d98e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.884 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[91975f55-497b-433a-8975-bca5b2aa9ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:01:42 compute-0 NetworkManager[49836]: <info>  [1772024502.9062] device (tap81e387f6-40): carrier: link connected
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[078b9c18-0851-45b4-81c7-da271d60066f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.930 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e12cd059-4ad0-4873-a85e-6d19b50f6544]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378931, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f364b3ad-91f6-4fe0-a6dd-de7eff0bbdda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:15b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647231, 'tstamp': 647231}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378932, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.954 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a4ad76-b1c1-4daf-9564-1f8c62cb8b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378933, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:42 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.976 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2c0400-504f-4b87-a331-a085965c94ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.012 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86ab515b-3ec7-44e1-8609-8c89221054a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:43 compute-0 NetworkManager[49836]: <info>  [1772024503.0163] manager: (tap81e387f6-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Feb 25 13:01:43 compute-0 kernel: tap81e387f6-40: entered promiscuous mode
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.019 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:43 compute-0 ovn_controller[147040]: 2026-02-25T13:01:43Z|01570|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.029 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20a5563c-ebab-4993-9973-2d3a895f7e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.030 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: global
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 13:01:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.032 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'env', 'PROCESS_TAG=haproxy-81e387f6-426b-4301-9279-fdf7597d2c7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81e387f6-426b-4301-9279-fdf7597d2c7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG nova.compute.manager [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.410 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.410 244018 DEBUG nova.compute.manager [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Processing event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:01:43 compute-0 podman[378963]: 2026-02-25 13:01:43.319186459 +0000 UTC m=+0.023300238 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.697 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.6968124, ea4e69b0-bf04-4637-847b-9807c884d103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.698 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Started (Lifecycle Event)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.702 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.706 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:01:43 compute-0 podman[378963]: 2026-02-25 13:01:43.710370704 +0000 UTC m=+0.414484443 container create 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.711 244018 INFO nova.virt.libvirt.driver [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance spawned successfully.
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.712 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.742 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.756 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.762 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.762 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.763 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.763 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.764 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.764 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:01:43 compute-0 systemd[1]: Started libpod-conmon-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope.
Feb 25 13:01:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.804 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.805 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.6971457, ea4e69b0-bf04-4637-847b-9807c884d103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.806 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Paused (Lifecycle Event)
Feb 25 13:01:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb1a1fefa49a0b771e7d6edf00e3f98906310d6c7b5e12ddbb0b47d61691b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.7056663, ea4e69b0-bf04-4637-847b-9807c884d103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Resumed (Lifecycle Event)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.854 244018 INFO nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 9.02 seconds to spawn the instance on the hypervisor.
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.854 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.882 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.885 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.922 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.939 244018 INFO nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 10.29 seconds to build instance.
Feb 25 13:01:43 compute-0 podman[378963]: 2026-02-25 13:01:43.943422658 +0000 UTC m=+0.647536467 container init 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:01:43 compute-0 podman[378963]: 2026-02-25 13:01:43.951337391 +0000 UTC m=+0.655451130 container start 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:01:43 compute-0 nova_compute[244014]: 2026-02-25 13:01:43.955 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:43 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : New worker (379027) forked
Feb 25 13:01:43 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : Loading success.
Feb 25 13:01:44 compute-0 ceph-mon[76335]: pgmap v2445: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:01:44 compute-0 ovn_controller[147040]: 2026-02-25T13:01:44Z|01571|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:01:44 compute-0 nova_compute[244014]: 2026-02-25 13:01:44.214 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:44 compute-0 ovn_controller[147040]: 2026-02-25T13:01:44Z|01572|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:01:44 compute-0 nova_compute[244014]: 2026-02-25 13:01:44.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 59 op/s
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 DEBUG nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 WARNING nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received unexpected event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with vm_state active and task_state None.
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:45 compute-0 nova_compute[244014]: 2026-02-25 13:01:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:46 compute-0 ceph-mon[76335]: pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 59 op/s
Feb 25 13:01:46 compute-0 nova_compute[244014]: 2026-02-25 13:01:46.253 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 13:01:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:01:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:01:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:01:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:01:47 compute-0 nova_compute[244014]: 2026-02-25 13:01:47.746 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024492.7447417, 8d338640-2b5f-4571-8f76-b523064ee129 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:01:47 compute-0 nova_compute[244014]: 2026-02-25 13:01:47.748 244018 INFO nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Stopped (Lifecycle Event)
Feb 25 13:01:47 compute-0 nova_compute[244014]: 2026-02-25 13:01:47.772 244018 DEBUG nova.compute.manager [None req-a9e54686-3614-4bcf-a898-9d0bf3c5faca - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:01:48 compute-0 NetworkManager[49836]: <info>  [1772024508.0918] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Feb 25 13:01:48 compute-0 NetworkManager[49836]: <info>  [1772024508.0928] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:48 compute-0 ovn_controller[147040]: 2026-02-25T13:01:48Z|01573|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:48 compute-0 ceph-mon[76335]: pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 13:01:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:01:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.610 244018 DEBUG nova.compute.manager [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.610 244018 DEBUG nova.compute.manager [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:01:48 compute-0 podman[379038]: 2026-02-25 13:01:48.732899541 +0000 UTC m=+0.066922319 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:01:48 compute-0 podman[379039]: 2026-02-25 13:01:48.759801579 +0000 UTC m=+0.087763696 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:01:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 13:01:48 compute-0 ovn_controller[147040]: 2026-02-25T13:01:48Z|01574|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:01:48 compute-0 nova_compute[244014]: 2026-02-25 13:01:48.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:50 compute-0 ceph-mon[76335]: pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.315 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.316 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.375 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:50 compute-0 nova_compute[244014]: 2026-02-25 13:01:50.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:01:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 13:01:51 compute-0 nova_compute[244014]: 2026-02-25 13:01:51.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:52 compute-0 ceph-mon[76335]: pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 13:01:52 compute-0 nova_compute[244014]: 2026-02-25 13:01:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:01:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:54 compute-0 ceph-mon[76335]: pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:01:55 compute-0 ceph-mon[76335]: pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:55 compute-0 nova_compute[244014]: 2026-02-25 13:01:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:56 compute-0 nova_compute[244014]: 2026-02-25 13:01:56.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:01:56 compute-0 ovn_controller[147040]: 2026-02-25T13:01:56Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 13:01:56 compute-0 ovn_controller[147040]: 2026-02-25T13:01:56Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 13:01:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:57 compute-0 ceph-mon[76335]: pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 13:01:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:01:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 13:01:59 compute-0 nova_compute[244014]: 2026-02-25 13:01:59.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:00 compute-0 ceph-mon[76335]: pgmap v2453: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 13:02:00 compute-0 nova_compute[244014]: 2026-02-25 13:02:00.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:01 compute-0 nova_compute[244014]: 2026-02-25 13:02:01.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:02 compute-0 ceph-mon[76335]: pgmap v2454: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:02 compute-0 nova_compute[244014]: 2026-02-25 13:02:02.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 13:02:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:03 compute-0 nova_compute[244014]: 2026-02-25 13:02:03.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:04 compute-0 ceph-mon[76335]: pgmap v2455: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 13:02:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:05 compute-0 nova_compute[244014]: 2026-02-25 13:02:05.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:06 compute-0 ceph-mon[76335]: pgmap v2456: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:06 compute-0 nova_compute[244014]: 2026-02-25 13:02:06.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:08 compute-0 ceph-mon[76335]: pgmap v2457: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:09 compute-0 ovn_controller[147040]: 2026-02-25T13:02:09Z|01575|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.408 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.410 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.436 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.536 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.537 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.547 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.547 244018 INFO nova.compute.claims [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:02:09 compute-0 nova_compute[244014]: 2026-02-25 13:02:09.675 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:10 compute-0 ceph-mon[76335]: pgmap v2458: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:02:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:02:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222605547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:10 compute-0 nova_compute[244014]: 2026-02-25 13:02:10.233 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:10 compute-0 nova_compute[244014]: 2026-02-25 13:02:10.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 11 KiB/s wr, 0 op/s
Feb 25 13:02:10 compute-0 nova_compute[244014]: 2026-02-25 13:02:10.980 244018 DEBUG nova.compute.provider_tree [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.006 244018 DEBUG nova.scheduler.client.report [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
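
The inventory above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Checking the logged numbers:

    # Worked example: effective capacity from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
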
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.033 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
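
The claim ran under the `compute_resources` lock for 1.497 s (acquired 13:02:09.537, released 13:02:11.033). The acquire/release pairs logged at lockutils.py:404/409/423 come from oslo.concurrency's locking helpers; a minimal sketch of the pattern, not nova's actual resource tracker:

    # Sketch of the oslo.concurrency lock pattern behind the log records.
    # The lock name matches the log; the guarded body is illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass  # resource accounting, serialized per lock name

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass
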
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.035 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.164 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.165 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.182 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.203 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:02:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4222605547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:11 compute-0 ceph-mon[76335]: pgmap v2459: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 11 KiB/s wr, 0 op/s
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.316 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.318 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.319 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating image(s)
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.349 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.376 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.397 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.400 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.434 244018 DEBUG nova.policy [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.474 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
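
The `qemu-img info` probe runs under oslo.concurrency's prlimit wrapper (--as=1073741824, --cpu=30), which caps address space and CPU time so a malformed image cannot wedge the compute service. Roughly, with the limits and path copied from the logged command:

    # Sketch: qemu-img info under resource limits, mirroring the logged command.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)
    print(info['format'], info['virtual-size'])
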
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.475 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.476 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.477 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.514 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.519 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d2d1c933-a594-4748-8f09-84790cab7d73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:11 compute-0 nova_compute[244014]: 2026-02-25 13:02:11.956 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d2d1c933-a594-4748-8f09-84790cab7d73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.044 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
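
Import-then-resize is how the Rbd image backend materializes the root disk: the cached base image is pushed into the `vms` pool, then grown to the flavor's 1 GiB root_gb. A hedged sketch of the resize step with the python rados/rbd bindings (pool, image name, and byte size are from the log; the rest is illustrative):

    # Sketch: grow an RBD image to the flavor root size, as logged above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'd2d1c933-a594-4748-8f09-84790cab7d73_disk') as image:
                image.resize(1073741824)  # librbd sizes are in bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
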
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.201 244018 DEBUG nova.objects.instance [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.232 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.233 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Ensure instance console log exists: /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.234 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.234 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.235 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:12 compute-0 nova_compute[244014]: 2026-02-25 13:02:12.627 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Successfully created port: 1167ad0b-6ff5-4138-8c61-fde6601ec903 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:02:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:13 compute-0 ceph-mon[76335]: pgmap v2460: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.231 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Successfully updated port: 1167ad0b-6ff5-4138-8c61-fde6601ec903 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.324 244018 DEBUG nova.compute.manager [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.325 244018 DEBUG nova.compute.manager [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.325 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:14 compute-0 nova_compute[244014]: 2026-02-25 13:02:14.386 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:02:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:15 compute-0 ceph-mon[76335]: pgmap v2461: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.430 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.454 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.454 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance network_info: |[{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
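
The network_info structure cached above is the flattened Neutron view the virt driver consumes: port id, MAC, fixed IP, MTU, and the bound mechanism driver. Pulling those fields from an abridged copy of the logged blob:

    # Sketch: the fields the libvirt driver reads from network_info.
    # The literal below is an abridged copy of the structure logged above.
    vif = {
        "id": "1167ad0b-6ff5-4138-8c61-fde6601ec903",
        "address": "fa:16:3e:16:de:15",
        "devname": "tap1167ad0b-6f",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.12"}]}],
        },
        "details": {"bound_drivers": {"0": "ovn"}},
    }
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(vif["devname"], vif["address"], fixed_ip, vif["network"]["meta"]["mtu"])
    # tap1167ad0b-6f fa:16:3e:16:de:15 10.100.0.12 1442
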
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.455 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.455 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.458 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start _get_guest_xml network_info=[{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.463 244018 WARNING nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.468 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.469 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.481 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.481 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
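
The two host.py probes above check cgroup v1 first, then v2, for a usable `cpu` controller (libvirt needs it for CPU shares/quota). On a v2-only host like this one, the v2 test reduces to reading the root controller list; a sketch assuming the standard unified-hierarchy mount:

    # Sketch: cgroup v2 'cpu' controller detection, per the probes above.
    # Assumes the unified hierarchy is mounted at /sys/fs/cgroup.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup v2 host

    print(has_cgroupsv2_cpu_controller())
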
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.482 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.482 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.483 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.484 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.484 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.486 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.486 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.487 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.487 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
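
For this flavor the topology walk is trivial: 1 vCPU with no flavor or image constraints (all preferences logged as 0, meaning unset, against a 65536 ceiling) admits only sockets=1, cores=1, threads=1. Conceptually the enumeration tries every (sockets, cores, threads) triple whose product equals the vCPU count; a simplified sketch, not nova's actual implementation:

    # Simplified sketch of possible-topology enumeration. Nova's real code
    # also applies preferences and ordering; this only shows the search space.
    import itertools

    def possible_topologies(vcpus, max_each=65536):
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and max(s, c, t) <= max_each:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
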
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.492 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:15 compute-0 nova_compute[244014]: 2026-02-25 13:02:15.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:02:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265620149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.083 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.101 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.104 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1265620149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.503 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.504 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.524 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:02:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221534215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.658 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
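
The `mon dump` calls are how nova discovers the monitor endpoints that end up as `<host name="192.168.122.100" port="6789"/>` elements in the disk XML below. A sketch of that extraction; the JSON field names ('mons', 'public_addr') are an assumption about the Ceph release and may differ:

    # Sketch: derive monitor host:port pairs from `ceph mon dump`.
    # Field names are assumptions about the Ceph JSON layout.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    endpoints = [m['public_addr'].split('/')[0] for m in mons]
    print(endpoints)  # e.g. ['192.168.122.100:6789']
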
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.660 244018 DEBUG nova.virt.libvirt.vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:02:11Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.661 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.662 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.664 244018 DEBUG nova.objects.instance [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.690 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <uuid>d2d1c933-a594-4748-8f09-84790cab7d73</uuid>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <name>instance-00000095</name>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585</nova:name>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:02:15</nova:creationTime>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <nova:port uuid="1167ad0b-6ff5-4138-8c61-fde6601ec903">
Feb 25 13:02:16 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <system>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="serial">d2d1c933-a594-4748-8f09-84790cab7d73</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="uuid">d2d1c933-a594-4748-8f09-84790cab7d73</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </system>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <os>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </os>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <features>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </features>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d2d1c933-a594-4748-8f09-84790cab7d73_disk">
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </source>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/d2d1c933-a594-4748-8f09-84790cab7d73_disk.config">
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </source>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:02:16 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:16:de:15"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <target dev="tap1167ad0b-6f"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/console.log" append="off"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <video>
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </video>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:02:16 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:02:16 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:02:16 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:02:16 compute-0 nova_compute[244014]: </domain>
Feb 25 13:02:16 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
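
With the XML rendered, the driver hands it to libvirt and then blocks on the network-vif-plugged event prepared below. In libvirt-python terms the hand-off is a define-and-create pair; a minimal sketch, with the XML string standing in for the domain document above:

    # Sketch: define and start a domain from XML via libvirt-python.
    # Error handling and event waiting are omitted.
    import libvirt

    xml = '...'  # the <domain type="kvm"> document logged above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # boot it (like `virsh start`)
    finally:
        conn.close()
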
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.691 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Preparing to wait for external event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.693 244018 DEBUG nova.virt.libvirt.vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:02:11Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.693 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.694 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.694 244018 DEBUG os_vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.699 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1167ad0b-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.700 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1167ad0b-6f, col_values=(('external_ids', {'iface-id': '1167ad0b-6ff5-4138-8c61-fde6601ec903', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:de:15', 'vm-uuid': 'd2d1c933-a594-4748-8f09-84790cab7d73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 NetworkManager[49836]: <info>  [1772024536.7036] manager: (tap1167ad0b-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.708 244018 INFO os_vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f')
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.757 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.758 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.758 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:16:de:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.759 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Using config drive
Feb 25 13:02:16 compute-0 nova_compute[244014]: 2026-02-25 13:02:16.783 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.095 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating config drive at /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.102 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpetv24cum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1221534215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:02:17 compute-0 ceph-mon[76335]: pgmap v2462: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.247 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpetv24cum" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.289 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.295 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config d2d1c933-a594-4748-8f09-84790cab7d73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.581 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config d2d1c933-a594-4748-8f09-84790cab7d73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.583 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deleting local config drive /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config because it was imported into RBD.
Feb 25 13:02:17 compute-0 kernel: tap1167ad0b-6f: entered promiscuous mode
Feb 25 13:02:17 compute-0 NetworkManager[49836]: <info>  [1772024537.6363] manager: (tap1167ad0b-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/650)
Feb 25 13:02:17 compute-0 ovn_controller[147040]: 2026-02-25T13:02:17Z|01576|binding|INFO|Claiming lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 for this chassis.
Feb 25 13:02:17 compute-0 ovn_controller[147040]: 2026-02-25T13:02:17Z|01577|binding|INFO|1167ad0b-6ff5-4138-8c61-fde6601ec903: Claiming fa:16:3e:16:de:15 10.100.0.12
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.648 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:de:15 10.100.0.12'], port_security=['fa:16:3e:16:de:15 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd2d1c933-a594-4748-8f09-84790cab7d73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1167ad0b-6ff5-4138-8c61-fde6601ec903) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.651 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1167ad0b-6ff5-4138-8c61-fde6601ec903 in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e bound to our chassis
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.654 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 13:02:17 compute-0 ovn_controller[147040]: 2026-02-25T13:02:17Z|01578|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 ovn-installed in OVS
Feb 25 13:02:17 compute-0 ovn_controller[147040]: 2026-02-25T13:02:17Z|01579|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 up in Southbound
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.672 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[851ff627-4ec4-43b4-973f-3a4b0246fb81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 systemd-machined[210048]: New machine qemu-183-instance-00000095.
Feb 25 13:02:17 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.699 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe2f228-c45d-4931-88cc-f736a396af0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 systemd-udevd[379407]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.704 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c741936c-d675-486c-9fa1-311d32896c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 NetworkManager[49836]: <info>  [1772024537.7184] device (tap1167ad0b-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:02:17 compute-0 NetworkManager[49836]: <info>  [1772024537.7200] device (tap1167ad0b-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.732 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f07c25d-0646-4306-86e5-f9e9752217aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.753 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e473b7c7-5425-47a9-96a8-fdaa07654503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379415, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0869a7d-b6a0-4cc6-8aec-5b06cfcdc681]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647239, 'tstamp': 647239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379418, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647241, 'tstamp': 647241}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379418, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.769 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:17 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.837 244018 DEBUG nova.compute.manager [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.838 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.839 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.839 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:17 compute-0 nova_compute[244014]: 2026-02-25 13:02:17.840 244018 DEBUG nova.compute.manager [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Processing event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.186 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1856692, d2d1c933-a594-4748-8f09-84790cab7d73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.187 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Started (Lifecycle Event)
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.190 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.194 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.198 244018 INFO nova.virt.libvirt.driver [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance spawned successfully.
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.199 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.209 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.226 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.227 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.227 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.228 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.229 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.230 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.236 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.237 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1860127, d2d1c933-a594-4748-8f09-84790cab7d73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.237 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Paused (Lifecycle Event)
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.275 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.279 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1937666, d2d1c933-a594-4748-8f09-84790cab7d73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.279 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Resumed (Lifecycle Event)
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.289 244018 INFO nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 6.97 seconds to spawn the instance on the hypervisor.
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.290 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.301 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.305 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.369 244018 INFO nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 8.87 seconds to build instance.
Feb 25 13:02:18 compute-0 nova_compute[244014]: 2026-02-25 13:02:18.392 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 13:02:19 compute-0 ceph-mon[76335]: pgmap v2463: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 13:02:19 compute-0 podman[379462]: 2026-02-25 13:02:19.726683664 +0000 UTC m=+0.060859718 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Feb 25 13:02:19 compute-0 podman[379463]: 2026-02-25 13:02:19.761669981 +0000 UTC m=+0.095206867 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.985 244018 DEBUG nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.987 244018 DEBUG nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:02:19 compute-0 nova_compute[244014]: 2026-02-25 13:02:19.987 244018 WARNING nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received unexpected event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with vm_state active and task_state None.
Feb 25 13:02:20 compute-0 nova_compute[244014]: 2026-02-25 13:02:20.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 13:02:21 compute-0 ceph-mon[76335]: pgmap v2464: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 13:02:21 compute-0 nova_compute[244014]: 2026-02-25 13:02:21.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:02:23 compute-0 ceph-mon[76335]: pgmap v2465: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:02:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:24.404 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:02:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:24.405 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.518 244018 DEBUG nova.compute.manager [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG nova.compute.manager [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:24 compute-0 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:02:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 13:02:25 compute-0 ceph-mon[76335]: pgmap v2466: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 13:02:25 compute-0 nova_compute[244014]: 2026-02-25 13:02:25.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.253 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.254 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.282 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.494 244018 DEBUG nova.compute.manager [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.495 244018 DEBUG nova.compute.manager [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.495 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.496 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.496 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:02:26 compute-0 nova_compute[244014]: 2026-02-25 13:02:26.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 13:02:27 compute-0 nova_compute[244014]: 2026-02-25 13:02:27.557 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:02:27 compute-0 nova_compute[244014]: 2026-02-25 13:02:27.559 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:27 compute-0 nova_compute[244014]: 2026-02-25 13:02:27.582 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:28 compute-0 ceph-mon[76335]: pgmap v2467: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 13:02:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 25 13:02:29 compute-0 ceph-mon[76335]: pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 25 13:02:30 compute-0 ovn_controller[147040]: 2026-02-25T13:02:30Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:de:15 10.100.0.12
Feb 25 13:02:30 compute-0 ovn_controller[147040]: 2026-02-25T13:02:30Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:de:15 10.100.0.12
Feb 25 13:02:30 compute-0 nova_compute[244014]: 2026-02-25 13:02:30.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:02:31
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root']
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:02:31 compute-0 nova_compute[244014]: 2026-02-25 13:02:31.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:31 compute-0 sudo[379511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:02:31 compute-0 sudo[379511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:31 compute-0 sudo[379511]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:31 compute-0 sudo[379536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:02:31 compute-0 sudo[379536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:32 compute-0 ceph-mon[76335]: pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:02:32 compute-0 sudo[379536]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:02:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:02:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:32 compute-0 sudo[379580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:02:32 compute-0 sudo[379580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:32 compute-0 sudo[379580]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:32 compute-0 sudo[379605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:02:32 compute-0 sudo[379605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Feb 25 13:02:33 compute-0 sudo[379605]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:02:33 compute-0 sudo[379660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:02:33 compute-0 sudo[379660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:33 compute-0 sudo[379660]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:33 compute-0 sudo[379685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:02:33 compute-0 sudo[379685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:33 compute-0 ceph-mon[76335]: pgmap v2470: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:02:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:02:33 compute-0 podman[379722]: 2026-02-25 13:02:33.609243216 +0000 UTC m=+0.033358572 container create 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:02:33 compute-0 podman[379722]: 2026-02-25 13:02:33.594820889 +0000 UTC m=+0.018936255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:33 compute-0 systemd[1]: Started libpod-conmon-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope.
Feb 25 13:02:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:33 compute-0 podman[379722]: 2026-02-25 13:02:33.960552506 +0000 UTC m=+0.384667882 container init 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:02:33 compute-0 podman[379722]: 2026-02-25 13:02:33.970573799 +0000 UTC m=+0.394689155 container start 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:02:33 compute-0 podman[379722]: 2026-02-25 13:02:33.97556248 +0000 UTC m=+0.399678036 container attach 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:02:33 compute-0 frosty_wescoff[379739]: 167 167
Feb 25 13:02:33 compute-0 systemd[1]: libpod-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope: Deactivated successfully.
Feb 25 13:02:33 compute-0 conmon[379739]: conmon 676d09072d5d1f2d1370 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope/container/memory.events
Feb 25 13:02:34 compute-0 podman[379744]: 2026-02-25 13:02:34.0376236 +0000 UTC m=+0.034565226 container died 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2087928fa407259d5cbabd8701ead140c952d531e91ce535ee12b8aa94b5820-merged.mount: Deactivated successfully.
Feb 25 13:02:34 compute-0 podman[379744]: 2026-02-25 13:02:34.240472902 +0000 UTC m=+0.237414568 container remove 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:02:34 compute-0 systemd[1]: libpod-conmon-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope: Deactivated successfully.
Feb 25 13:02:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:34.407 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:34 compute-0 podman[379764]: 2026-02-25 13:02:34.431236364 +0000 UTC m=+0.054717445 container create 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:02:34 compute-0 systemd[1]: Started libpod-conmon-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope.
Feb 25 13:02:34 compute-0 podman[379764]: 2026-02-25 13:02:34.405256731 +0000 UTC m=+0.028737842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:34 compute-0 podman[379764]: 2026-02-25 13:02:34.571062357 +0000 UTC m=+0.194543478 container init 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:02:34 compute-0 podman[379764]: 2026-02-25 13:02:34.578675672 +0000 UTC m=+0.202156783 container start 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:02:34 compute-0 podman[379764]: 2026-02-25 13:02:34.587735417 +0000 UTC m=+0.211216598 container attach 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:02:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:02:35 compute-0 optimistic_feynman[379785]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:02:35 compute-0 optimistic_feynman[379785]: --> All data devices are unavailable
Feb 25 13:02:35 compute-0 systemd[1]: libpod-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope: Deactivated successfully.
Feb 25 13:02:35 compute-0 podman[379764]: 2026-02-25 13:02:35.134593353 +0000 UTC m=+0.758074434 container died 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32-merged.mount: Deactivated successfully.
Feb 25 13:02:35 compute-0 podman[379764]: 2026-02-25 13:02:35.18764938 +0000 UTC m=+0.811130461 container remove 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:02:35 compute-0 systemd[1]: libpod-conmon-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope: Deactivated successfully.
Feb 25 13:02:35 compute-0 sudo[379685]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:35 compute-0 sudo[379817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:02:35 compute-0 sudo[379817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:35 compute-0 sudo[379817]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:35 compute-0 sudo[379842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:02:35 compute-0 sudo[379842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.670764408 +0000 UTC m=+0.049851757 container create e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:02:35 compute-0 nova_compute[244014]: 2026-02-25 13:02:35.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:35 compute-0 systemd[1]: Started libpod-conmon-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope.
Feb 25 13:02:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.64603083 +0000 UTC m=+0.025118219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.750442796 +0000 UTC m=+0.129530135 container init e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.756330002 +0000 UTC m=+0.135417311 container start e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:02:35 compute-0 systemd[1]: libpod-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope: Deactivated successfully.
Feb 25 13:02:35 compute-0 bold_payne[379896]: 167 167
Feb 25 13:02:35 compute-0 conmon[379896]: conmon e443eecc6af2ddf5f58e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope/container/memory.events
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.762578288 +0000 UTC m=+0.141665627 container attach e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.762980519 +0000 UTC m=+0.142067828 container died e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af4c0eb307a88072aecd4a5bfce97ebc59a8340bdd5d7f7fc9c084a4484730f-merged.mount: Deactivated successfully.
Feb 25 13:02:35 compute-0 podman[379881]: 2026-02-25 13:02:35.79917473 +0000 UTC m=+0.178262039 container remove e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 13:02:35 compute-0 systemd[1]: libpod-conmon-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope: Deactivated successfully.
Feb 25 13:02:35 compute-0 podman[379921]: 2026-02-25 13:02:35.959515173 +0000 UTC m=+0.048003495 container create 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:02:35 compute-0 ceph-mon[76335]: pgmap v2471: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:02:36 compute-0 systemd[1]: Started libpod-conmon-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope.
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:35.941701221 +0000 UTC m=+0.030189503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:36.073772596 +0000 UTC m=+0.162260958 container init 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:36.080446755 +0000 UTC m=+0.168935077 container start 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:36.086165056 +0000 UTC m=+0.174653398 container attach 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 13:02:36 compute-0 sharp_herschel[379938]: {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     "0": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "devices": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "/dev/loop3"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             ],
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_name": "ceph_lv0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_size": "21470642176",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "name": "ceph_lv0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "tags": {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_name": "ceph",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.crush_device_class": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.encrypted": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.objectstore": "bluestore",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_id": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.vdo": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.with_tpm": "0"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             },
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "vg_name": "ceph_vg0"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         }
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     ],
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     "1": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "devices": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "/dev/loop4"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             ],
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_name": "ceph_lv1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_size": "21470642176",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "name": "ceph_lv1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "tags": {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_name": "ceph",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.crush_device_class": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.encrypted": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.objectstore": "bluestore",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_id": "1",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.vdo": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.with_tpm": "0"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             },
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "vg_name": "ceph_vg1"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         }
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     ],
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     "2": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "devices": [
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "/dev/loop5"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             ],
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_name": "ceph_lv2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_size": "21470642176",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "name": "ceph_lv2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "tags": {
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.cluster_name": "ceph",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.crush_device_class": "",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.encrypted": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.objectstore": "bluestore",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osd_id": "2",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.vdo": "0",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:                 "ceph.with_tpm": "0"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             },
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "type": "block",
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:             "vg_name": "ceph_vg2"
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:         }
Feb 25 13:02:36 compute-0 sharp_herschel[379938]:     ]
Feb 25 13:02:36 compute-0 sharp_herschel[379938]: }
Feb 25 13:02:36 compute-0 systemd[1]: libpod-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope: Deactivated successfully.
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:36.422072241 +0000 UTC m=+0.510560553 container died 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa-merged.mount: Deactivated successfully.
Feb 25 13:02:36 compute-0 podman[379921]: 2026-02-25 13:02:36.526082995 +0000 UTC m=+0.614571317 container remove 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:02:36 compute-0 systemd[1]: libpod-conmon-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope: Deactivated successfully.
Feb 25 13:02:36 compute-0 sudo[379842]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:36 compute-0 sudo[379959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:02:36 compute-0 sudo[379959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:36 compute-0 sudo[379959]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:36 compute-0 nova_compute[244014]: 2026-02-25 13:02:36.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:36 compute-0 sudo[379984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:02:36 compute-0 sudo[379984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:02:37 compute-0 ceph-mon[76335]: pgmap v2472: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.055419097 +0000 UTC m=+0.060773565 container create adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:02:37 compute-0 systemd[1]: Started libpod-conmon-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope.
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.016261083 +0000 UTC m=+0.021615441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.193 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.195 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.198 244018 INFO nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Terminating instance
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.199 244018 DEBUG nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.211353986 +0000 UTC m=+0.216708314 container init adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.219396883 +0000 UTC m=+0.224751221 container start adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:02:37 compute-0 hungry_jackson[380037]: 167 167
Feb 25 13:02:37 compute-0 systemd[1]: libpod-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope: Deactivated successfully.
Feb 25 13:02:37 compute-0 conmon[380037]: conmon adf789da310610ce5431 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope/container/memory.events
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.273016585 +0000 UTC m=+0.278371133 container attach adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.273936391 +0000 UTC m=+0.279290719 container died adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:02:37 compute-0 kernel: tap1167ad0b-6f (unregistering): left promiscuous mode
Feb 25 13:02:37 compute-0 NetworkManager[49836]: <info>  [1772024557.3685] device (tap1167ad0b-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 ovn_controller[147040]: 2026-02-25T13:02:37Z|01580|binding|INFO|Releasing lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 from this chassis (sb_readonly=0)
Feb 25 13:02:37 compute-0 ovn_controller[147040]: 2026-02-25T13:02:37Z|01581|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 down in Southbound
Feb 25 13:02:37 compute-0 ovn_controller[147040]: 2026-02-25T13:02:37Z|01582|binding|INFO|Removing iface tap1167ad0b-6f ovn-installed in OVS
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.409 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:de:15 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd2d1c933-a594-4748-8f09-84790cab7d73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1167ad0b-6ff5-4138-8c61-fde6601ec903) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.411 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1167ad0b-6ff5-4138-8c61-fde6601ec903 in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e unbound from our chassis
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.412 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 13:02:37 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Feb 25 13:02:37 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 12.407s CPU time.
Feb 25 13:02:37 compute-0 systemd-machined[210048]: Machine qemu-183-instance-00000095 terminated.
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.428 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f51cd758-655b-417f-b5b1-5982cc5585c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-20cc89d3bf2578635e53d1c7e563be88f35c103f7432236adf77f7c12f6df52c-merged.mount: Deactivated successfully.
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[677e4de6-f925-4a7c-ab7d-2d4ef8d401e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[105ff05d-32a7-4323-b0d4-6dc8bbfbaa80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 podman[380021]: 2026-02-25 13:02:37.470606399 +0000 UTC m=+0.475960707 container remove adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.482 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[895115f8-567c-4968-b940-787570aff39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9731a3f8-f10e-4dd2-9379-5c78562dfa7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 33442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380068, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 systemd[1]: libpod-conmon-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope: Deactivated successfully.
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e235780-fcbf-4c08-b13a-2b471ceaf861]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647239, 'tstamp': 647239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380069, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647241, 'tstamp': 647241}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380069, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.515 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.645 244018 INFO nova.virt.libvirt.driver [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance destroyed successfully.
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.646 244018 DEBUG nova.objects.instance [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.661 244018 DEBUG nova.virt.libvirt.vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:02:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:02:18Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.662 244018 DEBUG nova.network.os_vif_util [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.662 244018 DEBUG nova.network.os_vif_util [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.663 244018 DEBUG os_vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1167ad0b-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:37 compute-0 podman[380076]: 2026-02-25 13:02:37.670059825 +0000 UTC m=+0.073906236 container create 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:02:37 compute-0 nova_compute[244014]: 2026-02-25 13:02:37.674 244018 INFO os_vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f')
Feb 25 13:02:37 compute-0 podman[380076]: 2026-02-25 13:02:37.626682142 +0000 UTC m=+0.030528553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:02:37 compute-0 systemd[1]: Started libpod-conmon-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope.
Feb 25 13:02:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:02:37 compute-0 podman[380076]: 2026-02-25 13:02:37.809960632 +0000 UTC m=+0.213807063 container init 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:02:37 compute-0 podman[380076]: 2026-02-25 13:02:37.820936171 +0000 UTC m=+0.224782612 container start 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:02:37 compute-0 podman[380076]: 2026-02-25 13:02:37.832110166 +0000 UTC m=+0.235956597 container attach 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.328 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.329 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.331 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.331 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.332 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.332 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:02:38 compute-0 lvm[380198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:02:38 compute-0 lvm[380198]: VG ceph_vg0 finished
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.407 244018 INFO nova.virt.libvirt.driver [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deleting instance files /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73_del
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.408 244018 INFO nova.virt.libvirt.driver [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deletion of /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73_del complete
Feb 25 13:02:38 compute-0 lvm[380200]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:02:38 compute-0 lvm[380200]: VG ceph_vg1 finished
Feb 25 13:02:38 compute-0 lvm[380201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:02:38 compute-0 lvm[380201]: VG ceph_vg2 finished
Feb 25 13:02:38 compute-0 lvm[380202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:02:38 compute-0 lvm[380202]: VG ceph_vg0 finished
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.466 244018 INFO nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 1.27 seconds to destroy the instance on the hypervisor.
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.466 244018 DEBUG oslo.service.loopingcall [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.467 244018 DEBUG nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:02:38 compute-0 nova_compute[244014]: 2026-02-25 13:02:38.467 244018 DEBUG nova.network.neutron [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:02:38 compute-0 lvm[380204]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:02:38 compute-0 lvm[380204]: VG ceph_vg2 finished
Feb 25 13:02:38 compute-0 objective_driscoll[380122]: {}
Feb 25 13:02:38 compute-0 systemd[1]: libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Deactivated successfully.
Feb 25 13:02:38 compute-0 systemd[1]: libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Consumed 1.088s CPU time.
Feb 25 13:02:38 compute-0 conmon[380122]: conmon 2a191f684dbb4d79893a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope/container/memory.events
Feb 25 13:02:38 compute-0 podman[380076]: 2026-02-25 13:02:38.559819683 +0000 UTC m=+0.963666124 container died 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:02:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda-merged.mount: Deactivated successfully.
Feb 25 13:02:38 compute-0 podman[380076]: 2026-02-25 13:02:38.634054707 +0000 UTC m=+1.037901118 container remove 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:02:38 compute-0 systemd[1]: libpod-conmon-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Deactivated successfully.
Feb 25 13:02:38 compute-0 sudo[379984]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:02:38 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:02:38 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:38 compute-0 sudo[380218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:02:38 compute-0 sudo[380218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:02:38 compute-0 sudo[380218]: pam_unix(sudo:session): session closed for user root
Feb 25 13:02:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.442 244018 DEBUG nova.network.neutron [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.459 244018 INFO nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 0.99 seconds to deallocate network for instance.
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.497 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.497 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:02:39 compute-0 ceph-mon[76335]: pgmap v2473: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.775 244018 DEBUG oslo_concurrency.processutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:02:39 compute-0 nova_compute[244014]: 2026-02-25 13:02:39.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.045 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:02:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:02:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949477923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.341 244018 DEBUG oslo_concurrency.processutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.347 244018 DEBUG nova.compute.provider_tree [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.362 244018 DEBUG nova.scheduler.client.report [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.389 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.402 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.403 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.403 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
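[editor's note] The acquire/acquired/released triple above is oslo.concurrency's standard logging around a named lock. A minimal sketch of the same API, with lock names taken from the log and purely illustrative bodies:

    from oslo_concurrency import lockutils

    # Context-manager form, as used for the per-instance "<uuid>-events" lock.
    with lockutils.lock('d2d1c933-a594-4748-8f09-84790cab7d73-events'):
        pass  # pop or register an instance event here (hypothetical body)

    # Decorator form, as used for the coarser "compute_resources" lock.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # resource-tracker work serialized under the lock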
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 WARNING nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received unexpected event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with vm_state deleted and task_state None.
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.405 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-deleted-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.419 244018 INFO nova.scheduler.client.report [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance d2d1c933-a594-4748-8f09-84790cab7d73
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.476 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:40 compute-0 nova_compute[244014]: 2026-02-25 13:02:40.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/949477923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.450 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.452 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.453 244018 INFO nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Terminating instance
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.456 244018 DEBUG nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:02:41 compute-0 kernel: tap5a5e6571-f9 (unregistering): left promiscuous mode
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.513 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
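[editor's note] The network_info blob above is plain JSON, so the addresses it carries are easy to pull out; a short sketch over an abbreviated copy of the logged structure:

    import json

    # Abbreviated from the network_info entry logged above.
    network_info = json.loads('''[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.5",
        "floating_ips": [{"address": "192.168.122.246"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed:', ip['address'])
                for fip in ip.get('floating_ips', []):
                    print('floating:', fip['address'])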
Feb 25 13:02:41 compute-0 NetworkManager[49836]: <info>  [1772024561.5155] device (tap5a5e6571-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 ovn_controller[147040]: 2026-02-25T13:02:41Z|01583|binding|INFO|Releasing lport 5a5e6571-f930-49df-8399-0abb9daf7f4f from this chassis (sb_readonly=0)
Feb 25 13:02:41 compute-0 ovn_controller[147040]: 2026-02-25T13:02:41Z|01584|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f down in Southbound
Feb 25 13:02:41 compute-0 ovn_controller[147040]: 2026-02-25T13:02:41Z|01585|binding|INFO|Removing iface tap5a5e6571-f9 ovn-installed in OVS
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.532 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:fe:07 10.100.0.5'], port_security=['fa:16:3e:8b:fe:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ea4e69b0-bf04-4637-847b-9807c884d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068 266cd610-5f0f-4f4d-8bfb-2ebdf5d189b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5a5e6571-f930-49df-8399-0abb9daf7f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
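[editor's note] The matcher logged above (events=('update',), table='Port_Binding') is ovsdbapp's row-event machinery. A minimal sketch of that pattern, assuming ovsdbapp's RowEvent base class; the handler body is illustrative, not the agent's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Subscribe to 'update' events on Port_Binding, mirroring the
            # matcher shown in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns;
            # here up=[True] -> up=[False] marks the port going down.
            if getattr(old, 'up', None) and not row.up[0]:
                print('lport %s went down' % row.logical_port)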
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.534 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.534 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5a5e6571-f930-49df-8399-0abb9daf7f4f in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e unbound from our chassis
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.534 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.535 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.535 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81e387f6-426b-4301-9279-fdf7597d2c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6ef8b2-2aac-4c89-9a44-c2192377c5ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.538 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e namespace which is not needed anymore
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.555 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.555 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
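[editor's note] "ceph df --format=json" (run above through processutils) returns machine-readable cluster and per-pool stats. A standalone sketch using the same flags, assuming a reachable cluster and the JSON key names of recent Ceph releases:

    import json
    import subprocess

    # Same command line as in the log entry above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    df = json.loads(out)

    # Cluster-wide free space, the figure nova's RBD-backed disk
    # accounting is interested in.
    print(df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats']['bytes_used'])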
Feb 25 13:02:41 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Feb 25 13:02:41 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 14.334s CPU time.
Feb 25 13:02:41 compute-0 systemd-machined[210048]: Machine qemu-182-instance-00000094 terminated.
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.695 244018 INFO nova.virt.libvirt.driver [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance destroyed successfully.
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.697 244018 DEBUG nova.objects.instance [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:02:41 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : haproxy version is 2.8.14-c23fe91
Feb 25 13:02:41 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : path to executable is /usr/sbin/haproxy
Feb 25 13:02:41 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [WARNING]  (379025) : Exiting Master process...
Feb 25 13:02:41 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [ALERT]    (379025) : Current worker (379027) exited with code 143 (Terminated)
Feb 25 13:02:41 compute-0 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [WARNING]  (379025) : All workers exited. Exiting... (0)
Feb 25 13:02:41 compute-0 systemd[1]: libpod-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope: Deactivated successfully.
Feb 25 13:02:41 compute-0 podman[380292]: 2026-02-25 13:02:41.730943735 +0000 UTC m=+0.065546929 container died 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 13:02:41 compute-0 ceph-mon[76335]: pgmap v2474: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451-userdata-shm.mount: Deactivated successfully.
Feb 25 13:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3abb1a1fefa49a0b771e7d6edf00e3f98906310d6c7b5e12ddbb0b47d61691b7-merged.mount: Deactivated successfully.
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.783 244018 DEBUG nova.virt.libvirt.vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:01:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:01:43Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.784 244018 DEBUG nova.network.os_vif_util [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.784 244018 DEBUG nova.network.os_vif_util [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.785 244018 DEBUG os_vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a5e6571-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
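[editor's note] DelPortCommand with if_exists=True is an idempotent delete: a no-op when the port is already gone. The CLI equivalent, sketched via subprocess (the ovsdbapp transaction above does the same thing natively over the OVSDB socket):

    import subprocess

    # '--if-exists' keeps this from failing if the tap device was
    # already removed, mirroring if_exists=True in the transaction above.
    subprocess.check_call([
        'ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap5a5e6571-f9',
    ])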
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.791 244018 INFO os_vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9')
Feb 25 13:02:41 compute-0 podman[380292]: 2026-02-25 13:02:41.82614969 +0000 UTC m=+0.160752864 container cleanup 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 13:02:41 compute-0 systemd[1]: libpod-conmon-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope: Deactivated successfully.
Feb 25 13:02:41 compute-0 podman[380369]: 2026-02-25 13:02:41.921678065 +0000 UTC m=+0.073239227 container remove 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.927 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d529a06-d45c-4419-a975-2db17dd8a6ce]: (4, ('Wed Feb 25 01:02:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e (47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451)\n47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451\nWed Feb 25 01:02:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e (47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451)\n47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bac1dedb-1265-4d71-aeb9-75c9c9f47900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:02:41 compute-0 kernel: tap81e387f6-40: left promiscuous mode
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 nova_compute[244014]: 2026-02-25 13:02:41.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afc7e705-dc70-41dd-b1aa-95fc01e89ef2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.965 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a43affe-b9f7-4f81-95b5-a5d231b3701d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d0b3a5-c746-4c97-ab1a-ed2fa72c756c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.981 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[037f57f0-6caa-4dfb-82d0-f66fc7a8576a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647225, 'reachable_time': 44491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380384, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:02:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d81e387f6\x2d426b\x2d4301\x2d9279\x2dfdf7597d2c7e.mount: Deactivated successfully.
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.992 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:02:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.993 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b2295c-0724-497c-bf46-301490695f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
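[editor's note] The namespace teardown above goes through neutron's privileged ip_lib, which wraps pyroute2. A minimal sketch of the underlying call, assuming pyroute2 is installed and the caller holds the needed privileges:

    from pyroute2 import netns

    # Equivalent of the privileged remove_netns() logged above;
    # raises OSError if the namespace does not exist.
    netns.remove('ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e')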
Feb 25 13:02:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:02:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250342729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.110 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.172 244018 INFO nova.virt.libvirt.driver [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deleting instance files /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103_del
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.173 244018 INFO nova.virt.libvirt.driver [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deletion of /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103_del complete
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.224 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.224 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.248 244018 INFO nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 0.79 seconds to destroy the instance on the hypervisor.
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.249 244018 DEBUG oslo.service.loopingcall [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.249 244018 DEBUG nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.250 244018 DEBUG nova.network.neutron [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.376 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.8961376408115GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
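[editor's note] The pci_devices list above is also JSON; note that every entry on this virtual host reports numa_node: null. A quick tally over an abbreviated copy:

    import json

    # Abbreviated from the log line above.
    pci_devices = json.loads('''[
      {"address": "0000:00:00.0", "vendor_id": "8086", "numa_node": null},
      {"address": "0000:00:07.0", "vendor_id": "1af4", "numa_node": null}
    ]''')

    no_affinity = sum(1 for d in pci_devices if d['numa_node'] is None)
    print(no_affinity, 'of', len(pci_devices), 'devices report no NUMA node')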
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.445 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ea4e69b0-bf04-4637-847b-9807c884d103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
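[editor's note] Allocations like the one logged above can be read back from the placement API via GET /allocations/{consumer_uuid}. A hedged sketch; the endpoint and token below are hypothetical placeholders:

    import requests

    PLACEMENT = 'http://placement.example.com'  # hypothetical endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'       # hypothetical token

    resp = requests.get(
        PLACEMENT + '/allocations/ea4e69b0-bf04-4637-847b-9807c884d103',
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.28'},
    )
    # Expected shape, per the log line above:
    # {'allocations': {<provider uuid>: {'resources':
    #     {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}}, ...}
    print(resp.json()['allocations'])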
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.446 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.446 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
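[editor's note] The final view squares with the inventory reported earlier: used_ram folds the 512 MB host reservation in with the one surviving 128 MB instance, and used_vcpus=1 matches that instance's single vCPU. As a quick check:

    # 512 MB reserved for the host + 128 MB for the remaining instance.
    assert 512 + 128 == 640   # used_ram=640MB in the log line above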
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.538 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:02:42 compute-0 nova_compute[244014]: 2026-02-25 13:02:42.541 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:02:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/250342729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015360268250475516 of space, bias 1.0, pg target 0.46080804751426546 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494462883828743 of space, bias 1.0, pg target 0.7483388651486229 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
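[editor's note] Each pg_autoscaler line computes pg target = usage_ratio * bias * N, where N works out to 300 for every pool above (consistent with, e.g., mon_target_pg_per_osd=100 across three OSDs, though the log does not state the factor's origin); the target is then quantized to a power of two and compared against the current pg_num. Checking two of the lines:

    # Pool 'vms', bias 1.0: matches the logged pg target 0.46080804751426546.
    print(0.0015360268250475516 * 1.0 * 300)
    # Pool 'cephfs.cephfs.meta', bias 4.0 (metadata pools get a boost):
    # matches the logged pg target 0.0016960088897901811.
    print(1.4133407414918176e-06 * 4.0 * 300)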
Feb 25 13:02:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 13:02:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:02:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653326501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:43 compute-0 nova_compute[244014]: 2026-02-25 13:02:43.060 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:02:43 compute-0 nova_compute[244014]: 2026-02-25 13:02:43.067 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:02:43 compute-0 nova_compute[244014]: 2026-02-25 13:02:43.090 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:02:43 compute-0 nova_compute[244014]: 2026-02-25 13:02:43.125 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:02:43 compute-0 nova_compute[244014]: 2026-02-25 13:02:43.126 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:43 compute-0 ceph-mon[76335]: pgmap v2475: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 13:02:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/653326501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.324 244018 DEBUG nova.network.neutron [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.342 244018 INFO nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 2.09 seconds to deallocate network for instance.
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.386 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.387 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.434 244018 DEBUG oslo_concurrency.processutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:02:44 compute-0 nova_compute[244014]: 2026-02-25 13:02:44.586 244018 DEBUG nova.compute.manager [req-5ded0371-698e-4e0c-b96b-2e202be8894b req-a8f2fabd-2d9d-4bfc-84a4-c5f965ce0326 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-deleted-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:02:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115451864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.042 244018 DEBUG oslo_concurrency.processutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
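nova sizes its RBD backend by shelling out to the same `ceph df --format=json` command the monitor is seen dispatching here. A hedged sketch of doing the same through oslo.concurrency; the JSON key names (stats, total_bytes, ...) are an assumption based on recent Ceph releases, so verify them against your cluster's actual output:

    import json
    from oslo_concurrency import processutils

    # Mirrors the "Running cmd (subprocess)" / "CMD ... returned: 0"
    # pair logged above; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']  # key names assumed, see lead-in
    print(stats['total_bytes'], stats['total_used_bytes'],
          stats['total_avail_bytes'])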
Feb 25 13:02:45 compute-0 ceph-mon[76335]: pgmap v2476: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1115451864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.049 244018 DEBUG nova.compute.provider_tree [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.066 244018 DEBUG nova.scheduler.client.report [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
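The inventory dict in the previous line is what the placement service turns into schedulable capacity: for each resource class the usable amount is (total - reserved) * allocation_ratio. Worked through with the exact figures reported here:

    # Placement capacity formula applied to the inventory logged above
    # for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 2))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2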
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.085 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.108 244018 INFO nova.scheduler.client.report [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance ea4e69b0-bf04-4637-847b-9807c884d103
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.203 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.297 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.297 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
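The instance_info_cache payload in the previous line is a JSON list of VIF models. A short sketch, with the structure trimmed straight from that line, showing how to pull out the fields usually needed when tracing a port: its id, MAC address and fixed IPs:

    import json

    # Trimmed copy of the cached VIF entry from the log line above.
    cache_payload = '''[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f",
      "address": "fa:16:3e:8b:fe:07",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.5"}]}]}}]'''

    for vif in json.loads(cache_payload):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips)
    # -> 5a5e6571-f930-49df-8399-0abb9daf7f4f fa:16:3e:8b:fe:07 ['10.100.0.5']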
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.322 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.322 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.323 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.323 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 WARNING nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received unexpected event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with vm_state active and task_state deleting.
Feb 25 13:02:45 compute-0 nova_compute[244014]: 2026-02-25 13:02:45.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:46 compute-0 nova_compute[244014]: 2026-02-25 13:02:46.121 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:46 compute-0 nova_compute[244014]: 2026-02-25 13:02:46.122 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:46 compute-0 nova_compute[244014]: 2026-02-25 13:02:46.122 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:46 compute-0 nova_compute[244014]: 2026-02-25 13:02:46.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:47 compute-0 ceph-mon[76335]: pgmap v2477: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:02:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:02:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:02:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:02:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:02:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:02:48 compute-0 nova_compute[244014]: 2026-02-25 13:02:48.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:48 compute-0 nova_compute[244014]: 2026-02-25 13:02:48.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:48 compute-0 nova_compute[244014]: 2026-02-25 13:02:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:49 compute-0 ceph-mon[76335]: pgmap v2478: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 13:02:50 compute-0 nova_compute[244014]: 2026-02-25 13:02:50.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:50 compute-0 podman[380434]: 2026-02-25 13:02:50.741645371 +0000 UTC m=+0.086197932 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 13:02:50 compute-0 podman[380435]: 2026-02-25 13:02:50.790759017 +0000 UTC m=+0.130416300 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
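The two podman lines above are periodic healthcheck results: each container runs the configured test (/openstack/healthcheck, mounted into the container) and reports health_status plus a failing streak. A hedged way to query the same state on demand; the Go-template path is an assumption that holds on Docker-compatible podman releases, as noted in the comment:

    import subprocess

    # Older podman releases expose the same data under
    # .State.Healthcheck rather than .State.Health.
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True).stdout.strip()
    print(status)  # "healthy" while the failing streak above stays at 0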
Feb 25 13:02:50 compute-0 nova_compute[244014]: 2026-02-25 13:02:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:50 compute-0 nova_compute[244014]: 2026-02-25 13:02:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:50 compute-0 nova_compute[244014]: 2026-02-25 13:02:50.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:02:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 13:02:51 compute-0 ceph-mon[76335]: pgmap v2479: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 13:02:51 compute-0 nova_compute[244014]: 2026-02-25 13:02:51.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:52 compute-0 nova_compute[244014]: 2026-02-25 13:02:52.644 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024557.6426325, d2d1c933-a594-4748-8f09-84790cab7d73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:02:52 compute-0 nova_compute[244014]: 2026-02-25 13:02:52.645 244018 INFO nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Stopped (Lifecycle Event)
Feb 25 13:02:52 compute-0 nova_compute[244014]: 2026-02-25 13:02:52.671 244018 DEBUG nova.compute.manager [None req-223caa6c-cc0e-49bc-a080-da3adbf4ae09 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 13:02:53 compute-0 ceph-mon[76335]: pgmap v2480: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 13:02:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:54 compute-0 nova_compute[244014]: 2026-02-25 13:02:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:02:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:02:55 compute-0 ceph-mon[76335]: pgmap v2481: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:02:55 compute-0 nova_compute[244014]: 2026-02-25 13:02:55.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:56 compute-0 nova_compute[244014]: 2026-02-25 13:02:56.693 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024561.6922565, ea4e69b0-bf04-4637-847b-9807c884d103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:02:56 compute-0 nova_compute[244014]: 2026-02-25 13:02:56.694 244018 INFO nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Stopped (Lifecycle Event)
Feb 25 13:02:56 compute-0 nova_compute[244014]: 2026-02-25 13:02:56.715 244018 DEBUG nova.compute.manager [None req-22197e49-2bed-45ed-b23b-db58fa1abce8 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:02:56 compute-0 nova_compute[244014]: 2026-02-25 13:02:56.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:02:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:02:57 compute-0 ceph-mon[76335]: pgmap v2482: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:02:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:02:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:02:59 compute-0 ceph-mon[76335]: pgmap v2483: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:00 compute-0 nova_compute[244014]: 2026-02-25 13:03:00.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:01 compute-0 ceph-mon[76335]: pgmap v2484: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.469 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.470 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.489 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.574 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.575 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.584 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.584 244018 INFO nova.compute.claims [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.706 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:01 compute-0 nova_compute[244014]: 2026-02-25 13:03:01.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:03:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967103420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.270 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.280 244018 DEBUG nova.compute.provider_tree [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.299 244018 DEBUG nova.scheduler.client.report [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:03:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/967103420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.331 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.332 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.385 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.386 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.409 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.431 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.536 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.538 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.538 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating image(s)
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.564 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.592 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.615 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.619 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.694 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
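The /usr/bin/python3 -m oslo_concurrency.prlimit prefix on that qemu-img command is what oslo.concurrency generates when execute() is handed a ProcessLimits, here capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time. A minimal sketch of the same call:

    import json
    from oslo_concurrency import processutils

    # --as=1073741824 is exactly 1024**3 bytes of address space.
    limits = processutils.ProcessLimits(address_space=1024 ** 3,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['virtual-size'])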
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.696 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.697 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.698 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.732 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:02 compute-0 nova_compute[244014]: 2026-02-25 13:03:02.738 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.107 244018 DEBUG nova.policy [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
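The failed check in the previous line is ordinary oslo.policy enforcement: the rule network:attach_external_network is evaluated against the request's credential dict, which here carries only the reader and member roles with is_admin False. A hedged sketch of the same evaluation; the 'is_admin:True' check string is an assumption about the rule's default:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.ConfigOpts())
    # Assumed default for this rule; deployments can override it
    # in policy.yaml.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))

    creds = {'is_admin': False, 'roles': ['reader', 'member'],
             'project_id': 'b9699483122f465084e3147e4904d13d'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" line above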
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.121 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.206 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
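The resize target in the previous line is simply the flavor's root disk expressed in bytes: the m1.nano flavor shown later in this boot has root_gb=1, and 1 GiB is 1024**3 bytes.

    # 1 GiB root disk -> bytes; matches the logged resize target.
    root_gb = 1
    assert root_gb * 1024 ** 3 == 1073741824
    # Equivalent CLI, as an illustration only (nova itself resizes
    # through the rbd python bindings, per rbd_utils.py above):
    #   rbd resize --pool vms \
    #       --image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk \
    #       --size 1G --id openstack --conf /etc/ceph/ceph.conf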
Feb 25 13:03:03 compute-0 ceph-mon[76335]: pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.437 244018 DEBUG nova.objects.instance [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.471 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.472 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Ensure instance console log exists: /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.472 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.473 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:03 compute-0 nova_compute[244014]: 2026-02-25 13:03:03.473 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:05 compute-0 ceph-mon[76335]: pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.252 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Successfully created port: 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.870 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Successfully updated port: 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.894 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.895 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:05 compute-0 nova_compute[244014]: 2026-02-25 13:03:05.895 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.012 244018 DEBUG nova.compute.manager [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.013 244018 DEBUG nova.compute.manager [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.013 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.065 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:06 compute-0 nova_compute[244014]: 2026-02-25 13:03:06.987 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.006 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.007 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance network_info: |[{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.008 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.008 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.013 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start _get_guest_xml network_info=[{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.018 244018 WARNING nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.031 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.031 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.037 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.037 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
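
[editor's note] The four entries above probe for a CPU controller first under cgroups v1 (missing on this host) and then under the unified v2 hierarchy (found). On a cgroups-v2 host the same check can be approximated by reading the controllers file; a sketch, assuming the standard /sys/fs/cgroup mount point:

    # cgroup.controllers on the v2 root lists every available controller.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu controller present:', 'cpu' in f.read().split())
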
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.038 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.039 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.039 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.040 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.040 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.042 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.042 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.043 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.043 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
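
[editor's note] In the topology run above, "0" means unconstrained: neither flavor nor image sets limits or preferences, so the effective maximum falls back to 65536 per dimension, and for a 1-vCPU flavor the only factorization is sockets=1, cores=1, threads=1. A simplified sketch of the enumeration (not nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) whose product equals the
        # vCPU count, within the per-dimension limits.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
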
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.047 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:07 compute-0 ceph-mon[76335]: pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:03:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:03:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410567786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.552 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
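
[editor's note] The monmap lookup above shells out to the ceph CLI rather than using librados. The invocation below copies the flags from the log; the JSON field names ("mons", "name", "addr") are an assumption about this Ceph release's output format:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    for mon in json.loads(out).get('mons', []):
        print(mon.get('name'), mon.get('addr'))
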
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.583 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:07 compute-0 nova_compute[244014]: 2026-02-25 13:03:07.588 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:03:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278592762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.181 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.185 244018 DEBUG nova.virt.libvirt.vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:02Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.185 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.187 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
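
[editor's note] nova_to_osvif_vif above maps the Neutron-style network_info dict onto an os-vif VIFOpenVSwitch object. A self-contained sketch of that field mapping using a stand-in dataclass rather than the real os_vif class:

    from dataclasses import dataclass

    @dataclass
    class OvsVif:  # stand-in for os_vif's VIFOpenVSwitch
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        has_traffic_filtering: bool

    def from_network_info(vif: dict) -> OvsVif:
        details = vif.get('details', {})
        return OvsVif(
            id=vif['id'],
            address=vif['address'],
            bridge_name=details.get('bridge_name', 'br-int'),
            vif_name=vif['devname'],          # -> tap7d538e82-43
            active=vif.get('active', False),
            has_traffic_filtering=details.get('port_filter', False),
        )
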
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.189 244018 DEBUG nova.objects.instance [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:03:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3410567786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4278592762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.211 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <uuid>8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</uuid>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <name>instance-00000096</name>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727</nova:name>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:03:07</nova:creationTime>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <nova:port uuid="7d538e82-43ce-4f65-b84b-fb9efe7d35b0">
Feb 25 13:03:08 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <system>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="serial">8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="uuid">8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </system>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <os>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </os>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <features>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </features>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk">
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </source>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config">
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </source>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:03:08 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:1a:bc:b9"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <target dev="tap7d538e82-43"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/console.log" append="off"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <video>
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </video>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:03:08 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:03:08 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:03:08 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:03:08 compute-0 nova_compute[244014]: </domain>
Feb 25 13:03:08 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
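
[editor's note] The generated domain XML above is the useful artifact when debugging storage and network wiring. A sketch that pulls the RBD disk sources and the interface MAC back out of a saved copy (the file path is illustrative):

    import xml.etree.ElementTree as ET

    dom = ET.parse('/tmp/instance-00000096.xml').getroot()
    for src in dom.findall("./devices/disk/source[@protocol='rbd']"):
        print('rbd image:', src.get('name'))   # vms/..._disk, vms/..._disk.config
    for mac in dom.findall('./devices/interface/mac'):
        print('mac:', mac.get('address'))      # fa:16:3e:1a:bc:b9
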
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.214 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Preparing to wait for external event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.214 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.215 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.215 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
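
[editor's note] The three lock entries above implement "register before trigger": the network-vif-plugged event is recorded under the instance's events lock before the VIF plug that will eventually complete it, so the completion can never race the registration. A generic sketch of the pattern, using threading in place of nova's eventlet machinery:

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, name):
        # Create (or fetch) the event under the lock, before kicking off
        # the work that will eventually set it.
        with _events_lock:
            return _events.setdefault((instance_uuid, name),
                                      threading.Event())
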
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.217 244018 DEBUG nova.virt.libvirt.vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:02Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.217 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.218 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.219 244018 DEBUG os_vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.221 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.226 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d538e82-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.227 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d538e82-43, col_values=(('external_ids', {'iface-id': '7d538e82-43ce-4f65-b84b-fb9efe7d35b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:bc:b9', 'vm-uuid': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:08 compute-0 NetworkManager[49836]: <info>  [1772024588.2310] manager: (tap7d538e82-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.238 244018 INFO os_vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43')
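
[editor's note] The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are roughly equivalent to one chained ovs-vsctl invocation; a sketch with the names copied from the log:

    import subprocess

    port = 'tap7d538e82-43'
    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
        'set', 'Interface', port,
        'external-ids:iface-id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0',
        'external-ids:iface-status=active',
        'external-ids:attached-mac=fa:16:3e:1a:bc:b9',
        'external-ids:vm-uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d',
    ])
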
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.304 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.305 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.305 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:1a:bc:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.306 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Using config drive
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.344 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.632 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.632 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:08 compute-0 nova_compute[244014]: 2026-02-25 13:03:08.653 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:03:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.119 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating config drive at /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.125 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp04qyp_3m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:09 compute-0 ceph-mon[76335]: pgmap v2488: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.266 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp04qyp_3m" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.323 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.329 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.502 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.503 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deleting local config drive /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config because it was imported into RBD.
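
[editor's note] The config-drive sequence above is: build an ISO9660 image locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy since the guest reads it over RBD (see the cdrom <disk> element in the domain XML). A condensed sketch; '/tmp/metadata' stands in for the staged metadata directory:

    import subprocess

    inst = '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d'
    iso = '/var/lib/nova/instances/%s/disk.config' % inst

    subprocess.check_call(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                           '-allow-lowercase', '-allow-multidot', '-l',
                           '-quiet', '-J', '-r', '-V', 'config-2',
                           '/tmp/metadata'])
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', iso,
                           '%s_disk.config' % inst, '--image-format=2',
                           '--id', 'openstack',
                           '--conf', '/etc/ceph/ceph.conf'])
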
Feb 25 13:03:09 compute-0 kernel: tap7d538e82-43: entered promiscuous mode
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.5570] manager: (tap7d538e82-43): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 ovn_controller[147040]: 2026-02-25T13:03:09Z|01586|binding|INFO|Claiming lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for this chassis.
Feb 25 13:03:09 compute-0 ovn_controller[147040]: 2026-02-25T13:03:09Z|01587|binding|INFO|7d538e82-43ce-4f65-b84b-fb9efe7d35b0: Claiming fa:16:3e:1a:bc:b9 10.100.0.6
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.579 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bc:b9 10.100.0.6'], port_security=['fa:16:3e:1a:bc:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba 45df576e-9e98-4f75-84d8-faaa81c292fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d538e82-43ce-4f65-b84b-fb9efe7d35b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.581 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de bound to our chassis
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 13:03:09 compute-0 systemd-machined[210048]: New machine qemu-184-instance-00000096.
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2809a6a-353d-4136-a277-90dfc210bb66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.593 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60551679-31 in ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 13:03:09 compute-0 systemd-udevd[380801]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.595 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60551679-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68cc4277-9ad9-4c96-9afe-2f73d1ac32cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.597 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d732679-b02c-437e-bfb3-fd57a273627e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
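
[editor's note] The provisioning entries above create a VETH pair whose -31 end lands inside the ovnmeta-<network> namespace, where the metadata proxy will listen. A rough iproute2 equivalent of that plumbing, with the names copied from the log:

    import subprocess

    ns = 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'
    subprocess.check_call(['ip', 'netns', 'add', ns])
    subprocess.check_call(['ip', 'link', 'add', 'tap60551679-30',
                           'type', 'veth', 'peer', 'name', 'tap60551679-31'])
    # The -31 peer moves into the namespace; the -30 end stays in the
    # root namespace and gets attached to br-int.
    subprocess.check_call(['ip', 'link', 'set', 'tap60551679-31', 'netns', ns])
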
Feb 25 13:03:09 compute-0 ovn_controller[147040]: 2026-02-25T13:03:09Z|01588|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 ovn-installed in OVS
Feb 25 13:03:09 compute-0 ovn_controller[147040]: 2026-02-25T13:03:09Z|01589|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 up in Southbound
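
[editor's note] Once ovn-controller has claimed the lport and marked it up in the Southbound DB, the binding can be double-checked from the chassis with ovn-sbctl's generic DB commands; a sketch, assuming ovn-sbctl can reach the SB database:

    import subprocess

    lport = '7d538e82-43ce-4f65-b84b-fb9efe7d35b0'
    print(subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=%s' % lport]).decode())
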
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.6069] device (tap7d538e82-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.606 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e27481d6-71a2-4caf-b892-e6367016c0dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.6092] device (tap7d538e82-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.631 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b3158c-0a0b-4df5-a415-c7aa6ebf3f95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.658 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[caf801e8-79ae-483b-9f23-f88d0ed386b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eea89d6f-e8c8-411e-913a-9f194856240e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.6650] manager: (tap60551679-30): new Veth device (/org/freedesktop/NetworkManager/Devices/653)
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.693 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[55464254-13c5-4337-a5b4-d68d5ed0e009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.695 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0094861e-96f7-4981-942f-f6d968b5c34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.7132] device (tap60551679-30): carrier: link connected
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.716 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1f303967-3f5b-4038-8d09-2bd3083defa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.731 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e09d918-d66b-4c89-bc95-1ac46ec8a625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380833, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba53ff78-f92a-47c8-aa33-c95412b34252]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a9f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655912, 'tstamp': 655912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380834, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f603e0d-17c1-4464-937d-49a94ff68f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380835, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[60c0aeb3-20e0-4361-ba15-9850f7bd2db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a93a3921-5455-4606-aa65-f35007f83499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
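[Editor's note] The privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de namespace: an RTM_NEWLINK dump of the tap60551679-31 veth and an RTM_NEWADDR dump of its link-local address. A minimal sketch of reproducing such a dump with pyroute2, assuming root privileges and that the namespace still exists:

    # Sketch only: re-run the link/address dumps logged above with pyroute2,
    # inside the ovnmeta-* namespace (requires root; namespace must exist).
    from pyroute2 import NetNS

    NS = 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'

    with NetNS(NS) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),      # e.g. 'tap60551679-31'
                  link.get_attr('IFLA_OPERSTATE'),   # e.g. 'UP'
                  link.get_attr('IFLA_ADDRESS'))     # e.g. 'fa:16:3e:9b:a9:f1'
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])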
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 kernel: tap60551679-30: entered promiscuous mode
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
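[Editor's note] Taken together, the three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) re-home the metadata tap: drop it from br-ex if present, add it to br-int, and tag the Interface row with the Neutron port UUID so ovn-controller can bind it. A sketch of the same transaction through ovsdbapp's Open_vSwitch schema API, assuming the usual local ovsdb-server socket path:

    # Sketch: replay the DelPort/AddPort/DbSet transaction logged above.
    # Assumes ovsdb-server listening on the usual local unix socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap60551679-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap60551679-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap60551679-30',
            ('external_ids',
             {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'})))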
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 NetworkManager[49836]: <info>  [1772024589.8327] manager: (tap60551679-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 ovn_controller[147040]: 2026-02-25T13:03:09Z|01590|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.836 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
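[Editor's note] The get_value_from_file message above is the agent probing for an existing haproxy pidfile before spawning a fresh proxy; ENOENT here is benign, since this is the first spawn for the network. A sketch of that defensive read pattern (the helper name get_pid is illustrative, not Neutron's):

    # Sketch: defensive pidfile read like the probe logged above;
    # a missing file (first spawn) is expected and yields None.
    def get_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip() or 0) or None
        except OSError:
            return None

    print(get_pid('/var/lib/neutron/external/pids/'
                  '60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy'))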
Feb 25 13:03:09 compute-0 nova_compute[244014]: 2026-02-25 13:03:09.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e311e079-d11e-483a-94b8-50eb6e0c2ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.848 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: global
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID 60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 13:03:09 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.849 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'env', 'PROCESS_TAG=haproxy-60551679-32db-4035-bd76-7d0d38a9d6de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60551679-32db-4035-bd76-7d0d38a9d6de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
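[Editor's note] The rendered haproxy config above binds 169.254.169.254:80 inside the network namespace and proxies to the /var/lib/neutron/metadata_proxy UNIX socket, adding the X-OVN-Network-ID header that identifies the requesting network. From a guest on that network the endpoint is reachable with plain HTTP; a stdlib-only sketch, assuming it runs inside such a guest:

    # Sketch: fetch instance metadata through the proxy configured above.
    # Assumes execution inside a guest on network 60551679-32db-....
    import json
    import urllib.request

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(URL, timeout=10) as resp:
        meta = json.load(resp)
    print(meta.get('uuid'), meta.get('name'))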
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.163 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.1633112, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.164 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Started (Lifecycle Event)
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.175 244018 DEBUG nova.compute.manager [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.176 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.176 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.177 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.177 244018 DEBUG nova.compute.manager [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Processing event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.178 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.182 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.188 244018 INFO nova.virt.libvirt.driver [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance spawned successfully.
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.191 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.195 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.210 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.211 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.219 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.219 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.164456, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.220 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Paused (Lifecycle Event)
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.243 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.247 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.181352, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Resumed (Lifecycle Event)
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.280 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.286 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:03:10 compute-0 podman[380907]: 2026-02-25 13:03:10.28691093 +0000 UTC m=+0.132299452 container create 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 13:03:10 compute-0 podman[380907]: 2026-02-25 13:03:10.192620021 +0000 UTC m=+0.038008503 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.297 244018 INFO nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 7.76 seconds to spawn the instance on the hypervisor.
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.298 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.305 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:03:10 compute-0 systemd[1]: Started libpod-conmon-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope.
Feb 25 13:03:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683ddb331af2ab3814797ad79c5d49ef9abaf7e6aaf6d9a7ae140104b9cbf429/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.361 244018 INFO nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 8.83 seconds to build instance.
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.375 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:10 compute-0 podman[380907]: 2026-02-25 13:03:10.438968816 +0000 UTC m=+0.284357298 container init 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:03:10 compute-0 podman[380907]: 2026-02-25 13:03:10.446333054 +0000 UTC m=+0.291721536 container start 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 13:03:10 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : New worker (380928) forked
Feb 25 13:03:10 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : Loading success.
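[Editor's note] The proxy itself runs as the podman container named neutron-haproxy-ovnmeta-<network-uuid>, created and started in the podman lines above. A sketch for checking its state from the host, assuming podman is on PATH and the caller has sufficient privileges:

    # Sketch: inspect the per-network haproxy container started above.
    import json
    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'
    out = subprocess.run(['podman', 'inspect', NAME],
                         check=True, capture_output=True, text=True).stdout
    info = json.loads(out)[0]
    print(info['State']['Status'], info['ImageName'])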
Feb 25 13:03:10 compute-0 nova_compute[244014]: 2026-02-25 13:03:10.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:03:11 compute-0 ceph-mon[76335]: pgmap v2489: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:03:11 compute-0 nova_compute[244014]: 2026-02-25 13:03:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:11 compute-0 nova_compute[244014]: 2026-02-25 13:03:11.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:03:11 compute-0 nova_compute[244014]: 2026-02-25 13:03:11.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.312 244018 DEBUG nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:03:12 compute-0 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 WARNING nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state active and task_state None.
Feb 25 13:03:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:13 compute-0 ceph-mon[76335]: pgmap v2490: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:13 compute-0 nova_compute[244014]: 2026-02-25 13:03:13.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:14 compute-0 ovn_controller[147040]: 2026-02-25T13:03:14Z|01591|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 13:03:14 compute-0 NetworkManager[49836]: <info>  [1772024594.1115] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Feb 25 13:03:14 compute-0 NetworkManager[49836]: <info>  [1772024594.1127] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Feb 25 13:03:14 compute-0 ovn_controller[147040]: 2026-02-25T13:03:14Z|01592|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG nova.compute.manager [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG nova.compute.manager [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:14 compute-0 nova_compute[244014]: 2026-02-25 13:03:14.495 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:03:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:15 compute-0 ceph-mon[76335]: pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:15 compute-0 nova_compute[244014]: 2026-02-25 13:03:15.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:15 compute-0 nova_compute[244014]: 2026-02-25 13:03:15.838 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:03:15 compute-0 nova_compute[244014]: 2026-02-25 13:03:15.838 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:15 compute-0 nova_compute[244014]: 2026-02-25 13:03:15.864 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
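[Editor's note] The instance_info_cache entry above is plain JSON, so extracting the addressing (fixed IP 10.100.0.6 with floating IP 192.168.122.182 on port 7d538e82-...) is mechanical. A sketch, assuming the logged list has been saved verbatim to a file named network_info.json:

    # Sketch: pull fixed/floating IPs out of the network_info JSON above.
    # Assumes the logged list was saved verbatim as network_info.json.
    import json

    with open('network_info.json') as f:
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [fip['address']
                            for fip in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floating)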
Feb 25 13:03:15 compute-0 nova_compute[244014]: 2026-02-25 13:03:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:16 compute-0 nova_compute[244014]: 2026-02-25 13:03:16.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:16 compute-0 nova_compute[244014]: 2026-02-25 13:03:16.886 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:03:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:17 compute-0 ceph-mon[76335]: pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:18 compute-0 nova_compute[244014]: 2026-02-25 13:03:18.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:19 compute-0 ceph-mon[76335]: pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:03:20 compute-0 nova_compute[244014]: 2026-02-25 13:03:20.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 13:03:21 compute-0 ceph-mon[76335]: pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 13:03:21 compute-0 podman[380940]: 2026-02-25 13:03:21.703524332 +0000 UTC m=+0.049958209 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 13:03:21 compute-0 podman[380941]: 2026-02-25 13:03:21.724289908 +0000 UTC m=+0.070778227 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 25 13:03:22 compute-0 ovn_controller[147040]: 2026-02-25T13:03:22Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:bc:b9 10.100.0.6
Feb 25 13:03:22 compute-0 ovn_controller[147040]: 2026-02-25T13:03:22Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:bc:b9 10.100.0.6
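[Editor's note] The DHCPOFFER/DHCPACK pair above is answered by OVN's native DHCP responder in ovn-controller's pinctrl thread; no dnsmasq is involved, and the lease parameters come from the northbound DHCP_Options table. A sketch for listing those option sets, assuming ovn-nbctl on this host can reach the NB database:

    # Sketch: show the DHCP_Options rows behind the DHCPOFFER/DHCPACK above.
    # Assumes ovn-nbctl is installed and can reach the OVN NB DB.
    import subprocess

    out = subprocess.run(
        ['ovn-nbctl', '--format=json', 'list', 'DHCP_Options'],
        check=True, capture_output=True, text=True).stdout
    print(out)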
Feb 25 13:03:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 13:03:23 compute-0 ceph-mon[76335]: pgmap v2495: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 13:03:23 compute-0 nova_compute[244014]: 2026-02-25 13:03:23.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:25 compute-0 ceph-mon[76335]: pgmap v2496: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:25 compute-0 nova_compute[244014]: 2026-02-25 13:03:25.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:27 compute-0 ceph-mon[76335]: pgmap v2497: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:28 compute-0 nova_compute[244014]: 2026-02-25 13:03:28.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:29 compute-0 ceph-mon[76335]: pgmap v2498: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:30 compute-0 nova_compute[244014]: 2026-02-25 13:03:30.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:31 compute-0 ceph-mon[76335]: pgmap v2499: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:03:31
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
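[Editor's note] This balancer pass ran in upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, i.e. the PG distribution is already as even as upmap can make it. A sketch for checking the same state by hand, assuming the ceph CLI and a usable keyring on this host:

    # Sketch: inspect balancer state matching the mgr log lines above.
    import subprocess

    for cmd in (['ceph', 'balancer', 'status'], ['ceph', 'pg', 'stat']):
        print(subprocess.run(cmd, check=True, capture_output=True,
                             text=True).stdout)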
Feb 25 13:03:31 compute-0 nova_compute[244014]: 2026-02-25 13:03:31.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:31.349 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:03:31 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:31.351 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
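[Editor's note] The matched UPDATE above is ovsdbapp's row-event machinery: the agent subscribes to SB_Global and, whenever northd bumps nb_cfg (49 -> 50 here), schedules its chassis heartbeat after a randomized delay (6 seconds in this run) to avoid a thundering herd. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base:

    # Sketch of a row event like the SbGlobalUpdateEvent matched above:
    # fires on any update to the single-row SB_Global table.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # A real handler would schedule the delayed chassis
            # heartbeat here; row.nb_cfg carries the new sequence no.
            print('nb_cfg is now', row.nb_cfg)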
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:03:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:03:33 compute-0 ceph-mon[76335]: pgmap v2500: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:03:33 compute-0 nova_compute[244014]: 2026-02-25 13:03:33.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 13:03:35 compute-0 ceph-mon[76335]: pgmap v2501: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 13:03:35 compute-0 nova_compute[244014]: 2026-02-25 13:03:35.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.471 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.471 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.489 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.579 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.580 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.594 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.595 244018 INFO nova.compute.claims [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:03:36 compute-0 nova_compute[244014]: 2026-02-25 13:03:36.736 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 13:03:37 compute-0 ceph-mon[76335]: pgmap v2502: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 13:03:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:03:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682082842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.264 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
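
The two lines above show nova-compute's RBD image backend sizing the Ceph pools before claiming disk for the new instance: it shells out to `ceph df --format=json` as the `client.openstack` user, and the mon's audit channel records the dispatch. A minimal sketch of the same probe, assuming the keyring referenced by /etc/ceph/ceph.conf is readable and a `vms` pool exists (both visible elsewhere in this log):

    import json
    import subprocess

    def pool_stats(pool="vms", user="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command as the log line: ceph df --format=json --id openstack ...
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        df = json.loads(out)
        # Each entry in df["pools"] carries a "stats" dict (bytes_used,
        # max_avail, ...) that the resource tracker uses to report capacity.
        return next(p["stats"] for p in df["pools"] if p["name"] == pool)

    print(pool_stats())
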
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.270 244018 DEBUG nova.compute.provider_tree [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.286 244018 DEBUG nova.scheduler.client.report [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:03:37 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:37.354 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.368 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.369 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.425 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.425 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.446 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.461 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.550 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.553 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.553 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating image(s)
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.581 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.608 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.634 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.638 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.669 244018 DEBUG nova.policy [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.709 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
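
Note the guard around qemu-img in the command above: Nova runs `qemu-img info` through `python3 -m oslo_concurrency.prlimit` with a 1 GiB address-space cap (`--as=1073741824`) and a 30 s CPU cap (`--cpu=30`), so a malformed or hostile image cannot exhaust the compute host while being inspected. A minimal sketch of the same guarded call through oslo.concurrency's own API (the base-image path is copied from the log line):

    from oslo_concurrency import processutils

    # Mirrors --as=1073741824 --cpu=30 from the log line above.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)
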
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.709 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.710 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.710 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.734 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:37 compute-0 nova_compute[244014]: 2026-02-25 13:03:37.738 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5cd02959-e31f-4dd4-a50d-caafefc56629_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2682082842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.129 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5cd02959-e31f-4dd4-a50d-caafefc56629_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.209 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
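
The root-disk flow here is import-then-resize: the cached base image is pushed into the `vms` pool as a format-2 RBD image (the `rbd import` above, 0.391s), then grown to 1073741824 bytes, matching the m1.nano flavor's root_gb=1 seen later in this log. Nova performs the resize through the librbd Python binding; an equivalent CLI sketch, with names copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "5cd02959-e31f-4dd4-a50d-caafefc56629_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2"] + auth)
    # Grow the image to the flavor's root disk size (1 GiB).
    subprocess.check_call(["rbd", "resize", "--size", "1G", "vms/" + disk] + auth)
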
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.321 244018 DEBUG nova.objects.instance [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.337 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.338 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Ensure instance console log exists: /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.338 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.339 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.340 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
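
The acquiring/acquired/released triplets that pepper this log (compute_resources, vgpu_resources, the image-cache hash) are all emitted by oslo.concurrency's lockutils wrappers, which log wait and hold times around each critical section. A minimal sketch of the pattern, assuming oslo.concurrency is installed; the function name is a stand-in, not Nova's code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def _allocate_mdevs():
        # Body runs with the named in-process lock held; lockutils emits the
        # acquire/release debug lines with timings, as seen above.
        ...
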
Feb 25 13:03:38 compute-0 nova_compute[244014]: 2026-02-25 13:03:38.525 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Successfully created port: b54ec6ac-470f-4041-8bcc-ed69244b5317 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:03:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:38 compute-0 sudo[381176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:03:38 compute-0 sudo[381176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:38 compute-0 sudo[381176]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:38 compute-0 sudo[381201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:03:38 compute-0 sudo[381201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 0 op/s
Feb 25 13:03:39 compute-0 ceph-mon[76335]: pgmap v2503: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 0 op/s
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.379 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Successfully updated port: b54ec6ac-470f-4041-8bcc-ed69244b5317 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.396 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.397 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.397 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:03:39 compute-0 sudo[381201]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG nova.compute.manager [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG nova.compute.manager [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing instance network info cache due to event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:03:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:03:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.545 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:03:39 compute-0 sudo[381255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:03:39 compute-0 sudo[381255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:39 compute-0 sudo[381255]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:39 compute-0 sudo[381280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
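
Interleaved with the Nova build, cephadm is deploying OSDs: the sudo line above shows it launching ceph-volume inside a short-lived ceph container against three pre-created logical volumes, with the cluster fsid and a config JSON piped on stdin (the `--config-json -` argument). Stripped of that container plumbing, the wrapped command after the `--` separator is, as a sketch:

    import subprocess

    # The fsid and config are normally injected by the cephadm wrapper;
    # flags and LV paths below are copied verbatim from the log line.
    subprocess.check_call([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
        "--objectstore", "bluestore", "--yes", "--no-systemd",
    ])
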
Feb 25 13:03:39 compute-0 sudo[381280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:03:39 compute-0 podman[381318]: 2026-02-25 13:03:39.900769 +0000 UTC m=+0.079159393 container create 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:03:39 compute-0 nova_compute[244014]: 2026-02-25 13:03:39.913 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:03:39 compute-0 systemd[1]: Started libpod-conmon-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope.
Feb 25 13:03:39 compute-0 podman[381318]: 2026-02-25 13:03:39.865911968 +0000 UTC m=+0.044302441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:40 compute-0 podman[381318]: 2026-02-25 13:03:40.021881796 +0000 UTC m=+0.200272269 container init 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:03:40 compute-0 podman[381318]: 2026-02-25 13:03:40.035570582 +0000 UTC m=+0.213961005 container start 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:03:40 compute-0 frosty_shaw[381334]: 167 167
Feb 25 13:03:40 compute-0 systemd[1]: libpod-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope: Deactivated successfully.
Feb 25 13:03:40 compute-0 podman[381318]: 2026-02-25 13:03:40.096902491 +0000 UTC m=+0.275292974 container attach 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:03:40 compute-0 podman[381318]: 2026-02-25 13:03:40.09792085 +0000 UTC m=+0.276311283 container died 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:03:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-244e6d46700cc4535f2a74d909f1c262f62b0f220771f31726b3af3efa5278dc-merged.mount: Deactivated successfully.
Feb 25 13:03:40 compute-0 podman[381318]: 2026-02-25 13:03:40.195117731 +0000 UTC m=+0.373508154 container remove 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:03:40 compute-0 systemd[1]: libpod-conmon-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope: Deactivated successfully.
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.249 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.268 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.269 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance network_info: |[{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.269 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.270 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.275 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start _get_guest_xml network_info=[{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.281 244018 WARNING nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.293 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.294 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.298 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.299 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.299 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.300 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.300 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.302 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.302 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
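
The topology walk above is easy to reproduce: with no flavor or image constraints (preferred 0:0:0, limits of 65536 each), the solver enumerates every (sockets, cores, threads) factorization of the flavor's vCPU count, and for m1.nano's single vCPU that leaves exactly one candidate, 1:1:1. A hedged sketch of the enumeration (not Nova's exact code):

    def possible_topologies(vcpus):
        # Enumerate (sockets, cores, threads) with sockets*cores*threads == vcpus.
        tops = []
        for sockets in range(1, vcpus + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, rest + 1):
                if rest % cores:
                    continue
                tops.append((sockets, cores, rest // cores))
        return tops

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
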
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.309 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.343996439 +0000 UTC m=+0.049103216 container create f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:03:40 compute-0 systemd[1]: Started libpod-conmon-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope.
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.319533029 +0000 UTC m=+0.024639826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.448578008 +0000 UTC m=+0.153684805 container init f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.457549851 +0000 UTC m=+0.162656628 container start f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.465752962 +0000 UTC m=+0.170859759 container attach f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:40 compute-0 affectionate_cartwright[381378]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:03:40 compute-0 affectionate_cartwright[381378]: --> All data devices are unavailable
Feb 25 13:03:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:03:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883640857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:40 compute-0 systemd[1]: libpod-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope: Deactivated successfully.
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.912360676 +0000 UTC m=+0.617467463 container died f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e-merged.mount: Deactivated successfully.
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.950 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Feb 25 13:03:40 compute-0 podman[381359]: 2026-02-25 13:03:40.95290048 +0000 UTC m=+0.658007257 container remove f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:03:40 compute-0 systemd[1]: libpod-conmon-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope: Deactivated successfully.
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.975 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:40 compute-0 nova_compute[244014]: 2026-02-25 13:03:40.979 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:40 compute-0 sudo[381280]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:41 compute-0 sudo[381450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:03:41 compute-0 sudo[381450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:41 compute-0 sudo[381450]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:41 compute-0 sudo[381494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:03:41 compute-0 sudo[381494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3883640857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:41 compute-0 ceph-mon[76335]: pgmap v2504: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.363123587 +0000 UTC m=+0.059574411 container create 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.379 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updated VIF entry in instance network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.380 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.403 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:03:41 compute-0 systemd[1]: Started libpod-conmon-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope.
Feb 25 13:03:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:03:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357654892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.336349072 +0000 UTC m=+0.032799886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.443 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.45612186 +0000 UTC m=+0.152572704 container init 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.463342603 +0000 UTC m=+0.159793397 container start 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:03:41 compute-0 nervous_kapitsa[381568]: 167 167
Feb 25 13:03:41 compute-0 systemd[1]: libpod-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope: Deactivated successfully.
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.470488655 +0000 UTC m=+0.166939519 container attach 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.471075781 +0000 UTC m=+0.167526575 container died 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:03:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:03:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250334373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.506 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.506 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 13:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-df343bbb39ede07b31ca689e7ef4fd93f4929b214c0fdccec9a6940206686fb3-merged.mount: Deactivated successfully.
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.519 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.520 244018 DEBUG nova.virt.libvirt.vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:37Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.521 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.521 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.523 244018 DEBUG nova.objects.instance [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <uuid>5cd02959-e31f-4dd4-a50d-caafefc56629</uuid>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <name>instance-00000097</name>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438</nova:name>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:03:40</nova:creationTime>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <nova:port uuid="b54ec6ac-470f-4041-8bcc-ed69244b5317">
Feb 25 13:03:41 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <system>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="serial">5cd02959-e31f-4dd4-a50d-caafefc56629</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="uuid">5cd02959-e31f-4dd4-a50d-caafefc56629</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </system>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <os>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </os>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <features>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </features>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5cd02959-e31f-4dd4-a50d-caafefc56629_disk">
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </source>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config">
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </source>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:03:41 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:fa:d1:ab"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <target dev="tapb54ec6ac-47"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/console.log" append="off"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <video>
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </video>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:03:41 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:03:41 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:03:41 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:03:41 compute-0 nova_compute[244014]: </domain>
Feb 25 13:03:41 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Preparing to wait for external event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.538 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.538 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.539 244018 DEBUG nova.virt.libvirt.vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:37Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.539 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.540 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.540 244018 DEBUG os_vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.542 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.542 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb54ec6ac-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.547 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb54ec6ac-47, col_values=(('external_ids', {'iface-id': 'b54ec6ac-470f-4041-8bcc-ed69244b5317', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:d1:ab', 'vm-uuid': '5cd02959-e31f-4dd4-a50d-caafefc56629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:41 compute-0 NetworkManager[49836]: <info>  [1772024621.5504] manager: (tapb54ec6ac-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:03:41 compute-0 podman[381551]: 2026-02-25 13:03:41.552858358 +0000 UTC m=+0.249309142 container remove 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.557 244018 INFO os_vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47')
Feb 25 13:03:41 compute-0 systemd[1]: libpod-conmon-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope: Deactivated successfully.
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.599 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.600 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.600 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:fa:d1:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.601 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Using config drive
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.620 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:41 compute-0 podman[381617]: 2026-02-25 13:03:41.695423208 +0000 UTC m=+0.038608840 container create 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.729 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3238MB free_disk=59.941737859509885GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:41 compute-0 systemd[1]: Started libpod-conmon-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope.
Feb 25 13:03:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:41 compute-0 podman[381617]: 2026-02-25 13:03:41.675677301 +0000 UTC m=+0.018862963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:41 compute-0 podman[381617]: 2026-02-25 13:03:41.809389742 +0000 UTC m=+0.152575474 container init 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:03:41 compute-0 podman[381617]: 2026-02-25 13:03:41.818169549 +0000 UTC m=+0.161355181 container start 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:03:41 compute-0 podman[381617]: 2026-02-25 13:03:41.82812847 +0000 UTC m=+0.171314192 container attach 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.829 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.829 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5cd02959-e31f-4dd4-a50d-caafefc56629 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.830 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.830 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
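
The "Final resource view" numbers follow directly from the two per-instance allocations logged just above plus the host reservation: used_ram is the 512 MB MEMORY_MB reservation (visible in the inventory reported to placement later in this log) plus each instance's 128 MB, and used_vcpus/used_disk are plain sums. A minimal sketch re-deriving the line, assuming only the values shown in this log:

    # Re-derive the resource tracker's summary line from the logged allocations.
    allocations = [
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # instance 8c7de3d4-...
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # instance 5cd02959-...
    ]
    reserved_host_memory_mb = 512  # the MEMORY_MB 'reserved' value reported to placement

    used_ram = reserved_host_memory_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)

    print(f"used_ram={used_ram}MB used_disk={used_disk}GB used_vcpus={used_vcpus}")
    # -> used_ram=768MB used_disk=2GB used_vcpus=2, matching the log line above
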
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.875 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.901 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating config drive at /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config
Feb 25 13:03:41 compute-0 nova_compute[244014]: 2026-02-25 13:03:41.905 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbmapajmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.041 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbmapajmg" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
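
The config drive is produced by shelling out to mkisofs with exactly the flags logged above; the "config-2" volume label is what cloud-init probes for inside the guest. A sketch of the same invocation via subprocess (the staging directory name is a hypothetical stand-in for the ephemeral /tmp/tmpbmapajmg path):

    import subprocess

    instance_dir = "/var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629"
    output_iso = f"{instance_dir}/disk.config"
    metadata_dir = "/tmp/configdrive-staging"  # hypothetical stand-in for the tmp staging dir

    # Mirror the mkisofs command logged above; nova appends its own version
    # string to the -publisher value.
    cmd = [
        "/usr/bin/mkisofs", "-o", output_iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",
        "-quiet", "-J", "-r", "-V", "config-2",  # label cloud-init looks for
        metadata_dir,
    ]
    subprocess.run(cmd, check=True)
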
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.077 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.082 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]: {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     "0": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "devices": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "/dev/loop3"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             ],
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_name": "ceph_lv0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_size": "21470642176",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "name": "ceph_lv0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "tags": {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_name": "ceph",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.crush_device_class": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.encrypted": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.objectstore": "bluestore",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_id": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.vdo": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.with_tpm": "0"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             },
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "vg_name": "ceph_vg0"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         }
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     ],
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     "1": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "devices": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "/dev/loop4"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             ],
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_name": "ceph_lv1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_size": "21470642176",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "name": "ceph_lv1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "tags": {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_name": "ceph",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.crush_device_class": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.encrypted": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.objectstore": "bluestore",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_id": "1",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.vdo": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.with_tpm": "0"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             },
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "vg_name": "ceph_vg1"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         }
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     ],
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     "2": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "devices": [
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "/dev/loop5"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             ],
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_name": "ceph_lv2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_size": "21470642176",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "name": "ceph_lv2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "tags": {
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.cluster_name": "ceph",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.crush_device_class": "",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.encrypted": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.objectstore": "bluestore",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osd_id": "2",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.vdo": "0",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:                 "ceph.with_tpm": "0"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             },
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "type": "block",
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:             "vg_name": "ceph_vg2"
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:         }
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]:     ]
Feb 25 13:03:42 compute-0 suspicious_khayyam[381633]: }
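
The JSON blob printed by the suspicious_khayyam container matches the `ceph-volume lvm list --format json` layout: top-level keys are OSD ids, each mapping to a list of LV records. A minimal sketch that maps each OSD to its backing device and LV, assuming the blob above was captured to a file (lvm_list.json is a hypothetical name):

    import json

    # Summarize ceph-volume lvm list JSON like the blob above.
    with open("lvm_list.json") as f:  # hypothetical capture of the container's stdout
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"({size_gib:.1f} GiB, fsid {lv['tags']['ceph.osd_fsid']})")
    # With the data above this prints three ~20.0 GiB OSDs on /dev/loop3..5.
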
Feb 25 13:03:42 compute-0 systemd[1]: libpod-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope: Deactivated successfully.
Feb 25 13:03:42 compute-0 podman[381617]: 2026-02-25 13:03:42.114240108 +0000 UTC m=+0.457425780 container died 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:03:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:03:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898101735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.371 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
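
This `ceph df --format=json` probe is how the RBD-backed driver sizes the DISK_GB inventory it reports to placement a few lines below. A sketch of the same probe, assuming the ceph CLI and the client.openstack keyring used in the log, and the usual top-level "stats" section of ceph df JSON output:

    import json
    import subprocess

    # Run the capacity probe nova logged above and pull out the totals.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    total_gb = stats["total_bytes"] / 2**30
    avail_gb = stats["total_avail_bytes"] / 2**30
    print(f"cluster capacity: {total_gb:.0f} GiB total, {avail_gb:.0f} GiB available")
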
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.376 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38-merged.mount: Deactivated successfully.
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.389 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:03:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/357654892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3250334373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.408 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:03:42 compute-0 nova_compute[244014]: 2026-02-25 13:03:42.408 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
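
The acquired/released pair bracketing _update_available_resource (waited 0.000s, held 0.677s) comes from oslo.concurrency's lockutils; the same semantics are available as a decorator. A minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Callers of the resource tracker serialize on the named
    # "compute_resources" semaphore; the log records wait and hold times
    # around the decorated section.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # inventory recalculation runs with the lock held

    update_available_resource()
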
Feb 25 13:03:42 compute-0 podman[381617]: 2026-02-25 13:03:42.826377799 +0000 UTC m=+1.169563471 container remove 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:03:42 compute-0 sudo[381494]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:42 compute-0 systemd[1]: libpod-conmon-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope: Deactivated successfully.
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007758747128651013 of space, bias 1.0, pg target 0.2327624138595304 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024944784865292705 of space, bias 1.0, pg target 0.7483435459587812 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
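
The pg_autoscaler lines above can be reproduced with a hedged reconstruction: for this 3-OSD cluster with the default mon_target_pg_per_osd of 100, each logged "pg target" equals capacity_ratio * bias * 300, and the "quantized" value is the nearest power of two floored at the pool's pg_num_min (32 by default, 1 for .mgr, 16 for cephfs metadata pools). A sketch under those assumptions (the real module additionally honors target ratios and a change threshold before touching pg_num):

    import math

    def ideal_pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # e.g. pool 'vms': 0.0007758747 * 1.0 * 300 ~= 0.2327624, as logged
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    def quantize(ideal, pg_num_min=32):
        # nearest power of two, but never below the pool's floor
        power = 1 if ideal < 1 else 2 ** round(math.log2(ideal))
        return max(pg_num_min, power)

    print(quantize(ideal_pg_target(0.0007758747128651013, 1.0)))       # vms -> 32
    print(quantize(ideal_pg_target(1.4133407414918176e-06, 4.0), 16))  # cephfs.cephfs.meta -> 16
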
Feb 25 13:03:42 compute-0 sudo[381716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:03:42 compute-0 sudo[381716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:42 compute-0 sudo[381716]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:43 compute-0 sudo[381741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:03:43 compute-0 sudo[381741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.295 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.295 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deleting local config drive /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config because it was imported into RBD.
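
As the two lines above show, the finished ISO is imported into the vms pool as <uuid>_disk.config and the local copy removed. A sketch of the same sequence with the rbd CLI, using the paths from this log:

    import os
    import subprocess

    instance_uuid = "5cd02959-e31f-4dd4-a50d-caafefc56629"
    local_iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"

    # Import the config drive as a format-2 RBD image, mirroring the command
    # logged above, then drop the local copy once the import succeeds.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local_iso,
         f"{instance_uuid}_disk.config", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(local_iso)
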
Feb 25 13:03:43 compute-0 kernel: tapb54ec6ac-47: entered promiscuous mode
Feb 25 13:03:43 compute-0 NetworkManager[49836]: <info>  [1772024623.3365] manager: (tapb54ec6ac-47): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Feb 25 13:03:43 compute-0 ovn_controller[147040]: 2026-02-25T13:03:43Z|01593|binding|INFO|Claiming lport b54ec6ac-470f-4041-8bcc-ed69244b5317 for this chassis.
Feb 25 13:03:43 compute-0 ovn_controller[147040]: 2026-02-25T13:03:43Z|01594|binding|INFO|b54ec6ac-470f-4041-8bcc-ed69244b5317: Claiming fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 ovn_controller[147040]: 2026-02-25T13:03:43Z|01595|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 up in Southbound
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 ovn_controller[147040]: 2026-02-25T13:03:43Z|01596|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 ovn-installed in OVS
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.345 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:d1:ab 10.100.0.10'], port_security=['fa:16:3e:fa:d1:ab 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5cd02959-e31f-4dd4-a50d-caafefc56629', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b54ec6ac-470f-4041-8bcc-ed69244b5317) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b54ec6ac-470f-4041-8bcc-ed69244b5317 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de bound to our chassis
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.349 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2f9a24-14c9-4803-8ac8-e8e4da483464]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.370489572 +0000 UTC m=+0.052131711 container create 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:03:43 compute-0 systemd-machined[210048]: New machine qemu-185-instance-00000097.
Feb 25 13:03:43 compute-0 systemd-udevd[381809]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:03:43 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.385 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6906fb-9c85-4aef-acec-b3bdb14f0af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 NetworkManager[49836]: <info>  [1772024623.3873] device (tapb54ec6ac-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:03:43 compute-0 NetworkManager[49836]: <info>  [1772024623.3899] device (tapb54ec6ac-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.388 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4789409c-02ef-4077-80e5-b66903f886e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 systemd[1]: Started libpod-conmon-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope.
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.413 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28eaa4-d50f-4068-8bb7-11c1a2b5d4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5c218-a428-49e7-9c0e-4f3f0ace74ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381819, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.350002355 +0000 UTC m=+0.031644504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.454 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb21b62-fd84-4660-9618-8c58a401ae7d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655920, 'tstamp': 655920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381827, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655922, 'tstamp': 655922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381827, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.456 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:03:43 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.460 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
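
The metadata provisioning above leaves the namespace ovnmeta-60551679-... holding the subnet address 10.100.0.2/28 plus the link-local metadata address 169.254.169.254/32 on tap60551679-31. A sketch that inspects the namespace the same way (requires root; assumes an iproute2 with JSON output support):

    import json
    import subprocess

    ns = "ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de"

    # List addresses inside the OVN metadata namespace; with the state logged
    # above this should show 10.100.0.2/28 and 169.254.169.254/32.
    out = subprocess.run(
        ["ip", "netns", "exec", ns, "ip", "-json", "addr", "show"],
        check=True, capture_output=True, text=True,
    ).stdout
    for link in json.loads(out):
        for addr in link.get("addr_info", []):
            print(f"{link['ifname']}: {addr['local']}/{addr['prefixlen']}")
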
Feb 25 13:03:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2898101735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:03:43 compute-0 ceph-mon[76335]: pgmap v2505: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.522 244018 DEBUG nova.compute.manager [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.522 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.523 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.523 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.524 244018 DEBUG nova.compute.manager [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Processing event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.617758805 +0000 UTC m=+0.299400984 container init 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.628759385 +0000 UTC m=+0.310401534 container start 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:03:43 compute-0 systemd[1]: libpod-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope: Deactivated successfully.
Feb 25 13:03:43 compute-0 blissful_fermat[381817]: 167 167
Feb 25 13:03:43 compute-0 conmon[381817]: conmon 9b684dc8cf16ab066afa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope/container/memory.events
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.744064417 +0000 UTC m=+0.425706576 container attach 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:03:43 compute-0 podman[381780]: 2026-02-25 13:03:43.744814788 +0000 UTC m=+0.426456927 container died 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:03:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.905 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.904618, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.905 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Started (Lifecycle Event)
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.908 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.913 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.917 244018 INFO nova.virt.libvirt.driver [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance spawned successfully.
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.917 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.939 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-056b89cb761559a247c1da86d852e8f03a8dff30dfe92586d6c636302ad55c5c-merged.mount: Deactivated successfully.
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.948 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.955 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.955 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.956 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.9048011, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:43 compute-0 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Paused (Lifecycle Event)
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.012 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.015 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.9116669, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.015 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Resumed (Lifecycle Event)
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.026 244018 INFO nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 6.47 seconds to spawn the instance on the hypervisor.
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.026 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.036 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.058 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.087 244018 INFO nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 7.55 seconds to build instance.
Feb 25 13:03:44 compute-0 nova_compute[244014]: 2026-02-25 13:03:44.101 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:44 compute-0 podman[381780]: 2026-02-25 13:03:44.292110581 +0000 UTC m=+0.973752720 container remove 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:03:44 compute-0 systemd[1]: libpod-conmon-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope: Deactivated successfully.
Feb 25 13:03:44 compute-0 podman[381891]: 2026-02-25 13:03:44.457618378 +0000 UTC m=+0.034671028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:03:44 compute-0 podman[381891]: 2026-02-25 13:03:44.572811117 +0000 UTC m=+0.149863747 container create 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:03:44 compute-0 systemd[1]: Started libpod-conmon-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope.
Feb 25 13:03:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:03:44 compute-0 podman[381891]: 2026-02-25 13:03:44.898516461 +0000 UTC m=+0.475569131 container init 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:03:44 compute-0 podman[381891]: 2026-02-25 13:03:44.904817909 +0000 UTC m=+0.481870529 container start 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:03:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:44 compute-0 podman[381891]: 2026-02-25 13:03:44.981863101 +0000 UTC m=+0.558915831 container attach 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:03:45 compute-0 ceph-mon[76335]: pgmap v2506: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.403 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:45 compute-0 lvm[381983]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:03:45 compute-0 lvm[381983]: VG ceph_vg0 finished
Feb 25 13:03:45 compute-0 lvm[381985]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:03:45 compute-0 lvm[381985]: VG ceph_vg1 finished
Feb 25 13:03:45 compute-0 lvm[381987]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:03:45 compute-0 lvm[381987]: VG ceph_vg2 finished
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.590 244018 DEBUG nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.592 244018 DEBUG nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.592 244018 WARNING nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received unexpected event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with vm_state active and task_state None.
Feb 25 13:03:45 compute-0 naughty_wiles[381908]: {}
Feb 25 13:03:45 compute-0 systemd[1]: libpod-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Deactivated successfully.
Feb 25 13:03:45 compute-0 podman[381891]: 2026-02-25 13:03:45.697146122 +0000 UTC m=+1.274198782 container died 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:03:45 compute-0 systemd[1]: libpod-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Consumed 1.048s CPU time.
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f-merged.mount: Deactivated successfully.
Feb 25 13:03:45 compute-0 nova_compute[244014]: 2026-02-25 13:03:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:45 compute-0 podman[381891]: 2026-02-25 13:03:45.905760654 +0000 UTC m=+1.482813314 container remove 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:03:45 compute-0 systemd[1]: libpod-conmon-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Deactivated successfully.
Feb 25 13:03:45 compute-0 sudo[381741]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:03:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:03:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:46 compute-0 sudo[382003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:03:46 compute-0 sudo[382003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:03:46 compute-0 sudo[382003]: pam_unix(sudo:session): session closed for user root
Feb 25 13:03:46 compute-0 nova_compute[244014]: 2026-02-25 13:03:46.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:46 compute-0 nova_compute[244014]: 2026-02-25 13:03:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:03:47 compute-0 ceph-mon[76335]: pgmap v2507: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 13:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:03:47 compute-0 nova_compute[244014]: 2026-02-25 13:03:47.670 244018 DEBUG nova.compute.manager [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:03:47 compute-0 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG nova.compute.manager [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing instance network info cache due to event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:03:47 compute-0 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:03:47 compute-0 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:03:47 compute-0 nova_compute[244014]: 2026-02-25 13:03:47.672 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:03:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:03:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:03:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:48 compute-0 nova_compute[244014]: 2026-02-25 13:03:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:49 compute-0 nova_compute[244014]: 2026-02-25 13:03:49.125 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updated VIF entry in instance network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:03:49 compute-0 nova_compute[244014]: 2026-02-25 13:03:49.126 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:03:49 compute-0 nova_compute[244014]: 2026-02-25 13:03:49.147 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:03:49 compute-0 ceph-mon[76335]: pgmap v2508: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:50 compute-0 nova_compute[244014]: 2026-02-25 13:03:50.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:51 compute-0 ceph-mon[76335]: pgmap v2509: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:51 compute-0 nova_compute[244014]: 2026-02-25 13:03:51.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:52 compute-0 podman[382028]: 2026-02-25 13:03:52.734859146 +0000 UTC m=+0.083690261 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:03:52 compute-0 podman[382029]: 2026-02-25 13:03:52.751783833 +0000 UTC m=+0.091533672 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:03:52 compute-0 nova_compute[244014]: 2026-02-25 13:03:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:52 compute-0 nova_compute[244014]: 2026-02-25 13:03:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:52 compute-0 nova_compute[244014]: 2026-02-25 13:03:52.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:03:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:53 compute-0 ceph-mon[76335]: pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 13:03:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:54 compute-0 nova_compute[244014]: 2026-02-25 13:03:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:03:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 13:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.041 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:03:55 compute-0 ceph-mon[76335]: pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 13:03:55 compute-0 nova_compute[244014]: 2026-02-25 13:03:55.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:56 compute-0 nova_compute[244014]: 2026-02-25 13:03:56.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:03:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 13:03:56 compute-0 ovn_controller[147040]: 2026-02-25T13:03:56Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 13:03:56 compute-0 ovn_controller[147040]: 2026-02-25T13:03:56Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 13:03:57 compute-0 ceph-mon[76335]: pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 13:03:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:03:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 13:03:59 compute-0 ceph-mon[76335]: pgmap v2513: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 13:04:00 compute-0 nova_compute[244014]: 2026-02-25 13:04:00.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:01 compute-0 ceph-mon[76335]: pgmap v2514: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:01 compute-0 nova_compute[244014]: 2026-02-25 13:04:01.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.219 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.220 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.220 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.221 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.221 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.223 244018 INFO nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Terminating instance
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.225 244018 DEBUG nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:04:02 compute-0 kernel: tapb54ec6ac-47 (unregistering): left promiscuous mode
Feb 25 13:04:02 compute-0 NetworkManager[49836]: <info>  [1772024642.3886] device (tapb54ec6ac-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 ovn_controller[147040]: 2026-02-25T13:04:02Z|01597|binding|INFO|Releasing lport b54ec6ac-470f-4041-8bcc-ed69244b5317 from this chassis (sb_readonly=0)
Feb 25 13:04:02 compute-0 ovn_controller[147040]: 2026-02-25T13:04:02Z|01598|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 down in Southbound
Feb 25 13:04:02 compute-0 ovn_controller[147040]: 2026-02-25T13:04:02Z|01599|binding|INFO|Removing iface tapb54ec6ac-47 ovn-installed in OVS
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.405 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:d1:ab 10.100.0.10'], port_security=['fa:16:3e:fa:d1:ab 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5cd02959-e31f-4dd4-a50d-caafefc56629', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9c3f9f1f-262b-43f9-8a65-6c1e7c9534cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b54ec6ac-470f-4041-8bcc-ed69244b5317) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b54ec6ac-470f-4041-8bcc-ed69244b5317 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de unbound from our chassis
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.411 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[078a6907-a734-408f-b9b7-bae46ac603b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Feb 25 13:04:02 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 12.498s CPU time.
Feb 25 13:04:02 compute-0 systemd-machined[210048]: Machine qemu-185-instance-00000097 terminated.
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.457 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a87bccd-90e4-4f17-93da-e93a9b0bcd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.462 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e853315d-9846-483d-92fb-23c284b62351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.493 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba850934-c2e3-4221-82bb-cfb53c396114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acd1a0c9-6cd6-4419-8fa5-8ef4dbf2e9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382086, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb977c6c-930f-41cd-9e5f-8c11e9fefb58]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655920, 'tstamp': 655920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382087, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655922, 'tstamp': 655922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382087, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.534 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.541 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.542 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.665 244018 INFO nova.virt.libvirt.driver [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance destroyed successfully.
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.666 244018 DEBUG nova.objects.instance [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.680 244018 DEBUG nova.virt.libvirt.vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:03:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:03:44Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.681 244018 DEBUG nova.network.os_vif_util [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.683 244018 DEBUG nova.network.os_vif_util [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
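The two os_vif_util lines above show Nova translating its legacy VIF dict into a typed os-vif object before unplugging. A minimal sketch of building the same VIFOpenVSwitch by hand with os-vif's object model — field names are taken from the "Converted object" line itself; this is illustrative, not Nova's conversion code:

    from os_vif.objects import network as net_obj
    from os_vif.objects import vif as vif_obj

    network = net_obj.Network(id='60551679-32db-4035-bd76-7d0d38a9d6de',
                              bridge='br-int')
    vif = vif_obj.VIFOpenVSwitch(
        id='b54ec6ac-470f-4041-8bcc-ed69244b5317',
        address='fa:16:3e:fa:d1:ab',
        vif_name='tapb54ec6ac-47',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=network,
        # the port profile carries the OVS interface id used for the binding
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='b54ec6ac-470f-4041-8bcc-ed69244b5317'))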
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.684 244018 DEBUG os_vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.687 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb54ec6ac-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:04:02 compute-0 nova_compute[244014]: 2026-02-25 13:04:02.696 244018 INFO os_vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47')
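The whole unplug reduces to the single OVSDB transaction logged at 13:04:02.687: DelPortCommand removes the tap port from br-int if it exists, the same effect as `ovs-vsctl --if-exists del-port br-int tapb54ec6ac-47`. A minimal ovsdbapp sketch of that call, assuming the default local OVSDB unix socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # connect to the local switch database and build the high-level API
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # one-command transaction, mirroring DelPortCommand(..., if_exists=True)
    api.del_port('tapb54ec6ac-47', bridge='br-int',
                 if_exists=True).execute(check_error=True)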
Feb 25 13:04:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:03 compute-0 ceph-mon[76335]: pgmap v2515: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.282 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.282 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.284 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
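The acquire/release triple above is the standard oslo.concurrency pattern: a named in-process lock serializes access to the per-instance event table while the incoming external event is popped. A minimal sketch of the same pattern — pending_events is a hypothetical stand-in for InstanceEvents' internal state:

    from oslo_concurrency import lockutils

    pending_events = {}  # hypothetical stand-in for InstanceEvents' state

    with lockutils.lock('5cd02959-e31f-4dd4-a50d-caafefc56629-events'):
        waiter = pending_events.pop(
            'network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317', None)
    print('found waiter' if waiter else 'No waiting events found')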
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.386 244018 INFO nova.virt.libvirt.driver [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deleting instance files /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629_del
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.386 244018 INFO nova.virt.libvirt.driver [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deletion of /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629_del complete
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.440 244018 INFO nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 1.22 seconds to destroy the instance on the hypervisor.
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.441 244018 DEBUG oslo.service.loopingcall [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
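The "Waiting for function ... to return" line comes from oslo.service's loopingcall module: network deallocation is wrapped in a retry helper so transient Neutron failures are retried with growing sleeps. A minimal sketch using that module's RetryDecorator — the retry counts and exception types below are illustrative, not Nova's actual settings:

    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=1,
                                max_sleep_time=10,
                                exceptions=(ConnectionError,))
    def _deallocate_network_with_retries():
        # a real implementation would call Neutron's deallocate_for_instance()
        return 'deallocated'

    print(_deallocate_network_with_retries())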
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.442 244018 DEBUG nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:04:03 compute-0 nova_compute[244014]: 2026-02-25 13:04:03.442 244018 DEBUG nova.network.neutron [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:04:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:04 compute-0 nova_compute[244014]: 2026-02-25 13:04:04.851 244018 DEBUG nova.network.neutron [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:04:04 compute-0 nova_compute[244014]: 2026-02-25 13:04:04.869 244018 INFO nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 1.43 seconds to deallocate network for instance.
Feb 25 13:04:04 compute-0 nova_compute[244014]: 2026-02-25 13:04:04.911 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:04 compute-0 nova_compute[244014]: 2026-02-25 13:04:04.912 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:04 compute-0 nova_compute[244014]: 2026-02-25 13:04:04.979 244018 DEBUG oslo_concurrency.processutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:05 compute-0 ceph-mon[76335]: pgmap v2516: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.441 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.442 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.443 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.443 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.444 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.444 244018 WARNING nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received unexpected event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with vm_state deleted and task_state None.
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.445 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-deleted-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:04:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82752679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.580 244018 DEBUG oslo_concurrency.processutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
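The resource tracker's disk probe is an ordinary subprocess call: it shells out to `ceph df` under the openstack client identity and reads the cluster totals from the JSON. A minimal sketch of the same probe — key names follow the standard `ceph df --format=json` layout, and it assumes the ceph CLI and keyring are present on the host:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']
    print('avail: %.0f GiB' % (stats['total_avail_bytes'] / 1024 ** 3))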
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.586 244018 DEBUG nova.compute.provider_tree [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.600 244018 DEBUG nova.scheduler.client.report [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
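The inventory dict above is what Placement sizes this node from; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2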
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.630 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.667 244018 INFO nova.scheduler.client.report [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 5cd02959-e31f-4dd4-a50d-caafefc56629
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.739 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:05 compute-0 nova_compute[244014]: 2026-02-25 13:04:05.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
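_sync_scheduler_instance_info is fired by oslo.service's periodic task runner on its own thread, independent of the delete request that just finished. A minimal sketch of how such a task is declared — the spacing value is illustrative, not the deployment's setting:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _sync_scheduler_instance_info(self, context):
            # a real task would push the host's instance list to the scheduler
            pass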
Feb 25 13:04:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/82752679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.863 244018 DEBUG nova.compute.manager [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.863 244018 DEBUG nova.compute.manager [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.864 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.864 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.865 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.961 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.962 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.962 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.963 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.966 244018 INFO nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Terminating instance
Feb 25 13:04:06 compute-0 nova_compute[244014]: 2026-02-25 13:04:06.968 244018 DEBUG nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:04:07 compute-0 kernel: tap7d538e82-43 (unregistering): left promiscuous mode
Feb 25 13:04:07 compute-0 NetworkManager[49836]: <info>  [1772024647.0294] device (tap7d538e82-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:04:07 compute-0 ovn_controller[147040]: 2026-02-25T13:04:07Z|01600|binding|INFO|Releasing lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 from this chassis (sb_readonly=0)
Feb 25 13:04:07 compute-0 ovn_controller[147040]: 2026-02-25T13:04:07Z|01601|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 down in Southbound
Feb 25 13:04:07 compute-0 ovn_controller[147040]: 2026-02-25T13:04:07Z|01602|binding|INFO|Removing iface tap7d538e82-43 ovn-installed in OVS
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.045 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bc:b9 10.100.0.6'], port_security=['fa:16:3e:1a:bc:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba 45df576e-9e98-4f75-84d8-faaa81c292fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d538e82-43ce-4f65-b84b-fb9efe7d35b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.049 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de unbound from our chassis
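The metadata agent learned about the unbind from an OVSDB notification, not from Neutron: the PortBindingUpdatedEvent above matched a Port_Binding row whose chassis column was cleared. A minimal sketch of the ovsdbapp event hook involved — the body is illustrative; the real agent provisions or tears down its metadata namespace here:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # watch updates to the OVN southbound Port_Binding table
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # comparing old.chassis with row.chassis shows bind vs. unbind
            print('port %s changed chassis binding' % row.logical_port)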
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.050 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60551679-32db-4035-bd76-7d0d38a9d6de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.052 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f14866a0-7fa1-4051-8d47-12aeac191b80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.052 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de namespace which is not needed anymore
Feb 25 13:04:07 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Feb 25 13:04:07 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 13.837s CPU time.
Feb 25 13:04:07 compute-0 systemd-machined[210048]: Machine qemu-184-instance-00000096 terminated.
Feb 25 13:04:07 compute-0 ceph-mon[76335]: pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.210 244018 INFO nova.virt.libvirt.driver [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance destroyed successfully.
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.212 244018 DEBUG nova.objects.instance [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : haproxy version is 2.8.14-c23fe91
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : path to executable is /usr/sbin/haproxy
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [WARNING]  (380926) : Exiting Master process...
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [WARNING]  (380926) : Exiting Master process...
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [ALERT]    (380926) : Current worker (380928) exited with code 143 (Terminated)
Feb 25 13:04:07 compute-0 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [WARNING]  (380926) : All workers exited. Exiting... (0)
Feb 25 13:04:07 compute-0 systemd[1]: libpod-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope: Deactivated successfully.
Feb 25 13:04:07 compute-0 podman[382164]: 2026-02-25 13:04:07.229070526 +0000 UTC m=+0.070620383 container died 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.230 244018 DEBUG nova.virt.libvirt.vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:03:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:03:10Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.231 244018 DEBUG nova.network.os_vif_util [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.232 244018 DEBUG nova.network.os_vif_util [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.233 244018 DEBUG os_vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.236 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d538e82-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.243 244018 INFO os_vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43')
Feb 25 13:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602-userdata-shm.mount: Deactivated successfully.
Feb 25 13:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-683ddb331af2ab3814797ad79c5d49ef9abaf7e6aaf6d9a7ae140104b9cbf429-merged.mount: Deactivated successfully.
Feb 25 13:04:07 compute-0 podman[382164]: 2026-02-25 13:04:07.284382535 +0000 UTC m=+0.125932432 container cleanup 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 13:04:07 compute-0 systemd[1]: libpod-conmon-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope: Deactivated successfully.
Feb 25 13:04:07 compute-0 podman[382226]: 2026-02-25 13:04:07.37213362 +0000 UTC m=+0.059247642 container remove 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cfae20-8f6f-407c-b0ab-efe5867d2e50]: (4, ('Wed Feb 25 01:04:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de (8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602)\n8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602\nWed Feb 25 01:04:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de (8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602)\n8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f615c0a9-9471-4e44-9d77-19457b7f390d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.380 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 kernel: tap60551679-30: left promiscuous mode
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5dfe7c0-b81e-432d-b066-686a2f03382f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.405 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b2fbf6-0c7b-4f9f-a26c-9ab98010798b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.408 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09564744-e7bf-4ead-8463-161a7110c7c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97930067-f423-4a6e-8973-a088e9d3d0ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655906, 'reachable_time': 41303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382241, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.427 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:04:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d60551679\x2d32db\x2d4035\x2dbd76\x2d7d0d38a9d6de.mount: Deactivated successfully.
Feb 25 13:04:07 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.427 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7ac954-ccb7-4afb-814a-68b500ecb311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
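The namespace teardown above runs inside neutron's privsep daemon, which wraps pyroute2 for netns operations. A minimal sketch of the equivalent privileged call (requires root and pyroute2):

    from pyroute2 import netns

    ns = 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'
    if ns in netns.listnetns():
        netns.remove(ns)  # same effect as ip_lib's remove_netns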
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.660 244018 INFO nova.virt.libvirt.driver [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deleting instance files /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_del
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.661 244018 INFO nova.virt.libvirt.driver [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deletion of /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_del complete
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.719 244018 INFO nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.720 244018 DEBUG oslo.service.loopingcall [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.720 244018 DEBUG nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:04:07 compute-0 nova_compute[244014]: 2026-02-25 13:04:07.721 244018 DEBUG nova.network.neutron [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.024 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.025 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.086 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.343 244018 DEBUG nova.network.neutron [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.364 244018 INFO nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 0.64 seconds to deallocate network for instance.
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.425 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.426 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.489 244018 DEBUG oslo_concurrency.processutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.972 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.973 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.974 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.974 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.975 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.975 244018 WARNING nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state deleted and task_state None.
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.976 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.976 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.977 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.977 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.978 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.979 244018 WARNING nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state deleted and task_state None.
Feb 25 13:04:08 compute-0 nova_compute[244014]: 2026-02-25 13:04:08.979 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-deleted-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:04:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147639800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.043 244018 DEBUG oslo_concurrency.processutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.050 244018 DEBUG nova.compute.provider_tree [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:04:09 compute-0 ceph-mon[76335]: pgmap v2518: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Feb 25 13:04:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2147639800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.072 244018 DEBUG nova.scheduler.client.report [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.097 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.133 244018 INFO nova.scheduler.client.report [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d
Feb 25 13:04:09 compute-0 nova_compute[244014]: 2026-02-25 13:04:09.216 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:10 compute-0 nova_compute[244014]: 2026-02-25 13:04:10.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 51 op/s
Feb 25 13:04:11 compute-0 ceph-mon[76335]: pgmap v2519: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 51 op/s
Feb 25 13:04:12 compute-0 nova_compute[244014]: 2026-02-25 13:04:12.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:12 compute-0 nova_compute[244014]: 2026-02-25 13:04:12.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:12 compute-0 nova_compute[244014]: 2026-02-25 13:04:12.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:13 compute-0 ceph-mon[76335]: pgmap v2520: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:13 compute-0 sshd-session[382266]: Received disconnect from 45.148.10.141 port 50930:11:  [preauth]
Feb 25 13:04:13 compute-0 sshd-session[382266]: Disconnected from authenticating user root 45.148.10.141 port 50930 [preauth]
Feb 25 13:04:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:14 compute-0 sshd-session[382268]: Invalid user node from 80.94.92.186 port 57168
Feb 25 13:04:14 compute-0 sshd-session[382268]: Connection closed by invalid user node 80.94.92.186 port 57168 [preauth]
Feb 25 13:04:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:15 compute-0 ceph-mon[76335]: pgmap v2521: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:15 compute-0 nova_compute[244014]: 2026-02-25 13:04:15.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:17 compute-0 ceph-mon[76335]: pgmap v2522: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:17 compute-0 nova_compute[244014]: 2026-02-25 13:04:17.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:17 compute-0 nova_compute[244014]: 2026-02-25 13:04:17.663 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024642.6627862, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:04:17 compute-0 nova_compute[244014]: 2026-02-25 13:04:17.663 244018 INFO nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Stopped (Lifecycle Event)
Feb 25 13:04:17 compute-0 nova_compute[244014]: 2026-02-25 13:04:17.701 244018 DEBUG nova.compute.manager [None req-a402d69e-0a32-41f6-9097-d9c1c9f9eba2 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:04:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:19 compute-0 ceph-mon[76335]: pgmap v2523: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 13:04:20 compute-0 nova_compute[244014]: 2026-02-25 13:04:20.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 13:04:21 compute-0 ceph-mon[76335]: pgmap v2524: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 13:04:22 compute-0 nova_compute[244014]: 2026-02-25 13:04:22.207 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024647.2063859, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:04:22 compute-0 nova_compute[244014]: 2026-02-25 13:04:22.208 244018 INFO nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Stopped (Lifecycle Event)
Feb 25 13:04:22 compute-0 nova_compute[244014]: 2026-02-25 13:04:22.237 244018 DEBUG nova.compute.manager [None req-1d8b2e26-36c6-4e75-98e2-e1b03ff35dfd - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:04:22 compute-0 nova_compute[244014]: 2026-02-25 13:04:22.243 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 13:04:23 compute-0 ceph-mon[76335]: pgmap v2525: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 13:04:23 compute-0 podman[382271]: 2026-02-25 13:04:23.708414673 +0000 UTC m=+0.051913345 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 13:04:23 compute-0 podman[382272]: 2026-02-25 13:04:23.754609366 +0000 UTC m=+0.098053046 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 25 13:04:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:25 compute-0 ceph-mon[76335]: pgmap v2526: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:25 compute-0 nova_compute[244014]: 2026-02-25 13:04:25.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:27 compute-0 ceph-mon[76335]: pgmap v2527: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:27 compute-0 nova_compute[244014]: 2026-02-25 13:04:27.246 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:29 compute-0 ceph-mon[76335]: pgmap v2528: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:30 compute-0 nova_compute[244014]: 2026-02-25 13:04:30.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:04:31
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:04:31 compute-0 ceph-mon[76335]: pgmap v2529: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:04:32 compute-0 nova_compute[244014]: 2026-02-25 13:04:32.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:33 compute-0 ceph-mon[76335]: pgmap v2530: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:04:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.315 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:04:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:04:33 compute-0 nova_compute[244014]: 2026-02-25 13:04:33.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:35 compute-0 ceph-mon[76335]: pgmap v2531: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:35 compute-0 nova_compute[244014]: 2026-02-25 13:04:35.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:37 compute-0 ceph-mon[76335]: pgmap v2532: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:37 compute-0 nova_compute[244014]: 2026-02-25 13:04:37.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:39 compute-0 ceph-mon[76335]: pgmap v2533: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:04:40 compute-0 nova_compute[244014]: 2026-02-25 13:04:40.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:41 compute-0 ceph-mon[76335]: pgmap v2534: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:04:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032463823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.431 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.582 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.98728773742914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.646 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.647 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:04:41 compute-0 nova_compute[244014]: 2026-02-25 13:04:41.668 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4032463823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:04:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995689333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.260 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.267 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.295 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.321 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:04:42 compute-0 nova_compute[244014]: 2026-02-25 13:04:42.321 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6561777569814966e-05 of space, bias 1.0, pg target 0.00496853327094449 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494506322989017 of space, bias 1.0, pg target 0.7483518968967051 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.410111525860306e-06 of space, bias 4.0, pg target 0.0016921338310323672 quantized to 16 (current 16)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:04:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2995689333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:43 compute-0 ceph-mon[76335]: pgmap v2535: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:43 compute-0 nova_compute[244014]: 2026-02-25 13:04:43.323 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:43 compute-0 nova_compute[244014]: 2026-02-25 13:04:43.324 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:04:43 compute-0 nova_compute[244014]: 2026-02-25 13:04:43.324 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:04:43 compute-0 nova_compute[244014]: 2026-02-25 13:04:43.342 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:04:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:43 compute-0 nova_compute[244014]: 2026-02-25 13:04:43.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:45 compute-0 ceph-mon[76335]: pgmap v2536: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:45 compute-0 nova_compute[244014]: 2026-02-25 13:04:45.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:46 compute-0 sudo[382362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:04:46 compute-0 sudo[382362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:46 compute-0 sudo[382362]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:46 compute-0 sudo[382387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:04:46 compute-0 sudo[382387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:46 compute-0 podman[382456]: 2026-02-25 13:04:46.736220768 +0000 UTC m=+0.081704675 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:04:46 compute-0 podman[382456]: 2026-02-25 13:04:46.845126939 +0000 UTC m=+0.190610786 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:04:46 compute-0 nova_compute[244014]: 2026-02-25 13:04:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:47 compute-0 ceph-mon[76335]: pgmap v2537: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:47 compute-0 nova_compute[244014]: 2026-02-25 13:04:47.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:04:47 compute-0 sudo[382387]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.710932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687711179, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2095, "num_deletes": 253, "total_data_size": 3438033, "memory_usage": 3500784, "flush_reason": "Manual Compaction"}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687734026, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 3367957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52012, "largest_seqno": 54106, "table_properties": {"data_size": 3358355, "index_size": 6096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19560, "raw_average_key_size": 20, "raw_value_size": 3339167, "raw_average_value_size": 3474, "num_data_blocks": 269, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024473, "oldest_key_time": 1772024473, "file_creation_time": 1772024687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 23152 microseconds, and 8373 cpu microseconds.
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.734079) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 3367957 bytes OK
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.734103) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.736964) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.736985) EVENT_LOG_v1 {"time_micros": 1772024687736978, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.737006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3429223, prev total WAL file size 3472487, number of live WAL files 2.
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.738039) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(3289KB)], [122(9392KB)]
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687738114, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 12986042, "oldest_snapshot_seqno": -1}
Feb 25 13:04:47 compute-0 sudo[382640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:04:47 compute-0 sudo[382640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:47 compute-0 sudo[382640]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7594 keys, 11282083 bytes, temperature: kUnknown
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687817173, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 11282083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11230178, "index_size": 31825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 197045, "raw_average_key_size": 25, "raw_value_size": 11093674, "raw_average_value_size": 1460, "num_data_blocks": 1247, "num_entries": 7594, "num_filter_entries": 7594, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.817417) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11282083 bytes
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.820351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.1 rd, 142.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 8116, records dropped: 522 output_compression: NoCompression
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.820381) EVENT_LOG_v1 {"time_micros": 1772024687820368, "job": 74, "event": "compaction_finished", "compaction_time_micros": 79126, "compaction_time_cpu_micros": 36594, "output_level": 6, "num_output_files": 1, "total_output_size": 11282083, "num_input_records": 8116, "num_output_records": 7594, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687821169, "job": 74, "event": "table_file_deletion", "file_number": 124}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687822814, "job": 74, "event": "table_file_deletion", "file_number": 122}
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.737857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:04:47 compute-0 sudo[382665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:04:47 compute-0 sudo[382665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:47 compute-0 nova_compute[244014]: 2026-02-25 13:04:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:48 compute-0 sudo[382665]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:04:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:04:48 compute-0 sudo[382722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:04:48 compute-0 sudo[382722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:48 compute-0 sudo[382722]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:48 compute-0 sudo[382747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:04:48 compute-0 sudo[382747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.765434298 +0000 UTC m=+0.049778954 container create 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:04:48 compute-0 systemd[1]: Started libpod-conmon-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope.
Feb 25 13:04:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.738351295 +0000 UTC m=+0.022695961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.855516919 +0000 UTC m=+0.139861665 container init 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.861855057 +0000 UTC m=+0.146199703 container start 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:04:48 compute-0 crazy_pascal[382800]: 167 167
Feb 25 13:04:48 compute-0 systemd[1]: libpod-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope: Deactivated successfully.
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.868072713 +0000 UTC m=+0.152417359 container attach 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.869145263 +0000 UTC m=+0.153489919 container died 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:04:48 compute-0 nova_compute[244014]: 2026-02-25 13:04:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-18555a5b6a9a66ee023b35514c802b592c6c695f9dcf7a17ca1f8b778b5a28fa-merged.mount: Deactivated successfully.
Feb 25 13:04:48 compute-0 podman[382784]: 2026-02-25 13:04:48.971118178 +0000 UTC m=+0.255462854 container remove 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:04:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:48 compute-0 systemd[1]: libpod-conmon-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope: Deactivated successfully.
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:04:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:04:49 compute-0 ceph-mon[76335]: pgmap v2538: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.132434847 +0000 UTC m=+0.050138695 container create 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:04:49 compute-0 systemd[1]: Started libpod-conmon-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope.
Feb 25 13:04:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.109941643 +0000 UTC m=+0.027645571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.217067464 +0000 UTC m=+0.134771342 container init 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.222797605 +0000 UTC m=+0.140501494 container start 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.22649333 +0000 UTC m=+0.144197248 container attach 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:04:49 compute-0 eager_hodgkin[382842]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:04:49 compute-0 eager_hodgkin[382842]: --> All data devices are unavailable
Feb 25 13:04:49 compute-0 systemd[1]: libpod-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope: Deactivated successfully.
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.681864751 +0000 UTC m=+0.599568589 container died 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136-merged.mount: Deactivated successfully.
Feb 25 13:04:49 compute-0 podman[382826]: 2026-02-25 13:04:49.926578871 +0000 UTC m=+0.844282719 container remove 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:04:49 compute-0 systemd[1]: libpod-conmon-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope: Deactivated successfully.
Feb 25 13:04:49 compute-0 sudo[382747]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:50 compute-0 sudo[382875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:04:50 compute-0 sudo[382875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:50 compute-0 sudo[382875]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:50 compute-0 sudo[382900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:04:50 compute-0 sudo[382900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.360992771 +0000 UTC m=+0.055012412 container create 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:04:50 compute-0 systemd[1]: Started libpod-conmon-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope.
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.325752648 +0000 UTC m=+0.019772299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.472490806 +0000 UTC m=+0.166510457 container init 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.479664528 +0000 UTC m=+0.173684129 container start 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:04:50 compute-0 vigorous_allen[382955]: 167 167
Feb 25 13:04:50 compute-0 systemd[1]: libpod-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope: Deactivated successfully.
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.483114365 +0000 UTC m=+0.177133966 container attach 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.48363668 +0000 UTC m=+0.177656271 container died 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-15b1f211fc6c7bdf28813207c8ee51c4281012ef046e6deb28c42aa9f01e6855-merged.mount: Deactivated successfully.
Feb 25 13:04:50 compute-0 podman[382938]: 2026-02-25 13:04:50.538497087 +0000 UTC m=+0.232516748 container remove 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:04:50 compute-0 systemd[1]: libpod-conmon-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope: Deactivated successfully.
Feb 25 13:04:50 compute-0 podman[382980]: 2026-02-25 13:04:50.72233209 +0000 UTC m=+0.071906559 container create 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:04:50 compute-0 nova_compute[244014]: 2026-02-25 13:04:50.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:50 compute-0 systemd[1]: Started libpod-conmon-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope.
Feb 25 13:04:50 compute-0 podman[382980]: 2026-02-25 13:04:50.671026653 +0000 UTC m=+0.020601022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:50 compute-0 podman[382980]: 2026-02-25 13:04:50.82164684 +0000 UTC m=+0.171221229 container init 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:04:50 compute-0 podman[382980]: 2026-02-25 13:04:50.830371466 +0000 UTC m=+0.179945855 container start 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:04:50 compute-0 podman[382980]: 2026-02-25 13:04:50.84151002 +0000 UTC m=+0.191084409 container attach 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:04:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:51 compute-0 ceph-mon[76335]: pgmap v2539: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]: {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     "0": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "devices": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "/dev/loop3"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             ],
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_name": "ceph_lv0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_size": "21470642176",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "name": "ceph_lv0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "tags": {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_name": "ceph",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.crush_device_class": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.encrypted": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.objectstore": "bluestore",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_id": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.vdo": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.with_tpm": "0"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             },
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "vg_name": "ceph_vg0"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         }
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     ],
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     "1": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "devices": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "/dev/loop4"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             ],
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_name": "ceph_lv1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_size": "21470642176",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "name": "ceph_lv1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "tags": {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_name": "ceph",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.crush_device_class": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.encrypted": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.objectstore": "bluestore",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_id": "1",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.vdo": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.with_tpm": "0"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             },
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "vg_name": "ceph_vg1"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         }
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     ],
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     "2": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "devices": [
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "/dev/loop5"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             ],
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_name": "ceph_lv2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_size": "21470642176",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "name": "ceph_lv2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "tags": {
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.cluster_name": "ceph",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.crush_device_class": "",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.encrypted": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.objectstore": "bluestore",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osd_id": "2",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.vdo": "0",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:                 "ceph.with_tpm": "0"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             },
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "type": "block",
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:             "vg_name": "ceph_vg2"
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:         }
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]:     ]
Feb 25 13:04:51 compute-0 unruffled_shockley[382996]: }
Feb 25 13:04:51 compute-0 systemd[1]: libpod-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope: Deactivated successfully.
Feb 25 13:04:51 compute-0 podman[382980]: 2026-02-25 13:04:51.14361443 +0000 UTC m=+0.493188789 container died 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68-merged.mount: Deactivated successfully.
Feb 25 13:04:51 compute-0 podman[382980]: 2026-02-25 13:04:51.245535114 +0000 UTC m=+0.595109493 container remove 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb 25 13:04:51 compute-0 systemd[1]: libpod-conmon-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope: Deactivated successfully.
Feb 25 13:04:51 compute-0 sudo[382900]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:51 compute-0 sudo[383017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:04:51 compute-0 sudo[383017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:51 compute-0 sudo[383017]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:51 compute-0 sudo[383042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:04:51 compute-0 sudo[383042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:51 compute-0 podman[383079]: 2026-02-25 13:04:51.729167791 +0000 UTC m=+0.095605676 container create a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:04:51 compute-0 podman[383079]: 2026-02-25 13:04:51.651199713 +0000 UTC m=+0.017637618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:51 compute-0 systemd[1]: Started libpod-conmon-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope.
Feb 25 13:04:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:52 compute-0 podman[383079]: 2026-02-25 13:04:52.014081116 +0000 UTC m=+0.380519071 container init a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:04:52 compute-0 podman[383079]: 2026-02-25 13:04:52.021520405 +0000 UTC m=+0.387958330 container start a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:04:52 compute-0 jolly_ride[383096]: 167 167
Feb 25 13:04:52 compute-0 systemd[1]: libpod-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope: Deactivated successfully.
Feb 25 13:04:52 compute-0 podman[383079]: 2026-02-25 13:04:52.071219597 +0000 UTC m=+0.437657572 container attach a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:04:52 compute-0 podman[383079]: 2026-02-25 13:04:52.072071161 +0000 UTC m=+0.438509076 container died a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d1e462643882f36d75d7d8600b3d11dd61ae2772f70c46a433bc31b6f7bd72-merged.mount: Deactivated successfully.
Feb 25 13:04:52 compute-0 podman[383079]: 2026-02-25 13:04:52.182733021 +0000 UTC m=+0.549170906 container remove a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:04:52 compute-0 systemd[1]: libpod-conmon-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope: Deactivated successfully.
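The jolly_ride lifecycle above (create, init, start, attach, died, remove inside roughly 150 ms, exiting after printing "167 167", the ceph uid/gid pair) is the footprint of a short-lived "podman run --rm" probe. A comparable probe against the same image digest is sketched below; the stat entrypoint and the /var/lib/ceph path are assumptions for illustration, not the command that was actually run:

import subprocess

# Hypothetical re-creation of a throwaway probe like jolly_ride: run the same
# image digest from the log with --rm and print the owner uid/gid of a path.
# The entrypoint and path here are illustrative assumptions.
image = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
out = subprocess.check_output(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"], text=True)
print(out.strip())  # expected "167 167" if ceph owns the path in this image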
Feb 25 13:04:52 compute-0 nova_compute[244014]: 2026-02-25 13:04:52.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:52 compute-0 podman[383122]: 2026-02-25 13:04:52.34652448 +0000 UTC m=+0.055978769 container create cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 13:04:52 compute-0 systemd[1]: Started libpod-conmon-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope.
Feb 25 13:04:52 compute-0 podman[383122]: 2026-02-25 13:04:52.31425422 +0000 UTC m=+0.023708489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:04:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:04:52 compute-0 podman[383122]: 2026-02-25 13:04:52.458784946 +0000 UTC m=+0.168239225 container init cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:04:52 compute-0 podman[383122]: 2026-02-25 13:04:52.463799167 +0000 UTC m=+0.173253456 container start cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:04:52 compute-0 podman[383122]: 2026-02-25 13:04:52.489769369 +0000 UTC m=+0.199223718 container attach cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:04:52 compute-0 nova_compute[244014]: 2026-02-25 13:04:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:52 compute-0 nova_compute[244014]: 2026-02-25 13:04:52.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:04:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:53 compute-0 ceph-mon[76335]: pgmap v2540: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:53 compute-0 lvm[383217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:04:53 compute-0 lvm[383217]: VG ceph_vg0 finished
Feb 25 13:04:53 compute-0 lvm[383220]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:04:53 compute-0 lvm[383220]: VG ceph_vg1 finished
Feb 25 13:04:53 compute-0 lvm[383222]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:04:53 compute-0 lvm[383222]: VG ceph_vg2 finished
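The three pvscan pairs above are lvm2 event activation completing the OSD backing volume groups on loop devices. The same state can be confirmed after the fact with lvm2's JSON reporting; a sketch, assuming "vgs --reportformat json" is available on this host:

import json
import subprocess

# List volume groups in JSON and confirm the three ceph VGs from the pvscan
# messages above are present. lvm2 reports all field values as strings.
out = subprocess.run(
    ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
    check=True, capture_output=True, text=True,
).stdout
for vg in json.loads(out)["report"][0]["vg"]:
    if vg["vg_name"].startswith("ceph_vg"):
        print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])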
Feb 25 13:04:53 compute-0 confident_swirles[383141]: {}
Feb 25 13:04:53 compute-0 systemd[1]: libpod-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Deactivated successfully.
Feb 25 13:04:53 compute-0 systemd[1]: libpod-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Consumed 1.064s CPU time.
Feb 25 13:04:53 compute-0 podman[383122]: 2026-02-25 13:04:53.237091673 +0000 UTC m=+0.946545962 container died cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054-merged.mount: Deactivated successfully.
Feb 25 13:04:53 compute-0 podman[383122]: 2026-02-25 13:04:53.280352653 +0000 UTC m=+0.989806902 container remove cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:04:53 compute-0 systemd[1]: libpod-conmon-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Deactivated successfully.
Feb 25 13:04:53 compute-0 sudo[383042]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:04:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:04:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:53 compute-0 sudo[383236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:04:53 compute-0 sudo[383236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:04:53 compute-0 sudo[383236]: pam_unix(sudo:session): session closed for user root
Feb 25 13:04:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.454 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.455 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.482 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:04:54 compute-0 podman[383261]: 2026-02-25 13:04:54.716376876 +0000 UTC m=+0.054848088 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:04:54 compute-0 podman[383262]: 2026-02-25 13:04:54.736349999 +0000 UTC m=+0.074907353 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
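Both health_status events above report healthy; the configured test is the /openstack/healthcheck script mounted into each container per the config_data shown. The same probe can be triggered on demand; a sketch, with container names taken from the log:

import subprocess

# podman's periodic healthcheck timer produced the events above. The same
# configured test can be run by hand; exit code 0 means the check passed.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)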
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.921 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.921 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.931 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:04:54 compute-0 nova_compute[244014]: 2026-02-25 13:04:54.932 244018 INFO nova.compute.claims [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:04:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.043 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.116 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:55 compute-0 ceph-mon[76335]: pgmap v2541: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:04:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083294746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.755 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
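The 0.639 s "ceph df" round-trip above is nova's RBD driver sampling pool capacity. The same probe can be re-run standalone and reduced to the cluster totals; a sketch, assuming the client.openstack keyring referenced by --id is readable:

import json
import subprocess

# Command copied verbatim from the log line above; the JSON carries a "stats"
# object with cluster-wide byte counters.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
stats = json.loads(subprocess.check_output(cmd))["stats"]
gib = 1024 ** 3
print("total: %.1f GiB, avail: %.1f GiB"
      % (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))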
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.761 244018 DEBUG nova.compute.provider_tree [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.783 244018 DEBUG nova.scheduler.client.report [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
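The inventory dict above becomes schedulable capacity as (total - reserved) * allocation_ratio, which is why this 8-vCPU, 7.5 GiB host can claim the instance with room to spare. Worked through with the logged numbers:

# Placement-style capacity from the inventory logged above:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2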
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.808 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.809 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.883 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.884 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.910 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:04:55 compute-0 nova_compute[244014]: 2026-02-25 13:04:55.937 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.067 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.069 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.069 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating image(s)
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.098 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.123 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.148 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.151 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.224 244018 DEBUG nova.policy [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '49675ebd5cbe44e1a3373d23daabdc78', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.234 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
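nova wraps "qemu-img info" in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute service. The same call, reproduced from the log and parsed; a sketch:

import json
import subprocess

# Same probe as the log lines above: qemu-img info under prlimit, JSON output.
# The base-image path is copied from the log.
base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
       "--as=1073741824", "--cpu=30", "--",
       "env", "LC_ALL=C", "LANG=C",
       "qemu-img", "info", base, "--force-share", "--output=json"]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"])  # image format and size in bytes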
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.234 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.235 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.235 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.257 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.261 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:04:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2083294746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.520 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.583 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] resizing rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
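The import above goes through the rbd CLI, while the resize that follows runs through the librbd Python bindings. A minimal sketch of that resize step using python3-rados/python3-rbd; pool, image name, and the 1 GiB target are taken from the two log lines:

import rados  # python3-rados
import rbd    # python3-rbd

# Connect as the same client identity the CLI commands in the log use, then
# resize the freshly imported image in the "vms" pool.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk") as image:
            image.resize(1073741824)  # bytes; matches the logged target size
    finally:
        ioctx.close()
finally:
    cluster.shutdown()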
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.669 244018 DEBUG nova.objects.instance [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'migration_context' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.689 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.689 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Ensure instance console log exists: /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.690 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.690 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.691 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
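The Acquiring/acquired/released triplets throughout this log come from oslo_concurrency's lockutils wrapper, which also reports the wait and hold times seen above. A minimal sketch of the pattern; the lock name matches the log, the function body is illustrative:

from oslo_concurrency import lockutils

# The decorator serializes callers on the named in-process lock and emits the
# acquire/release debug lines visible throughout this log.
@lockutils.synchronized("vgpu_resources")
def allocate_mdevs():
    pass  # critical section; hold time is logged on release

allocate_mdevs()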
Feb 25 13:04:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:56 compute-0 nova_compute[244014]: 2026-02-25 13:04:56.996 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Successfully created port: 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:04:57 compute-0 ceph-mon[76335]: pgmap v2542: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.516 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Successfully updated port: 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.531 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.532 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.532 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:04:57 compute-0 ovn_controller[147040]: 2026-02-25T13:04:57Z|01603|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.622 244018 DEBUG nova.compute.manager [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.622 244018 DEBUG nova.compute.manager [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing instance network info cache due to event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:04:57 compute-0 nova_compute[244014]: 2026-02-25 13:04:57.623 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:04:58 compute-0 nova_compute[244014]: 2026-02-25 13:04:58.158 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:04:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:04:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:04:59 compute-0 ceph-mon[76335]: pgmap v2543: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.604 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.630 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.631 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance network_info: |[{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
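The network_info blobs above are plain JSON once lifted out of the log line. A sketch pulling out the fields usually needed when debugging guest wiring, using a trimmed copy of the logged entry:

import json

# Trimmed copy of the VIF entry logged above: MAC, fixed IP, MTU, OVS devname.
vif = json.loads("""
{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4",
 "address": "fa:16:3e:8a:be:b7",
 "devname": "tap6beae6ca-81",
 "network": {"subnets": [{"ips": [{"address": "10.100.0.3"}]}],
             "meta": {"mtu": 1442}}}
""")
ip = vif["network"]["subnets"][0]["ips"][0]["address"]
print(vif["address"], ip, vif["network"]["meta"]["mtu"], vif["devname"])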
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.631 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.632 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.636 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start _get_guest_xml network_info=[{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.641 244018 WARNING nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.647 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.648 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.660 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.661 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.662 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.662 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.663 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.664 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.664 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.665 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.665 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.666 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.666 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.667 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.667 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.668 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
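The 1:1:1 result above falls out of enumerating factorizations of the vCPU count within the logged 65536-per-dimension limits. An illustrative enumeration (not nova's exact code) that reproduces the logged outcome for vcpus=1:

# Every (sockets, cores, threads) triple whose product equals the vCPU count
# and stays within the per-dimension limits from the log.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"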
Feb 25 13:04:59 compute-0 nova_compute[244014]: 2026-02-25 13:04:59.672 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:05:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011732510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.205 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.228 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.231 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4011732510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:05:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312836393' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.774 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.775 244018 DEBUG nova.virt.libvirt.vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:04:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.776 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.777 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.778 244018 DEBUG nova.objects.instance [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.802 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <uuid>65c18a8f-0df3-4e29-b22d-fbc9362683f8</uuid>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <name>instance-00000098</name>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:name>tempest-TestServerBasicOps-server-326185766</nova:name>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:04:59</nova:creationTime>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:user uuid="49675ebd5cbe44e1a3373d23daabdc78">tempest-TestServerBasicOps-1933998932-project-member</nova:user>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:project uuid="7b441b24147b4cdbaeba6c8bc41ce081">tempest-TestServerBasicOps-1933998932</nova:project>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <nova:ports>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <nova:port uuid="6beae6ca-810c-48a5-8fcd-5f58732e64f4">
Feb 25 13:05:00 compute-0 nova_compute[244014]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:         </nova:port>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </nova:ports>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <system>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="serial">65c18a8f-0df3-4e29-b22d-fbc9362683f8</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="uuid">65c18a8f-0df3-4e29-b22d-fbc9362683f8</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </system>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <os>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </os>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <features>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </features>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk">
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </source>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config">
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </source>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:05:00 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <interface type="ethernet">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <mac address="fa:16:3e:8a:be:b7"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <driver name="vhost" rx_queue_size="512"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <mtu size="1442"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <target dev="tap6beae6ca-81"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </interface>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/console.log" append="off"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <video>
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </video>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:05:00 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:05:00 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:05:00 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:05:00 compute-0 nova_compute[244014]: </domain>
Feb 25 13:05:00 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
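The domain definition Nova just logged is plain libvirt XML (the same document `virsh dumpxml instance-00000098` would return once the guest is defined). A small standard-library sketch that extracts the RBD-backed disks from such a document:

```python
# Sketch: pull (target dev, rbd image) pairs out of a domain XML string
# like the one logged above.
import xml.etree.ElementTree as ET

def rbd_disks(domain_xml: str):
    root = ET.fromstring(domain_xml)
    disks = []
    for disk in root.findall("./devices/disk"):
        source, target = disk.find("source"), disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            disks.append((target.get("dev"), source.get("name")))
    return disks

# For the XML above this returns:
# [('vda', 'vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk'),
#  ('sda', 'vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config')]
```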
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.803 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Preparing to wait for external event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.803 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.804 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.804 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.805 244018 DEBUG nova.virt.libvirt.vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:04:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.805 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.806 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.806 244018 DEBUG os_vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.807 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6beae6ca-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6beae6ca-81, col_values=(('external_ids', {'iface-id': '6beae6ca-810c-48a5-8fcd-5f58732e64f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:be:b7', 'vm-uuid': '65c18a8f-0df3-4e29-b22d-fbc9362683f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:00 compute-0 NetworkManager[49836]: <info>  [1772024700.8143] manager: (tap6beae6ca-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.821 244018 INFO os_vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81')
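os-vif drives OVSDB directly through ovsdbapp, but the AddBridgeCommand/AddPortCommand/DbSetCommand transactions above map onto a single ovs-vsctl invocation. A sketch of that equivalent, useful for reproducing the plug by hand; the values are taken from the log and this is not the code path os-vif actually executes.

```python
import subprocess

def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
    # One atomic ovs-vsctl transaction ("--" chains sub-commands): add the
    # port, then set the external_ids that OVN uses to bind the logical port.
    subprocess.check_call([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", bridge, dev,
        "--", "set", "Interface", dev,
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ])

plug_ovs_port("br-int", "tap6beae6ca-81",
              "6beae6ca-810c-48a5-8fcd-5f58732e64f4",
              "fa:16:3e:8a:be:b7",
              "65c18a8f-0df3-4e29-b22d-fbc9362683f8")
```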
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.865 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.865 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.866 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No VIF found with MAC fa:16:3e:8a:be:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.866 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Using config drive
Feb 25 13:05:00 compute-0 nova_compute[244014]: 2026-02-25 13:05:00.888 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:05:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/312836393' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:01 compute-0 ceph-mon[76335]: pgmap v2544: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.328 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating config drive at /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.336 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1c2go8pu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.405 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updated VIF entry in instance network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.406 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.431 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.481 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1c2go8pu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
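The config drive being built here is just an ISO 9660 volume labelled config-2. A sketch that rebuilds one with the exact flags logged above; the staging directory layout (openstack/latest/meta_data.json and friends) is assumed, and genisoimage can stand in for mkisofs on distributions that ship it under that name.

```python
import subprocess

def make_config_drive(iso_path, staging_dir,
                      publisher="OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9"):
    # Flags copied from the mkisofs invocation in the log; "-V config-2"
    # is the volume label cloud-init probes for when it looks for a
    # config drive.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher, "-quiet", "-J", "-r",
        "-V", "config-2", staging_dir,
    ])
```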
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.519 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.525 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.676 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.677 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deleting local config drive /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config because it was imported into RBD.
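With RBD-backed config drives, Nova uploads the local ISO into the vms pool and then removes the file, which is why the cdrom device in the domain XML above points at ..._disk.config over rbd. A sketch of the same two steps, mirroring the logged command:

```python
import os
import subprocess

def import_config_drive(local_iso, image_name, pool="vms",
                        client_id="openstack", conf="/etc/ceph/ceph.conf"):
    # Same rbd invocation as the log, then drop the local copy, as the
    # INFO line above notes.
    subprocess.check_call(
        ["rbd", "import", "--pool", pool, local_iso, image_name,
         "--image-format=2", "--id", client_id, "--conf", conf])
    os.unlink(local_iso)
```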
Feb 25 13:05:01 compute-0 kernel: tap6beae6ca-81: entered promiscuous mode
Feb 25 13:05:01 compute-0 ovn_controller[147040]: 2026-02-25T13:05:01Z|01604|binding|INFO|Claiming lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 for this chassis.
Feb 25 13:05:01 compute-0 ovn_controller[147040]: 2026-02-25T13:05:01Z|01605|binding|INFO|6beae6ca-810c-48a5-8fcd-5f58732e64f4: Claiming fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 13:05:01 compute-0 NetworkManager[49836]: <info>  [1772024701.7412] manager: (tap6beae6ca-81): new Tun device (/org/freedesktop/NetworkManager/Devices/660)
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.755 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.756 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 bound to our chassis
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.757 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 13:05:01 compute-0 systemd-machined[210048]: New machine qemu-186-instance-00000098.
Feb 25 13:05:01 compute-0 ovn_controller[147040]: 2026-02-25T13:05:01Z|01606|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 ovn-installed in OVS
Feb 25 13:05:01 compute-0 ovn_controller[147040]: 2026-02-25T13:05:01Z|01607|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 up in Southbound
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77e8661e-c32e-46a0-aea3-d43529846449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.768 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5412d48-51 in ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
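The metadata agent provisions one veth pair per datapath, with the inner end living in the ovnmeta- namespace. A rough iproute2 equivalent of that step; the device and namespace names are taken from the log, and the agent itself goes through neutron's privsep helpers rather than the CLI.

```python
import subprocess

ns = "ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2"
outer, inner = "tape5412d48-50", "tape5412d48-51"

subprocess.check_call(["ip", "netns", "add", ns])
subprocess.check_call(["ip", "link", "add", outer, "type", "veth",
                       "peer", "name", inner])
# Move the inner end into the namespace and bring both ends up; the
# outer end is what later gets plugged into br-int.
subprocess.check_call(["ip", "link", "set", inner, "netns", ns])
subprocess.check_call(["ip", "-n", ns, "link", "set", inner, "up"])
subprocess.check_call(["ip", "link", "set", outer, "up"])
```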
Feb 25 13:05:01 compute-0 nova_compute[244014]: 2026-02-25 13:05:01.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.770 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5412d48-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[640207f4-8b80-45b6-aba1-ba6848bae831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f699ee-1f53-44b2-8590-b5c08810744d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.784 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[09c94362-ea14-4f18-a28d-d82dcd23699f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.796 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7f7987-8ca4-453d-8e7c-4b8afa1da010]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 systemd-udevd[383630]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 13:05:01 compute-0 NetworkManager[49836]: <info>  [1772024701.8148] device (tap6beae6ca-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 13:05:01 compute-0 NetworkManager[49836]: <info>  [1772024701.8153] device (tap6beae6ca-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.822 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e238740f-bea2-43d0-a30c-ea581369b70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1df8a820-49a9-4863-a63a-2686e4a417b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 NetworkManager[49836]: <info>  [1772024701.8287] manager: (tape5412d48-50): new Veth device (/org/freedesktop/NetworkManager/Devices/661)
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.855 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fc8256-6bef-40fd-9676-0b9f3fe3b954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.858 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb0df85-8812-49be-95bf-4e8c84a30a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 NetworkManager[49836]: <info>  [1772024701.8792] device (tape5412d48-50): carrier: link connected
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.886 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4b93967d-e381-4746-bec9-9a66e629f696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[459198b4-3be9-4990-b376-a079b58c0668]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5412d48-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667128, 'reachable_time': 27561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383659, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3277280c-6056-46ed-ac01-e29edda9f8b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:455f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667128, 'tstamp': 667128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383660, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8897625-c3ab-49d0-b0d4-fd128d75b3c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5412d48-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667128, 'reachable_time': 27561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383661, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
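
[editor's note] The two privsep replies above are pyroute2 netlink dumps executed inside the ovnmeta- namespace on the agent's behalf: an RTM_NEWADDR for the link-local address of tape5412d48-51 and an RTM_NEWLINK describing the veth itself. A minimal sketch of reproducing such a dump directly with pyroute2, assuming the namespace exists and the caller has the needed privileges (an illustration, not the agent's actual code path):

    # Sketch: dump link and address state inside the namespace named in the
    # log, using pyroute2 -- the library whose messages appear in the replies.
    from pyroute2 import NetNS

    ns_name = 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2'  # from the log
    with NetNS(ns_name) as ns:
        for link in ns.get_links():                    # RTM_NEWLINK records
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():                     # RTM_NEWADDR records
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
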
Feb 25 13:05:01 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.973 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2555bc1-8b63-4a97-88b4-846a03ac392e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.031 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e29dd2fd-02f0-40b0-b95b-2739016a16fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5412d48-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5412d48-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:02 compute-0 kernel: tape5412d48-50: entered promiscuous mode
Feb 25 13:05:02 compute-0 NetworkManager[49836]: <info>  [1772024702.0367] manager: (tape5412d48-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5412d48-50, col_values=(('external_ids', {'iface-id': 'b38e9a57-21bb-4d54-870d-12b7311abbfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:02 compute-0 ovn_controller[147040]: 2026-02-25T13:05:02Z|01608|binding|INFO|Releasing lport b38e9a57-21bb-4d54-870d-12b7311abbfa from this chassis (sb_readonly=0)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.049 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
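
[editor's note] The "Unable to access ... .pid.haproxy" message above is benign: the agent probes for a pidfile left by a previous proxy instance, and ENOENT simply means no proxy was running for this network yet. A hedged sketch of what such a helper boils down to (behavior inferred from the log line, not copied from neutron):

    # Sketch: read a value (e.g. a PID) from a file, treating a missing file
    # as "nothing running" -- the benign ENOENT case logged above.
    import errno

    def get_value_from_file(path, converter=str):
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None
            raise
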
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.050 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45474884-3556-47fb-b201-127d7af1e962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.050 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: global
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     log         /dev/log local0 debug
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     log-tag     haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     user        root
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     group       root
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     maxconn     1024
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     pidfile     /var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     daemon
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: defaults
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     log global
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     mode http
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     option httplog
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     option dontlognull
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     option http-server-close
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     option forwardfor
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     retries                 3
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     timeout http-request    30s
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     timeout connect         30s
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     timeout client          32s
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     timeout server          32s
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     timeout http-keep-alive 30s
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: listen listener
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     bind 169.254.169.254:80
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     server metadata /var/lib/neutron/metadata_proxy
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:     http-request add-header X-OVN-Network-ID e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
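
[editor's note] The agent renders one haproxy configuration per network, as dumped in full above, and writes it to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf before launching the proxy. A simplified sketch of that render step using only values visible in the log (the real template lives in neutron.agent.ovn.metadata.driver and carries more options):

    # Sketch: render a per-network haproxy config shaped like the one above.
    # Trimmed to the listener stanza; paths mirror the logged ones.
    TEMPLATE = (
        'listen listener\n'
        '    bind 169.254.169.254:80\n'
        '    server metadata {socket_path}\n'
        '    http-request add-header X-OVN-Network-ID {network_id}\n'
    )

    def create_config_file(network_id):
        cfg = TEMPLATE.format(
            network_id=network_id,
            socket_path='/var/lib/neutron/metadata_proxy',  # agent's UNIX socket
        )
        path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'
        with open(path, 'w') as f:
            f.write(cfg)
        return path
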
Feb 25 13:05:02 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.051 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'env', 'PROCESS_TAG=haproxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
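
[editor's note] The rootwrap invocation above is the privileged spawn itself: haproxy is executed inside the ovnmeta- namespace with a PROCESS_TAG that the kill scripts later use to find it. Its direct subprocess equivalent, with the identifiers taken verbatim from the log:

    # Sketch: the subprocess equivalent of the rootwrap command logged above.
    import subprocess

    network_id = 'e5412d48-5ea8-48ec-b0d9-17bc49a104c2'   # from the log
    subprocess.check_call([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
        'env', f'PROCESS_TAG=haproxy-{network_id}',
        'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf',
    ])
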
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.246 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.2458346, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Started (Lifecycle Event)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.284 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.288 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.2460415, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.294 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Paused (Lifecycle Event)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.315 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.321 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.343 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.364 244018 DEBUG nova.compute.manager [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.365 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG nova.compute.manager [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Processing event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.367 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.371 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.3710856, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.372 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Resumed (Lifecycle Event)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.373 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.376 244018 INFO nova.virt.libvirt.driver [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance spawned successfully.
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.377 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.392 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.395 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.404 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.404 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.405 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.405 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.406 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.406 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.413 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] During sync_power_state the instance has a pending task (spawning). Skip.
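
[editor's note] The state numbers in the two sync messages above decode as follows: the "Paused" lifecycle event arrives mid-spawn, so sync_power_state compares DB power_state 0 (NOSTATE) against VM power_state 3 (PAUSED) and skips; after the "Resumed" event the VM reads 1 (RUNNING) and is again skipped while the spawning task is pending. The codes follow nova.compute.power_state (values as commonly defined in Nova; worth confirming against the deployed release):

    # Sketch: the Nova power-state codes referenced in the sync messages above.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # The two comparisons logged above:
    print(POWER_STATES[0], 'vs', POWER_STATES[3])   # Paused event:  skip, task pending
    print(POWER_STATES[0], 'vs', POWER_STATES[1])   # Resumed event: skip, task pending
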
Feb 25 13:05:02 compute-0 podman[383735]: 2026-02-25 13:05:02.447059632 +0000 UTC m=+0.065486758 container create 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.467 244018 INFO nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 6.40 seconds to spawn the instance on the hypervisor.
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.468 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:02 compute-0 systemd[1]: Started libpod-conmon-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope.
Feb 25 13:05:02 compute-0 podman[383735]: 2026-02-25 13:05:02.414310048 +0000 UTC m=+0.032737214 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.527 244018 INFO nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 7.99 seconds to build instance.
Feb 25 13:05:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e92cfb3f6edfebaff05e0277f03fba995760b8b6190fab1455ec297808617c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:02 compute-0 nova_compute[244014]: 2026-02-25 13:05:02.544 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:02 compute-0 podman[383735]: 2026-02-25 13:05:02.554546263 +0000 UTC m=+0.172973369 container init 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 13:05:02 compute-0 podman[383735]: 2026-02-25 13:05:02.56047725 +0000 UTC m=+0.178904356 container start 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:05:02 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : New worker (383756) forked
Feb 25 13:05:02 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : Loading success.
Feb 25 13:05:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:03 compute-0 ceph-mon[76335]: pgmap v2545: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:04 compute-0 NetworkManager[49836]: <info>  [1772024704.2957] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Feb 25 13:05:04 compute-0 NetworkManager[49836]: <info>  [1772024704.2967] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:04 compute-0 ovn_controller[147040]: 2026-02-25T13:05:04Z|01609|binding|INFO|Releasing lport b38e9a57-21bb-4d54-870d-12b7311abbfa from this chassis (sb_readonly=0)
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.484 244018 DEBUG nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.484 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:05:04 compute-0 nova_compute[244014]: 2026-02-25 13:05:04.486 244018 WARNING nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received unexpected event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with vm_state active and task_state None.
Feb 25 13:05:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:05 compute-0 ceph-mon[76335]: pgmap v2546: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:05 compute-0 nova_compute[244014]: 2026-02-25 13:05:05.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:05 compute-0 nova_compute[244014]: 2026-02-25 13:05:05.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:06 compute-0 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG nova.compute.manager [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:06 compute-0 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG nova.compute.manager [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing instance network info cache due to event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 13:05:06 compute-0 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:05:06 compute-0 nova_compute[244014]: 2026-02-25 13:05:06.595 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:05:06 compute-0 nova_compute[244014]: 2026-02-25 13:05:06.595 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 13:05:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:07 compute-0 ceph-mon[76335]: pgmap v2547: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 13:05:08 compute-0 nova_compute[244014]: 2026-02-25 13:05:08.211 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updated VIF entry in instance network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 13:05:08 compute-0 nova_compute[244014]: 2026-02-25 13:05:08.212 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
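
[editor's note] The cached network_info above is plain JSON-like data, and for troubleshooting it is usually enough to pull the fixed and floating addresses out of it. A self-contained illustration over the structure exactly as logged (trimmed to the relevant fields):

    # Sketch: extract fixed and floating IPs from a network_info entry shaped
    # like the cache update logged above.
    network_info = [{
        'id': '6beae6ca-810c-48a5-8fcd-5f58732e64f4',
        'network': {'subnets': [{
            'ips': [{
                'address': '10.100.0.3',
                'floating_ips': [{'address': '192.168.122.189'}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], '->', floats)
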
Feb 25 13:05:08 compute-0 nova_compute[244014]: 2026-02-25 13:05:08.237 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:05:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.845474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708845540, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 250, "total_data_size": 403755, "memory_usage": 412832, "flush_reason": "Manual Compaction"}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708849634, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 335913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54107, "largest_seqno": 54568, "table_properties": {"data_size": 333303, "index_size": 646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7000, "raw_average_key_size": 20, "raw_value_size": 328032, "raw_average_value_size": 967, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024687, "oldest_key_time": 1772024687, "file_creation_time": 1772024708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 4215 microseconds, and 1653 cpu microseconds.
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.849681) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 335913 bytes OK
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.849717) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851143) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851157) EVENT_LOG_v1 {"time_micros": 1772024708851152, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851177) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 400967, prev total WAL file size 400967, number of live WAL files 2.
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(328KB)], [125(10MB)]
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708851637, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11617996, "oldest_snapshot_seqno": -1}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7426 keys, 8329578 bytes, temperature: kUnknown
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708903896, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8329578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8283382, "index_size": 26499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 193756, "raw_average_key_size": 26, "raw_value_size": 8154456, "raw_average_value_size": 1098, "num_data_blocks": 1026, "num_entries": 7426, "num_filter_entries": 7426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.904283) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8329578 bytes
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.907310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 159.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.8 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(59.4) write-amplify(24.8) OK, records in: 7933, records dropped: 507 output_compression: NoCompression
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.907334) EVENT_LOG_v1 {"time_micros": 1772024708907322, "job": 76, "event": "compaction_finished", "compaction_time_micros": 52371, "compaction_time_cpu_micros": 29462, "output_level": 6, "num_output_files": 1, "total_output_size": 8329578, "num_input_records": 7933, "num_output_records": 7426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708907514, "job": 76, "event": "table_file_deletion", "file_number": 127}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708908791, "job": 76, "event": "table_file_deletion", "file_number": 125}
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:05:08 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
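
[editor's note] The compaction figures above are internally consistent: job 76 read the freshly flushed 335,913-byte L0 file plus the existing ~10.8 MiB L6 file (input_data_size 11,617,996) and wrote a single 8,329,578-byte L6 file, which reproduces the amplification numbers RocksDB reports:

    # Sketch: recompute job 76's amplification figures from the EVENT_LOG_v1
    # data above.
    l0_input = 335913          # file 127, flushed by job 75
    total_input = 11617996     # input_data_size for job 76 (files 127 + 125)
    output = 8329578           # file 128, written to L6

    print(round(output / l0_input, 1))                   # 24.8 (write-amplify)
    print(round((total_input + output) / l0_input, 1))   # 59.4 (read-write-amplify)
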
Feb 25 13:05:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:05:09 compute-0 ceph-mon[76335]: pgmap v2548: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 13:05:10 compute-0 nova_compute[244014]: 2026-02-25 13:05:10.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:10 compute-0 nova_compute[244014]: 2026-02-25 13:05:10.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 25 13:05:11 compute-0 ceph-mon[76335]: pgmap v2549: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 25 13:05:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 25 13:05:13 compute-0 ceph-mon[76335]: pgmap v2550: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 25 13:05:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:14 compute-0 ovn_controller[147040]: 2026-02-25T13:05:14Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 13:05:14 compute-0 ovn_controller[147040]: 2026-02-25T13:05:14Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 13:05:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 13:05:15 compute-0 ceph-mon[76335]: pgmap v2551: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 13:05:15 compute-0 nova_compute[244014]: 2026-02-25 13:05:15.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:15 compute-0 nova_compute[244014]: 2026-02-25 13:05:15.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 13:05:17 compute-0 ceph-mon[76335]: pgmap v2552: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 13:05:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Feb 25 13:05:19 compute-0 ceph-mon[76335]: pgmap v2553: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Feb 25 13:05:20 compute-0 nova_compute[244014]: 2026-02-25 13:05:20.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:20 compute-0 nova_compute[244014]: 2026-02-25 13:05:20.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:05:21 compute-0 ceph-mon[76335]: pgmap v2554: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:22.468 157394 DEBUG eventlet.wsgi.server [-] (157394) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:22.470 157394 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: Accept: */*
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: Connection: close
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: Content-Type: text/plain
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: Host: 169.254.169.254
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: User-Agent: curl/7.84.0
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: X-Forwarded-For: 10.100.0.3
Feb 25 13:05:22 compute-0 ovn_metadata_agent[157124]: X-Ovn-Network-Id: e5412d48-5ea8-48ec-b0d9-17bc49a104c2 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 25 13:05:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 13:05:23 compute-0 ceph-mon[76335]: pgmap v2555: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 13:05:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.350 157394 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 25 13:05:24 compute-0 haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383756]: 10.100.0.3:33532 [25/Feb/2026:13:05:22.464] listener listener/metadata 0/0/0/1886/1886 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.351 157394 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.8806088
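
[editor's note] The five slash-separated numbers in the haproxy line above are its per-request timers in milliseconds (conventionally Tq/Tw/Tc/Tr/Tt: request, queue, connect, server response, total), so the 1886 ms total agrees with the wsgi-side "time: 1.8806088" to within a few milliseconds of proxy overhead:

    # Sketch: parse the haproxy timer field from the log line above.
    tq, tw, tc, tr, tt = map(int, '0/0/0/1886/1886'.split('/'))
    print(tt / 1000.0)   # 1.886 s total, matching the agent's 1.8806088 s
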
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.482 157394 DEBUG eventlet.wsgi.server [-] (157394) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.483 157394 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: Accept: */*
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: Connection: close
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: Content-Length: 100
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: Content-Type: application/x-www-form-urlencoded
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: Host: 169.254.169.254
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: User-Agent: curl/7.84.0
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: X-Forwarded-For: 10.100.0.3
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: X-Ovn-Network-Id: e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.739 157394 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 25 13:05:24 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.740 157394 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2570720
Feb 25 13:05:24 compute-0 haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383756]: 10.100.0.3:33534 [25/Feb/2026:13:05:24.481] listener listener/metadata 0/0/0/259/259 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
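
[editor's note] Both exchanges originate from curl inside the guest (User-Agent: curl/7.84.0); per the config above, haproxy adds X-Forwarded-For (option forwardfor) and X-OVN-Network-ID before relaying to the agent's UNIX socket. Guest-side equivalents of the two requests, as a rough urllib sketch (the 100-byte 'test' payload matches the logged Content-Length):

    # Sketch: guest-side equivalents of the two metadata requests logged above.
    from urllib import request

    base = 'http://169.254.169.254'

    # GET /latest/meta-data/public-ipv4 -> the floating IP (192.168.122.189)
    with request.urlopen(base + '/latest/meta-data/public-ipv4') as resp:
        print(resp.status, resp.read().decode())

    # POST /openstack/2013-10-17/password with the 100-byte test payload
    data = ('test' * 25).encode()
    req = request.Request(base + '/openstack/2013-10-17/password', data=data,
                          headers={'Content-Type': 'application/x-www-form-urlencoded'})
    with request.urlopen(req) as resp:
        print(resp.status)
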
Feb 25 13:05:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 109 KiB/s wr, 27 op/s
Feb 25 13:05:25 compute-0 ceph-mon[76335]: pgmap v2556: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 109 KiB/s wr, 27 op/s
Feb 25 13:05:25 compute-0 nova_compute[244014]: 2026-02-25 13:05:25.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:25 compute-0 podman[383767]: 2026-02-25 13:05:25.785712815 +0000 UTC m=+0.050369129 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:05:25 compute-0 podman[383768]: 2026-02-25 13:05:25.811455871 +0000 UTC m=+0.078251255 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 13:05:25 compute-0 nova_compute[244014]: 2026-02-25 13:05:25.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 109 KiB/s wr, 28 op/s
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.077 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.078 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.080 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.080 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.082 244018 INFO nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Terminating instance
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.083 244018 DEBUG nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:05:27 compute-0 ceph-mon[76335]: pgmap v2557: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 109 KiB/s wr, 28 op/s
Feb 25 13:05:27 compute-0 kernel: tap6beae6ca-81 (unregistering): left promiscuous mode
Feb 25 13:05:27 compute-0 NetworkManager[49836]: <info>  [1772024727.1895] device (tap6beae6ca-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01610|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=0)
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01611|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down in Southbound
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01612|binding|INFO|Removing iface tap6beae6ca-81 ovn-installed in OVS
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.206 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.209 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.211 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7c35c4-5979-4e93-b259-fa72f3b78647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.214 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 namespace which is not needed anymore
Feb 25 13:05:27 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 25 13:05:27 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 12.593s CPU time.
Feb 25 13:05:27 compute-0 systemd-machined[210048]: Machine qemu-186-instance-00000098 terminated.
Feb 25 13:05:27 compute-0 kernel: tap6beae6ca-81: entered promiscuous mode
Feb 25 13:05:27 compute-0 kernel: tap6beae6ca-81 (unregistering): left promiscuous mode
Feb 25 13:05:27 compute-0 NetworkManager[49836]: <info>  [1772024727.3118] manager: (tap6beae6ca-81): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01613|binding|INFO|Claiming lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 for this chassis.
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01614|binding|INFO|6beae6ca-810c-48a5-8fcd-5f58732e64f4: Claiming fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.322 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01615|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 up in Southbound
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.327 244018 INFO nova.virt.libvirt.driver [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance destroyed successfully.
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01616|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=1)
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01617|if_status|INFO|Dropped 1 log messages in last 438 seconds (most recently, 438 seconds ago) due to excessive rate
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01618|if_status|INFO|Not setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down as sb is readonly
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.331 244018 DEBUG nova.objects.instance [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'resources' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01619|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=0)
Feb 25 13:05:27 compute-0 ovn_controller[147040]: 2026-02-25T13:05:27Z|01620|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down in Southbound
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.341 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.344 244018 DEBUG nova.virt.libvirt.vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:05:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.345 244018 DEBUG nova.network.os_vif_util [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.346 244018 DEBUG nova.network.os_vif_util [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.347 244018 DEBUG os_vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6beae6ca-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.358 244018 INFO os_vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81')
Feb 25 13:05:27 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : haproxy version is 2.8.14-c23fe91
Feb 25 13:05:27 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : path to executable is /usr/sbin/haproxy
Feb 25 13:05:27 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [WARNING]  (383754) : Exiting Master process...
Feb 25 13:05:27 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [ALERT]    (383754) : Current worker (383756) exited with code 143 (Terminated)
Feb 25 13:05:27 compute-0 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [WARNING]  (383754) : All workers exited. Exiting... (0)
Feb 25 13:05:27 compute-0 systemd[1]: libpod-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope: Deactivated successfully.
Feb 25 13:05:27 compute-0 podman[383835]: 2026-02-25 13:05:27.402783714 +0000 UTC m=+0.101805941 container died 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024-userdata-shm.mount: Deactivated successfully.
Feb 25 13:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-47e92cfb3f6edfebaff05e0277f03fba995760b8b6190fab1455ec297808617c-merged.mount: Deactivated successfully.
Feb 25 13:05:27 compute-0 podman[383835]: 2026-02-25 13:05:27.542980788 +0000 UTC m=+0.242003015 container cleanup 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:05:27 compute-0 systemd[1]: libpod-conmon-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope: Deactivated successfully.
Feb 25 13:05:27 compute-0 podman[383894]: 2026-02-25 13:05:27.675714511 +0000 UTC m=+0.105169707 container remove 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9909ff6f-97df-4582-a544-1b0d4c7a901e]: (4, ('Wed Feb 25 01:05:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 (8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024)\n8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024\nWed Feb 25 01:05:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 (8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024)\n8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.684 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[907c047b-c3e3-47ef-bd1e-889f377778cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.686 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5412d48-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 kernel: tape5412d48-50: left promiscuous mode
Feb 25 13:05:27 compute-0 nova_compute[244014]: 2026-02-25 13:05:27.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.697 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70dbdce8-1f6c-4753-9f6c-b604a14db2ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31e7a596-8df0-4d09-b5b4-ce6686fa9ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.715 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d181dd07-1cf9-49b6-b43b-bf3daf1d120f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c4569c-4c15-4ad5-a43c-75a39566461f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667122, 'reachable_time': 37626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383909, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 systemd[1]: run-netns-ovnmeta\x2de5412d48\x2d5ea8\x2d48ec\x2db0d9\x2d17bc49a104c2.mount: Deactivated successfully.
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.741 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.741 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[34346772-d730-49d6-ad6d-630413e35cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.743 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.745 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.746 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[505f4bb6-9a45-4ff3-ab2c-5d0abcc11379]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.747 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 13:05:27 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.748 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7cec63-08e2-4a89-ada9-979e071ed1ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.017 244018 INFO nova.virt.libvirt.driver [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deleting instance files /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8_del
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.018 244018 INFO nova.virt.libvirt.driver [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deletion of /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8_del complete
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.096 244018 INFO nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 1.01 seconds to destroy the instance on the hypervisor.
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.097 244018 DEBUG oslo.service.loopingcall [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.097 244018 DEBUG nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:05:28 compute-0 nova_compute[244014]: 2026-02-25 13:05:28.098 244018 DEBUG nova.network.neutron [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:05:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 110 KiB/s wr, 37 op/s
Feb 25 13:05:29 compute-0 ceph-mon[76335]: pgmap v2558: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 110 KiB/s wr, 37 op/s
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.293 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.295 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.297 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 13:05:30 compute-0 nova_compute[244014]: 2026-02-25 13:05:30.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 10 op/s
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:05:31
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root', 'vms', '.mgr']
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:05:31 compute-0 ceph-mon[76335]: pgmap v2559: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 10 op/s
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:05:31 compute-0 nova_compute[244014]: 2026-02-25 13:05:31.655 244018 DEBUG nova.network.neutron [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:05:31 compute-0 nova_compute[244014]: 2026-02-25 13:05:31.684 244018 INFO nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 3.59 seconds to deallocate network for instance.
Feb 25 13:05:31 compute-0 nova_compute[244014]: 2026-02-25 13:05:31.790 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:31 compute-0 nova_compute[244014]: 2026-02-25 13:05:31.791 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:31 compute-0 nova_compute[244014]: 2026-02-25 13:05:31.854 244018 DEBUG oslo_concurrency.processutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.393 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-deleted-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.394 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.395 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.395 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.396 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.396 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 13:05:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.397 244018 WARNING nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received unexpected event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with vm_state deleted and task_state None.
Feb 25 13:05:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821226578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.416 244018 DEBUG oslo_concurrency.processutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.422 244018 DEBUG nova.compute.provider_tree [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.444 244018 DEBUG nova.scheduler.client.report [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.471 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.510 244018 INFO nova.scheduler.client.report [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Deleted allocations for instance 65c18a8f-0df3-4e29-b22d-fbc9362683f8
Feb 25 13:05:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1821226578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:32 compute-0 nova_compute[244014]: 2026-02-25 13:05:32.583 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 13:05:33 compute-0 ceph-mon[76335]: pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 13:05:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:33 compute-0 nova_compute[244014]: 2026-02-25 13:05:33.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:33.923 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:05:33 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:33.924 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:05:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 13:05:35 compute-0 ceph-mon[76335]: pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 13:05:35 compute-0 nova_compute[244014]: 2026-02-25 13:05:35.752 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:35 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:35.926 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:05:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 13:05:37 compute-0 ceph-mon[76335]: pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 13:05:37 compute-0 nova_compute[244014]: 2026-02-25 13:05:37.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:37 compute-0 nova_compute[244014]: 2026-02-25 13:05:37.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:37 compute-0 nova_compute[244014]: 2026-02-25 13:05:37.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Feb 25 13:05:39 compute-0 ceph-mon[76335]: pgmap v2563: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:05:40 compute-0 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 13:05:41 compute-0 ceph-mon[76335]: pgmap v2564: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 13:05:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:05:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196505153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.528 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.712 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98725803941488GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.813 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.842 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.843 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.861 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.894 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:05:41 compute-0 nova_compute[244014]: 2026-02-25 13:05:41.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/196505153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.326 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024727.3249142, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.326 244018 INFO nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Stopped (Lifecycle Event)
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.348 244018 DEBUG nova.compute.manager [None req-d3138dc2-63a6-4321-a940-07f7342911d2 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:05:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808517837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.466 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.472 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.490 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.521 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:05:42 compute-0 nova_compute[244014]: 2026-02-25 13:05:42.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.705684116624595e-05 of space, bias 1.0, pg target 0.005117052349873786 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024945035750507153 of space, bias 1.0, pg target 0.7483510725152146 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068823102287942e-06 of space, bias 4.0, pg target 0.001688258772274553 quantized to 16 (current 16)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:05:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:05:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 13:05:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1808517837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:43 compute-0 ceph-mon[76335]: pgmap v2565: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 13:05:43 compute-0 nova_compute[244014]: 2026-02-25 13:05:43.522 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:43 compute-0 nova_compute[244014]: 2026-02-25 13:05:43.523 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:05:43 compute-0 nova_compute[244014]: 2026-02-25 13:05:43.524 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:05:43 compute-0 nova_compute[244014]: 2026-02-25 13:05:43.544 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:05:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:44 compute-0 nova_compute[244014]: 2026-02-25 13:05:44.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:45 compute-0 ceph-mon[76335]: pgmap v2566: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:45 compute-0 nova_compute[244014]: 2026-02-25 13:05:45.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:46 compute-0 nova_compute[244014]: 2026-02-25 13:05:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:47 compute-0 ceph-mon[76335]: pgmap v2567: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:47 compute-0 nova_compute[244014]: 2026-02-25 13:05:47.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:05:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:05:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:05:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:05:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:05:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:05:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:48 compute-0 nova_compute[244014]: 2026-02-25 13:05:48.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:49 compute-0 ceph-mon[76335]: pgmap v2568: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:50 compute-0 nova_compute[244014]: 2026-02-25 13:05:50.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:50 compute-0 nova_compute[244014]: 2026-02-25 13:05:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:51 compute-0 ceph-mon[76335]: pgmap v2569: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.080 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.081 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.099 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.176 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.177 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.186 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.187 244018 INFO nova.compute.claims [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Claim successful on node compute-0.ctlplane.example.com
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.307 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:05:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347998741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.894 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.901 244018 DEBUG nova.compute.provider_tree [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.920 244018 DEBUG nova.scheduler.client.report [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.940 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.941 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 13:05:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3347998741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.984 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 13:05:52 compute-0 nova_compute[244014]: 2026-02-25 13:05:52.985 244018 DEBUG nova.network.neutron [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.005 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 13:05:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.027 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.169 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.171 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.171 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating image(s)
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.200 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.228 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.256 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.260 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.346 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.347 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.348 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.348 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.375 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:53 compute-0 nova_compute[244014]: 2026-02-25 13:05:53.380 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 44f7664a-bc33-4251-9857-884c305df488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:53 compute-0 sudo[384077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:05:53 compute-0 sudo[384077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:53 compute-0 sudo[384077]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:53 compute-0 sudo[384117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:05:53 compute-0 sudo[384117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:54 compute-0 sudo[384117]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:05:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:05:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.233 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 44f7664a-bc33-4251-9857-884c305df488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:54 compute-0 sudo[384176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:05:54 compute-0 sudo[384176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:54 compute-0 sudo[384176]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.317 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] resizing rbd image 44f7664a-bc33-4251-9857-884c305df488_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 13:05:54 compute-0 sudo[384237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:05:54 compute-0 sudo[384237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.557 244018 DEBUG nova.objects.instance [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.583 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.584 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Ensure instance console log exists: /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.585 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.586 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.586 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:54 compute-0 podman[384309]: 2026-02-25 13:05:54.693654379 +0000 UTC m=+0.057401459 container create 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:05:54 compute-0 podman[384309]: 2026-02-25 13:05:54.665512416 +0000 UTC m=+0.029259566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:54 compute-0 systemd[1]: Started libpod-conmon-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope.
Feb 25 13:05:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:54 compute-0 nova_compute[244014]: 2026-02-25 13:05:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:54 compute-0 podman[384309]: 2026-02-25 13:05:54.96958054 +0000 UTC m=+0.333327690 container init 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:05:54 compute-0 podman[384309]: 2026-02-25 13:05:54.980651222 +0000 UTC m=+0.344398322 container start 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:05:54 compute-0 mystifying_mclaren[384325]: 167 167
Feb 25 13:05:54 compute-0 systemd[1]: libpod-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope: Deactivated successfully.
Feb 25 13:05:55 compute-0 podman[384309]: 2026-02-25 13:05:55.013577419 +0000 UTC m=+0.377324569 container attach 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:05:55 compute-0 podman[384309]: 2026-02-25 13:05:55.01642698 +0000 UTC m=+0.380174080 container died 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:05:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.044 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-06ac11d5deecd53e49b56438ff9f51280bb645fea97318b76ec0a0e348a2944f-merged.mount: Deactivated successfully.
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:05:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:05:55 compute-0 ceph-mon[76335]: pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.165 244018 DEBUG nova.network.neutron [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.166 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.170 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.179 244018 WARNING nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.188 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.191 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.197 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.198 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.199 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.200 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.201 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.201 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.202 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.202 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.203 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.203 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.204 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.204 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.205 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.205 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.211 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:55 compute-0 podman[384309]: 2026-02-25 13:05:55.234038546 +0000 UTC m=+0.597785636 container remove 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:05:55 compute-0 systemd[1]: libpod-conmon-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope: Deactivated successfully.
Feb 25 13:05:55 compute-0 podman[384363]: 2026-02-25 13:05:55.437890215 +0000 UTC m=+0.062964917 container create e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:05:55 compute-0 systemd[1]: Started libpod-conmon-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope.
Feb 25 13:05:55 compute-0 podman[384363]: 2026-02-25 13:05:55.412743245 +0000 UTC m=+0.037818027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:55 compute-0 podman[384363]: 2026-02-25 13:05:55.562933511 +0000 UTC m=+0.188008213 container init e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:05:55 compute-0 podman[384363]: 2026-02-25 13:05:55.572791269 +0000 UTC m=+0.197865981 container start e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:05:55 compute-0 podman[384363]: 2026-02-25 13:05:55.634024005 +0000 UTC m=+0.259098717 container attach e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:05:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000528062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.806 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.837 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:55 compute-0 nova_compute[244014]: 2026-02-25 13:05:55.841 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:56 compute-0 heuristic_hoover[384389]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:05:56 compute-0 heuristic_hoover[384389]: --> All data devices are unavailable
Feb 25 13:05:56 compute-0 systemd[1]: libpod-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope: Deactivated successfully.
Feb 25 13:05:56 compute-0 podman[384450]: 2026-02-25 13:05:56.0948584 +0000 UTC m=+0.032326382 container died e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7-merged.mount: Deactivated successfully.
Feb 25 13:05:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2000528062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:56 compute-0 podman[384450]: 2026-02-25 13:05:56.277898062 +0000 UTC m=+0.215366054 container remove e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:05:56 compute-0 podman[384449]: 2026-02-25 13:05:56.281351679 +0000 UTC m=+0.206410021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:05:56 compute-0 systemd[1]: libpod-conmon-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope: Deactivated successfully.
Feb 25 13:05:56 compute-0 sudo[384237]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:56 compute-0 podman[384456]: 2026-02-25 13:05:56.363385943 +0000 UTC m=+0.288294071 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:05:56 compute-0 sudo[384501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:05:56 compute-0 sudo[384501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:56 compute-0 sudo[384501]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 13:05:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496961262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.415 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.417 244018 DEBUG nova.objects.instance [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.438 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] End _get_guest_xml xml=<domain type="kvm">
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <uuid>44f7664a-bc33-4251-9857-884c305df488</uuid>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <name>instance-00000099</name>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <memory>131072</memory>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <vcpu>1</vcpu>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <metadata>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1169940556</nova:name>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:creationTime>2026-02-25 13:05:55</nova:creationTime>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:flavor name="m1.nano">
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:memory>128</nova:memory>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:disk>1</nova:disk>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:swap>0</nova:swap>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:ephemeral>0</nova:ephemeral>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:vcpus>1</nova:vcpus>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </nova:flavor>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:owner>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:user uuid="c4171ec1b0ec456bbea3ceb6441b3e0c">tempest-AggregatesAdminTestJSON-1553068081-project-member</nova:user>
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <nova:project uuid="a573180da6e44eebadfb343d41eb8105">tempest-AggregatesAdminTestJSON-1553068081</nova:project>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </nova:owner>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <nova:ports/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </nova:instance>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </metadata>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <sysinfo type="smbios">
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <system>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="manufacturer">RDO</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="product">OpenStack Compute</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="serial">44f7664a-bc33-4251-9857-884c305df488</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="uuid">44f7664a-bc33-4251-9857-884c305df488</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <entry name="family">Virtual Machine</entry>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </system>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </sysinfo>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <os>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <boot dev="hd"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <smbios mode="sysinfo"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </os>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <features>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <acpi/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <apic/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <vmcoreinfo/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </features>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <clock offset="utc">
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <timer name="pit" tickpolicy="delay"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <timer name="hpet" present="no"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </clock>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <cpu mode="host-model" match="exact">
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <topology sockets="1" cores="1" threads="1"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </cpu>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   <devices>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <disk type="network" device="disk">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/44f7664a-bc33-4251-9857-884c305df488_disk">
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </source>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <target dev="vda" bus="virtio"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <disk type="network" device="cdrom">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <driver type="raw" cache="none"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <source protocol="rbd" name="vms/44f7664a-bc33-4251-9857-884c305df488_disk.config">
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <host name="192.168.122.100" port="6789"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </source>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <auth username="openstack">
Feb 25 13:05:56 compute-0 nova_compute[244014]:         <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       </auth>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <target dev="sda" bus="sata"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </disk>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <serial type="pty">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <log file="/var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/console.log" append="off"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </serial>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <video>
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <model type="virtio"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </video>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <input type="tablet" bus="usb"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <rng model="virtio">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <backend model="random">/dev/urandom</backend>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </rng>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="pci" model="pcie-root-port"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <controller type="usb" index="0"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     <memballoon model="virtio">
Feb 25 13:05:56 compute-0 nova_compute[244014]:       <stats period="10"/>
Feb 25 13:05:56 compute-0 nova_compute[244014]:     </memballoon>
Feb 25 13:05:56 compute-0 nova_compute[244014]:   </devices>
Feb 25 13:05:56 compute-0 nova_compute[244014]: </domain>
Feb 25 13:05:56 compute-0 nova_compute[244014]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 13:05:56 compute-0 sudo[384532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:05:56 compute-0 sudo[384532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.573 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.574 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.578 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Using config drive
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.621 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.725299448 +0000 UTC m=+0.062213145 container create d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.68350041 +0000 UTC m=+0.020413997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:56 compute-0 systemd[1]: Started libpod-conmon-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope.
Feb 25 13:05:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.878149488 +0000 UTC m=+0.215063055 container init d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.884310172 +0000 UTC m=+0.221223739 container start d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:05:56 compute-0 focused_montalcini[384604]: 167 167
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.890437855 +0000 UTC m=+0.227351422 container attach d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:05:56 compute-0 systemd[1]: libpod-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope: Deactivated successfully.
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.891783163 +0000 UTC m=+0.228696830 container died d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.899 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating config drive at /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config
Feb 25 13:05:56 compute-0 nova_compute[244014]: 2026-02-25 13:05:56.902 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbo97o_fa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d1291ae2908d17ebdc5055a4b298f77b32fd5cb0cb98bc17d932f0347c2a22a-merged.mount: Deactivated successfully.
Feb 25 13:05:56 compute-0 podman[384588]: 2026-02-25 13:05:56.997626227 +0000 UTC m=+0.334539824 container remove d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:05:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 183 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Feb 25 13:05:57 compute-0 systemd[1]: libpod-conmon-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope: Deactivated successfully.
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.045 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbo97o_fa" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.080 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.084 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config 44f7664a-bc33-4251-9857-884c305df488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.16901348 +0000 UTC m=+0.035737049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.362266639 +0000 UTC m=+0.229017639 container create 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:05:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1496961262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 13:05:57 compute-0 ceph-mon[76335]: pgmap v2572: 305 pgs: 305 active+clean; 183 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Feb 25 13:05:57 compute-0 systemd[1]: Started libpod-conmon-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope.
Feb 25 13:05:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.604806549 +0000 UTC m=+0.471530118 container init 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.612074304 +0000 UTC m=+0.478797873 container start 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.625341018 +0000 UTC m=+0.492064587 container attach 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.633 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config 44f7664a-bc33-4251-9857-884c305df488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:05:57 compute-0 nova_compute[244014]: 2026-02-25 13:05:57.634 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deleting local config drive /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config because it was imported into RBD.
Feb 25 13:05:57 compute-0 systemd-machined[210048]: New machine qemu-187-instance-00000099.
Feb 25 13:05:57 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]: {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     "0": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "devices": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "/dev/loop3"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             ],
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_name": "ceph_lv0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_size": "21470642176",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "name": "ceph_lv0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "tags": {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_name": "ceph",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.crush_device_class": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.encrypted": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.objectstore": "bluestore",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_id": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.vdo": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.with_tpm": "0"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             },
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "vg_name": "ceph_vg0"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         }
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     ],
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     "1": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "devices": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "/dev/loop4"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             ],
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_name": "ceph_lv1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_size": "21470642176",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "name": "ceph_lv1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "tags": {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_name": "ceph",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.crush_device_class": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.encrypted": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.objectstore": "bluestore",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_id": "1",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.vdo": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.with_tpm": "0"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             },
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "vg_name": "ceph_vg1"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         }
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     ],
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     "2": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "devices": [
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "/dev/loop5"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             ],
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_name": "ceph_lv2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_size": "21470642176",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "name": "ceph_lv2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "tags": {
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.cluster_name": "ceph",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.crush_device_class": "",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.encrypted": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.objectstore": "bluestore",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osd_id": "2",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.vdo": "0",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:                 "ceph.with_tpm": "0"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             },
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "type": "block",
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:             "vg_name": "ceph_vg2"
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:         }
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]:     ]
Feb 25 13:05:57 compute-0 beautiful_mahavira[384683]: }
Feb 25 13:05:57 compute-0 podman[384652]: 2026-02-25 13:05:57.930178794 +0000 UTC m=+0.796902403 container died 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:05:57 compute-0 systemd[1]: libpod-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope: Deactivated successfully.
Feb 25 13:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76-merged.mount: Deactivated successfully.
Feb 25 13:05:58 compute-0 podman[384652]: 2026-02-25 13:05:58.090776132 +0000 UTC m=+0.957499661 container remove 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 13:05:58 compute-0 systemd[1]: libpod-conmon-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope: Deactivated successfully.
Feb 25 13:05:58 compute-0 sudo[384532]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:58 compute-0 sudo[384758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:05:58 compute-0 sudo[384758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:58 compute-0 sudo[384758]: pam_unix(sudo:session): session closed for user root
Feb 25 13:05:58 compute-0 sudo[384787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:05:58 compute-0 sudo[384787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.306 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024758.3062577, 44f7664a-bc33-4251-9857-884c305df488 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.308 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Resumed (Lifecycle Event)
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.314 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.315 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.319 244018 INFO nova.virt.libvirt.driver [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance spawned successfully.
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.320 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.335 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.341 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.345 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.345 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.346 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.346 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.347 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.347 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.379 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.380 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024758.312484, 44f7664a-bc33-4251-9857-884c305df488 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.380 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Started (Lifecycle Event)
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.410 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.421 244018 INFO nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 5.25 seconds to spawn the instance on the hypervisor.
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.422 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.485 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.539 244018 INFO nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 6.39 seconds to build instance.
Feb 25 13:05:58 compute-0 nova_compute[244014]: 2026-02-25 13:05:58.557 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.59146487 +0000 UTC m=+0.047139279 container create 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:05:58 compute-0 systemd[1]: Started libpod-conmon-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope.
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.566032264 +0000 UTC m=+0.021706693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.682488707 +0000 UTC m=+0.138163196 container init 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.690993037 +0000 UTC m=+0.146667446 container start 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.696952815 +0000 UTC m=+0.152627274 container attach 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:05:58 compute-0 affectionate_bartik[384841]: 167 167
Feb 25 13:05:58 compute-0 systemd[1]: libpod-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope: Deactivated successfully.
Feb 25 13:05:58 compute-0 conmon[384841]: conmon 502cefe468ad4ee09d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope/container/memory.events
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.697913152 +0000 UTC m=+0.153587561 container died 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b92eece74bae07cae4c0368b4de6007e84249d40fff741ca7b90f01a332e81-merged.mount: Deactivated successfully.
Feb 25 13:05:58 compute-0 podman[384825]: 2026-02-25 13:05:58.763837631 +0000 UTC m=+0.219512040 container remove 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:05:58 compute-0 systemd[1]: libpod-conmon-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope: Deactivated successfully.
Feb 25 13:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:05:58 compute-0 podman[384867]: 2026-02-25 13:05:58.946805641 +0000 UTC m=+0.063463261 container create fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:05:58 compute-0 systemd[1]: Started libpod-conmon-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope.
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:58.914429338 +0000 UTC m=+0.031087018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:05:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:59.06026293 +0000 UTC m=+0.176920600 container init fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:59.070111438 +0000 UTC m=+0.186769078 container start fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:59.126596751 +0000 UTC m=+0.243254371 container attach fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:05:59 compute-0 ceph-mon[76335]: pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.593 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.594 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.596 244018 INFO nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Terminating instance
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquired lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 13:05:59 compute-0 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 13:05:59 compute-0 lvm[384958]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:05:59 compute-0 lvm[384958]: VG ceph_vg0 finished
Feb 25 13:05:59 compute-0 lvm[384961]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:05:59 compute-0 lvm[384961]: VG ceph_vg1 finished
Feb 25 13:05:59 compute-0 lvm[384962]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:05:59 compute-0 lvm[384962]: VG ceph_vg2 finished
Feb 25 13:05:59 compute-0 elated_carver[384883]: {}
Feb 25 13:05:59 compute-0 systemd[1]: libpod-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Deactivated successfully.
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:59.882896907 +0000 UTC m=+0.999554547 container died fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:05:59 compute-0 systemd[1]: libpod-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Consumed 1.070s CPU time.
Feb 25 13:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d-merged.mount: Deactivated successfully.
Feb 25 13:05:59 compute-0 podman[384867]: 2026-02-25 13:05:59.981105687 +0000 UTC m=+1.097763287 container remove fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:06:00 compute-0 sudo[384787]: pam_unix(sudo:session): session closed for user root
Feb 25 13:06:00 compute-0 systemd[1]: libpod-conmon-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Deactivated successfully.
Feb 25 13:06:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:06:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:06:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:06:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:06:00 compute-0 sudo[384977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:06:00 compute-0 sudo[384977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:06:00 compute-0 sudo[384977]: pam_unix(sudo:session): session closed for user root
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.169 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.455 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.471 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Releasing lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.471 244018 DEBUG nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 13:06:00 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Feb 25 13:06:00 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 2.765s CPU time.
Feb 25 13:06:00 compute-0 systemd-machined[210048]: Machine qemu-187-instance-00000099 terminated.
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.699 244018 INFO nova.virt.libvirt.driver [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance destroyed successfully.
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.700 244018 DEBUG nova.objects.instance [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'resources' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 13:06:00 compute-0 nova_compute[244014]: 2026-02-25 13:06:00.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:06:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:06:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:06:01 compute-0 ceph-mon[76335]: pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.564 244018 INFO nova.virt.libvirt.driver [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deleting instance files /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488_del
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.565 244018 INFO nova.virt.libvirt.driver [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deletion of /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488_del complete
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.611 244018 INFO nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 1.14 seconds to destroy the instance on the hypervisor.
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG oslo.service.loopingcall [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 13:06:01 compute-0 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.131 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.152 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.169 244018 INFO nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 0.56 seconds to deallocate network for instance.
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.218 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.301 244018 DEBUG oslo_concurrency.processutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:06:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510927282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.850 244018 DEBUG oslo_concurrency.processutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.857 244018 DEBUG nova.compute.provider_tree [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:06:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3510927282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.889 244018 DEBUG nova.scheduler.client.report [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.924 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:06:02 compute-0 nova_compute[244014]: 2026-02-25 13:06:02.964 244018 INFO nova.scheduler.client.report [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Deleted allocations for instance 44f7664a-bc33-4251-9857-884c305df488
Feb 25 13:06:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 13:06:03 compute-0 nova_compute[244014]: 2026-02-25 13:06:03.052 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:06:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:03 compute-0 ceph-mon[76335]: pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 13:06:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 13:06:05 compute-0 ceph-mon[76335]: pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 13:06:05 compute-0 nova_compute[244014]: 2026-02-25 13:06:05.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:06 compute-0 nova_compute[244014]: 2026-02-25 13:06:06.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 13:06:07 compute-0 ceph-mon[76335]: pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 13:06:07 compute-0 nova_compute[244014]: 2026-02-25 13:06:07.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 101 op/s
Feb 25 13:06:09 compute-0 ceph-mon[76335]: pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 101 op/s
Feb 25 13:06:10 compute-0 nova_compute[244014]: 2026-02-25 13:06:10.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 13:06:11 compute-0 ceph-mon[76335]: pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 13:06:12 compute-0 nova_compute[244014]: 2026-02-25 13:06:12.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 13:06:13 compute-0 ceph-mon[76335]: pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 13:06:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:14 compute-0 ovn_controller[147040]: 2026-02-25T13:06:14Z|01621|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 25 13:06:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 13:06:15 compute-0 ceph-mon[76335]: pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 13:06:15 compute-0 nova_compute[244014]: 2026-02-25 13:06:15.697 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024760.6957903, 44f7664a-bc33-4251-9857-884c305df488 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 13:06:15 compute-0 nova_compute[244014]: 2026-02-25 13:06:15.697 244018 INFO nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Stopped (Lifecycle Event)
Feb 25 13:06:15 compute-0 nova_compute[244014]: 2026-02-25 13:06:15.756 244018 DEBUG nova.compute.manager [None req-eb285364-1e0b-4e25-b374-3e758321ec70 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 13:06:15 compute-0 nova_compute[244014]: 2026-02-25 13:06:15.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 13:06:17 compute-0 nova_compute[244014]: 2026-02-25 13:06:17.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:18 compute-0 ceph-mon[76335]: pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 13:06:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:20 compute-0 ceph-mon[76335]: pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:20 compute-0 nova_compute[244014]: 2026-02-25 13:06:20.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:22 compute-0 ceph-mon[76335]: pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:22 compute-0 nova_compute[244014]: 2026-02-25 13:06:22.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:24 compute-0 ceph-mon[76335]: pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:25 compute-0 nova_compute[244014]: 2026-02-25 13:06:25.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:26 compute-0 ceph-mon[76335]: pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:26 compute-0 podman[385046]: 2026-02-25 13:06:26.742737679 +0000 UTC m=+0.074419949 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 13:06:26 compute-0 podman[385047]: 2026-02-25 13:06:26.777596242 +0000 UTC m=+0.109325594 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 13:06:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:27 compute-0 nova_compute[244014]: 2026-02-25 13:06:27.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:28 compute-0 ceph-mon[76335]: pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:30 compute-0 ceph-mon[76335]: pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:30 compute-0 nova_compute[244014]: 2026-02-25 13:06:30.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:06:31
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'backups']
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:06:32 compute-0 ceph-mon[76335]: pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:32 compute-0 nova_compute[244014]: 2026-02-25 13:06:32.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:34 compute-0 ceph-mon[76335]: pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:06:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:34.555 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:06:34 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:34.556 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:06:34 compute-0 nova_compute[244014]: 2026-02-25 13:06:34.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:35 compute-0 nova_compute[244014]: 2026-02-25 13:06:35.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:36 compute-0 ceph-mon[76335]: pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:36 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:36.558 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:06:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:37 compute-0 nova_compute[244014]: 2026-02-25 13:06:37.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:38 compute-0 ceph-mon[76335]: pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:40 compute-0 ceph-mon[76335]: pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:40 compute-0 nova_compute[244014]: 2026-02-25 13:06:40.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:06:41 compute-0 nova_compute[244014]: 2026-02-25 13:06:41.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:06:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.465 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:06:42 compute-0 ceph-mon[76335]: pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1948775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.679 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.681 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3527MB free_disk=59.98725803941488GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.682 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.683 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:06:42 compute-0 nova_compute[244014]: 2026-02-25 13:06:42.772 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.705684116624595e-05 of space, bias 1.0, pg target 0.005117052349873786 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494511648089794 of space, bias 1.0, pg target 0.7483534944269382 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068823102287942e-06 of space, bias 4.0, pg target 0.001688258772274553 quantized to 16 (current 16)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:06:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:06:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:06:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375962641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:43 compute-0 nova_compute[244014]: 2026-02-25 13:06:43.315 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:06:43 compute-0 nova_compute[244014]: 2026-02-25 13:06:43.325 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:06:43 compute-0 nova_compute[244014]: 2026-02-25 13:06:43.346 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:06:43 compute-0 nova_compute[244014]: 2026-02-25 13:06:43.379 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:06:43 compute-0 nova_compute[244014]: 2026-02-25 13:06:43.379 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:06:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/375962641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:06:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:44 compute-0 ceph-mon[76335]: pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:45 compute-0 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:45 compute-0 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:06:45 compute-0 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:06:45 compute-0 nova_compute[244014]: 2026-02-25 13:06:45.406 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:06:45 compute-0 nova_compute[244014]: 2026-02-25 13:06:45.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:46 compute-0 ceph-mon[76335]: pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:46 compute-0 nova_compute[244014]: 2026-02-25 13:06:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:46 compute-0 nova_compute[244014]: 2026-02-25 13:06:46.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:47 compute-0 nova_compute[244014]: 2026-02-25 13:06:47.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:06:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:06:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:06:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:06:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:06:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:06:48 compute-0 ceph-mon[76335]: pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:49 compute-0 nova_compute[244014]: 2026-02-25 13:06:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:50 compute-0 ceph-mon[76335]: pgmap v2598: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:50 compute-0 nova_compute[244014]: 2026-02-25 13:06:50.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:50 compute-0 nova_compute[244014]: 2026-02-25 13:06:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:51 compute-0 nova_compute[244014]: 2026-02-25 13:06:51.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:52 compute-0 nova_compute[244014]: 2026-02-25 13:06:52.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:52 compute-0 ceph-mon[76335]: pgmap v2599: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:53 compute-0 nova_compute[244014]: 2026-02-25 13:06:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:53 compute-0 nova_compute[244014]: 2026-02-25 13:06:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:06:54 compute-0 ceph-mon[76335]: pgmap v2600: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:06:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:55 compute-0 nova_compute[244014]: 2026-02-25 13:06:55.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:56 compute-0 ceph-mon[76335]: pgmap v2601: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:56 compute-0 nova_compute[244014]: 2026-02-25 13:06:56.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:56 compute-0 nova_compute[244014]: 2026-02-25 13:06:56.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:06:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:57 compute-0 nova_compute[244014]: 2026-02-25 13:06:57.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:06:57 compute-0 podman[385135]: 2026-02-25 13:06:57.722634904 +0000 UTC m=+0.059469498 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:06:57 compute-0 podman[385136]: 2026-02-25 13:06:57.747213917 +0000 UTC m=+0.083945268 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:06:58 compute-0 ceph-mon[76335]: pgmap v2602: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:06:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:06:59 compute-0 ovn_controller[147040]: 2026-02-25T13:06:59Z|01622|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 25 13:07:00 compute-0 sudo[385181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:07:00 compute-0 sudo[385181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:00 compute-0 sudo[385181]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:00 compute-0 sudo[385206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:07:00 compute-0 sudo[385206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:00 compute-0 sudo[385206]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: pgmap v2603: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:00 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:07:00 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:07:00 compute-0 nova_compute[244014]: 2026-02-25 13:07:00.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:07:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:07:00 compute-0 sudo[385262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:07:00 compute-0 sudo[385262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:00 compute-0 sudo[385262]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:00 compute-0 sudo[385287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:07:00 compute-0 sudo[385287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.238001733 +0000 UTC m=+0.085210533 container create dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.192434188 +0000 UTC m=+0.039643038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:01 compute-0 systemd[1]: Started libpod-conmon-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope.
Feb 25 13:07:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.362872225 +0000 UTC m=+0.210081085 container init dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.368868654 +0000 UTC m=+0.216077484 container start dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:07:01 compute-0 systemd[1]: libpod-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope: Deactivated successfully.
Feb 25 13:07:01 compute-0 peaceful_lovelace[385341]: 167 167
Feb 25 13:07:01 compute-0 conmon[385341]: conmon dcd689f82f30c43a9090 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope/container/memory.events
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.401862434 +0000 UTC m=+0.249071294 container attach dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.402519863 +0000 UTC m=+0.249728693 container died dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2db2506ae02e201926c0a2351d92c9747f9be73b5069a1389544ae506a8fe2c-merged.mount: Deactivated successfully.
Feb 25 13:07:01 compute-0 podman[385324]: 2026-02-25 13:07:01.480146402 +0000 UTC m=+0.327355192 container remove dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:07:01 compute-0 systemd[1]: libpod-conmon-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope: Deactivated successfully.
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:01 compute-0 podman[385365]: 2026-02-25 13:07:01.620400027 +0000 UTC m=+0.030371358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:01 compute-0 podman[385365]: 2026-02-25 13:07:01.716391334 +0000 UTC m=+0.126362675 container create 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:07:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:07:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:07:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:07:01 compute-0 systemd[1]: Started libpod-conmon-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope.
Feb 25 13:07:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:01 compute-0 podman[385365]: 2026-02-25 13:07:01.92299675 +0000 UTC m=+0.332968131 container init 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 13:07:01 compute-0 podman[385365]: 2026-02-25 13:07:01.931921221 +0000 UTC m=+0.341892562 container start 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:07:01 compute-0 podman[385365]: 2026-02-25 13:07:01.93825373 +0000 UTC m=+0.348225131 container attach 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:07:02 compute-0 wonderful_mestorf[385382]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:07:02 compute-0 wonderful_mestorf[385382]: --> All data devices are unavailable
Feb 25 13:07:02 compute-0 systemd[1]: libpod-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope: Deactivated successfully.
Feb 25 13:07:02 compute-0 conmon[385382]: conmon 558b0f8c67d4982d2cdb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope/container/memory.events
Feb 25 13:07:02 compute-0 nova_compute[244014]: 2026-02-25 13:07:02.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:02 compute-0 podman[385365]: 2026-02-25 13:07:02.425137829 +0000 UTC m=+0.835109170 container died 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370-merged.mount: Deactivated successfully.
Feb 25 13:07:02 compute-0 podman[385365]: 2026-02-25 13:07:02.535705097 +0000 UTC m=+0.945676408 container remove 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:07:02 compute-0 systemd[1]: libpod-conmon-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope: Deactivated successfully.
Feb 25 13:07:02 compute-0 sudo[385287]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:02 compute-0 sudo[385416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:07:02 compute-0 sudo[385416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:02 compute-0 sudo[385416]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:02 compute-0 sudo[385441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:07:02 compute-0 sudo[385441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb 25 13:07:02 compute-0 ceph-mon[76335]: pgmap v2604: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb 25 13:07:02 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.002667754 +0000 UTC m=+0.047603392 container create 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:07:03 compute-0 systemd[1]: Started libpod-conmon-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope.
Feb 25 13:07:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:02.985242874 +0000 UTC m=+0.030178502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.109211349 +0000 UTC m=+0.154146987 container init 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.118520331 +0000 UTC m=+0.163455929 container start 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.122651368 +0000 UTC m=+0.167587056 container attach 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:07:03 compute-0 confident_engelbart[385495]: 167 167
Feb 25 13:07:03 compute-0 systemd[1]: libpod-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope: Deactivated successfully.
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.125139708 +0000 UTC m=+0.170075316 container died 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5de943fdc16be057fc3e9a5c6d1cec65a5110d98a7719a2ae37d5b543e303a4a-merged.mount: Deactivated successfully.
Feb 25 13:07:03 compute-0 podman[385479]: 2026-02-25 13:07:03.169155659 +0000 UTC m=+0.214091287 container remove 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:07:03 compute-0 systemd[1]: libpod-conmon-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope: Deactivated successfully.
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.350278256 +0000 UTC m=+0.061995109 container create 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:07:03 compute-0 systemd[1]: Started libpod-conmon-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope.
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.323671046 +0000 UTC m=+0.035387929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.453878948 +0000 UTC m=+0.165595841 container init 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.46211213 +0000 UTC m=+0.173828963 container start 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.466814423 +0000 UTC m=+0.178531336 container attach 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]: {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     "0": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "devices": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "/dev/loop3"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             ],
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_name": "ceph_lv0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_size": "21470642176",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "name": "ceph_lv0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "tags": {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_name": "ceph",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.crush_device_class": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.encrypted": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.objectstore": "bluestore",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_id": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.vdo": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.with_tpm": "0"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             },
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "vg_name": "ceph_vg0"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         }
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     ],
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     "1": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "devices": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "/dev/loop4"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             ],
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_name": "ceph_lv1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_size": "21470642176",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "name": "ceph_lv1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "tags": {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_name": "ceph",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.crush_device_class": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.encrypted": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.objectstore": "bluestore",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_id": "1",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.vdo": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.with_tpm": "0"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             },
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "vg_name": "ceph_vg1"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         }
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     ],
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     "2": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "devices": [
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "/dev/loop5"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             ],
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_name": "ceph_lv2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_size": "21470642176",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "name": "ceph_lv2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "tags": {
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.cluster_name": "ceph",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.crush_device_class": "",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.encrypted": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.objectstore": "bluestore",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osd_id": "2",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.vdo": "0",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:                 "ceph.with_tpm": "0"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             },
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "type": "block",
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:             "vg_name": "ceph_vg2"
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:         }
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]:     ]
Feb 25 13:07:03 compute-0 hopeful_cartwright[385536]: }
Feb 25 13:07:03 compute-0 systemd[1]: libpod-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope: Deactivated successfully.
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.773355177 +0000 UTC m=+0.485072000 container died 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b-merged.mount: Deactivated successfully.
Feb 25 13:07:03 compute-0 podman[385520]: 2026-02-25 13:07:03.820764854 +0000 UTC m=+0.532481697 container remove 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:07:03 compute-0 systemd[1]: libpod-conmon-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope: Deactivated successfully.
Feb 25 13:07:03 compute-0 sudo[385441]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:03 compute-0 sudo[385558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:07:03 compute-0 sudo[385558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:03 compute-0 sudo[385558]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:03 compute-0 ceph-mon[76335]: osdmap e290: 3 total, 3 up, 3 in
Feb 25 13:07:03 compute-0 sudo[385583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:07:03 compute-0 sudo[385583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.296163529 +0000 UTC m=+0.047948963 container create 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:07:04 compute-0 systemd[1]: Started libpod-conmon-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope.
Feb 25 13:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.368783197 +0000 UTC m=+0.120568641 container init 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.37420795 +0000 UTC m=+0.125993374 container start 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.279076138 +0000 UTC m=+0.030861582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:04 compute-0 zen_wing[385636]: 167 167
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.377904484 +0000 UTC m=+0.129689908 container attach 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:07:04 compute-0 systemd[1]: libpod-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope: Deactivated successfully.
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.378482021 +0000 UTC m=+0.130267445 container died 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-99b896afc94016e8366c4b303b2747dfe353788549f4392563de8f36bcd1ed8f-merged.mount: Deactivated successfully.
Feb 25 13:07:04 compute-0 podman[385620]: 2026-02-25 13:07:04.417435309 +0000 UTC m=+0.169220733 container remove 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:07:04 compute-0 systemd[1]: libpod-conmon-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope: Deactivated successfully.
Feb 25 13:07:04 compute-0 podman[385661]: 2026-02-25 13:07:04.55646333 +0000 UTC m=+0.055149007 container create a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 13:07:04 compute-0 systemd[1]: Started libpod-conmon-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope.
Feb 25 13:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:07:04 compute-0 podman[385661]: 2026-02-25 13:07:04.532312689 +0000 UTC m=+0.030998386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:07:04 compute-0 podman[385661]: 2026-02-25 13:07:04.659382842 +0000 UTC m=+0.158068609 container init a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:07:04 compute-0 podman[385661]: 2026-02-25 13:07:04.668090087 +0000 UTC m=+0.166775804 container start a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:07:04 compute-0 podman[385661]: 2026-02-25 13:07:04.679960532 +0000 UTC m=+0.178646299 container attach a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:07:04 compute-0 ceph-mon[76335]: pgmap v2606: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:05 compute-0 lvm[385756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:07:05 compute-0 lvm[385756]: VG ceph_vg1 finished
Feb 25 13:07:05 compute-0 lvm[385755]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:07:05 compute-0 lvm[385755]: VG ceph_vg0 finished
Feb 25 13:07:05 compute-0 lvm[385758]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:07:05 compute-0 lvm[385758]: VG ceph_vg2 finished
Feb 25 13:07:05 compute-0 interesting_hofstadter[385677]: {}
Feb 25 13:07:05 compute-0 systemd[1]: libpod-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Deactivated successfully.
Feb 25 13:07:05 compute-0 podman[385661]: 2026-02-25 13:07:05.544562243 +0000 UTC m=+1.043248000 container died a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:07:05 compute-0 systemd[1]: libpod-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Consumed 1.102s CPU time.
Feb 25 13:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb-merged.mount: Deactivated successfully.
Feb 25 13:07:05 compute-0 podman[385661]: 2026-02-25 13:07:05.781801963 +0000 UTC m=+1.280487680 container remove a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:07:05 compute-0 nova_compute[244014]: 2026-02-25 13:07:05.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:05 compute-0 systemd[1]: libpod-conmon-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Deactivated successfully.
Feb 25 13:07:05 compute-0 sudo[385583]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:07:05 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:07:05 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 13:07:05 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:05 compute-0 sudo[385776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:07:05 compute-0 sudo[385776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:07:05 compute-0 sudo[385776]: pam_unix(sudo:session): session closed for user root
Feb 25 13:07:06 compute-0 ceph-mon[76335]: pgmap v2607: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:07:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 81 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 7 op/s
Feb 25 13:07:07 compute-0 nova_compute[244014]: 2026-02-25 13:07:07.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:08 compute-0 ceph-mon[76335]: pgmap v2608: 305 pgs: 305 active+clean; 81 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 7 op/s
Feb 25 13:07:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:10 compute-0 nova_compute[244014]: 2026-02-25 13:07:10.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:10 compute-0 ceph-mon[76335]: pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:12 compute-0 nova_compute[244014]: 2026-02-25 13:07:12.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:12 compute-0 ceph-mon[76335]: pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb 25 13:07:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb 25 13:07:13 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb 25 13:07:14 compute-0 ceph-mon[76335]: pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:14 compute-0 ceph-mon[76335]: osdmap e291: 3 total, 3 up, 3 in
Feb 25 13:07:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:15 compute-0 nova_compute[244014]: 2026-02-25 13:07:15.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:17 compute-0 ceph-mon[76335]: pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:07:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 921 B/s wr, 18 op/s
Feb 25 13:07:17 compute-0 nova_compute[244014]: 2026-02-25 13:07:17.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:18 compute-0 ceph-mon[76335]: pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 921 B/s wr, 18 op/s
Feb 25 13:07:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:20 compute-0 ceph-mon[76335]: pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:20 compute-0 nova_compute[244014]: 2026-02-25 13:07:20.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:22 compute-0 ceph-mon[76335]: pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:22 compute-0 nova_compute[244014]: 2026-02-25 13:07:22.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:24 compute-0 ceph-mon[76335]: pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:25 compute-0 nova_compute[244014]: 2026-02-25 13:07:25.795 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:26 compute-0 ceph-mon[76335]: pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:27 compute-0 nova_compute[244014]: 2026-02-25 13:07:27.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:28 compute-0 ceph-mon[76335]: pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:28 compute-0 podman[385801]: 2026-02-25 13:07:28.769368753 +0000 UTC m=+0.102064609 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 25 13:07:28 compute-0 podman[385802]: 2026-02-25 13:07:28.770063832 +0000 UTC m=+0.105504606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:07:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:30 compute-0 nova_compute[244014]: 2026-02-25 13:07:30.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:30 compute-0 ceph-mon[76335]: pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:07:31
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root']
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:07:32 compute-0 nova_compute[244014]: 2026-02-25 13:07:32.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:32 compute-0 ceph-mon[76335]: pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:34 compute-0 ceph-mon[76335]: pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:35 compute-0 nova_compute[244014]: 2026-02-25 13:07:35.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:37 compute-0 ceph-mon[76335]: pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:37 compute-0 nova_compute[244014]: 2026-02-25 13:07:37.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:38 compute-0 ceph-mon[76335]: pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:40 compute-0 ceph-mon[76335]: pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:40 compute-0 nova_compute[244014]: 2026-02-25 13:07:40.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:42 compute-0 ceph-mon[76335]: pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:42 compute-0 sshd-session[385845]: Invalid user sol from 80.94.92.186 port 60390
Feb 25 13:07:42 compute-0 nova_compute[244014]: 2026-02-25 13:07:42.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:42 compute-0 sshd-session[385845]: Connection closed by invalid user sol 80.94.92.186 port 60390 [preauth]
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7142632731533132e-05 of space, bias 1.0, pg target 0.00514278981945994 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006710760464711434 of space, bias 1.0, pg target 0.20132281394134302 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4125800128093941e-06 of space, bias 4.0, pg target 0.001695096015371273 quantized to 16 (current 16)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:07:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:07:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:07:43 compute-0 nova_compute[244014]: 2026-02-25 13:07:43.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:07:44 compute-0 ceph-mon[76335]: pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:07:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448772498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.479 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.647 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.649 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.987252892926335GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.650 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.650 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.750 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:07:44 compute-0 nova_compute[244014]: 2026-02-25 13:07:44.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:07:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:07:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/501854444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.332 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.337 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.358 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.361 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.361 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:07:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1448772498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:07:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/501854444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:07:45 compute-0 nova_compute[244014]: 2026-02-25 13:07:45.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.364 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.364 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.365 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.393 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:07:46 compute-0 ceph-mon[76335]: pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:46 compute-0 nova_compute[244014]: 2026-02-25 13:07:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:47 compute-0 sshd-session[385891]: Received disconnect from 103.59.161.112 port 58082:11:  [preauth]
Feb 25 13:07:47 compute-0 sshd-session[385891]: Disconnected from authenticating user root 103.59.161.112 port 58082 [preauth]
Feb 25 13:07:47 compute-0 nova_compute[244014]: 2026-02-25 13:07:47.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:07:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:07:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:07:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:07:48 compute-0 ceph-mon[76335]: pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:07:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:07:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.895067) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868895106, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1546, "num_deletes": 256, "total_data_size": 2455540, "memory_usage": 2492128, "flush_reason": "Manual Compaction"}
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868910061, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 2420424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54569, "largest_seqno": 56114, "table_properties": {"data_size": 2413208, "index_size": 4222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14838, "raw_average_key_size": 19, "raw_value_size": 2398685, "raw_average_value_size": 3193, "num_data_blocks": 188, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024709, "oldest_key_time": 1772024709, "file_creation_time": 1772024868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15088 microseconds, and 6483 cpu microseconds.
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.910153) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 2420424 bytes OK
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.910177) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913828) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913851) EVENT_LOG_v1 {"time_micros": 1772024868913844, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913874) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2448782, prev total WAL file size 2448782, number of live WAL files 2.
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.914778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2363KB)], [128(8134KB)]
Feb 25 13:07:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868914824, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10750002, "oldest_snapshot_seqno": -1}
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7649 keys, 10632648 bytes, temperature: kUnknown
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869044625, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10632648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10582117, "index_size": 30322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 199282, "raw_average_key_size": 26, "raw_value_size": 10446360, "raw_average_value_size": 1365, "num_data_blocks": 1189, "num_entries": 7649, "num_filter_entries": 7649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.044991) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10632648 bytes
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.048908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.7 rd, 81.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.4) OK, records in: 8177, records dropped: 528 output_compression: NoCompression
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.048938) EVENT_LOG_v1 {"time_micros": 1772024869048924, "job": 78, "event": "compaction_finished", "compaction_time_micros": 129939, "compaction_time_cpu_micros": 20211, "output_level": 6, "num_output_files": 1, "total_output_size": 10632648, "num_input_records": 8177, "num_output_records": 7649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869049574, "job": 78, "event": "table_file_deletion", "file_number": 130}
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869051114, "job": 78, "event": "table_file_deletion", "file_number": 128}
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.914721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:07:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:49 compute-0 nova_compute[244014]: 2026-02-25 13:07:49.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:50 compute-0 nova_compute[244014]: 2026-02-25 13:07:50.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:50 compute-0 ceph-mon[76335]: pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:51 compute-0 nova_compute[244014]: 2026-02-25 13:07:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:52 compute-0 nova_compute[244014]: 2026-02-25 13:07:52.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:53 compute-0 ceph-mon[76335]: pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:53 compute-0 nova_compute[244014]: 2026-02-25 13:07:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:53 compute-0 nova_compute[244014]: 2026-02-25 13:07:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:07:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.046 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.046 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:07:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:55 compute-0 ceph-mon[76335]: pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:55 compute-0 nova_compute[244014]: 2026-02-25 13:07:55.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:56 compute-0 ceph-mon[76335]: pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:57 compute-0 nova_compute[244014]: 2026-02-25 13:07:57.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:07:57 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 25 13:07:57 compute-0 systemd[1]: virtsecretd.service: Consumed 1.165s CPU time.
Feb 25 13:07:57 compute-0 nova_compute[244014]: 2026-02-25 13:07:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:57 compute-0 nova_compute[244014]: 2026-02-25 13:07:57.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:07:58 compute-0 ceph-mon[76335]: pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:07:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:07:59 compute-0 podman[385894]: 2026-02-25 13:07:59.764096317 +0000 UTC m=+0.100764542 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 13:07:59 compute-0 podman[385895]: 2026-02-25 13:07:59.769812168 +0000 UTC m=+0.106655638 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:08:00 compute-0 ceph-mon[76335]: pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:00 compute-0 nova_compute[244014]: 2026-02-25 13:08:00.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:02 compute-0 ceph-mon[76335]: pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:02 compute-0 nova_compute[244014]: 2026-02-25 13:08:02.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
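_set_new_cache_sizes is the monitor's memory autotuner (driven by mon_memory_target) re-splitting its budget between the RocksDB cache and the incremental/full osdmap caches; the figures are bytes. A quick conversion as a sanity check:

    # Bytes-to-MiB conversion of the allocations logged above.
    for name, b in {"cache_size": 1020054731, "inc_alloc": 343932928,
                    "full_alloc": 348127232, "kv_alloc": 318767104}.items():
        print(f"{name}: {b / 2**20:.0f} MiB")
    # cache_size comes out to ~973 MiB, i.e. roughly a 1 GiB budget on this host.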
Feb 25 13:08:04 compute-0 ceph-mon[76335]: pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:05 compute-0 nova_compute[244014]: 2026-02-25 13:08:05.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:06 compute-0 sudo[385936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:08:06 compute-0 sudo[385936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:06 compute-0 sudo[385936]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:06 compute-0 sudo[385961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:08:06 compute-0 sudo[385961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:06 compute-0 sudo[385961]: pam_unix(sudo:session): session closed for user root
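The sudo pair above is the cephadm mgr module driving this host over ssh: it first resolves python3 with `which`, then runs the versioned cephadm binary it staged under /var/lib/ceph/<fsid>/ with the gather-facts subcommand, which prints host facts as JSON. A sketch of running the same collection by hand, with the binary path copied verbatim from the log (the JSON keys are cephadm-version specific, hence the defensive .get lookups):

    import json, subprocess

    CEPHADM = ("/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    # Same invocation the mgr issued above, minus the --timeout wrapper.
    out = subprocess.run(["sudo", "/bin/python3", CEPHADM, "gather-facts"],
                         capture_output=True, text=True, check=True).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("kernel"))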
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:08:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:08:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
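This burst of mon_commands is the cephadm OSD-deployment preamble: the mgr fetches a minimal ceph.conf plus the client.admin and client.bootstrap-osd keyrings to inject into the ceph-volume container, persists its osd_remove_queue, and checks for destroyed OSD ids it could reuse. The same queries can be replayed from an admin CLI when debugging; a sketch:

    import subprocess

    # Each of these mirrors a dispatched mon_command in the audit log above.
    for cmd in (["ceph", "config", "generate-minimal-conf"],
                ["ceph", "auth", "get", "client.bootstrap-osd"],
                ["ceph", "osd", "tree", "destroyed", "--format", "json"]):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)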
Feb 25 13:08:06 compute-0 sudo[386017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:08:06 compute-0 sudo[386017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:06 compute-0 sudo[386017]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:06 compute-0 sudo[386042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:08:06 compute-0 sudo[386042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.139679539 +0000 UTC m=+0.036938503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.237640521 +0000 UTC m=+0.134899395 container create b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:08:07 compute-0 systemd[1]: Started libpod-conmon-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope.
Feb 25 13:08:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.430736135 +0000 UTC m=+0.327995109 container init b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.43942151 +0000 UTC m=+0.336680434 container start b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:08:07 compute-0 loving_galileo[386098]: 167 167
Feb 25 13:08:07 compute-0 systemd[1]: libpod-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope: Deactivated successfully.
Feb 25 13:08:07 compute-0 nova_compute[244014]: 2026-02-25 13:08:07.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.487224808 +0000 UTC m=+0.384483772 container attach b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.487768163 +0000 UTC m=+0.385027097 container died b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:08:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd8e24f435283ecb0a620845d1f31bd85aff4108bf39629a86d2bad1a88fd88e-merged.mount: Deactivated successfully.
Feb 25 13:08:07 compute-0 podman[386081]: 2026-02-25 13:08:07.78338934 +0000 UTC m=+0.680648254 container remove b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:08:07 compute-0 systemd[1]: libpod-conmon-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope: Deactivated successfully.
Feb 25 13:08:07 compute-0 podman[386125]: 2026-02-25 13:08:07.989389659 +0000 UTC m=+0.096894044 container create d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:07.915254298 +0000 UTC m=+0.022758653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:08 compute-0 systemd[1]: Started libpod-conmon-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope.
Feb 25 13:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
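The xfs warnings above are benign: the overlay layers for this container sit on an xfs filesystem created without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff), and the kernel notes this on every bind-mount remount. A quick way to check whether the backing filesystem has bigtime enabled, assuming an xfsprogs recent enough to report the flag:

    import subprocess

    # xfs_info prints "bigtime=1" in its meta-data section when the
    # filesystem supports post-2038 timestamps.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime enabled:", "bigtime=1" in info)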
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:08.167174132 +0000 UTC m=+0.274678537 container init d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:08.17488548 +0000 UTC m=+0.282389855 container start d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:08.220530927 +0000 UTC m=+0.328035292 container attach d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:08:08 compute-0 trusting_diffie[386141]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:08:08 compute-0 trusting_diffie[386141]: --> All data devices are unavailable
Feb 25 13:08:08 compute-0 systemd[1]: libpod-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope: Deactivated successfully.
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:08.681098784 +0000 UTC m=+0.788603179 container died d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:08:08 compute-0 ceph-mon[76335]: pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c-merged.mount: Deactivated successfully.
Feb 25 13:08:08 compute-0 nova_compute[244014]: 2026-02-25 13:08:08.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:08 compute-0 podman[386125]: 2026-02-25 13:08:08.992036762 +0000 UTC m=+1.099541127 container remove d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:08:08 compute-0 systemd[1]: libpod-conmon-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope: Deactivated successfully.
Feb 25 13:08:09 compute-0 sudo[386042]: pam_unix(sudo:session): session closed for user root
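The ceph-volume lvm batch run that just closed ("passed data devices: 0 physical, 3 LVM" then "All data devices are unavailable") exited without creating anything: all three logical volumes were filtered out as unavailable, which here is the expected idempotent outcome, since the lvm list that follows shows them already carrying OSDs 0, 1 and 2. A dry run makes the filtering visible; a sketch using the same LV paths from the log (ceph-volume needs root, so run it inside a cephadm shell or via sudo):

    import subprocess

    # --report previews batch's decisions without touching the devices.
    report = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        capture_output=True, text=True).stdout
    print(report)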
Feb 25 13:08:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:09 compute-0 sudo[386175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:08:09 compute-0 sudo[386175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:09 compute-0 sudo[386175]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:09 compute-0 sudo[386200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:08:09 compute-0 sudo[386200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.485559549 +0000 UTC m=+0.084674068 container create 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.426622087 +0000 UTC m=+0.025736666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:09 compute-0 systemd[1]: Started libpod-conmon-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope.
Feb 25 13:08:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.647494046 +0000 UTC m=+0.246608655 container init 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.656322035 +0000 UTC m=+0.255436564 container start 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:08:09 compute-0 eager_goldstine[386257]: 167 167
Feb 25 13:08:09 compute-0 systemd[1]: libpod-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope: Deactivated successfully.
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.69375404 +0000 UTC m=+0.292868639 container attach 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.694252264 +0000 UTC m=+0.293366793 container died 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-529385d85c231b1acd84d9e4d2c770c61609a83aef408ab17fc864ce67349b29-merged.mount: Deactivated successfully.
Feb 25 13:08:09 compute-0 podman[386239]: 2026-02-25 13:08:09.939991324 +0000 UTC m=+0.539105843 container remove 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:08:09 compute-0 systemd[1]: libpod-conmon-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope: Deactivated successfully.
Feb 25 13:08:10 compute-0 podman[386283]: 2026-02-25 13:08:10.160220234 +0000 UTC m=+0.085325437 container create a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:08:10 compute-0 podman[386283]: 2026-02-25 13:08:10.107311132 +0000 UTC m=+0.032416385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:10 compute-0 systemd[1]: Started libpod-conmon-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope.
Feb 25 13:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:10 compute-0 podman[386283]: 2026-02-25 13:08:10.692596856 +0000 UTC m=+0.617702119 container init a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:08:10 compute-0 podman[386283]: 2026-02-25 13:08:10.701170148 +0000 UTC m=+0.626275321 container start a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:08:10 compute-0 podman[386283]: 2026-02-25 13:08:10.730883466 +0000 UTC m=+0.655988729 container attach a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:08:10 compute-0 nova_compute[244014]: 2026-02-25 13:08:10.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:10 compute-0 ceph-mon[76335]: pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:10 compute-0 clever_noyce[386299]: {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     "0": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "devices": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "/dev/loop3"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             ],
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_name": "ceph_lv0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_size": "21470642176",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "name": "ceph_lv0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "tags": {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_name": "ceph",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.crush_device_class": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.encrypted": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.objectstore": "bluestore",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_id": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.vdo": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.with_tpm": "0"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             },
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "vg_name": "ceph_vg0"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         }
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     ],
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     "1": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "devices": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "/dev/loop4"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             ],
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_name": "ceph_lv1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_size": "21470642176",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "name": "ceph_lv1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "tags": {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_name": "ceph",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.crush_device_class": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.encrypted": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.objectstore": "bluestore",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_id": "1",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.vdo": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.with_tpm": "0"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             },
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "vg_name": "ceph_vg1"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         }
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     ],
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     "2": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "devices": [
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "/dev/loop5"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             ],
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_name": "ceph_lv2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_size": "21470642176",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "name": "ceph_lv2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "tags": {
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.cluster_name": "ceph",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.crush_device_class": "",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.encrypted": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.objectstore": "bluestore",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osd_id": "2",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.vdo": "0",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:                 "ceph.with_tpm": "0"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             },
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "type": "block",
Feb 25 13:08:10 compute-0 clever_noyce[386299]:             "vg_name": "ceph_vg2"
Feb 25 13:08:10 compute-0 clever_noyce[386299]:         }
Feb 25 13:08:10 compute-0 clever_noyce[386299]:     ]
Feb 25 13:08:10 compute-0 clever_noyce[386299]: }
Feb 25 13:08:11 compute-0 podman[386283]: 2026-02-25 13:08:11.033192529 +0000 UTC m=+0.958297742 container died a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:08:11 compute-0 systemd[1]: libpod-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope: Deactivated successfully.
Feb 25 13:08:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838-merged.mount: Deactivated successfully.
Feb 25 13:08:11 compute-0 podman[386283]: 2026-02-25 13:08:11.23463771 +0000 UTC m=+1.159742913 container remove a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:08:11 compute-0 systemd[1]: libpod-conmon-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope: Deactivated successfully.
Feb 25 13:08:11 compute-0 sudo[386200]: pam_unix(sudo:session): session closed for user root
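The JSON block above is the output of `ceph-volume lvm list --format json`: a dict keyed by OSD id, each value a list of logical volumes with their backing devices and ceph.* LV tags (cluster fsid, osd_fsid, objectstore, encryption flags). A small sketch that summarises it when run on the host (as root, or via the cephadm wrapper shown in the log):

    import json, subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']}, "
                  f"devices={','.join(lv['devices'])})")
    # On this host: osd.0 -> /dev/ceph_vg0/ceph_lv0 on /dev/loop3, and so on.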
Feb 25 13:08:11 compute-0 sudo[386322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:08:11 compute-0 sudo[386322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:11 compute-0 sudo[386322]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:11 compute-0 sudo[386347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:08:11 compute-0 sudo[386347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.79278544 +0000 UTC m=+0.107488642 container create 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.720276765 +0000 UTC m=+0.034980037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:11 compute-0 systemd[1]: Started libpod-conmon-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope.
Feb 25 13:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.94316386 +0000 UTC m=+0.257867072 container init 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.949764336 +0000 UTC m=+0.264467548 container start 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:08:11 compute-0 stupefied_mendeleev[386400]: 167 167
Feb 25 13:08:11 compute-0 systemd[1]: libpod-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope: Deactivated successfully.
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.970330056 +0000 UTC m=+0.285033328 container attach 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:08:11 compute-0 podman[386384]: 2026-02-25 13:08:11.971018746 +0000 UTC m=+0.285721958 container died 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e504318cdf1566d6ae4dc58cf107572b6170b28ef134a4e3d73270ff1ebdc910-merged.mount: Deactivated successfully.
Feb 25 13:08:12 compute-0 podman[386384]: 2026-02-25 13:08:12.017373823 +0000 UTC m=+0.332077035 container remove 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 13:08:12 compute-0 systemd[1]: libpod-conmon-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope: Deactivated successfully.
Feb 25 13:08:12 compute-0 podman[386425]: 2026-02-25 13:08:12.213751271 +0000 UTC m=+0.051085532 container create 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:08:12 compute-0 systemd[1]: Started libpod-conmon-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope.
Feb 25 13:08:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:08:12 compute-0 podman[386425]: 2026-02-25 13:08:12.193960963 +0000 UTC m=+0.031295264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
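The four xfs warnings above fire because the overlay layers sit on an XFS filesystem created without the bigtime feature, so inode timestamps top out at the signed 32-bit epoch limit the kernel prints (0x7fffffff); filesystems remade with bigtime push the limit into the 25th century. What that limit means in calendar terms, computed from nothing but the hex value in the log:

```python
from datetime import datetime, timezone

# 0x7fffffff: largest signed 32-bit Unix timestamp, as printed by the kernel.
limit = 0x7fffffff
print(limit)                                        # 2147483647
print(datetime.fromtimestamp(limit, timezone.utc))  # 2038-01-19 03:14:07+00:00
```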
Feb 25 13:08:12 compute-0 podman[386425]: 2026-02-25 13:08:12.31161285 +0000 UTC m=+0.148947131 container init 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 13:08:12 compute-0 podman[386425]: 2026-02-25 13:08:12.324417841 +0000 UTC m=+0.161752102 container start 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:08:12 compute-0 podman[386425]: 2026-02-25 13:08:12.328018393 +0000 UTC m=+0.165352644 container attach 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:08:12 compute-0 nova_compute[244014]: 2026-02-25 13:08:12.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:12 compute-0 ceph-mon[76335]: pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
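These pgmap digests from ceph-mon and ceph-mgr repeat every couple of seconds for the rest of the section and always carry the same fields. A small parser for them, a sketch assuming exactly the layout shown on the line above:

```python
import re

line = ("pgmap v2641: 305 pgs: 305 active+clean; "
        "41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail")

# Field layout taken verbatim from the ceph-mon/ceph-mgr lines in this log.
m = re.match(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
    line,
)
print(m.group("ver"), m.group("states"), m.group("avail"))
# -> 2641 305 active+clean 59 GiB
```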
Feb 25 13:08:12 compute-0 lvm[386520]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:08:12 compute-0 lvm[386519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:08:12 compute-0 lvm[386520]: VG ceph_vg1 finished
Feb 25 13:08:12 compute-0 lvm[386519]: VG ceph_vg0 finished
Feb 25 13:08:13 compute-0 lvm[386522]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:08:13 compute-0 lvm[386522]: VG ceph_vg2 finished
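ceph_vg0 through ceph_vg2 each complete as soon as their single loop-device PV (/dev/loop3, loop4, loop5) appears, which is LVM's event-driven autoactivation at work; loop-backed PVs also imply the three OSDs on this node live on loop files rather than real disks. VG completeness can be confirmed through lvm2's JSON report interface; a sketch, assuming stock lvm2 on the host:

```python
import json
import subprocess

# `pvs --reportformat json` ships with lvm2; pv_missing is empty for online PVs.
out = subprocess.run(
    ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name,pv_missing"],
    capture_output=True, text=True, check=True,
).stdout
for pv in json.loads(out)["report"][0]["pv"]:
    print(pv["pv_name"], pv["vg_name"], "missing" if pv["pv_missing"] else "online")
```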
Feb 25 13:08:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:13 compute-0 condescending_solomon[386441]: {}
Feb 25 13:08:13 compute-0 systemd[1]: libpod-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Deactivated successfully.
Feb 25 13:08:13 compute-0 systemd[1]: libpod-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Consumed 1.159s CPU time.
Feb 25 13:08:13 compute-0 podman[386425]: 2026-02-25 13:08:13.154242961 +0000 UTC m=+0.991577222 container died 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2-merged.mount: Deactivated successfully.
Feb 25 13:08:13 compute-0 podman[386425]: 2026-02-25 13:08:13.510062125 +0000 UTC m=+1.347396406 container remove 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:08:13 compute-0 systemd[1]: libpod-conmon-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Deactivated successfully.
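That completes a full create -> init -> start -> attach -> died -> remove cycle in about a second ("Consumed 1.159s CPU time"): cephadm's mgr module keeps launching short-lived, randomly named ceph containers (stupefied_mendeleev, condescending_solomon) to gather host facts, and the `{}` printed by condescending_solomon at 13:08:13 is apparently that probe's stdout. The same lifecycle can be watched outside the journal through podman's event stream; a sketch using the podman CLI already on this host:

```python
import json
import subprocess

# `podman events --format json` emits one JSON object per event, mirroring the
# container create/init/start/attach/died/remove journal lines above.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "image=quay.io/ceph/ceph"],
    stdout=subprocess.PIPE, text=True,
)
for raw in proc.stdout:
    ev = json.loads(raw)
    print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
```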
Feb 25 13:08:13 compute-0 sudo[386347]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:08:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:08:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
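Immediately after the probe exits, the mgr persists what it learned into the mon's key-value store: the two config-key set commands above write the cephadm host record and its device inventory for compute-0. The stored values can be read back with the ceph CLI; a sketch, assuming the value is JSON, as cephadm's cached inventory normally is:

```python
import json
import subprocess

# Key name copied verbatim from the mon_command in the log above.
key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(
    ["ceph", "config-key", "get", key],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out))  # the device inventory cephadm just stored
```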
Feb 25 13:08:13 compute-0 sudo[386539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:08:13 compute-0 sudo[386539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:08:13 compute-0 sudo[386539]: pam_unix(sudo:session): session closed for user root
Feb 25 13:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:14 compute-0 ceph-mon[76335]: pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:08:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:08:14 compute-0 nova_compute[244014]: 2026-02-25 13:08:14.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:15 compute-0 nova_compute[244014]: 2026-02-25 13:08:15.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:15 compute-0 nova_compute[244014]: 2026-02-25 13:08:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:15 compute-0 nova_compute[244014]: 2026-02-25 13:08:15.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:08:15 compute-0 nova_compute[244014]: 2026-02-25 13:08:15.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
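Every "Running periodic task ComputeManager._..." line in this section comes from the same place: oslo.service's periodic-task loop, which dispatches decorated methods on the compute manager and logs each dispatch at periodic_task.py:210. A minimal sketch of that pattern, assuming only the public oslo_service API rather than nova's actual task bodies:

```python
from oslo_config import cfg
from oslo_service import periodic_task


class Manager(periodic_task.PeriodicTasks):
    """Minimal stand-in for nova's ComputeManager."""

    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _run_pending_deletes(self, context):
        # Nova's real task walks soft-deleted instances; this stub just prints.
        print("Cleaning up deleted instances")


mgr = Manager()
mgr.run_periodic_tasks(context=None)  # the loop the DEBUG lines above come from
```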
Feb 25 13:08:16 compute-0 ceph-mon[76335]: pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:17 compute-0 nova_compute[244014]: 2026-02-25 13:08:17.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:18 compute-0 ceph-mon[76335]: pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:20 compute-0 nova_compute[244014]: 2026-02-25 13:08:20.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:20 compute-0 ceph-mon[76335]: pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:21 compute-0 nova_compute[244014]: 2026-02-25 13:08:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:22 compute-0 nova_compute[244014]: 2026-02-25 13:08:22.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:22 compute-0 ceph-mon[76335]: pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:24 compute-0 ceph-mon[76335]: pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:25 compute-0 nova_compute[244014]: 2026-02-25 13:08:25.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:27 compute-0 ceph-mon[76335]: pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.096739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907096783, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 557, "num_deletes": 251, "total_data_size": 602471, "memory_usage": 613688, "flush_reason": "Manual Compaction"}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Feb 25 13:08:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907122076, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 597222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56115, "largest_seqno": 56671, "table_properties": {"data_size": 594100, "index_size": 1093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7188, "raw_average_key_size": 19, "raw_value_size": 587906, "raw_average_value_size": 1571, "num_data_blocks": 48, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024869, "oldest_key_time": 1772024869, "file_creation_time": 1772024907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 25425 microseconds, and 3111 cpu microseconds.
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.122153) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 597222 bytes OK
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.122195) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131439) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131480) EVENT_LOG_v1 {"time_micros": 1772024907131469, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 599353, prev total WAL file size 599353, number of live WAL files 2.
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.132304) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(583KB)], [131(10MB)]
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907132354, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11229870, "oldest_snapshot_seqno": -1}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7510 keys, 9512100 bytes, temperature: kUnknown
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907202519, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9512100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9463504, "index_size": 28707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 197067, "raw_average_key_size": 26, "raw_value_size": 9331131, "raw_average_value_size": 1242, "num_data_blocks": 1114, "num_entries": 7510, "num_filter_entries": 7510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.203340) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9512100 bytes
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.213858) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 135.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(34.7) write-amplify(15.9) OK, records in: 8023, records dropped: 513 output_compression: NoCompression
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.213898) EVENT_LOG_v1 {"time_micros": 1772024907213881, "job": 80, "event": "compaction_finished", "compaction_time_micros": 70291, "compaction_time_cpu_micros": 21564, "output_level": 6, "num_output_files": 1, "total_output_size": 9512100, "num_input_records": 8023, "num_output_records": 7510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907214468, "job": 80, "event": "table_file_deletion", "file_number": 133}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907216243, "job": 80, "event": "table_file_deletion", "file_number": 131}
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.132206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:08:27 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
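The flush/compaction pair above is internally consistent and worth decoding: JOB 79 flushes a 597,222-byte L0 table (#133) carrying 251 deletions, then JOB 80 merges it with the existing 10 MB L6 table (#131) into a single new L6 table (#134), dropping 513 of 8,023 records along the way. The amplification and throughput figures in the compaction summary reproduce exactly from the EVENT_LOG values:

```python
# Figures copied from the rocksdb JOB 80 lines above.
l0_input    = 597_222      # new L0 table #133, bytes
input_total = 11_229_870   # input_data_size (L0 #133 + L6 #131)
output      = 9_512_100    # total_output_size (new L6 table #134)
micros      = 70_291       # compaction_time_micros

print(round(output / l0_input, 1))                  # 15.9 -> write-amplify
print(round((input_total + output) / l0_input, 1))  # 34.7 -> read-write-amplify
# bytes per microsecond is numerically (decimal) MB per second:
print(round(input_total / micros, 1))               # 159.8 -> "MB/sec: ... rd"
print(round(output / micros, 1))                    # 135.3 -> "... wr"
```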
Feb 25 13:08:27 compute-0 nova_compute[244014]: 2026-02-25 13:08:27.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:28 compute-0 ceph-mon[76335]: pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:28 compute-0 nova_compute[244014]: 2026-02-25 13:08:28.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:28 compute-0 nova_compute[244014]: 2026-02-25 13:08:28.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:08:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:30 compute-0 ceph-mon[76335]: pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:30 compute-0 podman[386564]: 2026-02-25 13:08:30.783261034 +0000 UTC m=+0.047417465 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 13:08:30 compute-0 podman[386565]: 2026-02-25 13:08:30.799256395 +0000 UTC m=+0.062219992 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
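Both EDPM agent containers check in healthy (health_failing_streak=0); per the embedded config_data, each healthcheck simply executes /openstack/healthcheck from a bind-mounted healthchecks directory. The same probe the periodic trigger runs can be fired by hand; a sketch:

```python
import subprocess

# `podman healthcheck run` executes the container's configured test
# (here /openstack/healthcheck) and returns 0 when healthy.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```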
Feb 25 13:08:30 compute-0 nova_compute[244014]: 2026-02-25 13:08:30.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:08:31
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr']
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
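The balancer pass is a no-op by design here: mode upmap with a 5% misplaced ceiling, all eleven pools inspected, and zero of a possible ten upmap changes prepared, the expected outcome when every PG is already active+clean. The same state is queryable from the CLI; a sketch wrapping the mgr commands:

```python
import json
import subprocess

def ceph(*args):
    # Thin wrapper over the ceph CLI on this host, returning parsed JSON.
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

print(ceph("balancer", "status"))  # mode, active flag, last optimize plan
print(ceph("pg", "stat"))          # same digest as the pgmap lines above
```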
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:08:32 compute-0 ceph-mon[76335]: pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:32 compute-0 nova_compute[244014]: 2026-02-25 13:08:32.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:34 compute-0 ceph-mon[76335]: pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:35 compute-0 nova_compute[244014]: 2026-02-25 13:08:35.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:36 compute-0 ceph-mon[76335]: pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:37 compute-0 nova_compute[244014]: 2026-02-25 13:08:37.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:38 compute-0 ceph-mon[76335]: pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:40 compute-0 ceph-mon[76335]: pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:40 compute-0 nova_compute[244014]: 2026-02-25 13:08:40.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:42 compute-0 nova_compute[244014]: 2026-02-25 13:08:42.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:42 compute-0 ceph-mon[76335]: pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7142632731533132e-05 of space, bias 1.0, pg target 0.00514278981945994 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006710760464711434 of space, bias 1.0, pg target 0.20132281394134302 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4125800128093941e-06 of space, bias 4.0, pg target 0.001695096015371273 quantized to 16 (current 16)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:08:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
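Each pg_autoscaler line applies the same arithmetic: pg_target = usage_ratio x bias x capacity factor, quantized toward a power of two, with the current pg_num kept unless the ideal is off by the scale-down threshold (a factor of 3 by default). The factor that reproduces every logged target is 300, consistent with the default mon_target_pg_per_osd of 100 times the three OSDs this node carries (ceph_vg0-2); that decomposition is an inference, but the multiplication checks out exactly:

```python
# usage_ratio and bias copied from the pg_autoscaler lines above.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (1.7142632731533132e-05, 1.0),
    "images":             (0.0006710760464711434, 1.0),
    "cephfs.cephfs.meta": (1.4125800128093941e-06, 4.0),
}
FACTOR = 100 * 3  # assumed: mon_target_pg_per_osd (default 100) x 3 OSDs

for name, (ratio, bias) in pools.items():
    # Matches the logged "pg target" values (up to float repr):
    # 0.002155..., 0.005142..., 0.201322..., 0.001695...
    print(name, ratio * bias * FACTOR)
```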
Feb 25 13:08:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.757 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:08:43 compute-0 nova_compute[244014]: 2026-02-25 13:08:43.907 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:08:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:08:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376885638' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.501 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
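To size the shared RBD pool, nova's resource tracker shells out to the ceph CLI as client.openstack; each invocation shows up on the mon as a df dispatch from 192.168.122.100 and costs about 0.6 s here. Reproduced directly from the command logged above:

```python
import json
import subprocess

# Exact command from the nova_compute DEBUG line; the openstack keyring and
# /etc/ceph/ceph.conf are already present on this host.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)["stats"]
# Should agree with the pgmap digest: ~60 GiB total, ~59 GiB avail.
print(stats["total_bytes"], stats["total_avail_bytes"])
```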
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.650 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.987252892926335GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.728 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:08:44 compute-0 nova_compute[244014]: 2026-02-25 13:08:44.747 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:08:44 compute-0 ceph-mon[76335]: pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2376885638' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:08:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:08:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005707079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.354 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.381 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.384 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.384 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
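The inventory nova reports to placement follows the standard capacity rule, schedulable = (total - reserved) x allocation_ratio per resource class, which is why this 8-vCPU host advertises 32 schedulable vCPUs while its 59 GB disk shrinks to about 52. Worked from the inventory data in the log:

```python
# Inventory copied from the nova.scheduler.client.report line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
```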
Feb 25 13:08:45 compute-0 nova_compute[244014]: 2026-02-25 13:08:45.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4005707079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:08:46 compute-0 nova_compute[244014]: 2026-02-25 13:08:46.385 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:46 compute-0 nova_compute[244014]: 2026-02-25 13:08:46.386 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:08:46 compute-0 nova_compute[244014]: 2026-02-25 13:08:46.386 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:08:46 compute-0 nova_compute[244014]: 2026-02-25 13:08:46.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:08:46 compute-0 nova_compute[244014]: 2026-02-25 13:08:46.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:46 compute-0 ceph-mon[76335]: pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:47 compute-0 nova_compute[244014]: 2026-02-25 13:08:47.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:08:47 compute-0 nova_compute[244014]: 2026-02-25 13:08:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:08:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:49 compute-0 ceph-mon[76335]: pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:50 compute-0 ceph-mon[76335]: pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:50 compute-0 nova_compute[244014]: 2026-02-25 13:08:50.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:51 compute-0 nova_compute[244014]: 2026-02-25 13:08:51.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:52 compute-0 ceph-mon[76335]: pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:52 compute-0 nova_compute[244014]: 2026-02-25 13:08:52.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:52 compute-0 nova_compute[244014]: 2026-02-25 13:08:52.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:54 compute-0 ceph-mon[76335]: pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:54 compute-0 nova_compute[244014]: 2026-02-25 13:08:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:54 compute-0 nova_compute[244014]: 2026-02-25 13:08:54.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.047 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.047 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:08:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:55 compute-0 nova_compute[244014]: 2026-02-25 13:08:55.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:56 compute-0 ceph-mon[76335]: pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:57 compute-0 nova_compute[244014]: 2026-02-25 13:08:57.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:08:58 compute-0 ceph-mon[76335]: pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:08:58 compute-0 nova_compute[244014]: 2026-02-25 13:08:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:08:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:08:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 13:08:59 compute-0 nova_compute[244014]: 2026-02-25 13:08:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:00 compute-0 ceph-mon[76335]: pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 13:09:00 compute-0 nova_compute[244014]: 2026-02-25 13:09:00.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:01 compute-0 podman[386655]: 2026-02-25 13:09:01.736044642 +0000 UTC m=+0.067413112 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:09:01 compute-0 podman[386656]: 2026-02-25 13:09:01.778474508 +0000 UTC m=+0.107432970 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 25 13:09:02 compute-0 nova_compute[244014]: 2026-02-25 13:09:02.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:02 compute-0 ceph-mon[76335]: pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 13:09:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 13:09:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:04 compute-0 ceph-mon[76335]: pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 13:09:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 13:09:05 compute-0 nova_compute[244014]: 2026-02-25 13:09:05.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb 25 13:09:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb 25 13:09:06 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb 25 13:09:06 compute-0 ceph-mon[76335]: pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 13:09:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 25 13:09:07 compute-0 ceph-mon[76335]: osdmap e292: 3 total, 3 up, 3 in
Feb 25 13:09:07 compute-0 nova_compute[244014]: 2026-02-25 13:09:07.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:08 compute-0 ceph-mon[76335]: pgmap v2670: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 25 13:09:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1023 B/s wr, 21 op/s
Feb 25 13:09:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb 25 13:09:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb 25 13:09:10 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb 25 13:09:10 compute-0 ceph-mon[76335]: pgmap v2671: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1023 B/s wr, 21 op/s
Feb 25 13:09:10 compute-0 nova_compute[244014]: 2026-02-25 13:09:10.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 25 13:09:11 compute-0 ceph-mon[76335]: osdmap e293: 3 total, 3 up, 3 in
Feb 25 13:09:12 compute-0 nova_compute[244014]: 2026-02-25 13:09:12.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:12 compute-0 ceph-mon[76335]: pgmap v2673: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 25 13:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1405 writes, 6581 keys, 1405 commit groups, 1.0 writes per commit group, ingest: 9.04 MB, 0.02 MB/s
                                           Interval WAL: 1405 writes, 1405 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.4      1.19              0.19        40    0.030       0      0       0.0       0.0
                                             L6      1/0    9.07 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    108.2     91.9      3.60              0.97        39    0.092    243K    21K       0.0       0.0
                                            Sum      1/0    9.07 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     81.4     83.3      4.79              1.15        79    0.061    243K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    110.3    112.5      0.60              0.19        12    0.050     47K   3109       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    108.2     91.9      3.60              0.97        39    0.092    243K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.6      1.18              0.19        39    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.066, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.39 GB write, 0.08 MB/s write, 0.38 GB read, 0.08 MB/s read, 4.8 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 44.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000406 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2810,42.94 MB,14.1264%) FilterBlock(80,680.55 KB,0.218617%) IndexBlock(80,1.11 MB,0.366045%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 13:09:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Feb 25 13:09:13 compute-0 sudo[386701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:09:13 compute-0 sudo[386701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:13 compute-0 sudo[386701]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:13 compute-0 sudo[386726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:09:13 compute-0 sudo[386726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb 25 13:09:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb 25 13:09:13 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb 25 13:09:14 compute-0 sudo[386726]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:09:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:09:14 compute-0 sudo[386782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:09:14 compute-0 sudo[386782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:14 compute-0 sudo[386782]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:14 compute-0 sudo[386807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:09:14 compute-0 sudo[386807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:14 compute-0 podman[386844]: 2026-02-25 13:09:14.871512065 +0000 UTC m=+0.036046497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:14 compute-0 ceph-mon[76335]: pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Feb 25 13:09:14 compute-0 ceph-mon[76335]: osdmap e294: 3 total, 3 up, 3 in
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:09:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:09:14 compute-0 podman[386844]: 2026-02-25 13:09:14.97097173 +0000 UTC m=+0.135506102 container create 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:09:15 compute-0 systemd[1]: Started libpod-conmon-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope.
Feb 25 13:09:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Feb 25 13:09:15 compute-0 podman[386844]: 2026-02-25 13:09:15.15114102 +0000 UTC m=+0.315675462 container init 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:09:15 compute-0 podman[386844]: 2026-02-25 13:09:15.159585298 +0000 UTC m=+0.324119670 container start 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:09:15 compute-0 stupefied_margulis[386860]: 167 167
Feb 25 13:09:15 compute-0 systemd[1]: libpod-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope: Deactivated successfully.
Feb 25 13:09:15 compute-0 conmon[386860]: conmon 10dd8bdc40b78cc2708c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope/container/memory.events
Feb 25 13:09:15 compute-0 podman[386844]: 2026-02-25 13:09:15.189482481 +0000 UTC m=+0.354016844 container attach 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:09:15 compute-0 podman[386844]: 2026-02-25 13:09:15.190160201 +0000 UTC m=+0.354694533 container died 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8a726ea5ff8570b109c9ddd517f70cfd248193bdd182ece47d0f01e369281e-merged.mount: Deactivated successfully.
Feb 25 13:09:15 compute-0 podman[386844]: 2026-02-25 13:09:15.668594471 +0000 UTC m=+0.833128853 container remove 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:09:15 compute-0 systemd[1]: libpod-conmon-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope: Deactivated successfully.
Feb 25 13:09:15 compute-0 nova_compute[244014]: 2026-02-25 13:09:15.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:15 compute-0 podman[386886]: 2026-02-25 13:09:15.931038162 +0000 UTC m=+0.108902392 container create 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:09:15 compute-0 podman[386886]: 2026-02-25 13:09:15.854624327 +0000 UTC m=+0.032488607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:16 compute-0 systemd[1]: Started libpod-conmon-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope.
Feb 25 13:09:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:16 compute-0 podman[386886]: 2026-02-25 13:09:16.187795212 +0000 UTC m=+0.365659472 container init 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:09:16 compute-0 podman[386886]: 2026-02-25 13:09:16.198103173 +0000 UTC m=+0.375967383 container start 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:09:16 compute-0 ceph-mon[76335]: pgmap v2676: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Feb 25 13:09:16 compute-0 podman[386886]: 2026-02-25 13:09:16.241928749 +0000 UTC m=+0.419792979 container attach 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:09:16 compute-0 determined_taussig[386902]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:09:16 compute-0 determined_taussig[386902]: --> All data devices are unavailable
Feb 25 13:09:16 compute-0 systemd[1]: libpod-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope: Deactivated successfully.
Feb 25 13:09:16 compute-0 conmon[386902]: conmon 786defd59f8137ae66c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope/container/memory.events
Feb 25 13:09:16 compute-0 podman[386886]: 2026-02-25 13:09:16.808776493 +0000 UTC m=+0.986640733 container died 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020-merged.mount: Deactivated successfully.
Feb 25 13:09:17 compute-0 podman[386886]: 2026-02-25 13:09:17.086229897 +0000 UTC m=+1.264094137 container remove 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:09:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s
Feb 25 13:09:17 compute-0 sudo[386807]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:17 compute-0 systemd[1]: libpod-conmon-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope: Deactivated successfully.
Feb 25 13:09:17 compute-0 sudo[386935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:09:17 compute-0 sudo[386935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:17 compute-0 sudo[386935]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:17 compute-0 sudo[386960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:09:17 compute-0 sudo[386960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:17 compute-0 nova_compute[244014]: 2026-02-25 13:09:17.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.649246314 +0000 UTC m=+0.114388557 container create 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.556678873 +0000 UTC m=+0.021821096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:17 compute-0 systemd[1]: Started libpod-conmon-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope.
Feb 25 13:09:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.817999113 +0000 UTC m=+0.283141416 container init 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.825897555 +0000 UTC m=+0.291039788 container start 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:09:17 compute-0 admiring_cori[387013]: 167 167
Feb 25 13:09:17 compute-0 systemd[1]: libpod-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope: Deactivated successfully.
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.880077233 +0000 UTC m=+0.345219476 container attach 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:09:17 compute-0 podman[386997]: 2026-02-25 13:09:17.880565437 +0000 UTC m=+0.345707680 container died 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b84d5a81af260a32f16401d3c9a3b3f3bdf2c7ce9c1c277e591bb1003092acb-merged.mount: Deactivated successfully.
Feb 25 13:09:18 compute-0 podman[386997]: 2026-02-25 13:09:18.177748237 +0000 UTC m=+0.642890460 container remove 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:09:18 compute-0 systemd[1]: libpod-conmon-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope: Deactivated successfully.
Feb 25 13:09:18 compute-0 ceph-mon[76335]: pgmap v2677: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.381459231 +0000 UTC m=+0.086295654 container create 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.3204159 +0000 UTC m=+0.025252383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:18 compute-0 systemd[1]: Started libpod-conmon-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope.
Feb 25 13:09:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.580327959 +0000 UTC m=+0.285164352 container init 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.588985353 +0000 UTC m=+0.293821736 container start 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.621658224 +0000 UTC m=+0.326494647 container attach 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]: {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     "0": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "devices": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "/dev/loop3"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             ],
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_name": "ceph_lv0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_size": "21470642176",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "name": "ceph_lv0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "tags": {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_name": "ceph",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.crush_device_class": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.encrypted": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.objectstore": "bluestore",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_id": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.vdo": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.with_tpm": "0"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             },
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "vg_name": "ceph_vg0"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         }
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     ],
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     "1": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "devices": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "/dev/loop4"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             ],
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_name": "ceph_lv1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_size": "21470642176",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "name": "ceph_lv1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "tags": {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_name": "ceph",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.crush_device_class": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.encrypted": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.objectstore": "bluestore",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_id": "1",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.vdo": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.with_tpm": "0"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             },
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "vg_name": "ceph_vg1"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         }
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     ],
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     "2": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "devices": [
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "/dev/loop5"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             ],
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_name": "ceph_lv2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_size": "21470642176",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "name": "ceph_lv2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "tags": {
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.cluster_name": "ceph",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.crush_device_class": "",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.encrypted": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.objectstore": "bluestore",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osd_id": "2",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.vdo": "0",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:                 "ceph.with_tpm": "0"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             },
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "type": "block",
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:             "vg_name": "ceph_vg2"
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:         }
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]:     ]
Feb 25 13:09:18 compute-0 crazy_meninsky[387057]: }
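The block above is the stdout of a one-shot cephadm container running `ceph-volume lvm list --format json`: a map from OSD id to its logical volumes, with the Ceph metadata present both as the flat lv_tags string and as the parsed tags object. A minimal sketch of how such output could be summarized, assuming it has been saved to lvm_list.json (a hypothetical filename; in the log it only ever exists on the container's stdout):

#!/usr/bin/env python3
# Sketch: summarize `ceph-volume lvm list --format json` output like the block
# logged above. Assumes the JSON was saved to lvm_list.json (hypothetical name).
import json

with open("lvm_list.json") as f:
    osds = json.load(f)  # {"0": [ {lv dict}, ... ], "1": [...], ...}

total_bytes = 0
for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        size = int(lv["lv_size"])
        total_bytes += size
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"({size / 2**30:.1f} GiB, osd_fsid={lv['tags']['ceph.osd_fsid']})")

# Three LVs of 21470642176 bytes come to ~60 GiB, consistent with the
# "59 GiB / 60 GiB avail" figures in the pgmap lines elsewhere in this log.
print(f"total: {total_bytes / 2**30:.1f} GiB")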
Feb 25 13:09:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:18 compute-0 systemd[1]: libpod-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope: Deactivated successfully.
Feb 25 13:09:18 compute-0 podman[387040]: 2026-02-25 13:09:18.936724549 +0000 UTC m=+0.641560952 container died 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b-merged.mount: Deactivated successfully.
Feb 25 13:09:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Feb 25 13:09:19 compute-0 podman[387040]: 2026-02-25 13:09:19.267461745 +0000 UTC m=+0.972298168 container remove 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:09:19 compute-0 systemd[1]: libpod-conmon-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope: Deactivated successfully.
Feb 25 13:09:19 compute-0 sudo[386960]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:19 compute-0 sudo[387081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:09:19 compute-0 sudo[387081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:19 compute-0 sudo[387081]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:19 compute-0 sudo[387106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:09:19 compute-0 sudo[387106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:19 compute-0 podman[387143]: 2026-02-25 13:09:19.81697888 +0000 UTC m=+0.097726466 container create 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:09:19 compute-0 podman[387143]: 2026-02-25 13:09:19.742081148 +0000 UTC m=+0.022828724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:19 compute-0 systemd[1]: Started libpod-conmon-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope.
Feb 25 13:09:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:19 compute-0 podman[387143]: 2026-02-25 13:09:19.992557891 +0000 UTC m=+0.273305537 container init 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:09:20 compute-0 podman[387143]: 2026-02-25 13:09:20.000274109 +0000 UTC m=+0.281021715 container start 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:09:20 compute-0 zen_heisenberg[387159]: 167 167
Feb 25 13:09:20 compute-0 systemd[1]: libpod-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope: Deactivated successfully.
Feb 25 13:09:20 compute-0 podman[387143]: 2026-02-25 13:09:20.026508699 +0000 UTC m=+0.307256285 container attach 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:09:20 compute-0 podman[387143]: 2026-02-25 13:09:20.027235249 +0000 UTC m=+0.307982825 container died 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-71868a684af994b3d13042458fd2fb760b5af80349af26f4c02ae581a1f018ec-merged.mount: Deactivated successfully.
Feb 25 13:09:20 compute-0 podman[387143]: 2026-02-25 13:09:20.268432631 +0000 UTC m=+0.549180237 container remove 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:09:20 compute-0 systemd[1]: libpod-conmon-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope: Deactivated successfully.
Feb 25 13:09:20 compute-0 ceph-mon[76335]: pgmap v2678: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Feb 25 13:09:20 compute-0 podman[387187]: 2026-02-25 13:09:20.504778936 +0000 UTC m=+0.103295244 container create 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:09:20 compute-0 podman[387187]: 2026-02-25 13:09:20.426318043 +0000 UTC m=+0.024834391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:09:20 compute-0 systemd[1]: Started libpod-conmon-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope.
Feb 25 13:09:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:09:20 compute-0 podman[387187]: 2026-02-25 13:09:20.705552487 +0000 UTC m=+0.304068785 container init 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:09:20 compute-0 podman[387187]: 2026-02-25 13:09:20.714188421 +0000 UTC m=+0.312704729 container start 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:09:20 compute-0 podman[387187]: 2026-02-25 13:09:20.753048427 +0000 UTC m=+0.351564795 container attach 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:09:20 compute-0 nova_compute[244014]: 2026-02-25 13:09:20.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Feb 25 13:09:21 compute-0 lvm[387281]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:09:21 compute-0 lvm[387281]: VG ceph_vg0 finished
Feb 25 13:09:21 compute-0 lvm[387282]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:09:21 compute-0 lvm[387282]: VG ceph_vg1 finished
Feb 25 13:09:21 compute-0 lvm[387284]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:09:21 compute-0 lvm[387284]: VG ceph_vg2 finished
Feb 25 13:09:21 compute-0 exciting_bardeen[387203]: {}
Feb 25 13:09:21 compute-0 systemd[1]: libpod-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Deactivated successfully.
Feb 25 13:09:21 compute-0 systemd[1]: libpod-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Consumed 1.103s CPU time.
Feb 25 13:09:21 compute-0 podman[387187]: 2026-02-25 13:09:21.473185494 +0000 UTC m=+1.071701792 container died 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab-merged.mount: Deactivated successfully.
Feb 25 13:09:21 compute-0 podman[387187]: 2026-02-25 13:09:21.666611248 +0000 UTC m=+1.265127546 container remove 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:09:21 compute-0 systemd[1]: libpod-conmon-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Deactivated successfully.
Feb 25 13:09:21 compute-0 sudo[387106]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:09:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:09:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:21 compute-0 sudo[387301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:09:21 compute-0 sudo[387301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:09:21 compute-0 sudo[387301]: pam_unix(sudo:session): session closed for user root
Feb 25 13:09:22 compute-0 nova_compute[244014]: 2026-02-25 13:09:22.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:22 compute-0 ceph-mon[76335]: pgmap v2679: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Feb 25 13:09:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:09:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:09:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:24 compute-0 ceph-mon[76335]: pgmap v2680: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:09:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 91 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:09:25 compute-0 nova_compute[244014]: 2026-02-25 13:09:25.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:27 compute-0 ceph-mon[76335]: pgmap v2681: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 91 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:09:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 MiB/s wr, 19 op/s
Feb 25 13:09:27 compute-0 nova_compute[244014]: 2026-02-25 13:09:27.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb 25 13:09:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb 25 13:09:28 compute-0 ceph-mon[76335]: pgmap v2682: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 MiB/s wr, 19 op/s
Feb 25 13:09:28 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb 25 13:09:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 13:09:29 compute-0 ceph-mon[76335]: osdmap e295: 3 total, 3 up, 3 in
Feb 25 13:09:30 compute-0 ceph-mon[76335]: pgmap v2684: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 13:09:30 compute-0 nova_compute[244014]: 2026-02-25 13:09:30.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:09:31
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:09:32 compute-0 nova_compute[244014]: 2026-02-25 13:09:32.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:32 compute-0 ceph-mon[76335]: pgmap v2685: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 13:09:32 compute-0 podman[387326]: 2026-02-25 13:09:32.738472807 +0000 UTC m=+0.071188889 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:09:32 compute-0 podman[387327]: 2026-02-25 13:09:32.772108115 +0000 UTC m=+0.104926690 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 25 13:09:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 13:09:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:34 compute-0 ceph-mon[76335]: pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 13:09:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 13:09:35 compute-0 nova_compute[244014]: 2026-02-25 13:09:35.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:36 compute-0 ceph-mon[76335]: pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 13:09:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 4.0 MiB/s wr, 10 op/s
Feb 25 13:09:37 compute-0 nova_compute[244014]: 2026-02-25 13:09:37.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:38 compute-0 sshd-session[387369]: Received disconnect from 45.148.10.141 port 52100:11:  [preauth]
Feb 25 13:09:38 compute-0 sshd-session[387369]: Disconnected from authenticating user root 45.148.10.141 port 52100 [preauth]
Feb 25 13:09:38 compute-0 ceph-mon[76335]: pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 4.0 MiB/s wr, 10 op/s
Feb 25 13:09:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.9 MiB/s wr, 9 op/s
Feb 25 13:09:40 compute-0 nova_compute[244014]: 2026-02-25 13:09:40.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:41 compute-0 ceph-mon[76335]: pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.9 MiB/s wr, 9 op/s
Feb 25 13:09:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 13:09:42 compute-0 nova_compute[244014]: 2026-02-25 13:09:42.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.723792564281303e-05 of space, bias 1.0, pg target 0.005171377692843909 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018289410571936495 of space, bias 1.0, pg target 0.5486823171580949 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4127973638615152e-06 of space, bias 4.0, pg target 0.0016953568366338183 quantized to 16 (current 16)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:09:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
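Each pg_autoscaler line above follows the same arithmetic: the logged pg target equals usage ratio × bias × 300, where 300 is presumably mon_target_pg_per_osd (default 100) times the 3 in/up OSDs reported in osdmap e295. The "quantized to" figure then sticks at the pool's current power-of-two pg_num, presumably because these tiny targets fall below the pool minimum and the autoscaler's change threshold. A sketch reproducing the logged figures under that assumption:

# Sketch: reproduce the pg_autoscaler "pg target" values above. Assumes the
# multiplier is mon_target_pg_per_osd (default 100) * 3 OSDs = 300; the logged
# numbers are consistent with that.
POOL_STATS = {
    # pool: (usage_ratio, bias) as logged by [pg_autoscaler INFO root]
    ".mgr":               (7.185749983720779e-06, 1.0),
    "images":             (0.0018289410571936495, 1.0),
    "cephfs.cephfs.meta": (1.4127973638615152e-06, 4.0),
}
TARGET_PG_TOTAL = 100 * 3  # mon_target_pg_per_osd * OSD count (assumption)

for pool, (ratio, bias) in POOL_STATS.items():
    pg_target = ratio * bias * TARGET_PG_TOTAL
    print(f"{pool}: pg target {pg_target}")
# -> 0.0021557249951162337, 0.5486823171580949, 0.0016953568366338183,
#    matching the logged values digit for digit.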
Feb 25 13:09:43 compute-0 ceph-mon[76335]: pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 13:09:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.980 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:09:43 compute-0 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:09:44 compute-0 ceph-mon[76335]: pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 13:09:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:09:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861744831' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.525 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.641 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3559MB free_disk=59.98724717646837GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.714 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:09:44 compute-0 nova_compute[244014]: 2026-02-25 13:09:44.733 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:09:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:09:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613871194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.285 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.307 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.308 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.309 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
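The inventory dict logged at 13:09:45.307 is what the resource tracker reports to Placement; per resource class, schedulable capacity works out to (total - reserved) × allocation_ratio (an assumption based on the standard Placement inventory model, not something the log itself states). A sketch of that arithmetic with the logged values:

# Sketch: schedulable capacity implied by the inventory logged at 13:09:45.307.
# Assumes the Placement-style formula capacity = (total - reserved) * ratio.
INVENTORY = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in INVENTORY.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2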
Feb 25 13:09:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/861744831' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:09:45 compute-0 nova_compute[244014]: 2026-02-25 13:09:45.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:46 compute-0 ceph-mon[76335]: pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/613871194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:09:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.310 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.310 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.311 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.326 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:47 compute-0 nova_compute[244014]: 2026-02-25 13:09:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:48 compute-0 ceph-mon[76335]: pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:09:48 compute-0 nova_compute[244014]: 2026-02-25 13:09:48.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:50 compute-0 ceph-mon[76335]: pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:50 compute-0 nova_compute[244014]: 2026-02-25 13:09:50.854 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:52 compute-0 nova_compute[244014]: 2026-02-25 13:09:52.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:52 compute-0 ceph-mon[76335]: pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:52 compute-0 nova_compute[244014]: 2026-02-25 13:09:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:54 compute-0 nova_compute[244014]: 2026-02-25 13:09:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:54 compute-0 ceph-mon[76335]: pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.049 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.049 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:09:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:55 compute-0 nova_compute[244014]: 2026-02-25 13:09:55.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:56 compute-0 nova_compute[244014]: 2026-02-25 13:09:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:09:56 compute-0 nova_compute[244014]: 2026-02-25 13:09:56.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:09:57 compute-0 ceph-mon[76335]: pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:57 compute-0 nova_compute[244014]: 2026-02-25 13:09:57.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:09:58 compute-0 ceph-mon[76335]: pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:09:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:09:59 compute-0 nova_compute[244014]: 2026-02-25 13:09:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:00 compute-0 ceph-mon[76335]: pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:00 compute-0 nova_compute[244014]: 2026-02-25 13:10:00.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:00 compute-0 nova_compute[244014]: 2026-02-25 13:10:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:02 compute-0 ceph-mon[76335]: pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:02 compute-0 nova_compute[244014]: 2026-02-25 13:10:02.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:03 compute-0 podman[387415]: 2026-02-25 13:10:03.731638959 +0000 UTC m=+0.075374186 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 13:10:03 compute-0 podman[387416]: 2026-02-25 13:10:03.764064884 +0000 UTC m=+0.102734839 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 13:10:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:04 compute-0 ceph-mon[76335]: pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:05 compute-0 nova_compute[244014]: 2026-02-25 13:10:05.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:06 compute-0 ceph-mon[76335]: pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:07 compute-0 nova_compute[244014]: 2026-02-25 13:10:07.560 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:08 compute-0 ceph-mon[76335]: pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:10 compute-0 ceph-mon[76335]: pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:10 compute-0 nova_compute[244014]: 2026-02-25 13:10:10.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:12 compute-0 nova_compute[244014]: 2026-02-25 13:10:12.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:12 compute-0 ceph-mon[76335]: pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:13 compute-0 nova_compute[244014]: 2026-02-25 13:10:13.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:14 compute-0 ceph-mon[76335]: pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2661 writes, 10K keys, 2661 commit groups, 1.0 writes per commit group, ingest: 10.48 MB, 0.02 MB/s
                                           Interval WAL: 2661 writes, 1045 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:10:15 compute-0 nova_compute[244014]: 2026-02-25 13:10:15.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:16 compute-0 ceph-mon[76335]: pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:17 compute-0 nova_compute[244014]: 2026-02-25 13:10:17.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:18 compute-0 ceph-mon[76335]: pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 48K writes, 193K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2895 writes, 12K keys, 2895 commit groups, 1.0 writes per commit group, ingest: 11.47 MB, 0.02 MB/s
                                           Interval WAL: 2895 writes, 1135 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:10:20 compute-0 nova_compute[244014]: 2026-02-25 13:10:20.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:20 compute-0 ceph-mon[76335]: pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:21 compute-0 sudo[387460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:10:21 compute-0 sudo[387460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:21 compute-0 sudo[387460]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:22 compute-0 sudo[387485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:10:22 compute-0 sudo[387485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:22 compute-0 sudo[387485]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:10:22 compute-0 nova_compute[244014]: 2026-02-25 13:10:22.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:10:22 compute-0 sudo[387542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:10:22 compute-0 sudo[387542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:22 compute-0 sudo[387542]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:22 compute-0 sudo[387567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:10:22 compute-0 sudo[387567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:22 compute-0 ceph-mon[76335]: pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.062470855 +0000 UTC m=+0.061965659 container create eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.032814558 +0000 UTC m=+0.032309402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:23 compute-0 systemd[1]: Started libpod-conmon-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope.
Feb 25 13:10:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.209368937 +0000 UTC m=+0.208863791 container init eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.218895526 +0000 UTC m=+0.218390330 container start eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:10:23 compute-0 reverent_burnell[387622]: 167 167
Feb 25 13:10:23 compute-0 systemd[1]: libpod-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope: Deactivated successfully.
Feb 25 13:10:23 compute-0 conmon[387622]: conmon eba78ca3b24bb983f539 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope/container/memory.events
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.241986187 +0000 UTC m=+0.241480991 container attach eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.242876032 +0000 UTC m=+0.242370836 container died eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7f93c1970064bb46e0de382d5ef5080920829432fb418ae88df05c28a7371d0-merged.mount: Deactivated successfully.
Feb 25 13:10:23 compute-0 podman[387606]: 2026-02-25 13:10:23.482163708 +0000 UTC m=+0.481658502 container remove eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:10:23 compute-0 systemd[1]: libpod-conmon-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope: Deactivated successfully.
Feb 25 13:10:23 compute-0 podman[387647]: 2026-02-25 13:10:23.637147389 +0000 UTC m=+0.072870076 container create df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:10:23 compute-0 podman[387647]: 2026-02-25 13:10:23.585038659 +0000 UTC m=+0.020761346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:23 compute-0 systemd[1]: Started libpod-conmon-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope.
Feb 25 13:10:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:23 compute-0 podman[387647]: 2026-02-25 13:10:23.809760966 +0000 UTC m=+0.245483693 container init df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:10:23 compute-0 podman[387647]: 2026-02-25 13:10:23.82016586 +0000 UTC m=+0.255888507 container start df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:10:23 compute-0 podman[387647]: 2026-02-25 13:10:23.851758891 +0000 UTC m=+0.287481598 container attach df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:10:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:24 compute-0 elated_kirch[387663]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:10:24 compute-0 elated_kirch[387663]: --> All data devices are unavailable
Feb 25 13:10:24 compute-0 systemd[1]: libpod-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope: Deactivated successfully.
Feb 25 13:10:24 compute-0 podman[387647]: 2026-02-25 13:10:24.269336136 +0000 UTC m=+0.705058793 container died df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17-merged.mount: Deactivated successfully.
Feb 25 13:10:24 compute-0 podman[387647]: 2026-02-25 13:10:24.5088649 +0000 UTC m=+0.944587587 container remove df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 13:10:24 compute-0 systemd[1]: libpod-conmon-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope: Deactivated successfully.
Feb 25 13:10:24 compute-0 sudo[387567]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:24 compute-0 sudo[387697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:10:24 compute-0 sudo[387697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:24 compute-0 sudo[387697]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:24 compute-0 sudo[387722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:10:24 compute-0 sudo[387722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.6 total, 600.0 interval
                                           Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.78 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1807 writes, 6626 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 5.30 MB, 0.01 MB/s
                                           Interval WAL: 1807 writes, 773 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:10:25 compute-0 ceph-mon[76335]: pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.10496699 +0000 UTC m=+0.100488335 container create aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.044043872 +0000 UTC m=+0.039565277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:25 compute-0 systemd[1]: Started libpod-conmon-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope.
Feb 25 13:10:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.348764035 +0000 UTC m=+0.344285390 container init aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.358454318 +0000 UTC m=+0.353975673 container start aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:10:25 compute-0 awesome_lumiere[387777]: 167 167
Feb 25 13:10:25 compute-0 systemd[1]: libpod-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope: Deactivated successfully.
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.483933546 +0000 UTC m=+0.479454911 container attach aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.485501911 +0000 UTC m=+0.481023326 container died aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a9b4590c0e515b068d37467e8532f0afbf3140fcf072cbf8c9da1e363bc5fe-merged.mount: Deactivated successfully.
Feb 25 13:10:25 compute-0 nova_compute[244014]: 2026-02-25 13:10:25.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:25 compute-0 podman[387761]: 2026-02-25 13:10:25.902988634 +0000 UTC m=+0.898509989 container remove aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:10:25 compute-0 systemd[1]: libpod-conmon-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope: Deactivated successfully.
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.062579234 +0000 UTC m=+0.029278106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.191351716 +0000 UTC m=+0.158050558 container create 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:10:26 compute-0 ceph-mon[76335]: pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:26 compute-0 systemd[1]: Started libpod-conmon-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope.
Feb 25 13:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.501516222 +0000 UTC m=+0.468215034 container init 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.507991065 +0000 UTC m=+0.474689877 container start 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.64541176 +0000 UTC m=+0.612110612 container attach 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:10:26 compute-0 jovial_boyd[387820]: {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     "0": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "devices": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "/dev/loop3"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             ],
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_name": "ceph_lv0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_size": "21470642176",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "name": "ceph_lv0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "tags": {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_name": "ceph",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.crush_device_class": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.encrypted": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.objectstore": "bluestore",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_id": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.vdo": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.with_tpm": "0"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             },
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "vg_name": "ceph_vg0"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         }
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     ],
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     "1": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "devices": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "/dev/loop4"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             ],
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_name": "ceph_lv1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_size": "21470642176",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "name": "ceph_lv1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "tags": {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_name": "ceph",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.crush_device_class": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.encrypted": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.objectstore": "bluestore",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_id": "1",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.vdo": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.with_tpm": "0"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             },
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "vg_name": "ceph_vg1"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         }
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     ],
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     "2": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "devices": [
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "/dev/loop5"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             ],
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_name": "ceph_lv2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_size": "21470642176",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "name": "ceph_lv2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "tags": {
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.cluster_name": "ceph",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.crush_device_class": "",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.encrypted": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.objectstore": "bluestore",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osd_id": "2",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.vdo": "0",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:                 "ceph.with_tpm": "0"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             },
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "type": "block",
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:             "vg_name": "ceph_vg2"
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:         }
Feb 25 13:10:26 compute-0 jovial_boyd[387820]:     ]
Feb 25 13:10:26 compute-0 jovial_boyd[387820]: }
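jovial_boyd's JSON has the shape of ceph-volume lvm list --format json output: a map from OSD id to the logical volume(s) backing it, with the ceph.* tags given both as the raw lv_tags string and pre-parsed under tags. A minimal sketch, assuming the JSON above has been captured to a hypothetical lvm_list.json, that flattens it to one line per OSD:

    import json

    # lvm_list.json is assumed to hold the JSON printed by the container above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} {tags['ceph.objectstore']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=d19afe3c-7923-4776-bcc2-88886150b441 bluestore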
Feb 25 13:10:26 compute-0 systemd[1]: libpod-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope: Deactivated successfully.
Feb 25 13:10:26 compute-0 podman[387803]: 2026-02-25 13:10:26.793578488 +0000 UTC m=+0.760277280 container died 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4-merged.mount: Deactivated successfully.
Feb 25 13:10:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:10:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:27 compute-0 podman[387803]: 2026-02-25 13:10:27.302850998 +0000 UTC m=+1.269549830 container remove 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:10:27 compute-0 systemd[1]: libpod-conmon-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope: Deactivated successfully.
Feb 25 13:10:27 compute-0 sudo[387722]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:27 compute-0 sudo[387843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:10:27 compute-0 sudo[387843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:27 compute-0 sudo[387843]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:27 compute-0 sudo[387868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:10:27 compute-0 sudo[387868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
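This sudo line shows cephadm's wrapper pattern: the versioned cephadm binary copied under /var/lib/ceph/<fsid>/ runs ceph-volume inside a throwaway container, and everything after the bare -- is passed through to ceph-volume itself. The same call from Python, arguments copied verbatim from the line above (it needs root, as here; the result is the {} printed by beautiful_morse a few lines below, i.e. no raw-mode OSDs, consistent with the LVM-backed OSDs listed earlier):

    import json
    import subprocess

    # Verbatim re-run of the command logged above; args after "--" go to
    # ceph-volume inside the container. Must run as root.
    cmd = [
        "/bin/python3",
        "/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "--image", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        "--timeout", "895",
        "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
        "--", "raw", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    print(json.loads(out))  # {} on this host: no raw (non-LVM) OSDs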
Feb 25 13:10:27 compute-0 nova_compute[244014]: 2026-02-25 13:10:27.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:27 compute-0 podman[387905]: 2026-02-25 13:10:27.93907956 +0000 UTC m=+0.124070740 container create 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:10:27 compute-0 podman[387905]: 2026-02-25 13:10:27.848990059 +0000 UTC m=+0.033981279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:28 compute-0 systemd[1]: Started libpod-conmon-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope.
Feb 25 13:10:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:28 compute-0 podman[387905]: 2026-02-25 13:10:28.290848269 +0000 UTC m=+0.475839459 container init 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:10:28 compute-0 podman[387905]: 2026-02-25 13:10:28.300130751 +0000 UTC m=+0.485121921 container start 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:10:28 compute-0 infallible_beaver[387921]: 167 167
Feb 25 13:10:28 compute-0 systemd[1]: libpod-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope: Deactivated successfully.
Feb 25 13:10:28 compute-0 ceph-mon[76335]: pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:28 compute-0 podman[387905]: 2026-02-25 13:10:28.424551899 +0000 UTC m=+0.609543119 container attach 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:10:28 compute-0 podman[387905]: 2026-02-25 13:10:28.425051693 +0000 UTC m=+0.610042863 container died 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd77ceea6eb450576a33ffa1d08214b58bfc7849c22f00f96711566b6c9d246-merged.mount: Deactivated successfully.
Feb 25 13:10:28 compute-0 podman[387905]: 2026-02-25 13:10:28.726025 +0000 UTC m=+0.911016150 container remove 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:10:28 compute-0 systemd[1]: libpod-conmon-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope: Deactivated successfully.
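The "167 167" printed by infallible_beaver above looks like cephadm probing the uid/gid of the ceph user inside the image (167:167 in the upstream containers) before writing files on the host; that interpretation, and /var/lib/ceph as the path being stat'ed, are assumptions. An equivalent probe:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Print the numeric owner uid/gid of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected: 167 167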
Feb 25 13:10:28 compute-0 podman[387946]: 2026-02-25 13:10:28.95687029 +0000 UTC m=+0.092094198 container create d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:10:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:28 compute-0 podman[387946]: 2026-02-25 13:10:28.890395716 +0000 UTC m=+0.025619694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:10:29 compute-0 systemd[1]: Started libpod-conmon-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope.
Feb 25 13:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:10:29 compute-0 podman[387946]: 2026-02-25 13:10:29.122237803 +0000 UTC m=+0.257461731 container init d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:10:29 compute-0 podman[387946]: 2026-02-25 13:10:29.12922648 +0000 UTC m=+0.264450368 container start d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:10:29 compute-0 podman[387946]: 2026-02-25 13:10:29.159208106 +0000 UTC m=+0.294432004 container attach d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:10:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:29 compute-0 lvm[388041]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:10:29 compute-0 lvm[388041]: VG ceph_vg1 finished
Feb 25 13:10:29 compute-0 lvm[388040]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:10:29 compute-0 lvm[388040]: VG ceph_vg0 finished
Feb 25 13:10:29 compute-0 lvm[388043]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:10:29 compute-0 lvm[388043]: VG ceph_vg2 finished
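The three lvm[…] pairs are event-driven autoactivation: udev reported /dev/loop3-5, pvscan marked each PV online, and each VG was activated as soon as all of its PVs were present ("VG … finished"). The resulting PV-to-VG mapping can be confirmed with pvs; a sketch assuming an lvm2 new enough for --reportformat json:

    import json
    import subprocess

    # JSON report of which PV backs which VG.
    out = subprocess.run(
        ["pvs", "-o", "pv_name,vg_name", "--reportformat", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in json.loads(out)["report"][0]["pv"]:
        print(row["pv_name"], "->", row["vg_name"])
    # /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1, /dev/loop5 -> ceph_vg2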
Feb 25 13:10:29 compute-0 beautiful_morse[387962]: {}
Feb 25 13:10:30 compute-0 systemd[1]: libpod-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Deactivated successfully.
Feb 25 13:10:30 compute-0 systemd[1]: libpod-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Consumed 1.308s CPU time.
Feb 25 13:10:30 compute-0 podman[387946]: 2026-02-25 13:10:30.000968863 +0000 UTC m=+1.136192771 container died d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152-merged.mount: Deactivated successfully.
Feb 25 13:10:30 compute-0 podman[387946]: 2026-02-25 13:10:30.290664332 +0000 UTC m=+1.425888250 container remove d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:10:30 compute-0 systemd[1]: libpod-conmon-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Deactivated successfully.
Feb 25 13:10:30 compute-0 sudo[387868]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:10:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:10:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:10:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:10:30 compute-0 ceph-mon[76335]: pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
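The two config-key set commands are the cephadm mgr module persisting the device inventory it just gathered into the monitors' key-value store, keyed per host. The stored blob can be read back with the standard config-key interface (the value is JSON in cephadm's case, to the best of my knowledge; key name copied from the audit line above):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2)[:400])  # cached device inventory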
Feb 25 13:10:30 compute-0 sudo[388058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:10:30 compute-0 sudo[388058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:10:30 compute-0 sudo[388058]: pam_unix(sudo:session): session closed for user root
Feb 25 13:10:30 compute-0 nova_compute[244014]: 2026-02-25 13:10:30.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:10:31
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta']
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
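A no-op balancer pass: mode upmap, at most 5% of PGs allowed to become misplaced per round, and 0 of a budget of 10 upmap changes prepared, which is expected with all 305 PGs active+clean on an even layout. Current balancer state can be checked directly (both subcommands are standard ceph CLI):

    import subprocess

    # Balancer mode/activity, plus a numeric score for how balanced the
    # current PG distribution is (lower is better for `eval`).
    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        print(subprocess.run(cmd, capture_output=True, text=True,
                             check=True).stdout)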
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
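On each refresh the rbd_support module reloads mirror-snapshot and trash-purge schedules for every RBD pool it manages (vms, volumes, backups, images); the near-duplicate lines are the two handlers walking the same pool list, and the empty start_after= fields suggest no schedules are configured. They can be listed per pool with the stock rbd CLI:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for sub in (["mirror", "snapshot", "schedule", "ls"],
                    ["trash", "purge", "schedule", "ls"]):
            out = subprocess.run(["rbd", *sub, "--pool", pool],
                                 capture_output=True, text=True).stdout.strip()
            print(pool, " ".join(sub), "->", out or "(none)")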
Feb 25 13:10:32 compute-0 nova_compute[244014]: 2026-02-25 13:10:32.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:32 compute-0 ceph-mon[76335]: pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:34 compute-0 ceph-mon[76335]: pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:34 compute-0 podman[388083]: 2026-02-25 13:10:34.734071881 +0000 UTC m=+0.077164827 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:10:34 compute-0 podman[388084]: 2026-02-25 13:10:34.791902592 +0000 UTC m=+0.129565515 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
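These two entries are podman's healthcheck timer reporting health_status=healthy for ovn_metadata_agent and ovn_controller; per their config_data, the test is the /openstack/healthcheck script mounted into each container. The same status is available on demand through podman inspect's Go template:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(name, "->", status)  # expected: healthy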
Feb 25 13:10:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:35 compute-0 nova_compute[244014]: 2026-02-25 13:10:35.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:36 compute-0 ceph-mon[76335]: pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:37 compute-0 nova_compute[244014]: 2026-02-25 13:10:37.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:38 compute-0 ceph-mon[76335]: pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:40 compute-0 nova_compute[244014]: 2026-02-25 13:10:40.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:40 compute-0 ceph-mon[76335]: pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:42 compute-0 nova_compute[244014]: 2026-02-25 13:10:42.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.723792564281303e-05 of space, bias 1.0, pg target 0.005171377692843909 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018289410571936495 of space, bias 1.0, pg target 0.5486823171580949 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4127973638615152e-06 of space, bias 4.0, pg target 0.0016953568366338183 quantized to 16 (current 16)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:10:42 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
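The pg_autoscaler figures above are self-consistent: pg target = capacity_ratio × bias × (mon_target_pg_per_osd × OSD count), here 100 × 3 = 300, rounded up to a power of two and floored at the pool's pg_num_min (32 by default, 16 for the cephfs metadata pool, 1 for .mgr); that is why every nearly-empty pool quantizes straight back to its current pg_num. A sketch reproducing the printed numbers under those assumed constants:

    # Assumes mon_target_pg_per_osd=100, 3 OSDs, and per-pool pg_num_min of
    # 32 (default), 16 (cephfs meta), 1 (.mgr): (ratio, bias, pg_num_min).
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 1),
        "vms":                (1.723792564281303e-05, 1.0, 32),
        "images":             (0.0018289410571936495, 1.0, 32),
        "cephfs.cephfs.meta": (1.4127973638615152e-06, 4.0, 16),
    }
    TARGET_PG = 100 * 3  # mon_target_pg_per_osd * OSD count

    def next_pow2(x):
        p = 1
        while p < x:
            p *= 2
        return p

    for name, (ratio, bias, pg_min) in pools.items():
        raw = ratio * bias * TARGET_PG
        print(f"{name}: pg target {raw} -> {max(pg_min, next_pow2(raw))}")
    # Matches the quantized values logged above: 1, 32, 32, 16.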
Feb 25 13:10:43 compute-0 ceph-mon[76335]: pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:45 compute-0 ceph-mon[76335]: pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.923 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.924 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.963 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:10:45 compute-0 nova_compute[244014]: 2026-02-25 13:10:45.963 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:10:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:10:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640803340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
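nova shells out to ceph df here, presumably because the instance disks live on RBD (images_type=rbd), and derives its free-disk figure from the cluster-wide stats block of the JSON. A sketch of that parse (key names as in standard ceph df --format=json output; the avail figure should line up with the 59 GiB in the surrounding pgmap lines):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"] / 2**30, "GiB")
    print("avail:", stats["total_avail_bytes"] / 2**30, "GiB")  # ~59 GiB here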
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.656 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.98724717646837GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:10:46 compute-0 nova_compute[244014]: 2026-02-25 13:10:46.995 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.117 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.118 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
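The inventory pushed to placement encodes overcommit via allocation_ratio, so schedulable capacity per resource class is (total - reserved) × allocation_ratio: 8 × 4.0 = 32 VCPU, (7679 - 512) × 1.0 = 7167 MEMORY_MB, and (59 - 1) × 0.9 = 52.2 DISK_GB. A worked check on the dict from the line above:

    # Inventory exactly as logged for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2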
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.132 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.154 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.174 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:10:47 compute-0 ceph-mon[76335]: pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/640803340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:10:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286221838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.728 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.734 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.794 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.797 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:10:47 compute-0 nova_compute[244014]: 2026-02-25 13:10:47.797 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:10:48 compute-0 ceph-mon[76335]: pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:10:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:10:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3286221838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:10:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:50 compute-0 ceph-mon[76335]: pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:50 compute-0 nova_compute[244014]: 2026-02-25 13:10:50.793 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:50 compute-0 nova_compute[244014]: 2026-02-25 13:10:50.794 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:50 compute-0 nova_compute[244014]: 2026-02-25 13:10:50.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:52 compute-0 ceph-mon[76335]: pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:52 compute-0 nova_compute[244014]: 2026-02-25 13:10:52.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:53 compute-0 nova_compute[244014]: 2026-02-25 13:10:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:54 compute-0 ceph-mon[76335]: pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.050 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:10:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:55 compute-0 nova_compute[244014]: 2026-02-25 13:10:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:55 compute-0 nova_compute[244014]: 2026-02-25 13:10:55.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:56 compute-0 ceph-mon[76335]: pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:57 compute-0 nova_compute[244014]: 2026-02-25 13:10:57.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:10:58 compute-0 ceph-mon[76335]: pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:10:58 compute-0 nova_compute[244014]: 2026-02-25 13:10:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:10:58 compute-0 nova_compute[244014]: 2026-02-25 13:10:58.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:10:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:10:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:00 compute-0 nova_compute[244014]: 2026-02-25 13:11:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:00 compute-0 nova_compute[244014]: 2026-02-25 13:11:00.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:00 compute-0 ceph-mon[76335]: pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:02 compute-0 nova_compute[244014]: 2026-02-25 13:11:02.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:02 compute-0 nova_compute[244014]: 2026-02-25 13:11:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:03 compute-0 ceph-mon[76335]: pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:05 compute-0 ceph-mon[76335]: pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:05 compute-0 podman[388172]: 2026-02-25 13:11:05.73822123 +0000 UTC m=+0.075872030 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 13:11:05 compute-0 podman[388173]: 2026-02-25 13:11:05.81092846 +0000 UTC m=+0.135903333 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:11:05 compute-0 nova_compute[244014]: 2026-02-25 13:11:05.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:06 compute-0 ceph-mon[76335]: pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:07 compute-0 nova_compute[244014]: 2026-02-25 13:11:07.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:08 compute-0 ceph-mon[76335]: pgmap v2733: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:10 compute-0 ceph-mon[76335]: pgmap v2734: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:10 compute-0 nova_compute[244014]: 2026-02-25 13:11:10.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:12 compute-0 nova_compute[244014]: 2026-02-25 13:11:12.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:12 compute-0 ceph-mon[76335]: pgmap v2735: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:14 compute-0 ceph-mon[76335]: pgmap v2736: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:15 compute-0 sshd-session[388219]: Invalid user sol from 80.94.92.186 port 35336
Feb 25 13:11:15 compute-0 sshd-session[388219]: Connection closed by invalid user sol 80.94.92.186 port 35336 [preauth]
Feb 25 13:11:15 compute-0 nova_compute[244014]: 2026-02-25 13:11:15.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb 25 13:11:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb 25 13:11:16 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb 25 13:11:16 compute-0 ceph-mon[76335]: pgmap v2737: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Feb 25 13:11:17 compute-0 nova_compute[244014]: 2026-02-25 13:11:17.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:18 compute-0 ceph-mon[76335]: osdmap e296: 3 total, 3 up, 3 in
Feb 25 13:11:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:19 compute-0 ceph-mon[76335]: pgmap v2739: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Feb 25 13:11:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb 25 13:11:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb 25 13:11:19 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb 25 13:11:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 13:11:20 compute-0 ceph-mon[76335]: osdmap e297: 3 total, 3 up, 3 in
Feb 25 13:11:20 compute-0 ceph-mon[76335]: pgmap v2741: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 13:11:20 compute-0 nova_compute[244014]: 2026-02-25 13:11:20.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 13:11:22 compute-0 ceph-mon[76335]: pgmap v2742: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 13:11:22 compute-0 nova_compute[244014]: 2026-02-25 13:11:22.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Feb 25 13:11:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:24 compute-0 ceph-mon[76335]: pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Feb 25 13:11:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.5 MiB/s wr, 49 op/s
Feb 25 13:11:25 compute-0 nova_compute[244014]: 2026-02-25 13:11:25.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:26 compute-0 ceph-mon[76335]: pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.5 MiB/s wr, 49 op/s
Feb 25 13:11:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 13:11:27 compute-0 nova_compute[244014]: 2026-02-25 13:11:27.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:28 compute-0 ceph-mon[76335]: pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 13:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 13:11:30 compute-0 ceph-mon[76335]: pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 13:11:30 compute-0 sudo[388221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:11:30 compute-0 sudo[388221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:30 compute-0 sudo[388221]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:30 compute-0 sudo[388246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:11:30 compute-0 sudo[388246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:30 compute-0 nova_compute[244014]: 2026-02-25 13:11:30.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:31 compute-0 sudo[388246]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:11:31
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', '.mgr']
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 13:11:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:11:31 compute-0 sudo[388303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:11:31 compute-0 sudo[388303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:31 compute-0 sudo[388303]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:31 compute-0 sudo[388328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:11:31 compute-0 sudo[388328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.615759495 +0000 UTC m=+0.027922209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.747075248 +0000 UTC m=+0.159237912 container create 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:11:31 compute-0 systemd[1]: Started libpod-conmon-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope.
Feb 25 13:11:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.89859327 +0000 UTC m=+0.310756044 container init 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.908547921 +0000 UTC m=+0.320710635 container start 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:11:31 compute-0 confident_shirley[388380]: 167 167
Feb 25 13:11:31 compute-0 systemd[1]: libpod-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope: Deactivated successfully.
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.949155346 +0000 UTC m=+0.361318070 container attach 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:11:31 compute-0 podman[388364]: 2026-02-25 13:11:31.949900397 +0000 UTC m=+0.362063091 container died 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba2bfa754e0f2961c59b94ad2797e57100ebfba7b1a626079925ea3e79962075-merged.mount: Deactivated successfully.
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:11:32 compute-0 podman[388364]: 2026-02-25 13:11:32.306821942 +0000 UTC m=+0.718984636 container remove 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:11:32 compute-0 systemd[1]: libpod-conmon-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope: Deactivated successfully.
Feb 25 13:11:32 compute-0 podman[388404]: 2026-02-25 13:11:32.540970085 +0000 UTC m=+0.101210995 container create c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:11:32 compute-0 podman[388404]: 2026-02-25 13:11:32.465997201 +0000 UTC m=+0.026238161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:32 compute-0 ceph-mon[76335]: pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 13:11:32 compute-0 systemd[1]: Started libpod-conmon-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope.
Feb 25 13:11:32 compute-0 nova_compute[244014]: 2026-02-25 13:11:32.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:32 compute-0 podman[388404]: 2026-02-25 13:11:32.673892353 +0000 UTC m=+0.234133313 container init c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:11:32 compute-0 podman[388404]: 2026-02-25 13:11:32.684607115 +0000 UTC m=+0.244848015 container start c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:11:32 compute-0 podman[388404]: 2026-02-25 13:11:32.704107065 +0000 UTC m=+0.264348025 container attach c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:11:33 compute-0 charming_lumiere[388422]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:11:33 compute-0 charming_lumiere[388422]: --> All data devices are unavailable
Feb 25 13:11:33 compute-0 systemd[1]: libpod-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope: Deactivated successfully.
Feb 25 13:11:33 compute-0 podman[388404]: 2026-02-25 13:11:33.126232948 +0000 UTC m=+0.686473848 container died c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:11:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 13:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef-merged.mount: Deactivated successfully.
Feb 25 13:11:33 compute-0 podman[388404]: 2026-02-25 13:11:33.301519501 +0000 UTC m=+0.861760411 container remove c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:11:33 compute-0 systemd[1]: libpod-conmon-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope: Deactivated successfully.
Feb 25 13:11:33 compute-0 sudo[388328]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:33 compute-0 sudo[388457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:11:33 compute-0 sudo[388457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:33 compute-0 sudo[388457]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:33 compute-0 sudo[388482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:11:33 compute-0 sudo[388482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:33 compute-0 podman[388519]: 2026-02-25 13:11:33.835920921 +0000 UTC m=+0.091244654 container create 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:11:33 compute-0 podman[388519]: 2026-02-25 13:11:33.780984612 +0000 UTC m=+0.036308395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:33 compute-0 systemd[1]: Started libpod-conmon-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope.
Feb 25 13:11:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:33 compute-0 podman[388519]: 2026-02-25 13:11:33.985557471 +0000 UTC m=+0.240881244 container init 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:11:33 compute-0 podman[388519]: 2026-02-25 13:11:33.993265828 +0000 UTC m=+0.248589551 container start 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:11:34 compute-0 adoring_satoshi[388535]: 167 167
Feb 25 13:11:34 compute-0 systemd[1]: libpod-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope: Deactivated successfully.
Feb 25 13:11:34 compute-0 conmon[388535]: conmon 27f32422b1287c1688cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope/container/memory.events
Feb 25 13:11:34 compute-0 podman[388519]: 2026-02-25 13:11:34.017777789 +0000 UTC m=+0.273101572 container attach 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:11:34 compute-0 podman[388519]: 2026-02-25 13:11:34.018328935 +0000 UTC m=+0.273652658 container died 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef1218157ad34fb7ad8e7bf239015886f71dd45abb17aee75de5044e1475d83d-merged.mount: Deactivated successfully.
Feb 25 13:11:34 compute-0 podman[388519]: 2026-02-25 13:11:34.211587394 +0000 UTC m=+0.466911087 container remove 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:11:34 compute-0 systemd[1]: libpod-conmon-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope: Deactivated successfully.
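[annotation] The bare "167 167" printed by the throwaway container adoring_satoshi above is a uid/gid pair: 167 is the ceph user and group on these images, and cephadm probes it by stat-ing a ceph-owned path inside a short-lived container before running ceph-volume. A minimal sketch of that probe, assuming podman and the same image digest (the probe path /var/lib/ceph is an assumption, not shown in the log):

    import subprocess

    # Hedged reconstruction of the uid/gid probe whose "167 167" output
    # appears above: stat a ceph-owned path inside a throwaway container.
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()
    print(uid, gid)  # expected: 167 167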
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.444121322 +0000 UTC m=+0.096842062 container create fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.387573857 +0000 UTC m=+0.040294397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:34 compute-0 systemd[1]: Started libpod-conmon-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope.
Feb 25 13:11:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.610255756 +0000 UTC m=+0.262976267 container init fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:11:34 compute-0 ceph-mon[76335]: pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.621396821 +0000 UTC m=+0.274117371 container start fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.692057533 +0000 UTC m=+0.344778073 container attach fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]: {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     "0": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "devices": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "/dev/loop3"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             ],
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_name": "ceph_lv0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_size": "21470642176",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "name": "ceph_lv0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "tags": {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_name": "ceph",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.crush_device_class": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.encrypted": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.objectstore": "bluestore",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_id": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.vdo": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.with_tpm": "0"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             },
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "vg_name": "ceph_vg0"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         }
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     ],
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     "1": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "devices": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "/dev/loop4"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             ],
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_name": "ceph_lv1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_size": "21470642176",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "name": "ceph_lv1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "tags": {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_name": "ceph",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.crush_device_class": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.encrypted": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.objectstore": "bluestore",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_id": "1",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.vdo": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.with_tpm": "0"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             },
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "vg_name": "ceph_vg1"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         }
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     ],
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     "2": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "devices": [
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "/dev/loop5"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             ],
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_name": "ceph_lv2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_size": "21470642176",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "name": "ceph_lv2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "tags": {
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.cluster_name": "ceph",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.crush_device_class": "",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.encrypted": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.objectstore": "bluestore",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osd_id": "2",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.vdo": "0",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:                 "ceph.with_tpm": "0"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             },
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "type": "block",
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:             "vg_name": "ceph_vg2"
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:         }
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]:     ]
Feb 25 13:11:34 compute-0 eloquent_leavitt[388577]: }
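[annotation] The JSON emitted by eloquent_leavitt above is device inventory in the shape produced by ceph-volume lvm list --format json: one key per OSD id ("0", "1", "2"), each mapping to the logical volume that backs it, its physical device (/dev/loop3..5), and the LVM tags tying it to cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762. A small sketch of flattening that output for inspection, assuming it has been saved to a file (lvm_list.json is a name chosen here for illustration):

    import json

    # Post-process the ceph-volume lvm list JSON captured above
    # (lvm_list.json is a hypothetical filename for the saved output).
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    # Top-level keys are OSD ids; each value is a list of backing LVs.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")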
Feb 25 13:11:34 compute-0 systemd[1]: libpod-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope: Deactivated successfully.
Feb 25 13:11:34 compute-0 podman[388561]: 2026-02-25 13:11:34.969351843 +0000 UTC m=+0.622072383 container died fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578-merged.mount: Deactivated successfully.
Feb 25 13:11:35 compute-0 podman[388561]: 2026-02-25 13:11:35.199003637 +0000 UTC m=+0.851724177 container remove fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:11:35 compute-0 systemd[1]: libpod-conmon-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope: Deactivated successfully.
Feb 25 13:11:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 13:11:35 compute-0 sudo[388482]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:35 compute-0 sudo[388598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:11:35 compute-0 sudo[388598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:35 compute-0 sudo[388598]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:35 compute-0 sudo[388623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:11:35 compute-0 sudo[388623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.77924696 +0000 UTC m=+0.099181958 container create 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.711981593 +0000 UTC m=+0.031916651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:35 compute-0 systemd[1]: Started libpod-conmon-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope.
Feb 25 13:11:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:35 compute-0 nova_compute[244014]: 2026-02-25 13:11:35.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.936061972 +0000 UTC m=+0.255997020 container init 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.945338974 +0000 UTC m=+0.265273982 container start 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:11:35 compute-0 keen_pike[388689]: 167 167
Feb 25 13:11:35 compute-0 systemd[1]: libpod-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope: Deactivated successfully.
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.992354099 +0000 UTC m=+0.312289147 container attach 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:11:35 compute-0 podman[388660]: 2026-02-25 13:11:35.992896715 +0000 UTC m=+0.312831713 container died 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce8aff379a310feaa89dd0b0a76efaf9ae8407c8bfd519f700b684c6f4497040-merged.mount: Deactivated successfully.
Feb 25 13:11:36 compute-0 podman[388660]: 2026-02-25 13:11:36.222866719 +0000 UTC m=+0.542801727 container remove 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:11:36 compute-0 podman[388674]: 2026-02-25 13:11:36.225610997 +0000 UTC m=+0.412757770 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 13:11:36 compute-0 podman[388690]: 2026-02-25 13:11:36.229457755 +0000 UTC m=+0.356572846 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:11:36 compute-0 systemd[1]: libpod-conmon-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope: Deactivated successfully.
Feb 25 13:11:36 compute-0 podman[388746]: 2026-02-25 13:11:36.445363704 +0000 UTC m=+0.079873784 container create 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:11:36 compute-0 podman[388746]: 2026-02-25 13:11:36.396238559 +0000 UTC m=+0.030748699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:11:36 compute-0 systemd[1]: Started libpod-conmon-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope.
Feb 25 13:11:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:11:36 compute-0 podman[388746]: 2026-02-25 13:11:36.587858512 +0000 UTC m=+0.222368622 container init 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:11:36 compute-0 podman[388746]: 2026-02-25 13:11:36.595399485 +0000 UTC m=+0.229909565 container start 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:11:36 compute-0 podman[388746]: 2026-02-25 13:11:36.617806717 +0000 UTC m=+0.252316807 container attach 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:11:36 compute-0 ceph-mon[76335]: pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 13:11:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 13:11:37 compute-0 lvm[388839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:11:37 compute-0 lvm[388839]: VG ceph_vg0 finished
Feb 25 13:11:37 compute-0 lvm[388842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:11:37 compute-0 lvm[388842]: VG ceph_vg1 finished
Feb 25 13:11:37 compute-0 lvm[388844]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:11:37 compute-0 lvm[388844]: VG ceph_vg2 finished
Feb 25 13:11:37 compute-0 serene_shockley[388762]: {}
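[annotation] The lone "{}" from serene_shockley is the result of the ceph-volume raw list --format json call dispatched via sudo at 13:11:35: an empty object, because all three OSDs on this host are LVM-managed (see the lvm list output above) and raw mode only reports OSDs prepared directly on raw devices. A hedged sketch of re-running the same check on the host, assuming ceph-volume is installed there:

    import json, subprocess

    # Re-issue the inventory call cephadm ran above; an empty JSON object
    # means no raw-mode (non-LVM) OSDs exist on this host.
    out = subprocess.run(
        ["ceph-volume", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not json.loads(out):
        print("no raw OSDs on this host")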
Feb 25 13:11:37 compute-0 systemd[1]: libpod-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Deactivated successfully.
Feb 25 13:11:37 compute-0 systemd[1]: libpod-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Consumed 1.185s CPU time.
Feb 25 13:11:37 compute-0 podman[388746]: 2026-02-25 13:11:37.411087516 +0000 UTC m=+1.045597576 container died 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487-merged.mount: Deactivated successfully.
Feb 25 13:11:37 compute-0 podman[388746]: 2026-02-25 13:11:37.584434395 +0000 UTC m=+1.218944485 container remove 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:11:37 compute-0 systemd[1]: libpod-conmon-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Deactivated successfully.
Feb 25 13:11:37 compute-0 nova_compute[244014]: 2026-02-25 13:11:37.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:37 compute-0 sudo[388623]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:11:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:11:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:11:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
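[annotation] The two config-key set commands above are the cephadm mgr module persisting the device inventory it just gathered into the monitors' config-key store, keyed per host (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). That cache can be read back with the standard config-key interface; a sketch, assuming the stored value is JSON, as cephadm's inventory cache normally is:

    import json, subprocess

    # Read cephadm's cached device inventory back out of the config-key
    # store (key name copied from the audit log above).
    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(out)
    print(json.dumps(inventory, indent=2)[:400])  # truncated for brevity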
Feb 25 13:11:37 compute-0 sudo[388859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:11:37 compute-0 sudo[388859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:11:37 compute-0 sudo[388859]: pam_unix(sudo:session): session closed for user root
Feb 25 13:11:38 compute-0 ceph-mon[76335]: pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 13:11:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:11:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:11:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:40 compute-0 ceph-mon[76335]: pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:40 compute-0 nova_compute[244014]: 2026-02-25 13:11:40.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:41.096 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:11:41 compute-0 nova_compute[244014]: 2026-02-25 13:11:41.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:41 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:41.097 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:11:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:42 compute-0 nova_compute[244014]: 2026-02-25 13:11:42.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:42 compute-0 ceph-mon[76335]: pgmap v2752: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7350063260632303e-05 of space, bias 1.0, pg target 0.005205018978189691 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024948454527306263 of space, bias 1.0, pg target 0.7484536358191879 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4184950664421151e-06 of space, bias 4.0, pg target 0.0017021940797305381 quantized to 16 (current 16)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
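[annotation] The pg_autoscaler lines above all follow one arithmetic rule: pg target = (pool's share of the root's capacity) x bias x PG budget, where the budget for this root works out to 300 (3 OSDs x the default mon_target_pg_per_osd of 100), and the result is then quantized to a power of two, staying at the current pg_num unless the change crosses the autoscaler's threshold. For 'images': 0.0024948454527306263 x 1.0 x 300 = 0.7484536358191879, exactly the logged target. A numeric check under those assumptions:

    # Reproduce the pg_autoscaler targets logged above. Assumption: the
    # PG budget is num_osds * mon_target_pg_per_osd = 3 * 100 = 300.
    budget = 3 * 100

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0024948454527306263, 1.0),
        "cephfs.cephfs.meta": (1.4184950664421151e-06, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * budget)
    # .mgr               -> 0.0021557249951162337
    # images             -> 0.7484536358191879
    # cephfs.cephfs.meta -> 0.0017021940797305381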
Feb 25 13:11:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:44 compute-0 ceph-mon[76335]: pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:11:45 compute-0 nova_compute[244014]: 2026-02-25 13:11:45.928 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:11:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:11:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242954410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.486 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
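[annotation] The ceph df --format=json subprocess bracketing these lines is how this nova-compute, with RBD-backed ephemeral storage, audits cluster capacity for the resource tracker; the free_disk figure in the hypervisor view below is derived from those stats. A sketch of the same call and the fields of interest, assuming the standard ceph df JSON layout (top-level "stats" with total/avail byte counters); which derived figure nova uses downstream is inferred here, not shown verbatim in the log:

    import json, subprocess

    # The same capacity probe nova runs above, parsed for inspection.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)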
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.684 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.987240449525416GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.686 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.687 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:11:46 compute-0 nova_compute[244014]: 2026-02-25 13:11:46.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:11:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb 25 13:11:46 compute-0 ceph-mon[76335]: pgmap v2754: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:11:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1242954410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb 25 13:11:47 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb 25 13:11:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957610471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.343 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.351 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.383 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
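[annotation] The inventory record above also shows how schedulable capacity follows from the raw hypervisor view logged at 13:11:46 (total_vcpus=8, phys_ram=7679MB, phys_disk=59GB): for each resource class, placement allocates against (total - reserved) x allocation_ratio. Worked out for the logged numbers:

    # Schedulable capacity per resource class, from the inventory above:
    # usable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2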
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.386 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:11:47 compute-0 nova_compute[244014]: 2026-02-25 13:11:47.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:48 compute-0 ceph-mon[76335]: osdmap e298: 3 total, 3 up, 3 in
Feb 25 13:11:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3957610471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:11:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:11:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:11:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:49.099 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:11:49 compute-0 ceph-mon[76335]: pgmap v2756: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:49 compute-0 nova_compute[244014]: 2026-02-25 13:11:49.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:49 compute-0 nova_compute[244014]: 2026-02-25 13:11:49.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:11:49 compute-0 nova_compute[244014]: 2026-02-25 13:11:49.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:11:49 compute-0 nova_compute[244014]: 2026-02-25 13:11:49.403 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:11:49 compute-0 nova_compute[244014]: 2026-02-25 13:11:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:50 compute-0 ceph-mon[76335]: pgmap v2757: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:50 compute-0 nova_compute[244014]: 2026-02-25 13:11:50.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:50 compute-0 nova_compute[244014]: 2026-02-25 13:11:50.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:52 compute-0 ceph-mon[76335]: pgmap v2758: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 13:11:52 compute-0 nova_compute[244014]: 2026-02-25 13:11:52.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:11:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb 25 13:11:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb 25 13:11:54 compute-0 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb 25 13:11:55 compute-0 ceph-mon[76335]: pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 13:11:55 compute-0 ceph-mon[76335]: osdmap e299: 3 total, 3 up, 3 in
Feb 25 13:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:11:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 13:11:55 compute-0 nova_compute[244014]: 2026-02-25 13:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:55 compute-0 nova_compute[244014]: 2026-02-25 13:11:55.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:57 compute-0 ceph-mon[76335]: pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 13:11:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:11:57 compute-0 nova_compute[244014]: 2026-02-25 13:11:57.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:11:57 compute-0 nova_compute[244014]: 2026-02-25 13:11:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:58 compute-0 nova_compute[244014]: 2026-02-25 13:11:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:11:58 compute-0 nova_compute[244014]: 2026-02-25 13:11:58.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:11:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:11:59 compute-0 ceph-mon[76335]: pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:11:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:12:00 compute-0 ceph-mon[76335]: pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:12:00 compute-0 nova_compute[244014]: 2026-02-25 13:12:00.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:02 compute-0 ceph-mon[76335]: pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 13:12:02 compute-0 nova_compute[244014]: 2026-02-25 13:12:02.644 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:02 compute-0 nova_compute[244014]: 2026-02-25 13:12:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:03 compute-0 nova_compute[244014]: 2026-02-25 13:12:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:04 compute-0 ceph-mon[76335]: pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:05 compute-0 nova_compute[244014]: 2026-02-25 13:12:05.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:06 compute-0 ceph-mon[76335]: pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:06 compute-0 podman[388929]: 2026-02-25 13:12:06.729830455 +0000 UTC m=+0.064428878 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:12:06 compute-0 podman[388930]: 2026-02-25 13:12:06.76580941 +0000 UTC m=+0.095481474 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:12:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:07 compute-0 nova_compute[244014]: 2026-02-25 13:12:07.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:08 compute-0 ceph-mon[76335]: pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:10 compute-0 ceph-mon[76335]: pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.712005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130712049, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 253, "total_data_size": 3481435, "memory_usage": 3542080, "flush_reason": "Manual Compaction"}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130771526, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3423942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56672, "largest_seqno": 58752, "table_properties": {"data_size": 3414337, "index_size": 6162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19124, "raw_average_key_size": 20, "raw_value_size": 3395243, "raw_average_value_size": 3596, "num_data_blocks": 272, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024908, "oldest_key_time": 1772024908, "file_creation_time": 1772025130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 59628 microseconds, and 10860 cpu microseconds.
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.771627) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3423942 bytes OK
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.771655) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816065) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816125) EVENT_LOG_v1 {"time_micros": 1772025130816110, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3472735, prev total WAL file size 3472735, number of live WAL files 2.
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.817550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3343KB)], [134(9289KB)]
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130817629, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12936042, "oldest_snapshot_seqno": -1}
Feb 25 13:12:10 compute-0 nova_compute[244014]: 2026-02-25 13:12:10.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7932 keys, 11196676 bytes, temperature: kUnknown
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130941103, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11196676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11143495, "index_size": 32268, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 206468, "raw_average_key_size": 26, "raw_value_size": 11001943, "raw_average_value_size": 1387, "num_data_blocks": 1261, "num_entries": 7932, "num_filter_entries": 7932, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.941415) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11196676 bytes
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.966149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.0) write-amplify(3.3) OK, records in: 8454, records dropped: 522 output_compression: NoCompression
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.966202) EVENT_LOG_v1 {"time_micros": 1772025130966180, "job": 82, "event": "compaction_finished", "compaction_time_micros": 123559, "compaction_time_cpu_micros": 29480, "output_level": 6, "num_output_files": 1, "total_output_size": 11196676, "num_input_records": 8454, "num_output_records": 7932, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
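The amplification figures in the compaction summary above can be reproduced from the reported sizes, assuming RocksDB normalizes by the bytes read from the non-output input level: write-amplify = 10.7 MB written / 3.3 MB read from L0 ≈ 3.3, and read-write-amplify = (3.3 + 9.1 + 10.7) / 3.3 ≈ 7.0, matching the logged values.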
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130967000, "job": 82, "event": "table_file_deletion", "file_number": 136}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130968524, "job": 82, "event": "table_file_deletion", "file_number": 134}
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.817416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:12:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
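Each EVENT_LOG_v1 line in the RocksDB burst above carries a JSON payload after the marker, so the flush/compaction history (flush_started, table_file_creation, compaction_finished, table_file_deletion) can be recovered mechanically from the journal. A throwaway sketch, not a Ceph or RocksDB tool:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    # Feed journal text on stdin; prints one line per RocksDB event.
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            print(ev["time_micros"], ev["event"])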
Feb 25 13:12:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:12 compute-0 nova_compute[244014]: 2026-02-25 13:12:12.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:12 compute-0 ceph-mon[76335]: pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:14 compute-0 ceph-mon[76335]: pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:15 compute-0 nova_compute[244014]: 2026-02-25 13:12:15.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:16 compute-0 ceph-mon[76335]: pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:17 compute-0 nova_compute[244014]: 2026-02-25 13:12:17.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:18 compute-0 nova_compute[244014]: 2026-02-25 13:12:18.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:18 compute-0 ceph-mon[76335]: pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:20 compute-0 nova_compute[244014]: 2026-02-25 13:12:20.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:20 compute-0 ceph-mon[76335]: pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:22 compute-0 nova_compute[244014]: 2026-02-25 13:12:22.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:23 compute-0 ceph-mon[76335]: pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:25 compute-0 ceph-mon[76335]: pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:25 compute-0 nova_compute[244014]: 2026-02-25 13:12:25.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:26 compute-0 ceph-mon[76335]: pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:27 compute-0 nova_compute[244014]: 2026-02-25 13:12:27.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:28 compute-0 ceph-mon[76335]: pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:30 compute-0 ceph-mon[76335]: pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:30 compute-0 nova_compute[244014]: 2026-02-25 13:12:30.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:12:31
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:12:32 compute-0 ceph-mon[76335]: pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:32 compute-0 nova_compute[244014]: 2026-02-25 13:12:32.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:34 compute-0 ceph-mon[76335]: pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:35 compute-0 nova_compute[244014]: 2026-02-25 13:12:35.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:36 compute-0 ceph-mon[76335]: pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:37 compute-0 nova_compute[244014]: 2026-02-25 13:12:37.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:37 compute-0 podman[388974]: 2026-02-25 13:12:37.735330577 +0000 UTC m=+0.077288150 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:12:37 compute-0 podman[388975]: 2026-02-25 13:12:37.761068613 +0000 UTC m=+0.099392854 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:12:37 compute-0 sudo[389018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:12:37 compute-0 sudo[389018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:37 compute-0 sudo[389018]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:37 compute-0 sudo[389043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:12:37 compute-0 sudo[389043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:38 compute-0 sudo[389043]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:38 compute-0 sudo[389089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:12:38 compute-0 sudo[389089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:38 compute-0 sudo[389089]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:38 compute-0 sudo[389114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:12:38 compute-0 sudo[389114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:38 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:38 compute-0 sudo[389114]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:12:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:12:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:12:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:12:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:12:39 compute-0 sudo[389173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:12:39 compute-0 sudo[389173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:39 compute-0 sudo[389173]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:39 compute-0 sudo[389198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:12:39 compute-0 sudo[389198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.479604713 +0000 UTC m=+0.077613989 container create cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.427939977 +0000 UTC m=+0.025949303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:39 compute-0 systemd[1]: Started libpod-conmon-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope.
Feb 25 13:12:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.638964126 +0000 UTC m=+0.236973442 container init cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.64901231 +0000 UTC m=+0.247021576 container start cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:12:39 compute-0 infallible_shockley[389251]: 167 167
Feb 25 13:12:39 compute-0 systemd[1]: libpod-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope: Deactivated successfully.
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.68696586 +0000 UTC m=+0.284975126 container attach cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.68804276 +0000 UTC m=+0.286052026 container died cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:12:39 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e34f12875dfaf04ba9ca18b6876e0f6cffbc9677bb95c2395ed16adde12b4c1-merged.mount: Deactivated successfully.
Feb 25 13:12:39 compute-0 podman[389235]: 2026-02-25 13:12:39.887098374 +0000 UTC m=+0.485107660 container remove cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:12:39 compute-0 systemd[1]: libpod-conmon-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope: Deactivated successfully.
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.107830178 +0000 UTC m=+0.080456120 container create a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.065811293 +0000 UTC m=+0.038437275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:40 compute-0 systemd[1]: Started libpod-conmon-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope.
Feb 25 13:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.240042856 +0000 UTC m=+0.212668808 container init a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.245651604 +0000 UTC m=+0.218277546 container start a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.271678778 +0000 UTC m=+0.244304770 container attach a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:12:40 compute-0 epic_snyder[389291]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:12:40 compute-0 epic_snyder[389291]: --> All data devices are unavailable
Feb 25 13:12:40 compute-0 systemd[1]: libpod-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope: Deactivated successfully.
Feb 25 13:12:40 compute-0 conmon[389291]: conmon a57761b2dc5a7090ed76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope/container/memory.events
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.732355359 +0000 UTC m=+0.704981261 container died a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b-merged.mount: Deactivated successfully.
Feb 25 13:12:40 compute-0 nova_compute[244014]: 2026-02-25 13:12:40.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:40 compute-0 podman[389275]: 2026-02-25 13:12:40.950255774 +0000 UTC m=+0.922881686 container remove a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:12:40 compute-0 systemd[1]: libpod-conmon-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope: Deactivated successfully.
Feb 25 13:12:40 compute-0 sudo[389198]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:41 compute-0 sudo[389325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:12:41 compute-0 sudo[389325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:41 compute-0 sudo[389325]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:41 compute-0 ceph-mon[76335]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:41 compute-0 sudo[389350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:12:41 compute-0 sudo[389350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.385511998 +0000 UTC m=+0.062146764 container create f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.346136957 +0000 UTC m=+0.022771763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:41 compute-0 systemd[1]: Started libpod-conmon-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope.
Feb 25 13:12:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.528947032 +0000 UTC m=+0.205581888 container init f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.537071551 +0000 UTC m=+0.213706357 container start f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:12:41 compute-0 trusting_swartz[389403]: 167 167
Feb 25 13:12:41 compute-0 systemd[1]: libpod-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope: Deactivated successfully.
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.558241568 +0000 UTC m=+0.234876414 container attach f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.558848505 +0000 UTC m=+0.235483301 container died f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeedaa37a4f34ce23227c2c36d7c167f4055db91766cee158e058c7f7a43df78-merged.mount: Deactivated successfully.
Feb 25 13:12:41 compute-0 podman[389387]: 2026-02-25 13:12:41.695674334 +0000 UTC m=+0.372309130 container remove f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:12:41 compute-0 systemd[1]: libpod-conmon-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope: Deactivated successfully.
Feb 25 13:12:41 compute-0 podman[389427]: 2026-02-25 13:12:41.866220803 +0000 UTC m=+0.062912785 container create 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:12:41 compute-0 podman[389427]: 2026-02-25 13:12:41.824052964 +0000 UTC m=+0.020745036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:41 compute-0 systemd[1]: Started libpod-conmon-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope.
Feb 25 13:12:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:41 compute-0 podman[389427]: 2026-02-25 13:12:41.994192092 +0000 UTC m=+0.190884104 container init 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:12:42 compute-0 podman[389427]: 2026-02-25 13:12:42.000212972 +0000 UTC m=+0.196905034 container start 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:12:42 compute-0 podman[389427]: 2026-02-25 13:12:42.02249306 +0000 UTC m=+0.219185122 container attach 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:12:42 compute-0 awesome_feistel[389444]: {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     "0": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "devices": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "/dev/loop3"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             ],
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_name": "ceph_lv0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_size": "21470642176",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "name": "ceph_lv0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "tags": {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_name": "ceph",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.crush_device_class": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.encrypted": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.objectstore": "bluestore",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_id": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.vdo": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.with_tpm": "0"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             },
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "vg_name": "ceph_vg0"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         }
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     ],
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     "1": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "devices": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "/dev/loop4"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             ],
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_name": "ceph_lv1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_size": "21470642176",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "name": "ceph_lv1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "tags": {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_name": "ceph",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.crush_device_class": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.encrypted": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.objectstore": "bluestore",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_id": "1",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.vdo": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.with_tpm": "0"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             },
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "vg_name": "ceph_vg1"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         }
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     ],
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     "2": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "devices": [
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "/dev/loop5"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             ],
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_name": "ceph_lv2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_size": "21470642176",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "name": "ceph_lv2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "tags": {
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.cluster_name": "ceph",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.crush_device_class": "",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.encrypted": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.objectstore": "bluestore",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osd_id": "2",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.vdo": "0",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:                 "ceph.with_tpm": "0"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             },
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "type": "block",
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:             "vg_name": "ceph_vg2"
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:         }
Feb 25 13:12:42 compute-0 awesome_feistel[389444]:     ]
Feb 25 13:12:42 compute-0 awesome_feistel[389444]: }
Feb 25 13:12:42 compute-0 systemd[1]: libpod-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope: Deactivated successfully.
Feb 25 13:12:42 compute-0 podman[389427]: 2026-02-25 13:12:42.288399198 +0000 UTC m=+0.485091190 container died 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87-merged.mount: Deactivated successfully.
Feb 25 13:12:42 compute-0 podman[389427]: 2026-02-25 13:12:42.45621885 +0000 UTC m=+0.652910832 container remove 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:12:42 compute-0 systemd[1]: libpod-conmon-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope: Deactivated successfully.
Feb 25 13:12:42 compute-0 sudo[389350]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:42 compute-0 sudo[389467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:12:42 compute-0 sudo[389467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:42 compute-0 sudo[389467]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:42 compute-0 sudo[389492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:12:42 compute-0 sudo[389492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:42 compute-0 nova_compute[244014]: 2026-02-25 13:12:42.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:42 compute-0 podman[389529]: 2026-02-25 13:12:42.984012564 +0000 UTC m=+0.090367260 container create ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:42.924056513 +0000 UTC m=+0.030411239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:12:43 compute-0 systemd[1]: Started libpod-conmon-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope.
Feb 25 13:12:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:43 compute-0 ceph-mon[76335]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:43.134560718 +0000 UTC m=+0.240915384 container init ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:43.144417066 +0000 UTC m=+0.250771772 container start ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:12:43 compute-0 friendly_vaughan[389545]: 167 167
Feb 25 13:12:43 compute-0 systemd[1]: libpod-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope: Deactivated successfully.
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:43.165536101 +0000 UTC m=+0.271890777 container attach ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:43.16688992 +0000 UTC m=+0.273244596 container died ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:12:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aad1699090114f49a7087e520d026bc599f7498de3cc77e1e8a16c7a01b600f-merged.mount: Deactivated successfully.
Feb 25 13:12:43 compute-0 podman[389529]: 2026-02-25 13:12:43.377851198 +0000 UTC m=+0.484205904 container remove ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:12:43 compute-0 systemd[1]: libpod-conmon-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope: Deactivated successfully.
Feb 25 13:12:43 compute-0 podman[389570]: 2026-02-25 13:12:43.599823078 +0000 UTC m=+0.084282188 container create fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:12:43 compute-0 podman[389570]: 2026-02-25 13:12:43.551139625 +0000 UTC m=+0.035598725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:12:43 compute-0 systemd[1]: Started libpod-conmon-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope.
Feb 25 13:12:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:12:43 compute-0 podman[389570]: 2026-02-25 13:12:43.738183949 +0000 UTC m=+0.222643009 container init fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:12:43 compute-0 podman[389570]: 2026-02-25 13:12:43.744473296 +0000 UTC m=+0.228932306 container start fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:12:43 compute-0 podman[389570]: 2026-02-25 13:12:43.788246811 +0000 UTC m=+0.272705871 container attach fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:12:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:44 compute-0 ceph-mon[76335]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:44 compute-0 lvm[389664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:12:44 compute-0 lvm[389664]: VG ceph_vg1 finished
Feb 25 13:12:44 compute-0 lvm[389665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:12:44 compute-0 lvm[389665]: VG ceph_vg0 finished
Feb 25 13:12:44 compute-0 lvm[389667]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:12:44 compute-0 lvm[389667]: VG ceph_vg2 finished
Feb 25 13:12:44 compute-0 busy_taussig[389586]: {}
Feb 25 13:12:44 compute-0 systemd[1]: libpod-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Deactivated successfully.
Feb 25 13:12:44 compute-0 systemd[1]: libpod-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Consumed 1.064s CPU time.
Feb 25 13:12:44 compute-0 podman[389570]: 2026-02-25 13:12:44.486546872 +0000 UTC m=+0.971005912 container died fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632-merged.mount: Deactivated successfully.
Feb 25 13:12:44 compute-0 podman[389570]: 2026-02-25 13:12:44.859757467 +0000 UTC m=+1.344216477 container remove fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:12:44 compute-0 sudo[389492]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:12:44 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:12:44 compute-0 systemd[1]: libpod-conmon-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Deactivated successfully.
Feb 25 13:12:44 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:45 compute-0 sudo[389684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:12:45 compute-0 sudo[389684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:12:45 compute-0 sudo[389684]: pam_unix(sudo:session): session closed for user root
Feb 25 13:12:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:45 compute-0 nova_compute[244014]: 2026-02-25 13:12:45.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:45 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:45 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:12:46 compute-0 nova_compute[244014]: 2026-02-25 13:12:46.926 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:12:47 compute-0 ceph-mon[76335]: pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293435788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.472 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.675 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3542MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.677 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.753 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:12:47 compute-0 nova_compute[244014]: 2026-02-25 13:12:47.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:12:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:12:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366442388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:12:48 compute-0 nova_compute[244014]: 2026-02-25 13:12:48.467 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:12:48 compute-0 nova_compute[244014]: 2026-02-25 13:12:48.475 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:12:48 compute-0 nova_compute[244014]: 2026-02-25 13:12:48.496 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:12:48 compute-0 nova_compute[244014]: 2026-02-25 13:12:48.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:12:48 compute-0 nova_compute[244014]: 2026-02-25 13:12:48.499 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:12:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1293435788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:12:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:12:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:12:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:49 compute-0 nova_compute[244014]: 2026-02-25 13:12:49.499 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:49 compute-0 nova_compute[244014]: 2026-02-25 13:12:49.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:12:49 compute-0 nova_compute[244014]: 2026-02-25 13:12:49.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:12:49 compute-0 ceph-mon[76335]: pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1366442388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:12:49 compute-0 nova_compute[244014]: 2026-02-25 13:12:49.731 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:12:50 compute-0 ceph-mon[76335]: pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:50 compute-0 nova_compute[244014]: 2026-02-25 13:12:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:50 compute-0 nova_compute[244014]: 2026-02-25 13:12:50.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:52 compute-0 nova_compute[244014]: 2026-02-25 13:12:52.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:52 compute-0 ceph-mon[76335]: pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:52 compute-0 nova_compute[244014]: 2026-02-25 13:12:52.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:54 compute-0 ceph-mon[76335]: pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:12:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:55 compute-0 nova_compute[244014]: 2026-02-25 13:12:55.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:56 compute-0 nova_compute[244014]: 2026-02-25 13:12:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:56 compute-0 ceph-mon[76335]: pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:57 compute-0 nova_compute[244014]: 2026-02-25 13:12:57.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:12:57 compute-0 nova_compute[244014]: 2026-02-25 13:12:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:12:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:12:59 compute-0 ceph-mon[76335]: pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:12:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:00 compute-0 nova_compute[244014]: 2026-02-25 13:13:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:00 compute-0 nova_compute[244014]: 2026-02-25 13:13:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:13:00 compute-0 nova_compute[244014]: 2026-02-25 13:13:00.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:01 compute-0 ceph-mon[76335]: pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:02 compute-0 nova_compute[244014]: 2026-02-25 13:13:02.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:03 compute-0 ceph-mon[76335]: pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:03 compute-0 nova_compute[244014]: 2026-02-25 13:13:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:04 compute-0 ceph-mon[76335]: pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:05 compute-0 nova_compute[244014]: 2026-02-25 13:13:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:05 compute-0 nova_compute[244014]: 2026-02-25 13:13:05.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:06 compute-0 ceph-mon[76335]: pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:07 compute-0 nova_compute[244014]: 2026-02-25 13:13:07.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:08 compute-0 ceph-mon[76335]: pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:08 compute-0 podman[389753]: 2026-02-25 13:13:08.748624354 +0000 UTC m=+0.077995021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 13:13:08 compute-0 podman[389754]: 2026-02-25 13:13:08.783382904 +0000 UTC m=+0.113017898 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 13:13:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:10 compute-0 ceph-mon[76335]: pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:10 compute-0 nova_compute[244014]: 2026-02-25 13:13:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:12 compute-0 ceph-mon[76335]: pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:12 compute-0 nova_compute[244014]: 2026-02-25 13:13:12.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:14 compute-0 ceph-mon[76335]: pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:15 compute-0 nova_compute[244014]: 2026-02-25 13:13:15.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:17 compute-0 ceph-mon[76335]: pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:17 compute-0 nova_compute[244014]: 2026-02-25 13:13:17.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:19 compute-0 ceph-mon[76335]: pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:20 compute-0 nova_compute[244014]: 2026-02-25 13:13:20.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:21 compute-0 ceph-mon[76335]: pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:22 compute-0 ceph-mon[76335]: pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:22 compute-0 nova_compute[244014]: 2026-02-25 13:13:22.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:23 compute-0 nova_compute[244014]: 2026-02-25 13:13:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:24 compute-0 ceph-mon[76335]: pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:25 compute-0 nova_compute[244014]: 2026-02-25 13:13:25.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:25 compute-0 nova_compute[244014]: 2026-02-25 13:13:25.957 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:25 compute-0 nova_compute[244014]: 2026-02-25 13:13:25.957 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:13:25 compute-0 nova_compute[244014]: 2026-02-25 13:13:25.994 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:13:26 compute-0 ceph-mon[76335]: pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:27 compute-0 nova_compute[244014]: 2026-02-25 13:13:27.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:28 compute-0 ceph-mon[76335]: pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.059611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209059751, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 870, "num_deletes": 250, "total_data_size": 1220985, "memory_usage": 1237896, "flush_reason": "Manual Compaction"}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209103990, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 759052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58753, "largest_seqno": 59622, "table_properties": {"data_size": 755524, "index_size": 1307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9394, "raw_average_key_size": 20, "raw_value_size": 747947, "raw_average_value_size": 1640, "num_data_blocks": 59, "num_entries": 456, "num_filter_entries": 456, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025131, "oldest_key_time": 1772025131, "file_creation_time": 1772025209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 44447 microseconds, and 3634 cpu microseconds.
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.104068) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 759052 bytes OK
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.104107) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138657) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138755) EVENT_LOG_v1 {"time_micros": 1772025209138744, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1216733, prev total WAL file size 1216733, number of live WAL files 2.
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.139517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(741KB)], [137(10MB)]
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209139556, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11955728, "oldest_snapshot_seqno": -1}
Feb 25 13:13:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7909 keys, 9042563 bytes, temperature: kUnknown
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209322865, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9042563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8993316, "index_size": 28398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206129, "raw_average_key_size": 26, "raw_value_size": 8855881, "raw_average_value_size": 1119, "num_data_blocks": 1102, "num_entries": 7909, "num_filter_entries": 7909, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.323154) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9042563 bytes
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.330555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.2 rd, 49.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.7 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(27.7) write-amplify(11.9) OK, records in: 8388, records dropped: 479 output_compression: NoCompression
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.330588) EVENT_LOG_v1 {"time_micros": 1772025209330574, "job": 84, "event": "compaction_finished", "compaction_time_micros": 183397, "compaction_time_cpu_micros": 33913, "output_level": 6, "num_output_files": 1, "total_output_size": 9042563, "num_input_records": 8388, "num_output_records": 7909, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209330887, "job": 84, "event": "table_file_deletion", "file_number": 139}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209332899, "job": 84, "event": "table_file_deletion", "file_number": 137}
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.139425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:13:29 compute-0 nova_compute[244014]: 2026-02-25 13:13:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:29 compute-0 nova_compute[244014]: 2026-02-25 13:13:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:13:30 compute-0 nova_compute[244014]: 2026-02-25 13:13:30.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:13:31
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.meta', 'vms', 'default.rgw.control']
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:13:31 compute-0 ceph-mon[76335]: pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:13:32 compute-0 nova_compute[244014]: 2026-02-25 13:13:32.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:33 compute-0 ceph-mon[76335]: pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:34 compute-0 ceph-mon[76335]: pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:35 compute-0 nova_compute[244014]: 2026-02-25 13:13:35.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:36 compute-0 ceph-mon[76335]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:37 compute-0 nova_compute[244014]: 2026-02-25 13:13:37.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:38 compute-0 ceph-mon[76335]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:39 compute-0 podman[389796]: 2026-02-25 13:13:39.721768738 +0000 UTC m=+0.058242003 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:13:39 compute-0 podman[389797]: 2026-02-25 13:13:39.787764859 +0000 UTC m=+0.117043241 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 25 13:13:40 compute-0 ceph-mon[76335]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:40 compute-0 nova_compute[244014]: 2026-02-25 13:13:40.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:42 compute-0 ceph-mon[76335]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:42 compute-0 nova_compute[244014]: 2026-02-25 13:13:42.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:13:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:44 compute-0 ceph-mon[76335]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:45 compute-0 sudo[389842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:13:45 compute-0 sudo[389842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:45 compute-0 sudo[389842]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:45 compute-0 sudo[389867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:13:45 compute-0 sudo[389867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:45 compute-0 sudo[389867]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:13:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:13:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:13:45 compute-0 sudo[389923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:13:45 compute-0 sudo[389923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:45 compute-0 sudo[389923]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:45 compute-0 nova_compute[244014]: 2026-02-25 13:13:45.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:46 compute-0 sudo[389948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:13:46 compute-0 sudo[389948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.353859584 +0000 UTC m=+0.081572511 container create 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.30725869 +0000 UTC m=+0.034971587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:46 compute-0 systemd[1]: Started libpod-conmon-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope.
Feb 25 13:13:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.49377937 +0000 UTC m=+0.221492357 container init 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.503499154 +0000 UTC m=+0.231212081 container start 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:13:46 compute-0 vigilant_mendel[390001]: 167 167
Feb 25 13:13:46 compute-0 systemd[1]: libpod-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope: Deactivated successfully.
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.525860374 +0000 UTC m=+0.253573351 container attach 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.526267326 +0000 UTC m=+0.253980253 container died 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e989c288c7b5eda488ebd1edf6a84c9c1b24fbe3efcb56a56e1363a521c657e6-merged.mount: Deactivated successfully.
Feb 25 13:13:46 compute-0 podman[389985]: 2026-02-25 13:13:46.760994955 +0000 UTC m=+0.488707882 container remove 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:13:46 compute-0 systemd[1]: libpod-conmon-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope: Deactivated successfully.
Feb 25 13:13:46 compute-0 ceph-mon[76335]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:13:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:13:46 compute-0 podman[390027]: 2026-02-25 13:13:46.947374441 +0000 UTC m=+0.067437143 container create 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:13:46 compute-0 podman[390027]: 2026-02-25 13:13:46.902990759 +0000 UTC m=+0.023053551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:47 compute-0 systemd[1]: Started libpod-conmon-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope.
Feb 25 13:13:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:47 compute-0 podman[390027]: 2026-02-25 13:13:47.077951073 +0000 UTC m=+0.198013875 container init 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:13:47 compute-0 podman[390027]: 2026-02-25 13:13:47.089082516 +0000 UTC m=+0.209145228 container start 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:13:47 compute-0 podman[390027]: 2026-02-25 13:13:47.110725297 +0000 UTC m=+0.230788099 container attach 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:13:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:47 compute-0 sshd-session[390049]: Accepted publickey for zuul from 192.168.122.30 port 44936 ssh2: ECDSA SHA256:Lt+nyLvEHFdh9YsbJVkTJEEj74N9LTMmF+RpURDsZbk
Feb 25 13:13:47 compute-0 systemd-logind[811]: New session 52 of user zuul.
Feb 25 13:13:47 compute-0 systemd[1]: Started Session 52 of User zuul.
Feb 25 13:13:47 compute-0 sshd-session[390049]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 25 13:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:13:47 compute-0 pensive_cannon[390044]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:13:47 compute-0 pensive_cannon[390044]: --> All data devices are unavailable
Feb 25 13:13:47 compute-0 systemd[1]: libpod-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope: Deactivated successfully.
Feb 25 13:13:47 compute-0 podman[390114]: 2026-02-25 13:13:47.623100054 +0000 UTC m=+0.032440586 container died 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b-merged.mount: Deactivated successfully.
Feb 25 13:13:47 compute-0 nova_compute[244014]: 2026-02-25 13:13:47.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:47 compute-0 podman[390114]: 2026-02-25 13:13:47.749472608 +0000 UTC m=+0.158813130 container remove 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:13:47 compute-0 systemd[1]: libpod-conmon-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope: Deactivated successfully.
Feb 25 13:13:47 compute-0 sudo[389948]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:13:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:13:47 compute-0 sudo[390129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:13:47 compute-0 sudo[390129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:47 compute-0 sudo[390129]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:47 compute-0 sudo[390154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:13:47 compute-0 sudo[390154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.288345854 +0000 UTC m=+0.074259616 container create 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.247116741 +0000 UTC m=+0.033030563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:48 compute-0 systemd[1]: Started libpod-conmon-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope.
Feb 25 13:13:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.413321678 +0000 UTC m=+0.199235490 container init 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.421144958 +0000 UTC m=+0.207058700 container start 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:13:48 compute-0 gracious_cray[390208]: 167 167
Feb 25 13:13:48 compute-0 systemd[1]: libpod-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope: Deactivated successfully.
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.431 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.454023375 +0000 UTC m=+0.239937107 container attach 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.454449687 +0000 UTC m=+0.240363419 container died 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:13:48 compute-0 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-06cafe6ad857fad7149c131511fa881ab1402be7ae63aad83419805028cd094a-merged.mount: Deactivated successfully.
Feb 25 13:13:48 compute-0 podman[390192]: 2026-02-25 13:13:48.641335928 +0000 UTC m=+0.427249700 container remove 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:13:48 compute-0 systemd[1]: libpod-conmon-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope: Deactivated successfully.
Feb 25 13:13:48 compute-0 podman[390255]: 2026-02-25 13:13:48.861774074 +0000 UTC m=+0.079949886 container create 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:13:48 compute-0 ceph-mon[76335]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:48 compute-0 podman[390255]: 2026-02-25 13:13:48.817227187 +0000 UTC m=+0.035403049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:48 compute-0 systemd[1]: Started libpod-conmon-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope.
Feb 25 13:13:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:48 compute-0 podman[390255]: 2026-02-25 13:13:48.991973935 +0000 UTC m=+0.210149797 container init 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 13:13:49 compute-0 podman[390255]: 2026-02-25 13:13:49.001077392 +0000 UTC m=+0.219253204 container start 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:13:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:13:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605900426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:13:49 compute-0 podman[390255]: 2026-02-25 13:13:49.012169975 +0000 UTC m=+0.230345847 container attach 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.021 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:13:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.172 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.175 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.175 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.176 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.241 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.241 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.256 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:13:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]: {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     "0": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "devices": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "/dev/loop3"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             ],
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_name": "ceph_lv0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_size": "21470642176",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "name": "ceph_lv0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "tags": {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_name": "ceph",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.crush_device_class": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.encrypted": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.objectstore": "bluestore",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_id": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.vdo": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.with_tpm": "0"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             },
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "vg_name": "ceph_vg0"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         }
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     ],
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     "1": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "devices": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "/dev/loop4"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             ],
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_name": "ceph_lv1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_size": "21470642176",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "name": "ceph_lv1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "tags": {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_name": "ceph",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.crush_device_class": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.encrypted": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.objectstore": "bluestore",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_id": "1",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.vdo": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.with_tpm": "0"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             },
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "vg_name": "ceph_vg1"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         }
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     ],
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     "2": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "devices": [
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "/dev/loop5"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             ],
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_name": "ceph_lv2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_size": "21470642176",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "name": "ceph_lv2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "tags": {
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.cluster_name": "ceph",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.crush_device_class": "",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.encrypted": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.objectstore": "bluestore",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osd_id": "2",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.vdo": "0",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:                 "ceph.with_tpm": "0"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             },
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "type": "block",
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:             "vg_name": "ceph_vg2"
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:         }
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]:     ]
Feb 25 13:13:49 compute-0 awesome_ardinghelli[390271]: }
Feb 25 13:13:49 compute-0 systemd[1]: libpod-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope: Deactivated successfully.
Feb 25 13:13:49 compute-0 podman[390283]: 2026-02-25 13:13:49.381786707 +0000 UTC m=+0.024252914 container died 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:13:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.423 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:13:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.424 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:13:49 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.424 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3-merged.mount: Deactivated successfully.
Feb 25 13:13:49 compute-0 podman[390283]: 2026-02-25 13:13:49.542674064 +0000 UTC m=+0.185140261 container remove 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:13:49 compute-0 systemd[1]: libpod-conmon-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope: Deactivated successfully.
Feb 25 13:13:49 compute-0 sudo[390154]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:49 compute-0 sudo[390317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:13:49 compute-0 sudo[390317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:49 compute-0 sudo[390317]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:49 compute-0 sudo[390342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:13:49 compute-0 sudo[390342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
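The sudo line above shows how cephadm runs its device scans: the orchestrator copies its own binary to /var/lib/ceph/<fsid>/cephadm.<digest> and invokes it as root, and the `ceph-volume ... -- raw list --format json` subcommand then runs inside a short-lived container from the pinned ceph image (the podman create/init/start/died/remove cycle around it). A sketch of issuing the same call from Python, using the exact paths from this log; this is an illustration, not how the cephadm mgr module itself drives the host:

    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Mirrors the logged sudo command line; needs root, podman and the
    # copied cephadm binary to actually be present on the host.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out))  # quirky_mclean prints "{}" below: no raw-mode OSDs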
Feb 25 13:13:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:13:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807305594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.857 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
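That is nova's periodic resource audit shelling out to the same `ceph df` the mon just dispatched; the RBD driver reads cluster totals from the JSON. A sketch of the same call, assuming the `stats` field names of current Ceph releases:

    import json
    import subprocess

    # The exact command line measured above (returned 0 in 0.601s).
    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout)

    stats = df["stats"]  # cluster-wide totals, in bytes
    print(stats["total_bytes"], stats["total_used_bytes"],
          stats["total_avail_bytes"])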
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.862 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.882 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.884 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:13:49 compute-0 nova_compute[244014]: 2026-02-25 13:13:49.885 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
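The inventory dict logged above is Placement's entire capacity model for this host: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio. A quick check against the logged values:

    # Placement capacity rule: usable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2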
Feb 25 13:13:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/605900426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:13:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2807305594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.024037248 +0000 UTC m=+0.062669258 container create fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:49.97906576 +0000 UTC m=+0.017697760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:50 compute-0 systemd[1]: Started libpod-conmon-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope.
Feb 25 13:13:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.153481468 +0000 UTC m=+0.192113518 container init fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.16418436 +0000 UTC m=+0.202816340 container start fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:13:50 compute-0 jovial_mirzakhani[390508]: 167 167
Feb 25 13:13:50 compute-0 systemd[1]: libpod-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope: Deactivated successfully.
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.195167444 +0000 UTC m=+0.233799494 container attach fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.195514394 +0000 UTC m=+0.234146404 container died fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb 25 13:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f684db7d63c84d66b26e028e617b9a43a891e681ed1745088b7bf770bc14721-merged.mount: Deactivated successfully.
Feb 25 13:13:50 compute-0 podman[390426]: 2026-02-25 13:13:50.409437126 +0000 UTC m=+0.448069096 container remove fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:13:50 compute-0 systemd[1]: libpod-conmon-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope: Deactivated successfully.
Feb 25 13:13:50 compute-0 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000096s
Feb 25 13:13:50 compute-0 podman[390604]: 2026-02-25 13:13:50.590238514 +0000 UTC m=+0.077183427 container create 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:13:50 compute-0 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.004726 took=0.000082s
Feb 25 13:13:50 compute-0 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004788 took=0.000089s
Feb 25 13:13:50 compute-0 podman[390604]: 2026-02-25 13:13:50.546938744 +0000 UTC m=+0.033883707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:13:50 compute-0 systemd[1]: Started libpod-conmon-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope.
Feb 25 13:13:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
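The four kernel warnings flag that these XFS filesystems (the overlay mount plus the bind-mounted ceph paths) lack bigtime inode timestamps, so they cap at 0x7fffffff = 2^31 - 1 seconds after the Unix epoch:

    from datetime import datetime, timezone

    # 0x7fffffff seconds is the classic signed 32-bit limit
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00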
Feb 25 13:13:50 compute-0 podman[390604]: 2026-02-25 13:13:50.727981559 +0000 UTC m=+0.214926512 container init 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:13:50 compute-0 podman[390604]: 2026-02-25 13:13:50.736092617 +0000 UTC m=+0.223037500 container start 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:13:50 compute-0 podman[390604]: 2026-02-25 13:13:50.773755199 +0000 UTC m=+0.260700172 container attach 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:13:50 compute-0 nova_compute[244014]: 2026-02-25 13:13:50.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:51 compute-0 ceph-mon[76335]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
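The mon and mgr repeat this pgmap summary one version apart for the rest of the capture, and the line format is stable enough to scrape. A hypothetical regex for pulling the PG states and usage figures out of it:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v2818: 305 pgs: 305 active+clean; "
            "41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail")
    m = PGMAP.match(line)
    print(m.group("ver"), m.group("states"), m.group("avail"), m.group("total"))
    # 2818 305 active+clean 59 GiB 60 GiB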
Feb 25 13:13:51 compute-0 nova_compute[244014]: 2026-02-25 13:13:51.330 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:51 compute-0 nova_compute[244014]: 2026-02-25 13:13:51.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:13:51 compute-0 nova_compute[244014]: 2026-02-25 13:13:51.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:13:51 compute-0 nova_compute[244014]: 2026-02-25 13:13:51.349 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:13:51 compute-0 nova_compute[244014]: 2026-02-25 13:13:51.350 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:51 compute-0 lvm[390702]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:13:51 compute-0 lvm[390699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:13:51 compute-0 lvm[390702]: VG ceph_vg2 finished
Feb 25 13:13:51 compute-0 lvm[390699]: VG ceph_vg0 finished
Feb 25 13:13:51 compute-0 lvm[390700]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:13:51 compute-0 lvm[390700]: VG ceph_vg1 finished
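These lvm messages are event-driven pvscan autoactivation: each Ceph VG here sits on a single loop PV, so the moment a PV comes online its VG is "complete" and is activated ("finished") immediately. A small sketch recovering the PV-to-VG pairing straight from such lines (message text copied from above):

    import re

    lines = [
        "PV /dev/loop5 online, VG ceph_vg2 is complete.",
        "PV /dev/loop3 online, VG ceph_vg0 is complete.",
        "PV /dev/loop4 online, VG ceph_vg1 is complete.",
    ]
    for line in lines:
        pv, vg = re.match(r"PV (\S+) online, VG (\S+) is complete", line).groups()
        print(f"{vg} -> {pv}")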
Feb 25 13:13:51 compute-0 quirky_mclean[390621]: {}
Feb 25 13:13:51 compute-0 systemd[1]: libpod-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Deactivated successfully.
Feb 25 13:13:51 compute-0 systemd[1]: libpod-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Consumed 1.212s CPU time.
Feb 25 13:13:51 compute-0 podman[390604]: 2026-02-25 13:13:51.577991337 +0000 UTC m=+1.064936270 container died 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53-merged.mount: Deactivated successfully.
Feb 25 13:13:51 compute-0 podman[390604]: 2026-02-25 13:13:51.811684517 +0000 UTC m=+1.298629440 container remove 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:13:51 compute-0 systemd[1]: libpod-conmon-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Deactivated successfully.
Feb 25 13:13:51 compute-0 sudo[390342]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:13:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:13:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:51 compute-0 sudo[390719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:13:51 compute-0 sudo[390719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:13:51 compute-0 sudo[390719]: pam_unix(sudo:session): session closed for user root
Feb 25 13:13:52 compute-0 sshd-session[390059]: Connection closed by 192.168.122.30 port 44936
Feb 25 13:13:52 compute-0 sshd-session[390049]: pam_unix(sshd:session): session closed for user zuul
Feb 25 13:13:52 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Feb 25 13:13:52 compute-0 systemd-logind[811]: Session 52 logged out. Waiting for processes to exit.
Feb 25 13:13:52 compute-0 systemd-logind[811]: Removed session 52.
Feb 25 13:13:52 compute-0 nova_compute[244014]: 2026-02-25 13:13:52.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:52 compute-0 ceph-mon[76335]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:13:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:53 compute-0 nova_compute[244014]: 2026-02-25 13:13:53.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:54 compute-0 ceph-mon[76335]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:13:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:55 compute-0 nova_compute[244014]: 2026-02-25 13:13:55.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:57 compute-0 ceph-mon[76335]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:57 compute-0 nova_compute[244014]: 2026-02-25 13:13:57.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:13:57 compute-0 nova_compute[244014]: 2026-02-25 13:13:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:57 compute-0 nova_compute[244014]: 2026-02-25 13:13:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:13:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:13:59 compute-0 ceph-mon[76335]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:13:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:00 compute-0 nova_compute[244014]: 2026-02-25 13:14:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:00 compute-0 nova_compute[244014]: 2026-02-25 13:14:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:14:00 compute-0 nova_compute[244014]: 2026-02-25 13:14:00.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:01 compute-0 ceph-mon[76335]: pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:02 compute-0 nova_compute[244014]: 2026-02-25 13:14:02.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:03 compute-0 ceph-mon[76335]: pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:04 compute-0 ceph-mon[76335]: pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:05 compute-0 nova_compute[244014]: 2026-02-25 13:14:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:05 compute-0 nova_compute[244014]: 2026-02-25 13:14:05.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:05 compute-0 nova_compute[244014]: 2026-02-25 13:14:05.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:06 compute-0 ceph-mon[76335]: pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:07 compute-0 nova_compute[244014]: 2026-02-25 13:14:07.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:08 compute-0 ceph-mon[76335]: pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:10 compute-0 nova_compute[244014]: 2026-02-25 13:14:10.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:11 compute-0 ceph-mon[76335]: pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:11 compute-0 podman[390767]: 2026-02-25 13:14:11.15451592 +0000 UTC m=+0.098984422 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 13:14:11 compute-0 podman[390768]: 2026-02-25 13:14:11.167553508 +0000 UTC m=+0.109065727 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
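The two health_status=healthy events come from podman's periodic healthcheck timers running each container's mounted /openstack/healthcheck test. The current status can be read back with `podman inspect`; a sketch that tolerates both spellings of the state key (recent podman uses "Health", older releases used "Healthcheck"):

    import json
    import subprocess

    def health(container: str) -> str:
        # `podman inspect` emits a JSON array, one object per container.
        state = json.loads(subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True).stdout)[0]["State"]
        probe = state.get("Health") or state.get("Healthcheck") or {}
        return probe.get("Status", "unknown")

    print(health("ovn_controller"))  # "healthy" at this point in the log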
Feb 25 13:14:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:12 compute-0 nova_compute[244014]: 2026-02-25 13:14:12.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:13 compute-0 ceph-mon[76335]: pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:15 compute-0 ceph-mon[76335]: pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:15 compute-0 nova_compute[244014]: 2026-02-25 13:14:15.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:17 compute-0 ceph-mon[76335]: pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:17 compute-0 nova_compute[244014]: 2026-02-25 13:14:17.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:18 compute-0 ceph-mon[76335]: pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:19 compute-0 nova_compute[244014]: 2026-02-25 13:14:19.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:20 compute-0 ceph-mon[76335]: pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:20 compute-0 nova_compute[244014]: 2026-02-25 13:14:20.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:22 compute-0 ceph-mon[76335]: pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:22 compute-0 nova_compute[244014]: 2026-02-25 13:14:22.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:24 compute-0 ceph-mon[76335]: pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:25 compute-0 nova_compute[244014]: 2026-02-25 13:14:25.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:26 compute-0 ceph-mon[76335]: pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:27 compute-0 nova_compute[244014]: 2026-02-25 13:14:27.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:28 compute-0 ceph-mon[76335]: pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:30 compute-0 ceph-mon[76335]: pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:30 compute-0 nova_compute[244014]: 2026-02-25 13:14:30.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:14:31
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', 'images', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:14:32 compute-0 nova_compute[244014]: 2026-02-25 13:14:32.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:32 compute-0 ceph-mon[76335]: pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:34 compute-0 ceph-mon[76335]: pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:35 compute-0 nova_compute[244014]: 2026-02-25 13:14:35.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:36 compute-0 ceph-mon[76335]: pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:37 compute-0 nova_compute[244014]: 2026-02-25 13:14:37.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:38 compute-0 ceph-mon[76335]: pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:40 compute-0 nova_compute[244014]: 2026-02-25 13:14:40.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:41 compute-0 ceph-mon[76335]: pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:41 compute-0 podman[390811]: 2026-02-25 13:14:41.751837858 +0000 UTC m=+0.089420962 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 13:14:41 compute-0 podman[390812]: 2026-02-25 13:14:41.770786953 +0000 UTC m=+0.105805655 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 25 13:14:42 compute-0 nova_compute[244014]: 2026-02-25 13:14:42.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:14:43 compute-0 ceph-mon[76335]: pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:45 compute-0 ceph-mon[76335]: pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:45 compute-0 nova_compute[244014]: 2026-02-25 13:14:45.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:47 compute-0 ceph-mon[76335]: pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:14:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:47.723 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 13:14:47 compute-0 nova_compute[244014]: 2026-02-25 13:14:47.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:47 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:47.726 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 13:14:47 compute-0 nova_compute[244014]: 2026-02-25 13:14:47.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:47 compute-0 sshd-session[390857]: Accepted publickey for zuul from 192.168.122.30 port 51352 ssh2: ECDSA SHA256:Lt+nyLvEHFdh9YsbJVkTJEEj74N9LTMmF+RpURDsZbk
Feb 25 13:14:47 compute-0 systemd-logind[811]: New session 53 of user zuul.
Feb 25 13:14:47 compute-0 systemd[1]: Started Session 53 of User zuul.
Feb 25 13:14:47 compute-0 sshd-session[390857]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 25 13:14:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:14:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:14:48 compute-0 sudo[390932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Feb 25 13:14:48 compute-0 sudo[390932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:48 compute-0 sudo[390932]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:48 compute-0 sudo[390957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Feb 25 13:14:48 compute-0 sudo[390957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:48 compute-0 sudo[390957]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:48 compute-0 sudo[390982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Feb 25 13:14:48 compute-0 sudo[390982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:48 compute-0 sudo[390982]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:48 compute-0 sshd-session[390930]: Invalid user ubuntu from 80.94.92.186 port 38512
Feb 25 13:14:48 compute-0 sudo[391007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service edpm_ovn_agent.service
Feb 25 13:14:48 compute-0 sudo[391007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:48 compute-0 sudo[391007]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.923 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:14:48 compute-0 nova_compute[244014]: 2026-02-25 13:14:48.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:14:48 compute-0 sshd-session[390930]: Connection closed by invalid user ubuntu 80.94.92.186 port 38512 [preauth]
Feb 25 13:14:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:49 compute-0 ceph-mon[76335]: pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:14:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653152889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.518 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.736 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3597MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.740 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:14:49 compute-0 nova_compute[244014]: 2026-02-25 13:14:49.849 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:14:50 compute-0 ceph-mon[76335]: pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2653152889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:14:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:14:50 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874255157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.420 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.426 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.450 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.452 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.452 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:14:50 compute-0 nova_compute[244014]: 2026-02-25 13:14:50.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1874255157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:14:52 compute-0 sudo[391076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:14:52 compute-0 sudo[391076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:52 compute-0 sudo[391076]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:52 compute-0 sudo[391101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:14:52 compute-0 sudo[391101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.493 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.493 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:52 compute-0 ceph-mon[76335]: pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:52 compute-0 sshd-session[391184]: Accepted publickey for zuul from 192.168.122.30 port 35828 ssh2: ECDSA SHA256:Lt+nyLvEHFdh9YsbJVkTJEEj74N9LTMmF+RpURDsZbk
Feb 25 13:14:52 compute-0 systemd-logind[811]: New session 54 of user zuul.
Feb 25 13:14:52 compute-0 systemd[1]: Started Session 54 of User zuul.
Feb 25 13:14:52 compute-0 sshd-session[391184]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 25 13:14:52 compute-0 nova_compute[244014]: 2026-02-25 13:14:52.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:52 compute-0 podman[391170]: 2026-02-25 13:14:52.777042757 +0000 UTC m=+0.263151112 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:14:52 compute-0 podman[391170]: 2026-02-25 13:14:52.977761047 +0000 UTC m=+0.463869342 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:14:53 compute-0 sudo[391274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Feb 25 13:14:53 compute-0 sudo[391274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391274]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Feb 25 13:14:53 compute-0 sudo[391307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 groupadd[391313]: group added to /etc/group: name=podman, GID=42479
Feb 25 13:14:53 compute-0 groupadd[391313]: group added to /etc/gshadow: name=podman
Feb 25 13:14:53 compute-0 groupadd[391313]: new group: name=podman, GID=42479
Feb 25 13:14:53 compute-0 sudo[391307]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Feb 25 13:14:53 compute-0 sudo[391342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:53 compute-0 usermod[391346]: add 'zuul' to group 'podman'
Feb 25 13:14:53 compute-0 usermod[391346]: add 'zuul' to shadow group 'podman'
Feb 25 13:14:53 compute-0 sudo[391342]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Feb 25 13:14:53 compute-0 sudo[391404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391404]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Feb 25 13:14:53 compute-0 sudo[391411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391411]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Feb 25 13:14:53 compute-0 sudo[391423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391423]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Feb 25 13:14:53 compute-0 sudo[391429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391429]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Feb 25 13:14:53 compute-0 sudo[391467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 sudo[391101]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:14:53 compute-0 sudo[391467]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:53 compute-0 sudo[391488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Feb 25 13:14:53 compute-0 sudo[391488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:53 compute-0 systemd[1]: Reloading.
Feb 25 13:14:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:14:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:54 compute-0 systemd-rc-local-generator[391512]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 13:14:54 compute-0 systemd-sysv-generator[391516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 13:14:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:54 compute-0 sudo[391488]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:54 compute-0 sudo[391509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:14:54 compute-0 sudo[391509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:54 compute-0 sudo[391509]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:54 compute-0 sudo[391557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Feb 25 13:14:54 compute-0 sudo[391557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:54 compute-0 sudo[391561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:14:54 compute-0 sudo[391561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:54 compute-0 sudo[391557]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:54 compute-0 ceph-mon[76335]: pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:54 compute-0 sudo[391586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Feb 25 13:14:54 compute-0 sudo[391586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 systemd[1]: Reloading.
Feb 25 13:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:14:55 compute-0 systemd-rc-local-generator[391624]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 13:14:55 compute-0 systemd-sysv-generator[391628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 13:14:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:55 compute-0 sudo[391561]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 systemd[1]: Starting Podman API Socket...
Feb 25 13:14:55 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 25 13:14:55 compute-0 sudo[391586]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:14:55 compute-0 sudo[391662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Feb 25 13:14:55 compute-0 sudo[391662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391662]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:14:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:14:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:14:55 compute-0 sudo[391665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Feb 25 13:14:55 compute-0 sudo[391665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391665]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Feb 25 13:14:55 compute-0 sudo[391674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391674]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:14:55 compute-0 sudo[391667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:55 compute-0 sudo[391667]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Feb 25 13:14:55 compute-0 sudo[391694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391694]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Feb 25 13:14:55 compute-0 sudo[391700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:14:55 compute-0 dbus-broker-launch[795]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Feb 25 13:14:55 compute-0 sudo[391699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:55 compute-0 sudo[391700]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Feb 25 13:14:55 compute-0 sudo[391727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Feb 25 13:14:55 compute-0 systemd[1]: Closed Podman API Socket.
Feb 25 13:14:55 compute-0 systemd[1]: Stopping Podman API Socket...
Feb 25 13:14:55 compute-0 systemd[1]: Starting Podman API Socket...
Feb 25 13:14:55 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 25 13:14:55 compute-0 sudo[391727]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 sudo[391277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Feb 25 13:14:55 compute-0 sudo[391277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:14:55 compute-0 sudo[391277]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.728 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 13:14:55 compute-0 sshd-session[391733]: Accepted publickey for zuul from 192.168.122.30 port 35840 ssh2: ECDSA SHA256:Lt+nyLvEHFdh9YsbJVkTJEEj74N9LTMmF+RpURDsZbk
Feb 25 13:14:55 compute-0 systemd-logind[811]: New session 55 of user zuul.
Feb 25 13:14:55 compute-0 systemd[1]: Started Session 55 of User zuul.
Feb 25 13:14:55 compute-0 sshd-session[391733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 25 13:14:55 compute-0 systemd[1]: Starting Podman API Service...
Feb 25 13:14:55 compute-0 systemd[1]: Started Podman API Service.
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Setting parallel job count to 25"
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Using sqlite as database backend"
Feb 25 13:14:55 compute-0 nova_compute[244014]: 2026-02-25 13:14:55.912 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:55 compute-0 podman[391749]: 2026-02-25 13:14:55.937310092 +0000 UTC m=+0.095369870 container create ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 25 13:14:55 compute-0 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 25 13:14:55 compute-0 podman[391749]: 2026-02-25 13:14:55.864754966 +0000 UTC m=+0.022814764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:14:55 compute-0 podman[391760]: @ - - [25/Feb/2026:13:14:55 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 25 13:14:55 compute-0 nova_compute[244014]: 2026-02-25 13:14:55.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:56 compute-0 systemd[1]: Started libpod-conmon-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope.
Feb 25 13:14:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:14:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:14:56 compute-0 podman[391749]: 2026-02-25 13:14:56.182327991 +0000 UTC m=+0.340387759 container init ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:14:56 compute-0 podman[391749]: 2026-02-25 13:14:56.191759437 +0000 UTC m=+0.349819225 container start ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:14:56 compute-0 laughing_jackson[391780]: 167 167
Feb 25 13:14:56 compute-0 systemd[1]: libpod-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope: Deactivated successfully.
Feb 25 13:14:56 compute-0 podman[391749]: 2026-02-25 13:14:56.230797858 +0000 UTC m=+0.388857636 container attach ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:14:56 compute-0 podman[391760]: 2026-02-25 13:14:56.231949 +0000 UTC m=+0.352874741 container died ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 13:14:56 compute-0 podman[391760]: @ - - [25/Feb/2026:13:14:55 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22903 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 25 13:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ac0a6ec45d2b7b01c59176f7d3214958938745e6534066b192d7b7f33af01a-merged.mount: Deactivated successfully.
Feb 25 13:14:56 compute-0 podman[391749]: 2026-02-25 13:14:56.595935014 +0000 UTC m=+0.753994792 container remove ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:14:56 compute-0 systemd[1]: libpod-conmon-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope: Deactivated successfully.
Feb 25 13:14:56 compute-0 podman[391803]: 2026-02-25 13:14:56.760124104 +0000 UTC m=+0.084198265 container create 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:14:56 compute-0 podman[391803]: 2026-02-25 13:14:56.709278201 +0000 UTC m=+0.033352432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:14:56 compute-0 systemd[1]: Started libpod-conmon-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope.
Feb 25 13:14:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:56 compute-0 podman[391803]: 2026-02-25 13:14:56.947503798 +0000 UTC m=+0.271577989 container init 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:14:56 compute-0 podman[391803]: 2026-02-25 13:14:56.957887361 +0000 UTC m=+0.281961552 container start 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:14:57 compute-0 podman[391803]: 2026-02-25 13:14:56.999942007 +0000 UTC m=+0.324016198 container attach 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:14:57 compute-0 ceph-mon[76335]: pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:57 compute-0 xenodochial_bose[391820]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:14:57 compute-0 xenodochial_bose[391820]: --> All data devices are unavailable
Feb 25 13:14:57 compute-0 systemd[1]: libpod-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope: Deactivated successfully.
Feb 25 13:14:57 compute-0 podman[391803]: 2026-02-25 13:14:57.468748137 +0000 UTC m=+0.792822318 container died 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa-merged.mount: Deactivated successfully.
Feb 25 13:14:57 compute-0 nova_compute[244014]: 2026-02-25 13:14:57.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:14:57 compute-0 podman[391803]: 2026-02-25 13:14:57.815057873 +0000 UTC m=+1.139132024 container remove 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:14:57 compute-0 systemd[1]: libpod-conmon-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope: Deactivated successfully.
Feb 25 13:14:57 compute-0 sudo[391699]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:57 compute-0 sudo[391854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:14:57 compute-0 sudo[391854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:57 compute-0 sudo[391854]: pam_unix(sudo:session): session closed for user root
Feb 25 13:14:58 compute-0 sudo[391879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:14:58 compute-0 sudo[391879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:14:58 compute-0 ceph-mon[76335]: pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.381177737 +0000 UTC m=+0.114007916 container create 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.305529303 +0000 UTC m=+0.038359572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:14:58 compute-0 systemd[1]: Started libpod-conmon-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope.
Feb 25 13:14:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.518802468 +0000 UTC m=+0.251632727 container init 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.528291065 +0000 UTC m=+0.261121264 container start 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:14:58 compute-0 exciting_hamilton[391933]: 167 167
Feb 25 13:14:58 compute-0 systemd[1]: libpod-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope: Deactivated successfully.
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.556900362 +0000 UTC m=+0.289730621 container attach 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.557607022 +0000 UTC m=+0.290437221 container died 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e64bc4e1c02ea56fa7a3b4adb6ef692f2c841aef9042aa362d675a2349fe7ed4-merged.mount: Deactivated successfully.
Feb 25 13:14:58 compute-0 podman[391916]: 2026-02-25 13:14:58.770628139 +0000 UTC m=+0.503458338 container remove 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:14:58 compute-0 systemd[1]: libpod-conmon-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope: Deactivated successfully.
Feb 25 13:14:58 compute-0 nova_compute[244014]: 2026-02-25 13:14:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:58 compute-0 podman[391959]: 2026-02-25 13:14:58.992779023 +0000 UTC m=+0.081672314 container create b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:58.945685195 +0000 UTC m=+0.034578556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:14:59 compute-0 systemd[1]: Started libpod-conmon-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope.
Feb 25 13:14:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:59.178029696 +0000 UTC m=+0.266922977 container init b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:59.186821144 +0000 UTC m=+0.275714445 container start b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:59.246430665 +0000 UTC m=+0.335324026 container attach b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:14:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]: {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     "0": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "devices": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "/dev/loop3"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             ],
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_name": "ceph_lv0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_size": "21470642176",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "name": "ceph_lv0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "tags": {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_name": "ceph",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.crush_device_class": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.encrypted": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.objectstore": "bluestore",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_id": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.vdo": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.with_tpm": "0"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             },
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "vg_name": "ceph_vg0"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         }
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     ],
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     "1": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "devices": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "/dev/loop4"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             ],
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_name": "ceph_lv1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_size": "21470642176",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "name": "ceph_lv1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "tags": {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_name": "ceph",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.crush_device_class": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.encrypted": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.objectstore": "bluestore",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_id": "1",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.vdo": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.with_tpm": "0"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             },
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "vg_name": "ceph_vg1"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         }
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     ],
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     "2": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "devices": [
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "/dev/loop5"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             ],
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_name": "ceph_lv2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_size": "21470642176",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "name": "ceph_lv2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "tags": {
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.cluster_name": "ceph",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.crush_device_class": "",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.encrypted": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.objectstore": "bluestore",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osd_id": "2",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.vdo": "0",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:                 "ceph.with_tpm": "0"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             },
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "type": "block",
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:             "vg_name": "ceph_vg2"
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:         }
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]:     ]
Feb 25 13:14:59 compute-0 crazy_rhodes[391975]: }
Feb 25 13:14:59 compute-0 systemd[1]: libpod-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope: Deactivated successfully.
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:59.536033181 +0000 UTC m=+0.624926512 container died b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e-merged.mount: Deactivated successfully.
Feb 25 13:14:59 compute-0 nova_compute[244014]: 2026-02-25 13:14:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:14:59 compute-0 podman[391959]: 2026-02-25 13:14:59.971353467 +0000 UTC m=+1.060246758 container remove b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:14:59 compute-0 systemd[1]: libpod-conmon-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope: Deactivated successfully.
Feb 25 13:15:00 compute-0 sudo[391879]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:00 compute-0 sudo[391996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:15:00 compute-0 sudo[391996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:15:00 compute-0 sudo[391996]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:00 compute-0 sudo[392021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:15:00 compute-0 sudo[392021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:15:00 compute-0 ceph-mon[76335]: pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.586206525 +0000 UTC m=+0.125250743 container create c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.49134958 +0000 UTC m=+0.030393828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:15:00 compute-0 systemd[1]: Started libpod-conmon-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope.
Feb 25 13:15:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.755132438 +0000 UTC m=+0.294176696 container init c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.762920388 +0000 UTC m=+0.301964636 container start c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:15:00 compute-0 vibrant_hodgkin[392075]: 167 167
Feb 25 13:15:00 compute-0 systemd[1]: libpod-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope: Deactivated successfully.
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.876059059 +0000 UTC m=+0.415103277 container attach c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:15:00 compute-0 nova_compute[244014]: 2026-02-25 13:15:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:00 compute-0 podman[392059]: 2026-02-25 13:15:00.876401358 +0000 UTC m=+0.415445576 container died c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:15:00 compute-0 nova_compute[244014]: 2026-02-25 13:15:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:15:00 compute-0 nova_compute[244014]: 2026-02-25 13:15:00.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8ad3185058c38b4b5439c03b0c21f3a49eb13c0866adfcd52928f6f14d3f7b-merged.mount: Deactivated successfully.
Feb 25 13:15:01 compute-0 podman[392059]: 2026-02-25 13:15:01.283043295 +0000 UTC m=+0.822087543 container remove c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:01 compute-0 systemd[1]: libpod-conmon-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope: Deactivated successfully.
Feb 25 13:15:01 compute-0 podman[392099]: 2026-02-25 13:15:01.439363593 +0000 UTC m=+0.026887979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:15:01 compute-0 podman[392099]: 2026-02-25 13:15:01.562817865 +0000 UTC m=+0.150342271 container create b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:01 compute-0 systemd[1]: Started libpod-conmon-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope.
Feb 25 13:15:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:15:01 compute-0 podman[392099]: 2026-02-25 13:15:01.814519773 +0000 UTC m=+0.402044189 container init b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:15:01 compute-0 podman[392099]: 2026-02-25 13:15:01.824683859 +0000 UTC m=+0.412208255 container start b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:15:01 compute-0 podman[392099]: 2026-02-25 13:15:01.856088545 +0000 UTC m=+0.443612951 container attach b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:15:02 compute-0 lvm[392194]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:15:02 compute-0 lvm[392193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:15:02 compute-0 lvm[392194]: VG ceph_vg1 finished
Feb 25 13:15:02 compute-0 lvm[392193]: VG ceph_vg0 finished
Feb 25 13:15:02 compute-0 lvm[392196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:15:02 compute-0 lvm[392196]: VG ceph_vg2 finished
Feb 25 13:15:02 compute-0 musing_williams[392115]: {}
Feb 25 13:15:02 compute-0 systemd[1]: libpod-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Deactivated successfully.
Feb 25 13:15:02 compute-0 podman[392099]: 2026-02-25 13:15:02.661726622 +0000 UTC m=+1.249250978 container died b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:15:02 compute-0 systemd[1]: libpod-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Consumed 1.176s CPU time.
Feb 25 13:15:02 compute-0 ceph-mon[76335]: pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:02 compute-0 nova_compute[244014]: 2026-02-25 13:15:02.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513-merged.mount: Deactivated successfully.
Feb 25 13:15:03 compute-0 podman[392099]: 2026-02-25 13:15:03.098839097 +0000 UTC m=+1.686363503 container remove b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:15:03 compute-0 systemd[1]: libpod-conmon-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Deactivated successfully.
Feb 25 13:15:03 compute-0 sudo[392021]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:15:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:15:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:15:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:15:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:03 compute-0 sudo[392211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:15:03 compute-0 sudo[392211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:15:03 compute-0 sudo[392211]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:15:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:15:04 compute-0 ceph-mon[76335]: pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:05 compute-0 nova_compute[244014]: 2026-02-25 13:15:05.993 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:06 compute-0 ceph-mon[76335]: pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:06 compute-0 nova_compute[244014]: 2026-02-25 13:15:06.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:06 compute-0 nova_compute[244014]: 2026-02-25 13:15:06.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:07 compute-0 nova_compute[244014]: 2026-02-25 13:15:07.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:08 compute-0 ceph-mon[76335]: pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:10 compute-0 ceph-mon[76335]: pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:10 compute-0 nova_compute[244014]: 2026-02-25 13:15:10.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:11 compute-0 podman[391760]: time="2026-02-25T13:15:11Z" level=info msg="Received shutdown.Stop(), terminating!" PID=391760
Feb 25 13:15:11 compute-0 systemd[1]: podman.service: Deactivated successfully.
Feb 25 13:15:12 compute-0 podman[392236]: 2026-02-25 13:15:12.7264241 +0000 UTC m=+0.063304006 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:15:12 compute-0 podman[392237]: 2026-02-25 13:15:12.757526577 +0000 UTC m=+0.095373180 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:15:12 compute-0 ceph-mon[76335]: pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:12 compute-0 nova_compute[244014]: 2026-02-25 13:15:12.781 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:14 compute-0 sshd-session[392283]: Received disconnect from 91.224.92.54 port 52964:11:  [preauth]
Feb 25 13:15:14 compute-0 sshd-session[392283]: Disconnected from authenticating user root 91.224.92.54 port 52964 [preauth]
Feb 25 13:15:14 compute-0 ceph-mon[76335]: pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:15 compute-0 nova_compute[244014]: 2026-02-25 13:15:15.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:17 compute-0 ceph-mon[76335]: pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:17 compute-0 nova_compute[244014]: 2026-02-25 13:15:17.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:19 compute-0 ceph-mon[76335]: pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:19 compute-0 sudo[392285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Feb 25 13:15:19 compute-0 sudo[392285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:15:19 compute-0 sudo[392285]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:19 compute-0 sudo[392310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Feb 25 13:15:19 compute-0 sudo[392310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 13:15:19 compute-0 sudo[392310]: pam_unix(sudo:session): session closed for user root
Feb 25 13:15:19 compute-0 sshd-session[390860]: Connection closed by 192.168.122.30 port 51352
Feb 25 13:15:19 compute-0 sshd-session[390857]: pam_unix(sshd:session): session closed for user zuul
Feb 25 13:15:19 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Feb 25 13:15:19 compute-0 systemd-logind[811]: Session 53 logged out. Waiting for processes to exit.
Feb 25 13:15:19 compute-0 systemd-logind[811]: Removed session 53.
Feb 25 13:15:19 compute-0 sshd-session[391187]: Connection closed by 192.168.122.30 port 35828
Feb 25 13:15:19 compute-0 sshd-session[391184]: pam_unix(sshd:session): session closed for user zuul
Feb 25 13:15:19 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Feb 25 13:15:19 compute-0 systemd[1]: session-54.scope: Consumed 1.180s CPU time.
Feb 25 13:15:19 compute-0 systemd-logind[811]: Session 54 logged out. Waiting for processes to exit.
Feb 25 13:15:19 compute-0 systemd-logind[811]: Removed session 54.
Feb 25 13:15:20 compute-0 sshd-session[391758]: Connection closed by 192.168.122.30 port 35840
Feb 25 13:15:20 compute-0 sshd-session[391733]: pam_unix(sshd:session): session closed for user zuul
Feb 25 13:15:20 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Feb 25 13:15:20 compute-0 systemd-logind[811]: Session 55 logged out. Waiting for processes to exit.
Feb 25 13:15:20 compute-0 systemd-logind[811]: Removed session 55.
Feb 25 13:15:21 compute-0 nova_compute[244014]: 2026-02-25 13:15:21.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:21 compute-0 ceph-mon[76335]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:22 compute-0 ceph-mon[76335]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:22 compute-0 nova_compute[244014]: 2026-02-25 13:15:22.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:24 compute-0 ceph-mon[76335]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:26 compute-0 nova_compute[244014]: 2026-02-25 13:15:26.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:26 compute-0 ceph-mon[76335]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:27 compute-0 nova_compute[244014]: 2026-02-25 13:15:27.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:28 compute-0 ceph-mon[76335]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:30 compute-0 ceph-mon[76335]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:31 compute-0 nova_compute[244014]: 2026-02-25 13:15:31.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:15:31
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.meta', 'vms']
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
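
The balancer pass above runs in upmap mode with a misplaced ceiling of 0.050000 and, with all 305 PGs active+clean, prepares 0 of its 10 candidate upmap changes. A minimal sketch of the ceiling arithmetic in Python, treating the PG count as a rough proxy for the misplaced-object fraction the balancer actually tracks (illustrative only; just max_misplaced and the PG count come from the log):

    # Illustrative: "max misplaced 0.050000" from the balancer line above.
    max_misplaced = 0.05
    total_pgs = 305  # from the pgmap lines in this capture
    # Roughly how much data movement may be in flight before the balancer
    # stops proposing further upmap changes (proxy for the object ratio).
    print(int(max_misplaced * total_pgs))  # -> 15
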
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:15:32 compute-0 ceph-mon[76335]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:32 compute-0 nova_compute[244014]: 2026-02-25 13:15:32.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.100105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334100231, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 256, "total_data_size": 1956944, "memory_usage": 1987904, "flush_reason": "Manual Compaction"}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334125673, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 1915780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59623, "largest_seqno": 60875, "table_properties": {"data_size": 1909795, "index_size": 3315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12351, "raw_average_key_size": 19, "raw_value_size": 1897786, "raw_average_value_size": 2983, "num_data_blocks": 149, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025210, "oldest_key_time": 1772025210, "file_creation_time": 1772025334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 25693 microseconds, and 5624 cpu microseconds.
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.125810) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 1915780 bytes OK
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.125847) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132895) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132954) EVENT_LOG_v1 {"time_micros": 1772025334132942, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 1951253, prev total WAL file size 1951253, number of live WAL files 2.
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.133873) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373637' seq:0, type:0; will stop at (end)
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(1870KB)], [140(8830KB)]
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334133922, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 10958343, "oldest_snapshot_seqno": -1}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8021 keys, 10838721 bytes, temperature: kUnknown
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334322822, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10838721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10786313, "index_size": 31239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 209336, "raw_average_key_size": 26, "raw_value_size": 10644395, "raw_average_value_size": 1327, "num_data_blocks": 1223, "num_entries": 8021, "num_filter_entries": 8021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.323165) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10838721 bytes
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.372033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.0 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.4) write-amplify(5.7) OK, records in: 8545, records dropped: 524 output_compression: NoCompression
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.372081) EVENT_LOG_v1 {"time_micros": 1772025334372062, "job": 86, "event": "compaction_finished", "compaction_time_micros": 188998, "compaction_time_cpu_micros": 19461, "output_level": 6, "num_output_files": 1, "total_output_size": 10838721, "num_input_records": 8545, "num_output_records": 8021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334372539, "job": 86, "event": "table_file_deletion", "file_number": 142}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334373769, "job": 86, "event": "table_file_deletion", "file_number": 140}
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.133805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:34 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
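
The JOB 86 summary above already contains its own amplification arithmetic. Reproducing the logged figures from the byte counts in the surrounding event lines (numbers copied from the log; the ratios are the ones RocksDB reports relative to the freshly flushed L0 input):

    # Byte counts from the JOB 85/86 event lines above.
    l0_bytes = 1_915_780        # table #142, the flushed L0 input
    input_bytes = 10_958_343    # "input_data_size" for JOB 86 (L0 + L6 inputs)
    output_bytes = 10_838_721   # table #143, the compacted L6 output
    print(round((input_bytes + output_bytes) / l0_bytes, 1))  # 11.4  read-write-amplify
    print(round(output_bytes / l0_bytes, 1))                  # 5.7   write-amplify
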
Feb 25 13:15:35 compute-0 ceph-mon[76335]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:36 compute-0 nova_compute[244014]: 2026-02-25 13:15:36.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:36 compute-0 ceph-mon[76335]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:37 compute-0 nova_compute[244014]: 2026-02-25 13:15:37.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:38 compute-0 ceph-mon[76335]: pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:40 compute-0 ceph-mon[76335]: pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:41 compute-0 nova_compute[244014]: 2026-02-25 13:15:41.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:42 compute-0 nova_compute[244014]: 2026-02-25 13:15:42.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:42 compute-0 ceph-mon[76335]: pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
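
Every _maybe_adjust line above follows the same arithmetic: the pool's share of raw space, times its bias, times a PG budget of 300; the quantization step then also weighs the pool's current and minimum pg_num, which is why tiny targets still show 32. The 300 budget is inferred from the logged values, not stated in them (e.g. 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337 for '.mgr'). A sketch under that assumption:

    # Usage fractions and biases copied from the pg_autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (1.73878357684759e-05,   1.0),
        "images":             (0.0006714637386478266,  1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    PG_BUDGET = 300  # inferred: every logged pg target equals usage * bias * 300
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)  # matches the logged "pg target"
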
Feb 25 13:15:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:43 compute-0 podman[392336]: 2026-02-25 13:15:43.760012516 +0000 UTC m=+0.093054735 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 13:15:43 compute-0 podman[392335]: 2026-02-25 13:15:43.76296967 +0000 UTC m=+0.097106700 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:15:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:44 compute-0 ceph-mon[76335]: pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:46 compute-0 nova_compute[244014]: 2026-02-25 13:15:46.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:46 compute-0 ceph-mon[76335]: pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:15:47 compute-0 nova_compute[244014]: 2026-02-25 13:15:47.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:15:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:49 compute-0 ceph-mon[76335]: pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:15:50 compute-0 nova_compute[244014]: 2026-02-25 13:15:50.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:51 compute-0 ceph-mon[76335]: pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:15:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109523005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.503 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
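
The resource audit shells out to the exact command in the processutils line above, and the mon's audit channel records the same request arriving as entity='client.openstack'. A minimal sketch of that round trip, assuming the same --id/--conf pair and the standard `ceph df --format=json` output keys:

    import json, subprocess

    # The command logged by oslo_concurrency.processutils above.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    # Should report ~59 GiB avail of 60 GiB total, matching the pgmap lines.
    print(stats["total_avail_bytes"], "/", stats["total_bytes"])
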
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.694 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3611MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:15:51 compute-0 nova_compute[244014]: 2026-02-25 13:15:51.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.034 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.035 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.057 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.084 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.105 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:15:52 compute-0 ceph-mon[76335]: pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3109523005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:15:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:15:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196835951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.651 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.658 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.679 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.683 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.683 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
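
The inventory dictionary pushed to Placement above encodes the usual capacity rule: what the scheduler may consume per resource class is (total - reserved) * allocation_ratio. Worked through with the logged values (a sketch; the formula is standard Placement accounting, the numbers come from the inventory lines above):

    inventory = {  # copied from the ProviderTree update above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
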
Feb 25 13:15:52 compute-0 nova_compute[244014]: 2026-02-25 13:15:52.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4196835951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:15:53 compute-0 nova_compute[244014]: 2026-02-25 13:15:53.685 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:53 compute-0 nova_compute[244014]: 2026-02-25 13:15:53.686 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:15:53 compute-0 nova_compute[244014]: 2026-02-25 13:15:53.686 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:15:53 compute-0 nova_compute[244014]: 2026-02-25 13:15:53.713 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:15:53 compute-0 nova_compute[244014]: 2026-02-25 13:15:53.714 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:54 compute-0 ceph-mon[76335]: pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:15:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:15:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.708788) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355708857, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 426, "num_deletes": 251, "total_data_size": 326747, "memory_usage": 334472, "flush_reason": "Manual Compaction"}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355727829, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 323674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60876, "largest_seqno": 61301, "table_properties": {"data_size": 321192, "index_size": 581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6019, "raw_average_key_size": 18, "raw_value_size": 316316, "raw_average_value_size": 982, "num_data_blocks": 26, "num_entries": 322, "num_filter_entries": 322, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025335, "oldest_key_time": 1772025335, "file_creation_time": 1772025355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 19113 microseconds, and 2591 cpu microseconds.
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.727901) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 323674 bytes OK
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.727932) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752782) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752823) EVENT_LOG_v1 {"time_micros": 1772025355752813, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752850) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 324114, prev total WAL file size 324114, number of live WAL files 2.
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.753471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(316KB)], [143(10MB)]
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355753545, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11162395, "oldest_snapshot_seqno": -1}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7834 keys, 9420206 bytes, temperature: kUnknown
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355850648, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9420206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9370366, "index_size": 29119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206136, "raw_average_key_size": 26, "raw_value_size": 9233116, "raw_average_value_size": 1178, "num_data_blocks": 1125, "num_entries": 7834, "num_filter_entries": 7834, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.850929) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9420206 bytes
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.864928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 96.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(63.6) write-amplify(29.1) OK, records in: 8343, records dropped: 509 output_compression: NoCompression
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.864977) EVENT_LOG_v1 {"time_micros": 1772025355864957, "job": 88, "event": "compaction_finished", "compaction_time_micros": 97204, "compaction_time_cpu_micros": 24339, "output_level": 6, "num_output_files": 1, "total_output_size": 9420206, "num_input_records": 8343, "num_output_records": 7834, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355865293, "job": 88, "event": "table_file_deletion", "file_number": 145}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355866987, "job": 88, "event": "table_file_deletion", "file_number": 143}
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.753396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:15:56 compute-0 nova_compute[244014]: 2026-02-25 13:15:56.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:56 compute-0 ceph-mon[76335]: pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:56 compute-0 nova_compute[244014]: 2026-02-25 13:15:56.901 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:15:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:57 compute-0 nova_compute[244014]: 2026-02-25 13:15:57.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:15:58 compute-0 ceph-mon[76335]: pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:15:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:15:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:00 compute-0 ceph-mon[76335]: pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:00 compute-0 nova_compute[244014]: 2026-02-25 13:16:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:00 compute-0 nova_compute[244014]: 2026-02-25 13:16:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:00 compute-0 nova_compute[244014]: 2026-02-25 13:16:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:16:01 compute-0 nova_compute[244014]: 2026-02-25 13:16:01.016 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:01 compute-0 nova_compute[244014]: 2026-02-25 13:16:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:02 compute-0 nova_compute[244014]: 2026-02-25 13:16:02.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:02 compute-0 ceph-mon[76335]: pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:03 compute-0 sudo[392423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:16:03 compute-0 sudo[392423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:03 compute-0 sudo[392423]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:03 compute-0 sudo[392448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:16:03 compute-0 sudo[392448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:03 compute-0 sudo[392448]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:16:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:16:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:04 compute-0 sudo[392504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:16:04 compute-0 sudo[392504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:04 compute-0 sudo[392504]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:04 compute-0 sudo[392529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:16:04 compute-0 sudo[392529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.428427817 +0000 UTC m=+0.022006111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.584936111 +0000 UTC m=+0.178514385 container create 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:16:04 compute-0 systemd[1]: Started libpod-conmon-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope.
Feb 25 13:16:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.868986261 +0000 UTC m=+0.462564555 container init 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.878663904 +0000 UTC m=+0.472242218 container start 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:16:04 compute-0 beautiful_poitras[392582]: 167 167
Feb 25 13:16:04 compute-0 systemd[1]: libpod-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope: Deactivated successfully.
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.935799775 +0000 UTC m=+0.529378069 container attach 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:16:04 compute-0 podman[392566]: 2026-02-25 13:16:04.936462573 +0000 UTC m=+0.530040847 container died 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:16:05 compute-0 ceph-mon[76335]: pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:16:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4163020eac511f760582b23ece772c307aea096c5cdbe9099c2c1d45b11700e-merged.mount: Deactivated successfully.
Feb 25 13:16:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:05 compute-0 podman[392566]: 2026-02-25 13:16:05.598179923 +0000 UTC m=+1.191758237 container remove 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:16:05 compute-0 systemd[1]: libpod-conmon-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope: Deactivated successfully.
Feb 25 13:16:05 compute-0 podman[392606]: 2026-02-25 13:16:05.823294771 +0000 UTC m=+0.125857380 container create 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:16:05 compute-0 podman[392606]: 2026-02-25 13:16:05.738050817 +0000 UTC m=+0.040613466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:05 compute-0 systemd[1]: Started libpod-conmon-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope.
Feb 25 13:16:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:06 compute-0 nova_compute[244014]: 2026-02-25 13:16:06.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:06 compute-0 podman[392606]: 2026-02-25 13:16:06.11088178 +0000 UTC m=+0.413444379 container init 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:16:06 compute-0 podman[392606]: 2026-02-25 13:16:06.118969378 +0000 UTC m=+0.421531987 container start 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:16:06 compute-0 podman[392606]: 2026-02-25 13:16:06.201070483 +0000 UTC m=+0.503633072 container attach 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:16:06 compute-0 distracted_herschel[392623]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:16:06 compute-0 distracted_herschel[392623]: --> All data devices are unavailable
Feb 25 13:16:06 compute-0 systemd[1]: libpod-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope: Deactivated successfully.
Feb 25 13:16:06 compute-0 podman[392643]: 2026-02-25 13:16:06.718248087 +0000 UTC m=+0.036175691 container died 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:16:06 compute-0 nova_compute[244014]: 2026-02-25 13:16:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a-merged.mount: Deactivated successfully.
Feb 25 13:16:07 compute-0 ceph-mon[76335]: pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:07 compute-0 podman[392643]: 2026-02-25 13:16:07.282846767 +0000 UTC m=+0.600774291 container remove 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:16:07 compute-0 systemd[1]: libpod-conmon-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope: Deactivated successfully.
Feb 25 13:16:07 compute-0 sudo[392529]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:07 compute-0 sudo[392660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:16:07 compute-0 sudo[392660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:07 compute-0 sudo[392660]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:07 compute-0 sudo[392685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:16:07 compute-0 sudo[392685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:07 compute-0 nova_compute[244014]: 2026-02-25 13:16:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:07 compute-0 podman[392721]: 2026-02-25 13:16:07.834778502 +0000 UTC m=+0.108292425 container create 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:16:07 compute-0 podman[392721]: 2026-02-25 13:16:07.746643456 +0000 UTC m=+0.020157419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:07 compute-0 systemd[1]: Started libpod-conmon-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope.
Feb 25 13:16:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:08 compute-0 podman[392721]: 2026-02-25 13:16:08.023468862 +0000 UTC m=+0.296982825 container init 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 13:16:08 compute-0 podman[392721]: 2026-02-25 13:16:08.033531766 +0000 UTC m=+0.307045689 container start 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:16:08 compute-0 pensive_kirch[392737]: 167 167
Feb 25 13:16:08 compute-0 systemd[1]: libpod-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope: Deactivated successfully.
Feb 25 13:16:08 compute-0 podman[392721]: 2026-02-25 13:16:08.153313584 +0000 UTC m=+0.426827587 container attach 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:16:08 compute-0 podman[392721]: 2026-02-25 13:16:08.153843809 +0000 UTC m=+0.427357762 container died 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:16:08 compute-0 ceph-mon[76335]: pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e6058b545dccf91b6455fbb0e0239dc850ee1b0ba00a76363dfa9f8a89ce4d8-merged.mount: Deactivated successfully.
Feb 25 13:16:08 compute-0 podman[392721]: 2026-02-25 13:16:08.827542226 +0000 UTC m=+1.101056169 container remove 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:16:08 compute-0 systemd[1]: libpod-conmon-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope: Deactivated successfully.
Feb 25 13:16:08 compute-0 nova_compute[244014]: 2026-02-25 13:16:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:09 compute-0 podman[392763]: 2026-02-25 13:16:08.99821435 +0000 UTC m=+0.031095328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:09 compute-0 podman[392763]: 2026-02-25 13:16:09.102311495 +0000 UTC m=+0.135192353 container create 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 13:16:09 compute-0 systemd[1]: Started libpod-conmon-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope.
Feb 25 13:16:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:09 compute-0 podman[392763]: 2026-02-25 13:16:09.343938709 +0000 UTC m=+0.376819587 container init 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:16:09 compute-0 podman[392763]: 2026-02-25 13:16:09.351220424 +0000 UTC m=+0.384101292 container start 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:16:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:09 compute-0 podman[392763]: 2026-02-25 13:16:09.454005752 +0000 UTC m=+0.486886680 container attach 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:16:09 compute-0 lucid_williamson[392779]: {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     "0": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "devices": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "/dev/loop3"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             ],
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_name": "ceph_lv0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_size": "21470642176",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "name": "ceph_lv0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "tags": {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_name": "ceph",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.crush_device_class": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.encrypted": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.objectstore": "bluestore",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_id": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.vdo": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.with_tpm": "0"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             },
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "vg_name": "ceph_vg0"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         }
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     ],
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     "1": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "devices": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "/dev/loop4"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             ],
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_name": "ceph_lv1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_size": "21470642176",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "name": "ceph_lv1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "tags": {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_name": "ceph",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.crush_device_class": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.encrypted": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.objectstore": "bluestore",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_id": "1",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.vdo": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.with_tpm": "0"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             },
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "vg_name": "ceph_vg1"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         }
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     ],
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     "2": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "devices": [
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "/dev/loop5"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             ],
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_name": "ceph_lv2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_size": "21470642176",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "name": "ceph_lv2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "tags": {
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.cluster_name": "ceph",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.crush_device_class": "",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.encrypted": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.objectstore": "bluestore",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osd_id": "2",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.vdo": "0",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:                 "ceph.with_tpm": "0"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             },
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "type": "block",
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:             "vg_name": "ceph_vg2"
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:         }
Feb 25 13:16:09 compute-0 lucid_williamson[392779]:     ]
Feb 25 13:16:09 compute-0 lucid_williamson[392779]: }
Feb 25 13:16:09 compute-0 systemd[1]: libpod-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope: Deactivated successfully.
Feb 25 13:16:09 compute-0 podman[392788]: 2026-02-25 13:16:09.699360901 +0000 UTC m=+0.035274096 container died 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2-merged.mount: Deactivated successfully.
Feb 25 13:16:10 compute-0 podman[392788]: 2026-02-25 13:16:10.163017865 +0000 UTC m=+0.498931020 container remove 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:16:10 compute-0 systemd[1]: libpod-conmon-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope: Deactivated successfully.
Feb 25 13:16:10 compute-0 sudo[392685]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:10 compute-0 sudo[392803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:16:10 compute-0 sudo[392803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:10 compute-0 sudo[392803]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:10 compute-0 sudo[392828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:16:10 compute-0 sudo[392828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:11 compute-0 nova_compute[244014]: 2026-02-25 13:16:11.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:11 compute-0 ceph-mon[76335]: pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.181022171 +0000 UTC m=+0.105734633 container create a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.114062603 +0000 UTC m=+0.038775115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:11 compute-0 systemd[1]: Started libpod-conmon-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope.
Feb 25 13:16:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.397112535 +0000 UTC m=+0.321825057 container init a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.403519525 +0000 UTC m=+0.328231997 container start a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:16:11 compute-0 laughing_elbakyan[392883]: 167 167
Feb 25 13:16:11 compute-0 systemd[1]: libpod-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope: Deactivated successfully.
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.468666692 +0000 UTC m=+0.393379184 container attach a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.469428524 +0000 UTC m=+0.394140986 container died a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2229fafcf850a592fedf9e0efb03182e90b0f71b74879cc08b02cebddc9ba8-merged.mount: Deactivated successfully.
Feb 25 13:16:11 compute-0 podman[392866]: 2026-02-25 13:16:11.924559018 +0000 UTC m=+0.849271490 container remove a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:16:11 compute-0 systemd[1]: libpod-conmon-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope: Deactivated successfully.
Feb 25 13:16:12 compute-0 podman[392905]: 2026-02-25 13:16:12.160928994 +0000 UTC m=+0.115278102 container create de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:16:12 compute-0 podman[392905]: 2026-02-25 13:16:12.081590176 +0000 UTC m=+0.035939284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:16:12 compute-0 systemd[1]: Started libpod-conmon-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope.
Feb 25 13:16:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:16:12 compute-0 podman[392905]: 2026-02-25 13:16:12.445644093 +0000 UTC m=+0.399993201 container init de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb 25 13:16:12 compute-0 podman[392905]: 2026-02-25 13:16:12.454333478 +0000 UTC m=+0.408682586 container start de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:16:12 compute-0 podman[392905]: 2026-02-25 13:16:12.579417265 +0000 UTC m=+0.533766373 container attach de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:16:12 compute-0 nova_compute[244014]: 2026-02-25 13:16:12.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:13 compute-0 lvm[393002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:16:13 compute-0 lvm[393002]: VG ceph_vg1 finished
Feb 25 13:16:13 compute-0 lvm[393003]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:16:13 compute-0 lvm[393003]: VG ceph_vg2 finished
Feb 25 13:16:13 compute-0 lvm[393000]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:16:13 compute-0 lvm[393000]: VG ceph_vg0 finished
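The three lvm[...] pairs above are event-driven autoactivation: pvscan marks each PV (/dev/loop3, loop4, loop5) online and declares its VG complete once every expected PV has appeared. A minimal sketch to confirm the same state from the host, assuming only the standard lvm2 tools (VG names taken from the log):

    import subprocess
    # Each of ceph_vg0/1/2 should report one PV, the loop device
    # pvscan just logged as online.
    print(subprocess.check_output(
        ["sudo", "vgs", "--noheadings", "-o", "vg_name,pv_count"], text=True))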
Feb 25 13:16:13 compute-0 strange_cray[392922]: {}
Feb 25 13:16:13 compute-0 systemd[1]: libpod-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Deactivated successfully.
Feb 25 13:16:13 compute-0 systemd[1]: libpod-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Consumed 1.094s CPU time.
Feb 25 13:16:13 compute-0 podman[392905]: 2026-02-25 13:16:13.214186495 +0000 UTC m=+1.168535563 container died de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:16:13 compute-0 ceph-mon[76335]: pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1-merged.mount: Deactivated successfully.
Feb 25 13:16:13 compute-0 podman[392905]: 2026-02-25 13:16:13.610219102 +0000 UTC m=+1.564568180 container remove de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:16:13 compute-0 systemd[1]: libpod-conmon-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Deactivated successfully.
Feb 25 13:16:13 compute-0 sudo[392828]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:16:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:16:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:16:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
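Taken together, the lines since 13:16:10 trace one cephadm gather-facts pass, apparently driven by the cephadm mgr module: ceph-admin sudo-runs the copied cephadm binary, which launches two short-lived containers (laughing_elbakyan prints the ceph uid/gid pair "167 167"; strange_cray runs ceph-volume raw list and prints "{}", i.e. no raw OSD devices found), and the result is persisted via config-key set under mgr/cephadm/host.compute-0.devices.0. A hedged replay of the same scan — fsid and cephadm path copied from the sudo line above, with the --image/--timeout plumbing left to cephadm's defaults:

    import json, subprocess
    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"  # from the log
    CEPHADM = ("/var/lib/ceph/%s/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b" % FSID)
    # Same device scan the sudo COMMAND= line shows; on this host it returns {}.
    out = subprocess.check_output(
        ["sudo", "/bin/python3", CEPHADM,
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        text=True)
    print(json.loads(out or "{}"))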
Feb 25 13:16:13 compute-0 sudo[393020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:16:13 compute-0 sudo[393020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:16:13 compute-0 sudo[393020]: pam_unix(sudo:session): session closed for user root
Feb 25 13:16:14 compute-0 podman[393044]: 2026-02-25 13:16:14.034743315 +0000 UTC m=+0.102500913 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 25 13:16:14 compute-0 podman[393045]: 2026-02-25 13:16:14.057106634 +0000 UTC m=+0.123285988 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
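The two health_status events are podman's periodic healthcheck timers executing the 'test': '/openstack/healthcheck' command declared in each container's config_data; health_failing_streak=0 means the check has not failed recently. The same check can be run on demand (container names from the log; exit status 0 means healthy):

    import subprocess
    # "podman healthcheck run" executes the container's configured test command.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.call(["sudo", "podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)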
Feb 25 13:16:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:14 compute-0 ceph-mon[76335]: pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:16:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:16:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:16 compute-0 nova_compute[244014]: 2026-02-25 13:16:16.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:17 compute-0 ceph-mon[76335]: pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:17 compute-0 nova_compute[244014]: 2026-02-25 13:16:17.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:19 compute-0 ceph-mon[76335]: pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:19 compute-0 nova_compute[244014]: 2026-02-25 13:16:19.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:20 compute-0 ceph-mon[76335]: pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:21 compute-0 nova_compute[244014]: 2026-02-25 13:16:21.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:22 compute-0 ceph-mon[76335]: pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:22 compute-0 nova_compute[244014]: 2026-02-25 13:16:22.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:24 compute-0 ceph-mon[76335]: pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:26 compute-0 nova_compute[244014]: 2026-02-25 13:16:26.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:26 compute-0 ceph-mon[76335]: pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:27 compute-0 nova_compute[244014]: 2026-02-25 13:16:27.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:29 compute-0 ceph-mon[76335]: pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:31 compute-0 nova_compute[244014]: 2026-02-25 13:16:31.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:16:31
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'backups']
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
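This balancer pass is a no-op: in upmap mode, with the max-misplaced threshold at its 0.05 default, the mgr walked all eleven pools and prepared 0 of a possible 10 upmap changes, so placement is already balanced. A quick way to confirm the same settings with the stock ceph CLI (a sketch; add --id/--conf as elsewhere in this log if not using the admin keyring):

    import json, subprocess
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["mode"])                       # expected: "upmap"
    print(subprocess.check_output(
        ["ceph", "config", "get", "mgr", "target_max_misplaced_ratio"],
        text=True).strip())                     # expected: 0.050000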
Feb 25 13:16:31 compute-0 ceph-mon[76335]: pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:16:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:16:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:16:32 compute-0 ceph-mon[76335]: pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:32 compute-0 nova_compute[244014]: 2026-02-25 13:16:32.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:34 compute-0 ceph-mon[76335]: pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:36 compute-0 nova_compute[244014]: 2026-02-25 13:16:36.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:36 compute-0 ceph-mon[76335]: pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:37 compute-0 nova_compute[244014]: 2026-02-25 13:16:37.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:38 compute-0 ceph-mon[76335]: pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:40 compute-0 ceph-mon[76335]: pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:41 compute-0 nova_compute[244014]: 2026-02-25 13:16:41.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:42 compute-0 nova_compute[244014]: 2026-02-25 13:16:42.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:42 compute-0 ceph-mon[76335]: pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
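The autoscaler figures above are self-consistent: each pool's pg target is its capacity ratio times its bias times the cluster PG budget, which here works out to 300 — assuming the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs — and the result is then quantized (rounded to a power of two, clamped to the pool's minimum). A worked check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    budget = 3 * 100   # assumption: 3 OSDs x default mon_target_pg_per_osd=100
    for pool, ratio, bias in ((".mgr", 7.185749983720779e-06, 1.0),
                              ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0)):
        # ~0.0021557 and ~0.0016700, matching the logged pg targets
        print(pool, ratio * bias * budget)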
Feb 25 13:16:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:44 compute-0 podman[393088]: 2026-02-25 13:16:44.775052309 +0000 UTC m=+0.103142720 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:16:44 compute-0 podman[393089]: 2026-02-25 13:16:44.813798891 +0000 UTC m=+0.140517483 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:16:45 compute-0 ceph-mon[76335]: pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:46 compute-0 nova_compute[244014]: 2026-02-25 13:16:46.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:47 compute-0 ceph-mon[76335]: pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:16:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:16:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:16:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:16:47 compute-0 nova_compute[244014]: 2026-02-25 13:16:47.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:16:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
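The paired df / osd pool get-quota dispatches from client.openstack at 192.168.122.10 look like a storage consumer's periodic capacity poll against the volumes pool (hedged: the caller is not named in the log; the cinder RBD driver issues exactly this pair when refreshing pool stats). The equivalent direct query:

    import json, subprocess
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    # 0 means no quota is set on the pool
    print(quota["quota_max_bytes"], quota["quota_max_objects"])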
Feb 25 13:16:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:49 compute-0 ceph-mon[76335]: pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:50 compute-0 ceph-mon[76335]: pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:16:50 compute-0 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:16:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360309141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.439 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:16:51 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/360309141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.635 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.636 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.636 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:16:51 compute-0 nova_compute[244014]: 2026-02-25 13:16:51.708 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:16:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:16:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866881172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.241 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.249 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.276 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.279 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.280 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
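That was one complete update_available_resource cycle: the resource tracker takes the compute_resources lock, shells out twice to ceph df to size the RBD-backed disk (free_disk ~59.99 GB), reports 8 vCPUs and 7679 MB RAM, and finds the placement inventory unchanged for provider cb4dae98-2ac3-4218-9445-2320139e12ad, releasing the lock after 0.643s. The ceph df probe itself can be replayed verbatim from the logged command line:

    import json, subprocess
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    # ~59.99, the free_disk figure in the hypervisor resource view above
    print(stats["total_avail_bytes"] / 1024**3)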
Feb 25 13:16:52 compute-0 ceph-mon[76335]: pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:52 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1866881172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:16:52 compute-0 nova_compute[244014]: 2026-02-25 13:16:52.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:53 compute-0 nova_compute[244014]: 2026-02-25 13:16:53.281 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:53 compute-0 nova_compute[244014]: 2026-02-25 13:16:53.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:53 compute-0 nova_compute[244014]: 2026-02-25 13:16:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:16:53 compute-0 nova_compute[244014]: 2026-02-25 13:16:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:16:53 compute-0 nova_compute[244014]: 2026-02-25 13:16:53.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:16:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:54 compute-0 ceph-mon[76335]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:16:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.057 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
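These Acquiring/acquired/released triplets (like the nova compute_resources ones above) are oslo.concurrency's standard lock bookkeeping: a named semaphore wraps the critical section, and the waited/held durations are logged on entry and exit. A minimal sketch of the pattern — the function body here is a placeholder; the real callers are the neutron/nova methods named in the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # critical section; oslo logs waited/held times around it

    check_child_processes()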
Feb 25 13:16:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:56 compute-0 nova_compute[244014]: 2026-02-25 13:16:56.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:56 compute-0 nova_compute[244014]: 2026-02-25 13:16:56.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:16:56 compute-0 ceph-mon[76335]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:57 compute-0 nova_compute[244014]: 2026-02-25 13:16:57.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:16:58 compute-0 ceph-mon[76335]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:16:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:16:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:01 compute-0 ceph-mon[76335]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:01 compute-0 nova_compute[244014]: 2026-02-25 13:17:01.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:01 compute-0 nova_compute[244014]: 2026-02-25 13:17:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:02 compute-0 nova_compute[244014]: 2026-02-25 13:17:02.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:02 compute-0 nova_compute[244014]: 2026-02-25 13:17:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:02 compute-0 nova_compute[244014]: 2026-02-25 13:17:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:02 compute-0 nova_compute[244014]: 2026-02-25 13:17:02.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:17:03 compute-0 ceph-mon[76335]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:05 compute-0 ceph-mon[76335]: pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:06 compute-0 nova_compute[244014]: 2026-02-25 13:17:06.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:06 compute-0 ceph-mon[76335]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:07 compute-0 nova_compute[244014]: 2026-02-25 13:17:07.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:07 compute-0 nova_compute[244014]: 2026-02-25 13:17:07.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:08 compute-0 ceph-mon[76335]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:10 compute-0 ceph-mon[76335]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:10 compute-0 nova_compute[244014]: 2026-02-25 13:17:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:11 compute-0 nova_compute[244014]: 2026-02-25 13:17:11.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:12 compute-0 ceph-mon[76335]: pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:12 compute-0 nova_compute[244014]: 2026-02-25 13:17:12.867 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:14 compute-0 sudo[393176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:17:14 compute-0 sudo[393176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:14 compute-0 sudo[393176]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:14 compute-0 sudo[393201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:17:14 compute-0 sudo[393201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:14 compute-0 sudo[393201]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:17:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:17:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:17:14 compute-0 sudo[393257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:17:14 compute-0 sudo[393257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:14 compute-0 sudo[393257]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:14 compute-0 sudo[393282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:17:14 compute-0 sudo[393282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:14 compute-0 podman[393306]: 2026-02-25 13:17:14.895398503 +0000 UTC m=+0.063217974 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 13:17:14 compute-0 podman[393307]: 2026-02-25 13:17:14.92439179 +0000 UTC m=+0.089108134 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.165504539 +0000 UTC m=+0.107690958 container create 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.083025893 +0000 UTC m=+0.025212312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:15 compute-0 systemd[1]: Started libpod-conmon-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope.
Feb 25 13:17:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.286973174 +0000 UTC m=+0.229159663 container init 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.295777822 +0000 UTC m=+0.237964261 container start 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:17:15 compute-0 interesting_heyrovsky[393380]: 167 167
Feb 25 13:17:15 compute-0 systemd[1]: libpod-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope: Deactivated successfully.
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.318119002 +0000 UTC m=+0.260305481 container attach 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.318608126 +0000 UTC m=+0.260794555 container died 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:17:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90193464154d31b60bebc690dc085c3959eeb99625ace28b6f253a1eceb67d9-merged.mount: Deactivated successfully.
Feb 25 13:17:15 compute-0 podman[393363]: 2026-02-25 13:17:15.670680624 +0000 UTC m=+0.612867053 container remove 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:17:15 compute-0 systemd[1]: libpod-conmon-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope: Deactivated successfully.
Feb 25 13:17:15 compute-0 podman[393407]: 2026-02-25 13:17:15.905372883 +0000 UTC m=+0.089932068 container create 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:17:15 compute-0 podman[393407]: 2026-02-25 13:17:15.850021502 +0000 UTC m=+0.034580657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:15 compute-0 systemd[1]: Started libpod-conmon-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope.
Feb 25 13:17:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:16 compute-0 nova_compute[244014]: 2026-02-25 13:17:16.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:16 compute-0 podman[393407]: 2026-02-25 13:17:16.068992596 +0000 UTC m=+0.253551791 container init 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:17:16 compute-0 podman[393407]: 2026-02-25 13:17:16.079204804 +0000 UTC m=+0.263763979 container start 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:17:16 compute-0 podman[393407]: 2026-02-25 13:17:16.118834972 +0000 UTC m=+0.303394147 container attach 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:17:16 compute-0 heuristic_khayyam[393424]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:17:16 compute-0 heuristic_khayyam[393424]: --> All data devices are unavailable
Feb 25 13:17:16 compute-0 systemd[1]: libpod-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope: Deactivated successfully.
Feb 25 13:17:16 compute-0 podman[393407]: 2026-02-25 13:17:16.561619628 +0000 UTC m=+0.746178763 container died 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86-merged.mount: Deactivated successfully.
Feb 25 13:17:16 compute-0 ceph-mon[76335]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:16 compute-0 podman[393407]: 2026-02-25 13:17:16.851975716 +0000 UTC m=+1.036534881 container remove 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:17:16 compute-0 systemd[1]: libpod-conmon-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope: Deactivated successfully.
Feb 25 13:17:16 compute-0 sudo[393282]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:16 compute-0 sudo[393458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:17:16 compute-0 sudo[393458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:16 compute-0 sudo[393458]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:17 compute-0 sudo[393483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:17:17 compute-0 sudo[393483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.417000269 +0000 UTC m=+0.107345928 container create e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.348765205 +0000 UTC m=+0.039110914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:17 compute-0 systemd[1]: Started libpod-conmon-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope.
Feb 25 13:17:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.58053374 +0000 UTC m=+0.270879449 container init e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.588968348 +0000 UTC m=+0.279314007 container start e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:17:17 compute-0 practical_lichterman[393538]: 167 167
Feb 25 13:17:17 compute-0 systemd[1]: libpod-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope: Deactivated successfully.
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.65746547 +0000 UTC m=+0.347811119 container attach e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:17:17 compute-0 podman[393522]: 2026-02-25 13:17:17.658417587 +0000 UTC m=+0.348763246 container died e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8286e0765f25e1cf641c8c32c64db067663c250d50465313862393de2f9c311a-merged.mount: Deactivated successfully.
Feb 25 13:17:17 compute-0 nova_compute[244014]: 2026-02-25 13:17:17.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:18 compute-0 podman[393522]: 2026-02-25 13:17:18.017157522 +0000 UTC m=+0.707503181 container remove e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:17:18 compute-0 systemd[1]: libpod-conmon-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope: Deactivated successfully.
Feb 25 13:17:18 compute-0 podman[393564]: 2026-02-25 13:17:18.192854447 +0000 UTC m=+0.062574326 container create 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:17:18 compute-0 podman[393564]: 2026-02-25 13:17:18.152971952 +0000 UTC m=+0.022691841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:18 compute-0 systemd[1]: Started libpod-conmon-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope.
Feb 25 13:17:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:18 compute-0 podman[393564]: 2026-02-25 13:17:18.34758341 +0000 UTC m=+0.217303349 container init 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:17:18 compute-0 podman[393564]: 2026-02-25 13:17:18.357250913 +0000 UTC m=+0.226970792 container start 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:17:18 compute-0 podman[393564]: 2026-02-25 13:17:18.404618868 +0000 UTC m=+0.274338807 container attach 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]: {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     "0": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "devices": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "/dev/loop3"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             ],
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_name": "ceph_lv0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_size": "21470642176",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "name": "ceph_lv0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "tags": {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_name": "ceph",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.crush_device_class": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.encrypted": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.objectstore": "bluestore",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_id": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.vdo": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.with_tpm": "0"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             },
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "vg_name": "ceph_vg0"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         }
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     ],
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     "1": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "devices": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "/dev/loop4"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             ],
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_name": "ceph_lv1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_size": "21470642176",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "name": "ceph_lv1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "tags": {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_name": "ceph",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.crush_device_class": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.encrypted": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.objectstore": "bluestore",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_id": "1",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.vdo": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.with_tpm": "0"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             },
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "vg_name": "ceph_vg1"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         }
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     ],
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     "2": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "devices": [
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "/dev/loop5"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             ],
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_name": "ceph_lv2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_size": "21470642176",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "name": "ceph_lv2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "tags": {
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.cluster_name": "ceph",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.crush_device_class": "",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.encrypted": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.objectstore": "bluestore",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osd_id": "2",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.vdo": "0",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:                 "ceph.with_tpm": "0"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             },
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "type": "block",
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:             "vg_name": "ceph_vg2"
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:         }
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]:     ]
Feb 25 13:17:18 compute-0 reverent_ganguly[393580]: }
Feb 25 13:17:18 compute-0 systemd[1]: libpod-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope: Deactivated successfully.
Feb 25 13:17:18 compute-0 podman[393589]: 2026-02-25 13:17:18.70648009 +0000 UTC m=+0.032501698 container died 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1-merged.mount: Deactivated successfully.
Feb 25 13:17:18 compute-0 podman[393589]: 2026-02-25 13:17:18.913372374 +0000 UTC m=+0.239393962 container remove 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:17:18 compute-0 ceph-mon[76335]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:18 compute-0 systemd[1]: libpod-conmon-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope: Deactivated successfully.
Feb 25 13:17:18 compute-0 sudo[393483]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:19 compute-0 sudo[393604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:17:19 compute-0 sudo[393604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:19 compute-0 sudo[393604]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:19 compute-0 sudo[393629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:17:19 compute-0 sudo[393629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.507127117 +0000 UTC m=+0.114881931 container create 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.416737918 +0000 UTC m=+0.024492782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:19 compute-0 systemd[1]: Started libpod-conmon-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope.
Feb 25 13:17:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.694865151 +0000 UTC m=+0.302619985 container init 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.702836936 +0000 UTC m=+0.310591730 container start 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:17:19 compute-0 xenodochial_chaum[393683]: 167 167
Feb 25 13:17:19 compute-0 systemd[1]: libpod-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope: Deactivated successfully.
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.731728801 +0000 UTC m=+0.339483675 container attach 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:17:19 compute-0 podman[393667]: 2026-02-25 13:17:19.732434971 +0000 UTC m=+0.340189795 container died 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-af27a813d730bc32e8bf095735c6e517ec0e2878f5f4891ebf3c6997838f7a11-merged.mount: Deactivated successfully.
Feb 25 13:17:20 compute-0 podman[393667]: 2026-02-25 13:17:20.02037245 +0000 UTC m=+0.628127274 container remove 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:17:20 compute-0 systemd[1]: libpod-conmon-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope: Deactivated successfully.
Feb 25 13:17:20 compute-0 podman[393710]: 2026-02-25 13:17:20.268373943 +0000 UTC m=+0.108836250 container create 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:17:20 compute-0 podman[393710]: 2026-02-25 13:17:20.182598194 +0000 UTC m=+0.023060571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:17:20 compute-0 systemd[1]: Started libpod-conmon-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope.
Feb 25 13:17:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:17:20 compute-0 podman[393710]: 2026-02-25 13:17:20.441134305 +0000 UTC m=+0.281596692 container init 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:17:20 compute-0 podman[393710]: 2026-02-25 13:17:20.449804569 +0000 UTC m=+0.290266896 container start 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:17:20 compute-0 podman[393710]: 2026-02-25 13:17:20.465611015 +0000 UTC m=+0.306073412 container attach 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:17:21 compute-0 ceph-mon[76335]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:21 compute-0 nova_compute[244014]: 2026-02-25 13:17:21.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:21 compute-0 lvm[393806]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:17:21 compute-0 lvm[393805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:17:21 compute-0 lvm[393805]: VG ceph_vg0 finished
Feb 25 13:17:21 compute-0 lvm[393806]: VG ceph_vg1 finished
Feb 25 13:17:21 compute-0 lvm[393808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:17:21 compute-0 lvm[393808]: VG ceph_vg2 finished
Feb 25 13:17:21 compute-0 optimistic_fermi[393727]: {}
Feb 25 13:17:21 compute-0 systemd[1]: libpod-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Deactivated successfully.
Feb 25 13:17:21 compute-0 podman[393710]: 2026-02-25 13:17:21.253601456 +0000 UTC m=+1.094063823 container died 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:17:21 compute-0 systemd[1]: libpod-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Consumed 1.107s CPU time.
Feb 25 13:17:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a-merged.mount: Deactivated successfully.
Feb 25 13:17:21 compute-0 podman[393710]: 2026-02-25 13:17:21.619530634 +0000 UTC m=+1.459992971 container remove 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:17:21 compute-0 systemd[1]: libpod-conmon-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Deactivated successfully.
Feb 25 13:17:21 compute-0 sudo[393629]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:17:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:17:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:21 compute-0 sudo[393825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:17:21 compute-0 sudo[393825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:17:21 compute-0 sudo[393825]: pam_unix(sudo:session): session closed for user root
Feb 25 13:17:22 compute-0 ceph-mon[76335]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:17:22 compute-0 nova_compute[244014]: 2026-02-25 13:17:22.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:24 compute-0 ceph-mon[76335]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:26 compute-0 nova_compute[244014]: 2026-02-25 13:17:26.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:26 compute-0 ceph-mon[76335]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:27 compute-0 nova_compute[244014]: 2026-02-25 13:17:27.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:29 compute-0 ceph-mon[76335]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:31 compute-0 nova_compute[244014]: 2026-02-25 13:17:31.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:17:31
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:17:31 compute-0 ceph-mon[76335]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:17:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:17:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:17:32 compute-0 ceph-mon[76335]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:32 compute-0 nova_compute[244014]: 2026-02-25 13:17:32.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:34 compute-0 ceph-mon[76335]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:36 compute-0 nova_compute[244014]: 2026-02-25 13:17:36.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:36 compute-0 ceph-mon[76335]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:37 compute-0 nova_compute[244014]: 2026-02-25 13:17:37.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:38 compute-0 ceph-mon[76335]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:41 compute-0 ceph-mon[76335]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:41 compute-0 nova_compute[244014]: 2026-02-25 13:17:41.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:42 compute-0 nova_compute[244014]: 2026-02-25 13:17:42.891 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:17:43 compute-0 ceph-mon[76335]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:44 compute-0 ceph-mon[76335]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:45 compute-0 podman[393850]: 2026-02-25 13:17:45.740867092 +0000 UTC m=+0.079370989 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 13:17:45 compute-0 podman[393851]: 2026-02-25 13:17:45.77979748 +0000 UTC m=+0.116136136 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:17:46 compute-0 nova_compute[244014]: 2026-02-25 13:17:46.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:46 compute-0 ceph-mon[76335]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:17:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:17:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:17:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:17:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:17:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:17:47 compute-0 nova_compute[244014]: 2026-02-25 13:17:47.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:48 compute-0 ceph-mon[76335]: pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:50 compute-0 ceph-mon[76335]: pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:17:51 compute-0 nova_compute[244014]: 2026-02-25 13:17:51.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:17:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:17:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266829098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.445 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3580MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.731 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:17:52 compute-0 nova_compute[244014]: 2026-02-25 13:17:52.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:53 compute-0 ceph-mon[76335]: pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:53 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4266829098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:17:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:17:53 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281192284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:17:53 compute-0 nova_compute[244014]: 2026-02-25 13:17:53.292 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:17:53 compute-0 nova_compute[244014]: 2026-02-25 13:17:53.298 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:17:53 compute-0 nova_compute[244014]: 2026-02-25 13:17:53.315 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:17:53 compute-0 nova_compute[244014]: 2026-02-25 13:17:53.318 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:17:53 compute-0 nova_compute[244014]: 2026-02-25 13:17:53.318 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:17:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4281192284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:17:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.057 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:17:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:17:55 compute-0 ceph-mon[76335]: pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:55 compute-0 nova_compute[244014]: 2026-02-25 13:17:55.319 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:55 compute-0 nova_compute[244014]: 2026-02-25 13:17:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:55 compute-0 nova_compute[244014]: 2026-02-25 13:17:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:17:55 compute-0 nova_compute[244014]: 2026-02-25 13:17:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:17:55 compute-0 nova_compute[244014]: 2026-02-25 13:17:55.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:17:56 compute-0 nova_compute[244014]: 2026-02-25 13:17:56.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:57 compute-0 ceph-mon[76335]: pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:57 compute-0 nova_compute[244014]: 2026-02-25 13:17:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:17:58 compute-0 ceph-mon[76335]: pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:17:58 compute-0 nova_compute[244014]: 2026-02-25 13:17:58.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:17:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:17:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:00 compute-0 ceph-mon[76335]: pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:01 compute-0 nova_compute[244014]: 2026-02-25 13:18:01.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:02 compute-0 ceph-mon[76335]: pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:02 compute-0 nova_compute[244014]: 2026-02-25 13:18:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:02 compute-0 nova_compute[244014]: 2026-02-25 13:18:02.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:03 compute-0 sshd-session[393937]: banner exchange: Connection from 159.65.38.146 port 54720: invalid format
Feb 25 13:18:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:03 compute-0 nova_compute[244014]: 2026-02-25 13:18:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:03 compute-0 nova_compute[244014]: 2026-02-25 13:18:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:03 compute-0 nova_compute[244014]: 2026-02-25 13:18:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:18:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:04 compute-0 ceph-mon[76335]: pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:06 compute-0 nova_compute[244014]: 2026-02-25 13:18:06.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:06 compute-0 ceph-mon[76335]: pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:07 compute-0 nova_compute[244014]: 2026-02-25 13:18:07.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:09 compute-0 ceph-mon[76335]: pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:09 compute-0 nova_compute[244014]: 2026-02-25 13:18:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:11 compute-0 nova_compute[244014]: 2026-02-25 13:18:11.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:11 compute-0 ceph-mon[76335]: pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:12 compute-0 ceph-mon[76335]: pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:12 compute-0 nova_compute[244014]: 2026-02-25 13:18:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:12 compute-0 nova_compute[244014]: 2026-02-25 13:18:12.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:14 compute-0 ceph-mon[76335]: pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:16 compute-0 nova_compute[244014]: 2026-02-25 13:18:16.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:16 compute-0 ceph-mon[76335]: pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:16 compute-0 podman[393938]: 2026-02-25 13:18:16.768826808 +0000 UTC m=+0.105046324 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:18:16 compute-0 podman[393939]: 2026-02-25 13:18:16.778713226 +0000 UTC m=+0.113241654 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible)
Feb 25 13:18:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:17 compute-0 nova_compute[244014]: 2026-02-25 13:18:17.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:18 compute-0 ceph-mon[76335]: pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:20 compute-0 ceph-mon[76335]: pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:20 compute-0 nova_compute[244014]: 2026-02-25 13:18:20.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:21 compute-0 nova_compute[244014]: 2026-02-25 13:18:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:21 compute-0 sudo[393983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:18:22 compute-0 sudo[393983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:22 compute-0 sudo[393983]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:22 compute-0 sudo[394008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:18:22 compute-0 sudo[394008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:22 compute-0 sudo[394008]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:18:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:18:22 compute-0 sudo[394064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:18:22 compute-0 sudo[394064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:22 compute-0 sudo[394064]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:22 compute-0 sudo[394089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:18:22 compute-0 sudo[394089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:22 compute-0 nova_compute[244014]: 2026-02-25 13:18:22.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:22 compute-0 ceph-mon[76335]: pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:18:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.114360972 +0000 UTC m=+0.026564960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.226007091 +0000 UTC m=+0.138211099 container create dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:18:23 compute-0 systemd[1]: Started libpod-conmon-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope.
Feb 25 13:18:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.476611258 +0000 UTC m=+0.388815266 container init dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.485953821 +0000 UTC m=+0.398157819 container start dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:18:23 compute-0 affectionate_leakey[394140]: 167 167
Feb 25 13:18:23 compute-0 systemd[1]: libpod-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope: Deactivated successfully.
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.528984845 +0000 UTC m=+0.441188853 container attach dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:18:23 compute-0 podman[394124]: 2026-02-25 13:18:23.529917041 +0000 UTC m=+0.442121029 container died dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8549f61d61791ff68f31dc6786cb1d5fca4735267c8e4f67265051c9f21e4c1-merged.mount: Deactivated successfully.
Feb 25 13:18:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:24 compute-0 podman[394124]: 2026-02-25 13:18:24.267653595 +0000 UTC m=+1.179857593 container remove dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:18:24 compute-0 systemd[1]: libpod-conmon-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope: Deactivated successfully.
Feb 25 13:18:24 compute-0 podman[394166]: 2026-02-25 13:18:24.427046509 +0000 UTC m=+0.038238379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:24 compute-0 podman[394166]: 2026-02-25 13:18:24.529464648 +0000 UTC m=+0.140656538 container create c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:18:24 compute-0 systemd[1]: Started libpod-conmon-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope.
Feb 25 13:18:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:24 compute-0 podman[394166]: 2026-02-25 13:18:24.755546112 +0000 UTC m=+0.366738052 container init c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:18:24 compute-0 podman[394166]: 2026-02-25 13:18:24.765044021 +0000 UTC m=+0.376235901 container start c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:18:24 compute-0 podman[394166]: 2026-02-25 13:18:24.832491223 +0000 UTC m=+0.443683173 container attach c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:18:25 compute-0 ceph-mon[76335]: pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:25 compute-0 upbeat_haslett[394183]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:18:25 compute-0 upbeat_haslett[394183]: --> All data devices are unavailable
Feb 25 13:18:25 compute-0 systemd[1]: libpod-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope: Deactivated successfully.
Feb 25 13:18:25 compute-0 conmon[394183]: conmon c679dd12f1f217e07c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope/container/memory.events
Feb 25 13:18:25 compute-0 podman[394166]: 2026-02-25 13:18:25.312485638 +0000 UTC m=+0.923677528 container died c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:18:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8-merged.mount: Deactivated successfully.
Feb 25 13:18:25 compute-0 podman[394166]: 2026-02-25 13:18:25.851085446 +0000 UTC m=+1.462277336 container remove c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:18:25 compute-0 systemd[1]: libpod-conmon-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope: Deactivated successfully.
Feb 25 13:18:25 compute-0 sudo[394089]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:26 compute-0 sudo[394217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:18:26 compute-0 sudo[394217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:26 compute-0 sudo[394217]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:26 compute-0 sudo[394242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:18:26 compute-0 sudo[394242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:26 compute-0 nova_compute[244014]: 2026-02-25 13:18:26.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.365178353 +0000 UTC m=+0.038671771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.530484755 +0000 UTC m=+0.203978093 container create 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:18:26 compute-0 systemd[1]: Started libpod-conmon-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope.
Feb 25 13:18:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.847957516 +0000 UTC m=+0.521450924 container init 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.856719243 +0000 UTC m=+0.530212601 container start 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:18:26 compute-0 adoring_fermat[394295]: 167 167
Feb 25 13:18:26 compute-0 systemd[1]: libpod-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope: Deactivated successfully.
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.903983246 +0000 UTC m=+0.577476604 container attach 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:18:26 compute-0 podman[394279]: 2026-02-25 13:18:26.905839068 +0000 UTC m=+0.579332426 container died 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-902b78625c8eb3d4ecdefab77387772df62f7020af4e90875168ea5ad74da193-merged.mount: Deactivated successfully.
Feb 25 13:18:27 compute-0 ceph-mon[76335]: pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:27 compute-0 podman[394279]: 2026-02-25 13:18:27.513185684 +0000 UTC m=+1.186679042 container remove 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:18:27 compute-0 systemd[1]: libpod-conmon-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope: Deactivated successfully.
Feb 25 13:18:27 compute-0 podman[394321]: 2026-02-25 13:18:27.684295599 +0000 UTC m=+0.031698814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:27 compute-0 podman[394321]: 2026-02-25 13:18:27.779956417 +0000 UTC m=+0.127359542 container create e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:18:27 compute-0 sshd-session[394311]: Invalid user minima from 80.94.92.186 port 41708
Feb 25 13:18:27 compute-0 systemd[1]: Started libpod-conmon-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope.
Feb 25 13:18:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:27 compute-0 nova_compute[244014]: 2026-02-25 13:18:27.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:27 compute-0 sshd-session[394311]: Connection closed by invalid user minima 80.94.92.186 port 41708 [preauth]
Feb 25 13:18:28 compute-0 podman[394321]: 2026-02-25 13:18:28.047511291 +0000 UTC m=+0.394914496 container init e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:18:28 compute-0 podman[394321]: 2026-02-25 13:18:28.056457084 +0000 UTC m=+0.403860229 container start e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:18:28 compute-0 podman[394321]: 2026-02-25 13:18:28.123017881 +0000 UTC m=+0.470421026 container attach e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:18:28 compute-0 zen_nightingale[394338]: {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     "0": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "devices": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "/dev/loop3"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             ],
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_name": "ceph_lv0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_size": "21470642176",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "name": "ceph_lv0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "tags": {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_name": "ceph",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.crush_device_class": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.encrypted": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.objectstore": "bluestore",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_id": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.vdo": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.with_tpm": "0"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             },
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "vg_name": "ceph_vg0"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         }
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     ],
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     "1": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "devices": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "/dev/loop4"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             ],
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_name": "ceph_lv1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_size": "21470642176",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "name": "ceph_lv1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "tags": {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_name": "ceph",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.crush_device_class": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.encrypted": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.objectstore": "bluestore",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_id": "1",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.vdo": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.with_tpm": "0"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             },
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "vg_name": "ceph_vg1"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         }
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     ],
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     "2": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "devices": [
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "/dev/loop5"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             ],
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_name": "ceph_lv2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_size": "21470642176",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "name": "ceph_lv2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "tags": {
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.cluster_name": "ceph",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.crush_device_class": "",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.encrypted": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.objectstore": "bluestore",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osd_id": "2",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.vdo": "0",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:                 "ceph.with_tpm": "0"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             },
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "type": "block",
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:             "vg_name": "ceph_vg2"
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:         }
Feb 25 13:18:28 compute-0 zen_nightingale[394338]:     ]
Feb 25 13:18:28 compute-0 zen_nightingale[394338]: }
Feb 25 13:18:28 compute-0 systemd[1]: libpod-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope: Deactivated successfully.
Feb 25 13:18:28 compute-0 podman[394321]: 2026-02-25 13:18:28.38398341 +0000 UTC m=+0.731386545 container died e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:18:28 compute-0 ceph-mon[76335]: pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c-merged.mount: Deactivated successfully.
Feb 25 13:18:28 compute-0 podman[394321]: 2026-02-25 13:18:28.802036878 +0000 UTC m=+1.149440033 container remove e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:18:28 compute-0 systemd[1]: libpod-conmon-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope: Deactivated successfully.
Feb 25 13:18:28 compute-0 sudo[394242]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:28 compute-0 sudo[394361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:18:28 compute-0 sudo[394361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:28 compute-0 sudo[394361]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:29 compute-0 sudo[394386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:18:29 compute-0 sudo[394386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.383478244 +0000 UTC m=+0.117536745 container create 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.301318077 +0000 UTC m=+0.035376568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:29 compute-0 systemd[1]: Started libpod-conmon-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope.
Feb 25 13:18:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.59076592 +0000 UTC m=+0.324824471 container init 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.599113805 +0000 UTC m=+0.333172306 container start 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:18:29 compute-0 sweet_panini[394439]: 167 167
Feb 25 13:18:29 compute-0 systemd[1]: libpod-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope: Deactivated successfully.
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.644091953 +0000 UTC m=+0.378150514 container attach 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.644629918 +0000 UTC m=+0.378688419 container died 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-51b98b82405b81e4c2ece1f53af7dcb398b449d62dd9aed423dc9fa64045c616-merged.mount: Deactivated successfully.
Feb 25 13:18:29 compute-0 podman[394423]: 2026-02-25 13:18:29.987608 +0000 UTC m=+0.721666491 container remove 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:18:29 compute-0 systemd[1]: libpod-conmon-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope: Deactivated successfully.
Feb 25 13:18:30 compute-0 podman[394464]: 2026-02-25 13:18:30.156741339 +0000 UTC m=+0.035334427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:18:30 compute-0 podman[394464]: 2026-02-25 13:18:30.254753222 +0000 UTC m=+0.133346240 container create fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:18:30 compute-0 systemd[1]: Started libpod-conmon-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope.
Feb 25 13:18:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:18:30 compute-0 podman[394464]: 2026-02-25 13:18:30.535087167 +0000 UTC m=+0.413680185 container init fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:18:30 compute-0 podman[394464]: 2026-02-25 13:18:30.54297956 +0000 UTC m=+0.421572568 container start fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:18:30 compute-0 podman[394464]: 2026-02-25 13:18:30.60045787 +0000 UTC m=+0.479050898 container attach fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:18:30 compute-0 ceph-mon[76335]: pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:30 compute-0 nova_compute[244014]: 2026-02-25 13:18:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:31 compute-0 nova_compute[244014]: 2026-02-25 13:18:31.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:18:31
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups']
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:18:31 compute-0 lvm[394559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:18:31 compute-0 lvm[394560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:18:31 compute-0 lvm[394559]: VG ceph_vg0 finished
Feb 25 13:18:31 compute-0 lvm[394562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:18:31 compute-0 lvm[394562]: VG ceph_vg2 finished
Feb 25 13:18:31 compute-0 lvm[394560]: VG ceph_vg1 finished
Feb 25 13:18:31 compute-0 wonderful_lederberg[394481]: {}
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:31 compute-0 systemd[1]: libpod-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Deactivated successfully.
Feb 25 13:18:31 compute-0 systemd[1]: libpod-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Consumed 1.254s CPU time.
Feb 25 13:18:31 compute-0 podman[394464]: 2026-02-25 13:18:31.452358063 +0000 UTC m=+1.330951051 container died fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615-merged.mount: Deactivated successfully.
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:18:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:18:31 compute-0 podman[394464]: 2026-02-25 13:18:31.692956227 +0000 UTC m=+1.571549245 container remove fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:18:31 compute-0 systemd[1]: libpod-conmon-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Deactivated successfully.
Feb 25 13:18:31 compute-0 sudo[394386]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:18:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:18:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:31 compute-0 sudo[394579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:18:31 compute-0 sudo[394579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:18:31 compute-0 sudo[394579]: pam_unix(sudo:session): session closed for user root
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:18:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:18:32 compute-0 ceph-mon[76335]: pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:18:32 compute-0 nova_compute[244014]: 2026-02-25 13:18:32.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:34 compute-0 ceph-mon[76335]: pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:35 compute-0 nova_compute[244014]: 2026-02-25 13:18:35.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:35 compute-0 nova_compute[244014]: 2026-02-25 13:18:35.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:18:36 compute-0 nova_compute[244014]: 2026-02-25 13:18:36.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:37 compute-0 ceph-mon[76335]: pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:37 compute-0 nova_compute[244014]: 2026-02-25 13:18:37.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:37 compute-0 nova_compute[244014]: 2026-02-25 13:18:37.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:18:37 compute-0 nova_compute[244014]: 2026-02-25 13:18:37.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:18:37 compute-0 nova_compute[244014]: 2026-02-25 13:18:37.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:39 compute-0 ceph-mon[76335]: pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:41 compute-0 nova_compute[244014]: 2026-02-25 13:18:41.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:41 compute-0 ceph-mon[76335]: pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:42 compute-0 ceph-mon[76335]: pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:42 compute-0 nova_compute[244014]: 2026-02-25 13:18:42.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:18:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:44 compute-0 ceph-mon[76335]: pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:46 compute-0 nova_compute[244014]: 2026-02-25 13:18:46.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:46 compute-0 ceph-mon[76335]: pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:18:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:18:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:18:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:18:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:18:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:18:47 compute-0 podman[394604]: 2026-02-25 13:18:47.754857272 +0000 UTC m=+0.091002397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 13:18:47 compute-0 podman[394605]: 2026-02-25 13:18:47.784729614 +0000 UTC m=+0.120914020 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:18:47 compute-0 nova_compute[244014]: 2026-02-25 13:18:47.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:48 compute-0 ceph-mon[76335]: pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:50 compute-0 nova_compute[244014]: 2026-02-25 13:18:50.665 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:51 compute-0 nova_compute[244014]: 2026-02-25 13:18:51.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:51 compute-0 ceph-mon[76335]: pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:52 compute-0 nova_compute[244014]: 2026-02-25 13:18:52.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:53 compute-0 ceph-mon[76335]: pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:18:53 compute-0 nova_compute[244014]: 2026-02-25 13:18:53.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:18:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:18:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150166556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.525 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.740 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.742 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3603MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.807 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:18:54 compute-0 nova_compute[244014]: 2026-02-25 13:18:54.825 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.059 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:18:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.059 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:18:55 compute-0 ceph-mon[76335]: pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4150166556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:18:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:18:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928768406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:18:55 compute-0 nova_compute[244014]: 2026-02-25 13:18:55.416 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:18:55 compute-0 nova_compute[244014]: 2026-02-25 13:18:55.422 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:18:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:55 compute-0 nova_compute[244014]: 2026-02-25 13:18:55.448 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:18:55 compute-0 nova_compute[244014]: 2026-02-25 13:18:55.451 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:18:55 compute-0 nova_compute[244014]: 2026-02-25 13:18:55.452 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:18:56 compute-0 nova_compute[244014]: 2026-02-25 13:18:56.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1928768406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:18:56 compute-0 nova_compute[244014]: 2026-02-25 13:18:56.453 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:57 compute-0 ceph-mon[76335]: pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:57 compute-0 nova_compute[244014]: 2026-02-25 13:18:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:57 compute-0 nova_compute[244014]: 2026-02-25 13:18:57.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:18:57 compute-0 nova_compute[244014]: 2026-02-25 13:18:57.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:18:57 compute-0 nova_compute[244014]: 2026-02-25 13:18:57.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:18:57 compute-0 nova_compute[244014]: 2026-02-25 13:18:57.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:18:58 compute-0 ceph-mon[76335]: pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:18:58 compute-0 nova_compute[244014]: 2026-02-25 13:18:58.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:18:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:18:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:00 compute-0 ceph-mon[76335]: pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:01 compute-0 nova_compute[244014]: 2026-02-25 13:19:01.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:02 compute-0 ceph-mon[76335]: pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:02 compute-0 nova_compute[244014]: 2026-02-25 13:19:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:02 compute-0 nova_compute[244014]: 2026-02-25 13:19:02.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:03 compute-0 nova_compute[244014]: 2026-02-25 13:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:03 compute-0 nova_compute[244014]: 2026-02-25 13:19:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:19:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:04 compute-0 ceph-mon[76335]: pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:05 compute-0 nova_compute[244014]: 2026-02-25 13:19:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:06 compute-0 nova_compute[244014]: 2026-02-25 13:19:06.094 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:07 compute-0 ceph-mon[76335]: pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:07 compute-0 nova_compute[244014]: 2026-02-25 13:19:07.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:09 compute-0 ceph-mon[76335]: pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:09 compute-0 nova_compute[244014]: 2026-02-25 13:19:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:11 compute-0 ceph-mon[76335]: pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:11 compute-0 nova_compute[244014]: 2026-02-25 13:19:11.096 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:19:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1328 writes, 5767 keys, 1328 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1328 writes, 1328 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.6      1.33              0.21        44    0.030       0      0       0.0       0.0
                                             L6      1/0    8.98 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.0    103.6     88.1      4.20              1.07        43    0.098    276K    23K       0.0       0.0
                                            Sum      1/0    8.98 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.0     78.6     80.2      5.53              1.28        87    0.064    276K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.3     60.4     60.3      0.74              0.13         8    0.093     33K   2034       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    103.6     88.1      4.20              1.07        43    0.098    276K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.7      1.33              0.21        43    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.072, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 5.5 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 49.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000326 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3116,47.66 MB,15.6781%) FilterBlock(88,767.17 KB,0.246445%) IndexBlock(88,1.25 MB,0.411139%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 13:19:12 compute-0 nova_compute[244014]: 2026-02-25 13:19:12.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:13 compute-0 ceph-mon[76335]: pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:13 compute-0 nova_compute[244014]: 2026-02-25 13:19:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:15 compute-0 ceph-mon[76335]: pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:16 compute-0 nova_compute[244014]: 2026-02-25 13:19:16.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:16 compute-0 ceph-mon[76335]: pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:17 compute-0 nova_compute[244014]: 2026-02-25 13:19:17.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:18 compute-0 ceph-mon[76335]: pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:18 compute-0 podman[394693]: 2026-02-25 13:19:18.719660117 +0000 UTC m=+0.059118329 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:19:18 compute-0 podman[394694]: 2026-02-25 13:19:18.741597755 +0000 UTC m=+0.078225647 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 25 13:19:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:20 compute-0 ceph-mon[76335]: pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:21 compute-0 nova_compute[244014]: 2026-02-25 13:19:21.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:22 compute-0 ceph-mon[76335]: pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:23 compute-0 nova_compute[244014]: 2026-02-25 13:19:23.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:24 compute-0 ceph-mon[76335]: pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:26 compute-0 nova_compute[244014]: 2026-02-25 13:19:26.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:26 compute-0 ceph-mon[76335]: pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:28 compute-0 nova_compute[244014]: 2026-02-25 13:19:28.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:29 compute-0 ceph-mon[76335]: pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:31 compute-0 nova_compute[244014]: 2026-02-25 13:19:31.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:19:31
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log']
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:19:31 compute-0 ceph-mon[76335]: pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:19:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:19:32 compute-0 sudo[394737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:19:32 compute-0 sudo[394737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:32 compute-0 sudo[394737]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:32 compute-0 sudo[394762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:19:32 compute-0 sudo[394762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:19:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:19:32 compute-0 sudo[394762]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:19:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:19:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:19:32 compute-0 sudo[394818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:19:32 compute-0 sudo[394818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:32 compute-0 sudo[394818]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:32 compute-0 sudo[394843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:19:32 compute-0 sudo[394843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:33 compute-0 nova_compute[244014]: 2026-02-25 13:19:33.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.187600952 +0000 UTC m=+0.109219741 container create 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.103445269 +0000 UTC m=+0.025064068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:33 compute-0 ceph-mon[76335]: pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:19:33 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:19:33 compute-0 systemd[1]: Started libpod-conmon-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope.
Feb 25 13:19:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.414496151 +0000 UTC m=+0.336114990 container init 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.425226093 +0000 UTC m=+0.346844852 container start 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:19:33 compute-0 great_taussig[394898]: 167 167
Feb 25 13:19:33 compute-0 systemd[1]: libpod-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope: Deactivated successfully.
Feb 25 13:19:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.469662486 +0000 UTC m=+0.391281315 container attach 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.470289234 +0000 UTC m=+0.391908023 container died 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f185d3ec9a2d61050ceadb2f38700487c0f0836e0f0d713b5deedb3a5b5e82b-merged.mount: Deactivated successfully.
Feb 25 13:19:33 compute-0 podman[394882]: 2026-02-25 13:19:33.903467919 +0000 UTC m=+0.825086708 container remove 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 13:19:33 compute-0 systemd[1]: libpod-conmon-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope: Deactivated successfully.
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.129597215 +0000 UTC m=+0.079842332 container create e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.080639505 +0000 UTC m=+0.030884622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:34 compute-0 systemd[1]: Started libpod-conmon-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope.
Feb 25 13:19:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.283363672 +0000 UTC m=+0.233608819 container init e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.292911651 +0000 UTC m=+0.243156768 container start e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.342992493 +0000 UTC m=+0.293237630 container attach e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:19:34 compute-0 ceph-mon[76335]: pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:34 compute-0 optimistic_colden[394941]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:19:34 compute-0 optimistic_colden[394941]: --> All data devices are unavailable
Feb 25 13:19:34 compute-0 systemd[1]: libpod-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope: Deactivated successfully.
Feb 25 13:19:34 compute-0 podman[394924]: 2026-02-25 13:19:34.761896245 +0000 UTC m=+0.712141412 container died e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1-merged.mount: Deactivated successfully.
Feb 25 13:19:35 compute-0 podman[394924]: 2026-02-25 13:19:35.082769683 +0000 UTC m=+1.033014830 container remove e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:19:35 compute-0 systemd[1]: libpod-conmon-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope: Deactivated successfully.
Feb 25 13:19:35 compute-0 sudo[394843]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:35 compute-0 sudo[394974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:19:35 compute-0 sudo[394974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:35 compute-0 sudo[394974]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:35 compute-0 sudo[394999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:19:35 compute-0 sudo[394999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:35 compute-0 podman[395037]: 2026-02-25 13:19:35.576914617 +0000 UTC m=+0.037019835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:35 compute-0 podman[395037]: 2026-02-25 13:19:35.725664082 +0000 UTC m=+0.185769240 container create 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:19:35 compute-0 systemd[1]: Started libpod-conmon-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope.
Feb 25 13:19:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:36 compute-0 podman[395037]: 2026-02-25 13:19:36.069904089 +0000 UTC m=+0.530009267 container init 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:19:36 compute-0 podman[395037]: 2026-02-25 13:19:36.077498223 +0000 UTC m=+0.537603381 container start 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:19:36 compute-0 nice_visvesvaraya[395054]: 167 167
Feb 25 13:19:36 compute-0 systemd[1]: libpod-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope: Deactivated successfully.
Feb 25 13:19:36 compute-0 nova_compute[244014]: 2026-02-25 13:19:36.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:36 compute-0 podman[395037]: 2026-02-25 13:19:36.189682177 +0000 UTC m=+0.649787385 container attach 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:19:36 compute-0 podman[395037]: 2026-02-25 13:19:36.190131089 +0000 UTC m=+0.650236237 container died 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc090a5629e3e2998eaa45eaf766acf265a7c60a0b6c578bfa2dad44e079fa4-merged.mount: Deactivated successfully.
Feb 25 13:19:36 compute-0 podman[395037]: 2026-02-25 13:19:36.477922544 +0000 UTC m=+0.938027702 container remove 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:19:36 compute-0 systemd[1]: libpod-conmon-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope: Deactivated successfully.
Feb 25 13:19:36 compute-0 ceph-mon[76335]: pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:36 compute-0 podman[395079]: 2026-02-25 13:19:36.631006491 +0000 UTC m=+0.032026624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:36 compute-0 podman[395079]: 2026-02-25 13:19:36.760400569 +0000 UTC m=+0.161420652 container create 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:19:36 compute-0 systemd[1]: Started libpod-conmon-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope.
Feb 25 13:19:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:36 compute-0 podman[395079]: 2026-02-25 13:19:36.961306375 +0000 UTC m=+0.362326438 container init 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:19:36 compute-0 podman[395079]: 2026-02-25 13:19:36.966238324 +0000 UTC m=+0.367258367 container start 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:19:36 compute-0 podman[395079]: 2026-02-25 13:19:36.998239137 +0000 UTC m=+0.399259180 container attach 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]: {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     "0": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "devices": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "/dev/loop3"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             ],
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_name": "ceph_lv0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_size": "21470642176",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "name": "ceph_lv0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "tags": {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_name": "ceph",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.crush_device_class": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.encrypted": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.objectstore": "bluestore",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_id": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.vdo": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.with_tpm": "0"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             },
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "vg_name": "ceph_vg0"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         }
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     ],
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     "1": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "devices": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "/dev/loop4"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             ],
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_name": "ceph_lv1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_size": "21470642176",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "name": "ceph_lv1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "tags": {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_name": "ceph",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.crush_device_class": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.encrypted": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.objectstore": "bluestore",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_id": "1",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.vdo": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.with_tpm": "0"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             },
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "vg_name": "ceph_vg1"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         }
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     ],
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     "2": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "devices": [
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "/dev/loop5"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             ],
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_name": "ceph_lv2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_size": "21470642176",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "name": "ceph_lv2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "tags": {
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.cluster_name": "ceph",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.crush_device_class": "",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.encrypted": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.objectstore": "bluestore",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osd_id": "2",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.vdo": "0",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:                 "ceph.with_tpm": "0"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             },
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "type": "block",
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:             "vg_name": "ceph_vg2"
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:         }
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]:     ]
Feb 25 13:19:37 compute-0 ecstatic_sanderson[395096]: }
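
[editor's note] The listing that ends here is pretty-printed JSON whose shape matches `ceph-volume lvm list --format json`: top-level keys are OSD ids ("0", "1", "2"), each mapping to a list of logical volumes whose `tags` object mirrors the comma-separated `lv_tags` string. A minimal sketch of consuming that shape, assuming `ceph-volume` is directly invocable (in this deployment it actually runs inside a cephadm one-shot container, as the surrounding sudo/podman lines show); the helper names are illustrative:

```python
#!/usr/bin/env python3
# Summarize `ceph-volume lvm list --format json` output:
# osd id -> backing device(s), LV path, OSD fsid, objectstore.
# The JSON shape is taken from the log above; everything else is
# an illustrative assumption.
import json
import subprocess

def lvm_list():
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    return json.loads(out)

def summarize(listing):
    # Top-level keys are OSD ids as strings; each value is a list of LVs.
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            yield {
                "osd_id": osd_id,
                "devices": lv.get("devices", []),
                "lv_path": lv.get("lv_path"),
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "objectstore": tags.get("ceph.objectstore"),
            }

if __name__ == "__main__":
    for row in summarize(lvm_list()):
        print(row)
```
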
Feb 25 13:19:37 compute-0 systemd[1]: libpod-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope: Deactivated successfully.
Feb 25 13:19:37 compute-0 podman[395079]: 2026-02-25 13:19:37.292444993 +0000 UTC m=+0.693465066 container died 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:19:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce-merged.mount: Deactivated successfully.
Feb 25 13:19:38 compute-0 nova_compute[244014]: 2026-02-25 13:19:38.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:38 compute-0 podman[395079]: 2026-02-25 13:19:38.240120545 +0000 UTC m=+1.641140628 container remove 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:19:38 compute-0 sudo[394999]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:38 compute-0 systemd[1]: libpod-conmon-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope: Deactivated successfully.
Feb 25 13:19:38 compute-0 sudo[395120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:19:38 compute-0 sudo[395120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:38 compute-0 sudo[395120]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:38 compute-0 sudo[395145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:19:38 compute-0 sudo[395145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
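
[editor's note] The sudo COMMAND line above records the call pattern cephadm uses for disk inventory: its bundled script is invoked with `--image` and `--timeout`, and everything after `--` is handed to ceph-volume inside a one-shot container (the `sad_feynman`/`youthful_maxwell` containers that follow). A sketch of that invocation with the paths, fsid, image digest, and timeout copied verbatim from the log; wrapping it in a reusable function is purely illustrative:

```python
# Reproduce the cephadm-wrapped ceph-volume call from the sudo line above.
import json
import subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def cephadm_ceph_volume(args, timeout=895):
    """Run `ceph-volume <args>` inside a cephadm one-shot container."""
    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", str(timeout),
           "ceph-volume", "--fsid", FSID, "--"] + args
    return subprocess.check_output(cmd, text=True)

# The exact call from the log. It ultimately prints "{}" (see the
# youthful_maxwell line below), which is consistent with no raw-mode
# (non-LVM) OSD devices existing on this host.
raw = json.loads(cephadm_ceph_volume(["raw", "list", "--format", "json"]))
```
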
Feb 25 13:19:38 compute-0 podman[395182]: 2026-02-25 13:19:38.731573644 +0000 UTC m=+0.031566032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:38 compute-0 podman[395182]: 2026-02-25 13:19:38.924069652 +0000 UTC m=+0.224061980 container create adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:19:38 compute-0 ceph-mon[76335]: pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:39 compute-0 systemd[1]: Started libpod-conmon-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope.
Feb 25 13:19:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:39 compute-0 podman[395182]: 2026-02-25 13:19:39.126214532 +0000 UTC m=+0.426206870 container init adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:19:39 compute-0 podman[395182]: 2026-02-25 13:19:39.136193493 +0000 UTC m=+0.436185831 container start adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:19:39 compute-0 sad_feynman[395198]: 167 167
Feb 25 13:19:39 compute-0 systemd[1]: libpod-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope: Deactivated successfully.
Feb 25 13:19:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:39 compute-0 podman[395182]: 2026-02-25 13:19:39.198392777 +0000 UTC m=+0.498385105 container attach adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:19:39 compute-0 podman[395182]: 2026-02-25 13:19:39.198882371 +0000 UTC m=+0.498874709 container died adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 13:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-986ddff3983d1c196f836a3f4e0834d59bd0193cabf42d678b6db63d55eacc33-merged.mount: Deactivated successfully.
Feb 25 13:19:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:39 compute-0 podman[395182]: 2026-02-25 13:19:39.612304079 +0000 UTC m=+0.912296377 container remove adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:19:39 compute-0 systemd[1]: libpod-conmon-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope: Deactivated successfully.
Feb 25 13:19:39 compute-0 podman[395223]: 2026-02-25 13:19:39.860803077 +0000 UTC m=+0.108270554 container create 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:19:39 compute-0 podman[395223]: 2026-02-25 13:19:39.775937894 +0000 UTC m=+0.023405381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:19:39 compute-0 systemd[1]: Started libpod-conmon-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope.
Feb 25 13:19:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:19:40 compute-0 podman[395223]: 2026-02-25 13:19:40.075264124 +0000 UTC m=+0.322731621 container init 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:19:40 compute-0 podman[395223]: 2026-02-25 13:19:40.086685856 +0000 UTC m=+0.334153323 container start 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:19:40 compute-0 podman[395223]: 2026-02-25 13:19:40.165393246 +0000 UTC m=+0.412860723 container attach 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:19:40 compute-0 lvm[395319]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:19:40 compute-0 lvm[395318]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:19:40 compute-0 lvm[395318]: VG ceph_vg0 finished
Feb 25 13:19:40 compute-0 lvm[395319]: VG ceph_vg1 finished
Feb 25 13:19:40 compute-0 lvm[395321]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:19:40 compute-0 lvm[395321]: VG ceph_vg2 finished
Feb 25 13:19:40 compute-0 youthful_maxwell[395240]: {}
Feb 25 13:19:41 compute-0 systemd[1]: libpod-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Deactivated successfully.
Feb 25 13:19:41 compute-0 systemd[1]: libpod-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Consumed 1.268s CPU time.
Feb 25 13:19:41 compute-0 podman[395223]: 2026-02-25 13:19:41.013417099 +0000 UTC m=+1.260884566 container died 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:19:41 compute-0 ceph-mon[76335]: pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:41 compute-0 nova_compute[244014]: 2026-02-25 13:19:41.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22-merged.mount: Deactivated successfully.
Feb 25 13:19:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:41 compute-0 podman[395223]: 2026-02-25 13:19:41.500591137 +0000 UTC m=+1.748058604 container remove 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:19:41 compute-0 sudo[395145]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:19:41 compute-0 systemd[1]: libpod-conmon-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Deactivated successfully.
Feb 25 13:19:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:19:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:41 compute-0 sudo[395336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:19:41 compute-0 sudo[395336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:19:41 compute-0 sudo[395336]: pam_unix(sudo:session): session closed for user root
Feb 25 13:19:42 compute-0 ceph-mon[76335]: pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:19:43 compute-0 nova_compute[244014]: 2026-02-25 13:19:43.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
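
[editor's note] Each pg_autoscaler pair above is a small computation that can be checked directly from the logged numbers: the pg target equals capacity ratio x bias x a root-wide PG target, which works out to exactly 300 for every pool here. That 300 is an inference from the arithmetic, not something the log states; it is consistent with, e.g., this cluster's 3 OSDs at the default `mon_target_pg_per_osd = 100`. The power-of-two rounding and pg_num_min/max clamping that produce the "quantized to" values are not modeled in this sketch:

```python
# Verify the pg_autoscaler targets logged above.
# Assumption: root_pg_target = 300 (inferred from the numbers).
ROOT_PG_TARGET = 300

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * ROOT_PG_TARGET

# capacity ratio, bias, and logged target copied from the log lines:
assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12  # .mgr
assert abs(pg_target(1.3916366864300228e-06, 4.0) - 0.0016699640237160273) < 1e-12  # cephfs.cephfs.meta
assert abs(pg_target(0.0006714637386478266, 1.0) - 0.20143912159434796) < 1e-12     # images
print("all pg targets reproduce")
```
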
Feb 25 13:19:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:44 compute-0 ceph-mon[76335]: pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:46 compute-0 nova_compute[244014]: 2026-02-25 13:19:46.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:47 compute-0 ceph-mon[76335]: pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:19:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:19:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:19:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:19:48 compute-0 nova_compute[244014]: 2026-02-25 13:19:48.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.090474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588090548, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3513323, "memory_usage": 3577360, "flush_reason": "Manual Compaction"}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Feb 25 13:19:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:19:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588158759, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3445434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61302, "largest_seqno": 63353, "table_properties": {"data_size": 3435980, "index_size": 6011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18656, "raw_average_key_size": 20, "raw_value_size": 3417321, "raw_average_value_size": 3678, "num_data_blocks": 267, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025356, "oldest_key_time": 1772025356, "file_creation_time": 1772025588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 68318 microseconds, and 6417 cpu microseconds.
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.158805) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3445434 bytes OK
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.158824) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222556) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222608) EVENT_LOG_v1 {"time_micros": 1772025588222597, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3504732, prev total WAL file size 3504732, number of live WAL files 2.
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.223829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3364KB)], [146(9199KB)]
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588223941, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12865640, "oldest_snapshot_seqno": -1}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8249 keys, 11104958 bytes, temperature: kUnknown
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588367440, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11104958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11050781, "index_size": 32432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215365, "raw_average_key_size": 26, "raw_value_size": 10904577, "raw_average_value_size": 1321, "num_data_blocks": 1262, "num_entries": 8249, "num_filter_entries": 8249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.367869) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11104958 bytes
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.381037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.6 rd, 77.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 8763, records dropped: 514 output_compression: NoCompression
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.381067) EVENT_LOG_v1 {"time_micros": 1772025588381054, "job": 90, "event": "compaction_finished", "compaction_time_micros": 143573, "compaction_time_cpu_micros": 18641, "output_level": 6, "num_output_files": 1, "total_output_size": 11104958, "num_input_records": 8763, "num_output_records": 8249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588381618, "job": 90, "event": "table_file_deletion", "file_number": 148}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588382882, "job": 90, "event": "table_file_deletion", "file_number": 146}
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.223621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:19:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
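
[editor's note] The JOB 90 compaction summary above can be cross-checked from the byte counts in the same event-log lines: RocksDB reports write amplification relative to the bytes entering from the upper level (here the freshly flushed L0 table #148), and read-write amplification as all bytes read plus written over that same denominator:

```python
# Byte counts copied from the JOB 89/90 EVENT_LOG_v1 lines above.
l0_input  = 3445434    # table #148, the L0 file produced by the flush
all_input = 12865640   # input_data_size: L0 #148 + L6 #146
output    = 11104958   # table #149 written back to L6

write_amplify = output / l0_input                     # ~3.22
read_write_amplify = (all_input + output) / l0_input  # ~6.96

# Matches the summary line: write-amplify(3.2) read-write-amplify(7.0)
print(f"write-amplify({write_amplify:.1f}) "
      f"read-write-amplify({read_write_amplify:.1f})")
```
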
Feb 25 13:19:49 compute-0 ceph-mon[76335]: pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:49 compute-0 podman[395361]: 2026-02-25 13:19:49.763095209 +0000 UTC m=+0.088192398 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:19:49 compute-0 podman[395362]: 2026-02-25 13:19:49.802463 +0000 UTC m=+0.127564289 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:19:51 compute-0 nova_compute[244014]: 2026-02-25 13:19:51.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:51 compute-0 ceph-mon[76335]: pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:52 compute-0 ceph-mon[76335]: pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:19:53 compute-0 nova_compute[244014]: 2026-02-25 13:19:53.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:19:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:19:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425891276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.563 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
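
[editor's note] The subprocess above is nova's periodic resource audit sizing its RBD storage: the command, client id, and conf path are verbatim from the log (the mon's audit channel shows the resulting `df` dispatch). A sketch of pulling per-pool usage out of that JSON; the field names follow the `ceph df` JSON schema as I understand it and the pool name ("vms", the nova pool visible in the autoscaler lines) should both be treated as assumptions:

```python
# Query pool usage the way nova_compute does above: `ceph df` as the
# openstack client identity, parsed as JSON.
import json
import subprocess

def pool_usage(pool="vms"):
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    data = json.loads(out)
    for p in data["pools"]:
        if p["name"] == pool:
            return p["stats"]  # e.g. bytes_used, max_avail
    raise LookupError(pool)
```
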
Feb 25 13:19:54 compute-0 ceph-mon[76335]: pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:54 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3425891276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.757 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.758 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.759 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.759 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.823 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.824 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:19:54 compute-0 nova_compute[244014]: 2026-02-25 13:19:54.838 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.060 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.061 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:19:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.061 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:19:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:19:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331256269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:19:55 compute-0 nova_compute[244014]: 2026-02-25 13:19:55.385 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:19:55 compute-0 nova_compute[244014]: 2026-02-25 13:19:55.391 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:19:55 compute-0 nova_compute[244014]: 2026-02-25 13:19:55.420 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:19:55 compute-0 nova_compute[244014]: 2026-02-25 13:19:55.424 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:19:55 compute-0 nova_compute[244014]: 2026-02-25 13:19:55.425 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:19:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3331256269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:19:56 compute-0 nova_compute[244014]: 2026-02-25 13:19:56.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:56 compute-0 nova_compute[244014]: 2026-02-25 13:19:56.425 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:56 compute-0 ceph-mon[76335]: pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:57 compute-0 nova_compute[244014]: 2026-02-25 13:19:57.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:19:57 compute-0 nova_compute[244014]: 2026-02-25 13:19:57.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:19:57 compute-0 nova_compute[244014]: 2026-02-25 13:19:57.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:19:57 compute-0 nova_compute[244014]: 2026-02-25 13:19:57.906 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:19:58 compute-0 nova_compute[244014]: 2026-02-25 13:19:58.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:19:59 compute-0 ceph-mon[76335]: pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:19:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:19:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:00 compute-0 nova_compute[244014]: 2026-02-25 13:20:00.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:01 compute-0 ceph-mon[76335]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:01 compute-0 nova_compute[244014]: 2026-02-25 13:20:01.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:03 compute-0 nova_compute[244014]: 2026-02-25 13:20:03.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:03 compute-0 ceph-mon[76335]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:03 compute-0 nova_compute[244014]: 2026-02-25 13:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:03 compute-0 nova_compute[244014]: 2026-02-25 13:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:03 compute-0 nova_compute[244014]: 2026-02-25 13:20:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:20:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:05 compute-0 ceph-mon[76335]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:06 compute-0 nova_compute[244014]: 2026-02-25 13:20:06.146 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:06 compute-0 ceph-mon[76335]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:06 compute-0 nova_compute[244014]: 2026-02-25 13:20:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:08 compute-0 nova_compute[244014]: 2026-02-25 13:20:08.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:08 compute-0 ceph-mon[76335]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:09 compute-0 nova_compute[244014]: 2026-02-25 13:20:09.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:10 compute-0 ceph-mon[76335]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:11 compute-0 nova_compute[244014]: 2026-02-25 13:20:11.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:12 compute-0 ceph-mon[76335]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:13 compute-0 nova_compute[244014]: 2026-02-25 13:20:13.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:14 compute-0 ceph-mon[76335]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:20:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 184K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 262 writes, 636 keys, 262 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 262 writes, 126 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:20:15 compute-0 nova_compute[244014]: 2026-02-25 13:20:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:16 compute-0 nova_compute[244014]: 2026-02-25 13:20:16.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:16 compute-0 ceph-mon[76335]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:18 compute-0 nova_compute[244014]: 2026-02-25 13:20:18.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:18 compute-0 ceph-mon[76335]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:20:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 48K writes, 194K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 335 writes, 989 keys, 335 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 335 writes, 158 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:20:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:20 compute-0 podman[395449]: 2026-02-25 13:20:20.727455989 +0000 UTC m=+0.064990515 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:20:20 compute-0 podman[395450]: 2026-02-25 13:20:20.741390662 +0000 UTC m=+0.080722249 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 13:20:20 compute-0 ceph-mon[76335]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:21 compute-0 nova_compute[244014]: 2026-02-25 13:20:21.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:22 compute-0 nova_compute[244014]: 2026-02-25 13:20:22.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:23 compute-0 ceph-mon[76335]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:23 compute-0 nova_compute[244014]: 2026-02-25 13:20:23.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:20:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.6 total, 600.0 interval
                                           Cumulative writes: 37K writes, 149K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.77 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 312 writes, 919 keys, 312 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
                                           Interval WAL: 312 writes, 148 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:20:25 compute-0 ceph-mon[76335]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:26 compute-0 nova_compute[244014]: 2026-02-25 13:20:26.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:27 compute-0 ceph-mon[76335]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:20:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:28 compute-0 nova_compute[244014]: 2026-02-25 13:20:28.200 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:29 compute-0 ceph-mon[76335]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:20:31
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:20:31 compute-0 nova_compute[244014]: 2026-02-25 13:20:31.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:31 compute-0 ceph-mon[76335]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:20:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:20:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:20:32 compute-0 ceph-mon[76335]: pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:33 compute-0 nova_compute[244014]: 2026-02-25 13:20:33.205 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:34 compute-0 ceph-mon[76335]: pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:36 compute-0 nova_compute[244014]: 2026-02-25 13:20:36.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:36 compute-0 ceph-mon[76335]: pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:38 compute-0 sshd-session[395496]: Received disconnect from 45.148.10.141 port 43114:11:  [preauth]
Feb 25 13:20:38 compute-0 sshd-session[395496]: Disconnected from authenticating user root 45.148.10.141 port 43114 [preauth]
Feb 25 13:20:38 compute-0 nova_compute[244014]: 2026-02-25 13:20:38.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:38 compute-0 ceph-mon[76335]: pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:40 compute-0 ceph-mon[76335]: pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:41 compute-0 nova_compute[244014]: 2026-02-25 13:20:41.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:41 compute-0 sudo[395498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:20:41 compute-0 sudo[395498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:41 compute-0 sudo[395498]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:42 compute-0 sudo[395523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:20:42 compute-0 sudo[395523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:42 compute-0 sudo[395523]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:42 compute-0 sudo[395580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:20:42 compute-0 sudo[395580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:42 compute-0 sudo[395580]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:42 compute-0 sudo[395605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 25 13:20:42 compute-0 sudo[395605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:42 compute-0 ceph-mon[76335]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:42 compute-0 sudo[395605]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:20:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:20:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:20:43 compute-0 sudo[395648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:20:43 compute-0 sudo[395648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:43 compute-0 sudo[395648]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:20:43 compute-0 sudo[395673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:20:43 compute-0 sudo[395673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:43 compute-0 nova_compute[244014]: 2026-02-25 13:20:43.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.513676964 +0000 UTC m=+0.036526632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.610406703 +0000 UTC m=+0.133256261 container create 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:20:43 compute-0 systemd[1]: Started libpod-conmon-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope.
Feb 25 13:20:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.800473008 +0000 UTC m=+0.323322656 container init 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.808888605 +0000 UTC m=+0.331738183 container start 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:20:43 compute-0 systemd[1]: libpod-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope: Deactivated successfully.
Feb 25 13:20:43 compute-0 sad_lichterman[395727]: 167 167
Feb 25 13:20:43 compute-0 conmon[395727]: conmon 3d3d2466ad02f2123a49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope/container/memory.events
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.856778477 +0000 UTC m=+0.379628125 container attach 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:20:43 compute-0 podman[395711]: 2026-02-25 13:20:43.857623311 +0000 UTC m=+0.380472919 container died 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-19d50b3a7dfa393d2c008de1680e14bd93c31a181eb10737033cf8eaa16a8f75-merged.mount: Deactivated successfully.
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:20:44 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:20:44 compute-0 podman[395711]: 2026-02-25 13:20:44.16445588 +0000 UTC m=+0.687305458 container remove 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:20:44 compute-0 systemd[1]: libpod-conmon-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope: Deactivated successfully.
Feb 25 13:20:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:44 compute-0 podman[395752]: 2026-02-25 13:20:44.404434803 +0000 UTC m=+0.099767507 container create f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:20:44 compute-0 podman[395752]: 2026-02-25 13:20:44.340118338 +0000 UTC m=+0.035451112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:44 compute-0 systemd[1]: Started libpod-conmon-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope.
Feb 25 13:20:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:44 compute-0 podman[395752]: 2026-02-25 13:20:44.576088117 +0000 UTC m=+0.271420901 container init f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:44 compute-0 podman[395752]: 2026-02-25 13:20:44.583723523 +0000 UTC m=+0.279056207 container start f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:44 compute-0 podman[395752]: 2026-02-25 13:20:44.608178423 +0000 UTC m=+0.303511187 container attach f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:20:45 compute-0 compassionate_rhodes[395770]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:20:45 compute-0 compassionate_rhodes[395770]: --> All data devices are unavailable
Feb 25 13:20:45 compute-0 systemd[1]: libpod-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope: Deactivated successfully.
Feb 25 13:20:45 compute-0 podman[395752]: 2026-02-25 13:20:45.072120257 +0000 UTC m=+0.767452951 container died f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e-merged.mount: Deactivated successfully.
Feb 25 13:20:45 compute-0 ceph-mon[76335]: pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:45 compute-0 podman[395752]: 2026-02-25 13:20:45.256184001 +0000 UTC m=+0.951516715 container remove f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:20:45 compute-0 systemd[1]: libpod-conmon-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope: Deactivated successfully.
Feb 25 13:20:45 compute-0 sudo[395673]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:45 compute-0 sudo[395803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:20:45 compute-0 sudo[395803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:45 compute-0 sudo[395803]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:45 compute-0 sudo[395828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:20:45 compute-0 sudo[395828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.776249339 +0000 UTC m=+0.082122349 container create 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.720376332 +0000 UTC m=+0.026249382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:45 compute-0 systemd[1]: Started libpod-conmon-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope.
Feb 25 13:20:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.901136374 +0000 UTC m=+0.207009444 container init 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.909297534 +0000 UTC m=+0.215170544 container start 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:20:45 compute-0 awesome_babbage[395883]: 167 167
Feb 25 13:20:45 compute-0 systemd[1]: libpod-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope: Deactivated successfully.
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.93254318 +0000 UTC m=+0.238416240 container attach 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:20:45 compute-0 podman[395866]: 2026-02-25 13:20:45.933582289 +0000 UTC m=+0.239455369 container died 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e68432b294e11086fed4ac9c4675563975d7c2d943018602cbce70925fe07657-merged.mount: Deactivated successfully.
Feb 25 13:20:46 compute-0 podman[395866]: 2026-02-25 13:20:46.104897764 +0000 UTC m=+0.410770784 container remove 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:20:46 compute-0 systemd[1]: libpod-conmon-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope: Deactivated successfully.
Feb 25 13:20:46 compute-0 nova_compute[244014]: 2026-02-25 13:20:46.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.34289043 +0000 UTC m=+0.096430092 container create 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.281984671 +0000 UTC m=+0.035524383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:46 compute-0 systemd[1]: Started libpod-conmon-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope.
Feb 25 13:20:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.504558483 +0000 UTC m=+0.258098125 container init 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.513968218 +0000 UTC m=+0.267507850 container start 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.539775877 +0000 UTC m=+0.293315509 container attach 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]: {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     "0": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "devices": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "/dev/loop3"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             ],
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_name": "ceph_lv0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_size": "21470642176",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "name": "ceph_lv0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "tags": {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_name": "ceph",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.crush_device_class": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.encrypted": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.objectstore": "bluestore",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_id": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.vdo": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.with_tpm": "0"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             },
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "vg_name": "ceph_vg0"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         }
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     ],
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     "1": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "devices": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "/dev/loop4"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             ],
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_name": "ceph_lv1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_size": "21470642176",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "name": "ceph_lv1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "tags": {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_name": "ceph",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.crush_device_class": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.encrypted": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.objectstore": "bluestore",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_id": "1",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.vdo": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.with_tpm": "0"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             },
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "vg_name": "ceph_vg1"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         }
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     ],
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     "2": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "devices": [
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "/dev/loop5"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             ],
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_name": "ceph_lv2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_size": "21470642176",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "name": "ceph_lv2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "tags": {
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.cluster_name": "ceph",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.crush_device_class": "",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.encrypted": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.objectstore": "bluestore",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osd_id": "2",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.vdo": "0",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:                 "ceph.with_tpm": "0"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             },
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "type": "block",
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:             "vg_name": "ceph_vg2"
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:         }
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]:     ]
Feb 25 13:20:46 compute-0 optimistic_goldstine[395924]: }
Feb 25 13:20:46 compute-0 systemd[1]: libpod-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope: Deactivated successfully.
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.781107648 +0000 UTC m=+0.534647300 container died 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c-merged.mount: Deactivated successfully.
Feb 25 13:20:46 compute-0 podman[395907]: 2026-02-25 13:20:46.945571879 +0000 UTC m=+0.699111541 container remove 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:46 compute-0 systemd[1]: libpod-conmon-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope: Deactivated successfully.
Feb 25 13:20:46 compute-0 sudo[395828]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:47 compute-0 sudo[395947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:20:47 compute-0 sudo[395947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:47 compute-0 sudo[395947]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:47 compute-0 sudo[395972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:20:47 compute-0 sudo[395972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:47 compute-0 ceph-mon[76335]: pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.409041989 +0000 UTC m=+0.063828312 container create 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.378105897 +0000 UTC m=+0.032892220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:47 compute-0 systemd[1]: Started libpod-conmon-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope.
Feb 25 13:20:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.543109083 +0000 UTC m=+0.197895376 container init 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.552973382 +0000 UTC m=+0.207759695 container start 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:20:47 compute-0 confident_borg[396024]: 167 167
Feb 25 13:20:47 compute-0 systemd[1]: libpod-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope: Deactivated successfully.
Feb 25 13:20:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:20:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:20:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:20:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.579966584 +0000 UTC m=+0.234752867 container attach 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.581471696 +0000 UTC m=+0.236257979 container died 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad56cb9785c65fccfb29b75fc1375afca10c3af7b51e9ecf291399e846e77502-merged.mount: Deactivated successfully.
Feb 25 13:20:47 compute-0 podman[396009]: 2026-02-25 13:20:47.810967313 +0000 UTC m=+0.465753636 container remove 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:20:47 compute-0 systemd[1]: libpod-conmon-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope: Deactivated successfully.
Feb 25 13:20:48 compute-0 podman[396051]: 2026-02-25 13:20:48.011340608 +0000 UTC m=+0.076095829 container create 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:20:48 compute-0 podman[396051]: 2026-02-25 13:20:47.961732308 +0000 UTC m=+0.026487549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:20:48 compute-0 systemd[1]: Started libpod-conmon-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope.
Feb 25 13:20:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:20:48 compute-0 podman[396051]: 2026-02-25 13:20:48.168170214 +0000 UTC m=+0.232925435 container init 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:48 compute-0 podman[396051]: 2026-02-25 13:20:48.178682281 +0000 UTC m=+0.243437472 container start 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:20:48 compute-0 podman[396051]: 2026-02-25 13:20:48.1938864 +0000 UTC m=+0.258641601 container attach 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:20:48 compute-0 nova_compute[244014]: 2026-02-25 13:20:48.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:20:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:20:48 compute-0 lvm[396146]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:20:48 compute-0 lvm[396146]: VG ceph_vg1 finished
Feb 25 13:20:48 compute-0 lvm[396148]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:20:48 compute-0 lvm[396148]: VG ceph_vg2 finished
Feb 25 13:20:48 compute-0 lvm[396145]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:20:48 compute-0 lvm[396145]: VG ceph_vg0 finished
Feb 25 13:20:49 compute-0 quizzical_snyder[396067]: {}
Feb 25 13:20:49 compute-0 systemd[1]: libpod-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Deactivated successfully.
Feb 25 13:20:49 compute-0 podman[396051]: 2026-02-25 13:20:49.027986831 +0000 UTC m=+1.092742122 container died 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:20:49 compute-0 systemd[1]: libpod-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Consumed 1.099s CPU time.
Feb 25 13:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6-merged.mount: Deactivated successfully.
Feb 25 13:20:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:49 compute-0 podman[396051]: 2026-02-25 13:20:49.243645157 +0000 UTC m=+1.308400388 container remove 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:20:49 compute-0 systemd[1]: libpod-conmon-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Deactivated successfully.
Feb 25 13:20:49 compute-0 ceph-mon[76335]: pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:49 compute-0 sudo[395972]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:20:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:20:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:49 compute-0 sudo[396165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:20:49 compute-0 sudo[396165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:20:49 compute-0 sudo[396165]: pam_unix(sudo:session): session closed for user root
Feb 25 13:20:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:20:50 compute-0 ceph-mon[76335]: pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:51 compute-0 nova_compute[244014]: 2026-02-25 13:20:51.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:51 compute-0 podman[396190]: 2026-02-25 13:20:51.730897302 +0000 UTC m=+0.065029136 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:20:51 compute-0 podman[396191]: 2026-02-25 13:20:51.802330028 +0000 UTC m=+0.134166477 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_managed=true)
Feb 25 13:20:52 compute-0 ceph-mon[76335]: pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:53 compute-0 nova_compute[244014]: 2026-02-25 13:20:53.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:54 compute-0 ceph-mon[76335]: pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:20:54 compute-0 nova_compute[244014]: 2026-02-25 13:20:54.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.062 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.062 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:20:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.063 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:20:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:20:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186151979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:20:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.651 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3575MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:20:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2186151979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.905 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.974 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.974 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:20:55 compute-0 nova_compute[244014]: 2026-02-25 13:20:55.990 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.009 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.050 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:20:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836706908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.620 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.627 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.645 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.648 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:20:56 compute-0 nova_compute[244014]: 2026-02-25 13:20:56.648 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:20:56 compute-0 ceph-mon[76335]: pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3836706908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:20:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:57 compute-0 nova_compute[244014]: 2026-02-25 13:20:57.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:57 compute-0 nova_compute[244014]: 2026-02-25 13:20:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:20:57 compute-0 nova_compute[244014]: 2026-02-25 13:20:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:20:57 compute-0 nova_compute[244014]: 2026-02-25 13:20:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:20:57 compute-0 nova_compute[244014]: 2026-02-25 13:20:57.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:20:58 compute-0 nova_compute[244014]: 2026-02-25 13:20:58.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:20:58 compute-0 ceph-mon[76335]: pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:20:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:20:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:00 compute-0 nova_compute[244014]: 2026-02-25 13:21:00.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:00 compute-0 ceph-mon[76335]: pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:01 compute-0 nova_compute[244014]: 2026-02-25 13:21:01.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:02 compute-0 ceph-mon[76335]: pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:03 compute-0 nova_compute[244014]: 2026-02-25 13:21:03.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:04 compute-0 nova_compute[244014]: 2026-02-25 13:21:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:04 compute-0 nova_compute[244014]: 2026-02-25 13:21:04.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:04 compute-0 nova_compute[244014]: 2026-02-25 13:21:04.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:21:04 compute-0 ceph-mon[76335]: pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:06 compute-0 nova_compute[244014]: 2026-02-25 13:21:06.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:06 compute-0 ceph-mon[76335]: pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:08 compute-0 nova_compute[244014]: 2026-02-25 13:21:08.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:08 compute-0 nova_compute[244014]: 2026-02-25 13:21:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:08 compute-0 ceph-mon[76335]: pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:11 compute-0 ceph-mon[76335]: pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:11 compute-0 nova_compute[244014]: 2026-02-25 13:21:11.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:11 compute-0 nova_compute[244014]: 2026-02-25 13:21:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:13 compute-0 ceph-mon[76335]: pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:13 compute-0 nova_compute[244014]: 2026-02-25 13:21:13.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:15 compute-0 ceph-mon[76335]: pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:15 compute-0 nova_compute[244014]: 2026-02-25 13:21:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:16 compute-0 nova_compute[244014]: 2026-02-25 13:21:16.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:17 compute-0 ceph-mon[76335]: pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:18 compute-0 nova_compute[244014]: 2026-02-25 13:21:18.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:19 compute-0 ceph-mon[76335]: pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:21 compute-0 ceph-mon[76335]: pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:21 compute-0 nova_compute[244014]: 2026-02-25 13:21:21.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Feb 25 13:21:22 compute-0 podman[396276]: 2026-02-25 13:21:22.735498595 +0000 UTC m=+0.071376475 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:21:22 compute-0 podman[396277]: 2026-02-25 13:21:22.776608305 +0000 UTC m=+0.110599422 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:21:23 compute-0 ceph-mon[76335]: pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Feb 25 13:21:23 compute-0 nova_compute[244014]: 2026-02-25 13:21:23.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:25 compute-0 ceph-mon[76335]: pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:26 compute-0 nova_compute[244014]: 2026-02-25 13:21:26.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:27 compute-0 ceph-mon[76335]: pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:28 compute-0 nova_compute[244014]: 2026-02-25 13:21:28.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:29 compute-0 ceph-mon[76335]: pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:21:31
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', '.rgw.root', 'images']
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:21:31 compute-0 nova_compute[244014]: 2026-02-25 13:21:31.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:31 compute-0 ceph-mon[76335]: pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:21:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:21:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:21:33 compute-0 ceph-mon[76335]: pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 13:21:33 compute-0 nova_compute[244014]: 2026-02-25 13:21:33.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Feb 25 13:21:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:34 compute-0 ceph-mon[76335]: pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Feb 25 13:21:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:36 compute-0 nova_compute[244014]: 2026-02-25 13:21:36.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:36 compute-0 ceph-mon[76335]: pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:38 compute-0 nova_compute[244014]: 2026-02-25 13:21:38.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:38 compute-0 ceph-mon[76335]: pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 13:21:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:40 compute-0 ceph-mon[76335]: pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:41 compute-0 nova_compute[244014]: 2026-02-25 13:21:41.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:42 compute-0 ceph-mon[76335]: pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:21:43 compute-0 nova_compute[244014]: 2026-02-25 13:21:43.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:44 compute-0 ceph-mon[76335]: pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:46 compute-0 nova_compute[244014]: 2026-02-25 13:21:46.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:46 compute-0 ceph-mon[76335]: pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:21:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:21:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:21:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:21:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:21:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:21:48 compute-0 nova_compute[244014]: 2026-02-25 13:21:48.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:48 compute-0 ceph-mon[76335]: pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.218726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709218768, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1189, "num_deletes": 250, "total_data_size": 1825449, "memory_usage": 1845736, "flush_reason": "Manual Compaction"}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709226321, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1075170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63354, "largest_seqno": 64542, "table_properties": {"data_size": 1070843, "index_size": 1850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11380, "raw_average_key_size": 20, "raw_value_size": 1061415, "raw_average_value_size": 1922, "num_data_blocks": 84, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025589, "oldest_key_time": 1772025589, "file_creation_time": 1772025709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7670 microseconds, and 2822 cpu microseconds.
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.226395) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1075170 bytes OK
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.226416) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228760) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228774) EVENT_LOG_v1 {"time_micros": 1772025709228769, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1820044, prev total WAL file size 1820044, number of live WAL files 2.
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.229356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353033' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1049KB)], [149(10MB)]
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709229407, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12180128, "oldest_snapshot_seqno": -1}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8343 keys, 9557371 bytes, temperature: kUnknown
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709302328, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9557371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9505821, "index_size": 29613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 217432, "raw_average_key_size": 26, "raw_value_size": 9361100, "raw_average_value_size": 1122, "num_data_blocks": 1148, "num_entries": 8343, "num_filter_entries": 8343, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.302660) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9557371 bytes
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.304436) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 130.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(20.2) write-amplify(8.9) OK, records in: 8801, records dropped: 458 output_compression: NoCompression
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.304466) EVENT_LOG_v1 {"time_micros": 1772025709304452, "job": 92, "event": "compaction_finished", "compaction_time_micros": 73034, "compaction_time_cpu_micros": 27351, "output_level": 6, "num_output_files": 1, "total_output_size": 9557371, "num_input_records": 8801, "num_output_records": 8343, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709304795, "job": 92, "event": "table_file_deletion", "file_number": 151}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709306456, "job": 92, "event": "table_file_deletion", "file_number": 149}
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.229285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:21:49 compute-0 sudo[396319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:21:49 compute-0 sudo[396319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:49 compute-0 sudo[396319]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:49 compute-0 sudo[396344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:21:49 compute-0 sudo[396344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:50 compute-0 sudo[396344]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:50 compute-0 sudo[396400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:21:50 compute-0 sudo[396400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:50 compute-0 sudo[396400]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:50 compute-0 sudo[396425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- inventory --format=json-pretty --filter-for-batch
Feb 25 13:21:50 compute-0 sudo[396425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.624158685 +0000 UTC m=+0.070660145 container create e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:21:50 compute-0 systemd[1]: Started libpod-conmon-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope.
Feb 25 13:21:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.587880382 +0000 UTC m=+0.034381762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.697607897 +0000 UTC m=+0.144109257 container init e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.703253677 +0000 UTC m=+0.149755017 container start e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:50 compute-0 sweet_dubinsky[396480]: 167 167
Feb 25 13:21:50 compute-0 systemd[1]: libpod-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope: Deactivated successfully.
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.72534881 +0000 UTC m=+0.171850240 container attach e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.725771282 +0000 UTC m=+0.172272622 container died e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 25 13:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6697c9bd8fa06246c6da4221d441f2f8e41aa2a2d545f8e119df4f991369a522-merged.mount: Deactivated successfully.
Feb 25 13:21:50 compute-0 podman[396464]: 2026-02-25 13:21:50.787817673 +0000 UTC m=+0.234319053 container remove e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:21:50 compute-0 systemd[1]: libpod-conmon-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope: Deactivated successfully.
Feb 25 13:21:50 compute-0 podman[396506]: 2026-02-25 13:21:50.954386014 +0000 UTC m=+0.048783558 container create e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:51 compute-0 systemd[1]: Started libpod-conmon-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope.
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:50.931107127 +0000 UTC m=+0.025504761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:51.061642361 +0000 UTC m=+0.156039935 container init e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:51.069676888 +0000 UTC m=+0.164074472 container start e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:51.07329616 +0000 UTC m=+0.167693714 container attach e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:51 compute-0 nova_compute[244014]: 2026-02-25 13:21:51.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:51 compute-0 nifty_hertz[396523]: [
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:     {
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "available": false,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "being_replaced": false,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "ceph_device_lvm": false,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "lsm_data": {},
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "lvs": [],
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "path": "/dev/sr0",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "rejected_reasons": [
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "Insufficient space (<5GB)",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "Has a FileSystem"
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         ],
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         "sys_api": {
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "actuators": null,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "device_nodes": [
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:                 "sr0"
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             ],
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "devname": "sr0",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "human_readable_size": "482.00 KB",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "id_bus": "ata",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "model": "QEMU DVD-ROM",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "nr_requests": "2",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "parent": "/dev/sr0",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "partitions": {},
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "path": "/dev/sr0",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "removable": "1",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "rev": "2.5+",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "ro": "0",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "rotational": "1",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "sas_address": "",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "sas_device_handle": "",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "scheduler_mode": "mq-deadline",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "sectors": 0,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "sectorsize": "2048",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "size": 493568.0,
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "support_discard": "2048",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "type": "disk",
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:             "vendor": "QEMU"
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:         }
Feb 25 13:21:51 compute-0 nifty_hertz[396523]:     }
Feb 25 13:21:51 compute-0 nifty_hertz[396523]: ]
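[annotation] The JSON emitted by the nifty_hertz container above is the output of cephadm's periodic `ceph-volume inventory --format json` scan, which the mgr stores via the `config-key set ... host.compute-0.devices.0` command logged a few lines below. A minimal sketch of how such a report can be consumed, assuming only the JSON shape shown in the log (a list of devices with `available`, `path`, and `rejected_reasons` fields); this helper is hypothetical, not part of cephadm:

```python
#!/usr/bin/env python3
# Hypothetical helper: summarize a `ceph-volume inventory --format json`
# report such as the one logged above, printing why each device was
# accepted or rejected as an OSD candidate.
import json
import sys

def summarize_inventory(raw: str) -> None:
    devices = json.loads(raw)          # top level is a list of device dicts
    for dev in devices:
        path = dev.get("path", "?")
        if dev.get("available"):
            print(f"{path}: available")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "unknown"
            print(f"{path}: rejected ({reasons})")

if __name__ == "__main__":
    summarize_inventory(sys.stdin.read())
```

Fed the report above, this would print `/dev/sr0: rejected (Insufficient space (<5GB), Has a FileSystem)` — the QEMU DVD-ROM is the only raw device visible to the scan, so no physical data devices are available on this node.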
Feb 25 13:21:51 compute-0 systemd[1]: libpod-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope: Deactivated successfully.
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:51.647922877 +0000 UTC m=+0.742320461 container died e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b-merged.mount: Deactivated successfully.
Feb 25 13:21:51 compute-0 podman[396506]: 2026-02-25 13:21:51.702483647 +0000 UTC m=+0.796881221 container remove e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 13:21:51 compute-0 systemd[1]: libpod-conmon-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope: Deactivated successfully.
Feb 25 13:21:51 compute-0 sudo[396425]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:21:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:21:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:21:51 compute-0 sudo[397255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:21:51 compute-0 sudo[397255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:51 compute-0 sudo[397255]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:51 compute-0 sudo[397280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:21:51 compute-0 sudo[397280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
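[annotation] The sudo line above shows the mgr driving the node's hash-named cephadm copy to run `ceph-volume lvm batch --no-auto` against three pre-created logical volumes, with the cluster config and keyring passed on stdin (`--config-json -`) and systemd unit management left to cephadm (`--no-systemd`). A sketch reconstructing that invocation in Python, with the fsid, image, LV paths, and flags taken from the logged command line; the plain `cephadm` entry point stands in for the hash-suffixed copy under /var/lib/ceph and is an assumption:

```python
#!/usr/bin/env python3
# Sketch: rebuild the `ceph-volume lvm batch` call that cephadm logs above.
# Values are copied from the log; "cephadm" stands in for the hash-named
# /var/lib/ceph/<fsid>/cephadm.<sha256> wrapper actually invoked.
import shlex

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

cmd = [
    "cephadm",
    "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
    "--image", IMAGE,
    "--timeout", "895",
    "ceph-volume", "--fsid", FSID,
    "--config-json", "-",            # config + keyring arrive on stdin
    "--",
    "lvm", "batch", "--no-auto", *LVS,
    "--objectstore", "bluestore",
    "--yes", "--no-systemd",         # cephadm manages the systemd units itself
]
print(shlex.join(cmd))
```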
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.245080021 +0000 UTC m=+0.057889195 container create 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:21:52 compute-0 systemd[1]: Started libpod-conmon-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope.
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.220736104 +0000 UTC m=+0.033545318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.333869677 +0000 UTC m=+0.146678911 container init 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.341747889 +0000 UTC m=+0.154557053 container start 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.345441463 +0000 UTC m=+0.158250687 container attach 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:21:52 compute-0 pedantic_kepler[397336]: 167 167
Feb 25 13:21:52 compute-0 systemd[1]: libpod-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope: Deactivated successfully.
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.348099438 +0000 UTC m=+0.160908602 container died 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-175e1d6f7cf79deeb9b1fee486766be8534d5e06517a4e669506a0bd43dbf66b-merged.mount: Deactivated successfully.
Feb 25 13:21:52 compute-0 podman[397319]: 2026-02-25 13:21:52.399912471 +0000 UTC m=+0.212721635 container remove 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:21:52 compute-0 systemd[1]: libpod-conmon-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope: Deactivated successfully.
Feb 25 13:21:52 compute-0 podman[397361]: 2026-02-25 13:21:52.584962753 +0000 UTC m=+0.057698659 container create 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 13:21:52 compute-0 systemd[1]: Started libpod-conmon-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope.
Feb 25 13:21:52 compute-0 podman[397361]: 2026-02-25 13:21:52.562101368 +0000 UTC m=+0.034837344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:52 compute-0 podman[397361]: 2026-02-25 13:21:52.6904488 +0000 UTC m=+0.163184776 container init 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:21:52 compute-0 podman[397361]: 2026-02-25 13:21:52.703772306 +0000 UTC m=+0.176508212 container start 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:21:52 compute-0 podman[397361]: 2026-02-25 13:21:52.708563632 +0000 UTC m=+0.181299558 container attach 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:52 compute-0 ceph-mon[76335]: pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:21:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:21:53 compute-0 happy_spence[397378]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:21:53 compute-0 happy_spence[397378]: --> All data devices are unavailable
Feb 25 13:21:53 compute-0 systemd[1]: libpod-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope: Deactivated successfully.
Feb 25 13:21:53 compute-0 podman[397361]: 2026-02-25 13:21:53.176380475 +0000 UTC m=+0.649116361 container died 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e-merged.mount: Deactivated successfully.
Feb 25 13:21:53 compute-0 podman[397361]: 2026-02-25 13:21:53.236591654 +0000 UTC m=+0.709327530 container remove 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:21:53 compute-0 systemd[1]: libpod-conmon-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope: Deactivated successfully.
Feb 25 13:21:53 compute-0 sudo[397280]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:53 compute-0 podman[397399]: 2026-02-25 13:21:53.330436173 +0000 UTC m=+0.122768806 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 25 13:21:53 compute-0 podman[397406]: 2026-02-25 13:21:53.342986137 +0000 UTC m=+0.132870701 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 13:21:53 compute-0 sudo[397443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:21:53 compute-0 sudo[397443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:53 compute-0 sudo[397443]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:53 compute-0 sudo[397475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:21:53 compute-0 sudo[397475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:53 compute-0 nova_compute[244014]: 2026-02-25 13:21:53.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.686308086 +0000 UTC m=+0.062168405 container create 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:53 compute-0 systemd[1]: Started libpod-conmon-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope.
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.659136229 +0000 UTC m=+0.034996608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.781979496 +0000 UTC m=+0.157839875 container init 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.789321693 +0000 UTC m=+0.165182022 container start 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.794807798 +0000 UTC m=+0.170668167 container attach 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:21:53 compute-0 clever_mcnulty[397531]: 167 167
Feb 25 13:21:53 compute-0 systemd[1]: libpod-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope: Deactivated successfully.
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.796242019 +0000 UTC m=+0.172102348 container died 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f903077075e667bcb6095c3a4eac141570a50628b01871cf9f64cb920de8dee9-merged.mount: Deactivated successfully.
Feb 25 13:21:53 compute-0 podman[397515]: 2026-02-25 13:21:53.837869674 +0000 UTC m=+0.213729993 container remove 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:21:53 compute-0 systemd[1]: libpod-conmon-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope: Deactivated successfully.
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.001570804 +0000 UTC m=+0.054228492 container create f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:54 compute-0 systemd[1]: Started libpod-conmon-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope.
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:53.971876056 +0000 UTC m=+0.024533764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.103635724 +0000 UTC m=+0.156293392 container init f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.119010758 +0000 UTC m=+0.171668416 container start f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.123558067 +0000 UTC m=+0.176215725 container attach f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:21:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:54 compute-0 adoring_thompson[397571]: {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     "0": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "devices": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "/dev/loop3"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             ],
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_name": "ceph_lv0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_size": "21470642176",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "name": "ceph_lv0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "tags": {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_name": "ceph",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.crush_device_class": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.encrypted": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.objectstore": "bluestore",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_id": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.vdo": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.with_tpm": "0"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             },
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "vg_name": "ceph_vg0"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         }
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     ],
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     "1": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "devices": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "/dev/loop4"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             ],
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_name": "ceph_lv1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_size": "21470642176",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "name": "ceph_lv1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "tags": {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_name": "ceph",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.crush_device_class": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.encrypted": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.objectstore": "bluestore",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_id": "1",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.vdo": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.with_tpm": "0"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             },
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "vg_name": "ceph_vg1"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         }
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     ],
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     "2": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "devices": [
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "/dev/loop5"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             ],
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_name": "ceph_lv2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_size": "21470642176",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "name": "ceph_lv2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "tags": {
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.cluster_name": "ceph",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.crush_device_class": "",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.encrypted": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.objectstore": "bluestore",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osd_id": "2",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.vdo": "0",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:                 "ceph.with_tpm": "0"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             },
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "type": "block",
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:             "vg_name": "ceph_vg2"
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:         }
Feb 25 13:21:54 compute-0 adoring_thompson[397571]:     ]
Feb 25 13:21:54 compute-0 adoring_thompson[397571]: }
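The JSON that closes here is the tail of a "ceph-volume lvm list --format json" report gathered by cephadm: each OSD id maps to a list of logical volumes, and the authoritative metadata rides on the LVM tags (ceph.osd_id, ceph.osd_fsid, ceph.cluster_fsid, ceph.type=block, and so on). The flat lv_tags string and the structured tags object carry the same key/value pairs. A minimal parsing sketch (function names are illustrative, not cephadm's own; it assumes tag values contain no commas, which holds for the tags above):

    import json

    def parse_lv_tags(lv_tags):
        # Split ceph-volume's comma-separated "key=value" tag string into a dict.
        return dict(pair.split("=", 1) for pair in lv_tags.split(",") if "=" in pair)

    def osds_from_report(report_text):
        # Reduce a "ceph-volume lvm list --format json" report to
        # (osd_id, osd_fsid, lv_path) rows.
        rows = []
        for osd_id, lvs in json.loads(report_text).items():
            for lv in lvs:
                tags = parse_lv_tags(lv["lv_tags"])
                rows.append((osd_id, tags.get("ceph.osd_fsid"), lv["lv_path"]))
        return rows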
Feb 25 13:21:54 compute-0 systemd[1]: libpod-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope: Deactivated successfully.
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.439749459 +0000 UTC m=+0.492407157 container died f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd-merged.mount: Deactivated successfully.
Feb 25 13:21:54 compute-0 podman[397555]: 2026-02-25 13:21:54.495720619 +0000 UTC m=+0.548378297 container remove f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:54 compute-0 systemd[1]: libpod-conmon-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope: Deactivated successfully.
Feb 25 13:21:54 compute-0 sudo[397475]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:54 compute-0 sudo[397592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:21:54 compute-0 sudo[397592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:54 compute-0 sudo[397592]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:54 compute-0 sudo[397617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:21:54 compute-0 sudo[397617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
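The two sudo records above show cephadm's remote-execution pattern: the mgr's cephadm module logs in as ceph-admin, locates python3, then runs the cephadm binary it previously copied under /var/lib/ceph/<fsid>/ with a timeout; that binary in turn launches a short-lived podman container from the pinned ceph image to run ceph-volume. Rebuilt as an argv list (every value copied verbatim from the sudo line; the cephadm file name and image digest are host-specific artifacts of this deployment, not stable paths):

    import subprocess

    argv = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "--image", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        "--timeout", "895",
        "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
        "--", "raw", "list", "--format", "json",
    ]
    raw_report = subprocess.check_output(argv)  # "{}" on this host, see below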
Feb 25 13:21:54 compute-0 ceph-mon[76335]: pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
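The acquire/release pairs are oslo.concurrency's lockutils at work: nova serializes resource-tracker updates under the in-process "compute_resources" lock, and the waited/held durations are logged on exit. A minimal sketch of the same primitive, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # In-process lock, the same primitive behind the acquire/release lines above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # one caller at a time runs under this lock name

    # The context-manager form is equivalent:
    with lockutils.lock("compute_resources"):
        pass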
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:21:54 compute-0 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.033488606 +0000 UTC m=+0.068309949 container create a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.063 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:21:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:21:55 compute-0 systemd[1]: Started libpod-conmon-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope.
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.001280597 +0000 UTC m=+0.036101970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.131454331 +0000 UTC m=+0.166275744 container init a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.141532176 +0000 UTC m=+0.176353529 container start a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.150110598 +0000 UTC m=+0.184932001 container attach a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:21:55 compute-0 adoring_cerf[397693]: 167 167
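adoring_cerf's entire output is the pair "167 167", the uid and gid of the ceph user inside upstream ceph images. A plausible reconstruction of the probe (which path cephadm actually stats is an assumption here; the image digest is copied from the log) is to run the image with a stat entrypoint so host-side daemon directories can be chowned to match:

    import subprocess

    image = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"
    # Ask the image which uid/gid owns /var/lib/ceph (hypothetical target path).
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # "167 167", as printed by adoring_cerf above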
Feb 25 13:21:55 compute-0 systemd[1]: libpod-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope: Deactivated successfully.
Feb 25 13:21:55 compute-0 conmon[397693]: conmon a194225dfab1511ddcb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope/container/memory.events
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.154152702 +0000 UTC m=+0.188974045 container died a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeafa9c77999755d48ee17b738ee5c2b332e1568032f037f59d9a1e9b2813f82-merged.mount: Deactivated successfully.
Feb 25 13:21:55 compute-0 podman[397656]: 2026-02-25 13:21:55.214861455 +0000 UTC m=+0.249682808 container remove a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:21:55 compute-0 systemd[1]: libpod-conmon-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope: Deactivated successfully.
Feb 25 13:21:55 compute-0 podman[397717]: 2026-02-25 13:21:55.354368772 +0000 UTC m=+0.043040975 container create 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:21:55 compute-0 systemd[1]: Started libpod-conmon-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope.
Feb 25 13:21:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
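The four xfs notices mean these overlay mounts use the legacy XFS timestamp format, whose signed 32-bit seconds field ends at 0x7fffffff; filesystems created with bigtime support extend past that. The cutoff the kernel prints, checked in one line:

    from datetime import datetime, timezone

    # 0x7fffffff seconds past the epoch: the classic y2038 ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00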
Feb 25 13:21:55 compute-0 podman[397717]: 2026-02-25 13:21:55.335296884 +0000 UTC m=+0.023969087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:21:55 compute-0 podman[397717]: 2026-02-25 13:21:55.446838772 +0000 UTC m=+0.135510975 container init 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 13:21:55 compute-0 podman[397717]: 2026-02-25 13:21:55.456746282 +0000 UTC m=+0.145418505 container start 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:21:55 compute-0 podman[397717]: 2026-02-25 13:21:55.461745073 +0000 UTC m=+0.150417276 container attach 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:21:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:21:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494353662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:21:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
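That 0.629 s round trip is nova's libvirt driver sizing its RBD storage: it shells out to ceph df as client.openstack (the mon audits the df at 13:21:55 above) and reads the JSON. A sketch of the same call, assuming the pools/stats field names emitted by current Ceph releases:

    import json
    import subprocess

    def pool_stats(pool, user="openstack", conf="/etc/ceph/ceph.conf"):
        # Re-run the logged command and pull one pool's usage numbers.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                return p["stats"]["bytes_used"], p["stats"]["max_avail"]
        raise KeyError(pool)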
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.671 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:21:55 compute-0 nova_compute[244014]: 2026-02-25 13:21:55.757 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:21:55 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1494353662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:21:56 compute-0 lvm[397835]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:21:56 compute-0 lvm[397835]: VG ceph_vg1 finished
Feb 25 13:21:56 compute-0 lvm[397833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:21:56 compute-0 lvm[397833]: VG ceph_vg0 finished
Feb 25 13:21:56 compute-0 lvm[397836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:21:56 compute-0 lvm[397836]: VG ceph_vg2 finished
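The lvm[...] messages are event-based autoactivation: as udev observes each loop device's PV, lvm marks the owning VG complete and finishes activating it; here each of ceph_vg0..2 sits on a single loop PV, matching the devices arrays in the report above. The same PV-to-VG pairing can be read back through lvm2's JSON reporting, for example:

    import json
    import subprocess

    # List PV -> VG pairs, e.g. /dev/loop3..5 backing ceph_vg0..2 on this host.
    report = json.loads(subprocess.check_output(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"]))
    for pv in report["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])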
Feb 25 13:21:56 compute-0 admiring_jones[397733]: {}
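admiring_jones is the "raw list" container started by the second sudo'd cephadm call; its entire stdout is the empty object, which reads as: no raw-mode (non-LVM) OSDs exist on this host, so the earlier "lvm list" report is the complete OSD inventory.

    import json

    raw_report = json.loads("{}")  # the container's whole stdout above
    assert raw_report == {}        # inventory is LVM-only on this host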
Feb 25 13:21:56 compute-0 systemd[1]: libpod-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Deactivated successfully.
Feb 25 13:21:56 compute-0 systemd[1]: libpod-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Consumed 1.078s CPU time.
Feb 25 13:21:56 compute-0 podman[397717]: 2026-02-25 13:21:56.212392228 +0000 UTC m=+0.901064421 container died 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982-merged.mount: Deactivated successfully.
Feb 25 13:21:56 compute-0 podman[397717]: 2026-02-25 13:21:56.25248845 +0000 UTC m=+0.941160633 container remove 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:21:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552420912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:21:56 compute-0 systemd[1]: libpod-conmon-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Deactivated successfully.
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.299 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.304 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:21:56 compute-0 sudo[397617]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:21:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.323 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.325 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:21:56 compute-0 nova_compute[244014]: 2026-02-25 13:21:56.325 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:21:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:56 compute-0 sudo[397854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:21:56 compute-0 sudo[397854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:21:56 compute-0 sudo[397854]: pam_unix(sudo:session): session closed for user root
Feb 25 13:21:56 compute-0 ceph-mon[76335]: pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/552420912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:21:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:21:57 compute-0 nova_compute[244014]: 2026-02-25 13:21:57.326 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:57 compute-0 nova_compute[244014]: 2026-02-25 13:21:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:21:57 compute-0 nova_compute[244014]: 2026-02-25 13:21:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:21:57 compute-0 nova_compute[244014]: 2026-02-25 13:21:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:21:57 compute-0 nova_compute[244014]: 2026-02-25 13:21:57.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
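The recurring "Running periodic task ComputeManager.*" lines come from oslo.service's periodic-task machinery: manager methods carry a decorator, and run_periodic_tasks dispatches whichever are due on each timer tick, emitting exactly these DEBUG lines. A minimal sketch, assuming oslo.service and oslo.config:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # nova refreshes one instance's network info per pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # logs "Running periodic task ..."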
Feb 25 13:21:58 compute-0 nova_compute[244014]: 2026-02-25 13:21:58.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:21:58 compute-0 ceph-mon[76335]: pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:21:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:21:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:00 compute-0 ceph-mon[76335]: pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:01 compute-0 nova_compute[244014]: 2026-02-25 13:22:01.269 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:02 compute-0 ceph-mon[76335]: pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:02 compute-0 nova_compute[244014]: 2026-02-25 13:22:02.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:03 compute-0 sshd-session[397879]: Invalid user validator from 80.94.92.186 port 44902
Feb 25 13:22:03 compute-0 nova_compute[244014]: 2026-02-25 13:22:03.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:03 compute-0 sshd-session[397879]: Connection closed by invalid user validator 80.94.92.186 port 44902 [preauth]
Feb 25 13:22:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:04 compute-0 ceph-mon[76335]: pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:05 compute-0 nova_compute[244014]: 2026-02-25 13:22:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:05 compute-0 nova_compute[244014]: 2026-02-25 13:22:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
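The skip is driven by a single config option: with reclaim_instance_interval at its default of 0, deletes are immediate and the reclaim task has nothing to do; a positive value soft-deletes instances and reclaims them after that many seconds. A sketch of the knob via oslo.config (option name from the log line; default per nova's documented behavior):

    from oslo_config import cfg

    cfg.CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])
    print(cfg.CONF.reclaim_instance_interval)  # 0 -> "skipping..." as logged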
Feb 25 13:22:06 compute-0 nova_compute[244014]: 2026-02-25 13:22:06.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:06 compute-0 ceph-mon[76335]: pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:06 compute-0 nova_compute[244014]: 2026-02-25 13:22:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:08 compute-0 nova_compute[244014]: 2026-02-25 13:22:08.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:08 compute-0 ceph-mon[76335]: pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:08 compute-0 nova_compute[244014]: 2026-02-25 13:22:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:10 compute-0 ceph-mon[76335]: pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:11 compute-0 nova_compute[244014]: 2026-02-25 13:22:11.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:11 compute-0 nova_compute[244014]: 2026-02-25 13:22:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:12 compute-0 ceph-mon[76335]: pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:13 compute-0 nova_compute[244014]: 2026-02-25 13:22:13.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:14 compute-0 ceph-mon[76335]: pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:16 compute-0 nova_compute[244014]: 2026-02-25 13:22:16.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:16 compute-0 ceph-mon[76335]: pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:17 compute-0 nova_compute[244014]: 2026-02-25 13:22:17.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:18 compute-0 nova_compute[244014]: 2026-02-25 13:22:18.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:18 compute-0 ceph-mon[76335]: pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:20 compute-0 ceph-mon[76335]: pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:21 compute-0 nova_compute[244014]: 2026-02-25 13:22:21.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:22 compute-0 ceph-mon[76335]: pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:23 compute-0 nova_compute[244014]: 2026-02-25 13:22:23.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:23 compute-0 podman[397881]: 2026-02-25 13:22:23.761838817 +0000 UTC m=+0.091794822 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 13:22:23 compute-0 podman[397882]: 2026-02-25 13:22:23.781891193 +0000 UTC m=+0.110156600 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
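Both health_status events are podman's healthcheck timer firing the probe declared in config_data ('test': '/openstack/healthcheck') inside each container; health_status=healthy with health_failing_streak=0 means consecutive passes. The same probe can be run on demand:

    import subprocess

    # Exit status 0 means the container's healthcheck passed.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=True)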
Feb 25 13:22:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:24 compute-0 ceph-mon[76335]: pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:25 compute-0 nova_compute[244014]: 2026-02-25 13:22:25.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:26 compute-0 nova_compute[244014]: 2026-02-25 13:22:26.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:26 compute-0 ceph-mon[76335]: pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:28 compute-0 nova_compute[244014]: 2026-02-25 13:22:28.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:28 compute-0 ceph-mon[76335]: pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:30 compute-0 ceph-mon[76335]: pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:22:31
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'backups', 'images']
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
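Once a minute the mgr balancer builds an optimize plan in upmap mode; "prepared 0/10 upmap changes" means the 305 PGs are already spread evenly across the OSDs, so no pg-upmap-items entries were generated. The "max misplaced 0.050000" figure caps how much data a single plan may set in motion:

    pgs = 305             # active+clean PGs, per the pgmap lines above
    max_misplaced = 0.05  # "max misplaced 0.050000" from the plan header
    print(int(pgs * max_misplaced))  # ~15 PGs movable by one optimize round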
Feb 25 13:22:31 compute-0 nova_compute[244014]: 2026-02-25 13:22:31.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:22:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:22:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:22:32 compute-0 ceph-mon[76335]: pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:33 compute-0 nova_compute[244014]: 2026-02-25 13:22:33.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:34 compute-0 ceph-mon[76335]: pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:36 compute-0 nova_compute[244014]: 2026-02-25 13:22:36.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:36 compute-0 ceph-mon[76335]: pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:38 compute-0 nova_compute[244014]: 2026-02-25 13:22:38.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:38 compute-0 ceph-mon[76335]: pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:41 compute-0 ceph-mon[76335]: pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:41 compute-0 nova_compute[244014]: 2026-02-25 13:22:41.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:43 compute-0 ceph-mon[76335]: pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
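[annotation] The pg_autoscaler lines are consistent with the documented heuristic: a pool's ideal PG count is its share of raw capacity times its bias times the cluster PG budget, here the default 100 target PGs per OSD times the 3 OSDs behind the 64411926528-byte (60 GiB) root. For example, 'images' at 0.0006714637 of space with bias 1.0 gives 0.0006714637 × 300 ≈ 0.2014, exactly the logged pg target; the result is then rounded to a power of two and only applied when it differs from the current pg_num by roughly a factor of three, which is why every pool stays at its current value here. A sketch reproducing the arithmetic from the log (of the documented heuristic, not the mgr module's exact code):

    # Reproduce the "pg target" values logged at 13:22:43.
    PG_BUDGET = 100 * 3  # assumed mon_target_pg_per_osd=100 x 3 OSDs

    pools = {            # pool: (usage_ratio, bias) copied from the log lines
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (1.73878357684759e-05, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)  # matches each "pg target" above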
Feb 25 13:22:43 compute-0 nova_compute[244014]: 2026-02-25 13:22:43.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:45 compute-0 ceph-mon[76335]: pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:46 compute-0 nova_compute[244014]: 2026-02-25 13:22:46.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:47 compute-0 ceph-mon[76335]: pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:22:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:22:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:22:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:22:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:22:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
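[annotation] The audit trail above shows entity='client.openstack' (from 192.168.122.10) polling cluster capacity with df and osd pool get-quota on the volumes pool; the nova_compute lines further down run the same df query from this host. A minimal sketch of that polling, reusing the client id and conf path that appear verbatim in the log; the JSON field names are the usual `ceph df` output keys:

    import json
    import subprocess

    # Same capacity probe nova_compute runs at 13:22:54 (see the
    # oslo_concurrency.processutils lines below); ids/paths copied from the log.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd, text=True))
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / total:.1%} of {total / 2**30:.0f} GiB available")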
Feb 25 13:22:48 compute-0 nova_compute[244014]: 2026-02-25 13:22:48.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:49 compute-0 ceph-mon[76335]: pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:51 compute-0 ceph-mon[76335]: pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:51 compute-0 nova_compute[244014]: 2026-02-25 13:22:51.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:53 compute-0 ceph-mon[76335]: pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:53 compute-0 nova_compute[244014]: 2026-02-25 13:22:53.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:54 compute-0 podman[397927]: 2026-02-25 13:22:54.721537649 +0000 UTC m=+0.062580318 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 13:22:54 compute-0 podman[397928]: 2026-02-25 13:22:54.782934261 +0000 UTC m=+0.122111137 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
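[annotation] Both health_status events report healthy with health_failing_streak=0; per the embedded config_data, each container's check simply executes /openstack/healthcheck from the bind-mounted healthchecks directory. A hedged way to trigger the same check on demand with podman's standard healthcheck command (container names taken from the log):

    import subprocess

    # `podman healthcheck run NAME` executes the configured test once;
    # exit code 0 means healthy, non-zero means the check failed.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")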
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.976 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:22:54 compute-0 nova_compute[244014]: 2026-02-25 13:22:54.978 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:22:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
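[annotation] The acquire/acquired/released triplets from nova_compute ("compute_resources") and ovn_metadata_agent ("_check_child_processes") are the standard oslo.concurrency locking pattern: a named lock serializes the periodic task against concurrent claimants, and the library logs the wait and hold times seen above. A minimal sketch of the same pattern; the decorator and context manager are oslo_concurrency's public API, while the function bodies are illustrative only:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Body is hypothetical; the real work is nova's ResourceTracker.
        pass

    # Equivalent context-manager form, for short critical sections:
    with lockutils.lock("_check_child_processes"):
        pass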
Feb 25 13:22:55 compute-0 ceph-mon[76335]: pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:22:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39669680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.491 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:22:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.659 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:22:55 compute-0 nova_compute[244014]: 2026-02-25 13:22:55.777 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:22:56 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/39669680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:22:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053153168' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.353 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.370 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.373 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:22:56 compute-0 nova_compute[244014]: 2026-02-25 13:22:56.373 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
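[annotation] The inventory dict nova reports to placement makes the schedulable capacity easy to derive: per resource class, capacity = (total − reserved) × allocation_ratio. With the values logged above that is 8 × 4.0 = 32 VCPU, (7679 − 512) × 1.0 = 7167 MB of RAM, and (59 − 1) × 0.9 ≈ 52 GB of disk. The same arithmetic, using the dict exactly as logged at 13:22:56 (non-essential keys omitted):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # 32.0 VCPU, 7167.0 MB, 52.2 GB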
Feb 25 13:22:56 compute-0 sudo[398014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:22:56 compute-0 sudo[398014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:56 compute-0 sudo[398014]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:56 compute-0 sudo[398039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:22:56 compute-0 sudo[398039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:56 compute-0 sudo[398039]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:22:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:22:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:57 compute-0 sudo[398086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:22:57 compute-0 sudo[398086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:57 compute-0 sudo[398086]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:57 compute-0 sudo[398111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:22:57 compute-0 sudo[398111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4053153168' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:22:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:57 compute-0 nova_compute[244014]: 2026-02-25 13:22:57.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:57 compute-0 sudo[398111]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:22:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:22:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:22:57 compute-0 sudo[398167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:22:57 compute-0 sudo[398167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:57 compute-0 sudo[398167]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:57 compute-0 sudo[398192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:22:57 compute-0 sudo[398192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:57 compute-0 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:22:57 compute-0 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:22:57 compute-0 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:22:57 compute-0 nova_compute[244014]: 2026-02-25 13:22:57.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.042607767 +0000 UTC m=+0.060464308 container create aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:22:58 compute-0 systemd[1]: Started libpod-conmon-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope.
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.014544505 +0000 UTC m=+0.032401106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:22:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:22:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.141976861 +0000 UTC m=+0.159833462 container init aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.152538529 +0000 UTC m=+0.170395070 container start aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.157341285 +0000 UTC m=+0.175197836 container attach aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:22:58 compute-0 quirky_carson[398246]: 167 167
Feb 25 13:22:58 compute-0 systemd[1]: libpod-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope: Deactivated successfully.
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.159587068 +0000 UTC m=+0.177443629 container died aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7e37d35d025ac9672fc0493a04fe334df0423993b1840342037db5550cd251-merged.mount: Deactivated successfully.
Feb 25 13:22:58 compute-0 podman[398230]: 2026-02-25 13:22:58.199746211 +0000 UTC m=+0.217602722 container remove aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 13:22:58 compute-0 systemd[1]: libpod-conmon-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope: Deactivated successfully.
Feb 25 13:22:58 compute-0 podman[398271]: 2026-02-25 13:22:58.362881746 +0000 UTC m=+0.069282117 container create 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:22:58 compute-0 systemd[1]: Started libpod-conmon-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope.
Feb 25 13:22:58 compute-0 podman[398271]: 2026-02-25 13:22:58.331577592 +0000 UTC m=+0.037978023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:22:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:22:58 compute-0 podman[398271]: 2026-02-25 13:22:58.478367435 +0000 UTC m=+0.184767816 container init 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:22:58 compute-0 podman[398271]: 2026-02-25 13:22:58.496440705 +0000 UTC m=+0.202841076 container start 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:22:58 compute-0 podman[398271]: 2026-02-25 13:22:58.500811928 +0000 UTC m=+0.207212299 container attach 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:22:58 compute-0 nova_compute[244014]: 2026-02-25 13:22:58.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:22:59 compute-0 nostalgic_khorana[398288]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:22:59 compute-0 nostalgic_khorana[398288]: --> All data devices are unavailable
Feb 25 13:22:59 compute-0 systemd[1]: libpod-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope: Deactivated successfully.
Feb 25 13:22:59 compute-0 podman[398308]: 2026-02-25 13:22:59.082304008 +0000 UTC m=+0.025308885 container died 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:22:59 compute-0 ceph-mon[76335]: pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e-merged.mount: Deactivated successfully.
Feb 25 13:22:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:22:59 compute-0 podman[398308]: 2026-02-25 13:22:59.232115006 +0000 UTC m=+0.175119833 container remove 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:22:59 compute-0 systemd[1]: libpod-conmon-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope: Deactivated successfully.
Feb 25 13:22:59 compute-0 sudo[398192]: pam_unix(sudo:session): session closed for user root
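[annotation] The nostalgic_khorana output is the notable event in this span: cephadm's `ceph-volume lvm batch` over /dev/ceph_vg0/ceph_lv0..ceph_lv2 saw "passed data devices: 0 physical, 3 LVM" and then "All data devices are unavailable", meaning ceph-volume rejected all three logical volumes (typically because they already carry OSD data or otherwise fail its filters), so no new OSDs were created; cephadm then falls back to the `lvm list` inventory in the next sudo command below. A sketch of inspecting that inventory directly, assuming the usual lvm list JSON layout of OSD-id keys mapping to device entries carrying ceph.* tags:

    import json
    import subprocess

    # Same query cephadm issues below (ceph-volume -- lvm list --format json),
    # run directly for illustration; requires ceph-volume on the host.
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True)
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev.get("tags", {})
            print(osd_id, dev.get("lv_path"), tags.get("ceph.osd_fsid"))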
Feb 25 13:22:59 compute-0 sudo[398323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:22:59 compute-0 sudo[398323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:59 compute-0 sudo[398323]: pam_unix(sudo:session): session closed for user root
Feb 25 13:22:59 compute-0 sudo[398348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:22:59 compute-0 sudo[398348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:22:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.742908172 +0000 UTC m=+0.058055109 container create add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:22:59 compute-0 systemd[1]: Started libpod-conmon-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope.
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.719670366 +0000 UTC m=+0.034817383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:22:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.85019781 +0000 UTC m=+0.165344827 container init add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.858375891 +0000 UTC m=+0.173522858 container start add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.862737304 +0000 UTC m=+0.177884281 container attach add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:22:59 compute-0 pensive_payne[398402]: 167 167
Feb 25 13:22:59 compute-0 systemd[1]: libpod-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope: Deactivated successfully.
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.86505805 +0000 UTC m=+0.180205017 container died add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e25561167777e21a12837edee48c1eaabdbd72c34a398f9367296591814f5bea-merged.mount: Deactivated successfully.
Feb 25 13:22:59 compute-0 podman[398385]: 2026-02-25 13:22:59.919586748 +0000 UTC m=+0.234733715 container remove add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:22:59 compute-0 systemd[1]: libpod-conmon-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope: Deactivated successfully.
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.109148518 +0000 UTC m=+0.082895020 container create 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.062173013 +0000 UTC m=+0.035919485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:23:00 compute-0 systemd[1]: Started libpod-conmon-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope.
Feb 25 13:23:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
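
Note: the "supports timestamps until 2038 (0x7fffffff)" warnings are the kernel flagging XFS filesystems formatted without the bigtime feature: their on-disk timestamps are 32-bit signed seconds since the Unix epoch, so they stop at 0x7fffffff. A quick check of that limit (plain Python, nothing cluster-specific):

    import datetime

    limit = 0x7FFFFFFF  # largest 32-bit signed Unix timestamp
    print(limit)        # 2147483647 seconds
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the cutoff the kernel warns about
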
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.278472347 +0000 UTC m=+0.252218769 container init 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.287229044 +0000 UTC m=+0.260975466 container start 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.292605706 +0000 UTC m=+0.266352128 container attach 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:23:00 compute-0 musing_feynman[398442]: {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     "0": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "devices": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "/dev/loop3"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             ],
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_name": "ceph_lv0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_size": "21470642176",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "name": "ceph_lv0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "tags": {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_name": "ceph",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.crush_device_class": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.encrypted": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.objectstore": "bluestore",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_id": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.vdo": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.with_tpm": "0"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             },
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "vg_name": "ceph_vg0"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         }
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     ],
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     "1": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "devices": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "/dev/loop4"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             ],
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_name": "ceph_lv1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_size": "21470642176",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "name": "ceph_lv1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "tags": {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_name": "ceph",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.crush_device_class": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.encrypted": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.objectstore": "bluestore",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_id": "1",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.vdo": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.with_tpm": "0"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             },
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "vg_name": "ceph_vg1"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         }
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     ],
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     "2": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "devices": [
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "/dev/loop5"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             ],
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_name": "ceph_lv2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_size": "21470642176",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "name": "ceph_lv2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "tags": {
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.cluster_name": "ceph",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.crush_device_class": "",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.encrypted": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.objectstore": "bluestore",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osd_id": "2",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.vdo": "0",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:                 "ceph.with_tpm": "0"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             },
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "type": "block",
Feb 25 13:23:00 compute-0 musing_feynman[398442]:             "vg_name": "ceph_vg2"
Feb 25 13:23:00 compute-0 musing_feynman[398442]:         }
Feb 25 13:23:00 compute-0 musing_feynman[398442]:     ]
Feb 25 13:23:00 compute-0 musing_feynman[398442]: }
Feb 25 13:23:00 compute-0 systemd[1]: libpod-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope: Deactivated successfully.
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.592460359 +0000 UTC m=+0.566206751 container died 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e-merged.mount: Deactivated successfully.
Feb 25 13:23:00 compute-0 podman[398426]: 2026-02-25 13:23:00.690847166 +0000 UTC m=+0.664593588 container remove 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:23:00 compute-0 systemd[1]: libpod-conmon-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope: Deactivated successfully.
Feb 25 13:23:00 compute-0 sudo[398348]: pam_unix(sudo:session): session closed for user root
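
Note: the JSON block printed by musing_feynman matches the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags present both flattened (lv_tags) and structured (tags). A minimal sketch of consuming it, assuming the block has been captured to a file named lvm_list.json (hypothetical name):

    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} objectstore={tags['ceph.objectstore']}")
    # For the output above this yields three lines, e.g.:
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=d19afe3c-7923-4776-bcc2-88886150b441 objectstore=bluestore
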
Feb 25 13:23:00 compute-0 sudo[398464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:23:00 compute-0 sudo[398464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:23:00 compute-0 sudo[398464]: pam_unix(sudo:session): session closed for user root
Feb 25 13:23:00 compute-0 sudo[398489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:23:00 compute-0 sudo[398489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.205612213 +0000 UTC m=+0.069687998 container create 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:23:01 compute-0 ceph-mon[76335]: pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:01 compute-0 systemd[1]: Started libpod-conmon-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope.
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.169526535 +0000 UTC m=+0.033602389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:23:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:23:01 compute-0 nova_compute[244014]: 2026-02-25 13:23:01.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.293940336 +0000 UTC m=+0.158016190 container init 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.304433793 +0000 UTC m=+0.168509587 container start 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:23:01 compute-0 stupefied_matsumoto[398543]: 167 167
Feb 25 13:23:01 compute-0 systemd[1]: libpod-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope: Deactivated successfully.
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.317552273 +0000 UTC m=+0.181628027 container attach 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.31815497 +0000 UTC m=+0.182230794 container died 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a6c5e0ff7948e2fd5f7ec918715cf540c574ff85cbddf15c42122a78cd3719-merged.mount: Deactivated successfully.
Feb 25 13:23:01 compute-0 podman[398527]: 2026-02-25 13:23:01.400244027 +0000 UTC m=+0.264319811 container remove 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:23:01 compute-0 systemd[1]: libpod-conmon-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope: Deactivated successfully.
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:01 compute-0 podman[398567]: 2026-02-25 13:23:01.608912706 +0000 UTC m=+0.081201563 container create 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:01 compute-0 podman[398567]: 2026-02-25 13:23:01.556535887 +0000 UTC m=+0.028824804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:23:01 compute-0 systemd[1]: Started libpod-conmon-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope.
Feb 25 13:23:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:23:01 compute-0 podman[398567]: 2026-02-25 13:23:01.717201302 +0000 UTC m=+0.189490179 container init 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:23:01 compute-0 podman[398567]: 2026-02-25 13:23:01.723717436 +0000 UTC m=+0.196006303 container start 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:23:01 compute-0 podman[398567]: 2026-02-25 13:23:01.763020405 +0000 UTC m=+0.235309242 container attach 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:23:02 compute-0 lvm[398659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:23:02 compute-0 lvm[398659]: VG ceph_vg0 finished
Feb 25 13:23:02 compute-0 lvm[398662]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:23:02 compute-0 lvm[398662]: VG ceph_vg1 finished
Feb 25 13:23:02 compute-0 lvm[398664]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:23:02 compute-0 lvm[398664]: VG ceph_vg2 finished
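
Note: the lvm messages are event-driven autoactivation at work: as each loop device's PV appears, lvm checks whether that PV's volume group now has all of its PVs, and these single-PV groups (ceph_vg0..ceph_vg2) complete immediately. One way to inspect the same state after the fact, a sketch assuming lvm2's JSON reporting is available on the host:

    import json
    import subprocess

    # vgs can emit its report as JSON; pv_count/lv_count are standard fields.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        # "VG ... is complete" above means every PV of the group is online.
        print(vg["vg_name"], "pvs=" + vg["pv_count"], "lvs=" + vg["lv_count"])
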
Feb 25 13:23:02 compute-0 frosty_mccarthy[398583]: {}
Feb 25 13:23:02 compute-0 systemd[1]: libpod-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Deactivated successfully.
Feb 25 13:23:02 compute-0 podman[398567]: 2026-02-25 13:23:02.531360279 +0000 UTC m=+1.003649146 container died 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:23:02 compute-0 systemd[1]: libpod-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Consumed 1.178s CPU time.
Feb 25 13:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6-merged.mount: Deactivated successfully.
Feb 25 13:23:02 compute-0 podman[398567]: 2026-02-25 13:23:02.702586901 +0000 UTC m=+1.174875758 container remove 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:23:02 compute-0 systemd[1]: libpod-conmon-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Deactivated successfully.
Feb 25 13:23:02 compute-0 sudo[398489]: pam_unix(sudo:session): session closed for user root
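
Note: the sudo line above shows how the orchestrator drives ceph-volume: the cephadm copy under /var/lib/ceph/<fsid>/ launches a short-lived container (frosty_mccarthy) that runs `raw list --format json` inside the ceph image. The `{}` it printed means no raw-mode (non-LVM-managed) OSDs exist on this host, consistent with the LVM-backed OSDs in the earlier listing. Replaying the same call outside the orchestrator, as a sketch (command arguments copied verbatim from the sudo line):

    import json
    import subprocess

    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "--image", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        "--timeout", "895",
        "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
        "--", "raw", "list", "--format", "json",
    ]
    raw_osds = json.loads(subprocess.run(cmd, capture_output=True,
                                         text=True, check=True).stdout)
    print(raw_osds or "no raw OSDs on this host")  # {} above -> none
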
Feb 25 13:23:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:23:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:23:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:23:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:23:02 compute-0 sudo[398682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:23:02 compute-0 sudo[398682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:23:02 compute-0 sudo[398682]: pam_unix(sudo:session): session closed for user root
Feb 25 13:23:03 compute-0 ceph-mon[76335]: pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:23:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:23:03 compute-0 nova_compute[244014]: 2026-02-25 13:23:03.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:03 compute-0 nova_compute[244014]: 2026-02-25 13:23:03.898 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:05 compute-0 ceph-mon[76335]: pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:05 compute-0 nova_compute[244014]: 2026-02-25 13:23:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:05 compute-0 nova_compute[244014]: 2026-02-25 13:23:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:23:06 compute-0 nova_compute[244014]: 2026-02-25 13:23:06.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:07 compute-0 ceph-mon[76335]: pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:07 compute-0 nova_compute[244014]: 2026-02-25 13:23:07.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:08 compute-0 nova_compute[244014]: 2026-02-25 13:23:08.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:08 compute-0 nova_compute[244014]: 2026-02-25 13:23:08.881 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:09 compute-0 ceph-mon[76335]: pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:10 compute-0 ceph-mon[76335]: pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:11 compute-0 nova_compute[244014]: 2026-02-25 13:23:11.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:12 compute-0 ceph-mon[76335]: pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:12 compute-0 nova_compute[244014]: 2026-02-25 13:23:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:13 compute-0 nova_compute[244014]: 2026-02-25 13:23:13.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:14 compute-0 ceph-mon[76335]: pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:16 compute-0 nova_compute[244014]: 2026-02-25 13:23:16.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:16 compute-0 ceph-mon[76335]: pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:18 compute-0 nova_compute[244014]: 2026-02-25 13:23:18.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:18 compute-0 ceph-mon[76335]: pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:18 compute-0 nova_compute[244014]: 2026-02-25 13:23:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.393534) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799393632, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1011, "num_deletes": 256, "total_data_size": 1489277, "memory_usage": 1513168, "flush_reason": "Manual Compaction"}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799526941, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1442145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64543, "largest_seqno": 65553, "table_properties": {"data_size": 1437199, "index_size": 2469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10513, "raw_average_key_size": 19, "raw_value_size": 1427275, "raw_average_value_size": 2609, "num_data_blocks": 111, "num_entries": 547, "num_filter_entries": 547, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025710, "oldest_key_time": 1772025710, "file_creation_time": 1772025799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 133512 microseconds, and 5296 cpu microseconds.
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:23:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.527048) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1442145 bytes OK
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.527095) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578287) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578362) EVENT_LOG_v1 {"time_micros": 1772025799578344, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578411) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 1484464, prev total WAL file size 1484464, number of live WAL files 2.
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.579532) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373636' seq:72057594037927935, type:22 .. '6C6F676D0033303138' seq:0, type:0; will stop at (end)
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1408KB)], [152(9333KB)]
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799579596, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 10999516, "oldest_snapshot_seqno": -1}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8367 keys, 10888073 bytes, temperature: kUnknown
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799853221, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10888073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10834439, "index_size": 31633, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218825, "raw_average_key_size": 26, "raw_value_size": 10687356, "raw_average_value_size": 1277, "num_data_blocks": 1234, "num_entries": 8367, "num_filter_entries": 8367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.853672) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10888073 bytes
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.906080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.2 rd, 39.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 8890, records dropped: 523 output_compression: NoCompression
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.906123) EVENT_LOG_v1 {"time_micros": 1772025799906104, "job": 94, "event": "compaction_finished", "compaction_time_micros": 273813, "compaction_time_cpu_micros": 39349, "output_level": 6, "num_output_files": 1, "total_output_size": 10888073, "num_input_records": 8890, "num_output_records": 8367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799906835, "job": 94, "event": "table_file_deletion", "file_number": 154}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799909018, "job": 94, "event": "table_file_deletion", "file_number": 152}
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.579433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
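
Note: the amplification figures in the JOB 94 compaction summary can be reproduced from the byte counts in the surrounding EVENT_LOG_v1 records: the job read 10999516 input bytes (1442145 of them from the freshly flushed L0 table #154) and wrote 10888073 bytes to L6. RocksDB reports write-amplify as bytes written over bytes read from the non-output level (the L0 input here), and read-write-amplify as all reads plus writes over that same denominator:

    # Byte counts taken from the JOB 94 event-log lines above.
    l0_input = 1_442_145       # flushed L0 table #154
    total_input = 10_999_516   # "input_data_size" in compaction_started
    output = 10_888_073        # "total_output_size" in compaction_finished

    print(f"write-amplify({output / l0_input:.1f})")                       # 7.5
    print(f"read-write-amplify({(total_input + output) / l0_input:.1f})")  # 15.2
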
Feb 25 13:23:20 compute-0 ceph-mon[76335]: pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:21 compute-0 nova_compute[244014]: 2026-02-25 13:23:21.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:22 compute-0 ceph-mon[76335]: pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:23 compute-0 nova_compute[244014]: 2026-02-25 13:23:23.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:24 compute-0 ceph-mon[76335]: pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:25 compute-0 podman[398707]: 2026-02-25 13:23:25.769465211 +0000 UTC m=+0.101773913 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 13:23:25 compute-0 podman[398708]: 2026-02-25 13:23:25.800831186 +0000 UTC m=+0.133265562 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
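
Note: the config_data label in the health_status records is a Python-literal dict (single-quoted strings, True rather than true), not JSON, so once extracted from the log line it parses cleanly with ast.literal_eval. A sketch on a shortened excerpt of the ovn_metadata_agent value (the real label is much longer):

    import ast

    config_text = (
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
        "'test': '/openstack/healthcheck'}, 'privileged': True, 'restart': 'always'}"
    )
    config = ast.literal_eval(config_text)  # literals only; never eval() log data
    print(config["healthcheck"]["test"])    # -> /openstack/healthcheck
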
Feb 25 13:23:26 compute-0 nova_compute[244014]: 2026-02-25 13:23:26.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:26 compute-0 ceph-mon[76335]: pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:28 compute-0 nova_compute[244014]: 2026-02-25 13:23:28.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:28 compute-0 ceph-mon[76335]: pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:30 compute-0 ceph-mon[76335]: pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:23:31
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log']
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:23:31 compute-0 nova_compute[244014]: 2026-02-25 13:23:31.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:23:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.825346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811825428, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 251, "total_data_size": 211202, "memory_usage": 217864, "flush_reason": "Manual Compaction"}
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811847803, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 209711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65554, "largest_seqno": 65902, "table_properties": {"data_size": 207478, "index_size": 396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5415, "raw_average_key_size": 18, "raw_value_size": 203173, "raw_average_value_size": 693, "num_data_blocks": 18, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025800, "oldest_key_time": 1772025800, "file_creation_time": 1772025811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 22489 microseconds, and 1367 cpu microseconds.
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.847847) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 209711 bytes OK
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.847883) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893260) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893307) EVENT_LOG_v1 {"time_micros": 1772025811893297, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 208844, prev total WAL file size 208844, number of live WAL files 2.
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(204KB)], [155(10MB)]
Feb 25 13:23:31 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811894044, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11097784, "oldest_snapshot_seqno": -1}
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8150 keys, 9334176 bytes, temperature: kUnknown
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812013842, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9334176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9283403, "index_size": 29284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 214997, "raw_average_key_size": 26, "raw_value_size": 9141514, "raw_average_value_size": 1121, "num_data_blocks": 1126, "num_entries": 8150, "num_filter_entries": 8150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.014157) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9334176 bytes
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.047538) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.6 rd, 77.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(97.4) write-amplify(44.5) OK, records in: 8660, records dropped: 510 output_compression: NoCompression
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.047584) EVENT_LOG_v1 {"time_micros": 1772025812047566, "job": 96, "event": "compaction_finished", "compaction_time_micros": 119886, "compaction_time_cpu_micros": 33449, "output_level": 6, "num_output_files": 1, "total_output_size": 9334176, "num_input_records": 8660, "num_output_records": 8150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812047910, "job": 96, "event": "table_file_deletion", "file_number": 157}
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812049601, "job": 96, "event": "table_file_deletion", "file_number": 155}
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:23:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:23:32 compute-0 ceph-mon[76335]: pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:32 compute-0 nova_compute[244014]: 2026-02-25 13:23:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:33 compute-0 nova_compute[244014]: 2026-02-25 13:23:33.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:34 compute-0 ceph-mon[76335]: pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:36 compute-0 nova_compute[244014]: 2026-02-25 13:23:36.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:36 compute-0 ceph-mon[76335]: pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:38 compute-0 nova_compute[244014]: 2026-02-25 13:23:38.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:38 compute-0 ceph-mon[76335]: pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:40 compute-0 ceph-mon[76335]: pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:41 compute-0 nova_compute[244014]: 2026-02-25 13:23:41.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:42 compute-0 ceph-mon[76335]: pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:23:43 compute-0 nova_compute[244014]: 2026-02-25 13:23:43.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:44 compute-0 ceph-mon[76335]: pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:46 compute-0 nova_compute[244014]: 2026-02-25 13:23:46.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:46 compute-0 nova_compute[244014]: 2026-02-25 13:23:46.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:46 compute-0 nova_compute[244014]: 2026-02-25 13:23:46.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:23:46 compute-0 ceph-mon[76335]: pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:23:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:23:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:23:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:23:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:23:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:23:48 compute-0 nova_compute[244014]: 2026-02-25 13:23:48.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:48 compute-0 ceph-mon[76335]: pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:50 compute-0 ceph-mon[76335]: pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:51 compute-0 nova_compute[244014]: 2026-02-25 13:23:51.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:51 compute-0 nova_compute[244014]: 2026-02-25 13:23:51.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:51 compute-0 nova_compute[244014]: 2026-02-25 13:23:51.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:23:51 compute-0 nova_compute[244014]: 2026-02-25 13:23:51.903 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:23:52 compute-0 ceph-mon[76335]: pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:53 compute-0 nova_compute[244014]: 2026-02-25 13:23:53.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:55 compute-0 ceph-mon[76335]: pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.066 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.066 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:23:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.067 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:23:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:23:55 compute-0 nova_compute[244014]: 2026-02-25 13:23:55.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:23:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115060889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.540 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:23:56 compute-0 podman[398771]: 2026-02-25 13:23:56.734652941 +0000 UTC m=+0.073385242 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.799 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.801 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.802 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.802 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:23:56 compute-0 podman[398772]: 2026-02-25 13:23:56.806856769 +0000 UTC m=+0.145933680 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.866 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.867 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:23:56 compute-0 nova_compute[244014]: 2026-02-25 13:23:56.884 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:23:57 compute-0 ceph-mon[76335]: pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2115060889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:23:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:23:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811398553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:23:57 compute-0 nova_compute[244014]: 2026-02-25 13:23:57.453 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:23:57 compute-0 nova_compute[244014]: 2026-02-25 13:23:57.459 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:23:57 compute-0 nova_compute[244014]: 2026-02-25 13:23:57.471 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:23:57 compute-0 nova_compute[244014]: 2026-02-25 13:23:57.473 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:23:57 compute-0 nova_compute[244014]: 2026-02-25 13:23:57.473 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:23:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3811398553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:23:58 compute-0 nova_compute[244014]: 2026-02-25 13:23:58.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:23:59 compute-0 ceph-mon[76335]: pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:23:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:23:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:01 compute-0 ceph-mon[76335]: pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:01 compute-0 nova_compute[244014]: 2026-02-25 13:24:01.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:01 compute-0 nova_compute[244014]: 2026-02-25 13:24:01.458 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:01 compute-0 nova_compute[244014]: 2026-02-25 13:24:01.459 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:24:01 compute-0 nova_compute[244014]: 2026-02-25 13:24:01.459 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:24:01 compute-0 nova_compute[244014]: 2026-02-25 13:24:01.474 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:02 compute-0 sudo[398841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:24:02 compute-0 sudo[398841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:02 compute-0 sudo[398841]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:02 compute-0 sudo[398866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:24:02 compute-0 sudo[398866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:03 compute-0 sudo[398866]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:24:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:24:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:03 compute-0 nova_compute[244014]: 2026-02-25 13:24:03.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:03 compute-0 sudo[398923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:24:03 compute-0 sudo[398923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:03 compute-0 sudo[398923]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:03 compute-0 sudo[398948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:24:03 compute-0 sudo[398948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:03 compute-0 nova_compute[244014]: 2026-02-25 13:24:03.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:03 compute-0 podman[398986]: 2026-02-25 13:24:03.974486265 +0000 UTC m=+0.039523026 container create 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:24:04 compute-0 systemd[1]: Started libpod-conmon-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope.
Feb 25 13:24:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:03.952021531 +0000 UTC m=+0.017058292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:04.048952617 +0000 UTC m=+0.113989358 container init 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:04.056367386 +0000 UTC m=+0.121404117 container start 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:04.060067641 +0000 UTC m=+0.125104392 container attach 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:24:04 compute-0 quizzical_ritchie[399002]: 167 167
Feb 25 13:24:04 compute-0 systemd[1]: libpod-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope: Deactivated successfully.
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:04.062504169 +0000 UTC m=+0.127540930 container died 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:24:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-44b60101a2870ff2af998998ab2ba707f9a5a232da5652c85be0ea1fdea847cc-merged.mount: Deactivated successfully.
Feb 25 13:24:04 compute-0 podman[398986]: 2026-02-25 13:24:04.115134875 +0000 UTC m=+0.180171606 container remove 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:24:04 compute-0 systemd[1]: libpod-conmon-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope: Deactivated successfully.
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.285332198 +0000 UTC m=+0.055435595 container create af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:24:04 compute-0 systemd[1]: Started libpod-conmon-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope.
Feb 25 13:24:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.261889017 +0000 UTC m=+0.031992434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.41224373 +0000 UTC m=+0.182347187 container init af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.421510281 +0000 UTC m=+0.191613678 container start af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.425645038 +0000 UTC m=+0.195748445 container attach af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:24:04 compute-0 serene_thompson[399043]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:24:04 compute-0 serene_thompson[399043]: --> All data devices are unavailable
Feb 25 13:24:04 compute-0 systemd[1]: libpod-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope: Deactivated successfully.
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.917225932 +0000 UTC m=+0.687329339 container died af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6-merged.mount: Deactivated successfully.
Feb 25 13:24:04 compute-0 podman[399026]: 2026-02-25 13:24:04.960685128 +0000 UTC m=+0.730788495 container remove af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:24:04 compute-0 systemd[1]: libpod-conmon-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope: Deactivated successfully.
Feb 25 13:24:04 compute-0 sudo[398948]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:05 compute-0 sudo[399074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:24:05 compute-0 sudo[399074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:05 compute-0 sudo[399074]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:05 compute-0 ceph-mon[76335]: pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:05 compute-0 sudo[399099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:24:05 compute-0 sudo[399099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.413856358 +0000 UTC m=+0.057369780 container create 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:24:05 compute-0 systemd[1]: Started libpod-conmon-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope.
Feb 25 13:24:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.392373452 +0000 UTC m=+0.035886954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.499404642 +0000 UTC m=+0.142918114 container init 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.507926543 +0000 UTC m=+0.151439965 container start 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:24:05 compute-0 hopeful_shannon[399154]: 167 167
Feb 25 13:24:05 compute-0 systemd[1]: libpod-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope: Deactivated successfully.
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.517765281 +0000 UTC m=+0.161278783 container attach 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.518376788 +0000 UTC m=+0.161890240 container died 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed51cc6b16cd23d0944517a165eea8e90f81bd813949b9449eca53247fd71e6-merged.mount: Deactivated successfully.
Feb 25 13:24:05 compute-0 podman[399137]: 2026-02-25 13:24:05.560477916 +0000 UTC m=+0.203991338 container remove 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:24:05 compute-0 systemd[1]: libpod-conmon-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope: Deactivated successfully.
Feb 25 13:24:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:05 compute-0 podman[399178]: 2026-02-25 13:24:05.747258208 +0000 UTC m=+0.050664471 container create a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:24:05 compute-0 systemd[1]: Started libpod-conmon-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope.
Feb 25 13:24:05 compute-0 podman[399178]: 2026-02-25 13:24:05.722633743 +0000 UTC m=+0.026040066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:05 compute-0 podman[399178]: 2026-02-25 13:24:05.8614248 +0000 UTC m=+0.164831123 container init a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:24:05 compute-0 podman[399178]: 2026-02-25 13:24:05.868109748 +0000 UTC m=+0.171515981 container start a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:24:05 compute-0 podman[399178]: 2026-02-25 13:24:05.87276676 +0000 UTC m=+0.176173003 container attach a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:24:05 compute-0 nova_compute[244014]: 2026-02-25 13:24:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:05 compute-0 nova_compute[244014]: 2026-02-25 13:24:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:24:06 compute-0 focused_babbage[399195]: {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     "0": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "devices": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "/dev/loop3"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             ],
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_name": "ceph_lv0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_size": "21470642176",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "name": "ceph_lv0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "tags": {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_name": "ceph",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.crush_device_class": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.encrypted": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.objectstore": "bluestore",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_id": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.vdo": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.with_tpm": "0"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             },
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "vg_name": "ceph_vg0"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         }
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     ],
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     "1": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "devices": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "/dev/loop4"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             ],
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_name": "ceph_lv1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_size": "21470642176",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "name": "ceph_lv1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "tags": {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_name": "ceph",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.crush_device_class": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.encrypted": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.objectstore": "bluestore",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_id": "1",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.vdo": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.with_tpm": "0"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             },
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "vg_name": "ceph_vg1"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         }
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     ],
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     "2": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "devices": [
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "/dev/loop5"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             ],
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_name": "ceph_lv2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_size": "21470642176",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "name": "ceph_lv2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "tags": {
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.cluster_name": "ceph",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.crush_device_class": "",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.encrypted": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.objectstore": "bluestore",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osd_id": "2",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.vdo": "0",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:                 "ceph.with_tpm": "0"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             },
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "type": "block",
Feb 25 13:24:06 compute-0 focused_babbage[399195]:             "vg_name": "ceph_vg2"
Feb 25 13:24:06 compute-0 focused_babbage[399195]:         }
Feb 25 13:24:06 compute-0 focused_babbage[399195]:     ]
Feb 25 13:24:06 compute-0 focused_babbage[399195]: }
Feb 25 13:24:06 compute-0 systemd[1]: libpod-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope: Deactivated successfully.
Feb 25 13:24:06 compute-0 podman[399178]: 2026-02-25 13:24:06.182551693 +0000 UTC m=+0.485958056 container died a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6-merged.mount: Deactivated successfully.
Feb 25 13:24:06 compute-0 podman[399178]: 2026-02-25 13:24:06.221041359 +0000 UTC m=+0.524447592 container remove a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:24:06 compute-0 systemd[1]: libpod-conmon-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope: Deactivated successfully.
Feb 25 13:24:06 compute-0 sudo[399099]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:06 compute-0 nova_compute[244014]: 2026-02-25 13:24:06.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:06 compute-0 sudo[399217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:24:06 compute-0 sudo[399217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:06 compute-0 sudo[399217]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:06 compute-0 sudo[399242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:24:06 compute-0 sudo[399242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.737306318 +0000 UTC m=+0.060210870 container create 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:24:06 compute-0 systemd[1]: Started libpod-conmon-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope.
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.710313256 +0000 UTC m=+0.033217888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.82738003 +0000 UTC m=+0.150284662 container init 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.835992163 +0000 UTC m=+0.158896715 container start 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.839881213 +0000 UTC m=+0.162785845 container attach 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:24:06 compute-0 jovial_ardinghelli[399295]: 167 167
Feb 25 13:24:06 compute-0 systemd[1]: libpod-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope: Deactivated successfully.
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.84258916 +0000 UTC m=+0.165493742 container died 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e963b47f02d26cf43cd34ca3f887c4fa85a38faf16887d1427ed6971df45c700-merged.mount: Deactivated successfully.
Feb 25 13:24:06 compute-0 podman[399279]: 2026-02-25 13:24:06.886833338 +0000 UTC m=+0.209737910 container remove 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:24:06 compute-0 systemd[1]: libpod-conmon-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope: Deactivated successfully.
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.068044773 +0000 UTC m=+0.061732654 container create ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:24:07 compute-0 ceph-mon[76335]: pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:07 compute-0 systemd[1]: Started libpod-conmon-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope.
Feb 25 13:24:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.038789217 +0000 UTC m=+0.032477148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.153477424 +0000 UTC m=+0.147165345 container init ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.159562775 +0000 UTC m=+0.153250656 container start ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.163741203 +0000 UTC m=+0.157429134 container attach ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:24:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:07 compute-0 lvm[399414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:24:07 compute-0 lvm[399414]: VG ceph_vg2 finished
Feb 25 13:24:07 compute-0 lvm[399412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:24:07 compute-0 lvm[399413]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:24:07 compute-0 lvm[399413]: VG ceph_vg1 finished
Feb 25 13:24:07 compute-0 lvm[399412]: VG ceph_vg0 finished
Feb 25 13:24:07 compute-0 inspiring_babbage[399333]: {}
Feb 25 13:24:07 compute-0 systemd[1]: libpod-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Deactivated successfully.
Feb 25 13:24:07 compute-0 systemd[1]: libpod-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Consumed 1.033s CPU time.
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.871378005 +0000 UTC m=+0.865065876 container died ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7-merged.mount: Deactivated successfully.
Feb 25 13:24:07 compute-0 podman[399318]: 2026-02-25 13:24:07.939820986 +0000 UTC m=+0.933508857 container remove ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:24:07 compute-0 systemd[1]: libpod-conmon-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Deactivated successfully.
Feb 25 13:24:07 compute-0 sudo[399242]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:24:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:24:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:08 compute-0 sudo[399430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:24:08 compute-0 sudo[399430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:24:08 compute-0 sudo[399430]: pam_unix(sudo:session): session closed for user root
Feb 25 13:24:08 compute-0 nova_compute[244014]: 2026-02-25 13:24:08.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:08 compute-0 nova_compute[244014]: 2026-02-25 13:24:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:09 compute-0 ceph-mon[76335]: pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:24:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:10 compute-0 nova_compute[244014]: 2026-02-25 13:24:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:11 compute-0 ceph-mon[76335]: pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:11 compute-0 nova_compute[244014]: 2026-02-25 13:24:11.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:12 compute-0 nova_compute[244014]: 2026-02-25 13:24:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:13 compute-0 ceph-mon[76335]: pgmap v3129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:13 compute-0 nova_compute[244014]: 2026-02-25 13:24:13.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:15 compute-0 ceph-mon[76335]: pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:16 compute-0 nova_compute[244014]: 2026-02-25 13:24:16.325 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:17 compute-0 ceph-mon[76335]: pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:18 compute-0 nova_compute[244014]: 2026-02-25 13:24:18.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:19 compute-0 ceph-mon[76335]: pgmap v3132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:19 compute-0 nova_compute[244014]: 2026-02-25 13:24:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:21 compute-0 ceph-mon[76335]: pgmap v3133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:21 compute-0 nova_compute[244014]: 2026-02-25 13:24:21.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:23 compute-0 ceph-mon[76335]: pgmap v3134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:23 compute-0 nova_compute[244014]: 2026-02-25 13:24:23.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:25 compute-0 ceph-mon[76335]: pgmap v3135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:25 compute-0 nova_compute[244014]: 2026-02-25 13:24:25.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:26 compute-0 nova_compute[244014]: 2026-02-25 13:24:26.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:27 compute-0 ceph-mon[76335]: pgmap v3136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:27 compute-0 podman[399455]: 2026-02-25 13:24:27.738616194 +0000 UTC m=+0.072586580 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:24:27 compute-0 podman[399456]: 2026-02-25 13:24:27.774240849 +0000 UTC m=+0.108088831 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
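Both containers report healthy with a zero failing streak. Each healthcheck pass emits one `container health_status` event whose long parenthesized key=value payload buries the two fields that matter operationally; a minimal sketch (assuming lines shaped like the two above, where name precedes health_status) to pull them out:

```python
import re

# Matches the podman event payload above:
# "(image=..., name=<name>, health_status=<status>, health_failing_streak=<n>, ...)"
HEALTH = re.compile(r"name=([^,]+).*?health_status=(\w+).*?health_failing_streak=(\d+)")

def container_health(line: str):
    """Return (name, status, failing_streak), or None if the line doesn't match."""
    m = HEALTH.search(line)
    return m.groups() if m else None
```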
Feb 25 13:24:28 compute-0 nova_compute[244014]: 2026-02-25 13:24:28.700 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:29 compute-0 ceph-mon[76335]: pgmap v3137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:24:31
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root']
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
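This balancer pass is a no-op: upmap mode under a 5% max-misplaced ceiling walked all eleven pools and prepared 0 of its 10 permitted changes, consistent with the steady 305 active+clean PGs in the surrounding pgmap lines. A sketch (assuming journal text in exactly this format) that checks an excerpt contains only idle passes:

```python
import re

# "[balancer INFO root] prepared N/M upmap changes", as logged above
PREPARED = re.compile(r"\[balancer INFO root\] prepared (\d+)/(\d+) upmap changes")

def balancer_idle(journal_text: str) -> bool:
    """True when every balancer pass in the excerpt prepared zero upmap changes."""
    passes = PREPARED.findall(journal_text)
    return bool(passes) and all(done == "0" for done, _limit in passes)
```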
Feb 25 13:24:31 compute-0 nova_compute[244014]: 2026-02-25 13:24:31.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:31 compute-0 ceph-mon[76335]: pgmap v3138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:24:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:24:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:24:33 compute-0 ceph-mon[76335]: pgmap v3139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:33 compute-0 nova_compute[244014]: 2026-02-25 13:24:33.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:35 compute-0 ceph-mon[76335]: pgmap v3140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:36 compute-0 nova_compute[244014]: 2026-02-25 13:24:36.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:36 compute-0 ceph-mon[76335]: pgmap v3141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:38 compute-0 nova_compute[244014]: 2026-02-25 13:24:38.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:38 compute-0 ceph-mon[76335]: pgmap v3142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:40 compute-0 ceph-mon[76335]: pgmap v3143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:41 compute-0 nova_compute[244014]: 2026-02-25 13:24:41.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:42 compute-0 ceph-mon[76335]: pgmap v3144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
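The autoscaler lines above are internally consistent: against the logged effective capacity (64411926528 B, the cluster's ~60 GiB), each pool's pg target is its usage ratio × bias × a PG budget. Assuming what this deployment appears to use (3 OSDs at the default mon_target_pg_per_osd = 100, a budget of 300 PGs), the logged targets reproduce exactly; the tiny fractional results are then quantized, which is why every pool stays at its current 1/16/32 PGs and nothing is resized:

```python
# Worked check of the pg_autoscaler output above. Assumption (not stated in the
# log itself): 3 OSDs with the default mon_target_pg_per_osd = 100, i.e. a
# cluster-wide budget of 300 PGs that usage ratio and bias are applied to.
PG_BUDGET = 3 * 100

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * PG_BUDGET

assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12  # '.mgr'
assert abs(pg_target(1.3916366864300228e-06, 4.0) - 0.0016699640237160273) < 1e-12  # 'cephfs.cephfs.meta'
```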
Feb 25 13:24:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:43 compute-0 nova_compute[244014]: 2026-02-25 13:24:43.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:44 compute-0 ceph-mon[76335]: pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:46 compute-0 nova_compute[244014]: 2026-02-25 13:24:46.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:46 compute-0 ceph-mon[76335]: pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:24:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:24:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:24:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:24:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:24:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:24:48 compute-0 nova_compute[244014]: 2026-02-25 13:24:48.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:48 compute-0 ceph-mon[76335]: pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:50 compute-0 ceph-mon[76335]: pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:51 compute-0 nova_compute[244014]: 2026-02-25 13:24:51.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:52 compute-0 ceph-mon[76335]: pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:53 compute-0 nova_compute[244014]: 2026-02-25 13:24:53.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:54 compute-0 ceph-mon[76335]: pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.067 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.068 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:24:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.068 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:24:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:55 compute-0 nova_compute[244014]: 2026-02-25 13:24:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.340 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:24:56 compute-0 nova_compute[244014]: 2026-02-25 13:24:56.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:24:56 compute-0 ceph-mon[76335]: pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:24:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3061810097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.486 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
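The probe that just returned (and runs again at 13:24:57.751) is a plain `ceph df` in JSON, from which nova's RBD driver derives the free_disk figure reported below. A minimal sketch with the same flags as the logged command, assuming the client.openstack keyring and /etc/ceph/ceph.conf it names are in place:

```python
import json
import subprocess

def ceph_cluster_stats() -> dict:
    """Run the same capacity probe nova logs above and return its stats block."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    # e.g. {'total_bytes': ..., 'total_avail_bytes': ..., 'total_used_bytes': ...}
    return json.loads(out)["stats"]
```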
Feb 25 13:24:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.658 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:24:57 compute-0 nova_compute[244014]: 2026-02-25 13:24:57.751 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:24:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3061810097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:24:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:24:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93496846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.266 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.272 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.290 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.292 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.293 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
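The inventory confirmed unchanged at 13:24:58.290 is what placement schedules against: usable capacity per resource class is (total − reserved) × allocation_ratio, so this node advertises 32 schedulable VCPUs, 7167 MB of RAM and 52.2 GB of disk even though it physically has 8 cores, 7679 MB and 59 GB. Worked numbers:

```python
# The inventory from the log line above; capacity = (total - reserved) * allocation_ratio
# is placement's standard rule for turning inventory into schedulable capacity.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, round(capacity, 1))
# VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
```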
Feb 25 13:24:58 compute-0 podman[399544]: 2026-02-25 13:24:58.709393587 +0000 UTC m=+0.049411416 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:24:58 compute-0 nova_compute[244014]: 2026-02-25 13:24:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:24:58 compute-0 podman[399545]: 2026-02-25 13:24:58.740447383 +0000 UTC m=+0.075793430 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:24:58 compute-0 ceph-mon[76335]: pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:24:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/93496846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:24:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:24:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:00 compute-0 ceph-mon[76335]: pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:01 compute-0 nova_compute[244014]: 2026-02-25 13:25:01.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:02 compute-0 ceph-mon[76335]: pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.295 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.296 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.296 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.314 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:25:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:03 compute-0 nova_compute[244014]: 2026-02-25 13:25:03.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:04 compute-0 ceph-mon[76335]: pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:06 compute-0 nova_compute[244014]: 2026-02-25 13:25:06.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:06 compute-0 nova_compute[244014]: 2026-02-25 13:25:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:06 compute-0 nova_compute[244014]: 2026-02-25 13:25:06.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:25:06 compute-0 ceph-mon[76335]: pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:08 compute-0 sudo[399589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:25:08 compute-0 sudo[399589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:08 compute-0 sudo[399589]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:08 compute-0 sudo[399614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:25:08 compute-0 sudo[399614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:08 compute-0 podman[399685]: 2026-02-25 13:25:08.719547435 +0000 UTC m=+0.065572381 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:25:08 compute-0 nova_compute[244014]: 2026-02-25 13:25:08.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:08 compute-0 podman[399685]: 2026-02-25 13:25:08.810144612 +0000 UTC m=+0.156169558 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:25:09 compute-0 ceph-mon[76335]: pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:09 compute-0 sudo[399614]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:25:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:25:09 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:09 compute-0 sudo[399874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:25:09 compute-0 sudo[399874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:09 compute-0 sudo[399874]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:09 compute-0 sudo[399899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:25:09 compute-0 sudo[399899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:09 compute-0 nova_compute[244014]: 2026-02-25 13:25:09.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:10 compute-0 sudo[399899]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:25:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:25:10 compute-0 sudo[399955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:25:10 compute-0 sudo[399955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:10 compute-0 sudo[399955]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:10 compute-0 sudo[399980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:25:10 compute-0 sudo[399980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
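This is the OSD creation step itself: the mgr's cephadm module (note CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, the service spec driving it) sudo-runs its bundled copy of cephadm, which launches `ceph-volume lvm batch` against three pre-created LVs as bluestore OSDs, with --no-systemd because cephadm manages the units itself. A sketch (assuming sudo COMMAND= lines shaped like the one above) that recovers the target devices:

```python
import shlex

def batch_devices(sudo_line: str) -> list[str]:
    """Device arguments of a logged `ceph-volume ... lvm batch` command.

    For the line above this returns
    ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2'].
    """
    cmd = sudo_line.split("COMMAND=", 1)[1]
    return [arg for arg in shlex.split(cmd) if arg.startswith("/dev/")]
```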
Feb 25 13:25:10 compute-0 ceph-mon[76335]: pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:25:10 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.853771219 +0000 UTC m=+0.053536652 container create 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 13:25:10 compute-0 nova_compute[244014]: 2026-02-25 13:25:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:10 compute-0 systemd[1]: Started libpod-conmon-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope.
Feb 25 13:25:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.826127369 +0000 UTC m=+0.025892852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.933669634 +0000 UTC m=+0.133435107 container init 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.939368094 +0000 UTC m=+0.139133527 container start 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.942183304 +0000 UTC m=+0.141948757 container attach 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:25:10 compute-0 wonderful_wilbur[400033]: 167 167
Feb 25 13:25:10 compute-0 systemd[1]: libpod-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope: Deactivated successfully.
Feb 25 13:25:10 compute-0 conmon[400033]: conmon 5400225fc248d81e6aa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope/container/memory.events
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.945105226 +0000 UTC m=+0.144870689 container died 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-107c1ca87659f65c1790c1262e9c88d0f58da94b334b7c6ad3a90ac67f424303-merged.mount: Deactivated successfully.
Feb 25 13:25:10 compute-0 podman[400017]: 2026-02-25 13:25:10.985559058 +0000 UTC m=+0.185324491 container remove 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:25:10 compute-0 systemd[1]: libpod-conmon-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope: Deactivated successfully.
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.134170451 +0000 UTC m=+0.047687787 container create cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:25:11 compute-0 systemd[1]: Started libpod-conmon-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope.
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.106859911 +0000 UTC m=+0.020377307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.244350531 +0000 UTC m=+0.157867917 container init cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.251786321 +0000 UTC m=+0.165303667 container start cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.255425933 +0000 UTC m=+0.168943249 container attach cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:25:11 compute-0 nova_compute[244014]: 2026-02-25 13:25:11.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:11 compute-0 sharp_satoshi[400075]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:25:11 compute-0 sharp_satoshi[400075]: --> All data devices are unavailable
Feb 25 13:25:11 compute-0 systemd[1]: libpod-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope: Deactivated successfully.
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.74605915 +0000 UTC m=+0.659576496 container died cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41-merged.mount: Deactivated successfully.
Feb 25 13:25:11 compute-0 podman[400058]: 2026-02-25 13:25:11.791153383 +0000 UTC m=+0.704670699 container remove cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:25:11 compute-0 systemd[1]: libpod-conmon-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope: Deactivated successfully.
Feb 25 13:25:11 compute-0 sudo[399980]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:11 compute-0 sudo[400107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:25:11 compute-0 sudo[400107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:11 compute-0 sudo[400107]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:11 compute-0 sudo[400132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:25:11 compute-0 sudo[400132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.283503838 +0000 UTC m=+0.044202498 container create d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:25:12 compute-0 systemd[1]: Started libpod-conmon-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope.
Feb 25 13:25:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.355512821 +0000 UTC m=+0.116211481 container init d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.262093014 +0000 UTC m=+0.022791694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.366150181 +0000 UTC m=+0.126848881 container start d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.370069052 +0000 UTC m=+0.130767732 container attach d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:25:12 compute-0 amazing_saha[400185]: 167 167
Feb 25 13:25:12 compute-0 systemd[1]: libpod-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope: Deactivated successfully.
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.37426915 +0000 UTC m=+0.134967820 container died d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef2ececb9d1b5f14faae4d418bac24830e553bc271e0572f4af47150f4c9e1b7-merged.mount: Deactivated successfully.
Feb 25 13:25:12 compute-0 podman[400169]: 2026-02-25 13:25:12.415071442 +0000 UTC m=+0.175770142 container remove d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:25:12 compute-0 systemd[1]: libpod-conmon-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope: Deactivated successfully.
Feb 25 13:25:12 compute-0 podman[400210]: 2026-02-25 13:25:12.602716317 +0000 UTC m=+0.057119263 container create 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:25:12 compute-0 systemd[1]: Started libpod-conmon-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope.
Feb 25 13:25:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:12 compute-0 podman[400210]: 2026-02-25 13:25:12.583270469 +0000 UTC m=+0.037673445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:12 compute-0 podman[400210]: 2026-02-25 13:25:12.685988058 +0000 UTC m=+0.140391054 container init 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:25:12 compute-0 podman[400210]: 2026-02-25 13:25:12.696157685 +0000 UTC m=+0.150560631 container start 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:25:12 compute-0 podman[400210]: 2026-02-25 13:25:12.699970842 +0000 UTC m=+0.154373878 container attach 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:25:12 compute-0 ceph-mon[76335]: pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]: {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     "0": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "devices": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "/dev/loop3"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             ],
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_name": "ceph_lv0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_size": "21470642176",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "name": "ceph_lv0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "tags": {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_name": "ceph",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.crush_device_class": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.encrypted": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.objectstore": "bluestore",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_id": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.vdo": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.with_tpm": "0"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             },
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "vg_name": "ceph_vg0"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         }
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     ],
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     "1": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "devices": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "/dev/loop4"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             ],
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_name": "ceph_lv1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_size": "21470642176",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "name": "ceph_lv1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "tags": {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_name": "ceph",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.crush_device_class": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.encrypted": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.objectstore": "bluestore",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_id": "1",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.vdo": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.with_tpm": "0"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             },
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "vg_name": "ceph_vg1"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         }
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     ],
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     "2": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "devices": [
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "/dev/loop5"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             ],
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_name": "ceph_lv2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_size": "21470642176",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "name": "ceph_lv2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "tags": {
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.cluster_name": "ceph",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.crush_device_class": "",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.encrypted": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.objectstore": "bluestore",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osd_id": "2",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.vdo": "0",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:                 "ceph.with_tpm": "0"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             },
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "type": "block",
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:             "vg_name": "ceph_vg2"
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:         }
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]:     ]
Feb 25 13:25:12 compute-0 interesting_dubinsky[400227]: }
Feb 25 13:25:12 compute-0 systemd[1]: libpod-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope: Deactivated successfully.
Feb 25 13:25:13 compute-0 podman[400236]: 2026-02-25 13:25:13.041060009 +0000 UTC m=+0.028798144 container died 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8-merged.mount: Deactivated successfully.
Feb 25 13:25:13 compute-0 podman[400236]: 2026-02-25 13:25:13.090872164 +0000 UTC m=+0.078610319 container remove 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:25:13 compute-0 systemd[1]: libpod-conmon-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope: Deactivated successfully.
Feb 25 13:25:13 compute-0 sudo[400132]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:13 compute-0 sudo[400252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:25:13 compute-0 sudo[400252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:13 compute-0 sudo[400252]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:13 compute-0 sudo[400277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:25:13 compute-0 sudo[400277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.597379969 +0000 UTC m=+0.061949399 container create 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:25:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:13 compute-0 systemd[1]: Started libpod-conmon-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope.
Feb 25 13:25:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.573398403 +0000 UTC m=+0.037967803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.679673532 +0000 UTC m=+0.144242962 container init 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.688028018 +0000 UTC m=+0.152597448 container start 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.692223686 +0000 UTC m=+0.156793126 container attach 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:25:13 compute-0 trusting_newton[400330]: 167 167
Feb 25 13:25:13 compute-0 systemd[1]: libpod-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope: Deactivated successfully.
Feb 25 13:25:13 compute-0 conmon[400330]: conmon 2b3f26a247b1a67a6249 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope/container/memory.events
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.69695478 +0000 UTC m=+0.161524210 container died 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd52b9cbfcd70bebf77f70d53d944c579d2eb4eab523d9245aad92edc7023b03-merged.mount: Deactivated successfully.
Feb 25 13:25:13 compute-0 podman[400313]: 2026-02-25 13:25:13.740666233 +0000 UTC m=+0.205235653 container remove 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:25:13 compute-0 systemd[1]: libpod-conmon-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope: Deactivated successfully.
Feb 25 13:25:13 compute-0 nova_compute[244014]: 2026-02-25 13:25:13.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:13 compute-0 podman[400353]: 2026-02-25 13:25:13.89750361 +0000 UTC m=+0.050012763 container create fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:25:13 compute-0 systemd[1]: Started libpod-conmon-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope.
Feb 25 13:25:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:25:13 compute-0 podman[400353]: 2026-02-25 13:25:13.878515584 +0000 UTC m=+0.031024827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:25:13 compute-0 podman[400353]: 2026-02-25 13:25:13.993397286 +0000 UTC m=+0.145906529 container init fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:25:14 compute-0 podman[400353]: 2026-02-25 13:25:14.000489326 +0000 UTC m=+0.152998539 container start fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:25:14 compute-0 podman[400353]: 2026-02-25 13:25:14.005004514 +0000 UTC m=+0.157513747 container attach fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:14 compute-0 lvm[400448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:25:14 compute-0 lvm[400448]: VG ceph_vg0 finished
Feb 25 13:25:14 compute-0 lvm[400449]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:25:14 compute-0 lvm[400449]: VG ceph_vg1 finished
Feb 25 13:25:14 compute-0 lvm[400451]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:25:14 compute-0 lvm[400451]: VG ceph_vg2 finished
Feb 25 13:25:14 compute-0 ceph-mon[76335]: pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:14 compute-0 musing_mclaren[400370]: {}
Feb 25 13:25:14 compute-0 systemd[1]: libpod-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Deactivated successfully.
Feb 25 13:25:14 compute-0 systemd[1]: libpod-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Consumed 1.293s CPU time.
Feb 25 13:25:14 compute-0 podman[400353]: 2026-02-25 13:25:14.870592052 +0000 UTC m=+1.023101205 container died fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:25:14 compute-0 nova_compute[244014]: 2026-02-25 13:25:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328-merged.mount: Deactivated successfully.
Feb 25 13:25:14 compute-0 podman[400353]: 2026-02-25 13:25:14.921087497 +0000 UTC m=+1.073596670 container remove fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:25:14 compute-0 systemd[1]: libpod-conmon-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Deactivated successfully.
Feb 25 13:25:14 compute-0 sudo[400277]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:25:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:25:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:15 compute-0 sudo[400465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:25:15 compute-0 sudo[400465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:25:15 compute-0 sudo[400465]: pam_unix(sudo:session): session closed for user root
Feb 25 13:25:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:25:16 compute-0 nova_compute[244014]: 2026-02-25 13:25:16.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:16 compute-0 ceph-mon[76335]: pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:18 compute-0 nova_compute[244014]: 2026-02-25 13:25:18.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:19 compute-0 ceph-mon[76335]: pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:20 compute-0 nova_compute[244014]: 2026-02-25 13:25:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:21 compute-0 ceph-mon[76335]: pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:21 compute-0 nova_compute[244014]: 2026-02-25 13:25:21.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:23 compute-0 ceph-mon[76335]: pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:23 compute-0 nova_compute[244014]: 2026-02-25 13:25:23.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:25 compute-0 ceph-mon[76335]: pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:26 compute-0 nova_compute[244014]: 2026-02-25 13:25:26.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:27 compute-0 ceph-mon[76335]: pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:28 compute-0 nova_compute[244014]: 2026-02-25 13:25:28.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:29 compute-0 ceph-mon[76335]: pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:29 compute-0 podman[400490]: 2026-02-25 13:25:29.730921925 +0000 UTC m=+0.070307445 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 13:25:29 compute-0 podman[400491]: 2026-02-25 13:25:29.768556457 +0000 UTC m=+0.107226597 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
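[annotation] The two health_status=healthy events above are podman's periodic healthcheck timer executing the 'test' command from config_data ('/openstack/healthcheck', served from the read-only healthchecks bind mount). As a hedged illustration, the same check can be invoked on demand with podman's healthcheck subcommand; the container name is taken from the log, and a zero exit status means healthy:

  import subprocess

  # Runs the container's configured healthcheck test once, on demand.
  # Container name as logged above; exit code 0 == healthy.
  rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
  print("healthy" if rc == 0 else "unhealthy")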
Feb 25 13:25:31 compute-0 ceph-mon[76335]: pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:25:31
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'backups', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
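[annotation] The balancer round above runs in upmap mode with a misplaced budget of 0.05 and prepares 0 of an allowed 10 changes, since all 305 PGs are already active+clean. A minimal sketch of that budgeting decision, assuming the 5% budget and 10-change cap shown in the log (the function and its names are illustrative, not the mgr module's own code):

  # Illustrative restatement of the upmap round's budget check.
  def upmap_changes_allowed(misplaced_ratio: float,
                            max_misplaced: float = 0.05,
                            max_optimizations: int = 10) -> int:
      if misplaced_ratio >= max_misplaced:   # over budget: defer this round
          return 0
      return max_optimizations               # per-round cap ("prepared 0/10")

  upmap_changes_allowed(0.0)  # -> 10; the balancer then found 0 worth making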
Feb 25 13:25:31 compute-0 nova_compute[244014]: 2026-02-25 13:25:31.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:25:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:25:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:25:33 compute-0 ceph-mon[76335]: pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:33 compute-0 nova_compute[244014]: 2026-02-25 13:25:33.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:35 compute-0 ceph-mon[76335]: pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:36 compute-0 nova_compute[244014]: 2026-02-25 13:25:36.397 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:37 compute-0 ceph-mon[76335]: pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:38 compute-0 nova_compute[244014]: 2026-02-25 13:25:38.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:39 compute-0 ceph-mon[76335]: pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:41 compute-0 ceph-mon[76335]: pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:41 compute-0 nova_compute[244014]: 2026-02-25 13:25:41.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:42 compute-0 sshd-session[400537]: Invalid user eth from 80.94.92.186 port 48110
Feb 25 13:25:42 compute-0 sshd-session[400537]: Connection closed by invalid user eth 80.94.92.186 port 48110 [preauth]
Feb 25 13:25:43 compute-0 ceph-mon[76335]: pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
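[annotation] Each "Pool … using R of space, bias B, pg target T" line above satisfies T = R x B x (target PGs per OSD x OSD count). Assuming the defaults this log implies (mon_target_pg_per_osd = 100 and 3 OSDs behind the 60 GiB shown in the pgmap lines), the numbers reproduce exactly; the raw target is then rounded to a power of two and left at the current pg_num whenever it is within the autoscaler's 3x change threshold, which is why every pool reports "quantized to" its current value. A worked check:

  # pg_autoscaler arithmetic, reproducing the log's "pg target" values.
  # Assumed inputs: mon_target_pg_per_osd=100, 3 OSDs (neither is printed above).
  def raw_pg_target(usage_ratio: float, bias: float,
                    n_osds: int = 3, target_pg_per_osd: int = 100) -> float:
      return usage_ratio * bias * n_osds * target_pg_per_osd

  raw_pg_target(0.0006714637386478266, 1.0)   # ~0.20144  ('images')
  raw_pg_target(1.3916366864300228e-06, 4.0)  # ~0.00167  ('cephfs.cephfs.meta')
  raw_pg_target(7.185749983720779e-06, 1.0)   # ~0.00216  ('.mgr')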
Feb 25 13:25:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:43 compute-0 nova_compute[244014]: 2026-02-25 13:25:43.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:45 compute-0 ceph-mon[76335]: pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:46 compute-0 nova_compute[244014]: 2026-02-25 13:25:46.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:47 compute-0 ceph-mon[76335]: pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:25:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:25:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:25:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:25:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:25:48 compute-0 nova_compute[244014]: 2026-02-25 13:25:48.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:49 compute-0 ceph-mon[76335]: pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:51 compute-0 ceph-mon[76335]: pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:51 compute-0 nova_compute[244014]: 2026-02-25 13:25:51.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:53 compute-0 ceph-mon[76335]: pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:53 compute-0 nova_compute[244014]: 2026-02-25 13:25:53.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.069 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:25:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
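[annotation] The acquire / acquired-waited / released-held trio above is oslo.concurrency's standard trace from the `inner` wrapper in lockutils (the nova_compute "compute_resources" lines later in this log are the same pattern). A minimal reproduction, assuming only the lock name shown above:

  from oslo_concurrency import lockutils

  # Entering this function emits the same DEBUG trio seen above:
  # "Acquiring lock", "Lock ... acquired ... waited", "Lock ... released ... held".
  @lockutils.synchronized("_check_child_processes")
  def _check_child_processes() -> None:
      pass  # body irrelevant to the locking trace

  _check_child_processes()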
Feb 25 13:25:55 compute-0 ceph-mon[76335]: pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:25:56 compute-0 nova_compute[244014]: 2026-02-25 13:25:56.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:25:57 compute-0 ceph-mon[76335]: pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:25:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976471032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
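[annotation] The resource audit shells out to the exact `ceph df` command logged at 13:25:56.905 and parses its JSON to size the RBD-backed disk pool (the free_disk=59.98…GB figure reported a few lines below). Which stanza nova reads (cluster totals versus the vms pool's max_avail) depends on the driver version, so the sketch below just pulls the cluster-wide totals; the command and flags are verbatim from the log, the conversion is illustrative:

  import json
  import subprocess

  # Same invocation nova logs above; needs a reachable cluster and keyring.
  out = subprocess.run(
      ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"],
      capture_output=True, check=True, text=True).stdout
  stats = json.loads(out)["stats"]                 # cluster-wide totals
  free_gib = stats["total_avail_bytes"] / 2 ** 30  # nova prints GiB labeled "GB"
  print(f"free_disk={free_gib}GB")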
Feb 25 13:25:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.656 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:25:57 compute-0 nova_compute[244014]: 2026-02-25 13:25:57.991 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.085 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.085 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.100 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.120 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.134 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:25:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1976471032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:25:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:25:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051834929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.701 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.707 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.723 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.724 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.724 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
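[annotation] The inventory dict logged twice above encodes Placement's effective-capacity rule, capacity = (total - reserved) x allocation_ratio per resource class; with the values shown, this host can have up to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk allocated against it. Worked out:

  # Placement's effective-capacity rule applied to the logged inventory.
  def capacity(total: float, reserved: float, allocation_ratio: float) -> float:
      return (total - reserved) * allocation_ratio

  capacity(8, 0, 4.0)       # VCPU      -> 32.0
  capacity(7679, 512, 1.0)  # MEMORY_MB -> 7167.0
  capacity(59, 1, 0.9)      # DISK_GB   -> 52.2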
Feb 25 13:25:58 compute-0 nova_compute[244014]: 2026-02-25 13:25:58.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:25:59 compute-0 ceph-mon[76335]: pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:25:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1051834929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:25:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:25:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:00 compute-0 podman[400583]: 2026-02-25 13:26:00.73848434 +0000 UTC m=+0.081167802 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:26:00 compute-0 podman[400584]: 2026-02-25 13:26:00.752452134 +0000 UTC m=+0.088279332 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 13:26:01 compute-0 ceph-mon[76335]: pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:01 compute-0 nova_compute[244014]: 2026-02-25 13:26:01.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:03 compute-0 ceph-mon[76335]: pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.724 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.725 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.725 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.742 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:03 compute-0 nova_compute[244014]: 2026-02-25 13:26:03.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:05 compute-0 ceph-mon[76335]: pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:06 compute-0 nova_compute[244014]: 2026-02-25 13:26:06.483 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:07 compute-0 ceph-mon[76335]: pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:07 compute-0 nova_compute[244014]: 2026-02-25 13:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:07 compute-0 nova_compute[244014]: 2026-02-25 13:26:07.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:26:08 compute-0 nova_compute[244014]: 2026-02-25 13:26:08.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:09 compute-0 ceph-mon[76335]: pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:10 compute-0 nova_compute[244014]: 2026-02-25 13:26:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:11 compute-0 ceph-mon[76335]: pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:11 compute-0 nova_compute[244014]: 2026-02-25 13:26:11.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:12 compute-0 nova_compute[244014]: 2026-02-25 13:26:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:13 compute-0 ceph-mon[76335]: pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:13 compute-0 nova_compute[244014]: 2026-02-25 13:26:13.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:14 compute-0 ceph-mon[76335]: pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:14 compute-0 nova_compute[244014]: 2026-02-25 13:26:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:15 compute-0 sudo[400629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:26:15 compute-0 sudo[400629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:15 compute-0 sudo[400629]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:15 compute-0 sudo[400654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:26:15 compute-0 sudo[400654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:15 compute-0 sudo[400654]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:26:15 compute-0 sudo[400711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:26:15 compute-0 sudo[400711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:15 compute-0 sudo[400711]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:15 compute-0 sudo[400736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:26:15 compute-0 sudo[400736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:26:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.080097294 +0000 UTC m=+0.060297553 container create fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.044347655 +0000 UTC m=+0.024547924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:16 compute-0 systemd[1]: Started libpod-conmon-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope.
Feb 25 13:26:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.201424668 +0000 UTC m=+0.181624957 container init fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.209097844 +0000 UTC m=+0.189298113 container start fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 13:26:16 compute-0 happy_carson[400791]: 167 167
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.214754724 +0000 UTC m=+0.194955043 container attach fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:26:16 compute-0 systemd[1]: libpod-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope: Deactivated successfully.
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.215523176 +0000 UTC m=+0.195723445 container died fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-01adfa051313829154f07c00f5756cbfc125bcf8480ffbbbf07d3cea68dcbca9-merged.mount: Deactivated successfully.
Feb 25 13:26:16 compute-0 podman[400774]: 2026-02-25 13:26:16.374200554 +0000 UTC m=+0.354400783 container remove fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 13:26:16 compute-0 systemd[1]: libpod-conmon-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope: Deactivated successfully.
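The container lifecycle above (create, init, start, attach, died, remove within a fraction of a second, each wrapped in transient libpod-*.scope units) is cephadm probing the image before running ceph-volume; the container's only output, "167 167", is the uid and gid of the ceph user inside the image. The run-and-discard pattern behaves like podman run --rm; a hedged sketch, with the image digest copied from the log (equating this with cephadm's internal call is an assumption):

```python
# Sketch of the short-lived run-and-remove pattern producing the
# create/start/died/remove events above. Image digest is from the log.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def run_once(*cmd: str) -> str:
    """podman run --rm: create, attach, and remove the container on exit."""
    return subprocess.run(["podman", "run", "--rm", IMAGE, *cmd],
                          check=True, capture_output=True, text=True).stdout
```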
Feb 25 13:26:16 compute-0 nova_compute[244014]: 2026-02-25 13:26:16.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:16 compute-0 podman[400817]: 2026-02-25 13:26:16.511435987 +0000 UTC m=+0.056097374 container create 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:26:16 compute-0 podman[400817]: 2026-02-25 13:26:16.475718599 +0000 UTC m=+0.020380036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:16 compute-0 systemd[1]: Started libpod-conmon-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope.
Feb 25 13:26:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
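The kernel xfs lines are informational: each bind mount comes from an xfs filesystem created without the bigtime feature, so its inode timestamps are only representable until 2038 (0x7fffffff). A hedged way to check a given mount, assuming an xfsprogs new enough to report the flag:

```python
# Check for the xfs "bigtime" feature behind the 2038 notices above.
# That `xfs_info` prints a bigtime=0/1 field is an assumption about the
# installed xfsprogs version.
import subprocess

def has_bigtime(mountpoint: str) -> bool:
    info = subprocess.run(["xfs_info", mountpoint], check=True,
                          capture_output=True, text=True).stdout
    return "bigtime=1" in info
```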
Feb 25 13:26:16 compute-0 podman[400817]: 2026-02-25 13:26:16.786214962 +0000 UTC m=+0.330876399 container init 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:26:16 compute-0 podman[400817]: 2026-02-25 13:26:16.795208256 +0000 UTC m=+0.339869633 container start 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:16 compute-0 podman[400817]: 2026-02-25 13:26:16.813339748 +0000 UTC m=+0.358001135 container attach 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:26:16 compute-0 ceph-mon[76335]: pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:17 compute-0 fervent_brattain[400833]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:26:17 compute-0 fervent_brattain[400833]: --> All data devices are unavailable
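The batch run is a no-op by design: ceph-volume was handed three LVs ("0 physical, 3 LVM"), found that each already carries ceph.* lv_tags from an earlier prepare (visible in the lvm list output further below), and therefore reports every data device unavailable instead of re-creating OSDs. The same conclusion can be reached with LVM directly; a sketch using the stock lvs JSON report:

```python
# Hedged re-check of why `lvm batch` skipped all three LVs: any LV already
# tagged with ceph.osd_id is an existing OSD. `lvs --reportformat json`
# and the lv_path/lv_tags columns are standard LVM2.
import json
import subprocess

def prepared_osd_lvs() -> list[str]:
    report = json.loads(subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        check=True, capture_output=True, text=True).stdout)
    return [lv["lv_path"] for lv in report["report"][0]["lv"]
            if "ceph.osd_id=" in lv["lv_tags"]]
```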
Feb 25 13:26:17 compute-0 systemd[1]: libpod-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope: Deactivated successfully.
Feb 25 13:26:17 compute-0 podman[400817]: 2026-02-25 13:26:17.256074033 +0000 UTC m=+0.800735420 container died 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 13:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07-merged.mount: Deactivated successfully.
Feb 25 13:26:17 compute-0 podman[400817]: 2026-02-25 13:26:17.561208405 +0000 UTC m=+1.105869782 container remove 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:26:17 compute-0 systemd[1]: libpod-conmon-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope: Deactivated successfully.
Feb 25 13:26:17 compute-0 sudo[400736]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:17 compute-0 sudo[400867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:26:17 compute-0 sudo[400867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:17 compute-0 sudo[400867]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:17 compute-0 sudo[400892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:26:17 compute-0 sudo[400892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.072224427 +0000 UTC m=+0.053319866 container create 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:26:18 compute-0 systemd[1]: Started libpod-conmon-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope.
Feb 25 13:26:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.124433371 +0000 UTC m=+0.105528690 container init 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.131709636 +0000 UTC m=+0.112804955 container start 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.135434631 +0000 UTC m=+0.116529960 container attach 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:18 compute-0 adoring_vaughan[400944]: 167 167
Feb 25 13:26:18 compute-0 systemd[1]: libpod-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope: Deactivated successfully.
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.136398078 +0000 UTC m=+0.117493427 container died 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.04505765 +0000 UTC m=+0.026153019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14fc64b8203ca1328886bf9388603627280efaa971a3ce8d5fadcbd55c63fa0-merged.mount: Deactivated successfully.
Feb 25 13:26:18 compute-0 podman[400928]: 2026-02-25 13:26:18.180902574 +0000 UTC m=+0.161997923 container remove 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:26:18 compute-0 systemd[1]: libpod-conmon-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope: Deactivated successfully.
Feb 25 13:26:18 compute-0 podman[400967]: 2026-02-25 13:26:18.338110581 +0000 UTC m=+0.047869792 container create 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:26:18 compute-0 systemd[1]: Started libpod-conmon-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope.
Feb 25 13:26:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:18 compute-0 podman[400967]: 2026-02-25 13:26:18.317798288 +0000 UTC m=+0.027557499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:18 compute-0 podman[400967]: 2026-02-25 13:26:18.445168062 +0000 UTC m=+0.154927333 container init 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:26:18 compute-0 podman[400967]: 2026-02-25 13:26:18.453816147 +0000 UTC m=+0.163575328 container start 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:18 compute-0 podman[400967]: 2026-02-25 13:26:18.463301844 +0000 UTC m=+0.173061055 container attach 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]: {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     "0": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "devices": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "/dev/loop3"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             ],
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_name": "ceph_lv0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_size": "21470642176",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "name": "ceph_lv0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "tags": {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_name": "ceph",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.crush_device_class": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.encrypted": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.objectstore": "bluestore",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_id": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.vdo": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.with_tpm": "0"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             },
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "vg_name": "ceph_vg0"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         }
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     ],
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     "1": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "devices": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "/dev/loop4"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             ],
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_name": "ceph_lv1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_size": "21470642176",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "name": "ceph_lv1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "tags": {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_name": "ceph",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.crush_device_class": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.encrypted": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.objectstore": "bluestore",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_id": "1",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.vdo": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.with_tpm": "0"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             },
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "vg_name": "ceph_vg1"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         }
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     ],
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     "2": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "devices": [
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "/dev/loop5"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             ],
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_name": "ceph_lv2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_size": "21470642176",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "name": "ceph_lv2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "tags": {
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.cluster_name": "ceph",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.crush_device_class": "",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.encrypted": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.objectstore": "bluestore",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osd_id": "2",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.vdo": "0",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:                 "ceph.with_tpm": "0"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             },
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "type": "block",
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:             "vg_name": "ceph_vg2"
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:         }
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]:     ]
Feb 25 13:26:18 compute-0 tender_chaplygin[400984]: }
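The JSON block above is the payload of ceph-volume lvm list --format json: a map from OSD id to the LVs backing it, with the authoritative metadata (osd_fsid, backing device, objectstore) duplicated in the tags. A short parser for exactly this shape:

```python
# Parse the `ceph-volume lvm list --format json` payload shown above.
# The osd_id -> [lv records] structure is exactly what the log prints.
import json

def osd_map(payload: str) -> dict[int, dict]:
    return {
        int(osd_id): {
            "lv_path": lv["lv_path"],
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            "devices": lv["devices"],
        }
        for osd_id, lvs in json.loads(payload).items()
        for lv in lvs
        if lv["type"] == "block"
    }

# e.g. osd_map(payload)[0]["lv_path"] == "/dev/ceph_vg0/ceph_lv0"
```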
Feb 25 13:26:18 compute-0 systemd[1]: libpod-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope: Deactivated successfully.
Feb 25 13:26:18 compute-0 podman[400993]: 2026-02-25 13:26:18.808208998 +0000 UTC m=+0.036760228 container died 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849-merged.mount: Deactivated successfully.
Feb 25 13:26:18 compute-0 podman[400993]: 2026-02-25 13:26:18.85043747 +0000 UTC m=+0.078988680 container remove 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:26:18 compute-0 systemd[1]: libpod-conmon-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope: Deactivated successfully.
Feb 25 13:26:18 compute-0 sudo[400892]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:18 compute-0 ceph-mon[76335]: pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:18 compute-0 sudo[401008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:26:18 compute-0 sudo[401008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:18 compute-0 sudo[401008]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:18 compute-0 nova_compute[244014]: 2026-02-25 13:26:18.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:19 compute-0 sudo[401033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:26:19 compute-0 sudo[401033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.326170586 +0000 UTC m=+0.048790128 container create 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:19 compute-0 systemd[1]: Started libpod-conmon-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope.
Feb 25 13:26:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.303412333 +0000 UTC m=+0.026031915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.406306817 +0000 UTC m=+0.128926449 container init 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.411240117 +0000 UTC m=+0.133859689 container start 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.414947471 +0000 UTC m=+0.137567103 container attach 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:26:19 compute-0 amazing_beaver[401088]: 167 167
Feb 25 13:26:19 compute-0 systemd[1]: libpod-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope: Deactivated successfully.
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.417130933 +0000 UTC m=+0.139750505 container died 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d27586696e125ab9a0321af42179fd80826e875552d46471f093f866d1ff37a-merged.mount: Deactivated successfully.
Feb 25 13:26:19 compute-0 podman[401072]: 2026-02-25 13:26:19.460184938 +0000 UTC m=+0.182804500 container remove 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:26:19 compute-0 systemd[1]: libpod-conmon-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope: Deactivated successfully.
Feb 25 13:26:19 compute-0 podman[401112]: 2026-02-25 13:26:19.643300656 +0000 UTC m=+0.060622892 container create e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:26:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:19 compute-0 systemd[1]: Started libpod-conmon-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope.
Feb 25 13:26:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:26:19 compute-0 podman[401112]: 2026-02-25 13:26:19.622913221 +0000 UTC m=+0.040235487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:26:19 compute-0 podman[401112]: 2026-02-25 13:26:19.737235737 +0000 UTC m=+0.154558013 container init e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:26:19 compute-0 podman[401112]: 2026-02-25 13:26:19.749845893 +0000 UTC m=+0.167168149 container start e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:26:19 compute-0 podman[401112]: 2026-02-25 13:26:19.756323446 +0000 UTC m=+0.173645752 container attach e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:26:20 compute-0 lvm[401207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:26:20 compute-0 lvm[401207]: VG ceph_vg0 finished
Feb 25 13:26:20 compute-0 lvm[401208]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:26:20 compute-0 lvm[401208]: VG ceph_vg1 finished
Feb 25 13:26:20 compute-0 lvm[401210]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:26:20 compute-0 lvm[401210]: VG ceph_vg2 finished
Feb 25 13:26:20 compute-0 ecstatic_brown[401129]: {}
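raw list prints {} here even though three OSDs exist: in this log the OSDs are LVM-mode, so they are reported by the lvm list call above and the raw scan finds nothing to add. A trivial sketch of how an inventory pass might combine the two listings (the merge itself is an assumption for illustration, not cephadm code):

```python
# Combine the two ceph-volume listings; in this log raw list contributes {}
# and the whole inventory comes from lvm list. Illustrative only.
def merged_inventory(lvm_listing: dict, raw_listing: dict) -> dict:
    inventory = dict(raw_listing)   # {} above
    inventory.update(lvm_listing)   # OSDs 0-2 from the LVM listing
    return inventory
```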
Feb 25 13:26:20 compute-0 systemd[1]: libpod-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Deactivated successfully.
Feb 25 13:26:20 compute-0 systemd[1]: libpod-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Consumed 1.231s CPU time.
Feb 25 13:26:20 compute-0 podman[401112]: 2026-02-25 13:26:20.561990844 +0000 UTC m=+0.979313110 container died e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e-merged.mount: Deactivated successfully.
Feb 25 13:26:20 compute-0 podman[401112]: 2026-02-25 13:26:20.811592618 +0000 UTC m=+1.228914884 container remove e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:26:20 compute-0 systemd[1]: libpod-conmon-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Deactivated successfully.
Feb 25 13:26:20 compute-0 sudo[401033]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:26:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:26:20 compute-0 nova_compute[244014]: 2026-02-25 13:26:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:20 compute-0 ceph-mon[76335]: pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:26:20 compute-0 sudo[401225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:26:20 compute-0 sudo[401225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:26:20 compute-0 sudo[401225]: pam_unix(sudo:session): session closed for user root
Feb 25 13:26:21 compute-0 nova_compute[244014]: 2026-02-25 13:26:21.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:22 compute-0 ceph-mon[76335]: pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:23 compute-0 sshd-session[401250]: Received disconnect from 45.148.10.151 port 9612:11:  [preauth]
Feb 25 13:26:23 compute-0 sshd-session[401250]: Disconnected from authenticating user root 45.148.10.151 port 9612 [preauth]
Feb 25 13:26:23 compute-0 nova_compute[244014]: 2026-02-25 13:26:23.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:24 compute-0 ceph-mon[76335]: pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:26 compute-0 nova_compute[244014]: 2026-02-25 13:26:26.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:26 compute-0 ceph-mon[76335]: pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:28 compute-0 nova_compute[244014]: 2026-02-25 13:26:28.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:28 compute-0 ceph-mon[76335]: pgmap v3197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:28 compute-0 nova_compute[244014]: 2026-02-25 13:26:28.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:30 compute-0 ceph-mon[76335]: pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:26:31
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'images']
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:26:31 compute-0 nova_compute[244014]: 2026-02-25 13:26:31.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:26:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:31 compute-0 podman[401252]: 2026-02-25 13:26:31.739794761 +0000 UTC m=+0.078649751 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 13:26:31 compute-0 podman[401253]: 2026-02-25 13:26:31.802952634 +0000 UTC m=+0.141665870 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:26:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:26:32 compute-0 ceph-mon[76335]: pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:33 compute-0 nova_compute[244014]: 2026-02-25 13:26:33.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:34 compute-0 ceph-mon[76335]: pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:36 compute-0 nova_compute[244014]: 2026-02-25 13:26:36.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:36 compute-0 ceph-mon[76335]: pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:38 compute-0 nova_compute[244014]: 2026-02-25 13:26:38.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:38 compute-0 ceph-mon[76335]: pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:41 compute-0 ceph-mon[76335]: pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:41 compute-0 nova_compute[244014]: 2026-02-25 13:26:41.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:43 compute-0 ceph-mon[76335]: pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:26:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:43 compute-0 nova_compute[244014]: 2026-02-25 13:26:43.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:45 compute-0 ceph-mon[76335]: pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:46 compute-0 nova_compute[244014]: 2026-02-25 13:26:46.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:47 compute-0 ceph-mon[76335]: pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:26:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:26:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:26:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:26:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:26:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:26:48 compute-0 nova_compute[244014]: 2026-02-25 13:26:48.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:49 compute-0 ceph-mon[76335]: pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:51 compute-0 ceph-mon[76335]: pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:51 compute-0 nova_compute[244014]: 2026-02-25 13:26:51.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:53 compute-0 ceph-mon[76335]: pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:53 compute-0 nova_compute[244014]: 2026-02-25 13:26:53.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:26:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:26:55 compute-0 ceph-mon[76335]: pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:26:56 compute-0 nova_compute[244014]: 2026-02-25 13:26:56.899 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:26:57 compute-0 ceph-mon[76335]: pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:26:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786108979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.451 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.658 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:26:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.725 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.725 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:26:57 compute-0 nova_compute[244014]: 2026-02-25 13:26:57.742 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:26:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2786108979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:26:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:26:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978799601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.286 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.318 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.321 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.321 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:26:58 compute-0 nova_compute[244014]: 2026-02-25 13:26:58.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:26:59 compute-0 ceph-mon[76335]: pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:26:59 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/978799601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:26:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:26:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:00 compute-0 nova_compute[244014]: 2026-02-25 13:27:00.323 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:01 compute-0 ceph-mon[76335]: pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:01 compute-0 nova_compute[244014]: 2026-02-25 13:27:01.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:01 compute-0 nova_compute[244014]: 2026-02-25 13:27:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:01 compute-0 nova_compute[244014]: 2026-02-25 13:27:01.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:27:01 compute-0 nova_compute[244014]: 2026-02-25 13:27:01.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:27:01 compute-0 nova_compute[244014]: 2026-02-25 13:27:01.924 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:27:02 compute-0 podman[401343]: 2026-02-25 13:27:02.721715462 +0000 UTC m=+0.063212365 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:27:02 compute-0 podman[401344]: 2026-02-25 13:27:02.789594868 +0000 UTC m=+0.126597814 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:27:03 compute-0 ceph-mon[76335]: pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:03 compute-0 nova_compute[244014]: 2026-02-25 13:27:03.919 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:03 compute-0 nova_compute[244014]: 2026-02-25 13:27:03.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:05 compute-0 ceph-mon[76335]: pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:06 compute-0 nova_compute[244014]: 2026-02-25 13:27:06.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:07 compute-0 ceph-mon[76335]: pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:08 compute-0 nova_compute[244014]: 2026-02-25 13:27:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:08 compute-0 nova_compute[244014]: 2026-02-25 13:27:08.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:27:08 compute-0 nova_compute[244014]: 2026-02-25 13:27:08.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:09 compute-0 ceph-mon[76335]: pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:11 compute-0 ceph-mon[76335]: pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:11 compute-0 nova_compute[244014]: 2026-02-25 13:27:11.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:12 compute-0 ceph-mon[76335]: pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:12 compute-0 nova_compute[244014]: 2026-02-25 13:27:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:13 compute-0 nova_compute[244014]: 2026-02-25 13:27:13.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:14 compute-0 ceph-mon[76335]: pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:14 compute-0 nova_compute[244014]: 2026-02-25 13:27:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:16 compute-0 nova_compute[244014]: 2026-02-25 13:27:16.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:16 compute-0 ceph-mon[76335]: pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:16 compute-0 nova_compute[244014]: 2026-02-25 13:27:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:18 compute-0 ceph-mon[76335]: pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:18 compute-0 nova_compute[244014]: 2026-02-25 13:27:18.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.841969) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039842002, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3448554, "memory_usage": 3505608, "flush_reason": "Manual Compaction"}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039858613, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3381859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65903, "largest_seqno": 67944, "table_properties": {"data_size": 3372522, "index_size": 5894, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18497, "raw_average_key_size": 20, "raw_value_size": 3354061, "raw_average_value_size": 3633, "num_data_blocks": 261, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025812, "oldest_key_time": 1772025812, "file_creation_time": 1772026039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16899 microseconds, and 5081 cpu microseconds.
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.858848) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3381859 bytes OK
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.858928) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861435) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861460) EVENT_LOG_v1 {"time_micros": 1772026039861451, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3440033, prev total WAL file size 3440033, number of live WAL files 2.
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.862900) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(3302KB)], [158(9115KB)]
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039862982, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12716035, "oldest_snapshot_seqno": -1}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8559 keys, 10939832 bytes, temperature: kUnknown
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039929087, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 10939832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10884785, "index_size": 32534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 224067, "raw_average_key_size": 26, "raw_value_size": 10734144, "raw_average_value_size": 1254, "num_data_blocks": 1260, "num_entries": 8559, "num_filter_entries": 8559, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.929393) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10939832 bytes
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.930935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 165.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.9 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9073, records dropped: 514 output_compression: NoCompression
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.930960) EVENT_LOG_v1 {"time_micros": 1772026039930948, "job": 98, "event": "compaction_finished", "compaction_time_micros": 66206, "compaction_time_cpu_micros": 29981, "output_level": 6, "num_output_files": 1, "total_output_size": 10939832, "num_input_records": 9073, "num_output_records": 8559, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039931634, "job": 98, "event": "table_file_deletion", "file_number": 160}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039933110, "job": 98, "event": "table_file_deletion", "file_number": 158}
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.862777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:27:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
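The JOB 98 figures above are internally consistent. RocksDB reports write-amplify as output bytes over bytes read from the start level, and read-write-amplify as (all bytes read + bytes written) over bytes read from the start level; the MB/sec figures are simply input/output bytes over the compaction wall time. A quick check using only numbers taken from the log lines (the formulas reflect my reading of RocksDB's compaction summary, so treat this as an annotation, not an authoritative spec):

in_l0 = 3.2   # MB read from the start level, "in(3.2, ...)"
in_l6 = 8.9   # MB read from the output level, "in(..., 8.9)"
out   = 10.4  # MB written, "out(10.4)"
print(out / in_l0)                    # ~3.2 -> "write-amplify(3.2)"
print((in_l0 + in_l6 + out) / in_l0)  # ~7.0 -> "read-write-amplify(7.0)"

input_bytes  = 12_716_035  # "input_data_size" from compaction_started
output_bytes = 10_939_832  # "total_output_size" from compaction_finished
micros       = 66_206      # "compaction_time_micros"
print(input_bytes / micros)   # ~192.1 -> "MB/sec: 192.1 rd" (bytes/us == MB/s)
print(output_bytes / micros)  # ~165.2 -> "165.2 wr"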
Feb 25 13:27:20 compute-0 ceph-mon[76335]: pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:21 compute-0 sudo[401390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:27:21 compute-0 sudo[401390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:21 compute-0 sudo[401390]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:21 compute-0 sudo[401415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:27:21 compute-0 sudo[401415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:21 compute-0 sudo[401415]: pam_unix(sudo:session): session closed for user root
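The pair of sudo records above is the orchestrator's standard remote-execution pattern: a `which python3` probe, then the host-local copy of the cephadm binary (/var/lib/ceph/<fsid>/cephadm.<digest>) run with a subcommand such as gather-facts. When auditing what cephadm did on a host, it helps to pull just those invocations out of the journal; a minimal sketch (the regex is an assumption fitted to the sudo lines in this log, not a stable format guarantee):

import re, sys

# Extract cephadm invocations from sudo audit lines on stdin, e.g.:
#   journalctl | python3 extract_cephadm.py
CMD = re.compile(r"sudo\[\d+\]: (\S+) : .*COMMAND=(.*)$")

for line in sys.stdin:
    m = CMD.search(line)
    if m and "/cephadm." in m.group(2):
        user, command = m.groups()
        print(f"{user}: {command}")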
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:27:21 compute-0 nova_compute[244014]: 2026-02-25 13:27:21.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:21 compute-0 sudo[401471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:27:21 compute-0 sudo[401471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:21 compute-0 sudo[401471]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:21 compute-0 sudo[401496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:27:21 compute-0 sudo[401496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:27:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:27:21 compute-0 nova_compute[244014]: 2026-02-25 13:27:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:21 compute-0 podman[401533]: 2026-02-25 13:27:21.953930072 +0000 UTC m=+0.050951439 container create 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:27:22 compute-0 systemd[1]: Started libpod-conmon-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope.
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:21.929887703 +0000 UTC m=+0.026908980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:22.052965307 +0000 UTC m=+0.149986584 container init 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:22.058119532 +0000 UTC m=+0.155140729 container start 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:22.061258891 +0000 UTC m=+0.158280118 container attach 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:27:22 compute-0 recursing_brattain[401549]: 167 167
Feb 25 13:27:22 compute-0 systemd[1]: libpod-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope: Deactivated successfully.
Feb 25 13:27:22 compute-0 conmon[401549]: conmon 904697578e47b3b4eb42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope/container/memory.events
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:22.067173188 +0000 UTC m=+0.164194425 container died 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dbee3334885b4f9d9a202c3e1819c942cf16115e5a537044ce0692c4ff7fb63-merged.mount: Deactivated successfully.
Feb 25 13:27:22 compute-0 podman[401533]: 2026-02-25 13:27:22.116907781 +0000 UTC m=+0.213928978 container remove 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:27:22 compute-0 systemd[1]: libpod-conmon-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope: Deactivated successfully.
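Everything from "container create" to "container remove" for 904697578e47... spans roughly 160 ms: cephadm spawned a throwaway container from the Ceph image, read its single line of stdout ("167 167", plausibly the ceph user's UID/GID inside the image, which cephadm uses when chowning host directories; the log does not state this explicitly), and tore it down. Grouping podman events by container ID makes these ephemeral lifecycles easy to audit; a sketch:

import re, sys
from collections import defaultdict

# Group podman journal events by their 64-hex container ID, e.g.:
#   journalctl | python3 lifecycles.py
EVENT = re.compile(r"container (\w+) ([0-9a-f]{64})")

events = defaultdict(list)
for line in sys.stdin:
    m = EVENT.search(line)
    if m:
        events[m.group(2)].append(m.group(1))

for cid, sequence in events.items():
    print(cid[:12], "->", " ".join(sequence))
# 904697578e47 -> create init start attach died remove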
Feb 25 13:27:22 compute-0 podman[401573]: 2026-02-25 13:27:22.281043584 +0000 UTC m=+0.055896409 container create faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:27:22 compute-0 systemd[1]: Started libpod-conmon-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope.
Feb 25 13:27:22 compute-0 podman[401573]: 2026-02-25 13:27:22.259529927 +0000 UTC m=+0.034382822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
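These kernel lines are informational rather than errors: the bind-mounted paths sit on an XFS filesystem created without the bigtime feature, so its inode timestamps are 32-bit and stop at 0x7fffffff seconds past the epoch. Decoding the limit the kernel prints:

from datetime import datetime, timezone

# The classic 32-bit time_t ceiling the kernel warns about above.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

Filesystems made with bigtime enabled (the default on newer xfsprogs) extend the timestamp range far beyond 2038, which silences these messages.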
Feb 25 13:27:22 compute-0 podman[401573]: 2026-02-25 13:27:22.381620702 +0000 UTC m=+0.156473597 container init faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:27:22 compute-0 podman[401573]: 2026-02-25 13:27:22.396246355 +0000 UTC m=+0.171099210 container start faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:27:22 compute-0 podman[401573]: 2026-02-25 13:27:22.401259877 +0000 UTC m=+0.176112782 container attach faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 13:27:22 compute-0 competent_mclaren[401590]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:27:22 compute-0 competent_mclaren[401590]: --> All data devices are unavailable
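"All data devices are unavailable" looks alarming but is almost certainly the idempotent path: the three LVs passed to lvm batch already carry ceph.* LVM tags from an earlier prepare (the lvm list output further below shows OSDs 0-2 living on exactly these LVs), so ceph-volume declines to reuse them and the batch is a no-op. One way to confirm from the host, using the lvs(8) JSON report (field names as documented for lvs; verify on your release):

import json, subprocess

# List LVs that already belong to a Ceph OSD, judged by their LVM tags.
report = json.loads(subprocess.run(
    ["lvs", "-o", "lv_path,lv_tags", "--reportformat", "json"],
    capture_output=True, text=True, check=True,
).stdout)

for lv in report["report"][0]["lv"]:
    if "ceph.osd_id=" in lv["lv_tags"]:
        print(f"{lv['lv_path']} already prepared ({lv['lv_tags']})")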
Feb 25 13:27:22 compute-0 ceph-mon[76335]: pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:22 compute-0 systemd[1]: libpod-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope: Deactivated successfully.
Feb 25 13:27:22 compute-0 podman[401610]: 2026-02-25 13:27:22.939388924 +0000 UTC m=+0.036030268 container died faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a-merged.mount: Deactivated successfully.
Feb 25 13:27:22 compute-0 podman[401610]: 2026-02-25 13:27:22.98742397 +0000 UTC m=+0.084065294 container remove faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:27:22 compute-0 systemd[1]: libpod-conmon-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope: Deactivated successfully.
Feb 25 13:27:23 compute-0 sudo[401496]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:23 compute-0 sudo[401625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:27:23 compute-0 sudo[401625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:23 compute-0 sudo[401625]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:23 compute-0 sudo[401650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:27:23 compute-0 sudo[401650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.419453973 +0000 UTC m=+0.053848451 container create 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:27:23 compute-0 systemd[1]: Started libpod-conmon-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope.
Feb 25 13:27:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.392329817 +0000 UTC m=+0.026724295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.49554188 +0000 UTC m=+0.129936408 container init 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.503580977 +0000 UTC m=+0.137975455 container start 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:27:23 compute-0 inspiring_hodgkin[401705]: 167 167
Feb 25 13:27:23 compute-0 systemd[1]: libpod-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope: Deactivated successfully.
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.511601423 +0000 UTC m=+0.145995971 container attach 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.51219221 +0000 UTC m=+0.146586688 container died 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bd878ac09ff8b2fd416dbdd347d2d228e08988b5d5e029a5826dbabef9be98-merged.mount: Deactivated successfully.
Feb 25 13:27:23 compute-0 podman[401689]: 2026-02-25 13:27:23.559163305 +0000 UTC m=+0.193557783 container remove 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:27:23 compute-0 systemd[1]: libpod-conmon-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope: Deactivated successfully.
Feb 25 13:27:23 compute-0 podman[401729]: 2026-02-25 13:27:23.68301498 +0000 UTC m=+0.043640853 container create 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:27:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:23 compute-0 systemd[1]: Started libpod-conmon-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope.
Feb 25 13:27:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:23 compute-0 podman[401729]: 2026-02-25 13:27:23.6656612 +0000 UTC m=+0.026287123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:23 compute-0 podman[401729]: 2026-02-25 13:27:23.782057785 +0000 UTC m=+0.142683678 container init 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 13:27:23 compute-0 podman[401729]: 2026-02-25 13:27:23.789511576 +0000 UTC m=+0.150137449 container start 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:27:23 compute-0 podman[401729]: 2026-02-25 13:27:23.792914492 +0000 UTC m=+0.153540365 container attach 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:27:24 compute-0 nova_compute[244014]: 2026-02-25 13:27:24.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:24 compute-0 musing_brown[401745]: {
Feb 25 13:27:24 compute-0 musing_brown[401745]:     "0": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:         {
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "devices": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "/dev/loop3"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             ],
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_name": "ceph_lv0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_size": "21470642176",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "name": "ceph_lv0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "tags": {
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_name": "ceph",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.crush_device_class": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.encrypted": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.objectstore": "bluestore",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_id": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.vdo": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.with_tpm": "0"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             },
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "vg_name": "ceph_vg0"
Feb 25 13:27:24 compute-0 musing_brown[401745]:         }
Feb 25 13:27:24 compute-0 musing_brown[401745]:     ],
Feb 25 13:27:24 compute-0 musing_brown[401745]:     "1": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:         {
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "devices": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "/dev/loop4"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             ],
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_name": "ceph_lv1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_size": "21470642176",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "name": "ceph_lv1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "tags": {
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_name": "ceph",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.crush_device_class": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.encrypted": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.objectstore": "bluestore",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_id": "1",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.vdo": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.with_tpm": "0"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             },
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "vg_name": "ceph_vg1"
Feb 25 13:27:24 compute-0 musing_brown[401745]:         }
Feb 25 13:27:24 compute-0 musing_brown[401745]:     ],
Feb 25 13:27:24 compute-0 musing_brown[401745]:     "2": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:         {
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "devices": [
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "/dev/loop5"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             ],
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_name": "ceph_lv2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_size": "21470642176",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "name": "ceph_lv2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "tags": {
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.cluster_name": "ceph",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.crush_device_class": "",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.encrypted": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.objectstore": "bluestore",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osd_id": "2",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.vdo": "0",
Feb 25 13:27:24 compute-0 musing_brown[401745]:                 "ceph.with_tpm": "0"
Feb 25 13:27:24 compute-0 musing_brown[401745]:             },
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "type": "block",
Feb 25 13:27:24 compute-0 musing_brown[401745]:             "vg_name": "ceph_vg2"
Feb 25 13:27:24 compute-0 musing_brown[401745]:         }
Feb 25 13:27:24 compute-0 musing_brown[401745]:     ]
Feb 25 13:27:24 compute-0 musing_brown[401745]: }
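The container stdout above is the JSON produced by `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, each with its underlying device, LV path, size, and ceph.* tags. A small sketch against the structure visible above, condensing it to one line per OSD:

import json

def summarize(report: dict) -> None:
    # report: {"0": [{...lv...}], "1": [...], ...} as printed above
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

summarize(json.load(open("lvm_list.json")))
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441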
Feb 25 13:27:24 compute-0 systemd[1]: libpod-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope: Deactivated successfully.
Feb 25 13:27:24 compute-0 podman[401729]: 2026-02-25 13:27:24.120646411 +0000 UTC m=+0.481272324 container died 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6-merged.mount: Deactivated successfully.
Feb 25 13:27:24 compute-0 podman[401729]: 2026-02-25 13:27:24.179876733 +0000 UTC m=+0.540502646 container remove 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:27:24 compute-0 systemd[1]: libpod-conmon-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope: Deactivated successfully.
Feb 25 13:27:24 compute-0 sudo[401650]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:24 compute-0 sudo[401768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:27:24 compute-0 sudo[401768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:24 compute-0 sudo[401768]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:24 compute-0 sudo[401793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:27:24 compute-0 sudo[401793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
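Having inventoried the LVM-backed OSDs, cephadm immediately repeats the query in raw mode (`raw list --format json`) to pick up any OSDs prepared directly on block devices. The same query can be replayed by hand through the installed cephadm wrapper; a hedged sketch, with the fsid copied from the sudo line above and on the assumption that raw list prints an empty JSON object when, as here, no raw-mode OSDs exist:

import json, subprocess

# Replay the inventory query cephadm just issued (adjust fsid for your host).
cmd = [
    "cephadm", "ceph-volume",
    "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
    "--", "raw", "list", "--format", "json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
print(json.loads(out or "{}") or "no raw-mode OSDs on this host")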
Feb 25 13:27:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.704462438 +0000 UTC m=+0.059743997 container create 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 13:27:24 compute-0 systemd[1]: Started libpod-conmon-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope.
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.670397027 +0000 UTC m=+0.025678686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.794299953 +0000 UTC m=+0.149581522 container init 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.80303362 +0000 UTC m=+0.158315199 container start 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.807157126 +0000 UTC m=+0.162438735 container attach 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:27:24 compute-0 gracious_ellis[401846]: 167 167
Feb 25 13:27:24 compute-0 systemd[1]: libpod-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope: Deactivated successfully.
Feb 25 13:27:24 compute-0 conmon[401846]: conmon 531fd18e8e76ae1eefff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope/container/memory.events
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.812013843 +0000 UTC m=+0.167295402 container died 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7955590d32cf8b16f7aa83e561e1ff3427b3084a3781f857a6570fa92313e393-merged.mount: Deactivated successfully.
Feb 25 13:27:24 compute-0 podman[401830]: 2026-02-25 13:27:24.856326354 +0000 UTC m=+0.211607953 container remove 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:27:24 compute-0 ceph-mon[76335]: pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:24 compute-0 systemd[1]: libpod-conmon-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope: Deactivated successfully.
Feb 25 13:27:25 compute-0 podman[401872]: 2026-02-25 13:27:25.06132116 +0000 UTC m=+0.054210561 container create ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:27:25 compute-0 systemd[1]: Started libpod-conmon-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope.
Feb 25 13:27:25 compute-0 podman[401872]: 2026-02-25 13:27:25.043459875 +0000 UTC m=+0.036349266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:27:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:27:25 compute-0 podman[401872]: 2026-02-25 13:27:25.171959012 +0000 UTC m=+0.164848483 container init ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:27:25 compute-0 podman[401872]: 2026-02-25 13:27:25.181161902 +0000 UTC m=+0.174051273 container start ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:27:25 compute-0 podman[401872]: 2026-02-25 13:27:25.186856772 +0000 UTC m=+0.179746164 container attach ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:27:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:25 compute-0 lvm[401966]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:27:25 compute-0 lvm[401967]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:27:25 compute-0 lvm[401967]: VG ceph_vg1 finished
Feb 25 13:27:25 compute-0 lvm[401966]: VG ceph_vg0 finished
Feb 25 13:27:25 compute-0 lvm[401969]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:27:25 compute-0 lvm[401969]: VG ceph_vg2 finished
Feb 25 13:27:26 compute-0 peaceful_lovelace[401888]: {}
Feb 25 13:27:26 compute-0 systemd[1]: libpod-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Deactivated successfully.
Feb 25 13:27:26 compute-0 podman[401872]: 2026-02-25 13:27:26.083089397 +0000 UTC m=+1.075978758 container died ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:27:26 compute-0 systemd[1]: libpod-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Consumed 1.240s CPU time.
Feb 25 13:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436-merged.mount: Deactivated successfully.
Feb 25 13:27:26 compute-0 podman[401872]: 2026-02-25 13:27:26.126674257 +0000 UTC m=+1.119563618 container remove ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:27:26 compute-0 systemd[1]: libpod-conmon-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Deactivated successfully.
Feb 25 13:27:26 compute-0 sudo[401793]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:27:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:27:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:26 compute-0 sudo[401986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:27:26 compute-0 sudo[401986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:27:26 compute-0 sudo[401986]: pam_unix(sudo:session): session closed for user root
Feb 25 13:27:26 compute-0 nova_compute[244014]: 2026-02-25 13:27:26.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:27 compute-0 ceph-mon[76335]: pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:27:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:29 compute-0 nova_compute[244014]: 2026-02-25 13:27:29.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:29 compute-0 ceph-mon[76335]: pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:27:31
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:27:31 compute-0 ceph-mon[76335]: pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:31 compute-0 nova_compute[244014]: 2026-02-25 13:27:31.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:27:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:27:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:27:33 compute-0 ceph-mon[76335]: pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:33 compute-0 podman[402011]: 2026-02-25 13:27:33.788966685 +0000 UTC m=+0.109561814 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:27:33 compute-0 podman[402012]: 2026-02-25 13:27:33.802825996 +0000 UTC m=+0.120197294 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 25 13:27:34 compute-0 nova_compute[244014]: 2026-02-25 13:27:34.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:35 compute-0 ceph-mon[76335]: pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:36 compute-0 nova_compute[244014]: 2026-02-25 13:27:36.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:37 compute-0 ceph-mon[76335]: pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:39 compute-0 nova_compute[244014]: 2026-02-25 13:27:39.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:39 compute-0 ceph-mon[76335]: pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:41 compute-0 ceph-mon[76335]: pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:41 compute-0 nova_compute[244014]: 2026-02-25 13:27:41.700 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:27:43 compute-0 ceph-mon[76335]: pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:44 compute-0 nova_compute[244014]: 2026-02-25 13:27:44.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:45 compute-0 ceph-mon[76335]: pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:46 compute-0 nova_compute[244014]: 2026-02-25 13:27:46.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:47 compute-0 ceph-mon[76335]: pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:27:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:27:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:27:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:27:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:27:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:27:49 compute-0 nova_compute[244014]: 2026-02-25 13:27:49.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:49 compute-0 ceph-mon[76335]: pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:50 compute-0 ceph-mon[76335]: pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:51 compute-0 nova_compute[244014]: 2026-02-25 13:27:51.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:52 compute-0 ceph-mon[76335]: pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:54 compute-0 nova_compute[244014]: 2026-02-25 13:27:54.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:54 compute-0 ceph-mon[76335]: pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:27:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:27:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:27:56 compute-0 ceph-mon[76335]: pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:27:56 compute-0 nova_compute[244014]: 2026-02-25 13:27:56.920 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:27:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:27:57 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104010315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.433 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.611 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.613 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.689 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.690 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:27:57 compute-0 nova_compute[244014]: 2026-02-25 13:27:57.706 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:27:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:57 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4104010315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:27:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:27:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3195868700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:27:58 compute-0 nova_compute[244014]: 2026-02-25 13:27:58.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:27:58 compute-0 nova_compute[244014]: 2026-02-25 13:27:58.285 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:27:58 compute-0 nova_compute[244014]: 2026-02-25 13:27:58.298 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:27:58 compute-0 nova_compute[244014]: 2026-02-25 13:27:58.300 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:27:58 compute-0 nova_compute[244014]: 2026-02-25 13:27:58.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:27:58 compute-0 ceph-mon[76335]: pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:27:58 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3195868700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:27:59 compute-0 nova_compute[244014]: 2026-02-25 13:27:59.029 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:27:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:27:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:00 compute-0 ceph-mon[76335]: pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:01 compute-0 nova_compute[244014]: 2026-02-25 13:28:01.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:02 compute-0 nova_compute[244014]: 2026-02-25 13:28:02.301 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:02 compute-0 ceph-mon[76335]: pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:03 compute-0 nova_compute[244014]: 2026-02-25 13:28:03.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:03 compute-0 nova_compute[244014]: 2026-02-25 13:28:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:03 compute-0 nova_compute[244014]: 2026-02-25 13:28:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:28:03 compute-0 nova_compute[244014]: 2026-02-25 13:28:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:28:03 compute-0 nova_compute[244014]: 2026-02-25 13:28:03.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:28:04 compute-0 nova_compute[244014]: 2026-02-25 13:28:04.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:04 compute-0 podman[402100]: 2026-02-25 13:28:04.715852333 +0000 UTC m=+0.053510331 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:28:04 compute-0 podman[402101]: 2026-02-25 13:28:04.757666223 +0000 UTC m=+0.095862197 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:28:05 compute-0 ceph-mon[76335]: pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:06 compute-0 nova_compute[244014]: 2026-02-25 13:28:06.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:07 compute-0 ceph-mon[76335]: pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:09 compute-0 nova_compute[244014]: 2026-02-25 13:28:09.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:09 compute-0 ceph-mon[76335]: pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:10 compute-0 nova_compute[244014]: 2026-02-25 13:28:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:10 compute-0 nova_compute[244014]: 2026-02-25 13:28:10.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:28:11 compute-0 ceph-mon[76335]: pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:11 compute-0 nova_compute[244014]: 2026-02-25 13:28:11.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:13 compute-0 ceph-mon[76335]: pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:13 compute-0 nova_compute[244014]: 2026-02-25 13:28:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:14 compute-0 nova_compute[244014]: 2026-02-25 13:28:14.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:14 compute-0 nova_compute[244014]: 2026-02-25 13:28:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:15 compute-0 ceph-mon[76335]: pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:16 compute-0 nova_compute[244014]: 2026-02-25 13:28:16.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:17 compute-0 ceph-mon[76335]: pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:18 compute-0 nova_compute[244014]: 2026-02-25 13:28:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:19 compute-0 nova_compute[244014]: 2026-02-25 13:28:19.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:19 compute-0 ceph-mon[76335]: pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:21 compute-0 ceph-mon[76335]: pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:21 compute-0 nova_compute[244014]: 2026-02-25 13:28:21.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:21 compute-0 nova_compute[244014]: 2026-02-25 13:28:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:23 compute-0 ceph-mon[76335]: pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:24 compute-0 nova_compute[244014]: 2026-02-25 13:28:24.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:25 compute-0 ceph-mon[76335]: pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:26 compute-0 sudo[402144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:28:26 compute-0 sudo[402144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:26 compute-0 sudo[402144]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:26 compute-0 sudo[402169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:28:26 compute-0 sudo[402169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:26 compute-0 nova_compute[244014]: 2026-02-25 13:28:26.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:26 compute-0 sudo[402169]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:28:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:28:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:28:27 compute-0 sudo[402225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:28:27 compute-0 sudo[402225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:27 compute-0 sudo[402225]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:27 compute-0 sudo[402250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:28:27 compute-0 sudo[402250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:27 compute-0 podman[402287]: 2026-02-25 13:28:27.352800631 +0000 UTC m=+0.035293657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:27 compute-0 podman[402287]: 2026-02-25 13:28:27.585368935 +0000 UTC m=+0.267861951 container create 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:28:27 compute-0 ceph-mon[76335]: pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:28:27 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:28:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:27 compute-0 systemd[1]: Started libpod-conmon-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope.
Feb 25 13:28:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:27 compute-0 podman[402287]: 2026-02-25 13:28:27.871073448 +0000 UTC m=+0.553566484 container init 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:28:27 compute-0 podman[402287]: 2026-02-25 13:28:27.880504044 +0000 UTC m=+0.562997040 container start 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:28:27 compute-0 zealous_haslett[402304]: 167 167
Feb 25 13:28:27 compute-0 systemd[1]: libpod-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope: Deactivated successfully.
Feb 25 13:28:28 compute-0 podman[402287]: 2026-02-25 13:28:28.017837959 +0000 UTC m=+0.700330975 container attach 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:28:28 compute-0 podman[402287]: 2026-02-25 13:28:28.019113335 +0000 UTC m=+0.701606361 container died 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a09d9e9d9f7ba9d62d30ccc0253f316a167e74e0825d5c6d96a51032530354a1-merged.mount: Deactivated successfully.
Feb 25 13:28:28 compute-0 podman[402287]: 2026-02-25 13:28:28.63722165 +0000 UTC m=+1.319714666 container remove 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:28:28 compute-0 systemd[1]: libpod-conmon-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope: Deactivated successfully.
Feb 25 13:28:28 compute-0 ceph-mon[76335]: pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:28 compute-0 podman[402329]: 2026-02-25 13:28:28.802593557 +0000 UTC m=+0.038081795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:28 compute-0 podman[402329]: 2026-02-25 13:28:28.986418195 +0000 UTC m=+0.221906453 container create 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:29 compute-0 systemd[1]: Started libpod-conmon-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope.
Feb 25 13:28:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:29 compute-0 podman[402329]: 2026-02-25 13:28:29.50107557 +0000 UTC m=+0.736563828 container init 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:28:29 compute-0 podman[402329]: 2026-02-25 13:28:29.510478515 +0000 UTC m=+0.745966763 container start 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:28:29 compute-0 podman[402329]: 2026-02-25 13:28:29.610323114 +0000 UTC m=+0.845811372 container attach 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:28:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.879 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.881 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.882 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:28:29 compute-0 great_margulis[402346]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:28:29 compute-0 great_margulis[402346]: --> All data devices are unavailable
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.954 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.972 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.972 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.974 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 25 13:28:29 compute-0 nova_compute[244014]: 2026-02-25 13:28:29.974 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 25 13:28:29 compute-0 systemd[1]: libpod-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope: Deactivated successfully.
Feb 25 13:28:29 compute-0 podman[402329]: 2026-02-25 13:28:29.983589998 +0000 UTC m=+1.219078306 container died 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b-merged.mount: Deactivated successfully.
Feb 25 13:28:30 compute-0 podman[402329]: 2026-02-25 13:28:30.348028234 +0000 UTC m=+1.583516482 container remove 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:28:30 compute-0 systemd[1]: libpod-conmon-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope: Deactivated successfully.
Feb 25 13:28:30 compute-0 sudo[402250]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:30 compute-0 sudo[402378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:28:30 compute-0 sudo[402378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:30 compute-0 sudo[402378]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:30 compute-0 sudo[402403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:28:30 compute-0 sudo[402403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:30 compute-0 podman[402440]: 2026-02-25 13:28:30.736786645 +0000 UTC m=+0.020030936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:30 compute-0 podman[402440]: 2026-02-25 13:28:30.765856636 +0000 UTC m=+0.049101007 container create 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:28:30 compute-0 ceph-mon[76335]: pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:30 compute-0 nova_compute[244014]: 2026-02-25 13:28:30.970 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:30 compute-0 systemd[1]: Started libpod-conmon-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope.
Feb 25 13:28:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:31 compute-0 podman[402440]: 2026-02-25 13:28:31.140657484 +0000 UTC m=+0.423901765 container init 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:28:31 compute-0 podman[402440]: 2026-02-25 13:28:31.147286341 +0000 UTC m=+0.430530602 container start 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:28:31 compute-0 zen_swanson[402457]: 167 167
Feb 25 13:28:31 compute-0 systemd[1]: libpod-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope: Deactivated successfully.
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:28:31
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:28:31 compute-0 podman[402440]: 2026-02-25 13:28:31.218767638 +0000 UTC m=+0.502012009 container attach 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:28:31 compute-0 podman[402440]: 2026-02-25 13:28:31.219739435 +0000 UTC m=+0.502983696 container died 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-19bbc5d8d720a1c326d1f00cdfc4db54fa38b27467ec4c45d3f445254dc12d07-merged.mount: Deactivated successfully.
Feb 25 13:28:31 compute-0 podman[402440]: 2026-02-25 13:28:31.264964042 +0000 UTC m=+0.548208303 container remove 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:28:31 compute-0 systemd[1]: libpod-conmon-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope: Deactivated successfully.
Feb 25 13:28:31 compute-0 podman[402483]: 2026-02-25 13:28:31.392312556 +0000 UTC m=+0.022657070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:31 compute-0 podman[402483]: 2026-02-25 13:28:31.520559315 +0000 UTC m=+0.150903759 container create 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:28:31 compute-0 systemd[1]: Started libpod-conmon-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope.
Feb 25 13:28:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:31 compute-0 podman[402483]: 2026-02-25 13:28:31.643256317 +0000 UTC m=+0.273600781 container init 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:31 compute-0 podman[402483]: 2026-02-25 13:28:31.652034315 +0000 UTC m=+0.282378789 container start 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:28:31 compute-0 podman[402483]: 2026-02-25 13:28:31.656146241 +0000 UTC m=+0.286490685 container attach 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:28:31 compute-0 nova_compute[244014]: 2026-02-25 13:28:31.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:31 compute-0 elastic_merkle[402500]: {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     "0": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "devices": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "/dev/loop3"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             ],
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_name": "ceph_lv0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_size": "21470642176",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "name": "ceph_lv0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "tags": {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_name": "ceph",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.crush_device_class": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.encrypted": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.objectstore": "bluestore",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_id": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.vdo": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.with_tpm": "0"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             },
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "vg_name": "ceph_vg0"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         }
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     ],
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     "1": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "devices": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "/dev/loop4"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             ],
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_name": "ceph_lv1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_size": "21470642176",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "name": "ceph_lv1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "tags": {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_name": "ceph",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.crush_device_class": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.encrypted": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.objectstore": "bluestore",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_id": "1",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.vdo": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.with_tpm": "0"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             },
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "vg_name": "ceph_vg1"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         }
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     ],
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     "2": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "devices": [
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "/dev/loop5"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             ],
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_name": "ceph_lv2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_size": "21470642176",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "name": "ceph_lv2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "tags": {
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.cluster_name": "ceph",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.crush_device_class": "",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.encrypted": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.objectstore": "bluestore",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osd_id": "2",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.vdo": "0",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:                 "ceph.with_tpm": "0"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             },
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "type": "block",
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:             "vg_name": "ceph_vg2"
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:         }
Feb 25 13:28:31 compute-0 elastic_merkle[402500]:     ]
Feb 25 13:28:31 compute-0 elastic_merkle[402500]: }
Feb 25 13:28:32 compute-0 systemd[1]: libpod-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope: Deactivated successfully.
Feb 25 13:28:32 compute-0 podman[402483]: 2026-02-25 13:28:32.006596292 +0000 UTC m=+0.636940766 container died 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084-merged.mount: Deactivated successfully.
Feb 25 13:28:32 compute-0 podman[402483]: 2026-02-25 13:28:32.061411609 +0000 UTC m=+0.691756053 container remove 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:28:32 compute-0 systemd[1]: libpod-conmon-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope: Deactivated successfully.
Feb 25 13:28:32 compute-0 sudo[402403]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:32 compute-0 sudo[402522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:28:32 compute-0 sudo[402522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:32 compute-0 sudo[402522]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:32 compute-0 sudo[402547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:28:32 compute-0 sudo[402547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:28:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.494600524 +0000 UTC m=+0.021823796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.512244482 +0000 UTC m=+0.039467674 container create decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 13:28:32 compute-0 systemd[1]: Started libpod-conmon-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope.
Feb 25 13:28:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.598239569 +0000 UTC m=+0.125462872 container init decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.603146018 +0000 UTC m=+0.130369260 container start decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 13:28:32 compute-0 confident_chaum[402602]: 167 167
Feb 25 13:28:32 compute-0 systemd[1]: libpod-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope: Deactivated successfully.
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.607632885 +0000 UTC m=+0.134856167 container attach decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.609740114 +0000 UTC m=+0.136963336 container died decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d7839bdfe70a38e9f7ca5c4a1640090fabac713c285df47fafba65177fad48-merged.mount: Deactivated successfully.
Feb 25 13:28:32 compute-0 podman[402584]: 2026-02-25 13:28:32.651594585 +0000 UTC m=+0.178817777 container remove decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:28:32 compute-0 systemd[1]: libpod-conmon-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope: Deactivated successfully.
Feb 25 13:28:32 compute-0 podman[402628]: 2026-02-25 13:28:32.777442727 +0000 UTC m=+0.038438316 container create 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:28:32 compute-0 systemd[1]: Started libpod-conmon-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope.
Feb 25 13:28:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:28:32 compute-0 podman[402628]: 2026-02-25 13:28:32.85303154 +0000 UTC m=+0.114027039 container init 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:28:32 compute-0 podman[402628]: 2026-02-25 13:28:32.75911078 +0000 UTC m=+0.020106299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:28:32 compute-0 podman[402628]: 2026-02-25 13:28:32.859662318 +0000 UTC m=+0.120657817 container start 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:28:32 compute-0 podman[402628]: 2026-02-25 13:28:32.863206178 +0000 UTC m=+0.124201677 container attach 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:28:32 compute-0 ceph-mon[76335]: pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:33 compute-0 lvm[402720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:28:33 compute-0 lvm[402720]: VG ceph_vg0 finished
Feb 25 13:28:33 compute-0 lvm[402723]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:28:33 compute-0 lvm[402723]: VG ceph_vg1 finished
Feb 25 13:28:33 compute-0 lvm[402725]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:28:33 compute-0 lvm[402725]: VG ceph_vg2 finished
Feb 25 13:28:33 compute-0 crazy_franklin[402644]: {}
Feb 25 13:28:33 compute-0 systemd[1]: libpod-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Deactivated successfully.
Feb 25 13:28:33 compute-0 systemd[1]: libpod-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Consumed 1.043s CPU time.
Feb 25 13:28:33 compute-0 podman[402628]: 2026-02-25 13:28:33.606622209 +0000 UTC m=+0.867617698 container died 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546-merged.mount: Deactivated successfully.
Feb 25 13:28:33 compute-0 podman[402628]: 2026-02-25 13:28:33.65877047 +0000 UTC m=+0.919765939 container remove 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:28:33 compute-0 systemd[1]: libpod-conmon-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Deactivated successfully.
Feb 25 13:28:33 compute-0 sudo[402547]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:28:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:28:33 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:33 compute-0 sudo[402741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:28:33 compute-0 sudo[402741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:28:33 compute-0 sudo[402741]: pam_unix(sudo:session): session closed for user root
Feb 25 13:28:34 compute-0 nova_compute[244014]: 2026-02-25 13:28:34.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:34 compute-0 ceph-mon[76335]: pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:28:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:35 compute-0 podman[402766]: 2026-02-25 13:28:35.731493857 +0000 UTC m=+0.071886580 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:28:35 compute-0 podman[402767]: 2026-02-25 13:28:35.770513298 +0000 UTC m=+0.107460954 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:28:36 compute-0 nova_compute[244014]: 2026-02-25 13:28:36.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:36 compute-0 ceph-mon[76335]: pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:37 compute-0 nova_compute[244014]: 2026-02-25 13:28:37.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:38 compute-0 ceph-mon[76335]: pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:39 compute-0 nova_compute[244014]: 2026-02-25 13:28:39.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:41 compute-0 ceph-mon[76335]: pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:41 compute-0 nova_compute[244014]: 2026-02-25 13:28:41.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:43 compute-0 ceph-mon[76335]: pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:28:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:44 compute-0 nova_compute[244014]: 2026-02-25 13:28:44.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:45 compute-0 ceph-mon[76335]: pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:46 compute-0 nova_compute[244014]: 2026-02-25 13:28:46.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:46 compute-0 nova_compute[244014]: 2026-02-25 13:28:46.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:46 compute-0 nova_compute[244014]: 2026-02-25 13:28:46.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:28:47 compute-0 ceph-mon[76335]: pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:28:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:28:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:28:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:28:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:28:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:28:49 compute-0 nova_compute[244014]: 2026-02-25 13:28:49.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:49 compute-0 ceph-mon[76335]: pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:51 compute-0 ceph-mon[76335]: pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:51 compute-0 nova_compute[244014]: 2026-02-25 13:28:51.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:53 compute-0 ceph-mon[76335]: pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:54 compute-0 nova_compute[244014]: 2026-02-25 13:28:54.075 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:54 compute-0 nova_compute[244014]: 2026-02-25 13:28:54.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:54 compute-0 nova_compute[244014]: 2026-02-25 13:28:54.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:28:54 compute-0 nova_compute[244014]: 2026-02-25 13:28:54.910 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:28:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:28:55 compute-0 ceph-mon[76335]: pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:56 compute-0 nova_compute[244014]: 2026-02-25 13:28:56.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:57 compute-0 ceph-mon[76335]: pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:28:58 compute-0 nova_compute[244014]: 2026-02-25 13:28:58.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:28:59 compute-0 ceph-mon[76335]: pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:28:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:28:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:28:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321926870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.474 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.600 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.601 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.602 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.602 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.664 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.665 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:28:59 compute-0 nova_compute[244014]: 2026-02-25 13:28:59.679 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:28:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:29:00 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574394186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:29:00 compute-0 nova_compute[244014]: 2026-02-25 13:29:00.227 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:29:00 compute-0 nova_compute[244014]: 2026-02-25 13:29:00.233 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:29:00 compute-0 nova_compute[244014]: 2026-02-25 13:29:00.250 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:29:00 compute-0 nova_compute[244014]: 2026-02-25 13:29:00.252 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:29:00 compute-0 nova_compute[244014]: 2026-02-25 13:29:00.253 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:29:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2321926870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:29:00 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1574394186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:29:01 compute-0 ceph-mon[76335]: pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:01 compute-0 nova_compute[244014]: 2026-02-25 13:29:01.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:02 compute-0 nova_compute[244014]: 2026-02-25 13:29:02.235 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:03 compute-0 ceph-mon[76335]: pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:03 compute-0 nova_compute[244014]: 2026-02-25 13:29:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:03 compute-0 nova_compute[244014]: 2026-02-25 13:29:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:29:03 compute-0 nova_compute[244014]: 2026-02-25 13:29:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:29:03 compute-0 nova_compute[244014]: 2026-02-25 13:29:03.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:29:04 compute-0 nova_compute[244014]: 2026-02-25 13:29:04.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:04 compute-0 nova_compute[244014]: 2026-02-25 13:29:04.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:05 compute-0 ceph-mon[76335]: pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:06 compute-0 podman[402852]: 2026-02-25 13:29:06.720595799 +0000 UTC m=+0.063478532 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:29:06 compute-0 nova_compute[244014]: 2026-02-25 13:29:06.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:06 compute-0 podman[402853]: 2026-02-25 13:29:06.739486382 +0000 UTC m=+0.083336363 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 25 13:29:07 compute-0 ceph-mon[76335]: pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:09 compute-0 nova_compute[244014]: 2026-02-25 13:29:09.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:09 compute-0 ceph-mon[76335]: pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:10 compute-0 nova_compute[244014]: 2026-02-25 13:29:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:10 compute-0 nova_compute[244014]: 2026-02-25 13:29:10.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:29:11 compute-0 ceph-mon[76335]: pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:11 compute-0 nova_compute[244014]: 2026-02-25 13:29:11.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:29:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1330 writes, 6014 keys, 1330 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
                                           Interval WAL: 1330 writes, 1330 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.6      1.58              0.23        49    0.032       0      0       0.0       0.0
                                             L6      1/0   10.43 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    100.9     86.0      4.87              1.22        48    0.102    321K    25K       0.0       0.0
                                            Sum      1/0   10.43 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     76.2     77.8      6.46              1.45        97    0.067    321K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     61.7     63.3      0.93              0.17        10    0.093     44K   2519       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    100.9     86.0      4.87              1.22        48    0.102    321K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.7      1.58              0.23        48    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.081, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.49 GB write, 0.08 MB/s write, 0.48 GB read, 0.08 MB/s read, 6.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 56.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000411 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3566,54.67 MB,17.9839%) FilterBlock(98,892.36 KB,0.286659%) IndexBlock(98,1.42 MB,0.468219%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 13:29:13 compute-0 ceph-mon[76335]: pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:14 compute-0 nova_compute[244014]: 2026-02-25 13:29:14.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:14 compute-0 ceph-mon[76335]: pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:14 compute-0 nova_compute[244014]: 2026-02-25 13:29:14.651 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:14 compute-0 nova_compute[244014]: 2026-02-25 13:29:14.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:16 compute-0 nova_compute[244014]: 2026-02-25 13:29:16.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:16 compute-0 ceph-mon[76335]: pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:16 compute-0 nova_compute[244014]: 2026-02-25 13:29:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:18 compute-0 sshd-session[402894]: Invalid user ethereum from 80.94.92.186 port 51294
Feb 25 13:29:18 compute-0 sshd-session[402894]: Connection closed by invalid user ethereum 80.94.92.186 port 51294 [preauth]
Feb 25 13:29:18 compute-0 ceph-mon[76335]: pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:19 compute-0 nova_compute[244014]: 2026-02-25 13:29:19.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:20 compute-0 ceph-mon[76335]: pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:20 compute-0 nova_compute[244014]: 2026-02-25 13:29:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:21 compute-0 nova_compute[244014]: 2026-02-25 13:29:21.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:22 compute-0 ceph-mon[76335]: pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:23 compute-0 nova_compute[244014]: 2026-02-25 13:29:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:29:24 compute-0 nova_compute[244014]: 2026-02-25 13:29:24.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:24 compute-0 ceph-mon[76335]: pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:26 compute-0 nova_compute[244014]: 2026-02-25 13:29:26.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:26 compute-0 ceph-mon[76335]: pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:28 compute-0 ceph-mon[76335]: pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:29 compute-0 nova_compute[244014]: 2026-02-25 13:29:29.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:30 compute-0 ceph-mon[76335]: pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:29:31
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:29:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:31 compute-0 nova_compute[244014]: 2026-02-25 13:29:31.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:29:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:29:32 compute-0 ceph-mon[76335]: pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:34 compute-0 sudo[402896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:29:34 compute-0 sudo[402896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:34 compute-0 sudo[402896]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:34 compute-0 sudo[402921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:29:34 compute-0 sudo[402921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:34 compute-0 nova_compute[244014]: 2026-02-25 13:29:34.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:34 compute-0 sudo[402921]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:29:34 compute-0 sudo[402978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:29:34 compute-0 sudo[402978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:34 compute-0 sudo[402978]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:34 compute-0 sudo[403003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:29:34 compute-0 sudo[403003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:34 compute-0 ceph-mon[76335]: pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:29:34 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.058363575 +0000 UTC m=+0.052515364 container create ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:29:35 compute-0 systemd[1]: Started libpod-conmon-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope.
Feb 25 13:29:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.038457313 +0000 UTC m=+0.032609122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.133118754 +0000 UTC m=+0.127270623 container init ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.14006207 +0000 UTC m=+0.134213889 container start ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.144511616 +0000 UTC m=+0.138663485 container attach ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:29:35 compute-0 stoic_solomon[403058]: 167 167
Feb 25 13:29:35 compute-0 systemd[1]: libpod-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope: Deactivated successfully.
Feb 25 13:29:35 compute-0 conmon[403058]: conmon ae5e88320468ea62ff6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope/container/memory.events
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.147778748 +0000 UTC m=+0.141930597 container died ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2212dda2bf8fb1a948e631fb56e3a54f167699bb40f8b55f02860109d76f51a7-merged.mount: Deactivated successfully.
Feb 25 13:29:35 compute-0 podman[403042]: 2026-02-25 13:29:35.199715394 +0000 UTC m=+0.193867223 container remove ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:29:35 compute-0 systemd[1]: libpod-conmon-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope: Deactivated successfully.
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.393306777 +0000 UTC m=+0.059976693 container create 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:29:35 compute-0 systemd[1]: Started libpod-conmon-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope.
Feb 25 13:29:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.366913203 +0000 UTC m=+0.033583169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.475611051 +0000 UTC m=+0.142281027 container init 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.482476904 +0000 UTC m=+0.149146820 container start 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.54077321 +0000 UTC m=+0.207443136 container attach 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:29:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:35 compute-0 bold_ritchie[403097]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:29:35 compute-0 bold_ritchie[403097]: --> All data devices are unavailable
Feb 25 13:29:35 compute-0 systemd[1]: libpod-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope: Deactivated successfully.
Feb 25 13:29:35 compute-0 podman[403081]: 2026-02-25 13:29:35.959434144 +0000 UTC m=+0.626104100 container died 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b-merged.mount: Deactivated successfully.
Feb 25 13:29:36 compute-0 podman[403081]: 2026-02-25 13:29:36.04148594 +0000 UTC m=+0.708155886 container remove 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:29:36 compute-0 systemd[1]: libpod-conmon-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope: Deactivated successfully.
Feb 25 13:29:36 compute-0 sudo[403003]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:36 compute-0 sudo[403129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:29:36 compute-0 sudo[403129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:36 compute-0 sudo[403129]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:36 compute-0 sudo[403154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:29:36 compute-0 sudo[403154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.545458213 +0000 UTC m=+0.076936582 container create bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:29:36 compute-0 systemd[1]: Started libpod-conmon-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope.
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.502387698 +0000 UTC m=+0.033866127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.634032983 +0000 UTC m=+0.165511352 container init bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.641954827 +0000 UTC m=+0.173433186 container start bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.647001839 +0000 UTC m=+0.178480178 container attach bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:36 compute-0 awesome_brown[403208]: 167 167
Feb 25 13:29:36 compute-0 systemd[1]: libpod-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope: Deactivated successfully.
Feb 25 13:29:36 compute-0 conmon[403208]: conmon bc68d3c15ef2f6e97aec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope/container/memory.events
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.650451127 +0000 UTC m=+0.181929466 container died bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cf06cc1e162033785b54d299adf9f8a4b6d45b5464b66565c554c47243206ec-merged.mount: Deactivated successfully.
Feb 25 13:29:36 compute-0 podman[403191]: 2026-02-25 13:29:36.699900942 +0000 UTC m=+0.231379311 container remove bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:29:36 compute-0 systemd[1]: libpod-conmon-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope: Deactivated successfully.
Feb 25 13:29:36 compute-0 nova_compute[244014]: 2026-02-25 13:29:36.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:36 compute-0 podman[403231]: 2026-02-25 13:29:36.863847149 +0000 UTC m=+0.054799247 container create f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 13:29:36 compute-0 systemd[1]: Started libpod-conmon-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope.
Feb 25 13:29:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:36 compute-0 podman[403231]: 2026-02-25 13:29:36.840908802 +0000 UTC m=+0.031860880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:36 compute-0 podman[403231]: 2026-02-25 13:29:36.971361423 +0000 UTC m=+0.162313551 container init f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:29:36 compute-0 podman[403247]: 2026-02-25 13:29:36.974360858 +0000 UTC m=+0.068943937 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 13:29:36 compute-0 podman[403231]: 2026-02-25 13:29:36.980579144 +0000 UTC m=+0.171531232 container start f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:29:37 compute-0 podman[403250]: 2026-02-25 13:29:37.007952856 +0000 UTC m=+0.104476979 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 13:29:37 compute-0 podman[403231]: 2026-02-25 13:29:37.00666061 +0000 UTC m=+0.197612728 container attach f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:29:37 compute-0 ceph-mon[76335]: pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]: {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     "0": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "devices": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "/dev/loop3"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             ],
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_name": "ceph_lv0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_size": "21470642176",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "name": "ceph_lv0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "tags": {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_name": "ceph",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.crush_device_class": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.encrypted": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.objectstore": "bluestore",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_id": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.vdo": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.with_tpm": "0"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             },
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "vg_name": "ceph_vg0"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         }
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     ],
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     "1": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "devices": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "/dev/loop4"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             ],
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_name": "ceph_lv1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_size": "21470642176",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "name": "ceph_lv1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "tags": {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_name": "ceph",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.crush_device_class": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.encrypted": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.objectstore": "bluestore",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_id": "1",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.vdo": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.with_tpm": "0"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             },
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "vg_name": "ceph_vg1"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         }
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     ],
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     "2": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "devices": [
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "/dev/loop5"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             ],
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_name": "ceph_lv2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_size": "21470642176",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "name": "ceph_lv2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "tags": {
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.cluster_name": "ceph",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.crush_device_class": "",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.encrypted": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.objectstore": "bluestore",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osd_id": "2",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.vdo": "0",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:                 "ceph.with_tpm": "0"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             },
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "type": "block",
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:             "vg_name": "ceph_vg2"
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:         }
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]:     ]
Feb 25 13:29:37 compute-0 peaceful_poincare[403251]: }
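
The JSON block that peaceful_poincare printed above is, by its shape, the output of ceph-volume lvm list --format json: the top-level keys are OSD ids, each mapping to a list of logical volumes whose "tags" carry the Ceph metadata. A minimal sketch of digesting it, assuming python3 and the ceph-volume CLI on the OSD host (the orchestrator runs the same query inside a container):

    import json
    import subprocess

    # Same query the orchestrator issued; needs root and the ceph-volume CLI.
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            # e.g. 0 /dev/loop3 /dev/ceph_vg0/ceph_lv0 d19afe3c-...
            print(osd_id, lv["devices"][0], lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"])
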
Feb 25 13:29:37 compute-0 systemd[1]: libpod-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope: Deactivated successfully.
Feb 25 13:29:37 compute-0 podman[403231]: 2026-02-25 13:29:37.299937997 +0000 UTC m=+0.490890055 container died f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0-merged.mount: Deactivated successfully.
Feb 25 13:29:37 compute-0 podman[403231]: 2026-02-25 13:29:37.337659231 +0000 UTC m=+0.528611299 container remove f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:29:37 compute-0 systemd[1]: libpod-conmon-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope: Deactivated successfully.
Feb 25 13:29:37 compute-0 sudo[403154]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:37 compute-0 sudo[403314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:29:37 compute-0 sudo[403314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:37 compute-0 sudo[403314]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:37 compute-0 sudo[403339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:29:37 compute-0 sudo[403339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
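
This sudo line shows how the cephadm orchestrator drives the host: it logs in as ceph-admin and sudo-runs the staged copy of cephadm under /var/lib/ceph/<fsid>/ with "ceph-volume ... -- raw list --format json", which spins up the short-lived containers that follow (vigorous_poitras, elated_gauss). The raw list run prints an empty {} further below, since these three OSDs are LVM-backed and are reported by lvm list instead. A hand-run equivalent, sketched under the assumption that the cephadm CLI is installed on the host:

    import subprocess

    fsid = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Reproduces the mgr-driven call above (minus its --timeout); cephadm
    # launches a one-shot container from the pinned image to run ceph-volume.
    subprocess.run(
        ["cephadm", "--image", image, "ceph-volume", "--fsid", fsid,
         "--", "raw", "list", "--format", "json"],
        check=True,
    )
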
Feb 25 13:29:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:37 compute-0 podman[403378]: 2026-02-25 13:29:37.797353375 +0000 UTC m=+0.028715281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:37 compute-0 podman[403378]: 2026-02-25 13:29:37.924636217 +0000 UTC m=+0.155998133 container create 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:29:38 compute-0 systemd[1]: Started libpod-conmon-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope.
Feb 25 13:29:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:38 compute-0 podman[403378]: 2026-02-25 13:29:38.127113612 +0000 UTC m=+0.358475578 container init 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 25 13:29:38 compute-0 podman[403378]: 2026-02-25 13:29:38.136440395 +0000 UTC m=+0.367802311 container start 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 13:29:38 compute-0 podman[403378]: 2026-02-25 13:29:38.14087396 +0000 UTC m=+0.372235936 container attach 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:29:38 compute-0 vigorous_poitras[403394]: 167 167
Feb 25 13:29:38 compute-0 systemd[1]: libpod-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope: Deactivated successfully.
Feb 25 13:29:38 compute-0 podman[403378]: 2026-02-25 13:29:38.142745723 +0000 UTC m=+0.374107639 container died 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a1fb3cca6948e7e1704d596cf7fcdf1fd5a43a043aa1c58f69a441de1c7da0-merged.mount: Deactivated successfully.
Feb 25 13:29:38 compute-0 podman[403378]: 2026-02-25 13:29:38.23017183 +0000 UTC m=+0.461533746 container remove 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:29:38 compute-0 systemd[1]: libpod-conmon-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope: Deactivated successfully.
Feb 25 13:29:38 compute-0 podman[403419]: 2026-02-25 13:29:38.401447974 +0000 UTC m=+0.064696487 container create ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:29:38 compute-0 systemd[1]: Started libpod-conmon-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope.
Feb 25 13:29:38 compute-0 podman[403419]: 2026-02-25 13:29:38.373961949 +0000 UTC m=+0.037210532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:29:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:29:38 compute-0 podman[403419]: 2026-02-25 13:29:38.498181814 +0000 UTC m=+0.161430327 container init ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:29:38 compute-0 podman[403419]: 2026-02-25 13:29:38.507059625 +0000 UTC m=+0.170308138 container start ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:29:38 compute-0 podman[403419]: 2026-02-25 13:29:38.511230383 +0000 UTC m=+0.174478876 container attach ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:29:39 compute-0 ceph-mon[76335]: pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:39 compute-0 lvm[403514]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:29:39 compute-0 lvm[403514]: VG ceph_vg0 finished
Feb 25 13:29:39 compute-0 lvm[403516]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:29:39 compute-0 lvm[403516]: VG ceph_vg1 finished
Feb 25 13:29:39 compute-0 lvm[403518]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:29:39 compute-0 lvm[403518]: VG ceph_vg2 finished
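
The three "is complete" / "finished" pairs are lvm's event-based autoactivation reacting to the loop devices: pvscan marks each PV online and activates the volume group once all of its PVs are present. A quick confirmation from userspace, assuming an lvm2 with JSON reporting:

    import json
    import subprocess

    # List the ceph VGs' LVs and their activation state via lvm's JSON report.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_active"],
        check=True, capture_output=True, text=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_active"])
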
Feb 25 13:29:39 compute-0 nova_compute[244014]: 2026-02-25 13:29:39.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:39 compute-0 elated_gauss[403435]: {}
Feb 25 13:29:39 compute-0 systemd[1]: libpod-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Deactivated successfully.
Feb 25 13:29:39 compute-0 podman[403419]: 2026-02-25 13:29:39.25806402 +0000 UTC m=+0.921312513 container died ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:29:39 compute-0 systemd[1]: libpod-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Consumed 1.011s CPU time.
Feb 25 13:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e-merged.mount: Deactivated successfully.
Feb 25 13:29:39 compute-0 podman[403419]: 2026-02-25 13:29:39.346382263 +0000 UTC m=+1.009630776 container remove ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:29:39 compute-0 systemd[1]: libpod-conmon-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Deactivated successfully.
Feb 25 13:29:39 compute-0 sudo[403339]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:29:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:29:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:39 compute-0 sudo[403534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:29:39 compute-0 sudo[403534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:29:39 compute-0 sudo[403534]: pam_unix(sudo:session): session closed for user root
Feb 25 13:29:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:29:41 compute-0 ceph-mon[76335]: pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:41 compute-0 nova_compute[244014]: 2026-02-25 13:29:41.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
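
Each pg_autoscaler line above is the same arithmetic: pg target = used_fraction * bias * PG budget, where the budget is mon_target_pg_per_osd times the OSD count (assuming the default of 100 and the three OSDs listed earlier, 300). That reproduces the logged targets exactly; the result is then quantized to a power of two and left at the current pg_num when the change would not clear the autoscaler's threshold. A worked check:

    # pg target = used_fraction * bias * budget; budget assumed 100 PGs/OSD * 3 OSDs.
    def pg_target(used_fraction, bias, budget=300):
        return used_fraction * bias * budget

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951... ('.mgr', stays at 1)
    print(pg_target(1.3916366864300228e-06, 4.0))  # 0.0016699640237... (meta pool, stays at 16)
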
Feb 25 13:29:43 compute-0 ceph-mon[76335]: pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:44 compute-0 nova_compute[244014]: 2026-02-25 13:29:44.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:45 compute-0 ceph-mon[76335]: pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:46 compute-0 ceph-mon[76335]: pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:46 compute-0 nova_compute[244014]: 2026-02-25 13:29:46.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:29:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:29:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:29:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:29:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:29:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
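
These paired handle_command/audit entries are the OpenStack client's periodic capacity poll (df plus the volumes pool quota) arriving at the monitor as mon_commands. A minimal sketch of issuing the same df query through the python rados binding, assuming python3-rados and read access to a client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # Same {"prefix":"df","format":"json"} command the audit channel records.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        df = json.loads(out)
        print(df["stats"]["total_avail_bytes"])  # ~59 GiB avail per the pgmap lines
    finally:
        cluster.shutdown()
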
Feb 25 13:29:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:48 compute-0 ceph-mon[76335]: pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:49 compute-0 nova_compute[244014]: 2026-02-25 13:29:49.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:50 compute-0 ceph-mon[76335]: pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:51 compute-0 nova_compute[244014]: 2026-02-25 13:29:51.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:52 compute-0 ceph-mon[76335]: pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:54 compute-0 nova_compute[244014]: 2026-02-25 13:29:54.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:54 compute-0 ceph-mon[76335]: pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:29:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:29:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:56 compute-0 nova_compute[244014]: 2026-02-25 13:29:56.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:56 compute-0 ceph-mon[76335]: pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:58 compute-0 ceph-mon[76335]: pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:29:59 compute-0 nova_compute[244014]: 2026-02-25 13:29:59.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:29:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:29:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:00 compute-0 ceph-mon[76335]: pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:30:00 compute-0 nova_compute[244014]: 2026-02-25 13:30:00.928 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:30:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:30:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440226851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.468 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.628 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.629 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.630 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.630 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.709 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:30:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:01 compute-0 nova_compute[244014]: 2026-02-25 13:30:01.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:02 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2440226851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:30:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:30:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322894189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:30:02 compute-0 nova_compute[244014]: 2026-02-25 13:30:02.306 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:30:02 compute-0 nova_compute[244014]: 2026-02-25 13:30:02.311 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:30:02 compute-0 nova_compute[244014]: 2026-02-25 13:30:02.327 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:30:02 compute-0 nova_compute[244014]: 2026-02-25 13:30:02.329 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:30:02 compute-0 nova_compute[244014]: 2026-02-25 13:30:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
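
The inventory dict in the report above is what bounds scheduling: for each resource class, placement allows usage up to (total - reserved) * allocation_ratio. With the logged numbers that works out to 32 schedulable vCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    # Placement capacity per resource class: (total - reserved) * allocation_ratio,
    # using the inventory nova reported for provider cb4dae98-....
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
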
Feb 25 13:30:03 compute-0 ceph-mon[76335]: pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1322894189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:30:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:04 compute-0 nova_compute[244014]: 2026-02-25 13:30:04.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:05 compute-0 ceph-mon[76335]: pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.330 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.349 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:06 compute-0 nova_compute[244014]: 2026-02-25 13:30:06.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:07 compute-0 ceph-mon[76335]: pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:07 compute-0 podman[403604]: 2026-02-25 13:30:07.704588586 +0000 UTC m=+0.047522752 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:30:07 compute-0 podman[403605]: 2026-02-25 13:30:07.74724512 +0000 UTC m=+0.085632388 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
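
The two health_status=healthy events come from podman's per-container healthcheck timers running the configured '/openstack/healthcheck' test shown in each container's config_data. The same check can be triggered by hand; a sketch, assuming a podman new enough to support healthchecks, run as the containers' owner:

    import subprocess

    # One-shot re-run of each container's configured healthcheck;
    # exit code 0 means healthy, matching the journal events above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")
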
Feb 25 13:30:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:09 compute-0 nova_compute[244014]: 2026-02-25 13:30:09.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:09 compute-0 ceph-mon[76335]: pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.451092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209451148, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1589, "num_deletes": 250, "total_data_size": 2555789, "memory_usage": 2590904, "flush_reason": "Manual Compaction"}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209490307, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1471334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67945, "largest_seqno": 69533, "table_properties": {"data_size": 1465965, "index_size": 2572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14067, "raw_average_key_size": 20, "raw_value_size": 1454104, "raw_average_value_size": 2138, "num_data_blocks": 118, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026040, "oldest_key_time": 1772026040, "file_creation_time": 1772026209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 39298 microseconds, and 6208 cpu microseconds.
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.490390) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1471334 bytes OK
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.490423) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506796) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506841) EVENT_LOG_v1 {"time_micros": 1772026209506831, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2548904, prev total WAL file size 2548904, number of live WAL files 2.
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.507735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303034' seq:0, type:0; will stop at (end)
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1436KB)], [161(10MB)]
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209507790, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12411166, "oldest_snapshot_seqno": -1}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8805 keys, 10105871 bytes, temperature: kUnknown
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209603763, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10105871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10051764, "index_size": 31018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 229301, "raw_average_key_size": 26, "raw_value_size": 9899227, "raw_average_value_size": 1124, "num_data_blocks": 1203, "num_entries": 8805, "num_filter_entries": 8805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.604119) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10105871 bytes
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.685421) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.2 rd, 105.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.4 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(15.3) write-amplify(6.9) OK, records in: 9239, records dropped: 434 output_compression: NoCompression
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.685452) EVENT_LOG_v1 {"time_micros": 1772026209685441, "job": 100, "event": "compaction_finished", "compaction_time_micros": 96070, "compaction_time_cpu_micros": 34020, "output_level": 6, "num_output_files": 1, "total_output_size": 10105871, "num_input_records": 9239, "num_output_records": 8805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209685809, "job": 100, "event": "table_file_deletion", "file_number": 163}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209686782, "job": 100, "event": "table_file_deletion", "file_number": 161}
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.507603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:11 compute-0 ceph-mon[76335]: pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:11 compute-0 nova_compute[244014]: 2026-02-25 13:30:11.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:11 compute-0 nova_compute[244014]: 2026-02-25 13:30:11.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:30:11 compute-0 nova_compute[244014]: 2026-02-25 13:30:11.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:13 compute-0 ceph-mon[76335]: pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:14 compute-0 nova_compute[244014]: 2026-02-25 13:30:14.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:15 compute-0 ceph-mon[76335]: pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:30:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 385 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516837350#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 25 13:30:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:15 compute-0 nova_compute[244014]: 2026-02-25 13:30:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:16 compute-0 nova_compute[244014]: 2026-02-25 13:30:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:16 compute-0 nova_compute[244014]: 2026-02-25 13:30:16.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:17 compute-0 ceph-mon[76335]: pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:19 compute-0 ceph-mon[76335]: pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:19 compute-0 nova_compute[244014]: 2026-02-25 13:30:19.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:30:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 347 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a354b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 25 13:30:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:20 compute-0 nova_compute[244014]: 2026-02-25 13:30:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:21 compute-0 ceph-mon[76335]: pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:21 compute-0 nova_compute[244014]: 2026-02-25 13:30:21.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:23 compute-0 ceph-mon[76335]: pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:24 compute-0 nova_compute[244014]: 2026-02-25 13:30:24.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:30:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.6 total, 600.0 interval
                                           Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.77 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 353 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.176       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.166       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df7a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 25 13:30:25 compute-0 ceph-mon[76335]: pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:25 compute-0 nova_compute[244014]: 2026-02-25 13:30:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:26 compute-0 nova_compute[244014]: 2026-02-25 13:30:26.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:30:27 compute-0 ceph-mon[76335]: pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:28 compute-0 ceph-mon[76335]: pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:29 compute-0 nova_compute[244014]: 2026-02-25 13:30:29.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:30 compute-0 ceph-mon[76335]: pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:30:31
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control']
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:30:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:31 compute-0 nova_compute[244014]: 2026-02-25 13:30:31.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:30:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:30:32 compute-0 nova_compute[244014]: 2026-02-25 13:30:32.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:30:32 compute-0 ceph-mon[76335]: pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:34 compute-0 nova_compute[244014]: 2026-02-25 13:30:34.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:34 compute-0 ceph-mon[76335]: pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:36 compute-0 ceph-mon[76335]: pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:36 compute-0 nova_compute[244014]: 2026-02-25 13:30:36.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:38 compute-0 podman[403649]: 2026-02-25 13:30:38.725492898 +0000 UTC m=+0.060498859 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:30:38 compute-0 podman[403650]: 2026-02-25 13:30:38.792107308 +0000 UTC m=+0.127873820 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:30:38 compute-0 ceph-mon[76335]: pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:39 compute-0 nova_compute[244014]: 2026-02-25 13:30:39.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:39 compute-0 sudo[403694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:30:39 compute-0 sudo[403694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:39 compute-0 sudo[403694]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:39 compute-0 sudo[403719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:30:39 compute-0 sudo[403719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:40 compute-0 sudo[403719]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:30:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:30:40 compute-0 sudo[403776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:30:40 compute-0 sudo[403776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:40 compute-0 sudo[403776]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:40 compute-0 sudo[403801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:30:40 compute-0 sudo[403801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.638811896 +0000 UTC m=+0.058278316 container create e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:30:40 compute-0 systemd[1]: Started libpod-conmon-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope.
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.611058022 +0000 UTC m=+0.030524462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.732610733 +0000 UTC m=+0.152077233 container init e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.741423922 +0000 UTC m=+0.160890382 container start e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:30:40 compute-0 distracted_ride[403855]: 167 167
Feb 25 13:30:40 compute-0 systemd[1]: libpod-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope: Deactivated successfully.
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.748024228 +0000 UTC m=+0.167490688 container attach e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.749616313 +0000 UTC m=+0.169082763 container died e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd65b05d06f03b3ddf2476bfacc68ffaf176cc77e3a22835a26a5dfe5f804ca2-merged.mount: Deactivated successfully.
Feb 25 13:30:40 compute-0 podman[403838]: 2026-02-25 13:30:40.815837292 +0000 UTC m=+0.235303712 container remove e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:30:40 compute-0 systemd[1]: libpod-conmon-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope: Deactivated successfully.
Feb 25 13:30:40 compute-0 ceph-mon[76335]: pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:30:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:30:40 compute-0 podman[403879]: 2026-02-25 13:30:40.971892896 +0000 UTC m=+0.051024591 container create 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:30:41 compute-0 systemd[1]: Started libpod-conmon-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope.
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:40.947206429 +0000 UTC m=+0.026338164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:41.085219754 +0000 UTC m=+0.164351479 container init 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:41.092428477 +0000 UTC m=+0.171560162 container start 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:41.096458631 +0000 UTC m=+0.175590316 container attach 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:30:41 compute-0 charming_burnell[403896]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:30:41 compute-0 charming_burnell[403896]: --> All data devices are unavailable
Feb 25 13:30:41 compute-0 systemd[1]: libpod-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope: Deactivated successfully.
Feb 25 13:30:41 compute-0 conmon[403896]: conmon 42ac099aa900e3aa866c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope/container/memory.events
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:41.557207274 +0000 UTC m=+0.636338999 container died 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd-merged.mount: Deactivated successfully.
Feb 25 13:30:41 compute-0 podman[403879]: 2026-02-25 13:30:41.633797676 +0000 UTC m=+0.712929371 container remove 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:30:41 compute-0 systemd[1]: libpod-conmon-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope: Deactivated successfully.
Feb 25 13:30:41 compute-0 sudo[403801]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:41 compute-0 sudo[403930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:30:41 compute-0 sudo[403930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:41 compute-0 sudo[403930]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:41 compute-0 sudo[403955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:30:41 compute-0 sudo[403955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:41 compute-0 nova_compute[244014]: 2026-02-25 13:30:41.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.10274097 +0000 UTC m=+0.040024970 container create 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:30:42 compute-0 systemd[1]: Started libpod-conmon-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope.
Feb 25 13:30:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.084144376 +0000 UTC m=+0.021428486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.19235641 +0000 UTC m=+0.129640510 container init 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.20016471 +0000 UTC m=+0.137448760 container start 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:30:42 compute-0 recursing_keller[404008]: 167 167
Feb 25 13:30:42 compute-0 systemd[1]: libpod-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope: Deactivated successfully.
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.206768586 +0000 UTC m=+0.144052686 container attach 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.207983931 +0000 UTC m=+0.145267981 container died 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf66153b4353a344cf7eb141d9076d2349dcd09d6f2c3316414e2013a6948d8-merged.mount: Deactivated successfully.
Feb 25 13:30:42 compute-0 podman[403991]: 2026-02-25 13:30:42.28342333 +0000 UTC m=+0.220707370 container remove 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:30:42 compute-0 systemd[1]: libpod-conmon-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope: Deactivated successfully.
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.4311884 +0000 UTC m=+0.043592131 container create c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:30:42 compute-0 systemd[1]: Started libpod-conmon-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope.
Feb 25 13:30:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.411090063 +0000 UTC m=+0.023493844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.512632719 +0000 UTC m=+0.125036430 container init c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.516431736 +0000 UTC m=+0.128835437 container start c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.520584123 +0000 UTC m=+0.132987824 container attach c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]: {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     "0": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "devices": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "/dev/loop3"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             ],
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_name": "ceph_lv0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_size": "21470642176",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "name": "ceph_lv0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "tags": {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_name": "ceph",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.crush_device_class": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.encrypted": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.objectstore": "bluestore",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_id": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.vdo": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.with_tpm": "0"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             },
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "vg_name": "ceph_vg0"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         }
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     ],
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     "1": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "devices": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "/dev/loop4"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             ],
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_name": "ceph_lv1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_size": "21470642176",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "name": "ceph_lv1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "tags": {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_name": "ceph",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.crush_device_class": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.encrypted": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.objectstore": "bluestore",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_id": "1",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.vdo": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.with_tpm": "0"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             },
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "vg_name": "ceph_vg1"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         }
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     ],
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     "2": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "devices": [
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "/dev/loop5"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             ],
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_name": "ceph_lv2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_size": "21470642176",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "name": "ceph_lv2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "tags": {
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.cluster_name": "ceph",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.crush_device_class": "",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.encrypted": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.objectstore": "bluestore",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osd_id": "2",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.vdo": "0",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:                 "ceph.with_tpm": "0"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             },
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "type": "block",
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:             "vg_name": "ceph_vg2"
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:         }
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]:     ]
Feb 25 13:30:42 compute-0 gifted_ritchie[404048]: }
Feb 25 13:30:42 compute-0 systemd[1]: libpod-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope: Deactivated successfully.
Feb 25 13:30:42 compute-0 podman[404031]: 2026-02-25 13:30:42.812155262 +0000 UTC m=+0.424559003 container died c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-98dc42163140e89944425e33b066c610034805847869a11f66918244649345be-merged.mount: Deactivated successfully.
Feb 25 13:30:43 compute-0 ceph-mon[76335]: pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:43 compute-0 podman[404031]: 2026-02-25 13:30:43.024972628 +0000 UTC m=+0.637376369 container remove c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:30:43 compute-0 sudo[403955]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:43 compute-0 systemd[1]: libpod-conmon-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope: Deactivated successfully.
Feb 25 13:30:43 compute-0 sudo[404069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:30:43 compute-0 sudo[404069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:43 compute-0 sudo[404069]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:43 compute-0 sudo[404094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:30:43 compute-0 sudo[404094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.605615665 +0000 UTC m=+0.114089370 container create 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.523800056 +0000 UTC m=+0.032273761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:43 compute-0 systemd[1]: Started libpod-conmon-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope.
Feb 25 13:30:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.706357018 +0000 UTC m=+0.214830783 container init 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.718379278 +0000 UTC m=+0.226852953 container start 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:30:43 compute-0 confident_jepsen[404147]: 167 167
Feb 25 13:30:43 compute-0 systemd[1]: libpod-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope: Deactivated successfully.
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.784390721 +0000 UTC m=+0.292864476 container attach 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.786328096 +0000 UTC m=+0.294801801 container died 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:30:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-92eabbc82def4bfbc2ff37e7eef6e6850ba2b955404b1ecb8fbd389a8b64e592-merged.mount: Deactivated successfully.
Feb 25 13:30:43 compute-0 podman[404131]: 2026-02-25 13:30:43.975476433 +0000 UTC m=+0.483950128 container remove 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:30:43 compute-0 systemd[1]: libpod-conmon-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope: Deactivated successfully.
Feb 25 13:30:44 compute-0 podman[404174]: 2026-02-25 13:30:44.171001781 +0000 UTC m=+0.061166237 container create e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:30:44 compute-0 systemd[1]: Started libpod-conmon-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope.
Feb 25 13:30:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:30:44 compute-0 podman[404174]: 2026-02-25 13:30:44.143958778 +0000 UTC m=+0.034123314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:30:44 compute-0 podman[404174]: 2026-02-25 13:30:44.243996781 +0000 UTC m=+0.134161267 container init e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:30:44 compute-0 podman[404174]: 2026-02-25 13:30:44.25173416 +0000 UTC m=+0.141898616 container start e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:30:44 compute-0 podman[404174]: 2026-02-25 13:30:44.260801396 +0000 UTC m=+0.150965852 container attach e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:30:44 compute-0 nova_compute[244014]: 2026-02-25 13:30:44.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:44 compute-0 lvm[404266]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:30:44 compute-0 lvm[404266]: VG ceph_vg0 finished
Feb 25 13:30:44 compute-0 lvm[404269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:30:44 compute-0 lvm[404269]: VG ceph_vg1 finished
Feb 25 13:30:44 compute-0 lvm[404271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:30:44 compute-0 lvm[404271]: VG ceph_vg2 finished
Feb 25 13:30:45 compute-0 elegant_williamson[404190]: {}
Feb 25 13:30:45 compute-0 systemd[1]: libpod-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Deactivated successfully.
Feb 25 13:30:45 compute-0 systemd[1]: libpod-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Consumed 1.096s CPU time.
Feb 25 13:30:45 compute-0 podman[404174]: 2026-02-25 13:30:45.050464522 +0000 UTC m=+0.940628978 container died e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:30:45 compute-0 ceph-mon[76335]: pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb-merged.mount: Deactivated successfully.
Feb 25 13:30:45 compute-0 podman[404174]: 2026-02-25 13:30:45.122302059 +0000 UTC m=+1.012466505 container remove e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:30:45 compute-0 systemd[1]: libpod-conmon-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Deactivated successfully.
Feb 25 13:30:45 compute-0 sudo[404094]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:30:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:30:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:45 compute-0 sudo[404287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:30:45 compute-0 sudo[404287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:30:45 compute-0 sudo[404287]: pam_unix(sudo:session): session closed for user root
Feb 25 13:30:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:30:46 compute-0 nova_compute[244014]: 2026-02-25 13:30:46.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:47 compute-0 ceph-mon[76335]: pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:30:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:30:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:30:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:30:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:30:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:30:49 compute-0 ceph-mon[76335]: pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:49 compute-0 nova_compute[244014]: 2026-02-25 13:30:49.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:51 compute-0 ceph-mon[76335]: pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:52 compute-0 nova_compute[244014]: 2026-02-25 13:30:52.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:53 compute-0 ceph-mon[76335]: pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:54 compute-0 nova_compute[244014]: 2026-02-25 13:30:54.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:30:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:30:55 compute-0 ceph-mon[76335]: pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:57 compute-0 nova_compute[244014]: 2026-02-25 13:30:57.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:57 compute-0 ceph-mon[76335]: pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:59 compute-0 nova_compute[244014]: 2026-02-25 13:30:59.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:30:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.464188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259464219, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 646, "num_deletes": 255, "total_data_size": 770294, "memory_usage": 783072, "flush_reason": "Manual Compaction"}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259473639, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 763599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69534, "largest_seqno": 70179, "table_properties": {"data_size": 760156, "index_size": 1350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7666, "raw_average_key_size": 18, "raw_value_size": 753224, "raw_average_value_size": 1841, "num_data_blocks": 60, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026211, "oldest_key_time": 1772026211, "file_creation_time": 1772026259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 9512 microseconds, and 2234 cpu microseconds.
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.473675) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 763599 bytes OK
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.473748) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478951) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478975) EVENT_LOG_v1 {"time_micros": 1772026259478968, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 766837, prev total WAL file size 767994, number of live WAL files 2.
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.479457) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303137' seq:72057594037927935, type:22 .. '6C6F676D0033323638' seq:0, type:0; will stop at (end)
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(745KB)], [164(9869KB)]
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259479483, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10869470, "oldest_snapshot_seqno": -1}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8693 keys, 10766512 bytes, temperature: kUnknown
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259525716, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10766512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10711761, "index_size": 31918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227881, "raw_average_key_size": 26, "raw_value_size": 10559830, "raw_average_value_size": 1214, "num_data_blocks": 1240, "num_entries": 8693, "num_filter_entries": 8693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.525965) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10766512 bytes
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.530805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 232.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(28.3) write-amplify(14.1) OK, records in: 9214, records dropped: 521 output_compression: NoCompression
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.530829) EVENT_LOG_v1 {"time_micros": 1772026259530818, "job": 102, "event": "compaction_finished", "compaction_time_micros": 46309, "compaction_time_cpu_micros": 20667, "output_level": 6, "num_output_files": 1, "total_output_size": 10766512, "num_input_records": 9214, "num_output_records": 8693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259531079, "job": 102, "event": "table_file_deletion", "file_number": 166}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259532585, "job": 102, "event": "table_file_deletion", "file_number": 164}
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.479399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:30:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:00 compute-0 ceph-mon[76335]: pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:31:00 compute-0 nova_compute[244014]: 2026-02-25 13:31:00.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:31:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:31:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356898532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:31:01 compute-0 nova_compute[244014]: 2026-02-25 13:31:01.546 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:31:01 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/356898532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:01 compute-0 nova_compute[244014]: 2026-02-25 13:31:01.699 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:31:01 compute-0 nova_compute[244014]: 2026-02-25 13:31:01.700 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:31:01 compute-0 nova_compute[244014]: 2026-02-25 13:31:01.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:31:01 compute-0 nova_compute[244014]: 2026-02-25 13:31:01.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:31:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.038 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.039 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.135 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.276 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.277 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.292 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.321 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.339 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:31:02 compute-0 ceph-mon[76335]: pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:31:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034938038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.855 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.862 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.878 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.881 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:31:02 compute-0 nova_compute[244014]: 2026-02-25 13:31:02.882 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:31:03 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1034938038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:31:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:04 compute-0 nova_compute[244014]: 2026-02-25 13:31:04.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.621949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264622031, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 315, "num_deletes": 251, "total_data_size": 101251, "memory_usage": 107176, "flush_reason": "Manual Compaction"}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Feb 25 13:31:04 compute-0 ceph-mon[76335]: pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264641028, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 100286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70180, "largest_seqno": 70494, "table_properties": {"data_size": 98250, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5195, "raw_average_key_size": 18, "raw_value_size": 94295, "raw_average_value_size": 334, "num_data_blocks": 9, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026259, "oldest_key_time": 1772026259, "file_creation_time": 1772026264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 19137 microseconds, and 1146 cpu microseconds.
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.641093) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 100286 bytes OK
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.641121) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645948) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645974) EVENT_LOG_v1 {"time_micros": 1772026264645966, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 99011, prev total WAL file size 99011, number of live WAL files 2.
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.646434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(97KB)], [167(10MB)]
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264646638, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10866798, "oldest_snapshot_seqno": -1}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8466 keys, 9092546 bytes, temperature: kUnknown
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264714305, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9092546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9040920, "index_size": 29369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223857, "raw_average_key_size": 26, "raw_value_size": 8894493, "raw_average_value_size": 1050, "num_data_blocks": 1124, "num_entries": 8466, "num_filter_entries": 8466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.714580) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9092546 bytes
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.716077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.6 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(199.0) write-amplify(90.7) OK, records in: 8975, records dropped: 509 output_compression: NoCompression
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.716108) EVENT_LOG_v1 {"time_micros": 1772026264716094, "job": 104, "event": "compaction_finished", "compaction_time_micros": 67661, "compaction_time_cpu_micros": 28549, "output_level": 6, "num_output_files": 1, "total_output_size": 9092546, "num_input_records": 8975, "num_output_records": 8466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264716260, "job": 104, "event": "table_file_deletion", "file_number": 169}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264718136, "job": 104, "event": "table_file_deletion", "file_number": 167}
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.646335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:04 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:31:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:06 compute-0 ceph-mon[76335]: pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:06 compute-0 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:06 compute-0 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:31:06 compute-0 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:31:06 compute-0 nova_compute[244014]: 2026-02-25 13:31:06.975 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:31:07 compute-0 nova_compute[244014]: 2026-02-25 13:31:07.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:07 compute-0 nova_compute[244014]: 2026-02-25 13:31:07.964 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:08 compute-0 ceph-mon[76335]: pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:09 compute-0 nova_compute[244014]: 2026-02-25 13:31:09.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:09 compute-0 podman[404356]: 2026-02-25 13:31:09.776718862 +0000 UTC m=+0.104692275 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 25 13:31:09 compute-0 podman[404357]: 2026-02-25 13:31:09.78124922 +0000 UTC m=+0.109412549 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 13:31:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:10 compute-0 ceph-mon[76335]: pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:12 compute-0 nova_compute[244014]: 2026-02-25 13:31:12.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:12 compute-0 ceph-mon[76335]: pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:12 compute-0 nova_compute[244014]: 2026-02-25 13:31:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:12 compute-0 nova_compute[244014]: 2026-02-25 13:31:12.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:31:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:14 compute-0 nova_compute[244014]: 2026-02-25 13:31:14.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:14 compute-0 ceph-mon[76335]: pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:15 compute-0 nova_compute[244014]: 2026-02-25 13:31:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:16 compute-0 ceph-mon[76335]: pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:17 compute-0 nova_compute[244014]: 2026-02-25 13:31:17.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:17 compute-0 nova_compute[244014]: 2026-02-25 13:31:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:18 compute-0 ceph-mon[76335]: pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:19 compute-0 nova_compute[244014]: 2026-02-25 13:31:19.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:20 compute-0 ceph-mon[76335]: pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Feb 25 13:31:21 compute-0 nova_compute[244014]: 2026-02-25 13:31:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:22 compute-0 nova_compute[244014]: 2026-02-25 13:31:22.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:22 compute-0 ceph-mon[76335]: pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Feb 25 13:31:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 13:31:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:24 compute-0 nova_compute[244014]: 2026-02-25 13:31:24.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:24 compute-0 ceph-mon[76335]: pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 13:31:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 13:31:25 compute-0 nova_compute[244014]: 2026-02-25 13:31:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:31:27 compute-0 ceph-mon[76335]: pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 13:31:27 compute-0 nova_compute[244014]: 2026-02-25 13:31:27.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:29 compute-0 ceph-mon[76335]: pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:29 compute-0 nova_compute[244014]: 2026-02-25 13:31:29.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:31 compute-0 ceph-mon[76335]: pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:31:31
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes']
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:31:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:32 compute-0 nova_compute[244014]: 2026-02-25 13:31:32.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:31:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:31:33 compute-0 ceph-mon[76335]: pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:31:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:31:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:34 compute-0 nova_compute[244014]: 2026-02-25 13:31:34.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:35 compute-0 ceph-mon[76335]: pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:31:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 13:31:37 compute-0 nova_compute[244014]: 2026-02-25 13:31:37.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:37 compute-0 ceph-mon[76335]: pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 13:31:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 13:31:39 compute-0 ceph-mon[76335]: pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 13:31:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:39 compute-0 nova_compute[244014]: 2026-02-25 13:31:39.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:40 compute-0 podman[404403]: 2026-02-25 13:31:40.719703793 +0000 UTC m=+0.057492123 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:31:40 compute-0 podman[404404]: 2026-02-25 13:31:40.796544212 +0000 UTC m=+0.126630855 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:31:41 compute-0 ceph-mon[76335]: pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:42 compute-0 nova_compute[244014]: 2026-02-25 13:31:42.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:43 compute-0 ceph-mon[76335]: pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:31:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:44 compute-0 nova_compute[244014]: 2026-02-25 13:31:44.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:45 compute-0 ceph-mon[76335]: pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:45 compute-0 sudo[404449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:31:45 compute-0 sudo[404449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:45 compute-0 sudo[404449]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:45 compute-0 auditd[725]: Audit daemon rotating log files
Feb 25 13:31:45 compute-0 sudo[404474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:31:45 compute-0 sudo[404474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:45 compute-0 sudo[404474]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:31:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:31:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:31:45 compute-0 sudo[404531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:31:45 compute-0 sudo[404531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:45 compute-0 sudo[404531]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:46 compute-0 sudo[404556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:31:46 compute-0 sudo[404556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:31:46 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.303175012 +0000 UTC m=+0.057681339 container create c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:31:46 compute-0 systemd[1]: Started libpod-conmon-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope.
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.278321021 +0000 UTC m=+0.032827378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.394050107 +0000 UTC m=+0.148556404 container init c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.39949659 +0000 UTC m=+0.154002917 container start c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.403162144 +0000 UTC m=+0.157668451 container attach c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:31:46 compute-0 zen_darwin[404609]: 167 167
Feb 25 13:31:46 compute-0 systemd[1]: libpod-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope: Deactivated successfully.
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.40623475 +0000 UTC m=+0.160741067 container died c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-60537f04877f9e70e0d30c2243bffc238f7e0c7eebad064c654f920fb1f98f84-merged.mount: Deactivated successfully.
Feb 25 13:31:46 compute-0 podman[404593]: 2026-02-25 13:31:46.454102311 +0000 UTC m=+0.208608628 container remove c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:31:46 compute-0 systemd[1]: libpod-conmon-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope: Deactivated successfully.
Feb 25 13:31:46 compute-0 podman[404634]: 2026-02-25 13:31:46.624026397 +0000 UTC m=+0.055222979 container create e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:31:46 compute-0 systemd[1]: Started libpod-conmon-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope.
Feb 25 13:31:46 compute-0 podman[404634]: 2026-02-25 13:31:46.595234715 +0000 UTC m=+0.026431347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:46 compute-0 podman[404634]: 2026-02-25 13:31:46.716380874 +0000 UTC m=+0.147577456 container init e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:31:46 compute-0 podman[404634]: 2026-02-25 13:31:46.724159373 +0000 UTC m=+0.155355945 container start e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:31:46 compute-0 podman[404634]: 2026-02-25 13:31:46.730591645 +0000 UTC m=+0.161788227 container attach e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:31:47 compute-0 affectionate_euler[404651]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:31:47 compute-0 affectionate_euler[404651]: --> All data devices are unavailable
Feb 25 13:31:47 compute-0 nova_compute[244014]: 2026-02-25 13:31:47.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:47 compute-0 systemd[1]: libpod-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope: Deactivated successfully.
Feb 25 13:31:47 compute-0 ceph-mon[76335]: pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:47 compute-0 podman[404671]: 2026-02-25 13:31:47.266265823 +0000 UTC m=+0.032188800 container died e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03-merged.mount: Deactivated successfully.
Feb 25 13:31:47 compute-0 podman[404671]: 2026-02-25 13:31:47.319206927 +0000 UTC m=+0.085129904 container remove e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:31:47 compute-0 systemd[1]: libpod-conmon-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope: Deactivated successfully.
Feb 25 13:31:47 compute-0 sudo[404556]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:47 compute-0 sudo[404687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:31:47 compute-0 sudo[404687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:47 compute-0 sudo[404687]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:47 compute-0 sudo[404712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:31:47 compute-0 sudo[404712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.800708956 +0000 UTC m=+0.048834829 container create 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:31:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:47 compute-0 systemd[1]: Started libpod-conmon-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope.
Feb 25 13:31:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.783000336 +0000 UTC m=+0.031126259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.885464758 +0000 UTC m=+0.133590761 container init 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.894257376 +0000 UTC m=+0.142383279 container start 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:31:47 compute-0 pedantic_leavitt[404766]: 167 167
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.897860458 +0000 UTC m=+0.145986371 container attach 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:31:47 compute-0 systemd[1]: libpod-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope: Deactivated successfully.
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.898509436 +0000 UTC m=+0.146635349 container died 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca7c6c60f9573baa1e6e964031b22639186cfe9e2c7742e401159599a280632e-merged.mount: Deactivated successfully.
Feb 25 13:31:47 compute-0 podman[404749]: 2026-02-25 13:31:47.94646048 +0000 UTC m=+0.194586383 container remove 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:31:47 compute-0 systemd[1]: libpod-conmon-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope: Deactivated successfully.
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.109545012 +0000 UTC m=+0.047373448 container create 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:31:48 compute-0 systemd[1]: Started libpod-conmon-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope.
Feb 25 13:31:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.085559935 +0000 UTC m=+0.023388421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.191428533 +0000 UTC m=+0.129256999 container init 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.199400908 +0000 UTC m=+0.137229384 container start 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.202932218 +0000 UTC m=+0.140760684 container attach 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]: {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     "0": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "devices": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "/dev/loop3"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             ],
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_name": "ceph_lv0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_size": "21470642176",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "name": "ceph_lv0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "tags": {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_name": "ceph",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.crush_device_class": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.encrypted": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.objectstore": "bluestore",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_id": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.vdo": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.with_tpm": "0"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             },
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "vg_name": "ceph_vg0"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         }
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     ],
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     "1": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "devices": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "/dev/loop4"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             ],
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_name": "ceph_lv1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_size": "21470642176",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "name": "ceph_lv1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "tags": {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_name": "ceph",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.crush_device_class": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.encrypted": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.objectstore": "bluestore",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_id": "1",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.vdo": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.with_tpm": "0"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             },
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "vg_name": "ceph_vg1"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         }
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     ],
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     "2": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "devices": [
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "/dev/loop5"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             ],
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_name": "ceph_lv2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_size": "21470642176",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "name": "ceph_lv2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "tags": {
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.cluster_name": "ceph",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.crush_device_class": "",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.encrypted": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.objectstore": "bluestore",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osd_id": "2",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.vdo": "0",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:                 "ceph.with_tpm": "0"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             },
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "type": "block",
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:             "vg_name": "ceph_vg2"
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:         }
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]:     ]
Feb 25 13:31:48 compute-0 hopeful_driscoll[404806]: }
Feb 25 13:31:48 compute-0 systemd[1]: libpod-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope: Deactivated successfully.
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.510280961 +0000 UTC m=+0.448109427 container died 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641-merged.mount: Deactivated successfully.
Feb 25 13:31:48 compute-0 podman[404790]: 2026-02-25 13:31:48.548929492 +0000 UTC m=+0.486757928 container remove 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:31:48 compute-0 systemd[1]: libpod-conmon-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope: Deactivated successfully.
Feb 25 13:31:48 compute-0 sudo[404712]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:48 compute-0 sudo[404825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:31:48 compute-0 sudo[404825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:48 compute-0 sudo[404825]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:48 compute-0 sudo[404850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:31:48 compute-0 sudo[404850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.006020302 +0000 UTC m=+0.064961064 container create 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:31:49 compute-0 systemd[1]: Started libpod-conmon-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope.
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:48.977617721 +0000 UTC m=+0.036558543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.102250757 +0000 UTC m=+0.161191559 container init 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.112364013 +0000 UTC m=+0.171304745 container start 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.115780079 +0000 UTC m=+0.174720901 container attach 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:31:49 compute-0 vigilant_kilby[404903]: 167 167
Feb 25 13:31:49 compute-0 systemd[1]: libpod-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope: Deactivated successfully.
Feb 25 13:31:49 compute-0 conmon[404903]: conmon 587e0e27246aee1a898f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope/container/memory.events
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.119061362 +0000 UTC m=+0.178002124 container died 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-223d78bc8b2f6235e874f56f9f61bca2c49537fdfc1a647490ecd97fa0a317b3-merged.mount: Deactivated successfully.
Feb 25 13:31:49 compute-0 podman[404887]: 2026-02-25 13:31:49.167871879 +0000 UTC m=+0.226812641 container remove 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:31:49 compute-0 systemd[1]: libpod-conmon-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope: Deactivated successfully.
Feb 25 13:31:49 compute-0 ceph-mon[76335]: pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:49 compute-0 podman[404927]: 2026-02-25 13:31:49.374666885 +0000 UTC m=+0.057949206 container create 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:31:49 compute-0 systemd[1]: Started libpod-conmon-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope.
Feb 25 13:31:49 compute-0 podman[404927]: 2026-02-25 13:31:49.352991814 +0000 UTC m=+0.036274125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:31:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:31:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:49 compute-0 podman[404927]: 2026-02-25 13:31:49.491958606 +0000 UTC m=+0.175240927 container init 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:31:49 compute-0 podman[404927]: 2026-02-25 13:31:49.500218259 +0000 UTC m=+0.183500580 container start 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:31:49 compute-0 podman[404927]: 2026-02-25 13:31:49.503512082 +0000 UTC m=+0.186794403 container attach 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:31:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:31:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:31:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:31:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:31:49 compute-0 nova_compute[244014]: 2026-02-25 13:31:49.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:50 compute-0 lvm[405024]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:31:50 compute-0 lvm[405024]: VG ceph_vg1 finished
Feb 25 13:31:50 compute-0 lvm[405025]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:31:50 compute-0 lvm[405025]: VG ceph_vg2 finished
Feb 25 13:31:50 compute-0 lvm[405023]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:31:50 compute-0 lvm[405023]: VG ceph_vg0 finished
Feb 25 13:31:50 compute-0 thirsty_hawking[404944]: {}
Feb 25 13:31:50 compute-0 systemd[1]: libpod-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Deactivated successfully.
Feb 25 13:31:50 compute-0 systemd[1]: libpod-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Consumed 1.128s CPU time.
Feb 25 13:31:50 compute-0 podman[404927]: 2026-02-25 13:31:50.258628443 +0000 UTC m=+0.941910754 container died 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:31:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:31:50 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b-merged.mount: Deactivated successfully.
Feb 25 13:31:50 compute-0 podman[404927]: 2026-02-25 13:31:50.316053103 +0000 UTC m=+0.999335414 container remove 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:31:50 compute-0 systemd[1]: libpod-conmon-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Deactivated successfully.
Feb 25 13:31:50 compute-0 sudo[404850]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:31:50 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:31:50 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:50 compute-0 sudo[405039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:31:50 compute-0 sudo[405039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:31:50 compute-0 sudo[405039]: pam_unix(sudo:session): session closed for user root
Feb 25 13:31:51 compute-0 ceph-mon[76335]: pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:31:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:52 compute-0 nova_compute[244014]: 2026-02-25 13:31:52.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:53 compute-0 ceph-mon[76335]: pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:54 compute-0 nova_compute[244014]: 2026-02-25 13:31:54.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.074 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:31:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:31:55 compute-0 ceph-mon[76335]: pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:57 compute-0 nova_compute[244014]: 2026-02-25 13:31:57.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:57 compute-0 ceph-mon[76335]: pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:59 compute-0 ceph-mon[76335]: pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:31:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:31:59 compute-0 nova_compute[244014]: 2026-02-25 13:31:59.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:31:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:01 compute-0 ceph-mon[76335]: pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:01 compute-0 nova_compute[244014]: 2026-02-25 13:32:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:32:02 compute-0 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:32:03 compute-0 ceph-mon[76335]: pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:32:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170130205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.462 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.649 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.651 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.804 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.805 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:32:03 compute-0 nova_compute[244014]: 2026-02-25 13:32:03.821 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:32:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:03 compute-0 sshd-session[405084]: Received disconnect from 45.148.10.152 port 18464:11:  [preauth]
Feb 25 13:32:03 compute-0 sshd-session[405084]: Disconnected from authenticating user root 45.148.10.152 port 18464 [preauth]
Feb 25 13:32:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:32:04 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236991576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.354 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.360 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.374 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.376 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:32:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/170130205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:32:04 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/236991576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:32:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:04 compute-0 nova_compute[244014]: 2026-02-25 13:32:04.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:05 compute-0 ceph-mon[76335]: pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:07 compute-0 nova_compute[244014]: 2026-02-25 13:32:07.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:07 compute-0 ceph-mon[76335]: pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:08 compute-0 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:08 compute-0 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:32:08 compute-0 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:32:08 compute-0 nova_compute[244014]: 2026-02-25 13:32:08.400 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:32:09 compute-0 ceph-mon[76335]: pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:09 compute-0 nova_compute[244014]: 2026-02-25 13:32:09.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:10 compute-0 nova_compute[244014]: 2026-02-25 13:32:10.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:10 compute-0 ceph-mon[76335]: pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:11 compute-0 podman[405111]: 2026-02-25 13:32:11.723408075 +0000 UTC m=+0.065130319 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:11 compute-0 podman[405112]: 2026-02-25 13:32:11.791790455 +0000 UTC m=+0.129298480 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 13:32:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:11 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 13:32:12 compute-0 nova_compute[244014]: 2026-02-25 13:32:12.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:12 compute-0 nova_compute[244014]: 2026-02-25 13:32:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:12 compute-0 nova_compute[244014]: 2026-02-25 13:32:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:32:12 compute-0 ceph-mon[76335]: pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:14 compute-0 nova_compute[244014]: 2026-02-25 13:32:14.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:14 compute-0 ceph-mon[76335]: pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:16 compute-0 ceph-mon[76335]: pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:17 compute-0 nova_compute[244014]: 2026-02-25 13:32:17.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:17 compute-0 nova_compute[244014]: 2026-02-25 13:32:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:18 compute-0 ceph-mon[76335]: pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:19 compute-0 nova_compute[244014]: 2026-02-25 13:32:19.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:19 compute-0 nova_compute[244014]: 2026-02-25 13:32:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:20 compute-0 ceph-mon[76335]: pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:22 compute-0 nova_compute[244014]: 2026-02-25 13:32:22.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:22 compute-0 ceph-mon[76335]: pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:23 compute-0 nova_compute[244014]: 2026-02-25 13:32:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:24 compute-0 nova_compute[244014]: 2026-02-25 13:32:24.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:24 compute-0 ceph-mon[76335]: pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:26 compute-0 nova_compute[244014]: 2026-02-25 13:32:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:27 compute-0 ceph-mon[76335]: pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:27 compute-0 nova_compute[244014]: 2026-02-25 13:32:27.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:29 compute-0 ceph-mon[76335]: pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:29 compute-0 nova_compute[244014]: 2026-02-25 13:32:29.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:31 compute-0 ceph-mon[76335]: pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:32:31
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.control']
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:32:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:32 compute-0 nova_compute[244014]: 2026-02-25 13:32:32.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:32:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:32:33 compute-0 ceph-mon[76335]: pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:34 compute-0 nova_compute[244014]: 2026-02-25 13:32:34.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:35 compute-0 ceph-mon[76335]: pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:35 compute-0 nova_compute[244014]: 2026-02-25 13:32:35.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:32:37 compute-0 ceph-mon[76335]: pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:37 compute-0 nova_compute[244014]: 2026-02-25 13:32:37.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:39 compute-0 ceph-mon[76335]: pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:39 compute-0 nova_compute[244014]: 2026-02-25 13:32:39.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:41 compute-0 ceph-mon[76335]: pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:42 compute-0 nova_compute[244014]: 2026-02-25 13:32:42.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:42 compute-0 podman[405158]: 2026-02-25 13:32:42.731162186 +0000 UTC m=+0.068117824 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 25 13:32:42 compute-0 podman[405159]: 2026-02-25 13:32:42.749508014 +0000 UTC m=+0.083929470 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:32:43 compute-0 ceph-mon[76335]: pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:32:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:44 compute-0 nova_compute[244014]: 2026-02-25 13:32:44.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:45 compute-0 ceph-mon[76335]: pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:47 compute-0 ceph-mon[76335]: pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:47 compute-0 nova_compute[244014]: 2026-02-25 13:32:47.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:32:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:32:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:32:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:32:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:32:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:32:49 compute-0 ceph-mon[76335]: pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:49 compute-0 nova_compute[244014]: 2026-02-25 13:32:49.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:50 compute-0 sudo[405204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:32:50 compute-0 sudo[405204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:50 compute-0 sudo[405204]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:50 compute-0 sudo[405229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:32:50 compute-0 sudo[405229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:51 compute-0 sudo[405229]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:32:51 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:32:51 compute-0 sudo[405286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:32:51 compute-0 sudo[405286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:51 compute-0 sudo[405286]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:51 compute-0 ceph-mon[76335]: pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:32:51 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:32:51 compute-0 sudo[405311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:32:51 compute-0 sudo[405311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.461313871 +0000 UTC m=+0.046100983 container create 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:32:51 compute-0 systemd[1]: Started libpod-conmon-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope.
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.437987042 +0000 UTC m=+0.022774144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.557943228 +0000 UTC m=+0.142730400 container init 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.570501352 +0000 UTC m=+0.155288464 container start 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.577974623 +0000 UTC m=+0.162761725 container attach 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:32:51 compute-0 systemd[1]: libpod-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope: Deactivated successfully.
Feb 25 13:32:51 compute-0 sharp_leakey[405364]: 167 167
Feb 25 13:32:51 compute-0 conmon[405364]: conmon 9abe4c86096b7858aa25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope/container/memory.events
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.580953427 +0000 UTC m=+0.165740539 container died 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fda520e2c9d4bd3796ad15a8df085ae483a569d25881a0caf2e41620549e747-merged.mount: Deactivated successfully.
Feb 25 13:32:51 compute-0 podman[405348]: 2026-02-25 13:32:51.650011686 +0000 UTC m=+0.234798798 container remove 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:32:51 compute-0 systemd[1]: libpod-conmon-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope: Deactivated successfully.
Feb 25 13:32:51 compute-0 podman[405389]: 2026-02-25 13:32:51.785463309 +0000 UTC m=+0.048088928 container create c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:32:51 compute-0 systemd[1]: Started libpod-conmon-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope.
Feb 25 13:32:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:51 compute-0 podman[405389]: 2026-02-25 13:32:51.760727211 +0000 UTC m=+0.023352840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:51 compute-0 podman[405389]: 2026-02-25 13:32:51.874130021 +0000 UTC m=+0.136755610 container init c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:32:51 compute-0 podman[405389]: 2026-02-25 13:32:51.8854206 +0000 UTC m=+0.148046189 container start c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:32:51 compute-0 podman[405389]: 2026-02-25 13:32:51.888869007 +0000 UTC m=+0.151494686 container attach c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:32:52 compute-0 charming_mirzakhani[405405]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:32:52 compute-0 charming_mirzakhani[405405]: --> All data devices are unavailable
Feb 25 13:32:52 compute-0 systemd[1]: libpod-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope: Deactivated successfully.
Feb 25 13:32:52 compute-0 podman[405389]: 2026-02-25 13:32:52.350848676 +0000 UTC m=+0.613474255 container died c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e-merged.mount: Deactivated successfully.
Feb 25 13:32:52 compute-0 podman[405389]: 2026-02-25 13:32:52.396964927 +0000 UTC m=+0.659590516 container remove c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:32:52 compute-0 systemd[1]: libpod-conmon-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope: Deactivated successfully.
Feb 25 13:32:52 compute-0 sudo[405311]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:52 compute-0 nova_compute[244014]: 2026-02-25 13:32:52.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:52 compute-0 sudo[405437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:32:52 compute-0 sudo[405437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:52 compute-0 sudo[405437]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:52 compute-0 sudo[405462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:32:52 compute-0 sudo[405462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.849985151 +0000 UTC m=+0.041922094 container create ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:32:52 compute-0 systemd[1]: Started libpod-conmon-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope.
Feb 25 13:32:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.831680125 +0000 UTC m=+0.023617098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.93143183 +0000 UTC m=+0.123368843 container init ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.938134359 +0000 UTC m=+0.130071312 container start ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:32:52 compute-0 infallible_leavitt[405515]: 167 167
Feb 25 13:32:52 compute-0 systemd[1]: libpod-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope: Deactivated successfully.
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.94206443 +0000 UTC m=+0.134001423 container attach ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.942391759 +0000 UTC m=+0.134328702 container died ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-80fb7a27b11c05fdd97126bf2f71fbe5d9b63022cdc10d947577ea33fb2e53ab-merged.mount: Deactivated successfully.
Feb 25 13:32:52 compute-0 podman[405498]: 2026-02-25 13:32:52.983915141 +0000 UTC m=+0.175852074 container remove ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:32:52 compute-0 systemd[1]: libpod-conmon-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope: Deactivated successfully.
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.132363721 +0000 UTC m=+0.044768854 container create 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:32:53 compute-0 systemd[1]: Started libpod-conmon-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope.
Feb 25 13:32:53 compute-0 ceph-mon[76335]: pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.113599201 +0000 UTC m=+0.026004384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.22482551 +0000 UTC m=+0.137230703 container init 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.231010295 +0000 UTC m=+0.143415438 container start 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.23472831 +0000 UTC m=+0.147133473 container attach 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]: {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     "0": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "devices": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "/dev/loop3"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             ],
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_name": "ceph_lv0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_size": "21470642176",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "name": "ceph_lv0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "tags": {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_name": "ceph",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.crush_device_class": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.encrypted": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.objectstore": "bluestore",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_id": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.vdo": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.with_tpm": "0"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             },
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "vg_name": "ceph_vg0"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         }
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     ],
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     "1": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "devices": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "/dev/loop4"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             ],
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_name": "ceph_lv1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_size": "21470642176",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "name": "ceph_lv1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "tags": {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_name": "ceph",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.crush_device_class": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.encrypted": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.objectstore": "bluestore",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_id": "1",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.vdo": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.with_tpm": "0"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             },
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "vg_name": "ceph_vg1"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         }
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     ],
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     "2": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "devices": [
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "/dev/loop5"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             ],
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_name": "ceph_lv2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_size": "21470642176",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "name": "ceph_lv2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "tags": {
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.cluster_name": "ceph",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.crush_device_class": "",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.encrypted": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.objectstore": "bluestore",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osd_id": "2",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.vdo": "0",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:                 "ceph.with_tpm": "0"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             },
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "type": "block",
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:             "vg_name": "ceph_vg2"
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:         }
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]:     ]
Feb 25 13:32:53 compute-0 lucid_khayyam[405556]: }
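
The JSON that lucid_khayyam printed above is "ceph-volume lvm list --format json" output: one key per OSD id, one logical volume per OSD, with the same metadata carried both as the flat comma-separated lv_tags string and as the parsed tags object. A minimal sketch of extracting the fields that matter, assuming the container's stdout has been saved to a hypothetical lvm_list.json:

    import json

    # Hypothetical capture of the container stdout shown above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

    # The flat lv_tags string ("k=v,k=v,...") parses to the same mapping:
    flat = dict(kv.split("=", 1) for kv in lvs[0]["lv_tags"].split(","))

Run against this host's output it reports three BlueStore OSDs (osd.0/1/2) on /dev/loop3, /dev/loop4 and /dev/loop5, all in cluster 8ac33163-6221-5d58-9a39-8b6933fe7762.
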
Feb 25 13:32:53 compute-0 systemd[1]: libpod-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope: Deactivated successfully.
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.542095405 +0000 UTC m=+0.454500578 container died 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a-merged.mount: Deactivated successfully.
Feb 25 13:32:53 compute-0 podman[405540]: 2026-02-25 13:32:53.591791307 +0000 UTC m=+0.504196450 container remove 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:32:53 compute-0 systemd[1]: libpod-conmon-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope: Deactivated successfully.
Feb 25 13:32:53 compute-0 sudo[405462]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:53 compute-0 sudo[405580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:32:53 compute-0 sudo[405580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:53 compute-0 sudo[405580]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:53 compute-0 sudo[405605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:32:53 compute-0 sudo[405605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.05783969 +0000 UTC m=+0.045023091 container create 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:32:54 compute-0 systemd[1]: Started libpod-conmon-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope.
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.037056334 +0000 UTC m=+0.024239765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.149220539 +0000 UTC m=+0.136403990 container init 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.157850173 +0000 UTC m=+0.145033584 container start 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:32:54 compute-0 nice_greider[405659]: 167 167
Feb 25 13:32:54 compute-0 systemd[1]: libpod-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope: Deactivated successfully.
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.162220416 +0000 UTC m=+0.149403827 container attach 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.163137482 +0000 UTC m=+0.150320873 container died 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7917bb8e6d2bd63c5aeee4a380212ff24f990bf8c50b41ca725b6f0478c66af-merged.mount: Deactivated successfully.
Feb 25 13:32:54 compute-0 podman[405643]: 2026-02-25 13:32:54.203873302 +0000 UTC m=+0.191056693 container remove 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:32:54 compute-0 systemd[1]: libpod-conmon-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope: Deactivated successfully.
Feb 25 13:32:54 compute-0 podman[405685]: 2026-02-25 13:32:54.368377115 +0000 UTC m=+0.048355336 container create 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:32:54 compute-0 systemd[1]: Started libpod-conmon-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope.
Feb 25 13:32:54 compute-0 podman[405685]: 2026-02-25 13:32:54.343285136 +0000 UTC m=+0.023263407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:32:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:32:54 compute-0 podman[405685]: 2026-02-25 13:32:54.485553622 +0000 UTC m=+0.165531893 container init 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:32:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:54 compute-0 podman[405685]: 2026-02-25 13:32:54.493846946 +0000 UTC m=+0.173825167 container start 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:32:54 compute-0 podman[405685]: 2026-02-25 13:32:54.497369455 +0000 UTC m=+0.177347726 container attach 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:32:54 compute-0 nova_compute[244014]: 2026-02-25 13:32:54.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:32:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:32:55 compute-0 lvm[405780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:32:55 compute-0 lvm[405780]: VG ceph_vg0 finished
Feb 25 13:32:55 compute-0 lvm[405781]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:32:55 compute-0 lvm[405781]: VG ceph_vg1 finished
Feb 25 13:32:55 compute-0 lvm[405783]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:32:55 compute-0 lvm[405783]: VG ceph_vg2 finished
Feb 25 13:32:55 compute-0 ceph-mon[76335]: pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:55 compute-0 vibrant_robinson[405702]: {}
Feb 25 13:32:55 compute-0 systemd[1]: libpod-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Deactivated successfully.
Feb 25 13:32:55 compute-0 systemd[1]: libpod-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Consumed 1.117s CPU time.
Feb 25 13:32:55 compute-0 podman[405685]: 2026-02-25 13:32:55.302506668 +0000 UTC m=+0.982484849 container died 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e-merged.mount: Deactivated successfully.
Feb 25 13:32:55 compute-0 podman[405685]: 2026-02-25 13:32:55.353631051 +0000 UTC m=+1.033609242 container remove 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:32:55 compute-0 systemd[1]: libpod-conmon-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Deactivated successfully.
Feb 25 13:32:55 compute-0 sudo[405605]: pam_unix(sudo:session): session closed for user root
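
The bare {} that vibrant_robinson printed a few lines up appears to be the result of this "raw list --format json" call: ceph-volume's raw mode only reports OSDs prepared directly on a block device, and all three OSDs here are LVM-based, so cephadm gets an empty raw inventory alongside the populated lvm one. A small sketch of reconciling the two captures (file names are hypothetical stand-ins for the container stdout):

    import json

    lvm_osds = json.load(open("lvm_list.json"))  # keyed by OSD id; 3 entries here
    raw_osds = json.load(open("raw_list.json"))  # {} in this log
    print(f"{len(lvm_osds)} lvm OSDs, {len(raw_osds)} raw OSDs on this host")
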
Feb 25 13:32:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:32:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:32:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:32:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
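
The two handle_command lines above are cephadm (via the active mgr) persisting the device inventory it just gathered into the mon's config-key store under per-host keys. Assuming the stored value is JSON, as cephadm's inventory cache normally is, it can be read back through the config-key interface:

    import json
    import subprocess

    # Key name copied from the mon_command line above.
    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.check_output(["ceph", "config-key", "get", key])
    print(json.loads(raw))
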
Feb 25 13:32:55 compute-0 sudo[405797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:32:55 compute-0 sudo[405797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:32:55 compute-0 sudo[405797]: pam_unix(sudo:session): session closed for user root
Feb 25 13:32:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:32:56 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:32:57 compute-0 ceph-mon[76335]: pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:57 compute-0 nova_compute[244014]: 2026-02-25 13:32:57.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:59 compute-0 ceph-mon[76335]: pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:32:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:32:59 compute-0 nova_compute[244014]: 2026-02-25 13:32:59.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:32:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:01 compute-0 ceph-mon[76335]: pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:02 compute-0 nova_compute[244014]: 2026-02-25 13:33:02.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:03 compute-0 ceph-mon[76335]: pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:03 compute-0 nova_compute[244014]: 2026-02-25 13:33:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:33:04 compute-0 nova_compute[244014]: 2026-02-25 13:33:04.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:33:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:33:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265346086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.456 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
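
The ceph df round trip above is how nova's libvirt/RBD image backend sizes its storage during the resource audit: shell out with the openstack client id and parse the JSON (the command line is verbatim from the log, and both invocations return in roughly half a second). A sketch under the assumption that the top-level stats block carries total_bytes/total_avail_bytes, which is the usual "ceph df --format=json" layout:

    import json
    import subprocess

    # Command line copied from the nova_compute log lines above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd))

    stats = df["stats"]
    print(f"cluster: {stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")
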
Feb 25 13:33:05 compute-0 ceph-mon[76335]: pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3265346086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.603 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.604 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.604 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.605 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.766 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.766 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:33:05 compute-0 nova_compute[244014]: 2026-02-25 13:33:05.782 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:33:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:33:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660571484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:33:06 compute-0 nova_compute[244014]: 2026-02-25 13:33:06.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:33:06 compute-0 nova_compute[244014]: 2026-02-25 13:33:06.334 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:33:06 compute-0 nova_compute[244014]: 2026-02-25 13:33:06.379 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
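
The inventory dict in the line above is what placement actually schedules against; the effective capacity per resource class is (total - reserved) * allocation_ratio. Worked through for these numbers:

    # Inventory copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So the 8 physical vCPUs are oversubscribed 4:1 into 32 schedulable units, while the disk is deliberately undersubscribed at 0.9.
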
Feb 25 13:33:06 compute-0 nova_compute[244014]: 2026-02-25 13:33:06.385 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:33:06 compute-0 nova_compute[244014]: 2026-02-25 13:33:06.385 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:33:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1660571484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:33:07 compute-0 nova_compute[244014]: 2026-02-25 13:33:07.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:07 compute-0 ceph-mon[76335]: pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:08 compute-0 ceph-mon[76335]: pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:09 compute-0 nova_compute[244014]: 2026-02-25 13:33:09.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:10 compute-0 nova_compute[244014]: 2026-02-25 13:33:10.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:10 compute-0 nova_compute[244014]: 2026-02-25 13:33:10.387 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:33:10 compute-0 nova_compute[244014]: 2026-02-25 13:33:10.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:33:10 compute-0 nova_compute[244014]: 2026-02-25 13:33:10.462 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:33:10 compute-0 nova_compute[244014]: 2026-02-25 13:33:10.947 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:11 compute-0 ceph-mon[76335]: pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:12 compute-0 nova_compute[244014]: 2026-02-25 13:33:12.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:13 compute-0 ceph-mon[76335]: pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:13 compute-0 podman[405866]: 2026-02-25 13:33:13.720949129 +0000 UTC m=+0.062736112 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:33:13 compute-0 podman[405867]: 2026-02-25 13:33:13.752902061 +0000 UTC m=+0.094685604 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 25 13:33:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:13 compute-0 nova_compute[244014]: 2026-02-25 13:33:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:13 compute-0 nova_compute[244014]: 2026-02-25 13:33:13.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:33:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:14 compute-0 nova_compute[244014]: 2026-02-25 13:33:14.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:15 compute-0 ceph-mon[76335]: pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:17 compute-0 ceph-mon[76335]: pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:17 compute-0 nova_compute[244014]: 2026-02-25 13:33:17.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:19 compute-0 ceph-mon[76335]: pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:19 compute-0 nova_compute[244014]: 2026-02-25 13:33:19.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:19 compute-0 nova_compute[244014]: 2026-02-25 13:33:19.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:20 compute-0 ceph-mon[76335]: pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:20 compute-0 nova_compute[244014]: 2026-02-25 13:33:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:22 compute-0 nova_compute[244014]: 2026-02-25 13:33:22.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:22 compute-0 ceph-mon[76335]: pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:24 compute-0 nova_compute[244014]: 2026-02-25 13:33:24.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:25 compute-0 ceph-mon[76335]: pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:25 compute-0 nova_compute[244014]: 2026-02-25 13:33:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:27 compute-0 ceph-mon[76335]: pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:27 compute-0 nova_compute[244014]: 2026-02-25 13:33:27.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:28 compute-0 nova_compute[244014]: 2026-02-25 13:33:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:29 compute-0 ceph-mon[76335]: pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:29 compute-0 nova_compute[244014]: 2026-02-25 13:33:29.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:31 compute-0 ceph-mon[76335]: pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:33:31
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
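
The balancer pass above found nothing to do: mode upmap, a 5% misplaced ceiling, and 0 of its 10 permitted upmap changes prepared, since all 305 PGs are already active+clean. The same state can be confirmed from the CLI; this assumes "ceph balancer status" emits JSON with active/mode fields, which is its usual output:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(["ceph", "balancer", "status"]))
    print(status["active"], status["mode"])   # expected: True upmap
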
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:33:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:33:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:33:32 compute-0 nova_compute[244014]: 2026-02-25 13:33:32.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:33 compute-0 ceph-mon[76335]: pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:34 compute-0 nova_compute[244014]: 2026-02-25 13:33:34.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:35 compute-0 ceph-mon[76335]: pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:37 compute-0 ceph-mon[76335]: pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:37 compute-0 nova_compute[244014]: 2026-02-25 13:33:37.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:39 compute-0 ceph-mon[76335]: pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:39 compute-0 nova_compute[244014]: 2026-02-25 13:33:39.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:41 compute-0 ceph-mon[76335]: pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:42 compute-0 nova_compute[244014]: 2026-02-25 13:33:42.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:43 compute-0 ceph-mon[76335]: pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:33:43 compute-0 nova_compute[244014]: 2026-02-25 13:33:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:44 compute-0 podman[405912]: 2026-02-25 13:33:44.717087247 +0000 UTC m=+0.062536306 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 13:33:44 compute-0 podman[405913]: 2026-02-25 13:33:44.757419145 +0000 UTC m=+0.093263633 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 13:33:44 compute-0 nova_compute[244014]: 2026-02-25 13:33:44.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:45 compute-0 ceph-mon[76335]: pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:47 compute-0 ceph-mon[76335]: pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:33:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:33:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:33:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:33:47 compute-0 nova_compute[244014]: 2026-02-25 13:33:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:33:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:33:49 compute-0 ceph-mon[76335]: pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:49 compute-0 nova_compute[244014]: 2026-02-25 13:33:49.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:50 compute-0 nova_compute[244014]: 2026-02-25 13:33:50.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:50 compute-0 nova_compute[244014]: 2026-02-25 13:33:50.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:33:51 compute-0 ceph-mon[76335]: pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:52 compute-0 nova_compute[244014]: 2026-02-25 13:33:52.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:53 compute-0 ceph-mon[76335]: pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:33:54 compute-0 nova_compute[244014]: 2026-02-25 13:33:54.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:33:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:33:55 compute-0 ceph-mon[76335]: pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:55 compute-0 sudo[405957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:33:55 compute-0 sudo[405957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:55 compute-0 sudo[405957]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:55 compute-0 sudo[405982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:33:55 compute-0 sudo[405982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:56 compute-0 sudo[405982]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:56 compute-0 sudo[406027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:33:56 compute-0 sudo[406027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:56 compute-0 sudo[406027]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:56 compute-0 sudo[406052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:33:56 compute-0 sudo[406052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:56 compute-0 sudo[406052]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:33:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:33:56 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:33:56 compute-0 sudo[406110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:33:56 compute-0 sudo[406110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:56 compute-0 sudo[406110]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:56 compute-0 sudo[406135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:33:56 compute-0 sudo[406135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:57 compute-0 ceph-mon[76335]: pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:33:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.174889095 +0000 UTC m=+0.058703198 container create 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:33:57 compute-0 systemd[1]: Started libpod-conmon-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope.
Feb 25 13:33:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.150998861 +0000 UTC m=+0.034813014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.257635929 +0000 UTC m=+0.141450083 container init 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.265973585 +0000 UTC m=+0.149787668 container start 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.2693436 +0000 UTC m=+0.153157713 container attach 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:33:57 compute-0 loving_hamilton[406190]: 167 167
Feb 25 13:33:57 compute-0 systemd[1]: libpod-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope: Deactivated successfully.
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.271551672 +0000 UTC m=+0.155365785 container died 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-71e72b451bdefb754ae5d10586e431c404b5616c80aaafa642349f100b07b15a-merged.mount: Deactivated successfully.
Feb 25 13:33:57 compute-0 podman[406173]: 2026-02-25 13:33:57.316155791 +0000 UTC m=+0.199969894 container remove 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:33:57 compute-0 systemd[1]: libpod-conmon-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope: Deactivated successfully.
Feb 25 13:33:57 compute-0 podman[406213]: 2026-02-25 13:33:57.492738115 +0000 UTC m=+0.054273063 container create 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:33:57 compute-0 systemd[1]: Started libpod-conmon-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope.
Feb 25 13:33:57 compute-0 podman[406213]: 2026-02-25 13:33:57.469460688 +0000 UTC m=+0.030995676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:33:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:57 compute-0 nova_compute[244014]: 2026-02-25 13:33:57.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:57 compute-0 podman[406213]: 2026-02-25 13:33:57.587662434 +0000 UTC m=+0.149197412 container init 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:33:57 compute-0 podman[406213]: 2026-02-25 13:33:57.59959674 +0000 UTC m=+0.161131678 container start 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:33:57 compute-0 podman[406213]: 2026-02-25 13:33:57.602955325 +0000 UTC m=+0.164490313 container attach 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:33:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:58 compute-0 modest_wilbur[406229]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:33:58 compute-0 modest_wilbur[406229]: --> All data devices are unavailable
Feb 25 13:33:58 compute-0 systemd[1]: libpod-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope: Deactivated successfully.
Feb 25 13:33:58 compute-0 podman[406213]: 2026-02-25 13:33:58.079224937 +0000 UTC m=+0.640759875 container died 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e-merged.mount: Deactivated successfully.
Feb 25 13:33:58 compute-0 podman[406213]: 2026-02-25 13:33:58.130325419 +0000 UTC m=+0.691860357 container remove 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:33:58 compute-0 systemd[1]: libpod-conmon-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope: Deactivated successfully.
Feb 25 13:33:58 compute-0 sudo[406135]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:58 compute-0 sudo[406260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:33:58 compute-0 sudo[406260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:58 compute-0 sudo[406260]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:58 compute-0 sudo[406285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:33:58 compute-0 sudo[406285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.607511937 +0000 UTC m=+0.045358172 container create e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:33:58 compute-0 systemd[1]: Started libpod-conmon-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope.
Feb 25 13:33:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.591879195 +0000 UTC m=+0.029725430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.691946359 +0000 UTC m=+0.129792624 container init e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.703416423 +0000 UTC m=+0.141262658 container start e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.706964353 +0000 UTC m=+0.144810658 container attach e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 13:33:58 compute-0 dreamy_napier[406340]: 167 167
Feb 25 13:33:58 compute-0 systemd[1]: libpod-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope: Deactivated successfully.
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.708485626 +0000 UTC m=+0.146331861 container died e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5a2497e6cbf500356a6d2e3b499f5c1022b824bccf6a92ddbf02c1ec1885a38-merged.mount: Deactivated successfully.
Feb 25 13:33:58 compute-0 podman[406324]: 2026-02-25 13:33:58.749458383 +0000 UTC m=+0.187304588 container remove e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:33:58 compute-0 systemd[1]: libpod-conmon-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope: Deactivated successfully.
Feb 25 13:33:58 compute-0 podman[406365]: 2026-02-25 13:33:58.920782698 +0000 UTC m=+0.058392419 container create 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:33:58 compute-0 systemd[1]: Started libpod-conmon-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope.
Feb 25 13:33:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:58 compute-0 podman[406365]: 2026-02-25 13:33:58.900089664 +0000 UTC m=+0.037699445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:33:59 compute-0 podman[406365]: 2026-02-25 13:33:59.014268636 +0000 UTC m=+0.151878457 container init 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:33:59 compute-0 podman[406365]: 2026-02-25 13:33:59.023356363 +0000 UTC m=+0.160966084 container start 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:33:59 compute-0 podman[406365]: 2026-02-25 13:33:59.027381506 +0000 UTC m=+0.164991287 container attach 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:33:59 compute-0 ceph-mon[76335]: pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]: {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     "0": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "devices": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "/dev/loop3"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             ],
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_name": "ceph_lv0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_size": "21470642176",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "name": "ceph_lv0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "tags": {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_name": "ceph",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.crush_device_class": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.encrypted": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.objectstore": "bluestore",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_id": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.vdo": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.with_tpm": "0"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             },
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "vg_name": "ceph_vg0"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         }
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     ],
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     "1": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "devices": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "/dev/loop4"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             ],
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_name": "ceph_lv1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_size": "21470642176",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "name": "ceph_lv1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "tags": {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_name": "ceph",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.crush_device_class": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.encrypted": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.objectstore": "bluestore",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_id": "1",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.vdo": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.with_tpm": "0"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             },
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "vg_name": "ceph_vg1"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         }
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     ],
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     "2": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "devices": [
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "/dev/loop5"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             ],
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_name": "ceph_lv2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_size": "21470642176",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "name": "ceph_lv2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "tags": {
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.cluster_name": "ceph",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.crush_device_class": "",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.encrypted": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.objectstore": "bluestore",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osd_id": "2",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.vdo": "0",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:                 "ceph.with_tpm": "0"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             },
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "type": "block",
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:             "vg_name": "ceph_vg2"
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:         }
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]:     ]
Feb 25 13:33:59 compute-0 dazzling_fermat[406381]: }
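
Annotation: the JSON emitted above by the dazzling_fermat ceph-volume container records each OSD's metadata twice: once as the flattened lv_tags string actually stored on the logical volume by LVM, and once as the parsed tags object. A minimal sketch of that parsing step in Python, assuming tag values never contain commas (which holds for every ceph.* tag shown above):

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split a flattened LVM tag string such as
        'ceph.osd_id=0,ceph.type=block' into a dict, mirroring the
        'tags' objects in the JSON listing above."""
        tags = {}
        for pair in lv_tags.split(","):
            if pair:
                key, _, value = pair.partition("=")
                tags[key] = value
        return tags

    # e.g. for osd.2 above:
    # parse_lv_tags(lv_tags_string)["ceph.osd_fsid"]
    # -> "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"
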
Feb 25 13:33:59 compute-0 systemd[1]: libpod-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope: Deactivated successfully.
Feb 25 13:33:59 compute-0 podman[406365]: 2026-02-25 13:33:59.357197985 +0000 UTC m=+0.494807716 container died 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b-merged.mount: Deactivated successfully.
Feb 25 13:33:59 compute-0 podman[406365]: 2026-02-25 13:33:59.408738859 +0000 UTC m=+0.546348590 container remove 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:33:59 compute-0 systemd[1]: libpod-conmon-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope: Deactivated successfully.
Feb 25 13:33:59 compute-0 sudo[406285]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:59 compute-0 sudo[406401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:33:59 compute-0 sudo[406401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:59 compute-0 sudo[406401]: pam_unix(sudo:session): session closed for user root
Feb 25 13:33:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
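
Annotation: the recurring _set_new_cache_sizes lines from ceph-mon are easier to read in MiB. The split is the mon autotuning its internal caches (commonly governed by mon_memory_target; treating that as the driver here is an assumption about this cluster's config, the byte arithmetic below is not):

    # Decode the cache-size figures from the ceph-mon line above.
    MiB = 1 << 20
    for label, nbytes in [("cache_size", 1020054731),
                          ("inc_alloc",   343932928),
                          ("full_alloc",  348127232),
                          ("kv_alloc",    318767104)]:
        print(f"{label:10s} {nbytes / MiB:7.1f} MiB")
    # cache_size  972.8 MiB, inc_alloc 328.0, full_alloc 332.0, kv_alloc 304.0
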
Feb 25 13:33:59 compute-0 sudo[406426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:33:59 compute-0 sudo[406426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:33:59 compute-0 nova_compute[244014]: 2026-02-25 13:33:59.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:33:59 compute-0 podman[406464]: 2026-02-25 13:33:59.878560149 +0000 UTC m=+0.042182542 container create f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:33:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:33:59 compute-0 nova_compute[244014]: 2026-02-25 13:33:59.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:33:59 compute-0 nova_compute[244014]: 2026-02-25 13:33:59.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:33:59 compute-0 nova_compute[244014]: 2026-02-25 13:33:59.915 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:33:59 compute-0 systemd[1]: Started libpod-conmon-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope.
Feb 25 13:33:59 compute-0 podman[406464]: 2026-02-25 13:33:59.857598357 +0000 UTC m=+0.021220840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:33:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:33:59 compute-0 podman[406464]: 2026-02-25 13:33:59.987403341 +0000 UTC m=+0.151025744 container init f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:33:59 compute-0 podman[406464]: 2026-02-25 13:33:59.997563067 +0000 UTC m=+0.161185470 container start f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:34:00 compute-0 podman[406464]: 2026-02-25 13:34:00.001769516 +0000 UTC m=+0.165391929 container attach f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:34:00 compute-0 loving_lichterman[406481]: 167 167
Feb 25 13:34:00 compute-0 systemd[1]: libpod-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope: Deactivated successfully.
Feb 25 13:34:00 compute-0 podman[406464]: 2026-02-25 13:34:00.003857925 +0000 UTC m=+0.167480338 container died f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 25 13:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0059155171ce90425e44194ce3f3883950e0180bf15788e5e5d98bdb97ed6d76-merged.mount: Deactivated successfully.
Feb 25 13:34:00 compute-0 podman[406464]: 2026-02-25 13:34:00.054482834 +0000 UTC m=+0.218105217 container remove f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:34:00 compute-0 systemd[1]: libpod-conmon-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope: Deactivated successfully.
Feb 25 13:34:00 compute-0 podman[406505]: 2026-02-25 13:34:00.242993914 +0000 UTC m=+0.066820517 container create 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:34:00 compute-0 systemd[1]: Started libpod-conmon-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope.
Feb 25 13:34:00 compute-0 podman[406505]: 2026-02-25 13:34:00.213726118 +0000 UTC m=+0.037552721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:34:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:34:00 compute-0 podman[406505]: 2026-02-25 13:34:00.35020246 +0000 UTC m=+0.174029073 container init 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:34:00 compute-0 podman[406505]: 2026-02-25 13:34:00.363017522 +0000 UTC m=+0.186844085 container start 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:34:00 compute-0 podman[406505]: 2026-02-25 13:34:00.36652262 +0000 UTC m=+0.190349243 container attach 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:34:01 compute-0 ceph-mon[76335]: pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:01 compute-0 lvm[406600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:34:01 compute-0 lvm[406600]: VG ceph_vg1 finished
Feb 25 13:34:01 compute-0 lvm[406598]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:34:01 compute-0 lvm[406598]: VG ceph_vg0 finished
Feb 25 13:34:01 compute-0 lvm[406602]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:34:01 compute-0 lvm[406602]: VG ceph_vg2 finished
Feb 25 13:34:01 compute-0 busy_perlman[406521]: {}
Feb 25 13:34:01 compute-0 systemd[1]: libpod-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Deactivated successfully.
Feb 25 13:34:01 compute-0 systemd[1]: libpod-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Consumed 1.137s CPU time.
Feb 25 13:34:01 compute-0 podman[406505]: 2026-02-25 13:34:01.187213982 +0000 UTC m=+1.011040575 container died 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380-merged.mount: Deactivated successfully.
Feb 25 13:34:01 compute-0 podman[406505]: 2026-02-25 13:34:01.235909356 +0000 UTC m=+1.059735919 container remove 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:34:01 compute-0 systemd[1]: libpod-conmon-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Deactivated successfully.
Feb 25 13:34:01 compute-0 sudo[406426]: pam_unix(sudo:session): session closed for user root
Feb 25 13:34:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:34:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:34:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:34:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:34:01 compute-0 sudo[406619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:34:01 compute-0 sudo[406619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:34:01 compute-0 sudo[406619]: pam_unix(sudo:session): session closed for user root
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:34:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:34:02 compute-0 nova_compute[244014]: 2026-02-25 13:34:02.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:03 compute-0 ceph-mon[76335]: pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:03 compute-0 nova_compute[244014]: 2026-02-25 13:34:03.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.903 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:34:04 compute-0 nova_compute[244014]: 2026-02-25 13:34:04.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:34:05 compute-0 ceph-mon[76335]: pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:34:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709027456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.469 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.699 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.700 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:34:05 compute-0 nova_compute[244014]: 2026-02-25 13:34:05.779 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:34:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3709027456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:34:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:34:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365646580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:34:06 compute-0 nova_compute[244014]: 2026-02-25 13:34:06.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:34:06 compute-0 nova_compute[244014]: 2026-02-25 13:34:06.352 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:34:06 compute-0 nova_compute[244014]: 2026-02-25 13:34:06.373 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:34:06 compute-0 nova_compute[244014]: 2026-02-25 13:34:06.375 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:34:06 compute-0 nova_compute[244014]: 2026-02-25 13:34:06.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
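
Annotation: the update_available_resource cycle that just completed shows where nova's numbers come from: free disk is read via the "ceph df" subprocess logged at 13:34:04 and 13:34:05, and the inventory dict logged at 13:34:06 is what Placement turns into schedulable capacity, (total - reserved) * allocation_ratio per resource class. A quick check of those figures (a sketch of the standard Placement capacity formula, not nova's actual code):

    # Capacity implied by the inventory nova reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
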
Feb 25 13:34:07 compute-0 ceph-mon[76335]: pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3365646580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:34:07 compute-0 nova_compute[244014]: 2026-02-25 13:34:07.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:09 compute-0 ceph-mon[76335]: pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:09 compute-0 nova_compute[244014]: 2026-02-25 13:34:09.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:10 compute-0 ceph-mon[76335]: pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:11 compute-0 nova_compute[244014]: 2026-02-25 13:34:11.377 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:11 compute-0 nova_compute[244014]: 2026-02-25 13:34:11.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:34:11 compute-0 nova_compute[244014]: 2026-02-25 13:34:11.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:34:11 compute-0 nova_compute[244014]: 2026-02-25 13:34:11.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:34:11 compute-0 nova_compute[244014]: 2026-02-25 13:34:11.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:12 compute-0 nova_compute[244014]: 2026-02-25 13:34:12.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:12 compute-0 ceph-mon[76335]: pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:14 compute-0 nova_compute[244014]: 2026-02-25 13:34:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:14 compute-0 nova_compute[244014]: 2026-02-25 13:34:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:34:14 compute-0 nova_compute[244014]: 2026-02-25 13:34:14.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:14 compute-0 ceph-mon[76335]: pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:15 compute-0 podman[406688]: 2026-02-25 13:34:15.730856764 +0000 UTC m=+0.065204641 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 13:34:15 compute-0 podman[406689]: 2026-02-25 13:34:15.799575384 +0000 UTC m=+0.133055406 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
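
Annotation: the two health_status=healthy events above are podman's healthcheck timer executing the configured test ('/openstack/healthcheck', bind-mounted read-only into each container per the config_data shown). The same probe can be triggered on demand; 'podman healthcheck run' exits 0 when the check passes (a sketch, with container names taken from the log lines above):

    import subprocess

    # Manually run the same healthcheck podman fires on its timer.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
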
Feb 25 13:34:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:16 compute-0 ceph-mon[76335]: pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:17 compute-0 nova_compute[244014]: 2026-02-25 13:34:17.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:18 compute-0 ceph-mon[76335]: pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:19 compute-0 nova_compute[244014]: 2026-02-25 13:34:19.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:20 compute-0 ceph-mon[76335]: pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:21 compute-0 nova_compute[244014]: 2026-02-25 13:34:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:21 compute-0 nova_compute[244014]: 2026-02-25 13:34:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:22 compute-0 nova_compute[244014]: 2026-02-25 13:34:22.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:23 compute-0 ceph-mon[76335]: pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:24 compute-0 nova_compute[244014]: 2026-02-25 13:34:24.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:25 compute-0 ceph-mon[76335]: pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:25 compute-0 nova_compute[244014]: 2026-02-25 13:34:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:27 compute-0 ceph-mon[76335]: pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:27 compute-0 nova_compute[244014]: 2026-02-25 13:34:27.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:28 compute-0 nova_compute[244014]: 2026-02-25 13:34:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:29 compute-0 ceph-mon[76335]: pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:29 compute-0 nova_compute[244014]: 2026-02-25 13:34:29.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:31 compute-0 ceph-mon[76335]: pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:34:31
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
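
Annotation: the balancer pass above ran in upmap mode with a 5% misplaced-object ceiling and proposed 0 of its per-run budget of 10 optimizations, the expected outcome while all 305 PGs sit active+clean on an evenly used cluster. Its state can be confirmed from the CLI; 'ceph balancer status' is a standard mgr command, while tying the 0.05 ceiling to the target_max_misplaced_ratio option is an assumption about this cluster's (default) configuration:

    import json, subprocess

    # 'ceph balancer status' reports the mode that produced the
    # "Mode upmap, max misplaced 0.050000" line above.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status.get("active"), status.get("mode"))
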
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:34:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:34:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:34:32 compute-0 nova_compute[244014]: 2026-02-25 13:34:32.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:33 compute-0 ceph-mon[76335]: pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:35 compute-0 nova_compute[244014]: 2026-02-25 13:34:35.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:35 compute-0 ceph-mon[76335]: pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:37 compute-0 ceph-mon[76335]: pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:37 compute-0 nova_compute[244014]: 2026-02-25 13:34:37.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:39 compute-0 ceph-mon[76335]: pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:39 compute-0 nova_compute[244014]: 2026-02-25 13:34:39.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:34:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:40 compute-0 nova_compute[244014]: 2026-02-25 13:34:40.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:41 compute-0 ceph-mon[76335]: pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:42 compute-0 nova_compute[244014]: 2026-02-25 13:34:42.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:43 compute-0 ceph-mon[76335]: pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:34:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:45 compute-0 nova_compute[244014]: 2026-02-25 13:34:45.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:45 compute-0 ceph-mon[76335]: pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:46 compute-0 podman[406731]: 2026-02-25 13:34:46.719926746 +0000 UTC m=+0.062727572 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:34:46 compute-0 podman[406732]: 2026-02-25 13:34:46.754469371 +0000 UTC m=+0.093405328 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Feb 25 13:34:47 compute-0 ceph-mon[76335]: pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:34:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:34:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:34:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:34:47 compute-0 nova_compute[244014]: 2026-02-25 13:34:47.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:34:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:34:49 compute-0 ceph-mon[76335]: pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:50 compute-0 nova_compute[244014]: 2026-02-25 13:34:50.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:51 compute-0 ceph-mon[76335]: pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:52 compute-0 nova_compute[244014]: 2026-02-25 13:34:52.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:53 compute-0 ceph-mon[76335]: pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.437169) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494437199, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3475600, "memory_usage": 3534368, "flush_reason": "Manual Compaction"}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494494807, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 3408896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70495, "largest_seqno": 72536, "table_properties": {"data_size": 3399582, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18441, "raw_average_key_size": 20, "raw_value_size": 3381102, "raw_average_value_size": 3671, "num_data_blocks": 264, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026265, "oldest_key_time": 1772026265, "file_creation_time": 1772026494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 57720 microseconds, and 4688 cpu microseconds.
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.494879) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 3408896 bytes OK
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.494906) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508726) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508785) EVENT_LOG_v1 {"time_micros": 1772026494508773, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508816) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 3467079, prev total WAL file size 3467079, number of live WAL files 2.
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.509884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(3329KB)], [170(8879KB)]
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494509929, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12501442, "oldest_snapshot_seqno": -1}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8873 keys, 10757820 bytes, temperature: kUnknown
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494668960, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 10757820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10701812, "index_size": 32727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 232871, "raw_average_key_size": 26, "raw_value_size": 10546606, "raw_average_value_size": 1188, "num_data_blocks": 1263, "num_entries": 8873, "num_filter_entries": 8873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.669494) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 10757820 bytes
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.776929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.5 rd, 67.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 9387, records dropped: 514 output_compression: NoCompression
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.776971) EVENT_LOG_v1 {"time_micros": 1772026494776954, "job": 106, "event": "compaction_finished", "compaction_time_micros": 159335, "compaction_time_cpu_micros": 28534, "output_level": 6, "num_output_files": 1, "total_output_size": 10757820, "num_input_records": 9387, "num_output_records": 8873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494778084, "job": 106, "event": "table_file_deletion", "file_number": 172}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494779120, "job": 106, "event": "table_file_deletion", "file_number": 170}
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.509799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:54 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:34:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:34:55 compute-0 nova_compute[244014]: 2026-02-25 13:34:55.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:55 compute-0 ceph-mon[76335]: pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:57 compute-0 ceph-mon[76335]: pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:57 compute-0 nova_compute[244014]: 2026-02-25 13:34:57.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:34:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:59 compute-0 ceph-mon[76335]: pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:34:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:34:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:00 compute-0 nova_compute[244014]: 2026-02-25 13:35:00.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:01 compute-0 sudo[406776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:35:01 compute-0 sudo[406776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:01 compute-0 sudo[406776]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:01 compute-0 sudo[406801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:35:01 compute-0 sudo[406801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:01 compute-0 ceph-mon[76335]: pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:02 compute-0 sudo[406801]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:35:02 compute-0 sudo[406857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:35:02 compute-0 sudo[406857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:02 compute-0 sudo[406857]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:02 compute-0 sudo[406882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:35:02 compute-0 sudo[406882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:35:02 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:35:02 compute-0 podman[406920]: 2026-02-25 13:35:02.486420028 +0000 UTC m=+0.055684602 container create 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:35:02 compute-0 systemd[1]: Started libpod-conmon-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope.
Feb 25 13:35:02 compute-0 podman[406920]: 2026-02-25 13:35:02.454759105 +0000 UTC m=+0.024023739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:02 compute-0 podman[406920]: 2026-02-25 13:35:02.583451067 +0000 UTC m=+0.152715631 container init 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:35:02 compute-0 podman[406920]: 2026-02-25 13:35:02.591822163 +0000 UTC m=+0.161086727 container start 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:35:02 compute-0 podman[406920]: 2026-02-25 13:35:02.594753696 +0000 UTC m=+0.164018240 container attach 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:35:02 compute-0 hardcore_hopper[406936]: 167 167
Feb 25 13:35:02 compute-0 systemd[1]: libpod-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope: Deactivated successfully.
Feb 25 13:35:02 compute-0 conmon[406936]: conmon 479839b617e32968e5a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope/container/memory.events
Feb 25 13:35:02 compute-0 podman[406941]: 2026-02-25 13:35:02.652143435 +0000 UTC m=+0.036080819 container died 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7b9680e34998470af8d682ce4a49b346a0953a6d807798e01eabd9948e4a010-merged.mount: Deactivated successfully.
Feb 25 13:35:02 compute-0 podman[406941]: 2026-02-25 13:35:02.692552896 +0000 UTC m=+0.076490270 container remove 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:35:02 compute-0 systemd[1]: libpod-conmon-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope: Deactivated successfully.
Feb 25 13:35:02 compute-0 nova_compute[244014]: 2026-02-25 13:35:02.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:02 compute-0 podman[406964]: 2026-02-25 13:35:02.883866705 +0000 UTC m=+0.051419522 container create 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:35:02 compute-0 systemd[1]: Started libpod-conmon-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope.
Feb 25 13:35:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:02 compute-0 podman[406964]: 2026-02-25 13:35:02.859345433 +0000 UTC m=+0.026898330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:02 compute-0 podman[406964]: 2026-02-25 13:35:02.979824804 +0000 UTC m=+0.147377701 container init 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:35:02 compute-0 podman[406964]: 2026-02-25 13:35:02.987121169 +0000 UTC m=+0.154674026 container start 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:35:02 compute-0 podman[406964]: 2026-02-25 13:35:02.992738048 +0000 UTC m=+0.160290905 container attach 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 13:35:03 compute-0 agitated_knuth[406981]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:35:03 compute-0 agitated_knuth[406981]: --> All data devices are unavailable
Feb 25 13:35:03 compute-0 systemd[1]: libpod-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope: Deactivated successfully.
Feb 25 13:35:03 compute-0 podman[406964]: 2026-02-25 13:35:03.476508221 +0000 UTC m=+0.644061078 container died 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:35:03 compute-0 ceph-mon[76335]: pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f-merged.mount: Deactivated successfully.
Feb 25 13:35:03 compute-0 podman[406964]: 2026-02-25 13:35:03.529016923 +0000 UTC m=+0.696569740 container remove 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:35:03 compute-0 systemd[1]: libpod-conmon-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope: Deactivated successfully.
Feb 25 13:35:03 compute-0 sudo[406882]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:03 compute-0 sudo[407013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:35:03 compute-0 sudo[407013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:03 compute-0 sudo[407013]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:03 compute-0 sudo[407038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:35:03 compute-0 sudo[407038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:03 compute-0 podman[407075]: 2026-02-25 13:35:03.961967522 +0000 UTC m=+0.056133735 container create 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:35:04 compute-0 systemd[1]: Started libpod-conmon-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope.
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:03.935814524 +0000 UTC m=+0.029980787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:04.048870745 +0000 UTC m=+0.143036968 container init 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:04.055952835 +0000 UTC m=+0.150119058 container start 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:04.059404082 +0000 UTC m=+0.153570325 container attach 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:35:04 compute-0 eloquent_bhaskara[407091]: 167 167
Feb 25 13:35:04 compute-0 systemd[1]: libpod-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope: Deactivated successfully.
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:04.061291915 +0000 UTC m=+0.155458128 container died 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-228d71ac330a135e557a938a64d77bc0e44881eb38ed48649272688994b1f9fa-merged.mount: Deactivated successfully.
Feb 25 13:35:04 compute-0 podman[407075]: 2026-02-25 13:35:04.110164525 +0000 UTC m=+0.204330748 container remove 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 13:35:04 compute-0 systemd[1]: libpod-conmon-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope: Deactivated successfully.
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.287824769 +0000 UTC m=+0.042345706 container create 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:35:04 compute-0 systemd[1]: Started libpod-conmon-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope.
Feb 25 13:35:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.27052031 +0000 UTC m=+0.025041227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.382436509 +0000 UTC m=+0.136957496 container init 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.389653423 +0000 UTC m=+0.144174360 container start 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.39346379 +0000 UTC m=+0.147984727 container attach 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:35:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:04 compute-0 inspiring_wu[407132]: {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     "0": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "devices": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "/dev/loop3"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             ],
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_name": "ceph_lv0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_size": "21470642176",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "name": "ceph_lv0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "tags": {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_name": "ceph",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.crush_device_class": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.encrypted": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.objectstore": "bluestore",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_id": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.vdo": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.with_tpm": "0"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             },
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "vg_name": "ceph_vg0"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         }
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     ],
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     "1": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "devices": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "/dev/loop4"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             ],
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_name": "ceph_lv1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_size": "21470642176",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "name": "ceph_lv1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "tags": {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_name": "ceph",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.crush_device_class": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.encrypted": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.objectstore": "bluestore",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_id": "1",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.vdo": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.with_tpm": "0"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             },
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "vg_name": "ceph_vg1"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         }
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     ],
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     "2": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "devices": [
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "/dev/loop5"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             ],
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_name": "ceph_lv2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_size": "21470642176",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "name": "ceph_lv2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "tags": {
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.cluster_name": "ceph",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.crush_device_class": "",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.encrypted": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.objectstore": "bluestore",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osd_id": "2",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.vdo": "0",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:                 "ceph.with_tpm": "0"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             },
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "type": "block",
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:             "vg_name": "ceph_vg2"
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:         }
Feb 25 13:35:04 compute-0 inspiring_wu[407132]:     ]
Feb 25 13:35:04 compute-0 inspiring_wu[407132]: }
Feb 25 13:35:04 compute-0 systemd[1]: libpod-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope: Deactivated successfully.
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.721330093 +0000 UTC m=+0.475851000 container died 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e-merged.mount: Deactivated successfully.
Feb 25 13:35:04 compute-0 podman[407115]: 2026-02-25 13:35:04.762012592 +0000 UTC m=+0.516533509 container remove 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:35:04 compute-0 systemd[1]: libpod-conmon-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope: Deactivated successfully.
Feb 25 13:35:04 compute-0 sudo[407038]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:04 compute-0 sudo[407152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:35:04 compute-0 sudo[407152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:04 compute-0 sudo[407152]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:35:04 compute-0 nova_compute[244014]: 2026-02-25 13:35:04.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:35:04 compute-0 sudo[407177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:35:04 compute-0 sudo[407177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.268779373 +0000 UTC m=+0.038488256 container create 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:35:05 compute-0 systemd[1]: Started libpod-conmon-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope.
Feb 25 13:35:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.250506997 +0000 UTC m=+0.020215890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.347550956 +0000 UTC m=+0.117259829 container init 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.354846012 +0000 UTC m=+0.124554885 container start 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.357653121 +0000 UTC m=+0.127362014 container attach 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:35:05 compute-0 stupefied_stonebraker[407250]: 167 167
Feb 25 13:35:05 compute-0 systemd[1]: libpod-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope: Deactivated successfully.
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.359474422 +0000 UTC m=+0.129183305 container died 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e990e5032b2d4456d26ac81a58a555d07b58336667146a021dd7e6be12be8ca9-merged.mount: Deactivated successfully.
Feb 25 13:35:05 compute-0 podman[407233]: 2026-02-25 13:35:05.39979073 +0000 UTC m=+0.169499603 container remove 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:35:05 compute-0 systemd[1]: libpod-conmon-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope: Deactivated successfully.
Feb 25 13:35:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:35:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397516983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:35:05 compute-0 ceph-mon[76335]: pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:05 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/397516983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:35:05 compute-0 podman[407276]: 2026-02-25 13:35:05.570131798 +0000 UTC m=+0.059879281 container create 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:35:05 compute-0 systemd[1]: Started libpod-conmon-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope.
Feb 25 13:35:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:35:05 compute-0 podman[407276]: 2026-02-25 13:35:05.54931865 +0000 UTC m=+0.039066123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:35:05 compute-0 podman[407276]: 2026-02-25 13:35:05.669307137 +0000 UTC m=+0.159054630 container init 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:35:05 compute-0 podman[407276]: 2026-02-25 13:35:05.67404205 +0000 UTC m=+0.163789503 container start 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:35:05 compute-0 podman[407276]: 2026-02-25 13:35:05.677422816 +0000 UTC m=+0.167170319 container attach 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:35:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.987 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:35:05 compute-0 nova_compute[244014]: 2026-02-25 13:35:05.987 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.004 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:35:06 compute-0 lvm[407391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:35:06 compute-0 lvm[407391]: VG ceph_vg0 finished
Feb 25 13:35:06 compute-0 lvm[407392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:35:06 compute-0 lvm[407392]: VG ceph_vg1 finished
Feb 25 13:35:06 compute-0 lvm[407394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:35:06 compute-0 lvm[407394]: VG ceph_vg2 finished
Feb 25 13:35:06 compute-0 infallible_chatelet[407292]: {}
Feb 25 13:35:06 compute-0 podman[407276]: 2026-02-25 13:35:06.431784966 +0000 UTC m=+0.921532449 container died 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:35:06 compute-0 systemd[1]: libpod-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Deactivated successfully.
Feb 25 13:35:06 compute-0 systemd[1]: libpod-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Consumed 1.116s CPU time.
Feb 25 13:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605-merged.mount: Deactivated successfully.
Feb 25 13:35:06 compute-0 podman[407276]: 2026-02-25 13:35:06.478435393 +0000 UTC m=+0.968182856 container remove 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:35:06 compute-0 systemd[1]: libpod-conmon-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Deactivated successfully.
Feb 25 13:35:06 compute-0 sudo[407177]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:35:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:35:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:35:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443480120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:35:06 compute-0 sudo[407408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:35:06 compute-0 sudo[407408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:35:06 compute-0 sudo[407408]: pam_unix(sudo:session): session closed for user root
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.605 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.612 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.667 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.670 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:35:06 compute-0 nova_compute[244014]: 2026-02-25 13:35:06.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:35:07 compute-0 ceph-mon[76335]: pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:07 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:35:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1443480120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:35:07 compute-0 nova_compute[244014]: 2026-02-25 13:35:07.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:08 compute-0 ceph-mon[76335]: pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:10 compute-0 nova_compute[244014]: 2026-02-25 13:35:10.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:11 compute-0 ceph-mon[76335]: pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:12 compute-0 nova_compute[244014]: 2026-02-25 13:35:12.671 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:12 compute-0 nova_compute[244014]: 2026-02-25 13:35:12.671 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:35:12 compute-0 nova_compute[244014]: 2026-02-25 13:35:12.672 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:35:12 compute-0 nova_compute[244014]: 2026-02-25 13:35:12.688 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:35:12 compute-0 nova_compute[244014]: 2026-02-25 13:35:12.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:13 compute-0 ceph-mon[76335]: pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:13 compute-0 nova_compute[244014]: 2026-02-25 13:35:13.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:15 compute-0 ceph-mon[76335]: pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:15 compute-0 nova_compute[244014]: 2026-02-25 13:35:15.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:16 compute-0 nova_compute[244014]: 2026-02-25 13:35:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:16 compute-0 nova_compute[244014]: 2026-02-25 13:35:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:35:17 compute-0 ceph-mon[76335]: pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:17 compute-0 podman[407435]: 2026-02-25 13:35:17.777837789 +0000 UTC m=+0.112075494 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 13:35:17 compute-0 nova_compute[244014]: 2026-02-25 13:35:17.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:17 compute-0 podman[407436]: 2026-02-25 13:35:17.842783672 +0000 UTC m=+0.172862359 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 25 13:35:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:19 compute-0 ceph-mon[76335]: pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:20 compute-0 nova_compute[244014]: 2026-02-25 13:35:20.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:21 compute-0 ceph-mon[76335]: pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:21 compute-0 nova_compute[244014]: 2026-02-25 13:35:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:22 compute-0 nova_compute[244014]: 2026-02-25 13:35:22.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:22 compute-0 nova_compute[244014]: 2026-02-25 13:35:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:23 compute-0 ceph-mon[76335]: pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:25 compute-0 ceph-mon[76335]: pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:25 compute-0 nova_compute[244014]: 2026-02-25 13:35:25.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:27 compute-0 ceph-mon[76335]: pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:27 compute-0 nova_compute[244014]: 2026-02-25 13:35:27.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:27 compute-0 nova_compute[244014]: 2026-02-25 13:35:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:29 compute-0 ceph-mon[76335]: pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:29 compute-0 nova_compute[244014]: 2026-02-25 13:35:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:35:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:30 compute-0 nova_compute[244014]: 2026-02-25 13:35:30.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:31 compute-0 ceph-mon[76335]: pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:35:31
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:35:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:35:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:35:32 compute-0 nova_compute[244014]: 2026-02-25 13:35:32.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:33 compute-0 ceph-mon[76335]: pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:35 compute-0 ceph-mon[76335]: pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:35 compute-0 nova_compute[244014]: 2026-02-25 13:35:35.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:37 compute-0 ceph-mon[76335]: pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:37 compute-0 nova_compute[244014]: 2026-02-25 13:35:37.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:39 compute-0 ceph-mon[76335]: pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:40 compute-0 nova_compute[244014]: 2026-02-25 13:35:40.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:41 compute-0 ceph-mon[76335]: pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:42 compute-0 nova_compute[244014]: 2026-02-25 13:35:42.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:43 compute-0 ceph-mon[76335]: pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
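
These pg_autoscaler lines follow Ceph's capacity-based sizing: the raw PG target is usage_ratio * bias * PG budget, then quantized to a power of two, and a pool is only resized when the target diverges from the current pg_num by a large factor (hence "quantized to 32 (current 32)" even for tiny targets). With the default mon_target_pg_per_osd of 100 and three OSDs, a 300-PG budget reproduces the logged targets (the budget is inferred from the cluster size; it is not printed in the log):

    # usage ratio and bias copied from the log lines above
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    pg_budget = 100 * 3   # mon_target_pg_per_osd * OSD count (inferred)

    for name, (ratio, bias) in pools.items():
        # .mgr -> 0.0021557249951162337, images -> 0.2014391215943...,
        # cephfs.cephfs.meta -> 0.0016699640237... (matches the log
        # up to float rounding)
        print(f"{name}: pg target {ratio * bias * pg_budget}")
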
Feb 25 13:35:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:45 compute-0 ceph-mon[76335]: pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:45 compute-0 nova_compute[244014]: 2026-02-25 13:35:45.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:47 compute-0 ceph-mon[76335]: pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:35:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:35:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:35:47 compute-0 nova_compute[244014]: 2026-02-25 13:35:47.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:35:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
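
The audit trail from client.openstack is a capacity poll: the OpenStack services send JSON mon_commands over the RADOS protocol rather than shelling out. A minimal sketch with the python-rados binding, assuming a readable /etc/ceph/ceph.conf and the client.openstack keyring this audit log shows:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        # The monitor logs this as cmd={"prefix":"df", "format":"json"}
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        stats = json.loads(out)
        print(stats["stats"]["total_avail_bytes"])
    finally:
        cluster.shutdown()
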
Feb 25 13:35:48 compute-0 podman[407481]: 2026-02-25 13:35:48.734201847 +0000 UTC m=+0.070540322 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:35:48 compute-0 podman[407482]: 2026-02-25 13:35:48.768528116 +0000 UTC m=+0.104957033 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 25 13:35:49 compute-0 ceph-mon[76335]: pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:50 compute-0 nova_compute[244014]: 2026-02-25 13:35:50.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:51 compute-0 ceph-mon[76335]: pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:52 compute-0 nova_compute[244014]: 2026-02-25 13:35:52.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:53 compute-0 ceph-mon[76335]: pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.078 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:35:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.078 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
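
The acquire/release pair above is oslo.concurrency's named-lock pattern wrapped around ProcessMonitor._check_child_processes; "waited 0.001s" is time spent contending for the lock and "held 0.000s" is the critical section itself. The equivalent construct, sketched minimally (lock name from the log, placeholder body):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Only one thread inspects the monitored children at a time;
        # entering and leaving this section emits the "acquired" /
        # "released" debug lines seen above.
        pass
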
Feb 25 13:35:55 compute-0 ceph-mon[76335]: pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:55 compute-0 nova_compute[244014]: 2026-02-25 13:35:55.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:57 compute-0 ceph-mon[76335]: pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:57 compute-0 nova_compute[244014]: 2026-02-25 13:35:57.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:35:59 compute-0 ceph-mon[76335]: pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:35:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:35:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:00 compute-0 nova_compute[244014]: 2026-02-25 13:36:00.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:01 compute-0 ceph-mon[76335]: pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:02 compute-0 nova_compute[244014]: 2026-02-25 13:36:02.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:03 compute-0 ceph-mon[76335]: pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.933 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.934 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:36:04 compute-0 nova_compute[244014]: 2026-02-25 13:36:04.935 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:36:05 compute-0 ceph-mon[76335]: pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:36:05 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814438940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.570 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
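
The resource audit shells out to ceph df through oslo.concurrency's processutils, which produces the paired "Running cmd" / "returned: 0 in 0.635s" lines. A hedged sketch of that call and the parse (nova's real parsing lives in its RBD driver and the exact df field used varies by release; this only shows the shape):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    df = json.loads(out)
    total_gib = df["stats"]["total_bytes"] / 2**30
    # 64411926528 / 2**30 ~= 59.99 GiB, which lines up with the
    # free_disk=59.98...GB figure in the resource view logged below.
    print(f"cluster capacity: {total_gib:.2f} GiB")
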
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.819 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.822 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:36:05 compute-0 nova_compute[244014]: 2026-02-25 13:36:05.823 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:36:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.076 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.077 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.201 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:36:06 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/814438940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.329 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.330 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
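
The inventory nova pushes to Placement encodes schedulable capacity as (total - reserved) * allocation_ratio per resource class. Applying that standard Placement arithmetic to the values in these lines:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, effective)
    # VCPU      -> 32.0  (8 physical vCPUs, 4x overcommit)
    # MEMORY_MB -> 7167.0
    # DISK_GB   -> 52.2  (disk deliberately undercommitted at 0.9)
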
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.350 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.372 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.386 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:36:06 compute-0 sudo[407565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:36:06 compute-0 sudo[407565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:06 compute-0 sudo[407565]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:06 compute-0 sudo[407590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:36:06 compute-0 sudo[407590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:36:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582902453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.979 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:36:06 compute-0 nova_compute[244014]: 2026-02-25 13:36:06.985 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:36:07 compute-0 nova_compute[244014]: 2026-02-25 13:36:07.000 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:36:07 compute-0 nova_compute[244014]: 2026-02-25 13:36:07.002 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:36:07 compute-0 nova_compute[244014]: 2026-02-25 13:36:07.003 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:36:07 compute-0 podman[407661]: 2026-02-25 13:36:07.138485518 +0000 UTC m=+0.055312862 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:36:07 compute-0 podman[407661]: 2026-02-25 13:36:07.217132288 +0000 UTC m=+0.133959662 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:36:07 compute-0 ceph-mon[76335]: pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3582902453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:36:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:07 compute-0 nova_compute[244014]: 2026-02-25 13:36:07.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:08 compute-0 nova_compute[244014]: 2026-02-25 13:36:08.002 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:08 compute-0 sudo[407590]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:08 compute-0 sudo[407851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:36:08 compute-0 sudo[407851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:08 compute-0 sudo[407851]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:08 compute-0 sudo[407876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:36:08 compute-0 sudo[407876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:08 compute-0 sudo[407876]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:36:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:36:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:36:08 compute-0 sudo[407932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:36:08 compute-0 sudo[407932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:08 compute-0 sudo[407932]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:09 compute-0 sudo[407957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:36:09 compute-0 sudo[407957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
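
Here cephadm consumes three pre-created logical volumes as bluestore OSDs: lvm batch --no-auto takes the listed LVs verbatim, and --no-systemd skips unit generation because cephadm itself manages the OSD containers. A hedged sketch of building an LV layout with the names this command expects (VG/LV names from the log; the backing devices are placeholders for whatever disks the host actually has):

    import subprocess

    # Placeholder backing devices; on this host the VGs already exist.
    layout = {"ceph_vg0": "/dev/vdb",
              "ceph_vg1": "/dev/vdc",
              "ceph_vg2": "/dev/vdd"}

    for i, (vg, dev) in enumerate(layout.items()):
        subprocess.run(["vgcreate", vg, dev], check=True)
        subprocess.run(["lvcreate", "-l", "100%FREE",
                        "-n", f"ceph_lv{i}", vg], check=True)
    # ceph-volume then receives /dev/ceph_vg0/ceph_lv0 etc., exactly
    # as passed on the cephadm command line above.
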
Feb 25 13:36:09 compute-0 ceph-mon[76335]: pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:36:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.339582299 +0000 UTC m=+0.038815796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.495767157 +0000 UTC m=+0.195000654 container create 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:36:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:09 compute-0 systemd[1]: Started libpod-conmon-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope.
Feb 25 13:36:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.587557478 +0000 UTC m=+0.286791005 container init 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.596457599 +0000 UTC m=+0.295691086 container start 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.600149033 +0000 UTC m=+0.299382590 container attach 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:36:09 compute-0 great_dijkstra[408010]: 167 167
Feb 25 13:36:09 compute-0 systemd[1]: libpod-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope: Deactivated successfully.
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.604833975 +0000 UTC m=+0.304067472 container died 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e59f8605a1413455443d0f17ed8a3e7eab879dab1d4166735a6b3ec2fad7a7f-merged.mount: Deactivated successfully.
Feb 25 13:36:09 compute-0 podman[407994]: 2026-02-25 13:36:09.642856369 +0000 UTC m=+0.342089866 container remove 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:36:09 compute-0 systemd[1]: libpod-conmon-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope: Deactivated successfully.
Feb 25 13:36:09 compute-0 podman[408033]: 2026-02-25 13:36:09.84349468 +0000 UTC m=+0.060854278 container create c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:36:09 compute-0 systemd[1]: Started libpod-conmon-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope.
Feb 25 13:36:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:09 compute-0 podman[408033]: 2026-02-25 13:36:09.812581648 +0000 UTC m=+0.029941266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:09 compute-0 podman[408033]: 2026-02-25 13:36:09.941256569 +0000 UTC m=+0.158616177 container init c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:36:09 compute-0 podman[408033]: 2026-02-25 13:36:09.948238886 +0000 UTC m=+0.165598514 container start c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:36:09 compute-0 podman[408033]: 2026-02-25 13:36:09.95225513 +0000 UTC m=+0.169614818 container attach c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:36:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:10 compute-0 nova_compute[244014]: 2026-02-25 13:36:10.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:10 compute-0 recursing_panini[408050]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:36:10 compute-0 recursing_panini[408050]: --> All data devices are unavailable
Feb 25 13:36:10 compute-0 systemd[1]: libpod-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope: Deactivated successfully.
Feb 25 13:36:10 compute-0 conmon[408050]: conmon c1f0d52b02bed71abee2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope/container/memory.events
Feb 25 13:36:10 compute-0 podman[408033]: 2026-02-25 13:36:10.468327485 +0000 UTC m=+0.685687113 container died c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a-merged.mount: Deactivated successfully.
Feb 25 13:36:10 compute-0 podman[408033]: 2026-02-25 13:36:10.52203778 +0000 UTC m=+0.739397368 container remove c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:36:10 compute-0 systemd[1]: libpod-conmon-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope: Deactivated successfully.
Feb 25 13:36:10 compute-0 sudo[407957]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:10 compute-0 sudo[408083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:36:10 compute-0 sudo[408083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:10 compute-0 sudo[408083]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:10 compute-0 sudo[408108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:36:10 compute-0 sudo[408108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:10 compute-0 podman[408145]: 2026-02-25 13:36:10.988071363 +0000 UTC m=+0.056392342 container create 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:36:11 compute-0 systemd[1]: Started libpod-conmon-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope.
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:10.962931454 +0000 UTC m=+0.031252493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:11.079582956 +0000 UTC m=+0.147903925 container init 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:11.086134681 +0000 UTC m=+0.154455640 container start 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:11.090741791 +0000 UTC m=+0.159062770 container attach 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:36:11 compute-0 systemd[1]: libpod-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope: Deactivated successfully.
Feb 25 13:36:11 compute-0 adoring_hopper[408161]: 167 167
Feb 25 13:36:11 compute-0 conmon[408161]: conmon 8198afdb452a7f67490e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope/container/memory.events
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:11.093403506 +0000 UTC m=+0.161724455 container died 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd3514c1982c177d0afb88e57c33d3c989a319da9e154dca00ae7cb4cce4f69-merged.mount: Deactivated successfully.
Feb 25 13:36:11 compute-0 podman[408145]: 2026-02-25 13:36:11.135415072 +0000 UTC m=+0.203736051 container remove 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:36:11 compute-0 systemd[1]: libpod-conmon-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope: Deactivated successfully.
Feb 25 13:36:11 compute-0 ceph-mon[76335]: pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.266429669 +0000 UTC m=+0.037697095 container create d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:36:11 compute-0 systemd[1]: Started libpod-conmon-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope.
Feb 25 13:36:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.342849206 +0000 UTC m=+0.114116662 container init d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.252976299 +0000 UTC m=+0.024243725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.348763743 +0000 UTC m=+0.120031159 container start d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.358407905 +0000 UTC m=+0.129675381 container attach d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]: {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     "0": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "devices": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "/dev/loop3"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             ],
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_name": "ceph_lv0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_size": "21470642176",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "name": "ceph_lv0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "tags": {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_name": "ceph",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.crush_device_class": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.encrypted": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.objectstore": "bluestore",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_id": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.vdo": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.with_tpm": "0"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             },
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "vg_name": "ceph_vg0"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         }
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     ],
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     "1": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "devices": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "/dev/loop4"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             ],
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_name": "ceph_lv1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_size": "21470642176",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "name": "ceph_lv1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "tags": {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_name": "ceph",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.crush_device_class": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.encrypted": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.objectstore": "bluestore",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_id": "1",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.vdo": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.with_tpm": "0"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             },
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "vg_name": "ceph_vg1"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         }
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     ],
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     "2": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "devices": [
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "/dev/loop5"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             ],
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_name": "ceph_lv2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_size": "21470642176",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "name": "ceph_lv2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "tags": {
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.cluster_name": "ceph",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.crush_device_class": "",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.encrypted": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.objectstore": "bluestore",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osd_id": "2",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.vdo": "0",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:                 "ceph.with_tpm": "0"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             },
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "type": "block",
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:             "vg_name": "ceph_vg2"
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:         }
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]:     ]
Feb 25 13:36:11 compute-0 intelligent_nightingale[408202]: }
Feb 25 13:36:11 compute-0 systemd[1]: libpod-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope: Deactivated successfully.
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.62931038 +0000 UTC m=+0.400577856 container died d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b-merged.mount: Deactivated successfully.
Feb 25 13:36:11 compute-0 podman[408185]: 2026-02-25 13:36:11.693848882 +0000 UTC m=+0.465116328 container remove d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:36:11 compute-0 systemd[1]: libpod-conmon-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope: Deactivated successfully.
Feb 25 13:36:11 compute-0 sudo[408108]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:11 compute-0 sudo[408225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:36:11 compute-0 sudo[408225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:11 compute-0 sudo[408225]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:11 compute-0 sudo[408250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:36:11 compute-0 sudo[408250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.262520731 +0000 UTC m=+0.054898780 container create d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:36:12 compute-0 systemd[1]: Started libpod-conmon-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope.
Feb 25 13:36:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.23837238 +0000 UTC m=+0.030750499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.352075439 +0000 UTC m=+0.144453488 container init d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.362374819 +0000 UTC m=+0.154752888 container start d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.366916108 +0000 UTC m=+0.159294177 container attach d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 25 13:36:12 compute-0 loving_sanderson[408304]: 167 167
Feb 25 13:36:12 compute-0 systemd[1]: libpod-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope: Deactivated successfully.
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.369332196 +0000 UTC m=+0.161710255 container died d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4c5d3dd577ccdfad935af8d26456f688b7565d0550c603666acce5a05c5404e-merged.mount: Deactivated successfully.
Feb 25 13:36:12 compute-0 podman[408287]: 2026-02-25 13:36:12.407964776 +0000 UTC m=+0.200342815 container remove d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:36:12 compute-0 systemd[1]: libpod-conmon-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope: Deactivated successfully.
Feb 25 13:36:12 compute-0 podman[408328]: 2026-02-25 13:36:12.609764161 +0000 UTC m=+0.065732476 container create 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:36:12 compute-0 systemd[1]: Started libpod-conmon-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope.
Feb 25 13:36:12 compute-0 podman[408328]: 2026-02-25 13:36:12.584269352 +0000 UTC m=+0.040237747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:36:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:36:12 compute-0 podman[408328]: 2026-02-25 13:36:12.718955703 +0000 UTC m=+0.174924108 container init 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:36:12 compute-0 podman[408328]: 2026-02-25 13:36:12.729419478 +0000 UTC m=+0.185387813 container start 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:36:12 compute-0 podman[408328]: 2026-02-25 13:36:12.733798562 +0000 UTC m=+0.189766907 container attach 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:36:12 compute-0 nova_compute[244014]: 2026-02-25 13:36:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:12 compute-0 nova_compute[244014]: 2026-02-25 13:36:12.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:36:12 compute-0 nova_compute[244014]: 2026-02-25 13:36:12.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:36:12 compute-0 nova_compute[244014]: 2026-02-25 13:36:12.940 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:36:13 compute-0 nova_compute[244014]: 2026-02-25 13:36:13.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:13 compute-0 ceph-mon[76335]: pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:13 compute-0 lvm[408420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:36:13 compute-0 lvm[408420]: VG ceph_vg0 finished
Feb 25 13:36:13 compute-0 lvm[408423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:36:13 compute-0 lvm[408423]: VG ceph_vg1 finished
Feb 25 13:36:13 compute-0 lvm[408425]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:36:13 compute-0 lvm[408425]: VG ceph_vg2 finished
Feb 25 13:36:13 compute-0 friendly_galileo[408344]: {}
Feb 25 13:36:13 compute-0 systemd[1]: libpod-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Deactivated successfully.
Feb 25 13:36:13 compute-0 podman[408328]: 2026-02-25 13:36:13.618945262 +0000 UTC m=+1.074913607 container died 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:36:13 compute-0 systemd[1]: libpod-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Consumed 1.176s CPU time.
Feb 25 13:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3-merged.mount: Deactivated successfully.
Feb 25 13:36:13 compute-0 podman[408328]: 2026-02-25 13:36:13.678266226 +0000 UTC m=+1.134234521 container remove 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:36:13 compute-0 systemd[1]: libpod-conmon-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Deactivated successfully.
Feb 25 13:36:13 compute-0 sudo[408250]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:36:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:36:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:13 compute-0 sudo[408442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:36:13 compute-0 sudo[408442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:36:13 compute-0 sudo[408442]: pam_unix(sudo:session): session closed for user root
Feb 25 13:36:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:36:14 compute-0 ceph-mon[76335]: pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:14 compute-0 nova_compute[244014]: 2026-02-25 13:36:14.937 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:15 compute-0 nova_compute[244014]: 2026-02-25 13:36:15.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:17 compute-0 ceph-mon[76335]: pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:17 compute-0 nova_compute[244014]: 2026-02-25 13:36:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:17 compute-0 nova_compute[244014]: 2026-02-25 13:36:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:36:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:18 compute-0 nova_compute[244014]: 2026-02-25 13:36:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:19 compute-0 ceph-mon[76335]: pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:19 compute-0 podman[408467]: 2026-02-25 13:36:19.723520407 +0000 UTC m=+0.059423948 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 13:36:19 compute-0 podman[408468]: 2026-02-25 13:36:19.756742635 +0000 UTC m=+0.091953866 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
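[annotation] The two podman container health_status events record the periodic healthchecks configured in config_data ('healthcheck': {'test': '/openstack/healthcheck'}) for ovn_metadata_agent and ovn_controller; health_status=healthy with health_failing_streak=0 means the probe is passing. A hedged sketch for polling the same status out-of-band with `podman inspect` (container names taken from the log; the exact JSON key under .State varies across podman versions, so both spellings are tried):

    # Hedged sketch: read a container's healthcheck state as reported by
    # the health_status events above. Uses only `podman inspect`, which
    # prints a JSON array; "Health" vs "Healthcheck" under .State differs
    # by podman version, so we try both.
    import json
    import subprocess

    def health_status(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    if __name__ == "__main__":
        for name in ("ovn_metadata_agent", "ovn_controller"):
            print(name, health_status(name))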
Feb 25 13:36:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:20 compute-0 nova_compute[244014]: 2026-02-25 13:36:20.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:21 compute-0 ceph-mon[76335]: pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:21 compute-0 nova_compute[244014]: 2026-02-25 13:36:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 13:36:23 compute-0 ceph-mon[76335]: pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 13:36:23 compute-0 nova_compute[244014]: 2026-02-25 13:36:23.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:24 compute-0 nova_compute[244014]: 2026-02-25 13:36:24.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:25 compute-0 ceph-mon[76335]: pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:25 compute-0 nova_compute[244014]: 2026-02-25 13:36:25.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:27 compute-0 ceph-mon[76335]: pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:28 compute-0 nova_compute[244014]: 2026-02-25 13:36:28.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:28 compute-0 nova_compute[244014]: 2026-02-25 13:36:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:29 compute-0 ceph-mon[76335]: pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:30 compute-0 nova_compute[244014]: 2026-02-25 13:36:30.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:31 compute-0 ceph-mon[76335]: pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:36:31
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'vms', 'cephfs.cephfs.data']
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
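[annotation] The balancer lines above are one optimizer pass in upmap mode: it scans the listed pools, caps data movement at max misplaced 0.05 (5% of objects), and prepares 0 of an allowed 10 upmap changes because all 305 PGs are already active+clean and evenly placed. A toy gate resembling that decision, assuming a simplified model (the 0.05 and 10 limits come from the log; the function itself is illustrative, not the mgr balancer's code):

    # Illustrative gate resembling the balancer pass above: stop adding
    # upmap changes once projected misplacement would exceed
    # max_misplaced, and cap the number of changes per pass.
    def plan_upmaps(candidate_moves, total_objects,
                    max_misplaced=0.05, max_changes=10):
        plan, misplaced = [], 0
        for objects_moved in candidate_moves:
            if len(plan) >= max_changes:
                break
            if (misplaced + objects_moved) / total_objects > max_misplaced:
                break
            plan.append(objects_moved)
            misplaced += objects_moved
        return plan

    # A balanced cluster offers no candidate moves, so the plan is empty
    # ("prepared 0/10 upmap changes").
    print(plan_upmaps(candidate_moves=[], total_objects=10_000))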
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:36:31 compute-0 nova_compute[244014]: 2026-02-25 13:36:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:36:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:36:33 compute-0 ceph-mon[76335]: pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 13:36:33 compute-0 nova_compute[244014]: 2026-02-25 13:36:33.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 25 13:36:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:35 compute-0 ceph-mon[76335]: pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 25 13:36:35 compute-0 nova_compute[244014]: 2026-02-25 13:36:35.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:37 compute-0 ceph-mon[76335]: pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:38 compute-0 nova_compute[244014]: 2026-02-25 13:36:38.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:39 compute-0 ceph-mon[76335]: pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:40 compute-0 nova_compute[244014]: 2026-02-25 13:36:40.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:40 compute-0 nova_compute[244014]: 2026-02-25 13:36:40.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:36:41 compute-0 ceph-mon[76335]: pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:43 compute-0 ceph-mon[76335]: pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:43 compute-0 nova_compute[244014]: 2026-02-25 13:36:43.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
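[annotation] Every pg_autoscaler line above applies the same formula: pg target = usage ratio × bias × overall PG budget. The budget here works out to 300, consistent with the default mon_target_pg_per_osd of 100 across the 3 OSDs this host carries (an inference from the logged numbers, not a value read from config). For 'images': 0.0006714637 × 1.0 × 300 ≈ 0.2014, matching the logged target; for 'cephfs.cephfs.meta': 1.3916e-06 × 4.0 × 300 ≈ 0.00167. The result is then quantized to a power of two and left at the current pg_num unless it is off by roughly a factor of three, hence "quantized to 32 (current 32)". A small check of that arithmetic against the log:

    # Reproduce the pg_autoscaler arithmetic from the log. PG_BUDGET=300
    # is inferred: every logged "pg target" equals usage * bias * 300,
    # consistent with mon_target_pg_per_osd=100 over 3 OSDs (assumption).
    PG_BUDGET = 300
    pools = [
        # (name, usage_ratio, bias, logged_pg_target)
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("vms",                1.73878357684759e-05,   1.0, 0.005216350730542769),
        ("images",             0.0006714637386478266,  1.0, 0.20143912159434796),
        ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0, 0.0016699640237160273),
    ]
    for name, usage, bias, logged in pools:
        computed = usage * bias * PG_BUDGET
        assert abs(computed - logged) < 1e-12, name
        print(f"{name}: pg target {computed:.6g} (log: {logged:.6g})")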
Feb 25 13:36:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:45 compute-0 ceph-mon[76335]: pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:45 compute-0 nova_compute[244014]: 2026-02-25 13:36:45.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:47 compute-0 ceph-mon[76335]: pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:36:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:36:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:36:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:36:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:36:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
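[annotation] The handle_command/audit pairs above show client.openstack polling capacity: a df mon_command plus an osd pool get-quota on the volumes pool, the numbers an RBD-backed driver uses for free-space reporting. A hedged sketch of the equivalent CLI queries (only the documented `ceph df` and `ceph osd pool get-quota` commands; the --id/--conf values are the ones nova uses later in this log):

    # Hedged sketch: issue the same two queries the audit log records,
    # via the ceph CLI with JSON output.
    import json
    import subprocess

    def ceph(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = ceph("df")
    quota = ceph("osd", "pool", "get-quota", "volumes")
    # total_avail_bytes sits under "stats" in `ceph df` JSON output.
    print("cluster avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes quota:", quota)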
Feb 25 13:36:48 compute-0 nova_compute[244014]: 2026-02-25 13:36:48.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:49 compute-0 ceph-mon[76335]: pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:50 compute-0 nova_compute[244014]: 2026-02-25 13:36:50.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:50 compute-0 podman[408513]: 2026-02-25 13:36:50.8036442 +0000 UTC m=+0.104378546 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:36:50 compute-0 podman[408514]: 2026-02-25 13:36:50.819412615 +0000 UTC m=+0.111858667 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:36:51 compute-0 ceph-mon[76335]: pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:53 compute-0 ceph-mon[76335]: pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:53 compute-0 nova_compute[244014]: 2026-02-25 13:36:53.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.079 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.080 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:36:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.080 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:36:55 compute-0 ceph-mon[76335]: pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:55 compute-0 nova_compute[244014]: 2026-02-25 13:36:55.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:57 compute-0 ceph-mon[76335]: pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:58 compute-0 nova_compute[244014]: 2026-02-25 13:36:58.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:36:59 compute-0 ceph-mon[76335]: pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:36:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:36:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:00 compute-0 nova_compute[244014]: 2026-02-25 13:37:00.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:01 compute-0 ceph-mon[76335]: pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:03 compute-0 nova_compute[244014]: 2026-02-25 13:37:03.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:03 compute-0 ceph-mon[76335]: pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:05 compute-0 ceph-mon[76335]: pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.931 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:37:05 compute-0 nova_compute[244014]: 2026-02-25 13:37:05.933 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:37:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:37:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181088331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.483 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.691 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
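[annotation] This warning fires when the hypervisor reports more than one CPU socket within a single NUMA node, a topology common on KVM guests like this one; nova then cannot map PCI devices to a socket, so the `socket` PCI NUMA affinity policy is disabled. An illustrative version of that detection, assuming a hypothetical topology (not nova's actual implementation):

    # Illustrative check behind the warning above: if any NUMA node
    # spans more than one CPU socket, per-socket PCI affinity cannot be
    # computed. The topology values here are hypothetical.
    from collections import defaultdict

    # (numa_node, socket_id) per CPU, as a hypervisor might report them.
    cpus = [(0, 0), (0, 0), (0, 1), (0, 1)]   # one node, two sockets

    sockets_per_node = defaultdict(set)
    for node, socket_id in cpus:
        sockets_per_node[node].add(socket_id)

    if any(len(s) > 1 for s in sockets_per_node.values()):
        print("multiple sockets per NUMA node; `socket` PCI NUMA "
              "affinity will not be supported")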
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.693 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.693 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.694 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.783 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.784 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:37:06 compute-0 nova_compute[244014]: 2026-02-25 13:37:06.809 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:37:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:37:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524635839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:37:07 compute-0 nova_compute[244014]: 2026-02-25 13:37:07.353 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:37:07 compute-0 nova_compute[244014]: 2026-02-25 13:37:07.360 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:37:07 compute-0 nova_compute[244014]: 2026-02-25 13:37:07.376 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:37:07 compute-0 nova_compute[244014]: 2026-02-25 13:37:07.378 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:37:07 compute-0 nova_compute[244014]: 2026-02-25 13:37:07.378 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
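[annotation] The resource-tracker pass above ends by confirming the placement inventory is unchanged. The schedulable capacity implied by that inventory follows placement's formula, capacity = (total - reserved) × allocation_ratio per resource class: 8 × 4.0 = 32 schedulable VCPU, (7679 - 512) × 1.0 = 7167 MB of RAM, and (59 - 1) × 0.9 = 52.2 GB of disk. The arithmetic as a quick check, with the inventory copied from the log line above:

    # Schedulable capacity implied by the logged placement inventory:
    # capacity = (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap} schedulable")   # 32.0, 7167.0, 52.2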
Feb 25 13:37:07 compute-0 ceph-mon[76335]: pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4181088331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:37:07 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2524635839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:37:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:08 compute-0 nova_compute[244014]: 2026-02-25 13:37:08.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:09 compute-0 ceph-mon[76335]: pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:10 compute-0 nova_compute[244014]: 2026-02-25 13:37:10.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:11 compute-0 ceph-mon[76335]: pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:13 compute-0 nova_compute[244014]: 2026-02-25 13:37:13.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:13 compute-0 ceph-mon[76335]: pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:13 compute-0 sudo[408600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:37:13 compute-0 sudo[408600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:13 compute-0 sudo[408600]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:13 compute-0 sudo[408625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:37:13 compute-0 sudo[408625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:14 compute-0 sudo[408625]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:37:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:37:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:14 compute-0 sudo[408681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:37:14 compute-0 sudo[408681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:14 compute-0 sudo[408681]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:14 compute-0 sudo[408706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:37:14 compute-0 sudo[408706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
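[annotation] The sudo line above shows cephadm's OSD deployment path: the mgr copies the cephadm binary under /var/lib/ceph/<fsid>/ and runs it as root, wrapping a ceph-volume `lvm batch` over three pre-created LVs with bluestore; --no-systemd is passed because cephadm manages the container units itself. A hedged reconstruction of the inner ceph-volume call (flags and LV paths copied from the log line; cephadm's wrapper adds the --fsid/--config-json plumbing and runs this inside the ceph container, so treat the standalone form as illustrative):

    # Reconstruction of the inner ceph-volume invocation issued above.
    # Argv copied from the sudo log line; run only on a Ceph host or
    # inside the ceph container.
    import subprocess

    argv = [
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--objectstore", "bluestore", "--yes", "--no-systemd",
    ]
    print(" ".join(argv))               # inspect before running
    # subprocess.run(argv, check=True)  # uncomment to actually run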
Feb 25 13:37:14 compute-0 podman[408743]: 2026-02-25 13:37:14.969968799 +0000 UTC m=+0.074454742 container create 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:37:15 compute-0 systemd[1]: Started libpod-conmon-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope.
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:14.923515888 +0000 UTC m=+0.028001651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:15.070890227 +0000 UTC m=+0.175375930 container init 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:15.079675555 +0000 UTC m=+0.184161238 container start 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:15.083174284 +0000 UTC m=+0.187659967 container attach 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:37:15 compute-0 sharp_brattain[408760]: 167 167
Feb 25 13:37:15 compute-0 systemd[1]: libpod-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope: Deactivated successfully.
Feb 25 13:37:15 compute-0 conmon[408760]: conmon 1c7026e6d2e14d982e1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope/container/memory.events
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:15.088793253 +0000 UTC m=+0.193278966 container died 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f8fa95b7333d04f1ffdac96f596ebdb2f0000fed1964d91a151968ba3c7dd0-merged.mount: Deactivated successfully.
Feb 25 13:37:15 compute-0 podman[408743]: 2026-02-25 13:37:15.130096328 +0000 UTC m=+0.234582021 container remove 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:37:15 compute-0 systemd[1]: libpod-conmon-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope: Deactivated successfully.
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.286813591 +0000 UTC m=+0.046532204 container create f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:37:15 compute-0 systemd[1]: Started libpod-conmon-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope.
Feb 25 13:37:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.26834746 +0000 UTC m=+0.028066083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.396901488 +0000 UTC m=+0.156620121 container init f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.404907454 +0000 UTC m=+0.164626107 container start f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.409119943 +0000 UTC m=+0.168838596 container attach f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:15 compute-0 ceph-mon[76335]: pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:37:15 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:37:15 compute-0 nova_compute[244014]: 2026-02-25 13:37:15.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:15 compute-0 nice_buck[408801]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:37:15 compute-0 nice_buck[408801]: --> All data devices are unavailable
Feb 25 13:37:15 compute-0 systemd[1]: libpod-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope: Deactivated successfully.
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.9036454 +0000 UTC m=+0.663364053 container died f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa-merged.mount: Deactivated successfully.
Feb 25 13:37:15 compute-0 podman[408784]: 2026-02-25 13:37:15.961790561 +0000 UTC m=+0.721509184 container remove f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:37:15 compute-0 systemd[1]: libpod-conmon-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope: Deactivated successfully.
Feb 25 13:37:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:16 compute-0 sudo[408706]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:16 compute-0 sudo[408835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:37:16 compute-0 sudo[408835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:16 compute-0 sudo[408835]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:16 compute-0 sudo[408860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:37:16 compute-0 sudo[408860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:16 compute-0 nova_compute[244014]: 2026-02-25 13:37:16.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:16 compute-0 nova_compute[244014]: 2026-02-25 13:37:16.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:16 compute-0 nova_compute[244014]: 2026-02-25 13:37:16.375 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:37:16 compute-0 nova_compute[244014]: 2026-02-25 13:37:16.375 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:37:16 compute-0 nova_compute[244014]: 2026-02-25 13:37:16.391 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.429573063 +0000 UTC m=+0.048355456 container create 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:37:16 compute-0 systemd[1]: Started libpod-conmon-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope.
Feb 25 13:37:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.415677261 +0000 UTC m=+0.034459674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.515830157 +0000 UTC m=+0.134612590 container init 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.52335647 +0000 UTC m=+0.142138903 container start 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:37:16 compute-0 kind_jang[408914]: 167 167
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.526645143 +0000 UTC m=+0.145427646 container attach 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:37:16 compute-0 systemd[1]: libpod-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope: Deactivated successfully.
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.527771034 +0000 UTC m=+0.146553517 container died 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-410358fdca1bdc3a11cacd0527710b4efc11b1b0e5d5cbe5dd88fbd61f1b3077-merged.mount: Deactivated successfully.
Feb 25 13:37:16 compute-0 podman[408897]: 2026-02-25 13:37:16.57012249 +0000 UTC m=+0.188904923 container remove 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:37:16 compute-0 systemd[1]: libpod-conmon-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope: Deactivated successfully.
Feb 25 13:37:16 compute-0 podman[408939]: 2026-02-25 13:37:16.774003404 +0000 UTC m=+0.060322994 container create 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:16 compute-0 systemd[1]: Started libpod-conmon-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope.
Feb 25 13:37:16 compute-0 podman[408939]: 2026-02-25 13:37:16.751876619 +0000 UTC m=+0.038196269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:16 compute-0 podman[408939]: 2026-02-25 13:37:16.881800476 +0000 UTC m=+0.168120086 container init 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:37:16 compute-0 podman[408939]: 2026-02-25 13:37:16.890383728 +0000 UTC m=+0.176703278 container start 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:37:16 compute-0 podman[408939]: 2026-02-25 13:37:16.893588569 +0000 UTC m=+0.179908129 container attach 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]: {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     "0": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "devices": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "/dev/loop3"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             ],
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_name": "ceph_lv0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_size": "21470642176",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "name": "ceph_lv0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "tags": {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_name": "ceph",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.crush_device_class": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.encrypted": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.objectstore": "bluestore",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_id": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.vdo": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.with_tpm": "0"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             },
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "vg_name": "ceph_vg0"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         }
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     ],
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     "1": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "devices": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "/dev/loop4"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             ],
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_name": "ceph_lv1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_size": "21470642176",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "name": "ceph_lv1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "tags": {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_name": "ceph",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.crush_device_class": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.encrypted": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.objectstore": "bluestore",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_id": "1",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.vdo": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.with_tpm": "0"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             },
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "vg_name": "ceph_vg1"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         }
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     ],
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     "2": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "devices": [
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "/dev/loop5"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             ],
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_name": "ceph_lv2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_size": "21470642176",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "name": "ceph_lv2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "tags": {
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.cluster_name": "ceph",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.crush_device_class": "",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.encrypted": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.objectstore": "bluestore",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osd_id": "2",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.vdo": "0",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:                 "ceph.with_tpm": "0"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             },
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "type": "block",
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:             "vg_name": "ceph_vg2"
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:         }
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]:     ]
Feb 25 13:37:17 compute-0 beautiful_goldstine[408956]: }
Feb 25 13:37:17 compute-0 systemd[1]: libpod-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope: Deactivated successfully.
Feb 25 13:37:17 compute-0 podman[408939]: 2026-02-25 13:37:17.203595198 +0000 UTC m=+0.489914798 container died 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 25 13:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c-merged.mount: Deactivated successfully.
Feb 25 13:37:17 compute-0 podman[408939]: 2026-02-25 13:37:17.253129576 +0000 UTC m=+0.539449176 container remove 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:37:17 compute-0 systemd[1]: libpod-conmon-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope: Deactivated successfully.
Feb 25 13:37:17 compute-0 sudo[408860]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:17 compute-0 sudo[408976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:37:17 compute-0 sudo[408976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:17 compute-0 sudo[408976]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:17 compute-0 sudo[409001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:37:17 compute-0 sudo[409001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:17 compute-0 ceph-mon[76335]: pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.732975127 +0000 UTC m=+0.050890517 container create c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:17 compute-0 systemd[1]: Started libpod-conmon-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope.
Feb 25 13:37:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.708558438 +0000 UTC m=+0.026473918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.806556504 +0000 UTC m=+0.124471954 container init c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.814963081 +0000 UTC m=+0.132878461 container start c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.818627875 +0000 UTC m=+0.136543295 container attach c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:37:17 compute-0 nervous_brattain[409054]: 167 167
Feb 25 13:37:17 compute-0 systemd[1]: libpod-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope: Deactivated successfully.
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.820878288 +0000 UTC m=+0.138793708 container died c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d292c59b9b86d003400440795c4278bcab64b611aed77c44835df3290f43647-merged.mount: Deactivated successfully.
Feb 25 13:37:17 compute-0 podman[409038]: 2026-02-25 13:37:17.864387796 +0000 UTC m=+0.182303396 container remove c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:37:17 compute-0 systemd[1]: libpod-conmon-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope: Deactivated successfully.
Feb 25 13:37:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.01795362 +0000 UTC m=+0.058804221 container create de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:37:18 compute-0 systemd[1]: Started libpod-conmon-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope.
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:17.99136029 +0000 UTC m=+0.032210941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:37:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.122191152 +0000 UTC m=+0.163041723 container init de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.1306153 +0000 UTC m=+0.171465881 container start de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.134666364 +0000 UTC m=+0.175516975 container attach de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:37:18 compute-0 nova_compute[244014]: 2026-02-25 13:37:18.327 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:18 compute-0 lvm[409172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:37:18 compute-0 lvm[409172]: VG ceph_vg0 finished
Feb 25 13:37:18 compute-0 lvm[409175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:37:18 compute-0 lvm[409175]: VG ceph_vg1 finished
Feb 25 13:37:18 compute-0 lvm[409177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:37:18 compute-0 lvm[409177]: VG ceph_vg2 finished
Feb 25 13:37:18 compute-0 adoring_buck[409096]: {}
Feb 25 13:37:18 compute-0 nova_compute[244014]: 2026-02-25 13:37:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:18 compute-0 nova_compute[244014]: 2026-02-25 13:37:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:37:18 compute-0 systemd[1]: libpod-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Deactivated successfully.
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.902808603 +0000 UTC m=+0.943659174 container died de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:37:18 compute-0 systemd[1]: libpod-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Consumed 1.070s CPU time.
Feb 25 13:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5-merged.mount: Deactivated successfully.
Feb 25 13:37:18 compute-0 podman[409079]: 2026-02-25 13:37:18.948244535 +0000 UTC m=+0.989095106 container remove de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:37:18 compute-0 systemd[1]: libpod-conmon-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Deactivated successfully.
Feb 25 13:37:18 compute-0 sudo[409001]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:37:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:37:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:19 compute-0 sudo[409193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:37:19 compute-0 sudo[409193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:37:19 compute-0 sudo[409193]: pam_unix(sudo:session): session closed for user root
Feb 25 13:37:19 compute-0 ceph-mon[76335]: pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:37:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:20 compute-0 nova_compute[244014]: 2026-02-25 13:37:20.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:21 compute-0 ceph-mon[76335]: pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:21 compute-0 podman[409218]: 2026-02-25 13:37:21.77280479 +0000 UTC m=+0.107358721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:37:21 compute-0 podman[409219]: 2026-02-25 13:37:21.792078084 +0000 UTC m=+0.127066937 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:37:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:22 compute-0 ceph-mon[76335]: pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:22 compute-0 nova_compute[244014]: 2026-02-25 13:37:22.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:23 compute-0 nova_compute[244014]: 2026-02-25 13:37:23.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:25 compute-0 ceph-mon[76335]: pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:25 compute-0 nova_compute[244014]: 2026-02-25 13:37:25.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:26 compute-0 nova_compute[244014]: 2026-02-25 13:37:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:27 compute-0 ceph-mon[76335]: pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:28 compute-0 nova_compute[244014]: 2026-02-25 13:37:28.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:28 compute-0 nova_compute[244014]: 2026-02-25 13:37:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:29 compute-0 ceph-mon[76335]: pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:30 compute-0 nova_compute[244014]: 2026-02-25 13:37:30.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:37:31
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'volumes']
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:37:31 compute-0 ceph-mon[76335]: pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:37:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:37:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:37:32 compute-0 nova_compute[244014]: 2026-02-25 13:37:32.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:37:33 compute-0 sshd-session[409265]: Received disconnect from 45.148.10.157 port 62818:11:  [preauth]
Feb 25 13:37:33 compute-0 sshd-session[409265]: Disconnected from authenticating user root 45.148.10.157 port 62818 [preauth]
Feb 25 13:37:33 compute-0 nova_compute[244014]: 2026-02-25 13:37:33.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:33 compute-0 ceph-mon[76335]: pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:35 compute-0 ceph-mon[76335]: pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:35 compute-0 nova_compute[244014]: 2026-02-25 13:37:35.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:37 compute-0 ceph-mon[76335]: pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:38 compute-0 nova_compute[244014]: 2026-02-25 13:37:38.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:39 compute-0 ceph-mon[76335]: pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:40 compute-0 nova_compute[244014]: 2026-02-25 13:37:40.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:41 compute-0 ceph-mon[76335]: pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:43 compute-0 nova_compute[244014]: 2026-02-25 13:37:43.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:37:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:37:43 compute-0 ceph-mon[76335]: pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:45 compute-0 ceph-mon[76335]: pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:45 compute-0 nova_compute[244014]: 2026-02-25 13:37:45.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:46 compute-0 ceph-mon[76335]: pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:37:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:37:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:37:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:37:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:37:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:37:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:48 compute-0 nova_compute[244014]: 2026-02-25 13:37:48.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:48 compute-0 ceph-mon[76335]: pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:50 compute-0 nova_compute[244014]: 2026-02-25 13:37:50.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:51 compute-0 ceph-mon[76335]: pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:52 compute-0 podman[409267]: 2026-02-25 13:37:52.726925157 +0000 UTC m=+0.064908034 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 13:37:52 compute-0 podman[409268]: 2026-02-25 13:37:52.746761717 +0000 UTC m=+0.085631250 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 25 13:37:53 compute-0 ceph-mon[76335]: pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:53 compute-0 nova_compute[244014]: 2026-02-25 13:37:53.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:37:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:37:55 compute-0 ceph-mon[76335]: pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:55 compute-0 nova_compute[244014]: 2026-02-25 13:37:55.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:57 compute-0 ceph-mon[76335]: pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:58 compute-0 nova_compute[244014]: 2026-02-25 13:37:58.433 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:37:59 compute-0 ceph-mon[76335]: pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:37:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:00 compute-0 nova_compute[244014]: 2026-02-25 13:38:00.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:01 compute-0 ceph-mon[76335]: pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:03 compute-0 ceph-mon[76335]: pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:03 compute-0 nova_compute[244014]: 2026-02-25 13:38:03.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:05 compute-0 ceph-mon[76335]: pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:05 compute-0 nova_compute[244014]: 2026-02-25 13:38:05.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:38:06 compute-0 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:38:07 compute-0 ceph-mon[76335]: pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:38:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070224935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.474 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.668 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.669 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3525MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.777 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.777 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:38:07 compute-0 nova_compute[244014]: 2026-02-25 13:38:07.818 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:38:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4070224935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:38:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:38:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162965353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.362 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.369 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.383 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.385 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.385 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:38:08 compute-0 nova_compute[244014]: 2026-02-25 13:38:08.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:09 compute-0 nova_compute[244014]: 2026-02-25 13:38:09.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:09 compute-0 ceph-mon[76335]: pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4162965353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:38:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:10 compute-0 ceph-mon[76335]: pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:10 compute-0 nova_compute[244014]: 2026-02-25 13:38:10.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:13 compute-0 ceph-mon[76335]: pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:13 compute-0 nova_compute[244014]: 2026-02-25 13:38:13.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:15 compute-0 ceph-mon[76335]: pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:15 compute-0 nova_compute[244014]: 2026-02-25 13:38:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:15 compute-0 nova_compute[244014]: 2026-02-25 13:38:15.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:38:15 compute-0 nova_compute[244014]: 2026-02-25 13:38:15.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:38:15 compute-0 nova_compute[244014]: 2026-02-25 13:38:15.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:38:15 compute-0 nova_compute[244014]: 2026-02-25 13:38:15.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:16 compute-0 nova_compute[244014]: 2026-02-25 13:38:16.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:17 compute-0 ceph-mon[76335]: pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:18 compute-0 nova_compute[244014]: 2026-02-25 13:38:18.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:18 compute-0 nova_compute[244014]: 2026-02-25 13:38:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:18 compute-0 nova_compute[244014]: 2026-02-25 13:38:18.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:38:19 compute-0 sudo[409358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:38:19 compute-0 sudo[409358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:19 compute-0 sudo[409358]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:19 compute-0 ceph-mon[76335]: pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:19 compute-0 sudo[409383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:38:19 compute-0 sudo[409383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:19 compute-0 sudo[409383]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:38:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:38:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:38:19 compute-0 sudo[409440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:38:19 compute-0 sudo[409440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:19 compute-0 sudo[409440]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:19 compute-0 sudo[409465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:38:19 compute-0 sudo[409465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:38:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.181073022 +0000 UTC m=+0.050653015 container create f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:38:20 compute-0 systemd[1]: Started libpod-conmon-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope.
Feb 25 13:38:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.244239885 +0000 UTC m=+0.113819899 container init f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.151791361 +0000 UTC m=+0.021371414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.248637032 +0000 UTC m=+0.118217015 container start f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:38:20 compute-0 youthful_gould[409519]: 167 167
Feb 25 13:38:20 compute-0 systemd[1]: libpod-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope: Deactivated successfully.
Feb 25 13:38:20 compute-0 conmon[409519]: conmon f1b8188469fc3f52afb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope/container/memory.events
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.259442032 +0000 UTC m=+0.129022025 container attach f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.259870004 +0000 UTC m=+0.129450007 container died f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b393cf27f121cf6bd9ecea87b63bed72dc59c710ce1088d93be5a135c576069d-merged.mount: Deactivated successfully.
Feb 25 13:38:20 compute-0 podman[409502]: 2026-02-25 13:38:20.315383888 +0000 UTC m=+0.184963911 container remove f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:38:20 compute-0 systemd[1]: libpod-conmon-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope: Deactivated successfully.
Feb 25 13:38:20 compute-0 podman[409543]: 2026-02-25 13:38:20.493913494 +0000 UTC m=+0.061479867 container create 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:38:20 compute-0 systemd[1]: Started libpod-conmon-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope.
Feb 25 13:38:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:20 compute-0 podman[409543]: 2026-02-25 13:38:20.46799975 +0000 UTC m=+0.035566183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:20 compute-0 podman[409543]: 2026-02-25 13:38:20.591105144 +0000 UTC m=+0.158671537 container init 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:38:20 compute-0 podman[409543]: 2026-02-25 13:38:20.601224474 +0000 UTC m=+0.168790857 container start 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:38:20 compute-0 podman[409543]: 2026-02-25 13:38:20.60491135 +0000 UTC m=+0.172477793 container attach 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:38:20 compute-0 nova_compute[244014]: 2026-02-25 13:38:20.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:20 compute-0 clever_ganguly[409560]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:38:20 compute-0 clever_ganguly[409560]: --> All data devices are unavailable
Feb 25 13:38:21 compute-0 systemd[1]: libpod-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope: Deactivated successfully.
Feb 25 13:38:21 compute-0 podman[409543]: 2026-02-25 13:38:21.004198834 +0000 UTC m=+0.571765217 container died 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102-merged.mount: Deactivated successfully.
Feb 25 13:38:21 compute-0 podman[409543]: 2026-02-25 13:38:21.063779314 +0000 UTC m=+0.631345667 container remove 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:21 compute-0 systemd[1]: libpod-conmon-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope: Deactivated successfully.
Feb 25 13:38:21 compute-0 sudo[409465]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:21 compute-0 sudo[409594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:38:21 compute-0 ceph-mon[76335]: pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:21 compute-0 sudo[409594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:21 compute-0 sudo[409594]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:21 compute-0 sudo[409619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:38:21 compute-0 sudo[409619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.510855759 +0000 UTC m=+0.037675902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.622561456 +0000 UTC m=+0.149381559 container create 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:38:21 compute-0 systemd[1]: Started libpod-conmon-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope.
Feb 25 13:38:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.70732595 +0000 UTC m=+0.234146033 container init 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.712960232 +0000 UTC m=+0.239780295 container start 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:38:21 compute-0 stoic_montalcini[409672]: 167 167
Feb 25 13:38:21 compute-0 systemd[1]: libpod-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope: Deactivated successfully.
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.721589439 +0000 UTC m=+0.248409532 container attach 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.722123055 +0000 UTC m=+0.248943118 container died 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-afc0b2b5f982e4585f7f933df1f82d26a43275af7a68bbf76530439bbbdc1369-merged.mount: Deactivated successfully.
Feb 25 13:38:21 compute-0 podman[409656]: 2026-02-25 13:38:21.774830898 +0000 UTC m=+0.301650961 container remove 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 13:38:21 compute-0 systemd[1]: libpod-conmon-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope: Deactivated successfully.
Feb 25 13:38:21 compute-0 podman[409695]: 2026-02-25 13:38:21.967543941 +0000 UTC m=+0.066274624 container create 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:22 compute-0 systemd[1]: Started libpod-conmon-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope.
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:21.938083215 +0000 UTC m=+0.036813948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:22.069168378 +0000 UTC m=+0.167899111 container init 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:22.078881577 +0000 UTC m=+0.177612230 container start 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:22.084014184 +0000 UTC m=+0.182744877 container attach 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]: {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     "0": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "devices": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "/dev/loop3"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             ],
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_name": "ceph_lv0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_size": "21470642176",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "name": "ceph_lv0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "tags": {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_name": "ceph",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.crush_device_class": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.encrypted": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.objectstore": "bluestore",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_id": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.vdo": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.with_tpm": "0"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             },
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "vg_name": "ceph_vg0"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         }
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     ],
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     "1": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "devices": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "/dev/loop4"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             ],
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_name": "ceph_lv1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_size": "21470642176",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "name": "ceph_lv1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "tags": {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_name": "ceph",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.crush_device_class": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.encrypted": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.objectstore": "bluestore",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_id": "1",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.vdo": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.with_tpm": "0"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             },
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "vg_name": "ceph_vg1"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         }
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     ],
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     "2": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "devices": [
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "/dev/loop5"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             ],
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_name": "ceph_lv2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_size": "21470642176",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "name": "ceph_lv2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "tags": {
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.cluster_name": "ceph",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.crush_device_class": "",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.encrypted": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.objectstore": "bluestore",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osd_id": "2",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.vdo": "0",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:                 "ceph.with_tpm": "0"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             },
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "type": "block",
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:             "vg_name": "ceph_vg2"
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:         }
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]:     ]
Feb 25 13:38:22 compute-0 focused_ishizaka[409711]: }
Feb 25 13:38:22 compute-0 systemd[1]: libpod-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope: Deactivated successfully.
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:22.407946543 +0000 UTC m=+0.506677266 container died 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb-merged.mount: Deactivated successfully.
Feb 25 13:38:22 compute-0 podman[409695]: 2026-02-25 13:38:22.468532933 +0000 UTC m=+0.567263596 container remove 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:38:22 compute-0 systemd[1]: libpod-conmon-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope: Deactivated successfully.
Feb 25 13:38:22 compute-0 sudo[409619]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:22 compute-0 sudo[409732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:38:22 compute-0 sudo[409732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:22 compute-0 sudo[409732]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:22 compute-0 sudo[409757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:38:22 compute-0 sudo[409757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:22 compute-0 podman[409794]: 2026-02-25 13:38:22.922135015 +0000 UTC m=+0.044670313 container create ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:38:22 compute-0 systemd[1]: Started libpod-conmon-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope.
Feb 25 13:38:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:22 compute-0 podman[409794]: 2026-02-25 13:38:22.902541773 +0000 UTC m=+0.025077101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:23 compute-0 podman[409794]: 2026-02-25 13:38:23.001586196 +0000 UTC m=+0.124121474 container init ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:23 compute-0 podman[409794]: 2026-02-25 13:38:23.01043464 +0000 UTC m=+0.132969928 container start ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:38:23 compute-0 gracious_einstein[409812]: 167 167
Feb 25 13:38:23 compute-0 systemd[1]: libpod-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope: Deactivated successfully.
Feb 25 13:38:23 compute-0 podman[409794]: 2026-02-25 13:38:23.015750263 +0000 UTC m=+0.138285561 container attach ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:38:23 compute-0 podman[409794]: 2026-02-25 13:38:23.016450613 +0000 UTC m=+0.138985881 container died ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1068e2d16d9b57d537d31a69a973aae86e14e6bb87f30f01b4332aa6a9a7b123-merged.mount: Deactivated successfully.
Feb 25 13:38:23 compute-0 podman[409794]: 2026-02-25 13:38:23.069682731 +0000 UTC m=+0.192218019 container remove ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:23 compute-0 podman[409808]: 2026-02-25 13:38:23.074843009 +0000 UTC m=+0.107565290 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 13:38:23 compute-0 systemd[1]: libpod-conmon-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope: Deactivated successfully.
Feb 25 13:38:23 compute-0 podman[409811]: 2026-02-25 13:38:23.107296251 +0000 UTC m=+0.139668921 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 13:38:23 compute-0 ceph-mon[76335]: pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:23 compute-0 podman[409877]: 2026-02-25 13:38:23.210327729 +0000 UTC m=+0.038889457 container create f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:38:23 compute-0 systemd[1]: Started libpod-conmon-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope.
Feb 25 13:38:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:38:23 compute-0 podman[409877]: 2026-02-25 13:38:23.192471367 +0000 UTC m=+0.021033115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:38:23 compute-0 podman[409877]: 2026-02-25 13:38:23.295859135 +0000 UTC m=+0.124420873 container init f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:38:23 compute-0 podman[409877]: 2026-02-25 13:38:23.301476296 +0000 UTC m=+0.130038034 container start f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:38:23 compute-0 podman[409877]: 2026-02-25 13:38:23.314062057 +0000 UTC m=+0.142623775 container attach f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:38:23 compute-0 nova_compute[244014]: 2026-02-25 13:38:23.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:23 compute-0 lvm[409969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:38:23 compute-0 lvm[409969]: VG ceph_vg0 finished
Feb 25 13:38:23 compute-0 lvm[409972]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:38:23 compute-0 lvm[409972]: VG ceph_vg1 finished
Feb 25 13:38:23 compute-0 lvm[409974]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:38:23 compute-0 lvm[409974]: VG ceph_vg2 finished
Feb 25 13:38:23 compute-0 stoic_borg[409893]: {}
Feb 25 13:38:24 compute-0 podman[409877]: 2026-02-25 13:38:24.019092238 +0000 UTC m=+0.847654186 container died f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:38:24 compute-0 systemd[1]: libpod-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Deactivated successfully.
Feb 25 13:38:24 compute-0 systemd[1]: libpod-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Consumed 1.041s CPU time.
Feb 25 13:38:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2-merged.mount: Deactivated successfully.
Feb 25 13:38:24 compute-0 podman[409877]: 2026-02-25 13:38:24.086995958 +0000 UTC m=+0.915557676 container remove f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:38:24 compute-0 systemd[1]: libpod-conmon-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Deactivated successfully.
Feb 25 13:38:24 compute-0 sudo[409757]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:38:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:38:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:24 compute-0 sudo[409990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:38:24 compute-0 sudo[409990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:38:24 compute-0 sudo[409990]: pam_unix(sudo:session): session closed for user root
Feb 25 13:38:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:24 compute-0 nova_compute[244014]: 2026-02-25 13:38:24.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:25 compute-0 ceph-mon[76335]: pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:38:25 compute-0 nova_compute[244014]: 2026-02-25 13:38:25.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:27 compute-0 ceph-mon[76335]: pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:27 compute-0 nova_compute[244014]: 2026-02-25 13:38:27.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:28 compute-0 nova_compute[244014]: 2026-02-25 13:38:28.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:29 compute-0 ceph-mon[76335]: pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:29 compute-0 nova_compute[244014]: 2026-02-25 13:38:29.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.011510) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710011585, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1953, "num_deletes": 250, "total_data_size": 3328374, "memory_usage": 3377576, "flush_reason": "Manual Compaction"}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710034443, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1898215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72537, "largest_seqno": 74489, "table_properties": {"data_size": 1891909, "index_size": 3251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16429, "raw_average_key_size": 20, "raw_value_size": 1877798, "raw_average_value_size": 2367, "num_data_blocks": 150, "num_entries": 793, "num_filter_entries": 793, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026495, "oldest_key_time": 1772026495, "file_creation_time": 1772026710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 22980 microseconds, and 7539 cpu microseconds.
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.034496) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1898215 bytes OK
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.034518) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105168) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105237) EVENT_LOG_v1 {"time_micros": 1772026710105223, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 3320141, prev total WAL file size 3320141, number of live WAL files 2.
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.106534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303033' seq:72057594037927935, type:22 .. '6D6772737461740033323534' seq:0, type:0; will stop at (end)
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1853KB)], [173(10MB)]
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710106609, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12656035, "oldest_snapshot_seqno": -1}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9255 keys, 10658676 bytes, temperature: kUnknown
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710329641, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10658676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10601959, "index_size": 32476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 240927, "raw_average_key_size": 26, "raw_value_size": 10441760, "raw_average_value_size": 1128, "num_data_blocks": 1261, "num_entries": 9255, "num_filter_entries": 9255, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.330060) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10658676 bytes
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.341968) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.7 rd, 47.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 9666, records dropped: 411 output_compression: NoCompression
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.341999) EVENT_LOG_v1 {"time_micros": 1772026710341985, "job": 108, "event": "compaction_finished", "compaction_time_micros": 223193, "compaction_time_cpu_micros": 19932, "output_level": 6, "num_output_files": 1, "total_output_size": 10658676, "num_input_records": 9666, "num_output_records": 9255, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710342303, "job": 108, "event": "table_file_deletion", "file_number": 175}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710343840, "job": 108, "event": "table_file_deletion", "file_number": 173}
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.106410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
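The rocksdb lines above record one complete flush-and-compact cycle on the mon's store.db: JOB 107 flushes the memtable to L0 table #175, JOB 108 merges it with L6 table #173 into #176, and the inputs plus the drained WAL are deleted. The amplification figures in the JOB 108 summary can be recomputed from the byte counts in the EVENT_LOG_v1 entries; a minimal check, with the constants copied from the lines above:

    # Recompute the amplification figures RocksDB logged for JOB 108, using
    # byte counts copied from the EVENT_LOG_v1 lines above.
    l0_input = 1_898_215        # table #175, the freshly flushed L0 file
    total_input = 12_656_035    # "input_data_size" from compaction_started
    output = 10_658_676         # "total_output_size" from compaction_finished

    write_amplify = output / l0_input                        # ~5.6
    read_write_amplify = (total_input + output) / l0_input   # ~12.3
    print(f"write-amplify={write_amplify:.1f}, "
          f"read-write-amplify={read_write_amplify:.1f}")

The same arithmetic explains the later cycles at 13:38:39 and 13:38:40: they flush far smaller memtables against the same ~10 MB L6 file, so their logged write-amplify climbs to 70.4 and then 218.9.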
Feb 25 13:38:30 compute-0 nova_compute[244014]: 2026-02-25 13:38:30.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:31 compute-0 ceph-mon[76335]: pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:38:31
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
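The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, consistent with all 305 PGs already being active+clean. The same result can be read back from the CLI; a sketch (the `ceph balancer status` subcommand is standard, but the exact JSON field names should be verified against the installed release):

    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format=json"],
                         capture_output=True, check=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("optimize_result"))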
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:38:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:38:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
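The rbd_support handlers above periodically reload mirror-snapshot and trash-purge schedules for each RBD pool; an empty start_after means a scan from the beginning of the schedule listing. What they loaded can be inspected per pool with the standard rbd schedule subcommands; a sketch, with the pool names taken from the load_schedules lines:

    import subprocess

    # List what MirrorSnapshotScheduleHandler / TrashPurgeScheduleHandler
    # just loaded, per pool (no output here: no schedules are configured).
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"])
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"])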
Feb 25 13:38:33 compute-0 ceph-mon[76335]: pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:33 compute-0 nova_compute[244014]: 2026-02-25 13:38:33.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:34 compute-0 nova_compute[244014]: 2026-02-25 13:38:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:35 compute-0 ceph-mon[76335]: pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:35 compute-0 nova_compute[244014]: 2026-02-25 13:38:35.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:37 compute-0 ceph-mon[76335]: pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:38 compute-0 nova_compute[244014]: 2026-02-25 13:38:38.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:39 compute-0 ceph-mon[76335]: pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.738328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719738397, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 329, "num_deletes": 254, "total_data_size": 153185, "memory_usage": 159480, "flush_reason": "Manual Compaction"}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719740888, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 152257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74490, "largest_seqno": 74818, "table_properties": {"data_size": 150134, "index_size": 286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5048, "raw_average_key_size": 17, "raw_value_size": 146043, "raw_average_value_size": 508, "num_data_blocks": 13, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026711, "oldest_key_time": 1772026711, "file_creation_time": 1772026719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 2581 microseconds, and 977 cpu microseconds.
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.740922) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 152257 bytes OK
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.740937) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742020) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742032) EVENT_LOG_v1 {"time_micros": 1772026719742028, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 150892, prev total WAL file size 150892, number of live WAL files 2.
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323637' seq:72057594037927935, type:22 .. '6C6F676D0033353138' seq:0, type:0; will stop at (end)
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(148KB)], [176(10MB)]
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719742432, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10810933, "oldest_snapshot_seqno": -1}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9027 keys, 10722315 bytes, temperature: kUnknown
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719810092, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10722315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10666330, "index_size": 32320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237117, "raw_average_key_size": 26, "raw_value_size": 10509285, "raw_average_value_size": 1164, "num_data_blocks": 1253, "num_entries": 9027, "num_filter_entries": 9027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.810510) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10722315 bytes
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.812314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.5 rd, 158.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(141.4) write-amplify(70.4) OK, records in: 9542, records dropped: 515 output_compression: NoCompression
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.812359) EVENT_LOG_v1 {"time_micros": 1772026719812339, "job": 110, "event": "compaction_finished", "compaction_time_micros": 67788, "compaction_time_cpu_micros": 19816, "output_level": 6, "num_output_files": 1, "total_output_size": 10722315, "num_input_records": 9542, "num_output_records": 9027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719812589, "job": 110, "event": "table_file_deletion", "file_number": 178}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719815269, "job": 110, "event": "table_file_deletion", "file_number": 176}
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:39 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:40 compute-0 ceph-mon[76335]: pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.759847) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720759894, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 41355, "memory_usage": 46536, "flush_reason": "Manual Compaction"}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720762684, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 41289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74819, "largest_seqno": 75088, "table_properties": {"data_size": 39405, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 35790, "raw_average_value_size": 135, "num_data_blocks": 5, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026720, "oldest_key_time": 1772026720, "file_creation_time": 1772026720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 2907 microseconds, and 903 cpu microseconds.
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.762758) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 41289 bytes OK
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.762777) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764448) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764474) EVENT_LOG_v1 {"time_micros": 1772026720764466, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764498) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 39286, prev total WAL file size 39286, number of live WAL files 2.
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.765093) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(40KB)], [179(10MB)]
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720765136, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10763604, "oldest_snapshot_seqno": -1}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 8782 keys, 9038677 bytes, temperature: kUnknown
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720830390, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 9038677, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8985917, "index_size": 29671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 232719, "raw_average_key_size": 26, "raw_value_size": 8834733, "raw_average_value_size": 1006, "num_data_blocks": 1132, "num_entries": 8782, "num_filter_entries": 8782, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.830813) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 9038677 bytes
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.832650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.7 rd, 138.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(479.6) write-amplify(218.9) OK, records in: 9291, records dropped: 509 output_compression: NoCompression
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.832750) EVENT_LOG_v1 {"time_micros": 1772026720832722, "job": 112, "event": "compaction_finished", "compaction_time_micros": 65349, "compaction_time_cpu_micros": 35833, "output_level": 6, "num_output_files": 1, "total_output_size": 9038677, "num_input_records": 9291, "num_output_records": 8782, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720833016, "job": 112, "event": "table_file_deletion", "file_number": 181}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720834846, "job": 112, "event": "table_file_deletion", "file_number": 179}
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.834962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:38:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
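Each manual-compaction range in the three cycles above ("Manual compaction from level-0 to level-6 from '...' .. '...'") is a hex-encoded store key, so the log itself shows which monitor table each cycle trimmed. Decoding the prefix before the NUL separator for jobs 108, 110 and 112:

    # The manual-compaction ranges above are hex-encoded store keys; the
    # prefix before the NUL byte names the mon table being compacted.
    for key in ("6D6772737461740033303033",   # JOB 108
                "6C6F676D0033323637",         # JOB 110
                "7061786F730037323739"):      # JOB 112
        prefix, _, seq = bytes.fromhex(key).partition(b"\x00")
        print(prefix.decode("ascii"), seq.decode("ascii"))
    # -> mgrstat 3003, logm 3267, paxos 7279

mgrstat, logm and paxos are rolling monitor tables, which is consistent with the mon compacting key ranges it has just trimmed; that is what drives this steady flush/compact churn on an otherwise idle cluster.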
Feb 25 13:38:40 compute-0 nova_compute[244014]: 2026-02-25 13:38:40.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:43 compute-0 ceph-mon[76335]: pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:43 compute-0 nova_compute[244014]: 2026-02-25 13:38:43.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:38:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
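Every pg_autoscaler line above follows the same arithmetic: the pool's share of raw capacity, times its bias, times an overall PG budget, then quantized (to 1 or a pool minimum here, since all the targets are fractional). From these numbers the budget works out to exactly 300, which would correspond to mon_target_pg_per_osd=100 on a 3-OSD cluster; that OSD count is an assumption, it is not stated in these lines. A quick check against two of the logged pools:

    # Check the autoscaler's arithmetic: pg_target = usage_ratio * bias * budget.
    # A budget of 300 reproduces every line above (assumed to be
    # mon_target_pg_per_osd=100 across 3 OSDs; not logged here).
    PG_BUDGET = 300
    pools = {  # name: (usage_ratio, bias, logged pg target)
        "images":             (0.0006714637386478266, 1.0, 0.20143912159434796),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0, 0.0016699640237160273),
    }
    for name, (ratio, bias, logged) in pools.items():
        computed = ratio * bias * PG_BUDGET
        print(f"{name}: computed {computed:.6g} (logged {logged:.6g})")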
Feb 25 13:38:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:45 compute-0 ceph-mon[76335]: pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:45 compute-0 nova_compute[244014]: 2026-02-25 13:38:45.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:45 compute-0 nova_compute[244014]: 2026-02-25 13:38:45.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:46 compute-0 nova_compute[244014]: 2026-02-25 13:38:46.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:38:47 compute-0 ceph-mon[76335]: pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:38:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:38:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:38:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:38:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:38:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:38:48 compute-0 nova_compute[244014]: 2026-02-25 13:38:48.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:49 compute-0 ceph-mon[76335]: pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:50 compute-0 nova_compute[244014]: 2026-02-25 13:38:50.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:51 compute-0 ceph-mon[76335]: pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:53 compute-0 ceph-mon[76335]: pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:53 compute-0 nova_compute[244014]: 2026-02-25 13:38:53.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:53 compute-0 podman[410015]: 2026-02-25 13:38:53.747268007 +0000 UTC m=+0.074059397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:38:53 compute-0 podman[410016]: 2026-02-25 13:38:53.782996923 +0000 UTC m=+0.109555226 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
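The two podman entries above are timer-driven health-check events for the ovn_metadata_agent and ovn_controller containers; per the embedded config_data, the probe is the /openstack/healthcheck script mounted read-only into each container. The same probe can be run on demand; exit status 0 corresponds to the health_status=healthy recorded above:

    import subprocess

    # Run the configured probe (/openstack/healthcheck inside the container)
    # on demand; exit status 0 matches the health_status=healthy events above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")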
Feb 25 13:38:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:38:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:38:55 compute-0 ceph-mon[76335]: pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:55 compute-0 nova_compute[244014]: 2026-02-25 13:38:55.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:57 compute-0 ceph-mon[76335]: pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:58 compute-0 nova_compute[244014]: 2026-02-25 13:38:58.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:38:59 compute-0 ceph-mon[76335]: pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:38:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:00 compute-0 nova_compute[244014]: 2026-02-25 13:39:00.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:01 compute-0 ceph-mon[76335]: pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:01 compute-0 nova_compute[244014]: 2026-02-25 13:39:01.892 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:01 compute-0 nova_compute[244014]: 2026-02-25 13:39:01.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:39:01 compute-0 nova_compute[244014]: 2026-02-25 13:39:01.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:39:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:03 compute-0 ceph-mon[76335]: pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:03 compute-0 nova_compute[244014]: 2026-02-25 13:39:03.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:03 compute-0 nova_compute[244014]: 2026-02-25 13:39:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:03 compute-0 nova_compute[244014]: 2026-02-25 13:39:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:39:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:05 compute-0 ceph-mon[76335]: pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.935 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:39:06 compute-0 nova_compute[244014]: 2026-02-25 13:39:06.936 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:39:07 compute-0 ceph-mon[76335]: pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:39:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1447459465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.557 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
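The resource-tracker audit above shells out to ceph df (a 0.621 s round trip) to size the RBD-backed instance storage, and the matching audit dispatch appears on the mon at 13:39:07 from entity client.openstack. The probe can be reproduced with the exact flags from the logged command; a sketch, noting that the JSON field names below come from current ceph releases and are worth double-checking against the installed version:

    import json
    import subprocess

    # Same command, same flags as in the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 2**30:.2f} GiB "
          f"of {stats['total_bytes'] / 2**30:.2f} GiB")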
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.766 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.768 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.769 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.838 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.839 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:39:07 compute-0 nova_compute[244014]: 2026-02-25 13:39:07.859 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:39:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:08 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1447459465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:39:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:39:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641077900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.447 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.453 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.470 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
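
The inventory payload above is what the placement service schedules against; usable capacity per resource class follows from total, reserved, and allocation_ratio. A quick check of the logged numbers, assuming placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio:

    # Numbers copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
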
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.472 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.472 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:39:08 compute-0 nova_compute[244014]: 2026-02-25 13:39:08.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:09 compute-0 ceph-mon[76335]: pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/641077900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:39:09 compute-0 nova_compute[244014]: 2026-02-25 13:39:09.457 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:09 compute-0 nova_compute[244014]: 2026-02-25 13:39:09.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:11 compute-0 nova_compute[244014]: 2026-02-25 13:39:11.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:11 compute-0 ceph-mon[76335]: pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:39:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 75K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1326 writes, 6497 keys, 1326 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
                                           Interval WAL: 1326 writes, 1326 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.2      1.74              0.25        56    0.031       0      0       0.0       0.0
                                             L6      1/0    8.62 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4    101.6     87.0      5.60              1.41        55    0.102    386K    29K       0.0       0.0
                                            Sum      1/0    8.62 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     77.6     78.7      7.34              1.66       111    0.066    386K    29K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  10.1     87.7     85.6      0.88              0.21        14    0.063     65K   3413       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    101.6     87.0      5.60              1.41        55    0.102    386K    29K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.3      1.73              0.25        55    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.089, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.56 GB write, 0.09 MB/s write, 0.56 GB read, 0.09 MB/s read, 7.3 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 64.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.001126 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4046,62.28 MB,20.4875%) FilterBlock(112,1.05 MB,0.34394%) IndexBlock(112,1.66 MB,0.545015%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
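
The interval figures in the RocksDB DB Stats dump above are internally consistent; for example, the interval write rate is just the interval ingest divided by the 600-second stats window:

    # From the "Interval writes" line above: 8.73 MB over 600 s.
    ingest_mb, interval_s = 8.73, 600.0
    print(round(ingest_mb / interval_s, 2))   # 0.01 (MB/s), as logged
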
Feb 25 13:39:13 compute-0 ceph-mon[76335]: pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:13 compute-0 nova_compute[244014]: 2026-02-25 13:39:13.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:15 compute-0 ceph-mon[76335]: pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:39:16 compute-0 nova_compute[244014]: 2026-02-25 13:39:16.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:39:17 compute-0 ceph-mon[76335]: pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:18 compute-0 nova_compute[244014]: 2026-02-25 13:39:18.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:18 compute-0 nova_compute[244014]: 2026-02-25 13:39:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:18 compute-0 nova_compute[244014]: 2026-02-25 13:39:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
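
The "skipping..." line above is nova's guard for deferred deletes: _reclaim_queued_deletes only does work when reclaim_instance_interval is set to a positive number of seconds, and the default of 0 leaves soft-deleted instances alone. A simplified sketch of that guard (the real method lives in nova.compute.manager.ComputeManager):

    reclaim_instance_interval = 0   # nova.conf default

    def _reclaim_queued_deletes(interval):
        if interval <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        # With a positive interval, soft-deleted instances older than
        # `interval` seconds would be reclaimed here.

    _reclaim_queued_deletes(reclaim_instance_interval)
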
Feb 25 13:39:19 compute-0 ceph-mon[76335]: pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:21 compute-0 nova_compute[244014]: 2026-02-25 13:39:21.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:21 compute-0 ceph-mon[76335]: pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:23 compute-0 ceph-mon[76335]: pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:23 compute-0 nova_compute[244014]: 2026-02-25 13:39:23.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:24 compute-0 sudo[410106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:39:24 compute-0 sudo[410106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:24 compute-0 sudo[410106]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:24 compute-0 sudo[410138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:39:24 compute-0 sudo[410138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:24 compute-0 podman[410130]: 2026-02-25 13:39:24.394881134 +0000 UTC m=+0.068647202 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:39:24 compute-0 podman[410131]: 2026-02-25 13:39:24.443304134 +0000 UTC m=+0.112215613 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
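
The two podman events above are periodic healthcheck results: podman runs each container's configured test (/openstack/healthcheck, bind-mounted from the host per the config_data volumes) and records health_status=healthy with a zero failing streak. One way to read the same state by hand, assuming standard podman inspect templates and the container names from the log:

    import json, subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        out = subprocess.run(
            ['podman', 'inspect', '--format',
             '{{json .State.Health.Status}}', name],
            capture_output=True, text=True, check=True).stdout
        print(name, json.loads(out))   # expect: healthy
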
Feb 25 13:39:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:24 compute-0 sudo[410138]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:39:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:39:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:39:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:39:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:39:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:39:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:39:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:39:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:39:25 compute-0 sudo[410231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:39:25 compute-0 sudo[410231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:25 compute-0 sudo[410231]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:25 compute-0 sudo[410256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:39:25 compute-0 sudo[410256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.45643661 +0000 UTC m=+0.066261553 container create e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:39:25 compute-0 ceph-mon[76335]: pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:39:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.414250849 +0000 UTC m=+0.024075852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:25 compute-0 systemd[1]: Started libpod-conmon-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope.
Feb 25 13:39:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.586468253 +0000 UTC m=+0.196293156 container init e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.597085208 +0000 UTC m=+0.206910151 container start e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.605207931 +0000 UTC m=+0.215032884 container attach e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:39:25 compute-0 distracted_montalcini[410309]: 167 167
Feb 25 13:39:25 compute-0 systemd[1]: libpod-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope: Deactivated successfully.
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.607673892 +0000 UTC m=+0.217498825 container died e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a18f116b686561bc2be2f66d280c5bca9dcbb5fc12186fec5bf86abb19c9fdff-merged.mount: Deactivated successfully.
Feb 25 13:39:25 compute-0 podman[410293]: 2026-02-25 13:39:25.669562019 +0000 UTC m=+0.279386932 container remove e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:39:25 compute-0 systemd[1]: libpod-conmon-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope: Deactivated successfully.
Feb 25 13:39:25 compute-0 podman[410335]: 2026-02-25 13:39:25.908079747 +0000 UTC m=+0.089986275 container create 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 13:39:25 compute-0 podman[410335]: 2026-02-25 13:39:25.857377451 +0000 UTC m=+0.039284029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:25 compute-0 systemd[1]: Started libpod-conmon-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope.
Feb 25 13:39:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:26 compute-0 podman[410335]: 2026-02-25 13:39:26.00991448 +0000 UTC m=+0.191821058 container init 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:39:26 compute-0 podman[410335]: 2026-02-25 13:39:26.016479619 +0000 UTC m=+0.198386107 container start 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:39:26 compute-0 podman[410335]: 2026-02-25 13:39:26.020807323 +0000 UTC m=+0.202713811 container attach 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:39:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:26 compute-0 nova_compute[244014]: 2026-02-25 13:39:26.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:26 compute-0 focused_roentgen[410352]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:39:26 compute-0 focused_roentgen[410352]: --> All data devices are unavailable
Feb 25 13:39:26 compute-0 systemd[1]: libpod-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope: Deactivated successfully.
Feb 25 13:39:26 compute-0 podman[410335]: 2026-02-25 13:39:26.493658628 +0000 UTC m=+0.675565196 container died 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b-merged.mount: Deactivated successfully.
Feb 25 13:39:26 compute-0 podman[410335]: 2026-02-25 13:39:26.549861811 +0000 UTC m=+0.731768309 container remove 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:39:26 compute-0 systemd[1]: libpod-conmon-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope: Deactivated successfully.
Feb 25 13:39:26 compute-0 sudo[410256]: pam_unix(sudo:session): session closed for user root
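
The focused_roentgen container above was cephadm's ceph-volume "lvm batch" pass over the three pre-created logical volumes; "All data devices are unavailable" means ceph-volume filtered every candidate out, most likely because the LVs already carry OSD tags (the lvm list output further below shows osd.0 already prepared on /dev/ceph_vg0/ceph_lv0). Stripped of the cephadm and container wrapping, the core invocation was roughly:

    import subprocess

    # Flags copied from the sudo command line above. Against
    # already-prepared LVs this is a no-op, matching the
    # "All data devices are unavailable" output.
    subprocess.run(
        ['ceph-volume', 'lvm', 'batch', '--no-auto',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2',
         '--objectstore', 'bluestore', '--yes', '--no-systemd'],
        check=False)
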
Feb 25 13:39:26 compute-0 sudo[410383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:39:26 compute-0 sudo[410383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:26 compute-0 sudo[410383]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:26 compute-0 sudo[410408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:39:26 compute-0 sudo[410408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
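
The sudo line above is cephadm inventorying existing OSDs with "ceph-volume lvm list --format json"; the JSON it returns is printed by the flamboyant_elgamal container further below. A sketch of how that report maps OSD ids to logical volumes, using a trimmed fragment copied from that output:

    import json

    # Fragment of the lvm-list JSON printed below (osd "0" only,
    # tags trimmed to the two of interest).
    sample = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                        "devices": ["/dev/loop3"],
                        "tags": {"ceph.osd_id": "0",
                                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}}]}'''
    for osd_id, lvs in json.loads(sample).items():
        for lv in lvs:
            print('osd.%s' % osd_id, lv['lv_path'], lv['devices'][0])
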
Feb 25 13:39:26 compute-0 nova_compute[244014]: 2026-02-25 13:39:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.024106856 +0000 UTC m=+0.051921771 container create 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:39:27 compute-0 systemd[1]: Started libpod-conmon-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope.
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:26.997856973 +0000 UTC m=+0.025671978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.113571805 +0000 UTC m=+0.141386730 container init 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.123345475 +0000 UTC m=+0.151160380 container start 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:39:27 compute-0 sweet_goodall[410461]: 167 167
Feb 25 13:39:27 compute-0 systemd[1]: libpod-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope: Deactivated successfully.
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.132703394 +0000 UTC m=+0.160518329 container attach 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.133109716 +0000 UTC m=+0.160924631 container died 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f543c270a9df64899965e7b39b9027dcb5e51a5ab9fa1cb77bc919b7337b9a42-merged.mount: Deactivated successfully.
Feb 25 13:39:27 compute-0 podman[410445]: 2026-02-25 13:39:27.247150759 +0000 UTC m=+0.274965674 container remove 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:39:27 compute-0 systemd[1]: libpod-conmon-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope: Deactivated successfully.
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.380177218 +0000 UTC m=+0.040801292 container create 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:39:27 compute-0 systemd[1]: Started libpod-conmon-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope.
Feb 25 13:39:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.363757607 +0000 UTC m=+0.024381681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.465044794 +0000 UTC m=+0.125668898 container init 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.472233491 +0000 UTC m=+0.132857545 container start 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.475505835 +0000 UTC m=+0.136129909 container attach 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:39:27 compute-0 ceph-mon[76335]: pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]: {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     "0": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "devices": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "/dev/loop3"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             ],
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_name": "ceph_lv0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_size": "21470642176",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "name": "ceph_lv0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "tags": {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_name": "ceph",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.crush_device_class": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.encrypted": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.objectstore": "bluestore",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_id": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.vdo": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.with_tpm": "0"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             },
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "vg_name": "ceph_vg0"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         }
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     ],
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     "1": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "devices": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "/dev/loop4"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             ],
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_name": "ceph_lv1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_size": "21470642176",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "name": "ceph_lv1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "tags": {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_name": "ceph",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.crush_device_class": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.encrypted": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.objectstore": "bluestore",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_id": "1",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.vdo": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.with_tpm": "0"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             },
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "vg_name": "ceph_vg1"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         }
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     ],
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     "2": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "devices": [
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "/dev/loop5"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             ],
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_name": "ceph_lv2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_size": "21470642176",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "name": "ceph_lv2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "tags": {
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.cluster_name": "ceph",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.crush_device_class": "",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.encrypted": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.objectstore": "bluestore",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osd_id": "2",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.vdo": "0",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:                 "ceph.with_tpm": "0"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             },
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "type": "block",
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:             "vg_name": "ceph_vg2"
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:         }
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]:     ]
Feb 25 13:39:27 compute-0 flamboyant_elgamal[410505]: }
Feb 25 13:39:27 compute-0 systemd[1]: libpod-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope: Deactivated successfully.
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.742442748 +0000 UTC m=+0.403066842 container died 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68-merged.mount: Deactivated successfully.
Feb 25 13:39:27 compute-0 podman[410488]: 2026-02-25 13:39:27.784794034 +0000 UTC m=+0.445418078 container remove 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 13:39:27 compute-0 systemd[1]: libpod-conmon-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope: Deactivated successfully.
Feb 25 13:39:27 compute-0 sudo[410408]: pam_unix(sudo:session): session closed for user root
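[annotation] The JSON object emitted by flamboyant_elgamal above (whose cephadm sudo session just closed) has the shape of `ceph-volume lvm list --format json`: OSD ids 0-2 keyed to LVM logical volumes on /dev/loop3-5, all bluestore, all in the same cluster fsid. A minimal parsing sketch, assuming the logged block is captured in a string named payload (hypothetical variable):

    import json

    listing = json.loads(payload)          # payload = the JSON block above
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 d19afe3c-7923-4776-bcc2-88886150b441
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 a25b4fc6-1504-44d3-aca7-62c5ef316350
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 f84d59d3-cae3-44c8-8bca-9fa4643cfc60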
Feb 25 13:39:27 compute-0 sudo[410526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:39:27 compute-0 sudo[410526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:27 compute-0 sudo[410526]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:27 compute-0 nova_compute[244014]: 2026-02-25 13:39:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:27 compute-0 sudo[410551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:39:27 compute-0 sudo[410551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.23222825 +0000 UTC m=+0.045348223 container create a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:39:28 compute-0 systemd[1]: Started libpod-conmon-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope.
Feb 25 13:39:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.213580825 +0000 UTC m=+0.026700848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.313154704 +0000 UTC m=+0.126274737 container init a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.319260559 +0000 UTC m=+0.132380532 container start a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:39:28 compute-0 unruffled_ishizaka[410607]: 167 167
Feb 25 13:39:28 compute-0 systemd[1]: libpod-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope: Deactivated successfully.
Feb 25 13:39:28 compute-0 conmon[410607]: conmon a58f7a1ccdae60407b1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope/container/memory.events
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.326736313 +0000 UTC m=+0.139856386 container attach a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.327170206 +0000 UTC m=+0.140290219 container died a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-17ab5ac3e96527171ec5fd51f5584b11e787368a4a5e595af79fa12ec87fac34-merged.mount: Deactivated successfully.
Feb 25 13:39:28 compute-0 podman[410590]: 2026-02-25 13:39:28.366015271 +0000 UTC m=+0.179135274 container remove a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:39:28 compute-0 systemd[1]: libpod-conmon-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope: Deactivated successfully.
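[annotation] The single line unruffled_ishizaka printed, "167 167", is the uid/gid of the ceph user inside the image; cephadm spins up throwaway containers like this to learn what ownership to apply to host directories. A hedged reproduction (the exact probe cephadm runs is an assumption; stat on /var/lib/ceph matches the output seen here):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected: "167 167" (ceph:ceph in the image)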
Feb 25 13:39:28 compute-0 podman[410631]: 2026-02-25 13:39:28.513817085 +0000 UTC m=+0.049447101 container create 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:39:28 compute-0 systemd[1]: Started libpod-conmon-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope.
Feb 25 13:39:28 compute-0 podman[410631]: 2026-02-25 13:39:28.489038523 +0000 UTC m=+0.024668539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:39:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:39:28 compute-0 nova_compute[244014]: 2026-02-25 13:39:28.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:28 compute-0 podman[410631]: 2026-02-25 13:39:28.630418262 +0000 UTC m=+0.166048318 container init 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:39:28 compute-0 podman[410631]: 2026-02-25 13:39:28.639340878 +0000 UTC m=+0.174970854 container start 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:39:28 compute-0 podman[410631]: 2026-02-25 13:39:28.64287588 +0000 UTC m=+0.178505956 container attach 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:39:29 compute-0 lvm[410727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:39:29 compute-0 lvm[410727]: VG ceph_vg1 finished
Feb 25 13:39:29 compute-0 lvm[410726]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:39:29 compute-0 lvm[410726]: VG ceph_vg0 finished
Feb 25 13:39:29 compute-0 lvm[410729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:39:29 compute-0 lvm[410729]: VG ceph_vg2 finished
Feb 25 13:39:29 compute-0 pedantic_black[410648]: {}
Feb 25 13:39:29 compute-0 systemd[1]: libpod-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Deactivated successfully.
Feb 25 13:39:29 compute-0 systemd[1]: libpod-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Consumed 1.095s CPU time.
Feb 25 13:39:29 compute-0 podman[410732]: 2026-02-25 13:39:29.45274784 +0000 UTC m=+0.037204249 container died 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a-merged.mount: Deactivated successfully.
Feb 25 13:39:29 compute-0 podman[410732]: 2026-02-25 13:39:29.501902282 +0000 UTC m=+0.086358641 container remove 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:39:29 compute-0 systemd[1]: libpod-conmon-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Deactivated successfully.
Feb 25 13:39:29 compute-0 ceph-mon[76335]: pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:29 compute-0 sudo[410551]: pam_unix(sudo:session): session closed for user root
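[annotation] pedantic_black was the container behind the cephadm `ceph-volume ... raw list --format json` call opened at 13:39:27 and closed just above; its entire output was the "{}" logged at 13:39:29. An empty object means no raw-mode (non-LVM) OSD devices, which is consistent with the lvm listing earlier: all three OSDs on this host are LVM-backed. A tiny consistency sketch, with lvm_payload and raw_payload as hypothetical strings holding the two logged outputs:

    import json

    lvm_osds = set(json.loads(lvm_payload))   # {"0", "1", "2"}
    raw_osds = set(json.loads(raw_payload))   # set(): nothing raw-mode
    assert raw_osds == set() and lvm_osds == {"0", "1", "2"}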
Feb 25 13:39:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:39:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:39:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:29 compute-0 sudo[410748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:39:29 compute-0 sudo[410748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:39:29 compute-0 sudo[410748]: pam_unix(sudo:session): session closed for user root
Feb 25 13:39:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
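[annotation] The recurring _set_new_cache_sizes line is the mon's memory autotuner re-deriving its cache carve-outs; the three allocations are MiB-aligned and sum to just under the ~0.95 GiB cache_size. Checking the arithmetic from the logged values:

    MiB = 1 << 20
    inc, full, kv = 343932928, 348127232, 318767104
    cache_size = 1020054731
    print(inc // MiB, full // MiB, kv // MiB)      # 328 332 304 (MiB)
    print(inc + full + kv, "<=", cache_size)       # 1010827264 <= 1020054731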
Feb 25 13:39:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:39:31 compute-0 nova_compute[244014]: 2026-02-25 13:39:31.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:39:31
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
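[annotation] This balancer pass found nothing to do: with all 305 PGs active+clean across the pools listed, "prepared 0/10 upmap changes" means the upmap optimizer produced none of the up-to-10 changes it was allowed this round (max misplaced 0.05 is the throttle on how much data it may set in motion). A hedged CLI cross-check, treating the JSON field names as an assumption:

    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status.get("active"), status.get("mode"))   # expect: True upmap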
Feb 25 13:39:31 compute-0 ceph-mon[76335]: pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:39:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:39:31 compute-0 nova_compute[244014]: 2026-02-25 13:39:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:39:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:39:33 compute-0 nova_compute[244014]: 2026-02-25 13:39:33.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:33 compute-0 ceph-mon[76335]: pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:34 compute-0 nova_compute[244014]: 2026-02-25 13:39:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:35 compute-0 ceph-mon[76335]: pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:36 compute-0 nova_compute[244014]: 2026-02-25 13:39:36.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:36 compute-0 ceph-mon[76335]: pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:38 compute-0 nova_compute[244014]: 2026-02-25 13:39:38.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:39 compute-0 ceph-mon[76335]: pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:41 compute-0 nova_compute[244014]: 2026-02-25 13:39:41.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:41 compute-0 ceph-mon[76335]: pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:41 compute-0 nova_compute[244014]: 2026-02-25 13:39:41.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:39:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:43 compute-0 ceph-mon[76335]: pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:39:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
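[annotation] The pg_autoscaler block above is self-consistent arithmetic: each pool's pg target equals its fraction of used space times its bias times a factor of 300. On this 3-OSD, 60 GiB cluster that factor matches 3 OSDs x mon_target_pg_per_osd (default 100); the decomposition of the 300 is an assumption, the factor itself is read off the logged numbers. Reproducing three of the lines:

    # usage_fraction * bias * (n_osd * target_pg_per_osd) == logged pg target
    def pg_target(usage_fraction, bias, n_osd=3, target_pg_per_osd=100):
        return usage_fraction * bias * n_osd * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.00215572... ('.mgr' -> 1)
    print(pg_target(1.3916366864300228e-06, 4.0))  # 0.00166996... (meta -> 16)
    print(pg_target(0.0006714637386478266, 1.0))   # 0.20143912... (images -> 32)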
Feb 25 13:39:43 compute-0 nova_compute[244014]: 2026-02-25 13:39:43.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:45 compute-0 ceph-mon[76335]: pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:46 compute-0 nova_compute[244014]: 2026-02-25 13:39:46.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:47 compute-0 ceph-mon[76335]: pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:39:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:39:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:39:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:39:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:39:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:39:48 compute-0 nova_compute[244014]: 2026-02-25 13:39:48.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:49 compute-0 ceph-mon[76335]: pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:51 compute-0 nova_compute[244014]: 2026-02-25 13:39:51.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:51 compute-0 ceph-mon[76335]: pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:53 compute-0 ceph-mon[76335]: pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:53 compute-0 nova_compute[244014]: 2026-02-25 13:39:53.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:54 compute-0 podman[410773]: 2026-02-25 13:39:54.747776888 +0000 UTC m=+0.077759644 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:39:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:39:54 compute-0 podman[410774]: 2026-02-25 13:39:54.778972194 +0000 UTC m=+0.108022903 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 13:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.083 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:39:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:39:55 compute-0 ceph-mon[76335]: pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:56 compute-0 nova_compute[244014]: 2026-02-25 13:39:56.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:57 compute-0 ceph-mon[76335]: pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:58 compute-0 nova_compute[244014]: 2026-02-25 13:39:58.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:39:58 compute-0 ceph-mon[76335]: pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:39:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:01 compute-0 ceph-mon[76335]: pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:01 compute-0 nova_compute[244014]: 2026-02-25 13:40:01.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:03 compute-0 ceph-mon[76335]: pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:03 compute-0 nova_compute[244014]: 2026-02-25 13:40:03.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:05 compute-0 ceph-mon[76335]: pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:06 compute-0 nova_compute[244014]: 2026-02-25 13:40:06.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:07 compute-0 ceph-mon[76335]: pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:40:08 compute-0 nova_compute[244014]: 2026-02-25 13:40:08.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:40:09 compute-0 ceph-mon[76335]: pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:40:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337006553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.456 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.702 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.704 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.705 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.706 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:40:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.775 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.776 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:40:09 compute-0 nova_compute[244014]: 2026-02-25 13:40:09.795 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:40:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:10 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1337006553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:40:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:40:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864052239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:40:10 compute-0 nova_compute[244014]: 2026-02-25 13:40:10.397 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:40:10 compute-0 nova_compute[244014]: 2026-02-25 13:40:10.404 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:40:10 compute-0 nova_compute[244014]: 2026-02-25 13:40:10.423 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:40:10 compute-0 nova_compute[244014]: 2026-02-25 13:40:10.427 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:40:10 compute-0 nova_compute[244014]: 2026-02-25 13:40:10.428 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:40:11 compute-0 nova_compute[244014]: 2026-02-25 13:40:11.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:11 compute-0 ceph-mon[76335]: pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1864052239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:40:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:13 compute-0 ceph-mon[76335]: pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:13 compute-0 nova_compute[244014]: 2026-02-25 13:40:13.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:15 compute-0 ceph-mon[76335]: pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:40:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:40:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:16 compute-0 nova_compute[244014]: 2026-02-25 13:40:16.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:17 compute-0 ceph-mon[76335]: pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:18 compute-0 nova_compute[244014]: 2026-02-25 13:40:18.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:19 compute-0 ceph-mon[76335]: pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:40:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:40:19 compute-0 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:19 compute-0 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:19 compute-0 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:40:19 compute-0 nova_compute[244014]: 2026-02-25 13:40:19.431 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:40:19 compute-0 nova_compute[244014]: 2026-02-25 13:40:19.452 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:40:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:20 compute-0 nova_compute[244014]: 2026-02-25 13:40:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:20 compute-0 nova_compute[244014]: 2026-02-25 13:40:20.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:40:21 compute-0 nova_compute[244014]: 2026-02-25 13:40:21.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:21 compute-0 ceph-mon[76335]: pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:23 compute-0 ceph-mon[76335]: pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:23 compute-0 nova_compute[244014]: 2026-02-25 13:40:23.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:40:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.6 total, 600.0 interval
                                           Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:40:25 compute-0 ceph-mon[76335]: pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:25 compute-0 podman[410862]: 2026-02-25 13:40:25.733349027 +0000 UTC m=+0.074820899 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 13:40:25 compute-0 podman[410863]: 2026-02-25 13:40:25.820931001 +0000 UTC m=+0.148959807 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:40:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:26 compute-0 nova_compute[244014]: 2026-02-25 13:40:26.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:40:27 compute-0 ceph-mon[76335]: pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:28 compute-0 nova_compute[244014]: 2026-02-25 13:40:28.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:28 compute-0 nova_compute[244014]: 2026-02-25 13:40:28.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:28 compute-0 nova_compute[244014]: 2026-02-25 13:40:28.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:29 compute-0 ceph-mon[76335]: pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:29 compute-0 sudo[410905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:40:29 compute-0 sudo[410905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:29 compute-0 sudo[410905]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:29 compute-0 sudo[410930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:40:29 compute-0 sudo[410930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:30 compute-0 sudo[410930]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:40:30 compute-0 sudo[410986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:40:30 compute-0 sudo[410986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:30 compute-0 sudo[410986]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:30 compute-0 sudo[411011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:40:30 compute-0 sudo[411011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:40:30 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.676524181 +0000 UTC m=+0.069027243 container create b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:40:30 compute-0 systemd[1]: Started libpod-conmon-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope.
Feb 25 13:40:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.648088535 +0000 UTC m=+0.040591627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.760579944 +0000 UTC m=+0.153083106 container init b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.771810537 +0000 UTC m=+0.164313629 container start b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.775449061 +0000 UTC m=+0.167952273 container attach b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:40:30 compute-0 systemd[1]: libpod-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope: Deactivated successfully.
Feb 25 13:40:30 compute-0 boring_chandrasekhar[411064]: 167 167
Feb 25 13:40:30 compute-0 conmon[411064]: conmon b854fdb7c6bfb93b2e40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope/container/memory.events
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.781540986 +0000 UTC m=+0.174044058 container died b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-47cecd2cd9816e3686cfc2b65bf16d1b26beecbff4f51eead22f14402a5e5830-merged.mount: Deactivated successfully.
Feb 25 13:40:30 compute-0 podman[411048]: 2026-02-25 13:40:30.824573042 +0000 UTC m=+0.217076114 container remove b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:40:30 compute-0 systemd[1]: libpod-conmon-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope: Deactivated successfully.
Feb 25 13:40:30 compute-0 podman[411089]: 2026-02-25 13:40:30.989794514 +0000 UTC m=+0.048718359 container create d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:40:31 compute-0 systemd[1]: Started libpod-conmon-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope.
Feb 25 13:40:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:30.973511746 +0000 UTC m=+0.032435381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:31.072733395 +0000 UTC m=+0.131657030 container init d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:31.086178191 +0000 UTC m=+0.145101836 container start d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:31.089110555 +0000 UTC m=+0.148034190 container attach d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:40:31
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:40:31 compute-0 nova_compute[244014]: 2026-02-25 13:40:31.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:31 compute-0 ceph-mon[76335]: pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:31 compute-0 elastic_mayer[411106]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:40:31 compute-0 elastic_mayer[411106]: --> All data devices are unavailable
Feb 25 13:40:31 compute-0 systemd[1]: libpod-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope: Deactivated successfully.
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:31.592225409 +0000 UTC m=+0.651149054 container died d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8-merged.mount: Deactivated successfully.
Feb 25 13:40:31 compute-0 podman[411089]: 2026-02-25 13:40:31.646673993 +0000 UTC m=+0.705597638 container remove d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:40:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:40:31 compute-0 systemd[1]: libpod-conmon-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope: Deactivated successfully.
Feb 25 13:40:31 compute-0 sudo[411011]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:31 compute-0 sudo[411137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:40:31 compute-0 sudo[411137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:31 compute-0 sudo[411137]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:31 compute-0 sudo[411162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:40:31 compute-0 sudo[411162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.124978914 +0000 UTC m=+0.044117607 container create 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:40:32 compute-0 systemd[1]: Started libpod-conmon-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope.
Feb 25 13:40:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.103135537 +0000 UTC m=+0.022274290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.216129891 +0000 UTC m=+0.135268644 container init 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.224584984 +0000 UTC m=+0.143723677 container start 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.228218458 +0000 UTC m=+0.147357171 container attach 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:40:32 compute-0 xenodochial_johnson[411215]: 167 167
Feb 25 13:40:32 compute-0 systemd[1]: libpod-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope: Deactivated successfully.
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.232084859 +0000 UTC m=+0.151223572 container died 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeb433963c808659c6c56d73c7970b615d912c1bb4c82bb52a4f276b618086b5-merged.mount: Deactivated successfully.
Feb 25 13:40:32 compute-0 podman[411199]: 2026-02-25 13:40:32.272292564 +0000 UTC m=+0.191431257 container remove 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:40:32 compute-0 systemd[1]: libpod-conmon-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope: Deactivated successfully.
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:40:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.500812485 +0000 UTC m=+0.079453333 container create 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:40:32 compute-0 systemd[1]: Started libpod-conmon-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope.
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.46720761 +0000 UTC m=+0.045848518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.612259814 +0000 UTC m=+0.190900682 container init 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.62221203 +0000 UTC m=+0.200852848 container start 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.626069611 +0000 UTC m=+0.204710499 container attach 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:40:32 compute-0 nova_compute[244014]: 2026-02-25 13:40:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]: {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     "0": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "devices": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "/dev/loop3"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             ],
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_name": "ceph_lv0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_size": "21470642176",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "name": "ceph_lv0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "tags": {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_name": "ceph",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.crush_device_class": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.encrypted": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.objectstore": "bluestore",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_id": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.vdo": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.with_tpm": "0"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             },
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "vg_name": "ceph_vg0"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         }
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     ],
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     "1": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "devices": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "/dev/loop4"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             ],
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_name": "ceph_lv1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_size": "21470642176",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "name": "ceph_lv1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "tags": {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_name": "ceph",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.crush_device_class": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.encrypted": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.objectstore": "bluestore",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_id": "1",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.vdo": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.with_tpm": "0"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             },
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "vg_name": "ceph_vg1"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         }
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     ],
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     "2": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "devices": [
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "/dev/loop5"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             ],
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_name": "ceph_lv2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_size": "21470642176",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "name": "ceph_lv2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "tags": {
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.cluster_name": "ceph",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.crush_device_class": "",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.encrypted": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.objectstore": "bluestore",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osd_id": "2",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.vdo": "0",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:                 "ceph.with_tpm": "0"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             },
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "type": "block",
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:             "vg_name": "ceph_vg2"
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:         }
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]:     ]
Feb 25 13:40:32 compute-0 mystifying_haibt[411256]: }
Feb 25 13:40:32 compute-0 systemd[1]: libpod-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope: Deactivated successfully.
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.938008986 +0000 UTC m=+0.516649874 container died 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463-merged.mount: Deactivated successfully.
Feb 25 13:40:32 compute-0 podman[411239]: 2026-02-25 13:40:32.97924919 +0000 UTC m=+0.557889998 container remove 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:40:32 compute-0 systemd[1]: libpod-conmon-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope: Deactivated successfully.
Feb 25 13:40:33 compute-0 sudo[411162]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:33 compute-0 sudo[411277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:40:33 compute-0 sudo[411277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:33 compute-0 sudo[411277]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:33 compute-0 sudo[411302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:40:33 compute-0 sudo[411302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:33 compute-0 ceph-mon[76335]: pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.420143278 +0000 UTC m=+0.040976587 container create 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:40:33 compute-0 systemd[1]: Started libpod-conmon-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope.
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.400487444 +0000 UTC m=+0.021320813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.525310377 +0000 UTC m=+0.146143676 container init 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.538186477 +0000 UTC m=+0.159019816 container start 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.542086549 +0000 UTC m=+0.162919888 container attach 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:40:33 compute-0 systemd[1]: libpod-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope: Deactivated successfully.
Feb 25 13:40:33 compute-0 vibrant_swartz[411356]: 167 167
Feb 25 13:40:33 compute-0 conmon[411356]: conmon 1d433edeaf3336fb2fc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope/container/memory.events
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.545363953 +0000 UTC m=+0.166197252 container died 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d8f607a08e229b0793e7ef8d39b5af184901e962e75cb16906105a9e7869315-merged.mount: Deactivated successfully.
Feb 25 13:40:33 compute-0 podman[411339]: 2026-02-25 13:40:33.586430862 +0000 UTC m=+0.207264141 container remove 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:40:33 compute-0 systemd[1]: libpod-conmon-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope: Deactivated successfully.
Feb 25 13:40:33 compute-0 nova_compute[244014]: 2026-02-25 13:40:33.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:33 compute-0 podman[411381]: 2026-02-25 13:40:33.772356 +0000 UTC m=+0.053401164 container create 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 13:40:33 compute-0 systemd[1]: Started libpod-conmon-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope.
Feb 25 13:40:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:33 compute-0 podman[411381]: 2026-02-25 13:40:33.743237714 +0000 UTC m=+0.024282918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:40:33 compute-0 podman[411381]: 2026-02-25 13:40:33.856248178 +0000 UTC m=+0.137293352 container init 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:40:33 compute-0 podman[411381]: 2026-02-25 13:40:33.861945012 +0000 UTC m=+0.142990146 container start 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:40:33 compute-0 podman[411381]: 2026-02-25 13:40:33.865967787 +0000 UTC m=+0.147012951 container attach 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:40:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:34 compute-0 lvm[411477]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:40:34 compute-0 lvm[411477]: VG ceph_vg1 finished
Feb 25 13:40:34 compute-0 lvm[411476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:40:34 compute-0 lvm[411476]: VG ceph_vg0 finished
Feb 25 13:40:34 compute-0 lvm[411479]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:40:34 compute-0 lvm[411479]: VG ceph_vg2 finished
Feb 25 13:40:34 compute-0 reverent_hoover[411398]: {}
Feb 25 13:40:34 compute-0 systemd[1]: libpod-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Deactivated successfully.
Feb 25 13:40:34 compute-0 systemd[1]: libpod-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Consumed 1.315s CPU time.
Feb 25 13:40:34 compute-0 podman[411381]: 2026-02-25 13:40:34.729254811 +0000 UTC m=+1.010299975 container died 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:40:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e-merged.mount: Deactivated successfully.
Feb 25 13:40:34 compute-0 podman[411381]: 2026-02-25 13:40:34.784455376 +0000 UTC m=+1.065500540 container remove 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:40:34 compute-0 systemd[1]: libpod-conmon-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Deactivated successfully.
Feb 25 13:40:34 compute-0 sudo[411302]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:40:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:40:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:34 compute-0 sudo[411494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:40:34 compute-0 sudo[411494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:40:34 compute-0 sudo[411494]: pam_unix(sudo:session): session closed for user root
Feb 25 13:40:35 compute-0 ceph-mon[76335]: pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:40:35 compute-0 nova_compute[244014]: 2026-02-25 13:40:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:36 compute-0 nova_compute[244014]: 2026-02-25 13:40:36.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:37 compute-0 ceph-mon[76335]: pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:38 compute-0 nova_compute[244014]: 2026-02-25 13:40:38.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:39 compute-0 ceph-mon[76335]: pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:41 compute-0 nova_compute[244014]: 2026-02-25 13:40:41.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:41 compute-0 ceph-mon[76335]: pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:43 compute-0 ceph-mon[76335]: pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:40:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:40:43 compute-0 nova_compute[244014]: 2026-02-25 13:40:43.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:45 compute-0 ceph-mon[76335]: pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:46 compute-0 nova_compute[244014]: 2026-02-25 13:40:46.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:47 compute-0 ceph-mon[76335]: pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:40:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:40:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:40:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:40:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:40:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:40:48 compute-0 nova_compute[244014]: 2026-02-25 13:40:48.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:48 compute-0 nova_compute[244014]: 2026-02-25 13:40:48.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:40:49 compute-0 ceph-mon[76335]: pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:51 compute-0 nova_compute[244014]: 2026-02-25 13:40:51.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:51 compute-0 ceph-mon[76335]: pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:53 compute-0 ceph-mon[76335]: pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:53 compute-0 nova_compute[244014]: 2026-02-25 13:40:53.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:40:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.085 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:40:55 compute-0 ceph-mon[76335]: pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:56 compute-0 nova_compute[244014]: 2026-02-25 13:40:56.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:56 compute-0 podman[411519]: 2026-02-25 13:40:56.728363924 +0000 UTC m=+0.061923728 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 13:40:56 compute-0 podman[411520]: 2026-02-25 13:40:56.760020313 +0000 UTC m=+0.092490546 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:40:57 compute-0 ceph-mon[76335]: pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:58 compute-0 nova_compute[244014]: 2026-02-25 13:40:58.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:40:59 compute-0 ceph-mon[76335]: pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:40:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:01 compute-0 nova_compute[244014]: 2026-02-25 13:41:01.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:01 compute-0 ceph-mon[76335]: pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:03 compute-0 ceph-mon[76335]: pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:03 compute-0 nova_compute[244014]: 2026-02-25 13:41:03.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:05 compute-0 ceph-mon[76335]: pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:06 compute-0 nova_compute[244014]: 2026-02-25 13:41:06.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:07 compute-0 ceph-mon[76335]: pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:41:08 compute-0 nova_compute[244014]: 2026-02-25 13:41:08.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:41:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:41:09 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632894411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:41:09 compute-0 ceph-mon[76335]: pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:09 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/632894411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.676 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:41:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.847 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:41:09 compute-0 nova_compute[244014]: 2026-02-25 13:41:09.935 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.039 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.040 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.058 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.081 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.098 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:41:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:41:10 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197378911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.669 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.676 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.708 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:41:10 compute-0 nova_compute[244014]: 2026-02-25 13:41:10.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:41:11 compute-0 nova_compute[244014]: 2026-02-25 13:41:11.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:11 compute-0 ceph-mon[76335]: pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3197378911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:41:11 compute-0 nova_compute[244014]: 2026-02-25 13:41:11.713 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:13 compute-0 ceph-mon[76335]: pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:13 compute-0 nova_compute[244014]: 2026-02-25 13:41:13.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:15 compute-0 ceph-mon[76335]: pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:16 compute-0 nova_compute[244014]: 2026-02-25 13:41:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:17 compute-0 ceph-mon[76335]: pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:17 compute-0 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:17 compute-0 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:41:17 compute-0 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:41:17 compute-0 nova_compute[244014]: 2026-02-25 13:41:17.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:41:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:18 compute-0 nova_compute[244014]: 2026-02-25 13:41:18.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:19 compute-0 ceph-mon[76335]: pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:20 compute-0 nova_compute[244014]: 2026-02-25 13:41:20.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:21 compute-0 nova_compute[244014]: 2026-02-25 13:41:21.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:21 compute-0 ceph-mon[76335]: pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:21 compute-0 nova_compute[244014]: 2026-02-25 13:41:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:21 compute-0 nova_compute[244014]: 2026-02-25 13:41:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:41:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Feb 25 13:41:23 compute-0 ceph-mon[76335]: pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Feb 25 13:41:23 compute-0 nova_compute[244014]: 2026-02-25 13:41:23.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 13:41:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:25 compute-0 ceph-mon[76335]: pgmap v3645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 13:41:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 13:41:26 compute-0 nova_compute[244014]: 2026-02-25 13:41:26.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:27 compute-0 ceph-mon[76335]: pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 13:41:27 compute-0 podman[411606]: 2026-02-25 13:41:27.744717875 +0000 UTC m=+0.084237769 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:41:27 compute-0 podman[411607]: 2026-02-25 13:41:27.74907339 +0000 UTC m=+0.087931115 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:41:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:28 compute-0 ceph-mon[76335]: pgmap v3647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:28 compute-0 nova_compute[244014]: 2026-02-25 13:41:28.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:29 compute-0 nova_compute[244014]: 2026-02-25 13:41:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:29 compute-0 nova_compute[244014]: 2026-02-25 13:41:29.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:41:31
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:41:31 compute-0 ceph-mon[76335]: pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:31 compute-0 nova_compute[244014]: 2026-02-25 13:41:31.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:41:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:41:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:41:32 compute-0 nova_compute[244014]: 2026-02-25 13:41:32.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:33 compute-0 ceph-mon[76335]: pgmap v3649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 13:41:33 compute-0 nova_compute[244014]: 2026-02-25 13:41:33.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 25 13:41:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:35 compute-0 sudo[411649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:41:35 compute-0 sudo[411649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:35 compute-0 sudo[411649]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:35 compute-0 sudo[411674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:41:35 compute-0 sudo[411674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: pgmap v3650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 25 13:41:35 compute-0 sudo[411674]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:41:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:41:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:41:35 compute-0 sudo[411731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:41:35 compute-0 sudo[411731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:35 compute-0 sudo[411731]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:35 compute-0 sudo[411756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:41:35 compute-0 sudo[411756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.103494307 +0000 UTC m=+0.037964241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.393159143 +0000 UTC m=+0.327629037 container create fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:41:36 compute-0 nova_compute[244014]: 2026-02-25 13:41:36.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:41:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:41:36 compute-0 systemd[1]: Started libpod-conmon-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope.
Feb 25 13:41:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.549032888 +0000 UTC m=+0.483502812 container init fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.55989979 +0000 UTC m=+0.494369684 container start fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.564297356 +0000 UTC m=+0.498767310 container attach fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 13:41:36 compute-0 admiring_hugle[411812]: 167 167
Feb 25 13:41:36 compute-0 systemd[1]: libpod-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope: Deactivated successfully.
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.572831321 +0000 UTC m=+0.507301225 container died fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-538fa063cc7656193617f738e2c911c406992be5ff989fb5f17edebb821009e1-merged.mount: Deactivated successfully.
Feb 25 13:41:36 compute-0 podman[411795]: 2026-02-25 13:41:36.662435204 +0000 UTC m=+0.596905098 container remove fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:41:36 compute-0 systemd[1]: libpod-conmon-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope: Deactivated successfully.
Feb 25 13:41:36 compute-0 podman[411835]: 2026-02-25 13:41:36.8758386 +0000 UTC m=+0.051045446 container create f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:41:36 compute-0 nova_compute[244014]: 2026-02-25 13:41:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:41:36 compute-0 systemd[1]: Started libpod-conmon-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope.
Feb 25 13:41:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:36 compute-0 podman[411835]: 2026-02-25 13:41:36.85597177 +0000 UTC m=+0.031178426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:37 compute-0 podman[411835]: 2026-02-25 13:41:37.091809851 +0000 UTC m=+0.267016507 container init f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:41:37 compute-0 podman[411835]: 2026-02-25 13:41:37.100561442 +0000 UTC m=+0.275768098 container start f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:41:37 compute-0 podman[411835]: 2026-02-25 13:41:37.120197816 +0000 UTC m=+0.295404492 container attach f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:41:37 compute-0 stupefied_feistel[411851]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:41:37 compute-0 stupefied_feistel[411851]: --> All data devices are unavailable
Feb 25 13:41:37 compute-0 systemd[1]: libpod-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope: Deactivated successfully.
Feb 25 13:41:37 compute-0 podman[411872]: 2026-02-25 13:41:37.734959165 +0000 UTC m=+0.044957901 container died f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:41:37 compute-0 ceph-mon[76335]: pgmap v3651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1-merged.mount: Deactivated successfully.
Feb 25 13:41:38 compute-0 podman[411872]: 2026-02-25 13:41:38.055611951 +0000 UTC m=+0.365610687 container remove f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:41:38 compute-0 systemd[1]: libpod-conmon-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope: Deactivated successfully.
Feb 25 13:41:38 compute-0 sudo[411756]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:41:38 compute-0 sudo[411888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:41:38 compute-0 sudo[411888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:38 compute-0 sudo[411888]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:38 compute-0 sudo[411913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:41:38 compute-0 sudo[411913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.521594119 +0000 UTC m=+0.046769254 container create da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:41:38 compute-0 systemd[1]: Started libpod-conmon-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope.
Feb 25 13:41:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.496817818 +0000 UTC m=+0.021992963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.608182125 +0000 UTC m=+0.133357310 container init da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.617594025 +0000 UTC m=+0.142769160 container start da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:41:38 compute-0 vigorous_cartwright[411967]: 167 167
Feb 25 13:41:38 compute-0 systemd[1]: libpod-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope: Deactivated successfully.
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.626053158 +0000 UTC m=+0.151228353 container attach da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.627411227 +0000 UTC m=+0.152586322 container died da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc0eabf5ef6fa6fe6a6803adbcc5ad364b2d9ea588041a89117f2d583c3c994-merged.mount: Deactivated successfully.
Feb 25 13:41:38 compute-0 podman[411951]: 2026-02-25 13:41:38.70484493 +0000 UTC m=+0.230020025 container remove da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:41:38 compute-0 systemd[1]: libpod-conmon-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope: Deactivated successfully.
Feb 25 13:41:38 compute-0 ceph-mon[76335]: pgmap v3652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:41:38 compute-0 nova_compute[244014]: 2026-02-25 13:41:38.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:38 compute-0 podman[411994]: 2026-02-25 13:41:38.958296895 +0000 UTC m=+0.101936946 container create 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:41:38 compute-0 podman[411994]: 2026-02-25 13:41:38.895518564 +0000 UTC m=+0.039158655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:39 compute-0 systemd[1]: Started libpod-conmon-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope.
Feb 25 13:41:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:39 compute-0 podman[411994]: 2026-02-25 13:41:39.088189445 +0000 UTC m=+0.231829476 container init 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:41:39 compute-0 podman[411994]: 2026-02-25 13:41:39.096869444 +0000 UTC m=+0.240509445 container start 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:41:39 compute-0 podman[411994]: 2026-02-25 13:41:39.102507396 +0000 UTC m=+0.246147387 container attach 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:41:39 compute-0 unruffled_euler[412010]: {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     "0": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "devices": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "/dev/loop3"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             ],
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_name": "ceph_lv0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_size": "21470642176",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "name": "ceph_lv0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "tags": {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_name": "ceph",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.crush_device_class": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.encrypted": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.objectstore": "bluestore",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_id": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.vdo": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.with_tpm": "0"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             },
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "vg_name": "ceph_vg0"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         }
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     ],
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     "1": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "devices": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "/dev/loop4"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             ],
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_name": "ceph_lv1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_size": "21470642176",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "name": "ceph_lv1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "tags": {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_name": "ceph",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.crush_device_class": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.encrypted": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.objectstore": "bluestore",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_id": "1",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.vdo": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.with_tpm": "0"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             },
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "vg_name": "ceph_vg1"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         }
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     ],
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     "2": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "devices": [
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "/dev/loop5"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             ],
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_name": "ceph_lv2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_size": "21470642176",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "name": "ceph_lv2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "tags": {
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.cluster_name": "ceph",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.crush_device_class": "",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.encrypted": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.objectstore": "bluestore",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osd_id": "2",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.vdo": "0",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:                 "ceph.with_tpm": "0"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             },
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "type": "block",
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:             "vg_name": "ceph_vg2"
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:         }
Feb 25 13:41:39 compute-0 unruffled_euler[412010]:     ]
Feb 25 13:41:39 compute-0 unruffled_euler[412010]: }
Feb 25 13:41:39 compute-0 systemd[1]: libpod-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope: Deactivated successfully.
Feb 25 13:41:39 compute-0 podman[411994]: 2026-02-25 13:41:39.415267775 +0000 UTC m=+0.558907776 container died 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129-merged.mount: Deactivated successfully.
Feb 25 13:41:39 compute-0 podman[411994]: 2026-02-25 13:41:39.473678282 +0000 UTC m=+0.617318273 container remove 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:41:39 compute-0 systemd[1]: libpod-conmon-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope: Deactivated successfully.
Feb 25 13:41:39 compute-0 sudo[411913]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:39 compute-0 sudo[412029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:41:39 compute-0 sudo[412029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:39 compute-0 sudo[412029]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:39 compute-0 sudo[412054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:41:39 compute-0 sudo[412054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:39 compute-0 podman[412093]: 2026-02-25 13:41:39.990976933 +0000 UTC m=+0.086064482 container create ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:39.931143715 +0000 UTC m=+0.026231324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:40 compute-0 systemd[1]: Started libpod-conmon-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope.
Feb 25 13:41:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:40.08490616 +0000 UTC m=+0.179993669 container init ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:40.094289849 +0000 UTC m=+0.189377388 container start ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:41:40 compute-0 cranky_dubinsky[412109]: 167 167
Feb 25 13:41:40 compute-0 systemd[1]: libpod-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope: Deactivated successfully.
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:40.105681856 +0000 UTC m=+0.200769395 container attach ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:40.106965133 +0000 UTC m=+0.202052642 container died ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d90b6984cfc6d28c1bfc6c7b2c0edc1643098481a3f37f544544814231aff08-merged.mount: Deactivated successfully.
Feb 25 13:41:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:40 compute-0 podman[412093]: 2026-02-25 13:41:40.178781415 +0000 UTC m=+0.273868934 container remove ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:41:40 compute-0 systemd[1]: libpod-conmon-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope: Deactivated successfully.
Feb 25 13:41:40 compute-0 podman[412131]: 2026-02-25 13:41:40.358960838 +0000 UTC m=+0.053438245 container create 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:41:40 compute-0 systemd[1]: Started libpod-conmon-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope.
Feb 25 13:41:40 compute-0 podman[412131]: 2026-02-25 13:41:40.331807888 +0000 UTC m=+0.026285295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:41:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:41:40 compute-0 podman[412131]: 2026-02-25 13:41:40.515582374 +0000 UTC m=+0.210059751 container init 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:41:40 compute-0 podman[412131]: 2026-02-25 13:41:40.522517614 +0000 UTC m=+0.216994981 container start 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:41:40 compute-0 podman[412131]: 2026-02-25 13:41:40.545650078 +0000 UTC m=+0.240127455 container attach 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:41:41 compute-0 ceph-mon[76335]: pgmap v3653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:41 compute-0 lvm[412225]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:41:41 compute-0 lvm[412227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:41:41 compute-0 lvm[412228]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:41:41 compute-0 lvm[412225]: VG ceph_vg0 finished
Feb 25 13:41:41 compute-0 lvm[412228]: VG ceph_vg2 finished
Feb 25 13:41:41 compute-0 lvm[412227]: VG ceph_vg1 finished
Feb 25 13:41:41 compute-0 adoring_cartwright[412147]: {}
Feb 25 13:41:41 compute-0 systemd[1]: libpod-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Deactivated successfully.
Feb 25 13:41:41 compute-0 systemd[1]: libpod-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Consumed 1.273s CPU time.
Feb 25 13:41:41 compute-0 podman[412131]: 2026-02-25 13:41:41.372452335 +0000 UTC m=+1.066929702 container died 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:41:41 compute-0 nova_compute[244014]: 2026-02-25 13:41:41.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571-merged.mount: Deactivated successfully.
Feb 25 13:41:41 compute-0 podman[412131]: 2026-02-25 13:41:41.781178799 +0000 UTC m=+1.475656186 container remove 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:41:41 compute-0 systemd[1]: libpod-conmon-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Deactivated successfully.
Feb 25 13:41:41 compute-0 sudo[412054]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:41:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:41:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:41 compute-0 sudo[412245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:41:41 compute-0 sudo[412245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:41:41 compute-0 sudo[412245]: pam_unix(sudo:session): session closed for user root
Feb 25 13:41:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:42 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:41:42 compute-0 ceph-mon[76335]: pgmap v3654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:41:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:41:43 compute-0 nova_compute[244014]: 2026-02-25 13:41:43.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:45 compute-0 ceph-mon[76335]: pgmap v3655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:46 compute-0 nova_compute[244014]: 2026-02-25 13:41:46.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:47 compute-0 ceph-mon[76335]: pgmap v3656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:41:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:41:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:41:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:41:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:41:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:41:48 compute-0 nova_compute[244014]: 2026-02-25 13:41:48.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:49 compute-0 ceph-mon[76335]: pgmap v3657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:51 compute-0 ceph-mon[76335]: pgmap v3658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:51 compute-0 nova_compute[244014]: 2026-02-25 13:41:51.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:53 compute-0 ceph-mon[76335]: pgmap v3659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:53 compute-0 nova_compute[244014]: 2026-02-25 13:41:53.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:54 compute-0 kernel: hrtimer: interrupt took 12430707 ns
Feb 25 13:41:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.085 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:41:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:41:55 compute-0 ceph-mon[76335]: pgmap v3660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:56 compute-0 nova_compute[244014]: 2026-02-25 13:41:56.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:57 compute-0 ceph-mon[76335]: pgmap v3661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:58 compute-0 nova_compute[244014]: 2026-02-25 13:41:58.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:41:58 compute-0 podman[412271]: 2026-02-25 13:41:58.84679613 +0000 UTC m=+0.178016381 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:41:58 compute-0 podman[412272]: 2026-02-25 13:41:58.869987566 +0000 UTC m=+0.085374401 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 13:41:59 compute-0 ceph-mon[76335]: pgmap v3662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:41:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:01 compute-0 nova_compute[244014]: 2026-02-25 13:42:01.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:01 compute-0 ceph-mon[76335]: pgmap v3663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:03 compute-0 ceph-mon[76335]: pgmap v3664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:03 compute-0 nova_compute[244014]: 2026-02-25 13:42:03.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:05 compute-0 ceph-mon[76335]: pgmap v3665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:06 compute-0 nova_compute[244014]: 2026-02-25 13:42:06.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:07 compute-0 ceph-mon[76335]: pgmap v3666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:08 compute-0 nova_compute[244014]: 2026-02-25 13:42:08.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:09 compute-0 ceph-mon[76335]: pgmap v3667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:42:10 compute-0 nova_compute[244014]: 2026-02-25 13:42:10.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:42:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:42:11 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069399012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.500 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:42:11 compute-0 ceph-mon[76335]: pgmap v3668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:11 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1069399012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.659 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.660 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.751 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.751 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:42:11 compute-0 nova_compute[244014]: 2026-02-25 13:42:11.773 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:42:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:42:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420449654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:42:12 compute-0 nova_compute[244014]: 2026-02-25 13:42:12.332 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:42:12 compute-0 nova_compute[244014]: 2026-02-25 13:42:12.337 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:42:12 compute-0 nova_compute[244014]: 2026-02-25 13:42:12.366 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:42:12 compute-0 nova_compute[244014]: 2026-02-25 13:42:12.368 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:42:12 compute-0 nova_compute[244014]: 2026-02-25 13:42:12.369 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:42:12 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1420449654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:42:13 compute-0 ceph-mon[76335]: pgmap v3669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:13 compute-0 nova_compute[244014]: 2026-02-25 13:42:13.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:15 compute-0 ceph-mon[76335]: pgmap v3670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:16 compute-0 nova_compute[244014]: 2026-02-25 13:42:16.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:17 compute-0 ceph-mon[76335]: pgmap v3671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:18 compute-0 nova_compute[244014]: 2026-02-25 13:42:18.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:19 compute-0 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:19 compute-0 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:42:19 compute-0 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:42:19 compute-0 nova_compute[244014]: 2026-02-25 13:42:19.398 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:42:19 compute-0 ceph-mon[76335]: pgmap v3672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:21 compute-0 nova_compute[244014]: 2026-02-25 13:42:21.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:21 compute-0 ceph-mon[76335]: pgmap v3673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:22 compute-0 nova_compute[244014]: 2026-02-25 13:42:22.901 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:23 compute-0 ceph-mon[76335]: pgmap v3674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:23 compute-0 nova_compute[244014]: 2026-02-25 13:42:23.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:23 compute-0 nova_compute[244014]: 2026-02-25 13:42:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:23 compute-0 nova_compute[244014]: 2026-02-25 13:42:23.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:42:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:24 compute-0 ceph-mon[76335]: pgmap v3675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:26 compute-0 nova_compute[244014]: 2026-02-25 13:42:26.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:27 compute-0 ceph-mon[76335]: pgmap v3676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:28 compute-0 nova_compute[244014]: 2026-02-25 13:42:28.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:29 compute-0 ceph-mon[76335]: pgmap v3677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:29 compute-0 podman[412359]: 2026-02-25 13:42:29.760468517 +0000 UTC m=+0.084043314 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 13:42:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:29 compute-0 podman[412360]: 2026-02-25 13:42:29.839201677 +0000 UTC m=+0.154710883 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 13:42:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:30 compute-0 nova_compute[244014]: 2026-02-25 13:42:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:42:31
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:42:31 compute-0 ceph-mon[76335]: pgmap v3678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:31 compute-0 nova_compute[244014]: 2026-02-25 13:42:31.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:42:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:42:31 compute-0 nova_compute[244014]: 2026-02-25 13:42:31.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.442559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952442606, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3458396, "memory_usage": 3523184, "flush_reason": "Manual Compaction"}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952461430, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 3402960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75089, "largest_seqno": 77130, "table_properties": {"data_size": 3393565, "index_size": 5952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18479, "raw_average_key_size": 20, "raw_value_size": 3375080, "raw_average_value_size": 3660, "num_data_blocks": 264, "num_entries": 922, "num_filter_entries": 922, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026722, "oldest_key_time": 1772026722, "file_creation_time": 1772026952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18906 microseconds, and 5937 cpu microseconds.
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.461464) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 3402960 bytes OK
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.461484) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463177) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463200) EVENT_LOG_v1 {"time_micros": 1772026952463193, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463224) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 3449875, prev total WAL file size 3449875, number of live WAL files 2.
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.464350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(3323KB)], [182(8826KB)]
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952464451, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12441637, "oldest_snapshot_seqno": -1}
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:42:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9190 keys, 10684347 bytes, temperature: kUnknown
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952543173, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10684347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10627266, "index_size": 33032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241771, "raw_average_key_size": 26, "raw_value_size": 10467300, "raw_average_value_size": 1138, "num_data_blocks": 1271, "num_entries": 9190, "num_filter_entries": 9190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.543644) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10684347 bytes
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.544967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.8 rd, 135.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 9704, records dropped: 514 output_compression: NoCompression
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.545000) EVENT_LOG_v1 {"time_micros": 1772026952544983, "job": 114, "event": "compaction_finished", "compaction_time_micros": 78865, "compaction_time_cpu_micros": 40636, "output_level": 6, "num_output_files": 1, "total_output_size": 10684347, "num_input_records": 9704, "num_output_records": 9190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952545756, "job": 114, "event": "table_file_deletion", "file_number": 184}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952547512, "job": 114, "event": "table_file_deletion", "file_number": 182}
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.464144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:32 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:42:33 compute-0 ceph-mon[76335]: pgmap v3679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:33 compute-0 nova_compute[244014]: 2026-02-25 13:42:33.860 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:33 compute-0 nova_compute[244014]: 2026-02-25 13:42:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:35 compute-0 ceph-mon[76335]: pgmap v3680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:36 compute-0 nova_compute[244014]: 2026-02-25 13:42:36.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:37 compute-0 ceph-mon[76335]: pgmap v3681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:37 compute-0 nova_compute[244014]: 2026-02-25 13:42:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:38 compute-0 nova_compute[244014]: 2026-02-25 13:42:38.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:39 compute-0 ceph-mon[76335]: pgmap v3682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:41 compute-0 ceph-mon[76335]: pgmap v3683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:41 compute-0 nova_compute[244014]: 2026-02-25 13:42:41.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:42 compute-0 sudo[412405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:42:42 compute-0 sudo[412405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:42 compute-0 sudo[412405]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:42 compute-0 sudo[412430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:42:42 compute-0 sudo[412430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:42 compute-0 sudo[412430]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:42:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:42:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:42:42 compute-0 sudo[412485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:42:42 compute-0 sudo[412485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:42 compute-0 sudo[412485]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:42 compute-0 sudo[412510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:42:42 compute-0 sudo[412510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.083593054 +0000 UTC m=+0.067069087 container create 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:42:43 compute-0 systemd[1]: Started libpod-conmon-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope.
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.052991725 +0000 UTC m=+0.036467818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.207978245 +0000 UTC m=+0.191454248 container init 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.219617739 +0000 UTC m=+0.203093772 container start 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.223321605 +0000 UTC m=+0.206797658 container attach 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 13:42:43 compute-0 dazzling_lovelace[412564]: 167 167
Feb 25 13:42:43 compute-0 systemd[1]: libpod-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope: Deactivated successfully.
Feb 25 13:42:43 compute-0 conmon[412564]: conmon 98a809290a253b89a7b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope/container/memory.events
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.228956697 +0000 UTC m=+0.212432740 container died 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eff0d699604d8151910fb9c6151e30dab6dc48dcf1ce115164bda94694b851c-merged.mount: Deactivated successfully.
Feb 25 13:42:43 compute-0 podman[412548]: 2026-02-25 13:42:43.27713342 +0000 UTC m=+0.260609463 container remove 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:42:43 compute-0 systemd[1]: libpod-conmon-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope: Deactivated successfully.
Feb 25 13:42:43 compute-0 podman[412586]: 2026-02-25 13:42:43.463651854 +0000 UTC m=+0.052591941 container create 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:42:43 compute-0 ceph-mon[76335]: pgmap v3684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:42:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:42:43 compute-0 systemd[1]: Started libpod-conmon-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope.
Feb 25 13:42:43 compute-0 podman[412586]: 2026-02-25 13:42:43.437927205 +0000 UTC m=+0.026867402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:42:43 compute-0 podman[412586]: 2026-02-25 13:42:43.572339724 +0000 UTC m=+0.161279871 container init 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:42:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
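
The pg_autoscaler pass above is internally consistent: each logged "pg target" equals the pool's usage ratio times its bias times a per-root PG budget of 300. A minimal sketch reproducing those figures, assuming the budget is mon_target_pg_per_osd=100 across this host's 3 OSDs (plausible given the numbers, but not stated in the log):

# Reproduce the pg_autoscaler "pg target" values logged above.
# Assumption (not in the log): PG budget = mon_target_pg_per_osd (100) * 3 OSDs.
PG_BUDGET = 100 * 3

pools = {  # pool name: (usage ratio, bias), copied from the log lines above
    ".mgr":               (7.185749983720779e-06, 1.0),
    "images":             (0.0006714637386478266, 1.0),
    "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
# .mgr             -> 0.0021557249951162337 (matches the log)
# images           -> 0.20143912159434798   (log ends ...96; last-bit float rounding)
# cephfs meta      -> 0.0016699640237160273 (matches the log)
# default.rgw.meta -> 0.00015261769876929087 (log ends ...88; last-bit float rounding)

The "quantized" value then rounds to a power of two but only replaces the current pg_num when the change exceeds the autoscaler's threshold, which is why near-zero targets still report "quantized to 32 (current 32)".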
Feb 25 13:42:43 compute-0 podman[412586]: 2026-02-25 13:42:43.581517338 +0000 UTC m=+0.170457475 container start 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:42:43 compute-0 podman[412586]: 2026-02-25 13:42:43.586295175 +0000 UTC m=+0.175235342 container attach 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:42:43 compute-0 nova_compute[244014]: 2026-02-25 13:42:43.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:44 compute-0 pedantic_northcutt[412602]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:42:44 compute-0 pedantic_northcutt[412602]: --> All data devices are unavailable
Feb 25 13:42:44 compute-0 systemd[1]: libpod-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope: Deactivated successfully.
Feb 25 13:42:44 compute-0 podman[412586]: 2026-02-25 13:42:44.038408005 +0000 UTC m=+0.627348132 container died 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e-merged.mount: Deactivated successfully.
Feb 25 13:42:44 compute-0 podman[412586]: 2026-02-25 13:42:44.090939603 +0000 UTC m=+0.679879700 container remove 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:42:44 compute-0 systemd[1]: libpod-conmon-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope: Deactivated successfully.
Feb 25 13:42:44 compute-0 sudo[412510]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:44 compute-0 sudo[412637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:42:44 compute-0 sudo[412637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:44 compute-0 sudo[412637]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:44 compute-0 sudo[412662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:42:44 compute-0 sudo[412662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.567781363 +0000 UTC m=+0.038474586 container create 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 13:42:44 compute-0 systemd[1]: Started libpod-conmon-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope.
Feb 25 13:42:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.550801365 +0000 UTC m=+0.021494638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.663519631 +0000 UTC m=+0.134212904 container init 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.672610253 +0000 UTC m=+0.143303516 container start 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.676913736 +0000 UTC m=+0.147606969 container attach 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:42:44 compute-0 systemd[1]: libpod-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope: Deactivated successfully.
Feb 25 13:42:44 compute-0 confident_noether[412716]: 167 167
Feb 25 13:42:44 compute-0 conmon[412716]: conmon 89a3790791818cb4ed44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope/container/memory.events
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.679326885 +0000 UTC m=+0.150020148 container died 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-97a36df360caa88c3618cba59c03e1edc62fc4bf19d1b31e65e5e3a735fc2a08-merged.mount: Deactivated successfully.
Feb 25 13:42:44 compute-0 podman[412700]: 2026-02-25 13:42:44.729052403 +0000 UTC m=+0.199745666 container remove 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:42:44 compute-0 systemd[1]: libpod-conmon-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope: Deactivated successfully.
Feb 25 13:42:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:44 compute-0 podman[412740]: 2026-02-25 13:42:44.913880299 +0000 UTC m=+0.050327926 container create 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:42:44 compute-0 systemd[1]: Started libpod-conmon-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope.
Feb 25 13:42:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:44 compute-0 podman[412740]: 2026-02-25 13:42:44.888962104 +0000 UTC m=+0.025409821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:45 compute-0 podman[412740]: 2026-02-25 13:42:45.005493809 +0000 UTC m=+0.141941476 container init 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:42:45 compute-0 podman[412740]: 2026-02-25 13:42:45.013212081 +0000 UTC m=+0.149659748 container start 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 13:42:45 compute-0 podman[412740]: 2026-02-25 13:42:45.017089302 +0000 UTC m=+0.153537079 container attach 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]: {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     "0": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "devices": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "/dev/loop3"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             ],
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_name": "ceph_lv0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_size": "21470642176",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "name": "ceph_lv0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "tags": {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_name": "ceph",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.crush_device_class": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.encrypted": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.objectstore": "bluestore",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_id": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.vdo": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.with_tpm": "0"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             },
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "vg_name": "ceph_vg0"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         }
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     ],
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     "1": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "devices": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "/dev/loop4"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             ],
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_name": "ceph_lv1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_size": "21470642176",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "name": "ceph_lv1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "tags": {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_name": "ceph",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.crush_device_class": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.encrypted": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.objectstore": "bluestore",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_id": "1",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.vdo": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.with_tpm": "0"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             },
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "vg_name": "ceph_vg1"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         }
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     ],
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     "2": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "devices": [
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "/dev/loop5"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             ],
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_name": "ceph_lv2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_size": "21470642176",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "name": "ceph_lv2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "tags": {
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.cluster_name": "ceph",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.crush_device_class": "",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.encrypted": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.objectstore": "bluestore",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osd_id": "2",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.vdo": "0",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:                 "ceph.with_tpm": "0"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             },
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "type": "block",
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:             "vg_name": "ceph_vg2"
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:         }
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]:     ]
Feb 25 13:42:45 compute-0 practical_mcclintock[412757]: }
Feb 25 13:42:45 compute-0 systemd[1]: libpod-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope: Deactivated successfully.
Feb 25 13:42:45 compute-0 podman[412740]: 2026-02-25 13:42:45.32177274 +0000 UTC m=+0.458220407 container died 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984-merged.mount: Deactivated successfully.
Feb 25 13:42:45 compute-0 podman[412740]: 2026-02-25 13:42:45.368455 +0000 UTC m=+0.504902667 container remove 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:42:45 compute-0 systemd[1]: libpod-conmon-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope: Deactivated successfully.
Feb 25 13:42:45 compute-0 sudo[412662]: pam_unix(sudo:session): session closed for user root
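
The JSON block emitted by practical_mcclintock is the full `ceph-volume lvm list --format json` inventory: a map of OSD id to its logical volumes, with the authoritative metadata duplicated in the lv_tags string and the tags object. A minimal sketch for consuming a captured copy of it (the file name is hypothetical; in this log the JSON only went to the container's stdout):

import json

# Hypothetical capture of the practical_mcclintock stdout shown above.
with open("ceph-volume-lvm-list.json") as f:
    inventory = json.load(f)

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} size={int(lv['lv_size'])}B")

Run against the output above, this prints osd.0, osd.1, and osd.2 on /dev/loop3 through /dev/loop5, each 21470642176 bytes (about 20 GiB), which squares with the "59 GiB / 60 GiB avail" pgmap totals elsewhere in this section.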
Feb 25 13:42:45 compute-0 ceph-mon[76335]: pgmap v3685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:45 compute-0 sudo[412777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:42:45 compute-0 sudo[412777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:45 compute-0 sudo[412777]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:45 compute-0 sudo[412802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:42:45 compute-0 sudo[412802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:45 compute-0 podman[412839]: 2026-02-25 13:42:45.890730714 +0000 UTC m=+0.058649185 container create fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 13:42:45 compute-0 systemd[1]: Started libpod-conmon-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope.
Feb 25 13:42:45 compute-0 podman[412839]: 2026-02-25 13:42:45.853478745 +0000 UTC m=+0.021397246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:46 compute-0 podman[412839]: 2026-02-25 13:42:46.026238924 +0000 UTC m=+0.194157385 container init fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:42:46 compute-0 podman[412839]: 2026-02-25 13:42:46.036639903 +0000 UTC m=+0.204558374 container start fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:42:46 compute-0 charming_kowalevski[412855]: 167 167
Feb 25 13:42:46 compute-0 systemd[1]: libpod-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope: Deactivated successfully.
Feb 25 13:42:46 compute-0 podman[412839]: 2026-02-25 13:42:46.082195281 +0000 UTC m=+0.250113822 container attach fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:42:46 compute-0 podman[412839]: 2026-02-25 13:42:46.083822298 +0000 UTC m=+0.251740739 container died fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c239df0a4946ace8bfe7dbc22ffa400c6a9b0605155463151a94e6934459e13e-merged.mount: Deactivated successfully.
Feb 25 13:42:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:46 compute-0 podman[412839]: 2026-02-25 13:42:46.188140422 +0000 UTC m=+0.356058873 container remove fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:42:46 compute-0 systemd[1]: libpod-conmon-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope: Deactivated successfully.
Feb 25 13:42:46 compute-0 podman[412879]: 2026-02-25 13:42:46.405353429 +0000 UTC m=+0.066279504 container create 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:42:46 compute-0 podman[412879]: 2026-02-25 13:42:46.374615416 +0000 UTC m=+0.035541531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:42:46 compute-0 systemd[1]: Started libpod-conmon-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope.
Feb 25 13:42:46 compute-0 nova_compute[244014]: 2026-02-25 13:42:46.510 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:42:46 compute-0 podman[412879]: 2026-02-25 13:42:46.57607988 +0000 UTC m=+0.237006015 container init 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:42:46 compute-0 podman[412879]: 2026-02-25 13:42:46.581815845 +0000 UTC m=+0.242741920 container start 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:42:46 compute-0 podman[412879]: 2026-02-25 13:42:46.609650084 +0000 UTC m=+0.270576219 container attach 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:42:47 compute-0 lvm[412975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:42:47 compute-0 lvm[412975]: VG ceph_vg0 finished
Feb 25 13:42:47 compute-0 lvm[412976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:42:47 compute-0 lvm[412976]: VG ceph_vg1 finished
Feb 25 13:42:47 compute-0 lvm[412978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:42:47 compute-0 lvm[412978]: VG ceph_vg2 finished
Feb 25 13:42:47 compute-0 epic_kilby[412896]: {}
Feb 25 13:42:47 compute-0 systemd[1]: libpod-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Deactivated successfully.
Feb 25 13:42:47 compute-0 systemd[1]: libpod-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Consumed 1.256s CPU time.
Feb 25 13:42:47 compute-0 podman[412879]: 2026-02-25 13:42:47.413532322 +0000 UTC m=+1.074458367 container died 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9-merged.mount: Deactivated successfully.
Feb 25 13:42:47 compute-0 ceph-mon[76335]: pgmap v3686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:47 compute-0 podman[412879]: 2026-02-25 13:42:47.57473935 +0000 UTC m=+1.235665395 container remove 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:42:47 compute-0 systemd[1]: libpod-conmon-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Deactivated successfully.
Feb 25 13:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:42:47 compute-0 sudo[412802]: pam_unix(sudo:session): session closed for user root
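
cephadm's device refresh runs two probes back to back: `lvm list` (which produced the JSON above) and `raw list` (which epic_kilby answered with "{}", as expected on a host whose OSDs are all LVM-backed). A minimal reconstruction of those invocations, with the wrapper path, image digest, and argument shape copied verbatim from the sudo COMMAND= lines; this is a sketch of what the log shows, not the orchestrator's actual code:

import json, subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
WRAPPER = (f"/var/lib/ceph/{FSID}/cephadm."
           "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def ceph_volume(*args):
    # Same invocation shape as the logged COMMAND= lines.
    cmd = ["/bin/python3", WRAPPER, "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--", *args]
    return json.loads(subprocess.check_output(cmd))

lvm = ceph_volume("lvm", "list", "--format", "json")  # 3 OSDs in this log
raw = ceph_volume("raw", "list", "--format", "json")  # {} in this log
print(f"{len(lvm)} LVM OSDs, {len(raw)} raw OSDs")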
Feb 25 13:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:42:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:47 compute-0 sudo[412995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:42:47 compute-0 sudo[412995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:42:47 compute-0 sudo[412995]: pam_unix(sudo:session): session closed for user root
Feb 25 13:42:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:42:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:42:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:42:48 compute-0 nova_compute[244014]: 2026-02-25 13:42:48.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:49 compute-0 ceph-mon[76335]: pgmap v3687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:50 compute-0 ceph-mon[76335]: pgmap v3688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:51 compute-0 nova_compute[244014]: 2026-02-25 13:42:51.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:52 compute-0 nova_compute[244014]: 2026-02-25 13:42:52.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:42:53 compute-0 ceph-mon[76335]: pgmap v3689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:53 compute-0 nova_compute[244014]: 2026-02-25 13:42:53.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:42:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.087 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:42:55 compute-0 ceph-mon[76335]: pgmap v3690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:56 compute-0 nova_compute[244014]: 2026-02-25 13:42:56.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:56 compute-0 ceph-mon[76335]: pgmap v3691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:58 compute-0 nova_compute[244014]: 2026-02-25 13:42:58.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:42:59 compute-0 ceph-mon[76335]: pgmap v3692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:42:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:00 compute-0 podman[413020]: 2026-02-25 13:43:00.738704837 +0000 UTC m=+0.071752821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 13:43:00 compute-0 podman[413021]: 2026-02-25 13:43:00.785941083 +0000 UTC m=+0.119644636 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 13:43:01 compute-0 ceph-mon[76335]: pgmap v3693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:01 compute-0 nova_compute[244014]: 2026-02-25 13:43:01.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:03 compute-0 ceph-mon[76335]: pgmap v3694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:03 compute-0 nova_compute[244014]: 2026-02-25 13:43:03.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:05 compute-0 ceph-mon[76335]: pgmap v3695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:06 compute-0 nova_compute[244014]: 2026-02-25 13:43:06.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:06 compute-0 ceph-mon[76335]: pgmap v3696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:08 compute-0 nova_compute[244014]: 2026-02-25 13:43:08.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:09 compute-0 ceph-mon[76335]: pgmap v3697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:10 compute-0 nova_compute[244014]: 2026-02-25 13:43:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:11 compute-0 ceph-mon[76335]: pgmap v3698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:11 compute-0 nova_compute[244014]: 2026-02-25 13:43:11.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:43:12 compute-0 nova_compute[244014]: 2026-02-25 13:43:12.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:43:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:43:13 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432907870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.489 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:43:13 compute-0 ceph-mon[76335]: pgmap v3699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:13 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2432907870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.640 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.642 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.718 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.719 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.734 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:43:13 compute-0 nova_compute[244014]: 2026-02-25 13:43:13.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:43:14 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022764544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:43:14 compute-0 nova_compute[244014]: 2026-02-25 13:43:14.328 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:43:14 compute-0 nova_compute[244014]: 2026-02-25 13:43:14.335 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:43:14 compute-0 nova_compute[244014]: 2026-02-25 13:43:14.357 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:43:14 compute-0 nova_compute[244014]: 2026-02-25 13:43:14.361 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:43:14 compute-0 nova_compute[244014]: 2026-02-25 13:43:14.361 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:43:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4022764544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:43:16 compute-0 ceph-mon[76335]: pgmap v3700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:16 compute-0 nova_compute[244014]: 2026-02-25 13:43:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:17 compute-0 ceph-mon[76335]: pgmap v3701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:18 compute-0 nova_compute[244014]: 2026-02-25 13:43:18.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:19 compute-0 ceph-mon[76335]: pgmap v3702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:20 compute-0 sshd-session[413110]: Received disconnect from 91.224.92.54 port 41240:11:  [preauth]
Feb 25 13:43:20 compute-0 sshd-session[413110]: Disconnected from authenticating user root 91.224.92.54 port 41240 [preauth]
Feb 25 13:43:21 compute-0 nova_compute[244014]: 2026-02-25 13:43:21.362 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:21 compute-0 nova_compute[244014]: 2026-02-25 13:43:21.363 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:43:21 compute-0 nova_compute[244014]: 2026-02-25 13:43:21.364 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:43:21 compute-0 nova_compute[244014]: 2026-02-25 13:43:21.383 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:43:21 compute-0 ceph-mon[76335]: pgmap v3703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:21 compute-0 nova_compute[244014]: 2026-02-25 13:43:21.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:23 compute-0 ceph-mon[76335]: pgmap v3704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:23 compute-0 nova_compute[244014]: 2026-02-25 13:43:23.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:24 compute-0 ceph-mon[76335]: pgmap v3705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:24 compute-0 nova_compute[244014]: 2026-02-25 13:43:24.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:25 compute-0 nova_compute[244014]: 2026-02-25 13:43:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:25 compute-0 nova_compute[244014]: 2026-02-25 13:43:25.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:43:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:26 compute-0 nova_compute[244014]: 2026-02-25 13:43:26.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:27 compute-0 ceph-mon[76335]: pgmap v3706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:28 compute-0 nova_compute[244014]: 2026-02-25 13:43:28.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:28 compute-0 ceph-mon[76335]: pgmap v3707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:43:31
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', '.mgr']
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:43:31 compute-0 ceph-mon[76335]: pgmap v3708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:31 compute-0 nova_compute[244014]: 2026-02-25 13:43:31.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:43:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:43:31 compute-0 podman[413112]: 2026-02-25 13:43:31.72313026 +0000 UTC m=+0.066932463 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:43:31 compute-0 podman[413113]: 2026-02-25 13:43:31.753543353 +0000 UTC m=+0.097363326 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:43:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:43:32 compute-0 nova_compute[244014]: 2026-02-25 13:43:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:33 compute-0 ceph-mon[76335]: pgmap v3709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:33 compute-0 nova_compute[244014]: 2026-02-25 13:43:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:33 compute-0 nova_compute[244014]: 2026-02-25 13:43:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:33 compute-0 nova_compute[244014]: 2026-02-25 13:43:33.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:35 compute-0 ceph-mon[76335]: pgmap v3710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:36 compute-0 nova_compute[244014]: 2026-02-25 13:43:36.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:37 compute-0 ceph-mon[76335]: pgmap v3711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:38 compute-0 ceph-mon[76335]: pgmap v3712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:38 compute-0 nova_compute[244014]: 2026-02-25 13:43:38.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:39 compute-0 nova_compute[244014]: 2026-02-25 13:43:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:41 compute-0 ceph-mon[76335]: pgmap v3713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:41 compute-0 nova_compute[244014]: 2026-02-25 13:43:41.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:43 compute-0 ceph-mon[76335]: pgmap v3714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:43:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:43:43 compute-0 nova_compute[244014]: 2026-02-25 13:43:43.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:45 compute-0 ceph-mon[76335]: pgmap v3715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:46 compute-0 nova_compute[244014]: 2026-02-25 13:43:46.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:47 compute-0 ceph-mon[76335]: pgmap v3716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:43:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:43:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:43:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:43:47 compute-0 sudo[413155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:43:47 compute-0 sudo[413155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:47 compute-0 sudo[413155]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:47 compute-0 sudo[413180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:43:47 compute-0 sudo[413180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:48 compute-0 sudo[413180]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:43:48 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:43:48 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:43:48 compute-0 sudo[413236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:43:48 compute-0 sudo[413236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:48 compute-0 sudo[413236]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:48 compute-0 sudo[413261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:43:48 compute-0 sudo[413261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:48 compute-0 nova_compute[244014]: 2026-02-25 13:43:48.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.134491863 +0000 UTC m=+0.112826671 container create 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.047466644 +0000 UTC m=+0.025801472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:49 compute-0 systemd[1]: Started libpod-conmon-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope.
Feb 25 13:43:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.426254509 +0000 UTC m=+0.404589397 container init 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.433185228 +0000 UTC m=+0.411520016 container start 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:43:49 compute-0 competent_ardinghelli[413314]: 167 167
Feb 25 13:43:49 compute-0 systemd[1]: libpod-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope: Deactivated successfully.
Feb 25 13:43:49 compute-0 ceph-mon[76335]: pgmap v3717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:43:49 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.533281882 +0000 UTC m=+0.511616660 container attach 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:43:49 compute-0 podman[413298]: 2026-02-25 13:43:49.534720773 +0000 UTC m=+0.513055611 container died 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-181f92c405fa7e24e69e3f0ba55a9f352e51b35cca2a35b18af4bc8052bd0f18-merged.mount: Deactivated successfully.
Feb 25 13:43:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:50 compute-0 podman[413298]: 2026-02-25 13:43:50.046355351 +0000 UTC m=+1.024690169 container remove 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:43:50 compute-0 systemd[1]: libpod-conmon-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope: Deactivated successfully.
Feb 25 13:43:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.241898735 +0000 UTC m=+0.067683284 container create 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.200039763 +0000 UTC m=+0.025824372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:50 compute-0 systemd[1]: Started libpod-conmon-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope.
Feb 25 13:43:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.434845285 +0000 UTC m=+0.260629884 container init 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.445851471 +0000 UTC m=+0.271635990 container start 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.587878828 +0000 UTC m=+0.413663377 container attach 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:43:50 compute-0 nova_compute[244014]: 2026-02-25 13:43:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:43:50 compute-0 keen_keldysh[413356]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:43:50 compute-0 keen_keldysh[413356]: --> All data devices are unavailable
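"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is ceph-volume's drive-group filtering reporting that the three LVM candidates are already consumed (each carries OSD tags, as the lvm listing further down confirms), so there is nothing new to deploy. One way to see the per-device verdict is the inventory subcommand; a sketch assuming the containerized ceph-volume accepts the same passthrough form shown in the sudo lines of this log:

```python
import json
import subprocess

# Hedged helper: run the containerized ceph-volume inventory (needs root) and
# print each device's availability. fsid taken from the cephadm calls logged here.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
     "--", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True).stdout
for dev in json.loads(out):
    print(dev.get("path"), "available:", dev.get("available"))
```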
Feb 25 13:43:50 compute-0 systemd[1]: libpod-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope: Deactivated successfully.
Feb 25 13:43:50 compute-0 podman[413339]: 2026-02-25 13:43:50.945973019 +0000 UTC m=+0.771757578 container died 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a-merged.mount: Deactivated successfully.
Feb 25 13:43:51 compute-0 nova_compute[244014]: 2026-02-25 13:43:51.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:51 compute-0 ceph-mon[76335]: pgmap v3718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:51 compute-0 podman[413339]: 2026-02-25 13:43:51.825106247 +0000 UTC m=+1.650890806 container remove 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:43:51 compute-0 systemd[1]: libpod-conmon-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope: Deactivated successfully.
Feb 25 13:43:51 compute-0 sudo[413261]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:51 compute-0 sudo[413391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:43:51 compute-0 sudo[413391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:51 compute-0 sudo[413391]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:52 compute-0 sudo[413416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:43:52 compute-0 sudo[413416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.401823054 +0000 UTC m=+0.078903206 container create 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.345387094 +0000 UTC m=+0.022467286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:52 compute-0 systemd[1]: Started libpod-conmon-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope.
Feb 25 13:43:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.532821955 +0000 UTC m=+0.209902067 container init 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.539799516 +0000 UTC m=+0.216879628 container start 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:43:52 compute-0 hardcore_blackburn[413469]: 167 167
Feb 25 13:43:52 compute-0 systemd[1]: libpod-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope: Deactivated successfully.
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.547146117 +0000 UTC m=+0.224226229 container attach 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.547668892 +0000 UTC m=+0.224749004 container died 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab81f7ccd4ef84386ff3e835908aabb5ea977538dd7f8c38b347e8a140d3e9a7-merged.mount: Deactivated successfully.
Feb 25 13:43:52 compute-0 ceph-mon[76335]: pgmap v3719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:52 compute-0 podman[413453]: 2026-02-25 13:43:52.81235368 +0000 UTC m=+0.489433832 container remove 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:43:52 compute-0 systemd[1]: libpod-conmon-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope: Deactivated successfully.
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.093877282 +0000 UTC m=+0.123536037 container create 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.006762292 +0000 UTC m=+0.036421057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:53 compute-0 systemd[1]: Started libpod-conmon-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope.
Feb 25 13:43:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.303666905 +0000 UTC m=+0.333325701 container init 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.314513607 +0000 UTC m=+0.344172362 container start 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.340719709 +0000 UTC m=+0.370378464 container attach 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:43:53 compute-0 awesome_cray[413510]: {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     "0": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "devices": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "/dev/loop3"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             ],
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_name": "ceph_lv0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_size": "21470642176",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "name": "ceph_lv0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "tags": {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_name": "ceph",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.crush_device_class": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.encrypted": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.objectstore": "bluestore",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_id": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.vdo": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.with_tpm": "0"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             },
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "vg_name": "ceph_vg0"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         }
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     ],
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     "1": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "devices": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "/dev/loop4"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             ],
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_name": "ceph_lv1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_size": "21470642176",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "name": "ceph_lv1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "tags": {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_name": "ceph",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.crush_device_class": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.encrypted": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.objectstore": "bluestore",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_id": "1",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.vdo": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.with_tpm": "0"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             },
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "vg_name": "ceph_vg1"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         }
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     ],
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     "2": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "devices": [
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "/dev/loop5"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             ],
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_name": "ceph_lv2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_size": "21470642176",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "name": "ceph_lv2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "tags": {
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.cluster_name": "ceph",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.crush_device_class": "",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.encrypted": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.objectstore": "bluestore",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osd_id": "2",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.vdo": "0",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:                 "ceph.with_tpm": "0"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             },
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "type": "block",
Feb 25 13:43:53 compute-0 awesome_cray[413510]:             "vg_name": "ceph_vg2"
Feb 25 13:43:53 compute-0 awesome_cray[413510]:         }
Feb 25 13:43:53 compute-0 awesome_cray[413510]:     ]
Feb 25 13:43:53 compute-0 awesome_cray[413510]: }
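The JSON block above is the complete `ceph-volume lvm list --format json` result: a map of OSD id to the logical volume(s) backing it, with the ceph.* LV tags present both flattened (`lv_tags`) and parsed (`tags`). A minimal parsing sketch, assuming the payload has been captured to a file; it also checks that three ~20 GiB LVs (lv_size 21470642176 bytes each) account for the 60 GiB raw capacity the surrounding pgmap lines report:

```python
import json

with open("lvm_list.json") as fh:  # assumed capture of the JSON output above
    osds = json.load(fh)

total_bytes = 0
for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        total_bytes += int(lv["lv_size"])
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

print(f"raw capacity ~ {total_bytes / 2**30:.0f} GiB")  # 3 x 21470642176 B ~ 60 GiB
```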
Feb 25 13:43:53 compute-0 systemd[1]: libpod-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope: Deactivated successfully.
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.637885161 +0000 UTC m=+0.667543886 container died 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 13:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53-merged.mount: Deactivated successfully.
Feb 25 13:43:53 compute-0 podman[413494]: 2026-02-25 13:43:53.882257237 +0000 UTC m=+0.911915952 container remove 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:43:53 compute-0 systemd[1]: libpod-conmon-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope: Deactivated successfully.
Feb 25 13:43:53 compute-0 sudo[413416]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:53 compute-0 nova_compute[244014]: 2026-02-25 13:43:53.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:53 compute-0 sudo[413531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:43:53 compute-0 sudo[413531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:53 compute-0 sudo[413531]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:54 compute-0 sudo[413556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:43:54 compute-0 sudo[413556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.300861255 +0000 UTC m=+0.056629117 container create c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.272733187 +0000 UTC m=+0.028501119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:54 compute-0 systemd[1]: Started libpod-conmon-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope.
Feb 25 13:43:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.505439508 +0000 UTC m=+0.261207400 container init c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.512513301 +0000 UTC m=+0.268281153 container start c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:43:54 compute-0 distracted_jones[413610]: 167 167
Feb 25 13:43:54 compute-0 systemd[1]: libpod-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope: Deactivated successfully.
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.572527094 +0000 UTC m=+0.328294966 container attach c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.57342423 +0000 UTC m=+0.329192082 container died c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8c8ca95ce2a8ed9feef03fcd061634892afd217b1dae720f3cfe66d45fc2008-merged.mount: Deactivated successfully.
Feb 25 13:43:54 compute-0 podman[413593]: 2026-02-25 13:43:54.727209685 +0000 UTC m=+0.482977537 container remove c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:43:54 compute-0 systemd[1]: libpod-conmon-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope: Deactivated successfully.
Feb 25 13:43:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:43:54 compute-0 podman[413633]: 2026-02-25 13:43:54.954174241 +0000 UTC m=+0.093920298 container create d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:43:54 compute-0 podman[413633]: 2026-02-25 13:43:54.900959083 +0000 UTC m=+0.040705210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:43:55 compute-0 systemd[1]: Started libpod-conmon-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope.
Feb 25 13:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.087 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:43:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:43:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:43:55 compute-0 podman[413633]: 2026-02-25 13:43:55.108011356 +0000 UTC m=+0.247757403 container init d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:43:55 compute-0 podman[413633]: 2026-02-25 13:43:55.113883095 +0000 UTC m=+0.253629142 container start d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:43:55 compute-0 podman[413633]: 2026-02-25 13:43:55.282183977 +0000 UTC m=+0.421930064 container attach d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:43:55 compute-0 ceph-mon[76335]: pgmap v3720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:55 compute-0 lvm[413729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:43:55 compute-0 lvm[413729]: VG ceph_vg1 finished
Feb 25 13:43:55 compute-0 lvm[413728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:43:55 compute-0 lvm[413728]: VG ceph_vg0 finished
Feb 25 13:43:55 compute-0 lvm[413731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:43:55 compute-0 lvm[413731]: VG ceph_vg2 finished
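The three lvm[] pairs above are udev-driven event activation: as each loop PV is observed, pvscan marks it online, and once every PV of a volume group is present the VG is declared complete and finished activating. The resulting state can be inspected after the fact; a small sketch using LVM's JSON reporting (run as root):

```python
import json
import subprocess

# lvs --reportformat json emits {"report": [{"lv": [...]}]}; list each ceph LV
# and its VG, matching the ceph_vg0..2 / ceph_lv0..2 pairs seen in this log.
report = json.loads(subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_size"],
    check=True, capture_output=True, text=True).stdout)
for lv in report["report"][0]["lv"]:
    print(lv["vg_name"], lv["lv_name"], lv["lv_size"])
```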
Feb 25 13:43:55 compute-0 elegant_dijkstra[413650]: {}
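`{}` is the entire `ceph-volume raw list` result: every OSD on this host is LVM-backed, so the raw (whole-device/partition) listing is empty and cephadm relies on the lvm listing gathered above. Both calls share the invocation pattern visible in the sudo COMMAND= lines; a hedged wrapper mirroring it:

```python
import json
import subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def ceph_volume_json(*args: str):
    """Sketch of the 'cephadm --image ... --timeout 895 ceph-volume --fsid ...
    -- <subcommand> --format json' calls logged on this host; parses the JSON."""
    cmd = ["cephadm", "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--", *args, "--format", "json"]
    return json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                     text=True).stdout)

print(ceph_volume_json("raw", "list"))        # {} on this host
print(len(ceph_volume_json("lvm", "list")))   # 3 OSD entries
```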
Feb 25 13:43:55 compute-0 systemd[1]: libpod-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Deactivated successfully.
Feb 25 13:43:55 compute-0 systemd[1]: libpod-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Consumed 1.200s CPU time.
Feb 25 13:43:55 compute-0 podman[413734]: 2026-02-25 13:43:55.952084159 +0000 UTC m=+0.030593389 container died d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596-merged.mount: Deactivated successfully.
Feb 25 13:43:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:56 compute-0 podman[413734]: 2026-02-25 13:43:56.485899905 +0000 UTC m=+0.564409155 container remove d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:43:56 compute-0 systemd[1]: libpod-conmon-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Deactivated successfully.
Feb 25 13:43:56 compute-0 sudo[413556]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:43:56 compute-0 nova_compute[244014]: 2026-02-25 13:43:56.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:43:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:43:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
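The two handle_command lines show the cephadm mgr module persisting what it just learned: the fresh device inventory goes under the config-key mgr/cephadm/host.compute-0.devices.0 and the host record under mgr/cephadm/host.compute-0. Those keys can be read back with the standard ceph CLI; a minimal sketch, assuming (as cephadm appears to do here) that the stored value is a JSON blob:

```python
import json
import subprocess

# 'ceph config-key get' prints the stored value; cephadm keeps per-host device
# inventories under mgr/cephadm/host.<hostname>.devices.<n> (key names from the log).
key = "mgr/cephadm/host.compute-0.devices.0"
raw = subprocess.run(["ceph", "config-key", "get", key],
                     check=True, capture_output=True, text=True).stdout
devices = json.loads(raw)  # assumption: the value is JSON-encoded
print(type(devices).__name__, len(raw), "bytes")
```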
Feb 25 13:43:56 compute-0 sudo[413750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:43:56 compute-0 sudo[413750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:43:56 compute-0 sudo[413750]: pam_unix(sudo:session): session closed for user root
Feb 25 13:43:57 compute-0 ceph-mon[76335]: pgmap v3721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:43:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:43:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:58 compute-0 ceph-mon[76335]: pgmap v3722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:43:59 compute-0 nova_compute[244014]: 2026-02-25 13:43:59.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:43:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:01 compute-0 ceph-mon[76335]: pgmap v3723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:01 compute-0 nova_compute[244014]: 2026-02-25 13:44:01.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:02 compute-0 podman[413775]: 2026-02-25 13:44:02.720251917 +0000 UTC m=+0.062722422 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 13:44:02 compute-0 podman[413776]: 2026-02-25 13:44:02.772808975 +0000 UTC m=+0.109819824 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller)
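The two health_status events above are podman's periodic healthchecks for ovn_metadata_agent and ovn_controller: each container mounts /var/lib/openstack/healthchecks/<name> and runs the /openstack/healthcheck test named in its config_data, here reporting health_status=healthy with a failing streak of 0. The current verdict can also be queried directly; a small sketch using podman inspect (field path as on recent podman releases):

```python
import subprocess

# podman inspect exposes the same healthcheck state the journal events report.
for name in ("ovn_metadata_agent", "ovn_controller"):
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        check=True, capture_output=True, text=True).stdout.strip()
    print(name, status)
```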
Feb 25 13:44:03 compute-0 ceph-mon[76335]: pgmap v3724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:04 compute-0 nova_compute[244014]: 2026-02-25 13:44:04.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:05 compute-0 ceph-mon[76335]: pgmap v3725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:06 compute-0 nova_compute[244014]: 2026-02-25 13:44:06.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:06 compute-0 ceph-mon[76335]: pgmap v3726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:07 compute-0 nova_compute[244014]: 2026-02-25 13:44:07.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:07 compute-0 nova_compute[244014]: 2026-02-25 13:44:07.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:44:07 compute-0 nova_compute[244014]: 2026-02-25 13:44:07.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
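
The three entries above are one pass of a nova periodic task: oslo.service dispatches _run_pending_deletes, which finds nothing to clean. A minimal, self-contained sketch of that periodic-task pattern, using an illustrative manager class that is not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class CleanupManager(periodic_task.PeriodicTasks):
        # Registered by the decorator; run at most once per `spacing` seconds.
        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            print("There are 0 instances to clean")

    mgr = CleanupManager(cfg.CONF)
    # A service loops on this call; each invocation logs "Running periodic
    # task ..." for every registered task whose interval has elapsed.
    mgr.run_periodic_tasks(context=None)
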
Feb 25 13:44:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:09 compute-0 nova_compute[244014]: 2026-02-25 13:44:09.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:09 compute-0 ceph-mon[76335]: pgmap v3727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:09 compute-0 nova_compute[244014]: 2026-02-25 13:44:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:09 compute-0 nova_compute[244014]: 2026-02-25 13:44:09.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:44:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:10 compute-0 ceph-mon[76335]: pgmap v3728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:11 compute-0 nova_compute[244014]: 2026-02-25 13:44:11.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:11 compute-0 nova_compute[244014]: 2026-02-25 13:44:11.892 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:13 compute-0 ceph-mon[76335]: pgmap v3729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:44:14 compute-0 nova_compute[244014]: 2026-02-25 13:44:14.921 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:44:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:44:15 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048526410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.461 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:44:15 compute-0 ceph-mon[76335]: pgmap v3730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:15 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3048526410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
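
The resource audit shells out to the exact command logged at 13:44:14.921 and parses its JSON for cluster-wide free space, which is where free_disk in the hypervisor-view entry below comes from. A sketch of that parsing, assuming the usual `ceph df --format=json` field names (stats.total_bytes / stats.total_avail_bytes):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024**3
    total_gb = stats["total_bytes"] / 1024**3
    # cf. "free_disk=59.98723818361759GB" and "59 GiB / 60 GiB avail" nearby
    print(f"free_disk={free_gb}GB of {total_gb:.0f}GB")
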
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.643 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.645 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
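
The WARNING above fires because the guest's topology reports multiple sockets per NUMA node (and, as the device list shows, every PCI device has numa_node: null), so nova cannot honor the `socket` value of the PCI NUMA affinity policy, which guests would normally request via e.g. the hw:pci_numa_affinity_policy=socket flavor extra spec. It is harmless unless that policy is actually in use.
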
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.646 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.646 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:44:15 compute-0 nova_compute[244014]: 2026-02-25 13:44:15.761 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:44:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:44:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324508909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.331 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.346 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
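
The inventory in the entry above determines placement capacity through the standard formula capacity = (total - reserved) * allocation_ratio. A worked check against the logged values:

    inv = {
        "VCPU": (8, 0, 4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB": (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inv.items():
        # capacity = (total - reserved) * allocation_ratio
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0       -> 8 physical cores overcommitted 4x
    # MEMORY_MB 7167.0
    # DISK_GB 52.2    -> ratio < 1.0 undercommits the shared disk
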
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.348 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.348 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:44:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1324508909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:44:16 compute-0 nova_compute[244014]: 2026-02-25 13:44:16.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:17 compute-0 ceph-mon[76335]: pgmap v3731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:18 compute-0 ceph-mon[76335]: pgmap v3732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:19 compute-0 nova_compute[244014]: 2026-02-25 13:44:19.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:21 compute-0 nova_compute[244014]: 2026-02-25 13:44:21.349 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:21 compute-0 nova_compute[244014]: 2026-02-25 13:44:21.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:44:21 compute-0 nova_compute[244014]: 2026-02-25 13:44:21.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:44:21 compute-0 nova_compute[244014]: 2026-02-25 13:44:21.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:44:21 compute-0 ceph-mon[76335]: pgmap v3733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:21 compute-0 nova_compute[244014]: 2026-02-25 13:44:21.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:23 compute-0 ceph-mon[76335]: pgmap v3734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:24 compute-0 nova_compute[244014]: 2026-02-25 13:44:24.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:25 compute-0 ceph-mon[76335]: pgmap v3735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:26 compute-0 nova_compute[244014]: 2026-02-25 13:44:26.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:26 compute-0 nova_compute[244014]: 2026-02-25 13:44:26.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:27 compute-0 ceph-mon[76335]: pgmap v3736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:27 compute-0 nova_compute[244014]: 2026-02-25 13:44:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:27 compute-0 nova_compute[244014]: 2026-02-25 13:44:27.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
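
The skip above means deferred delete is off: with reclaim_instance_interval at its default of 0, deleted instances are purged immediately rather than parked in SOFT_DELETED. Enabling it is a nova.conf setting; an example of the shape (not this host's actual configuration):

    [DEFAULT]
    # Seconds an instance stays SOFT_DELETED (and restorable) before
    # _reclaim_queued_deletes reclaims it; <= 0 disables soft delete.
    reclaim_instance_interval = 3600
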
Feb 25 13:44:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:29 compute-0 nova_compute[244014]: 2026-02-25 13:44:29.069 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:29 compute-0 ceph-mon[76335]: pgmap v3737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:44:31
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:44:31 compute-0 ceph-mon[76335]: pgmap v3738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:44:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:44:31 compute-0 nova_compute[244014]: 2026-02-25 13:44:31.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:44:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:44:32 compute-0 nova_compute[244014]: 2026-02-25 13:44:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:33 compute-0 ceph-mon[76335]: pgmap v3739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:33 compute-0 podman[413862]: 2026-02-25 13:44:33.734168442 +0000 UTC m=+0.072924004 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 13:44:33 compute-0 podman[413863]: 2026-02-25 13:44:33.805716667 +0000 UTC m=+0.137439087 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 13:44:33 compute-0 nova_compute[244014]: 2026-02-25 13:44:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:34 compute-0 nova_compute[244014]: 2026-02-25 13:44:34.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:35 compute-0 ceph-mon[76335]: pgmap v3740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:35 compute-0 nova_compute[244014]: 2026-02-25 13:44:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:36 compute-0 nova_compute[244014]: 2026-02-25 13:44:36.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:37 compute-0 ceph-mon[76335]: pgmap v3741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:38 compute-0 ceph-mon[76335]: pgmap v3742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:39 compute-0 nova_compute[244014]: 2026-02-25 13:44:39.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:40 compute-0 nova_compute[244014]: 2026-02-25 13:44:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:41 compute-0 ceph-mon[76335]: pgmap v3743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:41 compute-0 nova_compute[244014]: 2026-02-25 13:44:41.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:43 compute-0 ceph-mon[76335]: pgmap v3744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:44:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
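
The "pg target" values above are reproducible as usage_ratio * bias * (target PGs per OSD * OSD count). Assuming the default mon_target_pg_per_osd of 100 and this cluster's three OSDs (the ceph-volume batch later in the log builds exactly three), the multiplier is 300, and every line checks out:

    # Reproducing the pg_autoscaler targets logged above. The "quantized"
    # value additionally applies power-of-two rounding and per-pool
    # minimums, which this sketch does not model.
    for pool, usage, bias in [
        (".mgr", 7.185749983720779e-06, 1.0),
        ("images", 0.0006714637386478266, 1.0),
        ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0),
    ]:
        print(pool, usage * bias * 300)
    # .mgr               0.0021557249951162337
    # images             0.20143912159434796 (to float rounding)
    # cephfs.cephfs.meta 0.0016699640237160273
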
Feb 25 13:44:44 compute-0 nova_compute[244014]: 2026-02-25 13:44:44.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:45 compute-0 ceph-mon[76335]: pgmap v3745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:46 compute-0 nova_compute[244014]: 2026-02-25 13:44:46.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:46 compute-0 ceph-mon[76335]: pgmap v3746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:44:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:44:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:44:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:44:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
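
This df / "osd pool get-quota" pair against the volumes pool is the shape of a storage-capacity check by an OpenStack service (the caller only identifies itself as client.openstack from 192.168.122.10). The same query works from any ceph CLI as `ceph osd pool get-quota volumes --format=json`, whose output includes quota_max_bytes and quota_max_objects, with 0 meaning unlimited.
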
Feb 25 13:44:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:49 compute-0 ceph-mon[76335]: pgmap v3747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:49 compute-0 nova_compute[244014]: 2026-02-25 13:44:49.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:51 compute-0 ceph-mon[76335]: pgmap v3748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:51 compute-0 nova_compute[244014]: 2026-02-25 13:44:51.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:53 compute-0 ceph-mon[76335]: pgmap v3749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:53 compute-0 nova_compute[244014]: 2026-02-25 13:44:53.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:44:54 compute-0 nova_compute[244014]: 2026-02-25 13:44:54.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.088 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:44:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:44:55 compute-0 ceph-mon[76335]: pgmap v3750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:56 compute-0 nova_compute[244014]: 2026-02-25 13:44:56.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:57 compute-0 sudo[413907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:44:57 compute-0 sudo[413907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:57 compute-0 sudo[413907]: pam_unix(sudo:session): session closed for user root
Feb 25 13:44:57 compute-0 sudo[413932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:44:57 compute-0 sudo[413932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:57 compute-0 sudo[413932]: pam_unix(sudo:session): session closed for user root
Feb 25 13:44:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:44:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:44:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:57 compute-0 sudo[413976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:44:57 compute-0 ceph-mon[76335]: pgmap v3751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:57 compute-0 sudo[413976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:57 compute-0 sudo[413976]: pam_unix(sudo:session): session closed for user root
Feb 25 13:44:57 compute-0 sudo[414001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:44:57 compute-0 sudo[414001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:44:58 compute-0 sudo[414001]: pam_unix(sudo:session): session closed for user root
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:44:58 compute-0 sudo[414056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:44:58 compute-0 sudo[414056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:58 compute-0 sudo[414056]: pam_unix(sudo:session): session closed for user root
Feb 25 13:44:58 compute-0 sudo[414081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:44:58 compute-0 sudo[414081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:44:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
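
The ceph-volume invocation above (sudo[414081], 13:44:58) carries CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group and batches three pre-created LVs into bluestore OSDs. That is the command cephadm generates from an OSD service spec of roughly this shape; a hypothetical reconstruction, since the spec itself is not in the log:

    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2
      objectstore: bluestore
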
Feb 25 13:44:58 compute-0 podman[414118]: 2026-02-25 13:44:58.826302067 +0000 UTC m=+0.103926105 container create af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:44:58 compute-0 podman[414118]: 2026-02-25 13:44:58.752631502 +0000 UTC m=+0.030255660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:44:58 compute-0 systemd[1]: Started libpod-conmon-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope.
Feb 25 13:44:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:44:59 compute-0 podman[414118]: 2026-02-25 13:44:59.0211278 +0000 UTC m=+0.298751928 container init af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:44:59 compute-0 podman[414118]: 2026-02-25 13:44:59.029906362 +0000 UTC m=+0.307530400 container start af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:44:59 compute-0 awesome_lichterman[414134]: 167 167
Feb 25 13:44:59 compute-0 systemd[1]: libpod-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope: Deactivated successfully.
Feb 25 13:44:59 compute-0 podman[414118]: 2026-02-25 13:44:59.086357423 +0000 UTC m=+0.363981881 container attach af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:44:59 compute-0 podman[414118]: 2026-02-25 13:44:59.087564938 +0000 UTC m=+0.365189006 container died af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:44:59 compute-0 nova_compute[244014]: 2026-02-25 13:44:59.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-30d8bc6a90a6be1c36977364c5b591e7623a4255e86d2ecba616f1dbb517bdd6-merged.mount: Deactivated successfully.
Feb 25 13:44:59 compute-0 podman[414118]: 2026-02-25 13:44:59.901127972 +0000 UTC m=+1.178752040 container remove af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:44:59 compute-0 systemd[1]: libpod-conmon-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope: Deactivated successfully.
Feb 25 13:44:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:44:59 compute-0 ceph-mon[76335]: pgmap v3752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:00 compute-0 podman[414158]: 2026-02-25 13:45:00.04524171 +0000 UTC m=+0.028129708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:45:00 compute-0 podman[414158]: 2026-02-25 13:45:00.177419785 +0000 UTC m=+0.160307703 container create a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:45:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:00 compute-0 systemd[1]: Started libpod-conmon-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope.
Feb 25 13:45:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:00 compute-0 podman[414158]: 2026-02-25 13:45:00.582075742 +0000 UTC m=+0.564963670 container init a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:45:00 compute-0 podman[414158]: 2026-02-25 13:45:00.591784821 +0000 UTC m=+0.574672769 container start a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:45:00 compute-0 podman[414158]: 2026-02-25 13:45:00.701528042 +0000 UTC m=+0.684415980 container attach a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:45:01 compute-0 ecstatic_haibt[414174]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:45:01 compute-0 ecstatic_haibt[414174]: --> All data devices are unavailable
Feb 25 13:45:01 compute-0 systemd[1]: libpod-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope: Deactivated successfully.
Feb 25 13:45:01 compute-0 podman[414158]: 2026-02-25 13:45:01.07347747 +0000 UTC m=+1.056365398 container died a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:45:01 compute-0 ceph-mon[76335]: pgmap v3753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c-merged.mount: Deactivated successfully.
Feb 25 13:45:01 compute-0 podman[414158]: 2026-02-25 13:45:01.583642617 +0000 UTC m=+1.566530565 container remove a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:45:01 compute-0 systemd[1]: libpod-conmon-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope: Deactivated successfully.
Feb 25 13:45:01 compute-0 sudo[414081]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:01 compute-0 sudo[414208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:45:01 compute-0 sudo[414208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:45:01 compute-0 sudo[414208]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:01 compute-0 sudo[414233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:45:01 compute-0 sudo[414233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:45:01 compute-0 nova_compute[244014]: 2026-02-25 13:45:01.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.007397893 +0000 UTC m=+0.033081381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.170421303 +0000 UTC m=+0.196104741 container create a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:45:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:02 compute-0 systemd[1]: Started libpod-conmon-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope.
Feb 25 13:45:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.383860811 +0000 UTC m=+0.409544259 container init a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.389157143 +0000 UTC m=+0.414840541 container start a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:45:02 compute-0 epic_matsumoto[414287]: 167 167
Feb 25 13:45:02 compute-0 systemd[1]: libpod-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope: Deactivated successfully.
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.500648584 +0000 UTC m=+0.526332082 container attach a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.501371394 +0000 UTC m=+0.527054822 container died a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-777d5397c68e377d63c29e0279f7b6c3bae18c844ba775a2419203a2dfdd5e3d-merged.mount: Deactivated successfully.
Feb 25 13:45:02 compute-0 podman[414270]: 2026-02-25 13:45:02.789523677 +0000 UTC m=+0.815207085 container remove a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:45:02 compute-0 systemd[1]: libpod-conmon-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope: Deactivated successfully.
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:02.946533675 +0000 UTC m=+0.028941952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.005853487 +0000 UTC m=+0.088261754 container create 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:45:03 compute-0 systemd[1]: Started libpod-conmon-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope.
Feb 25 13:45:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.151249991 +0000 UTC m=+0.233658338 container init 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.160883728 +0000 UTC m=+0.243292025 container start 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.36718272 +0000 UTC m=+0.449591087 container attach 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]: {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     "0": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "devices": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "/dev/loop3"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             ],
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_name": "ceph_lv0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_size": "21470642176",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "name": "ceph_lv0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "tags": {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_name": "ceph",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.crush_device_class": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.encrypted": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.objectstore": "bluestore",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_id": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.vdo": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.with_tpm": "0"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             },
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "vg_name": "ceph_vg0"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         }
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     ],
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     "1": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "devices": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "/dev/loop4"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             ],
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_name": "ceph_lv1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_size": "21470642176",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "name": "ceph_lv1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "tags": {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_name": "ceph",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.crush_device_class": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.encrypted": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.objectstore": "bluestore",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_id": "1",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.vdo": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.with_tpm": "0"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             },
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "vg_name": "ceph_vg1"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         }
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     ],
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     "2": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "devices": [
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "/dev/loop5"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             ],
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_name": "ceph_lv2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_size": "21470642176",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "name": "ceph_lv2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "tags": {
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.cluster_name": "ceph",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.crush_device_class": "",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.encrypted": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.objectstore": "bluestore",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osd_id": "2",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.vdo": "0",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:                 "ceph.with_tpm": "0"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             },
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "type": "block",
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:             "vg_name": "ceph_vg2"
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:         }
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]:     ]
Feb 25 13:45:03 compute-0 ecstatic_elgamal[414328]: }
Feb 25 13:45:03 compute-0 ceph-mon[76335]: pgmap v3754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:03 compute-0 systemd[1]: libpod-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope: Deactivated successfully.
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.494283009 +0000 UTC m=+0.576691306 container died 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89-merged.mount: Deactivated successfully.
Feb 25 13:45:03 compute-0 podman[414311]: 2026-02-25 13:45:03.995554071 +0000 UTC m=+1.077962338 container remove 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 13:45:04 compute-0 sudo[414233]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:04 compute-0 podman[414349]: 2026-02-25 13:45:04.087561382 +0000 UTC m=+0.304218925 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:45:04 compute-0 systemd[1]: libpod-conmon-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope: Deactivated successfully.
Feb 25 13:45:04 compute-0 sudo[414360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:45:04 compute-0 sudo[414360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:45:04 compute-0 sudo[414360]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:04 compute-0 sudo[414401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:45:04 compute-0 sudo[414401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:45:04 compute-0 podman[414392]: 2026-02-25 13:45:04.233744849 +0000 UTC m=+0.097249453 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 13:45:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:04 compute-0 nova_compute[244014]: 2026-02-25 13:45:04.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.444763468 +0000 UTC m=+0.025238676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.518419282 +0000 UTC m=+0.098894500 container create 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 13:45:04 compute-0 systemd[1]: Started libpod-conmon-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope.
Feb 25 13:45:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.651715469 +0000 UTC m=+0.232190677 container init 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.659646957 +0000 UTC m=+0.240122145 container start 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:45:04 compute-0 boring_sanderson[414474]: 167 167
Feb 25 13:45:04 compute-0 systemd[1]: libpod-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope: Deactivated successfully.
Feb 25 13:45:04 compute-0 conmon[414474]: conmon 7375749be80d90bed8d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope/container/memory.events
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.669971473 +0000 UTC m=+0.250446691 container attach 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.67091736 +0000 UTC m=+0.251392548 container died 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-475a2c90b3ca4dbebb3a7e0f48695749c7a5360fb2761835599df753194bc314-merged.mount: Deactivated successfully.
Feb 25 13:45:04 compute-0 podman[414457]: 2026-02-25 13:45:04.93143969 +0000 UTC m=+0.511914898 container remove 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 13:45:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:04 compute-0 systemd[1]: libpod-conmon-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope: Deactivated successfully.
Feb 25 13:45:05 compute-0 podman[414498]: 2026-02-25 13:45:05.070203704 +0000 UTC m=+0.037134967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:45:05 compute-0 podman[414498]: 2026-02-25 13:45:05.142604172 +0000 UTC m=+0.109535415 container create ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:45:05 compute-0 systemd[1]: Started libpod-conmon-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope.
Feb 25 13:45:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:45:05 compute-0 podman[414498]: 2026-02-25 13:45:05.278633488 +0000 UTC m=+0.245564751 container init ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:45:05 compute-0 podman[414498]: 2026-02-25 13:45:05.285080273 +0000 UTC m=+0.252011506 container start ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:45:05 compute-0 podman[414498]: 2026-02-25 13:45:05.331157986 +0000 UTC m=+0.298089219 container attach ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:45:05 compute-0 ceph-mon[76335]: pgmap v3755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:06 compute-0 lvm[414592]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:45:06 compute-0 lvm[414592]: VG ceph_vg0 finished
Feb 25 13:45:06 compute-0 lvm[414593]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:45:06 compute-0 lvm[414593]: VG ceph_vg1 finished
Feb 25 13:45:06 compute-0 lvm[414595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:45:06 compute-0 lvm[414595]: VG ceph_vg2 finished
Feb 25 13:45:06 compute-0 adoring_wilbur[414514]: {}
Feb 25 13:45:06 compute-0 systemd[1]: libpod-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Deactivated successfully.
Feb 25 13:45:06 compute-0 systemd[1]: libpod-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Consumed 1.353s CPU time.
Feb 25 13:45:06 compute-0 podman[414498]: 2026-02-25 13:45:06.200905296 +0000 UTC m=+1.167836539 container died ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:45:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e-merged.mount: Deactivated successfully.
Feb 25 13:45:06 compute-0 podman[414498]: 2026-02-25 13:45:06.766802911 +0000 UTC m=+1.733734144 container remove ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 13:45:06 compute-0 systemd[1]: libpod-conmon-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Deactivated successfully.
Feb 25 13:45:06 compute-0 nova_compute[244014]: 2026-02-25 13:45:06.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:06 compute-0 sudo[414401]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:45:06 compute-0 ceph-mon[76335]: pgmap v3756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:45:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:45:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:45:07 compute-0 sudo[414613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:45:07 compute-0 sudo[414613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:45:07 compute-0 sudo[414613]: pam_unix(sudo:session): session closed for user root
Feb 25 13:45:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:45:08 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:45:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:09 compute-0 ceph-mon[76335]: pgmap v3757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:09 compute-0 nova_compute[244014]: 2026-02-25 13:45:09.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:11 compute-0 ceph-mon[76335]: pgmap v3758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:11 compute-0 nova_compute[244014]: 2026-02-25 13:45:11.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:11 compute-0 nova_compute[244014]: 2026-02-25 13:45:11.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:13 compute-0 ceph-mon[76335]: pgmap v3759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:14 compute-0 nova_compute[244014]: 2026-02-25 13:45:14.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:15 compute-0 ceph-mon[76335]: pgmap v3760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
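The three lockutils lines above are the standard oslo.concurrency trace for a serialized section: the library's inner() wrapper logs the acquire attempt, the acquisition (with wait time), and the release (with hold time), naming the wrapped method. A minimal sketch of the pattern, assuming only the stock lockutils API; the function body here is illustrative, not Nova's actual code:

    # Sketch: the decorator behind "Acquiring lock ... acquired ... released"
    # triplets such as the clean_compute_node_cache one above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the named lock held; lockutils logs the acquire and hold
        # timings at DEBUG, in exactly the shape of the journal lines above.
        pass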
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:45:15 compute-0 nova_compute[244014]: 2026-02-25 13:45:15.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:45:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:45:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491815796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.457 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:45:16 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2491815796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
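The round trip above (the resource tracker runs `ceph df --format=json`, the mon dispatches it for client.openstack, the CLI returns 0 in ~0.5 s) is reproducible outside Nova. A hedged sketch using the same invocation copied from the log; the helper name is made up, and the JSON keys ("stats", "total_bytes", "total_avail_bytes") are the standard ceph df layout rather than anything this log prints:

    # Sketch: run the capacity query logged above and pull the cluster
    # totals out of the JSON reply.
    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
        return json.loads(out)

    stats = ceph_df()["stats"]
    print("avail: %.1f GiB of %.1f GiB"
          % (stats["total_avail_bytes"] / 2**30,
             stats["total_bytes"] / 2**30))

The later poll from 192.168.122.10 pairs the same df call with `osd pool get-quota` on the volumes pool, the usual shape of a volume-service capacity check.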
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.659 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.747 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.748 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.772 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:45:16 compute-0 nova_compute[244014]: 2026-02-25 13:45:16.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:45:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849367050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:45:17 compute-0 nova_compute[244014]: 2026-02-25 13:45:17.344 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:45:17 compute-0 nova_compute[244014]: 2026-02-25 13:45:17.351 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:45:17 compute-0 nova_compute[244014]: 2026-02-25 13:45:17.377 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
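The inventory payload above is what placement sizes the provider with; usable capacity per resource class is (total - reserved) * allocation_ratio, placement's documented capacity rule. A worked sketch with the logged numbers (the dict literal is just the log data re-keyed):

    # Sketch: usable capacity per resource class from the inventory above,
    # using placement's (total - reserved) * allocation_ratio rule.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2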
Feb 25 13:45:17 compute-0 nova_compute[244014]: 2026-02-25 13:45:17.380 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:45:17 compute-0 nova_compute[244014]: 2026-02-25 13:45:17.380 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:45:17 compute-0 ceph-mon[76335]: pgmap v3761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3849367050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:45:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:18 compute-0 ceph-mon[76335]: pgmap v3762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:19 compute-0 nova_compute[244014]: 2026-02-25 13:45:19.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:21 compute-0 ceph-mon[76335]: pgmap v3763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:21 compute-0 nova_compute[244014]: 2026-02-25 13:45:21.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:22 compute-0 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:22 compute-0 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:45:22 compute-0 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:45:22 compute-0 nova_compute[244014]: 2026-02-25 13:45:22.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:45:23 compute-0 ceph-mon[76335]: pgmap v3764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:24 compute-0 nova_compute[244014]: 2026-02-25 13:45:24.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:24 compute-0 ceph-mon[76335]: pgmap v3765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:26 compute-0 nova_compute[244014]: 2026-02-25 13:45:26.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:27 compute-0 ceph-mon[76335]: pgmap v3766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:27 compute-0 nova_compute[244014]: 2026-02-25 13:45:27.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:28 compute-0 nova_compute[244014]: 2026-02-25 13:45:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:28 compute-0 nova_compute[244014]: 2026-02-25 13:45:28.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:45:29 compute-0 nova_compute[244014]: 2026-02-25 13:45:29.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:29 compute-0 ceph-mon[76335]: pgmap v3767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:45:31
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
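The balancer pass above ran in upmap mode against all eleven pools with the default 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, consistent with a cluster this small that is already evenly placed. A sketch of querying that state on demand, assuming the balancer module's JSON status fields (mode, active, optimize_result):

    # Sketch: ask the mgr balancer module for the state summarized above.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]))
    print(status["mode"], status["active"], status.get("optimize_result"))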
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:45:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:45:31 compute-0 ceph-mon[76335]: pgmap v3768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:31 compute-0 nova_compute[244014]: 2026-02-25 13:45:31.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:45:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:45:32 compute-0 ceph-mon[76335]: pgmap v3769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:33 compute-0 nova_compute[244014]: 2026-02-25 13:45:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:33 compute-0 nova_compute[244014]: 2026-02-25 13:45:33.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:34 compute-0 nova_compute[244014]: 2026-02-25 13:45:34.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:34 compute-0 podman[414683]: 2026-02-25 13:45:34.734552039 +0000 UTC m=+0.067929021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Feb 25 13:45:34 compute-0 podman[414684]: 2026-02-25 13:45:34.781540798 +0000 UTC m=+0.107861278 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
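Both health_status=healthy events above come from podman's healthcheck timer running the configured test (the /openstack/healthcheck script mounted into each container, per the config_data embedded in the event). The same state can be read back on demand; a sketch using podman's Docker-compatible inspect template:

    # Sketch: read the health state that the periodic health_status
    # events above report.
    import subprocess

    def health(container):
        out = subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container])
        return out.decode().strip()

    print(health("ovn_metadata_agent"))  # "healthy", per the event above
    print(health("ovn_controller"))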
Feb 25 13:45:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:35 compute-0 nova_compute[244014]: 2026-02-25 13:45:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:35 compute-0 ceph-mon[76335]: pgmap v3770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:36 compute-0 nova_compute[244014]: 2026-02-25 13:45:36.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:37 compute-0 ceph-mon[76335]: pgmap v3771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:39 compute-0 nova_compute[244014]: 2026-02-25 13:45:39.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:39 compute-0 ceph-mon[76335]: pgmap v3772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:40 compute-0 nova_compute[244014]: 2026-02-25 13:45:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:45:41 compute-0 ceph-mon[76335]: pgmap v3773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:41 compute-0 nova_compute[244014]: 2026-02-25 13:45:41.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:45:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
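The pg_autoscaler lines above all follow one rule: a pool's raw PG target is its share of used space times its bias times the cluster's PG budget. The logged ratios imply a budget of 300, consistent with three OSDs at the default mon_target_pg_per_osd of 100; that constant is inferred from the arithmetic, not printed in the log. A sketch that reproduces two of the logged values:

    # Sketch of the pg_autoscaler arithmetic above:
    #   pg_target = space_ratio * bias * budget, with budget = 300 inferred
    #   (3 OSDs x mon_target_pg_per_osd = 100).
    def pg_target(space_ratio, bias, budget=300):
        return space_ratio * bias * budget

    # '.mgr' line: ratio 7.185749983720779e-06, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951162337
    # 'cephfs.cephfs.meta' line: bias 4.0
    print(pg_target(1.3916366864300228e-06, 4.0))  # 0.0016699640237160273

Each raw target is then quantized up to the pool's floor (1, 16, or 32 here), which is why "quantized" matches "current" on every line and no pg_num change is proposed.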
Feb 25 13:45:43 compute-0 ceph-mon[76335]: pgmap v3774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:44 compute-0 nova_compute[244014]: 2026-02-25 13:45:44.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:44 compute-0 ceph-mon[76335]: pgmap v3775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:46 compute-0 nova_compute[244014]: 2026-02-25 13:45:46.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:47 compute-0 ceph-mon[76335]: pgmap v3776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:45:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:45:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:45:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:45:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:45:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:45:49 compute-0 nova_compute[244014]: 2026-02-25 13:45:49.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:49 compute-0 ceph-mon[76335]: pgmap v3777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:50 compute-0 ceph-mon[76335]: pgmap v3778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:51 compute-0 nova_compute[244014]: 2026-02-25 13:45:51.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:53 compute-0 ceph-mon[76335]: pgmap v3779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:54 compute-0 nova_compute[244014]: 2026-02-25 13:45:54.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.090 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:45:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:45:55 compute-0 ceph-mon[76335]: pgmap v3780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:56 compute-0 ceph-mon[76335]: pgmap v3781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:56 compute-0 nova_compute[244014]: 2026-02-25 13:45:56.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:59 compute-0 nova_compute[244014]: 2026-02-25 13:45:59.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:45:59 compute-0 ceph-mon[76335]: pgmap v3782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:45:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:01 compute-0 ceph-mon[76335]: pgmap v3783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:01 compute-0 nova_compute[244014]: 2026-02-25 13:46:01.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:03 compute-0 ceph-mon[76335]: pgmap v3784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:04 compute-0 nova_compute[244014]: 2026-02-25 13:46:04.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:04 compute-0 ceph-mon[76335]: pgmap v3785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:05 compute-0 podman[414729]: 2026-02-25 13:46:05.736940678 +0000 UTC m=+0.071833383 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 13:46:05 compute-0 podman[414730]: 2026-02-25 13:46:05.769709299 +0000 UTC m=+0.099385574 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 13:46:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:06 compute-0 nova_compute[244014]: 2026-02-25 13:46:06.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:07 compute-0 sudo[414770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:46:07 compute-0 sudo[414770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:07 compute-0 sudo[414770]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:07 compute-0 sudo[414795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:46:07 compute-0 sudo[414795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: pgmap v3786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:07 compute-0 sudo[414795]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:46:07 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:46:07 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:46:07 compute-0 sudo[414852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:46:07 compute-0 sudo[414852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:07 compute-0 sudo[414852]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:08 compute-0 sudo[414877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:46:08 compute-0 sudo[414877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
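The sudo command above is cephadm's OSD-creation step: inside the ceph container it drives ceph-volume in batch mode over three pre-built logical volumes. Stripped of the cephadm wrapper (which carries the --fsid and --config-json plumbing, with the config fed on stdin), the core call is equivalent to the following sketch; every flag and path is copied from the logged command line:

    # Sketch: the ceph-volume invocation cephadm wraps above, reduced to
    # its essentials (run inside the ceph container).
    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0",
           "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--objectstore", "bluestore", "--yes", "--no-systemd"],
        check=True)

The --no-systemd flag matters here because the containers are managed by cephadm-generated systemd units on the host, not by units ceph-volume would otherwise create.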
Feb 25 13:46:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:08 compute-0 podman[414914]: 2026-02-25 13:46:08.338111926 +0000 UTC m=+0.026086240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:46:09 compute-0 ceph-mon[76335]: pgmap v3787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:09 compute-0 podman[414914]: 2026-02-25 13:46:09.325346079 +0000 UTC m=+1.013320363 container create 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:46:09 compute-0 nova_compute[244014]: 2026-02-25 13:46:09.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:09 compute-0 systemd[1]: Started libpod-conmon-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope.
Feb 25 13:46:09 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:09 compute-0 podman[414914]: 2026-02-25 13:46:09.597166612 +0000 UTC m=+1.285140906 container init 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:46:09 compute-0 podman[414914]: 2026-02-25 13:46:09.610410033 +0000 UTC m=+1.298384337 container start 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:46:09 compute-0 thirsty_bouman[414930]: 167 167
Feb 25 13:46:09 compute-0 systemd[1]: libpod-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope: Deactivated successfully.
Feb 25 13:46:09 compute-0 conmon[414930]: conmon 91ba03ab8f192535c23f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope/container/memory.events
Feb 25 13:46:09 compute-0 podman[414914]: 2026-02-25 13:46:09.647012143 +0000 UTC m=+1.334986467 container attach 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:46:09 compute-0 podman[414914]: 2026-02-25 13:46:09.648609039 +0000 UTC m=+1.336583333 container died 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:46:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d05874ebf4f654853e690b2a9a358a46c1e3bc41cac5103a0d69914fd9085b-merged.mount: Deactivated successfully.
Feb 25 13:46:09 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:10 compute-0 podman[414914]: 2026-02-25 13:46:10.358964223 +0000 UTC m=+2.046938527 container remove 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 13:46:10 compute-0 systemd[1]: libpod-conmon-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope: Deactivated successfully.
Feb 25 13:46:10 compute-0 podman[414956]: 2026-02-25 13:46:10.544629723 +0000 UTC m=+0.037286841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:10 compute-0 podman[414956]: 2026-02-25 13:46:10.7148367 +0000 UTC m=+0.207493848 container create ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:46:10 compute-0 systemd[1]: Started libpod-conmon-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope.
Feb 25 13:46:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
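
The kernel lines above are informational: these XFS overlay mounts were formatted without bigtime, so their inode timestamps saturate at 2038-01-19 (0x7fffffff). One way to check a filesystem for bigtime support (sketch; requires xfsprogs, and the argument should be an XFS mount point):

    import subprocess

    # Check whether a mounted XFS filesystem was created with bigtime
    # (y2038-safe timestamps). The kernel warnings above imply bigtime=0.
    def has_bigtime(mount_point: str = "/var/lib/containers") -> bool:
        out = subprocess.check_output(["xfs_info", mount_point], text=True)
        return "bigtime=1" in out
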
Feb 25 13:46:11 compute-0 podman[414956]: 2026-02-25 13:46:11.031788518 +0000 UTC m=+0.524445656 container init ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:46:11 compute-0 podman[414956]: 2026-02-25 13:46:11.041289491 +0000 UTC m=+0.533946599 container start ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:46:11 compute-0 podman[414956]: 2026-02-25 13:46:11.138179713 +0000 UTC m=+0.630836871 container attach ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:46:11 compute-0 romantic_hermann[414973]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:46:11 compute-0 romantic_hermann[414973]: --> All data devices are unavailable
Feb 25 13:46:11 compute-0 systemd[1]: libpod-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope: Deactivated successfully.
Feb 25 13:46:11 compute-0 podman[414956]: 2026-02-25 13:46:11.493898605 +0000 UTC m=+0.986555763 container died ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:46:11 compute-0 ceph-mon[76335]: pgmap v3788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd-merged.mount: Deactivated successfully.
Feb 25 13:46:11 compute-0 nova_compute[244014]: 2026-02-25 13:46:11.937 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:11 compute-0 podman[414956]: 2026-02-25 13:46:11.964682731 +0000 UTC m=+1.457339859 container remove ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:46:11 compute-0 systemd[1]: libpod-conmon-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope: Deactivated successfully.
Feb 25 13:46:12 compute-0 sudo[414877]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:12 compute-0 sudo[415004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:46:12 compute-0 sudo[415004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:12 compute-0 sudo[415004]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:12 compute-0 sudo[415029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:46:12 compute-0 sudo[415029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
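
The sudo line at 13:46:12 shows the shape of every cephadm-driven ceph-volume call in this log: a content-addressed copy of the cephadm script, a pinned image digest, an 895-second timeout, then the ceph-volume subcommand after "--". An illustrative reconstruction (not cephadm's actual code; constants taken verbatim from the log line):

    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    # Run a ceph-volume subcommand the way the orchestrator does above
    # (hypothetical wrapper; mirrors the logged argv, nothing more).
    def ceph_volume(*args: str, timeout: int = 895) -> bytes:
        cmd = ["sudo", "/bin/python3", CEPHADM,
               "--image", IMAGE, "--timeout", str(timeout),
               "ceph-volume", "--fsid", FSID, "--", *args]
        return subprocess.check_output(cmd)

    # e.g. the call captured in the log:
    # ceph_volume("lvm", "list", "--format", "json")
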
Feb 25 13:46:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.451042574 +0000 UTC m=+0.032684019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.525051049 +0000 UTC m=+0.106692454 container create 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:46:12 compute-0 systemd[1]: Started libpod-conmon-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope.
Feb 25 13:46:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.74557778 +0000 UTC m=+0.327219235 container init 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.753683083 +0000 UTC m=+0.335324488 container start 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:46:12 compute-0 thirsty_lalande[415083]: 167 167
Feb 25 13:46:12 compute-0 systemd[1]: libpod-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope: Deactivated successfully.
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.812493811 +0000 UTC m=+0.394135216 container attach 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:46:12 compute-0 podman[415066]: 2026-02-25 13:46:12.81313427 +0000 UTC m=+0.394775675 container died 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-38f7ad349a542bde433583cbf24f3251ae1752ef23060cc11db7ca916947a1d9-merged.mount: Deactivated successfully.
Feb 25 13:46:13 compute-0 podman[415066]: 2026-02-25 13:46:13.085975523 +0000 UTC m=+0.667616918 container remove 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:46:13 compute-0 systemd[1]: libpod-conmon-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope: Deactivated successfully.
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.281329641 +0000 UTC m=+0.034682686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.3843896 +0000 UTC m=+0.137742665 container create 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:46:13 compute-0 systemd[1]: Started libpod-conmon-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope.
Feb 25 13:46:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:13 compute-0 ceph-mon[76335]: pgmap v3789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.632848893 +0000 UTC m=+0.386201958 container init 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.639843214 +0000 UTC m=+0.393196249 container start 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.650388027 +0000 UTC m=+0.403741102 container attach 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 25 13:46:13 compute-0 nova_compute[244014]: 2026-02-25 13:46:13.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:13 compute-0 brave_wright[415125]: {
Feb 25 13:46:13 compute-0 brave_wright[415125]:     "0": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:         {
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "devices": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "/dev/loop3"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             ],
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_name": "ceph_lv0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_size": "21470642176",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "name": "ceph_lv0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "tags": {
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.crush_device_class": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.encrypted": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_id": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.vdo": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.with_tpm": "0"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             },
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "vg_name": "ceph_vg0"
Feb 25 13:46:13 compute-0 brave_wright[415125]:         }
Feb 25 13:46:13 compute-0 brave_wright[415125]:     ],
Feb 25 13:46:13 compute-0 brave_wright[415125]:     "1": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:         {
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "devices": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "/dev/loop4"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             ],
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_name": "ceph_lv1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_size": "21470642176",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "name": "ceph_lv1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "tags": {
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.crush_device_class": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.encrypted": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_id": "1",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.vdo": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.with_tpm": "0"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             },
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "vg_name": "ceph_vg1"
Feb 25 13:46:13 compute-0 brave_wright[415125]:         }
Feb 25 13:46:13 compute-0 brave_wright[415125]:     ],
Feb 25 13:46:13 compute-0 brave_wright[415125]:     "2": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:         {
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "devices": [
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "/dev/loop5"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             ],
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_name": "ceph_lv2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_size": "21470642176",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "name": "ceph_lv2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "tags": {
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.cluster_name": "ceph",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.crush_device_class": "",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.encrypted": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.objectstore": "bluestore",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osd_id": "2",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.vdo": "0",
Feb 25 13:46:13 compute-0 brave_wright[415125]:                 "ceph.with_tpm": "0"
Feb 25 13:46:13 compute-0 brave_wright[415125]:             },
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "type": "block",
Feb 25 13:46:13 compute-0 brave_wright[415125]:             "vg_name": "ceph_vg2"
Feb 25 13:46:13 compute-0 brave_wright[415125]:         }
Feb 25 13:46:13 compute-0 brave_wright[415125]:     ]
Feb 25 13:46:13 compute-0 brave_wright[415125]: }
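
The JSON above is the full "ceph-volume lvm list --format json" answer: three BlueStore OSDs (ids 0, 1, 2), one per LV on /dev/loop3, /dev/loop4, /dev/loop5, all tagged with the cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762. A minimal consumer of that structure (hypothetical helper; field names exactly as emitted above):

    import json

    # Map OSD id -> (LV path, OSD fsid) from the listing above.
    def osd_map(lvm_list_json: str) -> dict:
        return {
            osd_id: (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in json.loads(lvm_list_json).items()
            for lv in lvs
        }

    # -> {"0": ("/dev/ceph_vg0/ceph_lv0",
    #           "d19afe3c-7923-4776-bcc2-88886150b441"), ...}
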
Feb 25 13:46:13 compute-0 systemd[1]: libpod-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope: Deactivated successfully.
Feb 25 13:46:13 compute-0 podman[415108]: 2026-02-25 13:46:13.938489328 +0000 UTC m=+0.691842383 container died 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8-merged.mount: Deactivated successfully.
Feb 25 13:46:14 compute-0 podman[415108]: 2026-02-25 13:46:14.26005627 +0000 UTC m=+1.013409335 container remove 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:46:14 compute-0 systemd[1]: libpod-conmon-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope: Deactivated successfully.
Feb 25 13:46:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:14 compute-0 sudo[415029]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:14 compute-0 nova_compute[244014]: 2026-02-25 13:46:14.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:14 compute-0 sudo[415148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:46:14 compute-0 sudo[415148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:14 compute-0 sudo[415148]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:14 compute-0 sudo[415173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:46:14 compute-0 sudo[415173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:14 compute-0 podman[415210]: 2026-02-25 13:46:14.775797956 +0000 UTC m=+0.038997521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:14 compute-0 podman[415210]: 2026-02-25 13:46:14.886185424 +0000 UTC m=+0.149384999 container create e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:46:14 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:15 compute-0 systemd[1]: Started libpod-conmon-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope.
Feb 25 13:46:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:15 compute-0 podman[415210]: 2026-02-25 13:46:15.210984709 +0000 UTC m=+0.474184304 container init e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:46:15 compute-0 podman[415210]: 2026-02-25 13:46:15.220812752 +0000 UTC m=+0.484012337 container start e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:46:15 compute-0 blissful_diffie[415227]: 167 167
Feb 25 13:46:15 compute-0 systemd[1]: libpod-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope: Deactivated successfully.
Feb 25 13:46:15 compute-0 podman[415210]: 2026-02-25 13:46:15.241055343 +0000 UTC m=+0.504254988 container attach e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:46:15 compute-0 podman[415210]: 2026-02-25 13:46:15.242725321 +0000 UTC m=+0.505924876 container died e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-210ed1822ca5176939e49be21ff0c23fcfb31b7a4fda6c14394a5b03920fb4dc-merged.mount: Deactivated successfully.
Feb 25 13:46:15 compute-0 podman[415210]: 2026-02-25 13:46:15.436455842 +0000 UTC m=+0.699655407 container remove e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:46:15 compute-0 systemd[1]: libpod-conmon-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope: Deactivated successfully.
Feb 25 13:46:15 compute-0 podman[415253]: 2026-02-25 13:46:15.579588512 +0000 UTC m=+0.027322556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:46:15 compute-0 podman[415253]: 2026-02-25 13:46:15.648889101 +0000 UTC m=+0.096623135 container create a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:46:15 compute-0 systemd[1]: Started libpod-conmon-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope.
Feb 25 13:46:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:15 compute-0 ceph-mon[76335]: pgmap v3790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:46:15 compute-0 podman[415253]: 2026-02-25 13:46:15.817137322 +0000 UTC m=+0.264871356 container init a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 13:46:15 compute-0 podman[415253]: 2026-02-25 13:46:15.825326317 +0000 UTC m=+0.273060321 container start a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:46:15 compute-0 podman[415253]: 2026-02-25 13:46:15.852662591 +0000 UTC m=+0.300396595 container attach a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:46:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:16 compute-0 lvm[415348]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:46:16 compute-0 lvm[415348]: VG ceph_vg1 finished
Feb 25 13:46:16 compute-0 lvm[415349]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:46:16 compute-0 lvm[415349]: VG ceph_vg0 finished
Feb 25 13:46:16 compute-0 lvm[415351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:46:16 compute-0 lvm[415351]: VG ceph_vg2 finished
Feb 25 13:46:16 compute-0 pedantic_carson[415270]: {}
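
The "{}" from pedantic_carson is "ceph-volume raw list --format json" finding no raw-mode OSDs, which matches the earlier "0 physical, 3 LVM" report: every OSD on this host is LVM-backed. A trivial guard around that result (hypothetical helper):

    import json

    # "{}" above means no raw (non-LVM) OSDs were reported.
    def has_raw_osds(raw_list_json: str) -> bool:
        return bool(json.loads(raw_list_json))
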
Feb 25 13:46:16 compute-0 systemd[1]: libpod-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Deactivated successfully.
Feb 25 13:46:16 compute-0 systemd[1]: libpod-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Consumed 1.213s CPU time.
Feb 25 13:46:16 compute-0 podman[415253]: 2026-02-25 13:46:16.624306455 +0000 UTC m=+1.072040449 container died a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:46:16 compute-0 nova_compute[244014]: 2026-02-25 13:46:16.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:17 compute-0 ceph-mon[76335]: pgmap v3791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c-merged.mount: Deactivated successfully.
Feb 25 13:46:17 compute-0 podman[415253]: 2026-02-25 13:46:17.438908142 +0000 UTC m=+1.886642176 container remove a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:46:17 compute-0 systemd[1]: libpod-conmon-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Deactivated successfully.
Feb 25 13:46:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:46:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220553453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.502 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
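
nova's resource tracker derives its free_disk figure from the "ceph df" call that just returned 0 in 0.599s. A sketch of that probe (not nova's actual RBD driver code; key names follow the "ceph df --format=json" schema):

    import json
    import subprocess

    # Reproduce the capacity probe logged above and return available
    # space in GiB (compare free_disk=59.98... in the line below).
    def ceph_free_gib(user: str = "openstack",
                      conf: str = "/etc/ceph/ceph.conf") -> float:
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)["stats"]["total_avail_bytes"] / 1024 ** 3
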
Feb 25 13:46:17 compute-0 sudo[415173]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:46:17 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3494MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:46:17 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:17 compute-0 sudo[415390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:46:17 compute-0 sudo[415390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:46:17 compute-0 sudo[415390]: pam_unix(sudo:session): session closed for user root
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.934 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:46:17 compute-0 nova_compute[244014]: 2026-02-25 13:46:17.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.054 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.152 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.153 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:46:18 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1220553453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:46:18 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:18 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.179 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.199 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.217 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:46:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:46:18 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322719798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.802 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.812 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.830 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.832 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:46:18 compute-0 nova_compute[244014]: 2026-02-25 13:46:18.832 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:46:19 compute-0 nova_compute[244014]: 2026-02-25 13:46:19.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.564902) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179564957, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3476039, "memory_usage": 3521168, "flush_reason": "Manual Compaction"}
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Feb 25 13:46:19 compute-0 ceph-mon[76335]: pgmap v3792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/322719798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179699351, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 3397976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77131, "largest_seqno": 79184, "table_properties": {"data_size": 3388592, "index_size": 5941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18718, "raw_average_key_size": 20, "raw_value_size": 3369990, "raw_average_value_size": 3615, "num_data_blocks": 264, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026953, "oldest_key_time": 1772026953, "file_creation_time": 1772027179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 134514 microseconds, and 10322 cpu microseconds.
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.699411) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 3397976 bytes OK
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.699442) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.852921) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.852995) EVENT_LOG_v1 {"time_micros": 1772027179852980, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.853037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3467434, prev total WAL file size 3467434, number of live WAL files 2.
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.854534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(3318KB)], [185(10MB)]
Feb 25 13:46:19 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179854603, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 14082323, "oldest_snapshot_seqno": -1}
Feb 25 13:46:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9608 keys, 12298268 bytes, temperature: kUnknown
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180062359, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12298268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12236832, "index_size": 36299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 251062, "raw_average_key_size": 26, "raw_value_size": 12067981, "raw_average_value_size": 1256, "num_data_blocks": 1406, "num_entries": 9608, "num_filter_entries": 9608, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.062992) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12298268 bytes
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.084152) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.7 rd, 59.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 10122, records dropped: 514 output_compression: NoCompression
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.084198) EVENT_LOG_v1 {"time_micros": 1772027180084175, "job": 116, "event": "compaction_finished", "compaction_time_micros": 208110, "compaction_time_cpu_micros": 51090, "output_level": 6, "num_output_files": 1, "total_output_size": 12298268, "num_input_records": 10122, "num_output_records": 9608, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180085916, "job": 116, "event": "table_file_deletion", "file_number": 187}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180088231, "job": 116, "event": "table_file_deletion", "file_number": 185}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.854396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.322227) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180322298, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 260, "num_deletes": 255, "total_data_size": 13120, "memory_usage": 19768, "flush_reason": "Manual Compaction"}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180400060, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 13362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79185, "largest_seqno": 79444, "table_properties": {"data_size": 11545, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4481, "raw_average_key_size": 17, "raw_value_size": 8118, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027180, "oldest_key_time": 1772027180, "file_creation_time": 1772027180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 77923 microseconds, and 1678 cpu microseconds.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.400146) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 13362 bytes OK
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.400183) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423225) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423291) EVENT_LOG_v1 {"time_micros": 1772027180423278, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 11069, prev total WAL file size 11069, number of live WAL files 2.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.424044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353137' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end)
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(13KB)], [188(11MB)]
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180424092, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12311630, "oldest_snapshot_seqno": -1}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9354 keys, 12223722 bytes, temperature: kUnknown
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180689039, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 12223722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12163303, "index_size": 35922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 246720, "raw_average_key_size": 26, "raw_value_size": 11998189, "raw_average_value_size": 1282, "num_data_blocks": 1388, "num_entries": 9354, "num_filter_entries": 9354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.689801) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 12223722 bytes
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.718609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.4 rd, 46.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.7 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(1836.2) write-amplify(914.8) OK, records in: 9868, records dropped: 514 output_compression: NoCompression
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.718668) EVENT_LOG_v1 {"time_micros": 1772027180718643, "job": 118, "event": "compaction_finished", "compaction_time_micros": 265075, "compaction_time_cpu_micros": 42579, "output_level": 6, "num_output_files": 1, "total_output_size": 12223722, "num_input_records": 9868, "num_output_records": 9354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180718978, "job": 118, "event": "table_file_deletion", "file_number": 190}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180721624, "job": 118, "event": "table_file_deletion", "file_number": 188}
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:21 compute-0 ceph-mon[76335]: pgmap v3793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:21 compute-0 nova_compute[244014]: 2026-02-25 13:46:21.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:22 compute-0 ceph-mon[76335]: pgmap v3794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:24 compute-0 nova_compute[244014]: 2026-02-25 13:46:24.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:24 compute-0 nova_compute[244014]: 2026-02-25 13:46:24.833 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:24 compute-0 nova_compute[244014]: 2026-02-25 13:46:24.834 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:46:24 compute-0 nova_compute[244014]: 2026-02-25 13:46:24.834 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:46:24 compute-0 nova_compute[244014]: 2026-02-25 13:46:24.850 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:46:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:25 compute-0 ceph-mon[76335]: pgmap v3795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:26 compute-0 nova_compute[244014]: 2026-02-25 13:46:26.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:27 compute-0 ceph-mon[76335]: pgmap v3796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:27 compute-0 nova_compute[244014]: 2026-02-25 13:46:27.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:29 compute-0 nova_compute[244014]: 2026-02-25 13:46:29.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:29 compute-0 ceph-mon[76335]: pgmap v3797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:30 compute-0 ceph-mon[76335]: pgmap v3798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:30 compute-0 nova_compute[244014]: 2026-02-25 13:46:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:30 compute-0 nova_compute[244014]: 2026-02-25 13:46:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:46:31
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', 'vms', 'images', '.mgr', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:46:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:46:31 compute-0 nova_compute[244014]: 2026-02-25 13:46:31.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:46:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:46:33 compute-0 ceph-mon[76335]: pgmap v3799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:33 compute-0 nova_compute[244014]: 2026-02-25 13:46:33.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:34 compute-0 nova_compute[244014]: 2026-02-25 13:46:34.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:35 compute-0 ceph-mon[76335]: pgmap v3800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:35 compute-0 nova_compute[244014]: 2026-02-25 13:46:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:36 compute-0 podman[415437]: 2026-02-25 13:46:36.735413545 +0000 UTC m=+0.070616568 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:46:36 compute-0 podman[415438]: 2026-02-25 13:46:36.78400161 +0000 UTC m=+0.118095941 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 13:46:36 compute-0 ceph-mon[76335]: pgmap v3801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:36 compute-0 nova_compute[244014]: 2026-02-25 13:46:36.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:37 compute-0 nova_compute[244014]: 2026-02-25 13:46:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:39 compute-0 nova_compute[244014]: 2026-02-25 13:46:39.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:39 compute-0 ceph-mon[76335]: pgmap v3802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:40 compute-0 nova_compute[244014]: 2026-02-25 13:46:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:41 compute-0 ceph-mon[76335]: pgmap v3803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:41 compute-0 nova_compute[244014]: 2026-02-25 13:46:41.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:42 compute-0 ceph-mon[76335]: pgmap v3804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:46:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:46:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:44 compute-0 nova_compute[244014]: 2026-02-25 13:46:44.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:45 compute-0 ceph-mon[76335]: pgmap v3805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:46 compute-0 nova_compute[244014]: 2026-02-25 13:46:46.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:47 compute-0 ceph-mon[76335]: pgmap v3806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:46:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:46:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:46:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:46:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:46:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:46:49 compute-0 nova_compute[244014]: 2026-02-25 13:46:49.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:49 compute-0 ceph-mon[76335]: pgmap v3807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.198471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210198513, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 473, "num_deletes": 250, "total_data_size": 437336, "memory_usage": 446528, "flush_reason": "Manual Compaction"}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210204576, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 313249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79445, "largest_seqno": 79917, "table_properties": {"data_size": 310811, "index_size": 537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6642, "raw_average_key_size": 20, "raw_value_size": 305835, "raw_average_value_size": 932, "num_data_blocks": 25, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027180, "oldest_key_time": 1772027180, "file_creation_time": 1772027210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 6157 microseconds, and 1463 cpu microseconds.
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.204626) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 313249 bytes OK
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.204648) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211580) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211637) EVENT_LOG_v1 {"time_micros": 1772027210211627, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 434543, prev total WAL file size 434543, number of live WAL files 2.
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.212252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323533' seq:72057594037927935, type:22 .. '6D6772737461740033353034' seq:0, type:0; will stop at (end)
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(305KB)], [191(11MB)]
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210212515, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12536971, "oldest_snapshot_seqno": -1}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9184 keys, 9327679 bytes, temperature: kUnknown
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210310335, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9327679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9272954, "index_size": 30676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 243357, "raw_average_key_size": 26, "raw_value_size": 9115295, "raw_average_value_size": 992, "num_data_blocks": 1170, "num_entries": 9184, "num_filter_entries": 9184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:46:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.310782) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9327679 bytes
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.324420) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.1 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.7 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(69.8) write-amplify(29.8) OK, records in: 9682, records dropped: 498 output_compression: NoCompression
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.324465) EVENT_LOG_v1 {"time_micros": 1772027210324445, "job": 120, "event": "compaction_finished", "compaction_time_micros": 97874, "compaction_time_cpu_micros": 38178, "output_level": 6, "num_output_files": 1, "total_output_size": 9327679, "num_input_records": 9682, "num_output_records": 9184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210324814, "job": 120, "event": "table_file_deletion", "file_number": 193}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210327091, "job": 120, "event": "table_file_deletion", "file_number": 191}
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.212119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:50 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:46:51 compute-0 ceph-mon[76335]: pgmap v3808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:51 compute-0 nova_compute[244014]: 2026-02-25 13:46:51.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:53 compute-0 ceph-mon[76335]: pgmap v3809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:54 compute-0 nova_compute[244014]: 2026-02-25 13:46:54.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.092 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:46:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.092 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:46:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:46:55 compute-0 ceph-mon[76335]: pgmap v3810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:56 compute-0 nova_compute[244014]: 2026-02-25 13:46:56.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:57 compute-0 ceph-mon[76335]: pgmap v3811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:58 compute-0 nova_compute[244014]: 2026-02-25 13:46:58.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:46:59 compute-0 nova_compute[244014]: 2026-02-25 13:46:59.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:46:59 compute-0 ceph-mon[76335]: pgmap v3812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:46:59 compute-0 sshd-session[415484]: error: kex_exchange_identification: read: Connection timed out
Feb 25 13:46:59 compute-0 sshd-session[415484]: banner exchange: Connection from 118.145.110.241 port 42570: Connection timed out
Feb 25 13:47:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:01 compute-0 ceph-mon[76335]: pgmap v3813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:01 compute-0 nova_compute[244014]: 2026-02-25 13:47:01.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:03 compute-0 ceph-mon[76335]: pgmap v3814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:04 compute-0 nova_compute[244014]: 2026-02-25 13:47:04.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:05 compute-0 ceph-mon[76335]: pgmap v3815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:06 compute-0 nova_compute[244014]: 2026-02-25 13:47:06.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:07 compute-0 ceph-mon[76335]: pgmap v3816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:07 compute-0 podman[415485]: 2026-02-25 13:47:07.70449063 +0000 UTC m=+0.049843972 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:47:07 compute-0 podman[415486]: 2026-02-25 13:47:07.736066327 +0000 UTC m=+0.078681590 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:47:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:09 compute-0 ceph-mon[76335]: pgmap v3817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:09 compute-0 nova_compute[244014]: 2026-02-25 13:47:09.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:11 compute-0 ceph-mon[76335]: pgmap v3818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:11 compute-0 nova_compute[244014]: 2026-02-25 13:47:11.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:13 compute-0 ceph-mon[76335]: pgmap v3819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:13 compute-0 nova_compute[244014]: 2026-02-25 13:47:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:14 compute-0 nova_compute[244014]: 2026-02-25 13:47:14.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:15 compute-0 ceph-mon[76335]: pgmap v3820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:16 compute-0 nova_compute[244014]: 2026-02-25 13:47:16.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:17 compute-0 ceph-mon[76335]: pgmap v3821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:17 compute-0 sudo[415529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:47:17 compute-0 sudo[415529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:17 compute-0 sudo[415529]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:17 compute-0 sudo[415554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:47:17 compute-0 sudo[415554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:18 compute-0 podman[415623]: 2026-02-25 13:47:18.538802612 +0000 UTC m=+0.112825920 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:47:18 compute-0 podman[415623]: 2026-02-25 13:47:18.675177997 +0000 UTC m=+0.249201335 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:47:18 compute-0 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:47:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:47:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856843576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:47:19 compute-0 sudo[415554]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.485 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:47:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:47:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:19 compute-0 ceph-mon[76335]: pgmap v3822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:19 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3856843576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:47:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:19 compute-0 sudo[415833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:47:19 compute-0 sudo[415833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:19 compute-0 sudo[415833]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:19 compute-0 sudo[415858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:47:19 compute-0 sudo[415858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.661 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.664 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:47:19 compute-0 nova_compute[244014]: 2026-02-25 13:47:19.749 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:47:20 compute-0 sudo[415858]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:20 compute-0 sudo[415934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:47:20 compute-0 sudo[415934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:20 compute-0 sudo[415934]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:20 compute-0 sudo[415959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:47:20 compute-0 sudo[415959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918798991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:47:20 compute-0 nova_compute[244014]: 2026-02-25 13:47:20.404 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:47:20 compute-0 nova_compute[244014]: 2026-02-25 13:47:20.412 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:47:20 compute-0 nova_compute[244014]: 2026-02-25 13:47:20.429 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:47:20 compute-0 nova_compute[244014]: 2026-02-25 13:47:20.431 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:47:20 compute-0 nova_compute[244014]: 2026-02-25 13:47:20.431 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.555489769 +0000 UTC m=+0.044310593 container create 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:47:20 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2918798991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:47:20 compute-0 systemd[1]: Started libpod-conmon-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope.
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.533398835 +0000 UTC m=+0.022219699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.688203939 +0000 UTC m=+0.177024863 container init 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.698270528 +0000 UTC m=+0.187091392 container start 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:47:20 compute-0 amazing_liskov[416013]: 167 167
Feb 25 13:47:20 compute-0 systemd[1]: libpod-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope: Deactivated successfully.
Feb 25 13:47:20 compute-0 conmon[416013]: conmon 653745aee1c5e0d06e81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope/container/memory.events
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.717548242 +0000 UTC m=+0.206369106 container attach 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.719465117 +0000 UTC m=+0.208285971 container died 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:47:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9c43c3b23dcd6678a5c4175abf03869e6bcad0cddab3da78d565412090eac95-merged.mount: Deactivated successfully.
Feb 25 13:47:20 compute-0 podman[415997]: 2026-02-25 13:47:20.859015763 +0000 UTC m=+0.347836587 container remove 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 13:47:20 compute-0 systemd[1]: libpod-conmon-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope: Deactivated successfully.
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.052071615 +0000 UTC m=+0.068196358 container create 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.012446738 +0000 UTC m=+0.028571511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:21 compute-0 systemd[1]: Started libpod-conmon-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope.
Feb 25 13:47:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.211598125 +0000 UTC m=+0.227722928 container init 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.22048324 +0000 UTC m=+0.236607973 container start 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.242675368 +0000 UTC m=+0.258800181 container attach 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 13:47:21 compute-0 ceph-mon[76335]: pgmap v3823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:21 compute-0 romantic_goldwasser[416057]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:47:21 compute-0 romantic_goldwasser[416057]: --> All data devices are unavailable
Feb 25 13:47:21 compute-0 systemd[1]: libpod-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope: Deactivated successfully.
Feb 25 13:47:21 compute-0 conmon[416057]: conmon 3853e692a6eb833f0a25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope/container/memory.events
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.738463411 +0000 UTC m=+0.754588154 container died 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636-merged.mount: Deactivated successfully.
Feb 25 13:47:21 compute-0 podman[416040]: 2026-02-25 13:47:21.813410603 +0000 UTC m=+0.829535326 container remove 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 25 13:47:21 compute-0 systemd[1]: libpod-conmon-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope: Deactivated successfully.
Feb 25 13:47:21 compute-0 sudo[415959]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:21 compute-0 sudo[416091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:47:21 compute-0 sudo[416091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:21 compute-0 sudo[416091]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:21 compute-0 sudo[416116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:47:21 compute-0 sudo[416116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:21 compute-0 nova_compute[244014]: 2026-02-25 13:47:21.981 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.273721828 +0000 UTC m=+0.058758287 container create 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:47:22 compute-0 systemd[1]: Started libpod-conmon-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope.
Feb 25 13:47:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.236732696 +0000 UTC m=+0.021769165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.353150529 +0000 UTC m=+0.138187008 container init 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.358746489 +0000 UTC m=+0.143782958 container start 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:47:22 compute-0 nifty_raman[416171]: 167 167
Feb 25 13:47:22 compute-0 systemd[1]: libpod-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope: Deactivated successfully.
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.378682782 +0000 UTC m=+0.163719261 container attach 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.380385861 +0000 UTC m=+0.165422330 container died 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 13:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f290521a73e6315359bd24e32aeeea145a3441e0ba0f9b07aec9d134e1dd3c3-merged.mount: Deactivated successfully.
Feb 25 13:47:22 compute-0 podman[416154]: 2026-02-25 13:47:22.505861023 +0000 UTC m=+0.290897482 container remove 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 13:47:22 compute-0 systemd[1]: libpod-conmon-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope: Deactivated successfully.
Feb 25 13:47:22 compute-0 podman[416195]: 2026-02-25 13:47:22.672131245 +0000 UTC m=+0.070924267 container create bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:47:22 compute-0 systemd[1]: Started libpod-conmon-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope.
Feb 25 13:47:22 compute-0 podman[416195]: 2026-02-25 13:47:22.637560643 +0000 UTC m=+0.036353675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:22 compute-0 podman[416195]: 2026-02-25 13:47:22.782519105 +0000 UTC m=+0.181312147 container init bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:47:22 compute-0 podman[416195]: 2026-02-25 13:47:22.791940075 +0000 UTC m=+0.190733097 container start bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:47:22 compute-0 podman[416195]: 2026-02-25 13:47:22.809475429 +0000 UTC m=+0.208268531 container attach bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:47:23 compute-0 sad_brattain[416212]: {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     "0": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "devices": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "/dev/loop3"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             ],
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_name": "ceph_lv0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_size": "21470642176",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "name": "ceph_lv0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "tags": {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_name": "ceph",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.crush_device_class": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.encrypted": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.objectstore": "bluestore",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_id": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.vdo": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.with_tpm": "0"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             },
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "vg_name": "ceph_vg0"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         }
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     ],
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     "1": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "devices": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "/dev/loop4"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             ],
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_name": "ceph_lv1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_size": "21470642176",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "name": "ceph_lv1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "tags": {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_name": "ceph",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.crush_device_class": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.encrypted": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.objectstore": "bluestore",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_id": "1",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.vdo": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.with_tpm": "0"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             },
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "vg_name": "ceph_vg1"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         }
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     ],
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     "2": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "devices": [
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "/dev/loop5"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             ],
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_name": "ceph_lv2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_size": "21470642176",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "name": "ceph_lv2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "tags": {
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.cluster_name": "ceph",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.crush_device_class": "",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.encrypted": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.objectstore": "bluestore",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osd_id": "2",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.vdo": "0",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:                 "ceph.with_tpm": "0"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             },
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "type": "block",
Feb 25 13:47:23 compute-0 sad_brattain[416212]:             "vg_name": "ceph_vg2"
Feb 25 13:47:23 compute-0 sad_brattain[416212]:         }
Feb 25 13:47:23 compute-0 sad_brattain[416212]:     ]
Feb 25 13:47:23 compute-0 sad_brattain[416212]: }
Feb 25 13:47:23 compute-0 systemd[1]: libpod-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope: Deactivated successfully.
Feb 25 13:47:23 compute-0 podman[416195]: 2026-02-25 13:47:23.077400551 +0000 UTC m=+0.476193563 container died bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d-merged.mount: Deactivated successfully.
Feb 25 13:47:23 compute-0 podman[416195]: 2026-02-25 13:47:23.208187935 +0000 UTC m=+0.606980967 container remove bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:47:23 compute-0 systemd[1]: libpod-conmon-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope: Deactivated successfully.
Feb 25 13:47:23 compute-0 sudo[416116]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:23 compute-0 sudo[416236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:47:23 compute-0 sudo[416236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:23 compute-0 sudo[416236]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:23 compute-0 sudo[416261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:47:23 compute-0 sudo[416261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:23 compute-0 ceph-mon[76335]: pgmap v3824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.790083772 +0000 UTC m=+0.118262487 container create 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.696862685 +0000 UTC m=+0.025041400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:23 compute-0 systemd[1]: Started libpod-conmon-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope.
Feb 25 13:47:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.932068958 +0000 UTC m=+0.260247683 container init 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.945845543 +0000 UTC m=+0.274024238 container start 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:47:23 compute-0 intelligent_sammet[416315]: 167 167
Feb 25 13:47:23 compute-0 systemd[1]: libpod-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope: Deactivated successfully.
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.959271199 +0000 UTC m=+0.287449894 container attach 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 13:47:23 compute-0 podman[416298]: 2026-02-25 13:47:23.961676898 +0000 UTC m=+0.289855623 container died 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4125d7dc175fb8266aecbaad9b9c8ce4d3d10a3cf858413db2ad730b3fdf398b-merged.mount: Deactivated successfully.
Feb 25 13:47:24 compute-0 podman[416298]: 2026-02-25 13:47:24.044578378 +0000 UTC m=+0.372757103 container remove 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:47:24 compute-0 systemd[1]: libpod-conmon-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope: Deactivated successfully.
Feb 25 13:47:24 compute-0 podman[416339]: 2026-02-25 13:47:24.229292741 +0000 UTC m=+0.055874325 container create d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:47:24 compute-0 podman[416339]: 2026-02-25 13:47:24.194643816 +0000 UTC m=+0.021225460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:47:24 compute-0 systemd[1]: Started libpod-conmon-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope.
Feb 25 13:47:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:47:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:24 compute-0 podman[416339]: 2026-02-25 13:47:24.361381473 +0000 UTC m=+0.187963077 container init d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:47:24 compute-0 podman[416339]: 2026-02-25 13:47:24.366090699 +0000 UTC m=+0.192672283 container start d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:47:24 compute-0 podman[416339]: 2026-02-25 13:47:24.379150883 +0000 UTC m=+0.205732497 container attach d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:47:24 compute-0 nova_compute[244014]: 2026-02-25 13:47:24.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:24 compute-0 lvm[416434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:47:24 compute-0 lvm[416434]: VG ceph_vg1 finished
Feb 25 13:47:24 compute-0 lvm[416435]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:47:24 compute-0 lvm[416435]: VG ceph_vg0 finished
Feb 25 13:47:24 compute-0 lvm[416437]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:47:24 compute-0 lvm[416437]: VG ceph_vg2 finished
Feb 25 13:47:25 compute-0 reverent_gould[416356]: {}
Feb 25 13:47:25 compute-0 systemd[1]: libpod-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Deactivated successfully.
Feb 25 13:47:25 compute-0 systemd[1]: libpod-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Consumed 1.106s CPU time.
Feb 25 13:47:25 compute-0 podman[416339]: 2026-02-25 13:47:25.097146857 +0000 UTC m=+0.923728491 container died d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 13:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1-merged.mount: Deactivated successfully.
Feb 25 13:47:25 compute-0 podman[416339]: 2026-02-25 13:47:25.191492265 +0000 UTC m=+1.018073849 container remove d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:47:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:25 compute-0 systemd[1]: libpod-conmon-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Deactivated successfully.
Feb 25 13:47:25 compute-0 sudo[416261]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:47:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:47:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:25 compute-0 sudo[416455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:47:25 compute-0 sudo[416455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:47:25 compute-0 sudo[416455]: pam_unix(sudo:session): session closed for user root
Feb 25 13:47:25 compute-0 nova_compute[244014]: 2026-02-25 13:47:25.433 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:25 compute-0 nova_compute[244014]: 2026-02-25 13:47:25.435 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:47:25 compute-0 nova_compute[244014]: 2026-02-25 13:47:25.435 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:47:25 compute-0 nova_compute[244014]: 2026-02-25 13:47:25.451 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:47:25 compute-0 ceph-mon[76335]: pgmap v3825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:25 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:47:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:27 compute-0 nova_compute[244014]: 2026-02-25 13:47:27.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:27 compute-0 ceph-mon[76335]: pgmap v3826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:28 compute-0 nova_compute[244014]: 2026-02-25 13:47:28.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:29 compute-0 nova_compute[244014]: 2026-02-25 13:47:29.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:29 compute-0 ceph-mon[76335]: pgmap v3827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:30 compute-0 ceph-mon[76335]: pgmap v3828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:47:31
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.mgr']
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:47:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:47:32 compute-0 nova_compute[244014]: 2026-02-25 13:47:32.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:47:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:47:32 compute-0 nova_compute[244014]: 2026-02-25 13:47:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:32 compute-0 nova_compute[244014]: 2026-02-25 13:47:32.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:47:33 compute-0 ceph-mon[76335]: pgmap v3829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:34 compute-0 nova_compute[244014]: 2026-02-25 13:47:34.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:34 compute-0 nova_compute[244014]: 2026-02-25 13:47:34.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:35 compute-0 ceph-mon[76335]: pgmap v3830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:37 compute-0 nova_compute[244014]: 2026-02-25 13:47:37.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:37 compute-0 ceph-mon[76335]: pgmap v3831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:37 compute-0 nova_compute[244014]: 2026-02-25 13:47:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:38 compute-0 podman[416480]: 2026-02-25 13:47:38.720928876 +0000 UTC m=+0.061161677 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 13:47:38 compute-0 podman[416481]: 2026-02-25 13:47:38.772069084 +0000 UTC m=+0.111827871 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:47:38 compute-0 nova_compute[244014]: 2026-02-25 13:47:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:39 compute-0 ceph-mon[76335]: pgmap v3832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:39 compute-0 nova_compute[244014]: 2026-02-25 13:47:39.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:40 compute-0 nova_compute[244014]: 2026-02-25 13:47:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:47:41 compute-0 ceph-mon[76335]: pgmap v3833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:42 compute-0 nova_compute[244014]: 2026-02-25 13:47:42.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:43 compute-0 ceph-mon[76335]: pgmap v3834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:47:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:47:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:44 compute-0 nova_compute[244014]: 2026-02-25 13:47:44.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:45 compute-0 ceph-mon[76335]: pgmap v3835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:47 compute-0 nova_compute[244014]: 2026-02-25 13:47:47.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:47:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:47:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:47:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:47:47 compute-0 ceph-mon[76335]: pgmap v3836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:47:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:47:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:49 compute-0 nova_compute[244014]: 2026-02-25 13:47:49.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:49 compute-0 ceph-mon[76335]: pgmap v3837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:51 compute-0 ceph-mon[76335]: pgmap v3838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:52 compute-0 nova_compute[244014]: 2026-02-25 13:47:52.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:53 compute-0 ceph-mon[76335]: pgmap v3839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:54 compute-0 nova_compute[244014]: 2026-02-25 13:47:54.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:47:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:47:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:47:55 compute-0 ceph-mon[76335]: pgmap v3840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:57 compute-0 nova_compute[244014]: 2026-02-25 13:47:57.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:57 compute-0 ceph-mon[76335]: pgmap v3841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:47:59 compute-0 nova_compute[244014]: 2026-02-25 13:47:59.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:47:59 compute-0 ceph-mon[76335]: pgmap v3842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:01 compute-0 ceph-mon[76335]: pgmap v3843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:02 compute-0 nova_compute[244014]: 2026-02-25 13:48:02.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:03 compute-0 ceph-mon[76335]: pgmap v3844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:04 compute-0 nova_compute[244014]: 2026-02-25 13:48:04.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:05 compute-0 ceph-mon[76335]: pgmap v3845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:06 compute-0 ceph-mon[76335]: pgmap v3846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:07 compute-0 nova_compute[244014]: 2026-02-25 13:48:07.166 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:09 compute-0 nova_compute[244014]: 2026-02-25 13:48:09.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:09 compute-0 ceph-mon[76335]: pgmap v3847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:09 compute-0 podman[416523]: 2026-02-25 13:48:09.766630012 +0000 UTC m=+0.066795709 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 13:48:09 compute-0 podman[416524]: 2026-02-25 13:48:09.793402821 +0000 UTC m=+0.093419723 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 13:48:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:11 compute-0 ceph-mon[76335]: pgmap v3848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:12 compute-0 nova_compute[244014]: 2026-02-25 13:48:12.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:12 compute-0 ceph-mon[76335]: pgmap v3849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:14 compute-0 nova_compute[244014]: 2026-02-25 13:48:14.718 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:15 compute-0 ceph-mon[76335]: pgmap v3850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:15 compute-0 nova_compute[244014]: 2026-02-25 13:48:15.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:17 compute-0 nova_compute[244014]: 2026-02-25 13:48:17.205 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:17 compute-0 ceph-mon[76335]: pgmap v3851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:19 compute-0 ceph-mon[76335]: pgmap v3852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:19 compute-0 nova_compute[244014]: 2026-02-25 13:48:19.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:48:20 compute-0 nova_compute[244014]: 2026-02-25 13:48:20.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:48:21 compute-0 ceph-mon[76335]: pgmap v3853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:48:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671368893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.487 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.646 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.647 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.647 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.648 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.716 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:48:21 compute-0 nova_compute[244014]: 2026-02-25 13:48:21.735 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:22 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:48:22 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219253594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.266 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.273 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.297 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.300 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:48:22 compute-0 nova_compute[244014]: 2026-02-25 13:48:22.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:48:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3671368893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:48:22 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3219253594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:48:23 compute-0 ceph-mon[76335]: pgmap v3854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:24 compute-0 nova_compute[244014]: 2026-02-25 13:48:24.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:25 compute-0 sudo[416614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:48:25 compute-0 sudo[416614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:25 compute-0 sudo[416614]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:25 compute-0 ceph-mon[76335]: pgmap v3855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:25 compute-0 sudo[416639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:48:25 compute-0 sudo[416639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:25 compute-0 sudo[416639]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:48:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:48:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:48:25 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:48:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:48:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:48:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:48:26 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:48:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:48:26 compute-0 sudo[416695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:48:26 compute-0 sudo[416695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:26 compute-0 sudo[416695]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:26 compute-0 sudo[416720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:48:26 compute-0 sudo[416720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:26 compute-0 nova_compute[244014]: 2026-02-25 13:48:26.302 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:26 compute-0 nova_compute[244014]: 2026-02-25 13:48:26.303 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:48:26 compute-0 nova_compute[244014]: 2026-02-25 13:48:26.304 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:48:26 compute-0 nova_compute[244014]: 2026-02-25 13:48:26.332 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:48:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.415169038 +0000 UTC m=+0.058001186 container create 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:48:26 compute-0 systemd[1]: Started libpod-conmon-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope.
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.390023006 +0000 UTC m=+0.032855194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:48:26 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:48:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.517101745 +0000 UTC m=+0.159933943 container init 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.52529302 +0000 UTC m=+0.168125158 container start 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.529607134 +0000 UTC m=+0.172439272 container attach 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:48:26 compute-0 keen_lewin[416772]: 167 167
Feb 25 13:48:26 compute-0 systemd[1]: libpod-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope: Deactivated successfully.
Feb 25 13:48:26 compute-0 conmon[416772]: conmon 8a3f9d70cc287c088954 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope/container/memory.events
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.533602458 +0000 UTC m=+0.176434626 container died 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a2f586921dbc67a698e0e4bd8147107acefb22a8dff68b8e647f0ca37ec4f61-merged.mount: Deactivated successfully.
Feb 25 13:48:26 compute-0 podman[416756]: 2026-02-25 13:48:26.586101676 +0000 UTC m=+0.228933824 container remove 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:26 compute-0 systemd[1]: libpod-conmon-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope: Deactivated successfully.
Feb 25 13:48:26 compute-0 podman[416798]: 2026-02-25 13:48:26.76077492 +0000 UTC m=+0.064770190 container create 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:26 compute-0 systemd[1]: Started libpod-conmon-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope.
Feb 25 13:48:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:26 compute-0 podman[416798]: 2026-02-25 13:48:26.737597745 +0000 UTC m=+0.041593085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:26 compute-0 podman[416798]: 2026-02-25 13:48:26.857662872 +0000 UTC m=+0.161658132 container init 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 13:48:26 compute-0 podman[416798]: 2026-02-25 13:48:26.875438062 +0000 UTC m=+0.179433322 container start 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:48:26 compute-0 podman[416798]: 2026-02-25 13:48:26.879277462 +0000 UTC m=+0.183272752 container attach 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:48:27 compute-0 nova_compute[244014]: 2026-02-25 13:48:27.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:27 compute-0 pensive_sanderson[416815]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:48:27 compute-0 pensive_sanderson[416815]: --> All data devices are unavailable
Feb 25 13:48:27 compute-0 systemd[1]: libpod-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope: Deactivated successfully.
Feb 25 13:48:27 compute-0 podman[416798]: 2026-02-25 13:48:27.456421161 +0000 UTC m=+0.760416461 container died 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e-merged.mount: Deactivated successfully.
Feb 25 13:48:27 compute-0 ceph-mon[76335]: pgmap v3856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:27 compute-0 podman[416798]: 2026-02-25 13:48:27.506792267 +0000 UTC m=+0.810787527 container remove 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:27 compute-0 systemd[1]: libpod-conmon-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope: Deactivated successfully.
Feb 25 13:48:27 compute-0 sudo[416720]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:27 compute-0 sudo[416846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:48:27 compute-0 sudo[416846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:27 compute-0 sudo[416846]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:27 compute-0 sudo[416871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:48:27 compute-0 sudo[416871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
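
The sudo COMMAND above shows how the mgr drives ceph-volume: it invokes a versioned cephadm wrapper script under /var/lib/ceph/<fsid>/ with python3, and everything after the bare "--" is passed through to ceph-volume inside a disposable container. A minimal reproduction of that call in Python, with the fsid, image, and wrapper path copied from the log line (the variable names are illustrative only):

    # Sketch: invoke the same cephadm wrapper the mgr uses and capture the
    # JSON listing. All paths/digests below are copied from the log.
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    WRAPPER = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    listing = subprocess.run(
        ["sudo", "/bin/python3", WRAPPER, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(listing)
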
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.010172839 +0000 UTC m=+0.060618781 container create a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:27.981563057 +0000 UTC m=+0.032009019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:28 compute-0 systemd[1]: Started libpod-conmon-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope.
Feb 25 13:48:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.172574031 +0000 UTC m=+0.223020033 container init a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.181170568 +0000 UTC m=+0.231616550 container start a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:28 compute-0 peaceful_euclid[416925]: 167 167
Feb 25 13:48:28 compute-0 systemd[1]: libpod-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope: Deactivated successfully.
Feb 25 13:48:28 compute-0 conmon[416925]: conmon a9d065eae17d8a583646 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope/container/memory.events
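
The conmon <nwarn> above is, as far as I can tell, a benign race: this one-shot helper container exited so quickly that systemd tore down its scope cgroup before conmon could read memory.events for OOM accounting. The read can be reproduced by hand while a scope still exists; a sketch, assuming cgroup v2 with the machine.slice layout shown in the warning:

    # Sketch: read memory.events for a libpod scope, tolerating the same
    # disappearing-cgroup race that conmon hit.
    from pathlib import Path

    scope = ("libpod-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479"
             "d9f6afeb73475088.scope")
    p = Path("/sys/fs/cgroup/machine.slice") / scope / "container" / "memory.events"
    try:
        print(p.read_text())
    except FileNotFoundError:
        print("cgroup already gone; the container exited first")
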
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.246597856 +0000 UTC m=+0.297043898 container attach a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.249227372 +0000 UTC m=+0.299673374 container died a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:48:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6459034f7da4d54f3e08b42abc8eccf36606954d8319cace49b19af788687cf2-merged.mount: Deactivated successfully.
Feb 25 13:48:28 compute-0 podman[416909]: 2026-02-25 13:48:28.681472661 +0000 UTC m=+0.731918643 container remove a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:48:28 compute-0 systemd[1]: libpod-conmon-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope: Deactivated successfully.
Feb 25 13:48:28 compute-0 podman[416951]: 2026-02-25 13:48:28.898103141 +0000 UTC m=+0.060189989 container create 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:48:28 compute-0 podman[416951]: 2026-02-25 13:48:28.864740803 +0000 UTC m=+0.026827731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:28 compute-0 systemd[1]: Started libpod-conmon-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope.
Feb 25 13:48:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
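
The four kernel xfs lines are informational, not errors: an xfs filesystem formatted without the bigtime feature can only represent timestamps up to 2038-01-19 (0x7fffffff seconds since the epoch), and the kernel notes this each time a bind mount into the container's overlay is remounted. A quick check for whether a given xfs mount has bigtime enabled; assumes xfs_info from xfsprogs is installed, and that newer xfsprogs prints a "bigtime=" field (older versions omit it):

    # Sketch: report whether an xfs mount supports post-2038 timestamps.
    import subprocess

    info = subprocess.run(["xfs_info", "/var"], check=True,
                          capture_output=True, text=True).stdout
    print("post-2038 timestamps supported" if "bigtime=1" in info
          else "timestamps capped at 2038 (bigtime off or not reported)")
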
Feb 25 13:48:29 compute-0 podman[416951]: 2026-02-25 13:48:29.049501287 +0000 UTC m=+0.211588135 container init 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 13:48:29 compute-0 podman[416951]: 2026-02-25 13:48:29.058710722 +0000 UTC m=+0.220797590 container start 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:48:29 compute-0 podman[416951]: 2026-02-25 13:48:29.065680142 +0000 UTC m=+0.227766990 container attach 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:48:29 compute-0 laughing_jemison[416968]: {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     "0": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "devices": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "/dev/loop3"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             ],
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_name": "ceph_lv0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_size": "21470642176",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "name": "ceph_lv0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "tags": {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_name": "ceph",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.crush_device_class": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.encrypted": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.objectstore": "bluestore",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_id": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.vdo": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.with_tpm": "0"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             },
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "vg_name": "ceph_vg0"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         }
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     ],
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     "1": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "devices": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "/dev/loop4"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             ],
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_name": "ceph_lv1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_size": "21470642176",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "name": "ceph_lv1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "tags": {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_name": "ceph",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.crush_device_class": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.encrypted": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.objectstore": "bluestore",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_id": "1",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.vdo": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.with_tpm": "0"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             },
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "vg_name": "ceph_vg1"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         }
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     ],
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     "2": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "devices": [
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "/dev/loop5"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             ],
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_name": "ceph_lv2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_size": "21470642176",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "name": "ceph_lv2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "tags": {
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.cluster_name": "ceph",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.crush_device_class": "",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.encrypted": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.objectstore": "bluestore",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osd_id": "2",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.vdo": "0",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:                 "ceph.with_tpm": "0"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             },
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "type": "block",
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:             "vg_name": "ceph_vg2"
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:         }
Feb 25 13:48:29 compute-0 laughing_jemison[416968]:     ]
Feb 25 13:48:29 compute-0 laughing_jemison[416968]: }
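
The block above is the complete stdout of "ceph-volume lvm list --format json": a mapping from OSD id ("0", "1", "2") to a list of logical volumes, each carrying its ceph.* LV tags (cluster fsid, osd fsid, objectstore type, and so on). A minimal parser for that shape, using only fields present in the output above:

    # Sketch: summarize a ceph-volume "lvm list --format json" document.
    import json

    def summarize(doc: str) -> None:
        for osd_id, lvs in sorted(json.loads(doc).items(),
                                  key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"osd_fsid={tags['ceph.osd_fsid']} "
                      f"objectstore={tags['ceph.objectstore']}")

    # Usage: summarize(listing), where listing is the captured stdout from
    # the wrapper invocation sketched further up.
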
Feb 25 13:48:29 compute-0 systemd[1]: libpod-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope: Deactivated successfully.
Feb 25 13:48:29 compute-0 podman[416951]: 2026-02-25 13:48:29.357099708 +0000 UTC m=+0.519186556 container died 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434-merged.mount: Deactivated successfully.
Feb 25 13:48:29 compute-0 podman[416951]: 2026-02-25 13:48:29.406459645 +0000 UTC m=+0.568546483 container remove 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 13:48:29 compute-0 systemd[1]: libpod-conmon-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope: Deactivated successfully.
Feb 25 13:48:29 compute-0 sudo[416871]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:29 compute-0 sudo[416991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:48:29 compute-0 sudo[416991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:29 compute-0 sudo[416991]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:29 compute-0 sudo[417016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:48:29 compute-0 sudo[417016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:29 compute-0 ceph-mon[76335]: pgmap v3857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:29 compute-0 nova_compute[244014]: 2026-02-25 13:48:29.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:29 compute-0 nova_compute[244014]: 2026-02-25 13:48:29.902 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:29 compute-0 podman[417053]: 2026-02-25 13:48:29.903887836 +0000 UTC m=+0.045521178 container create fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:48:29 compute-0 systemd[1]: Started libpod-conmon-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope.
Feb 25 13:48:29 compute-0 podman[417053]: 2026-02-25 13:48:29.88522641 +0000 UTC m=+0.026859712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:30 compute-0 podman[417053]: 2026-02-25 13:48:30.034784254 +0000 UTC m=+0.176417576 container init fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:48:30 compute-0 podman[417053]: 2026-02-25 13:48:30.042432494 +0000 UTC m=+0.184065806 container start fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:48:30 compute-0 distracted_elgamal[417069]: 167 167
Feb 25 13:48:30 compute-0 systemd[1]: libpod-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope: Deactivated successfully.
Feb 25 13:48:30 compute-0 podman[417053]: 2026-02-25 13:48:30.048435676 +0000 UTC m=+0.190069008 container attach fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:48:30 compute-0 podman[417053]: 2026-02-25 13:48:30.050129915 +0000 UTC m=+0.191763217 container died fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba04d0a548b4e04f0447706d4a2eec0253fe07cda8976d72238164945091ef8-merged.mount: Deactivated successfully.
Feb 25 13:48:30 compute-0 podman[417053]: 2026-02-25 13:48:30.105306619 +0000 UTC m=+0.246939921 container remove fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:48:30 compute-0 systemd[1]: libpod-conmon-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope: Deactivated successfully.
Feb 25 13:48:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:30 compute-0 podman[417093]: 2026-02-25 13:48:30.258415154 +0000 UTC m=+0.044378315 container create bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:48:30 compute-0 systemd[1]: Started libpod-conmon-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope.
Feb 25 13:48:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:48:30 compute-0 podman[417093]: 2026-02-25 13:48:30.233336054 +0000 UTC m=+0.019299205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:48:30 compute-0 podman[417093]: 2026-02-25 13:48:30.356454299 +0000 UTC m=+0.142417510 container init bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:48:30 compute-0 podman[417093]: 2026-02-25 13:48:30.365342694 +0000 UTC m=+0.151305865 container start bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:48:30 compute-0 podman[417093]: 2026-02-25 13:48:30.370326147 +0000 UTC m=+0.156289358 container attach bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:48:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:31 compute-0 lvm[417188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:48:31 compute-0 lvm[417188]: VG ceph_vg0 finished
Feb 25 13:48:31 compute-0 lvm[417189]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:48:31 compute-0 lvm[417189]: VG ceph_vg1 finished
Feb 25 13:48:31 compute-0 lvm[417191]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:48:31 compute-0 lvm[417191]: VG ceph_vg2 finished
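
The three lvm[] pairs above are event-driven autoactivation: as each loop device's PV is observed, LVM reports it online, and once every PV of a volume group is present the VG is declared complete and processed, hence each "online" line being followed by a "finished" line. The same state can be inspected directly; a sketch assuming lvm2's JSON report output (--reportformat json, present in recent releases):

    # Sketch: confirm each ceph VG has its PVs and LVs present.
    import json
    import subprocess

    out = subprocess.run(
        ["sudo", "vgs", "-o", "vg_name,pv_count,lv_count",
         "--reportformat", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "pvs:", vg["pv_count"], "lvs:", vg["lv_count"])
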
Feb 25 13:48:31 compute-0 focused_hertz[417110]: {}
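
focused_hertz printing "{}" is the result of the "raw list --format json" call dispatched at 13:48:29: an empty object, because no OSD on this host was prepared in raw mode; all three live on LVM and show up only in the "lvm list" output above. cephadm runs both listings so it can account for either kind, and combining them is trivial; a sketch (the stand-in arguments below mirror the shapes logged, not the full content):

    # Sketch: merge the two ceph-volume listings the way a caller might.
    import json

    def count_osds(lvm_json: str, raw_json: str) -> None:
        """lvm_json/raw_json: stdout of "lvm list" and "raw list"."""
        lvm_doc, raw_doc = json.loads(lvm_json), json.loads(raw_json)
        print(f"{len(set(lvm_doc) | set(raw_doc))} OSD(s); "
              f"{len(raw_doc)} raw-mode, {len(lvm_doc)} LVM-mode")

    count_osds('{"0": [], "1": [], "2": []}', "{}")  # shapes as logged
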
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:48:31
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
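
This balancer pass is a no-op: in upmap mode, within the 5% max-misplaced budget, it evaluated the eleven listed pools and prepared 0 of a possible 10 upmap changes, consistent with all 305 PGs being active+clean on a small three-OSD cluster. The balancer's view can be queried directly with the stock mgr commands; a sketch:

    # Sketch: query balancer state and the current distribution score.
    import subprocess

    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        print(subprocess.run(cmd, check=True, capture_output=True,
                             text=True).stdout)
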
Feb 25 13:48:31 compute-0 systemd[1]: libpod-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Deactivated successfully.
Feb 25 13:48:31 compute-0 systemd[1]: libpod-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Consumed 1.332s CPU time.
Feb 25 13:48:31 compute-0 podman[417093]: 2026-02-25 13:48:31.269995325 +0000 UTC m=+1.055958496 container died bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396-merged.mount: Deactivated successfully.
Feb 25 13:48:31 compute-0 podman[417093]: 2026-02-25 13:48:31.320130385 +0000 UTC m=+1.106093556 container remove bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:48:31 compute-0 systemd[1]: libpod-conmon-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Deactivated successfully.
Feb 25 13:48:31 compute-0 sudo[417016]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:48:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:48:31 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:31 compute-0 sudo[417206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:48:31 compute-0 sudo[417206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:48:31 compute-0 sudo[417206]: pam_unix(sudo:session): session closed for user root
Feb 25 13:48:31 compute-0 ceph-mon[76335]: pgmap v3858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:48:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:48:32 compute-0 nova_compute[244014]: 2026-02-25 13:48:32.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:48:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:48:33 compute-0 ceph-mon[76335]: pgmap v3859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:33 compute-0 nova_compute[244014]: 2026-02-25 13:48:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:33 compute-0 nova_compute[244014]: 2026-02-25 13:48:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:48:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:34 compute-0 nova_compute[244014]: 2026-02-25 13:48:34.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:35 compute-0 ceph-mon[76335]: pgmap v3860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:36 compute-0 nova_compute[244014]: 2026-02-25 13:48:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:37 compute-0 nova_compute[244014]: 2026-02-25 13:48:37.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:37 compute-0 ceph-mon[76335]: pgmap v3861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:37 compute-0 nova_compute[244014]: 2026-02-25 13:48:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:39 compute-0 ceph-mon[76335]: pgmap v3862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:39 compute-0 nova_compute[244014]: 2026-02-25 13:48:39.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:40 compute-0 podman[417231]: 2026-02-25 13:48:40.713100181 +0000 UTC m=+0.058976994 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 13:48:40 compute-0 podman[417232]: 2026-02-25 13:48:40.784135601 +0000 UTC m=+0.124217778 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
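
The two podman health_status lines above are periodic healthchecks defined in each container's config_data: podman mounts the healthcheck directory at /openstack and runs the /openstack/healthcheck test, recording health_status=healthy with a failing streak of 0. The same check can be triggered on demand with podman's healthcheck subcommand; a sketch, with the container names taken from the log:

    # Sketch: run the podman healthchecks by hand; exit code 0 == healthy.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["sudo", "podman", "healthcheck", "run", name]
                            ).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
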
Feb 25 13:48:40 compute-0 nova_compute[244014]: 2026-02-25 13:48:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:41 compute-0 ceph-mon[76335]: pgmap v3863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:41 compute-0 nova_compute[244014]: 2026-02-25 13:48:41.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:48:42 compute-0 nova_compute[244014]: 2026-02-25 13:48:42.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:43 compute-0 ceph-mon[76335]: pgmap v3864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:48:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:48:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:44 compute-0 nova_compute[244014]: 2026-02-25 13:48:44.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:45 compute-0 ceph-mon[76335]: pgmap v3865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:47 compute-0 nova_compute[244014]: 2026-02-25 13:48:47.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:48:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:48:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:48:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:48:47 compute-0 ceph-mon[76335]: pgmap v3866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:48:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:48:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:49 compute-0 ceph-mon[76335]: pgmap v3867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:49 compute-0 nova_compute[244014]: 2026-02-25 13:48:49.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:51 compute-0 ceph-mon[76335]: pgmap v3868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:52 compute-0 nova_compute[244014]: 2026-02-25 13:48:52.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:53 compute-0 ceph-mon[76335]: pgmap v3869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:54 compute-0 nova_compute[244014]: 2026-02-25 13:48:54.867 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.094 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:48:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.094 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:48:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:48:55 compute-0 ceph-mon[76335]: pgmap v3870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:57 compute-0 nova_compute[244014]: 2026-02-25 13:48:57.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:48:57 compute-0 ceph-mon[76335]: pgmap v3871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:59 compute-0 ceph-mon[76335]: pgmap v3872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:48:59 compute-0 nova_compute[244014]: 2026-02-25 13:48:59.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:00 compute-0 nova_compute[244014]: 2026-02-25 13:49:00.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:01 compute-0 ceph-mon[76335]: pgmap v3873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:01 compute-0 nova_compute[244014]: 2026-02-25 13:49:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:02 compute-0 nova_compute[244014]: 2026-02-25 13:49:02.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:03 compute-0 ceph-mon[76335]: pgmap v3874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:04 compute-0 nova_compute[244014]: 2026-02-25 13:49:04.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:05 compute-0 ceph-mon[76335]: pgmap v3875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:06 compute-0 ceph-mon[76335]: pgmap v3876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:07 compute-0 nova_compute[244014]: 2026-02-25 13:49:07.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:09 compute-0 ceph-mon[76335]: pgmap v3877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:09 compute-0 nova_compute[244014]: 2026-02-25 13:49:09.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:11 compute-0 podman[417278]: 2026-02-25 13:49:11.744766262 +0000 UTC m=+0.076152548 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 13:49:11 compute-0 ceph-mon[76335]: pgmap v3878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:11 compute-0 podman[417279]: 2026-02-25 13:49:11.786584402 +0000 UTC m=+0.108412413 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:49:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:12 compute-0 nova_compute[244014]: 2026-02-25 13:49:12.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:49:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 80K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1310 writes, 5692 keys, 1310 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
                                           Interval WAL: 1310 writes, 1310 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     49.4      1.98              0.27        60    0.033       0      0       0.0       0.0
                                             L6      1/0    8.90 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4     98.9     84.7      6.25              1.58        59    0.106    425K    31K       0.0       0.0
                                            Sum      1/0    8.90 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     75.2     76.2      8.22              1.86       119    0.069    425K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.2     55.2     55.5      0.89              0.19         8    0.111     39K   2040       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     98.9     84.7      6.25              1.58        59    0.106    425K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     49.5      1.97              0.27        59    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.095, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.61 GB write, 0.09 MB/s write, 0.60 GB read, 0.09 MB/s read, 8.2 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 70.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000815 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4355,67.12 MB,22.0781%) FilterBlock(120,1.15 MB,0.376987%) IndexBlock(120,1.82 MB,0.598099%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 13:49:13 compute-0 ceph-mon[76335]: pgmap v3879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:14 compute-0 nova_compute[244014]: 2026-02-25 13:49:14.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:15 compute-0 ceph-mon[76335]: pgmap v3880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:15 compute-0 nova_compute[244014]: 2026-02-25 13:49:15.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:16 compute-0 ceph-mon[76335]: pgmap v3881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:17 compute-0 nova_compute[244014]: 2026-02-25 13:49:17.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:19 compute-0 ceph-mon[76335]: pgmap v3882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:20 compute-0 nova_compute[244014]: 2026-02-25 13:49:20.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:21 compute-0 ceph-mon[76335]: pgmap v3883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:21 compute-0 nova_compute[244014]: 2026-02-25 13:49:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:21 compute-0 nova_compute[244014]: 2026-02-25 13:49:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:49:21 compute-0 nova_compute[244014]: 2026-02-25 13:49:21.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:49:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:49:22 compute-0 nova_compute[244014]: 2026-02-25 13:49:22.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:49:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:49:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003319857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.473 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:49:23 compute-0 ceph-mon[76335]: pgmap v3884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4003319857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.629 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.631 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.685 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:49:23 compute-0 nova_compute[244014]: 2026-02-25 13:49:23.702 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:49:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:49:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713972298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.213 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.220 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.234 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.237 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.237 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.238 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:24 compute-0 nova_compute[244014]: 2026-02-25 13:49:24.239 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:49:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1713972298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:49:25 compute-0 nova_compute[244014]: 2026-02-25 13:49:25.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:25 compute-0 ceph-mon[76335]: pgmap v3885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:26 compute-0 nova_compute[244014]: 2026-02-25 13:49:26.248 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:26 compute-0 nova_compute[244014]: 2026-02-25 13:49:26.249 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:49:26 compute-0 nova_compute[244014]: 2026-02-25 13:49:26.249 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:49:26 compute-0 nova_compute[244014]: 2026-02-25 13:49:26.269 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:49:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:27 compute-0 nova_compute[244014]: 2026-02-25 13:49:27.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:27 compute-0 ceph-mon[76335]: pgmap v3886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:29 compute-0 sshd-session[417367]: Received disconnect from 45.148.10.151 port 2242:11:  [preauth]
Feb 25 13:49:29 compute-0 sshd-session[417367]: Disconnected from authenticating user root 45.148.10.151 port 2242 [preauth]
Feb 25 13:49:29 compute-0 ceph-mon[76335]: pgmap v3887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:29 compute-0 nova_compute[244014]: 2026-02-25 13:49:29.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:30 compute-0 nova_compute[244014]: 2026-02-25 13:49:30.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:49:31
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'vms']
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:49:31 compute-0 ceph-mon[76335]: pgmap v3888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:31 compute-0 sudo[417369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:49:31 compute-0 sudo[417369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:31 compute-0 sudo[417369]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:31 compute-0 sudo[417394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:49:31 compute-0 sudo[417394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:49:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:49:32 compute-0 sudo[417394]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:49:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:49:32 compute-0 sudo[417450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:49:32 compute-0 sudo[417450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:32 compute-0 sudo[417450]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:32 compute-0 sudo[417475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:49:32 compute-0 sudo[417475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:32 compute-0 nova_compute[244014]: 2026-02-25 13:49:32.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:49:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.610624713 +0000 UTC m=+0.033186304 container create bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:49:32 compute-0 systemd[1]: Started libpod-conmon-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope.
Feb 25 13:49:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.595628962 +0000 UTC m=+0.018190593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.705191098 +0000 UTC m=+0.127752719 container init bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.713057624 +0000 UTC m=+0.135619235 container start bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.716605696 +0000 UTC m=+0.139167297 container attach bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:49:32 compute-0 systemd[1]: libpod-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope: Deactivated successfully.
Feb 25 13:49:32 compute-0 serene_liskov[417527]: 167 167
Feb 25 13:49:32 compute-0 conmon[417527]: conmon bbfa3a978b13ca2b7e5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope/container/memory.events
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.719406666 +0000 UTC m=+0.141968267 container died bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 13:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-18122d899c3e0a774677654341a4559f8375506253ab79aba0b8d269d11bcb09-merged.mount: Deactivated successfully.
Feb 25 13:49:32 compute-0 podman[417511]: 2026-02-25 13:49:32.75786813 +0000 UTC m=+0.180429731 container remove bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 13:49:32 compute-0 systemd[1]: libpod-conmon-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope: Deactivated successfully.
Feb 25 13:49:32 compute-0 podman[417549]: 2026-02-25 13:49:32.923442874 +0000 UTC m=+0.060650772 container create 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:49:32 compute-0 systemd[1]: Started libpod-conmon-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope.
Feb 25 13:49:32 compute-0 podman[417549]: 2026-02-25 13:49:32.890536789 +0000 UTC m=+0.027744787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:33 compute-0 podman[417549]: 2026-02-25 13:49:33.037771226 +0000 UTC m=+0.174979164 container init 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:49:33 compute-0 podman[417549]: 2026-02-25 13:49:33.052852659 +0000 UTC m=+0.190060557 container start 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:49:33 compute-0 podman[417549]: 2026-02-25 13:49:33.056671949 +0000 UTC m=+0.193879847 container attach 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 13:49:33 compute-0 beautiful_cori[417565]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:49:33 compute-0 beautiful_cori[417565]: --> All data devices are unavailable
Feb 25 13:49:33 compute-0 systemd[1]: libpod-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope: Deactivated successfully.
Feb 25 13:49:33 compute-0 podman[417549]: 2026-02-25 13:49:33.50048662 +0000 UTC m=+0.637694548 container died 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:49:33 compute-0 ceph-mon[76335]: pgmap v3889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd-merged.mount: Deactivated successfully.
Feb 25 13:49:33 compute-0 podman[417549]: 2026-02-25 13:49:33.576992627 +0000 UTC m=+0.714200555 container remove 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:49:33 compute-0 systemd[1]: libpod-conmon-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope: Deactivated successfully.
Feb 25 13:49:33 compute-0 sudo[417475]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:33 compute-0 sudo[417596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:49:33 compute-0 sudo[417596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:33 compute-0 sudo[417596]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:33 compute-0 sudo[417621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:49:33 compute-0 sudo[417621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:33 compute-0 nova_compute[244014]: 2026-02-25 13:49:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:33 compute-0 nova_compute[244014]: 2026-02-25 13:49:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.00345536 +0000 UTC m=+0.039236097 container create 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:49:34 compute-0 systemd[1]: Started libpod-conmon-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope.
Feb 25 13:49:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:33.987127101 +0000 UTC m=+0.022907848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.088833631 +0000 UTC m=+0.124614358 container init 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.094831904 +0000 UTC m=+0.130612601 container start 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.098037986 +0000 UTC m=+0.133818703 container attach 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:49:34 compute-0 fervent_lamarr[417675]: 167 167
Feb 25 13:49:34 compute-0 systemd[1]: libpod-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope: Deactivated successfully.
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.09993111 +0000 UTC m=+0.135711847 container died 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f2e79767b05bd29bde5f655772d3ce7fb8d741d9a358c96e0b8975366026efe-merged.mount: Deactivated successfully.
Feb 25 13:49:34 compute-0 podman[417659]: 2026-02-25 13:49:34.144562031 +0000 UTC m=+0.180342728 container remove 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 13:49:34 compute-0 systemd[1]: libpod-conmon-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope: Deactivated successfully.
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.308783236 +0000 UTC m=+0.045966571 container create 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 13:49:34 compute-0 systemd[1]: Started libpod-conmon-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope.
Feb 25 13:49:34 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.290525102 +0000 UTC m=+0.027708467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.399630554 +0000 UTC m=+0.136813899 container init 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.412939316 +0000 UTC m=+0.150122651 container start 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:49:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.416633982 +0000 UTC m=+0.153817337 container attach 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:49:34 compute-0 stoic_snyder[417715]: {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     "0": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "devices": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "/dev/loop3"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             ],
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_name": "ceph_lv0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_size": "21470642176",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "name": "ceph_lv0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "tags": {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_name": "ceph",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.crush_device_class": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.encrypted": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.objectstore": "bluestore",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_id": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.vdo": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.with_tpm": "0"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             },
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "vg_name": "ceph_vg0"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         }
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     ],
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     "1": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "devices": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "/dev/loop4"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             ],
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_name": "ceph_lv1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_size": "21470642176",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "name": "ceph_lv1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "tags": {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_name": "ceph",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.crush_device_class": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.encrypted": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.objectstore": "bluestore",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_id": "1",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.vdo": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.with_tpm": "0"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             },
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "vg_name": "ceph_vg1"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         }
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     ],
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     "2": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "devices": [
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "/dev/loop5"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             ],
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_name": "ceph_lv2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_size": "21470642176",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "name": "ceph_lv2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "tags": {
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.cluster_name": "ceph",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.crush_device_class": "",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.encrypted": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.objectstore": "bluestore",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osd_id": "2",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.vdo": "0",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:                 "ceph.with_tpm": "0"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             },
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "type": "block",
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:             "vg_name": "ceph_vg2"
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:         }
Feb 25 13:49:34 compute-0 stoic_snyder[417715]:     ]
Feb 25 13:49:34 compute-0 stoic_snyder[417715]: }
Feb 25 13:49:34 compute-0 systemd[1]: libpod-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope: Deactivated successfully.
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.697207847 +0000 UTC m=+0.434391202 container died 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8-merged.mount: Deactivated successfully.
Feb 25 13:49:34 compute-0 podman[417698]: 2026-02-25 13:49:34.739816841 +0000 UTC m=+0.477000176 container remove 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:49:34 compute-0 systemd[1]: libpod-conmon-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope: Deactivated successfully.
Feb 25 13:49:34 compute-0 sudo[417621]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:34 compute-0 sudo[417737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:49:34 compute-0 sudo[417737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:34 compute-0 sudo[417737]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:34 compute-0 sudo[417762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:49:34 compute-0 sudo[417762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:35 compute-0 nova_compute[244014]: 2026-02-25 13:49:35.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.250026498 +0000 UTC m=+0.053829687 container create 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:49:35 compute-0 systemd[1]: Started libpod-conmon-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope.
Feb 25 13:49:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.23062206 +0000 UTC m=+0.034425269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.331397884 +0000 UTC m=+0.135201073 container init 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.337430767 +0000 UTC m=+0.141233956 container start 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.340378251 +0000 UTC m=+0.144181440 container attach 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:49:35 compute-0 trusting_vaughan[417815]: 167 167
Feb 25 13:49:35 compute-0 systemd[1]: libpod-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope: Deactivated successfully.
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.342682298 +0000 UTC m=+0.146485547 container died 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 13:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a6cba1b78c944d9fccb3909d431ed03821f52fc5a82e714c37d4d850b1d9f4-merged.mount: Deactivated successfully.
Feb 25 13:49:35 compute-0 podman[417799]: 2026-02-25 13:49:35.382064568 +0000 UTC m=+0.185867757 container remove 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 13:49:35 compute-0 systemd[1]: libpod-conmon-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope: Deactivated successfully.
Feb 25 13:49:35 compute-0 ceph-mon[76335]: pgmap v3890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:35 compute-0 podman[417840]: 2026-02-25 13:49:35.559707598 +0000 UTC m=+0.058921742 container create 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:49:35 compute-0 systemd[1]: Started libpod-conmon-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope.
Feb 25 13:49:35 compute-0 podman[417840]: 2026-02-25 13:49:35.535620317 +0000 UTC m=+0.034834541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:49:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:49:35 compute-0 podman[417840]: 2026-02-25 13:49:35.65623715 +0000 UTC m=+0.155451304 container init 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:49:35 compute-0 podman[417840]: 2026-02-25 13:49:35.669108449 +0000 UTC m=+0.168322583 container start 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:49:35 compute-0 podman[417840]: 2026-02-25 13:49:35.674180345 +0000 UTC m=+0.173394519 container attach 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:49:36 compute-0 lvm[417936]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:49:36 compute-0 lvm[417936]: VG ceph_vg0 finished
Feb 25 13:49:36 compute-0 lvm[417937]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:49:36 compute-0 lvm[417937]: VG ceph_vg1 finished
Feb 25 13:49:36 compute-0 lvm[417939]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:49:36 compute-0 lvm[417939]: VG ceph_vg2 finished
Feb 25 13:49:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:36 compute-0 elated_tharp[417857]: {}
Feb 25 13:49:36 compute-0 systemd[1]: libpod-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Deactivated successfully.
Feb 25 13:49:36 compute-0 systemd[1]: libpod-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Consumed 1.282s CPU time.
Feb 25 13:49:36 compute-0 podman[417840]: 2026-02-25 13:49:36.529056388 +0000 UTC m=+1.028270562 container died 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3-merged.mount: Deactivated successfully.
Feb 25 13:49:36 compute-0 podman[417840]: 2026-02-25 13:49:36.591095769 +0000 UTC m=+1.090309933 container remove 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:49:36 compute-0 systemd[1]: libpod-conmon-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Deactivated successfully.
Feb 25 13:49:36 compute-0 sudo[417762]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:49:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:49:36 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:36 compute-0 sudo[417956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:49:36 compute-0 sudo[417956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:49:36 compute-0 sudo[417956]: pam_unix(sudo:session): session closed for user root
Feb 25 13:49:36 compute-0 nova_compute[244014]: 2026-02-25 13:49:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:37 compute-0 nova_compute[244014]: 2026-02-25 13:49:37.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:37 compute-0 ceph-mon[76335]: pgmap v3891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:49:37 compute-0 nova_compute[244014]: 2026-02-25 13:49:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:39 compute-0 ceph-mon[76335]: pgmap v3892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:40 compute-0 nova_compute[244014]: 2026-02-25 13:49:40.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:40 compute-0 nova_compute[244014]: 2026-02-25 13:49:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:41 compute-0 ceph-mon[76335]: pgmap v3893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:42 compute-0 nova_compute[244014]: 2026-02-25 13:49:42.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:42 compute-0 podman[417982]: 2026-02-25 13:49:42.73693681 +0000 UTC m=+0.071844194 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:49:42 compute-0 podman[417983]: 2026-02-25 13:49:42.793989198 +0000 UTC m=+0.121635003 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 13:49:42 compute-0 nova_compute[244014]: 2026-02-25 13:49:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:49:43 compute-0 ceph-mon[76335]: pgmap v3894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:49:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:49:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:44 compute-0 nova_compute[244014]: 2026-02-25 13:49:44.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:49:45 compute-0 nova_compute[244014]: 2026-02-25 13:49:45.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:45 compute-0 ceph-mon[76335]: pgmap v3895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:49:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:49:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:49:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:49:47 compute-0 nova_compute[244014]: 2026-02-25 13:49:47.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:47 compute-0 ceph-mon[76335]: pgmap v3896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:49:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:49:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:49 compute-0 ceph-mon[76335]: pgmap v3897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:50 compute-0 nova_compute[244014]: 2026-02-25 13:49:50.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:51 compute-0 ceph-mon[76335]: pgmap v3898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:52 compute-0 nova_compute[244014]: 2026-02-25 13:49:52.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:53 compute-0 ceph-mon[76335]: pgmap v3899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.095 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.095 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:49:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:49:55 compute-0 nova_compute[244014]: 2026-02-25 13:49:55.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:49:55 compute-0 ceph-mon[76335]: pgmap v3900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:57 compute-0 nova_compute[244014]: 2026-02-25 13:49:57.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:49:57 compute-0 ceph-mon[76335]: pgmap v3901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:49:59 compute-0 ceph-mon[76335]: pgmap v3902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:00 compute-0 nova_compute[244014]: 2026-02-25 13:50:00.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:01 compute-0 ceph-mon[76335]: pgmap v3903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:02 compute-0 nova_compute[244014]: 2026-02-25 13:50:02.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:03 compute-0 ceph-mon[76335]: pgmap v3904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:05 compute-0 nova_compute[244014]: 2026-02-25 13:50:05.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:05 compute-0 ceph-mon[76335]: pgmap v3905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:07 compute-0 nova_compute[244014]: 2026-02-25 13:50:07.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:07 compute-0 ceph-mon[76335]: pgmap v3906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:09 compute-0 ceph-mon[76335]: pgmap v3907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.450610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409450665, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1815, "num_deletes": 251, "total_data_size": 3053461, "memory_usage": 3101296, "flush_reason": "Manual Compaction"}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409471029, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2991160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79918, "largest_seqno": 81732, "table_properties": {"data_size": 2982765, "index_size": 5208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16852, "raw_average_key_size": 20, "raw_value_size": 2966066, "raw_average_value_size": 3522, "num_data_blocks": 232, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027211, "oldest_key_time": 1772027211, "file_creation_time": 1772027409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 20510 microseconds, and 9534 cpu microseconds.
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.471112) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2991160 bytes OK
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.471151) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473479) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473506) EVENT_LOG_v1 {"time_micros": 1772027409473498, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3045737, prev total WAL file size 3045737, number of live WAL files 2.
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.474515) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2921KB)], [194(9109KB)]
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409474597, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12318839, "oldest_snapshot_seqno": -1}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9509 keys, 10570270 bytes, temperature: kUnknown
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409536366, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10570270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10512081, "index_size": 33308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250709, "raw_average_key_size": 26, "raw_value_size": 10347423, "raw_average_value_size": 1088, "num_data_blocks": 1277, "num_entries": 9509, "num_filter_entries": 9509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.536843) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10570270 bytes
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.538040) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.1 rd, 170.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 10026, records dropped: 517 output_compression: NoCompression
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.538065) EVENT_LOG_v1 {"time_micros": 1772027409538053, "job": 122, "event": "compaction_finished", "compaction_time_micros": 61883, "compaction_time_cpu_micros": 26509, "output_level": 6, "num_output_files": 1, "total_output_size": 10570270, "num_input_records": 10026, "num_output_records": 9509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409538751, "job": 122, "event": "table_file_deletion", "file_number": 196}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409540423, "job": 122, "event": "table_file_deletion", "file_number": 194}
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.474378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:09 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:50:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:10 compute-0 nova_compute[244014]: 2026-02-25 13:50:10.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:11 compute-0 ceph-mon[76335]: pgmap v3908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:12 compute-0 nova_compute[244014]: 2026-02-25 13:50:12.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:13 compute-0 ceph-mon[76335]: pgmap v3909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:13 compute-0 podman[418029]: 2026-02-25 13:50:13.739875832 +0000 UTC m=+0.073547892 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Feb 25 13:50:13 compute-0 podman[418030]: 2026-02-25 13:50:13.776642608 +0000 UTC m=+0.104250734 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:15 compute-0 nova_compute[244014]: 2026-02-25 13:50:15.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:15 compute-0 ceph-mon[76335]: pgmap v3910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:50:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:50:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:17 compute-0 ceph-mon[76335]: pgmap v3911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:17 compute-0 nova_compute[244014]: 2026-02-25 13:50:17.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:17 compute-0 nova_compute[244014]: 2026-02-25 13:50:17.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:50:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:50:19 compute-0 ceph-mon[76335]: pgmap v3912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:20 compute-0 nova_compute[244014]: 2026-02-25 13:50:20.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:21 compute-0 ceph-mon[76335]: pgmap v3913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:50:22 compute-0 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:50:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:50:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015985271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.517 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:50:23 compute-0 ceph-mon[76335]: pgmap v3914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1015985271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.704 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.706 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.707 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.707 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.880 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.882 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:50:23 compute-0 nova_compute[244014]: 2026-02-25 13:50:23.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:50:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:50:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1033871979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:50:24 compute-0 nova_compute[244014]: 2026-02-25 13:50:24.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:50:24 compute-0 nova_compute[244014]: 2026-02-25 13:50:24.476 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:50:24 compute-0 nova_compute[244014]: 2026-02-25 13:50:24.496 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:50:24 compute-0 nova_compute[244014]: 2026-02-25 13:50:24.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:50:24 compute-0 nova_compute[244014]: 2026-02-25 13:50:24.498 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:50:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1033871979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:50:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:50:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 13:50:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:25 compute-0 nova_compute[244014]: 2026-02-25 13:50:25.247 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:25 compute-0 ceph-mon[76335]: pgmap v3915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:26 compute-0 nova_compute[244014]: 2026-02-25 13:50:26.499 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:26 compute-0 nova_compute[244014]: 2026-02-25 13:50:26.500 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:50:26 compute-0 nova_compute[244014]: 2026-02-25 13:50:26.500 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:50:26 compute-0 nova_compute[244014]: 2026-02-25 13:50:26.524 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:50:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 13:50:27 compute-0 ceph-mon[76335]: pgmap v3916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:27 compute-0 nova_compute[244014]: 2026-02-25 13:50:27.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:29 compute-0 ceph-mon[76335]: pgmap v3917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:29 compute-0 nova_compute[244014]: 2026-02-25 13:50:29.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:30 compute-0 nova_compute[244014]: 2026-02-25 13:50:30.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:50:31
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'vms']
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:50:31 compute-0 ceph-mon[76335]: pgmap v3918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:50:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:50:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:50:32 compute-0 nova_compute[244014]: 2026-02-25 13:50:32.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:33 compute-0 ceph-mon[76335]: pgmap v3919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:34 compute-0 nova_compute[244014]: 2026-02-25 13:50:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:34 compute-0 nova_compute[244014]: 2026-02-25 13:50:34.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:50:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:35 compute-0 nova_compute[244014]: 2026-02-25 13:50:35.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:35 compute-0 ceph-mon[76335]: pgmap v3920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:36 compute-0 sudo[418117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:50:36 compute-0 sudo[418117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:36 compute-0 sudo[418117]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:36 compute-0 sudo[418142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:50:36 compute-0 sudo[418142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:37 compute-0 sudo[418142]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:50:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:50:37 compute-0 sudo[418200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:50:37 compute-0 sudo[418200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:37 compute-0 sudo[418200]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:37 compute-0 sudo[418225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:50:37 compute-0 ceph-mon[76335]: pgmap v3921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:50:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:50:37 compute-0 sudo[418225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:37 compute-0 nova_compute[244014]: 2026-02-25 13:50:37.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:37 compute-0 podman[418262]: 2026-02-25 13:50:37.947255404 +0000 UTC m=+0.113691275 container create 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 13:50:37 compute-0 podman[418262]: 2026-02-25 13:50:37.858076904 +0000 UTC m=+0.024512815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:37 compute-0 systemd[1]: Started libpod-conmon-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope.
Feb 25 13:50:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:38 compute-0 podman[418262]: 2026-02-25 13:50:38.049002606 +0000 UTC m=+0.215438497 container init 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:50:38 compute-0 podman[418262]: 2026-02-25 13:50:38.058770956 +0000 UTC m=+0.225206827 container start 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:50:38 compute-0 podman[418262]: 2026-02-25 13:50:38.0620609 +0000 UTC m=+0.228496771 container attach 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:50:38 compute-0 angry_wu[418279]: 167 167
Feb 25 13:50:38 compute-0 systemd[1]: libpod-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope: Deactivated successfully.
Feb 25 13:50:38 compute-0 podman[418262]: 2026-02-25 13:50:38.06830552 +0000 UTC m=+0.234741411 container died 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d957ff41396857cfafcfc983ab6ad07c98c701bff37145827c977431279970bd-merged.mount: Deactivated successfully.
Feb 25 13:50:38 compute-0 podman[418262]: 2026-02-25 13:50:38.125504072 +0000 UTC m=+0.291939943 container remove 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:50:38 compute-0 systemd[1]: libpod-conmon-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope: Deactivated successfully.
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.294237436 +0000 UTC m=+0.057052749 container create f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:50:38 compute-0 systemd[1]: Started libpod-conmon-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope.
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.268771815 +0000 UTC m=+0.031587098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.40791403 +0000 UTC m=+0.170729413 container init f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.420729838 +0000 UTC m=+0.183545151 container start f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.458124801 +0000 UTC m=+0.220940154 container attach f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:50:38 compute-0 nova_compute[244014]: 2026-02-25 13:50:38.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:38 compute-0 nova_compute[244014]: 2026-02-25 13:50:38.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:38 compute-0 cool_chaum[418319]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:50:38 compute-0 cool_chaum[418319]: --> All data devices are unavailable
Feb 25 13:50:38 compute-0 systemd[1]: libpod-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope: Deactivated successfully.
Feb 25 13:50:38 compute-0 podman[418302]: 2026-02-25 13:50:38.954018188 +0000 UTC m=+0.716833461 container died f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815-merged.mount: Deactivated successfully.
Feb 25 13:50:39 compute-0 podman[418302]: 2026-02-25 13:50:39.008051569 +0000 UTC m=+0.770866852 container remove f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:50:39 compute-0 systemd[1]: libpod-conmon-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope: Deactivated successfully.
Feb 25 13:50:39 compute-0 sudo[418225]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:39 compute-0 sudo[418352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:50:39 compute-0 sudo[418352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:39 compute-0 sudo[418352]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:39 compute-0 sudo[418377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:50:39 compute-0 sudo[418377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.475489238 +0000 UTC m=+0.047719601 container create 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:50:39 compute-0 systemd[1]: Started libpod-conmon-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope.
Feb 25 13:50:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.451259703 +0000 UTC m=+0.023490146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.553609161 +0000 UTC m=+0.125839574 container init 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.568439977 +0000 UTC m=+0.140670380 container start 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:50:39 compute-0 hopeful_noether[418430]: 167 167
Feb 25 13:50:39 compute-0 systemd[1]: libpod-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope: Deactivated successfully.
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.573148592 +0000 UTC m=+0.145379045 container attach 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.574313245 +0000 UTC m=+0.146543648 container died 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d941f70cb23bd14b2418997dcdf1d42bb041ea8555644c2bac83604a99287dd9-merged.mount: Deactivated successfully.
Feb 25 13:50:39 compute-0 podman[418414]: 2026-02-25 13:50:39.621100139 +0000 UTC m=+0.193330542 container remove 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 13:50:39 compute-0 systemd[1]: libpod-conmon-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope: Deactivated successfully.
Feb 25 13:50:39 compute-0 ceph-mon[76335]: pgmap v3922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:39 compute-0 podman[418457]: 2026-02-25 13:50:39.791294125 +0000 UTC m=+0.048542165 container create bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:50:39 compute-0 systemd[1]: Started libpod-conmon-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope.
Feb 25 13:50:39 compute-0 podman[418457]: 2026-02-25 13:50:39.764846075 +0000 UTC m=+0.022094165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:39 compute-0 podman[418457]: 2026-02-25 13:50:39.901680944 +0000 UTC m=+0.158929014 container init bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:50:39 compute-0 podman[418457]: 2026-02-25 13:50:39.90780861 +0000 UTC m=+0.165056610 container start bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:50:39 compute-0 podman[418457]: 2026-02-25 13:50:39.911564768 +0000 UTC m=+0.168812778 container attach bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 13:50:40 compute-0 crazy_williams[418473]: {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     "0": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "devices": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "/dev/loop3"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             ],
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_name": "ceph_lv0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_size": "21470642176",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "name": "ceph_lv0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "tags": {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_name": "ceph",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.crush_device_class": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.encrypted": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.objectstore": "bluestore",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_id": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.vdo": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.with_tpm": "0"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             },
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "vg_name": "ceph_vg0"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         }
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     ],
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     "1": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "devices": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "/dev/loop4"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             ],
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_name": "ceph_lv1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_size": "21470642176",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "name": "ceph_lv1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "tags": {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_name": "ceph",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.crush_device_class": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.encrypted": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.objectstore": "bluestore",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_id": "1",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.vdo": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.with_tpm": "0"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             },
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "vg_name": "ceph_vg1"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         }
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     ],
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     "2": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "devices": [
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "/dev/loop5"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             ],
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_name": "ceph_lv2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_size": "21470642176",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "name": "ceph_lv2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "tags": {
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.cluster_name": "ceph",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.crush_device_class": "",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.encrypted": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.objectstore": "bluestore",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osd_id": "2",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.vdo": "0",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:                 "ceph.with_tpm": "0"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             },
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "type": "block",
Feb 25 13:50:40 compute-0 crazy_williams[418473]:             "vg_name": "ceph_vg2"
Feb 25 13:50:40 compute-0 crazy_williams[418473]:         }
Feb 25 13:50:40 compute-0 crazy_williams[418473]:     ]
Feb 25 13:50:40 compute-0 crazy_williams[418473]: }
Feb 25 13:50:40 compute-0 systemd[1]: libpod-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope: Deactivated successfully.
Feb 25 13:50:40 compute-0 podman[418457]: 2026-02-25 13:50:40.201948554 +0000 UTC m=+0.459196554 container died bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15-merged.mount: Deactivated successfully.
Feb 25 13:50:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:40 compute-0 podman[418457]: 2026-02-25 13:50:40.247852592 +0000 UTC m=+0.505100592 container remove bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:40 compute-0 systemd[1]: libpod-conmon-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope: Deactivated successfully.
Feb 25 13:50:40 compute-0 nova_compute[244014]: 2026-02-25 13:50:40.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:40 compute-0 sudo[418377]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:40 compute-0 sudo[418494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:50:40 compute-0 sudo[418494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:40 compute-0 sudo[418494]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:40 compute-0 sudo[418519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:50:40 compute-0 sudo[418519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:40 compute-0 podman[418556]: 2026-02-25 13:50:40.722008035 +0000 UTC m=+0.055362080 container create a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:50:40 compute-0 systemd[1]: Started libpod-conmon-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope.
Feb 25 13:50:40 compute-0 podman[418556]: 2026-02-25 13:50:40.696444611 +0000 UTC m=+0.029798726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:40 compute-0 podman[418556]: 2026-02-25 13:50:40.824360904 +0000 UTC m=+0.157715029 container init a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 25 13:50:40 compute-0 podman[418556]: 2026-02-25 13:50:40.834285528 +0000 UTC m=+0.167639543 container start a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:50:40 compute-0 podman[418556]: 2026-02-25 13:50:40.83780503 +0000 UTC m=+0.171159075 container attach a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:50:40 compute-0 agitated_payne[418573]: 167 167
Feb 25 13:50:40 compute-0 systemd[1]: libpod-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope: Deactivated successfully.
Feb 25 13:50:40 compute-0 podman[418578]: 2026-02-25 13:50:40.896755202 +0000 UTC m=+0.035415728 container died a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-87576b35e9d88143f705d93babfe7d5513bb57ba7fe82a1bf0df3f9bf2a78b70-merged.mount: Deactivated successfully.
Feb 25 13:50:40 compute-0 podman[418578]: 2026-02-25 13:50:40.937755789 +0000 UTC m=+0.076416255 container remove a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:50:40 compute-0 systemd[1]: libpod-conmon-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope: Deactivated successfully.
Feb 25 13:50:41 compute-0 podman[418600]: 2026-02-25 13:50:41.124522071 +0000 UTC m=+0.041044919 container create bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:50:41 compute-0 systemd[1]: Started libpod-conmon-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope.
Feb 25 13:50:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:50:41 compute-0 podman[418600]: 2026-02-25 13:50:41.106208255 +0000 UTC m=+0.022731123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:50:41 compute-0 podman[418600]: 2026-02-25 13:50:41.206666379 +0000 UTC m=+0.123189247 container init bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:50:41 compute-0 podman[418600]: 2026-02-25 13:50:41.219731584 +0000 UTC m=+0.136254432 container start bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:50:41 compute-0 podman[418600]: 2026-02-25 13:50:41.223572424 +0000 UTC m=+0.140095272 container attach bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:50:41 compute-0 ceph-mon[76335]: pgmap v3923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:41 compute-0 lvm[418696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:50:41 compute-0 lvm[418696]: VG ceph_vg1 finished
Feb 25 13:50:41 compute-0 lvm[418695]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:50:41 compute-0 lvm[418695]: VG ceph_vg0 finished
Feb 25 13:50:41 compute-0 lvm[418698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:50:41 compute-0 lvm[418698]: VG ceph_vg2 finished
Feb 25 13:50:41 compute-0 nova_compute[244014]: 2026-02-25 13:50:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:41 compute-0 quirky_payne[418616]: {}
Feb 25 13:50:42 compute-0 systemd[1]: libpod-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Deactivated successfully.
Feb 25 13:50:42 compute-0 systemd[1]: libpod-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Consumed 1.148s CPU time.
Feb 25 13:50:42 compute-0 podman[418600]: 2026-02-25 13:50:42.014268565 +0000 UTC m=+0.930791413 container died bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 13:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0-merged.mount: Deactivated successfully.
Feb 25 13:50:42 compute-0 podman[418600]: 2026-02-25 13:50:42.062959233 +0000 UTC m=+0.979482081 container remove bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:50:42 compute-0 systemd[1]: libpod-conmon-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Deactivated successfully.
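[annotation] The pull -> init -> start -> attach -> died -> remove sequence above, all inside about one second, is the event chain podman emits for a one-shot container; cephadm launches such short-lived containers from the ceph image to probe the host (the only output of quirky_payne was the empty JSON object "{}"). A minimal Python sketch of that lifecycle, assuming podman is on PATH; which subcommand the container actually ran is not recorded here, so the ceph-volume call below is purely illustrative:

    import subprocess

    # Image digest copied from the log; the probe command is an assumption.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def run_probe(args):
        # --rm reproduces the event chain seen above: attach, run to
        # completion ("died"), then automatic removal of the container.
        cmd = ["podman", "run", "--rm", IMAGE] + args
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    print(run_probe(["ceph-volume", "inventory", "--format", "json"]))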
Feb 25 13:50:42 compute-0 sudo[418519]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:50:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:50:42 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:42 compute-0 sudo[418712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:50:42 compute-0 sudo[418712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:50:42 compute-0 sudo[418712]: pam_unix(sudo:session): session closed for user root
Feb 25 13:50:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:42 compute-0 nova_compute[244014]: 2026-02-25 13:50:42.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:50:43 compute-0 ceph-mon[76335]: pgmap v3924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:50:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
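[annotation] Every pg_autoscaler line above fits pg_target = capacity_ratio * bias * 300, where 300 is presumably mon_target_pg_per_osd (default 100) times the three OSDs in this cluster: 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337, exactly the '.mgr' target. The target is then quantized to a power of two, and by default pg_num is only changed when the ideal value differs from the current one by roughly a factor of three, which is why every pool stays at its current count. A sketch reproducing two of the logged targets; the factor of 300 is an inference, only the ratios and biases come from the log:

    # pg_autoscaler arithmetic as observed above.
    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd * num_osds

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PGS

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr -> 0.0021557249951162337
    print(pg_target(1.3916366864300228e-06, 4.0))  # cephfs.cephfs.meta -> 0.0016699640237160273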
Feb 25 13:50:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:44 compute-0 podman[418737]: 2026-02-25 13:50:44.726436628 +0000 UTC m=+0.065934994 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 25 13:50:44 compute-0 podman[418738]: 2026-02-25 13:50:44.767529138 +0000 UTC m=+0.106141528 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 13:50:44 compute-0 nova_compute[244014]: 2026-02-25 13:50:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:50:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:45 compute-0 nova_compute[244014]: 2026-02-25 13:50:45.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:45 compute-0 ceph-mon[76335]: pgmap v3925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:47 compute-0 ceph-mon[76335]: pgmap v3926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:50:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:50:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:50:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:50:48 compute-0 nova_compute[244014]: 2026-02-25 13:50:48.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:50:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:50:49 compute-0 ceph-mon[76335]: pgmap v3927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:50 compute-0 nova_compute[244014]: 2026-02-25 13:50:50.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:51 compute-0 ceph-mon[76335]: pgmap v3928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:53 compute-0 nova_compute[244014]: 2026-02-25 13:50:53.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:53 compute-0 ceph-mon[76335]: pgmap v3929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:50:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:50:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:50:55 compute-0 nova_compute[244014]: 2026-02-25 13:50:55.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:55 compute-0 ceph-mon[76335]: pgmap v3930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:57 compute-0 ceph-mon[76335]: pgmap v3931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:58 compute-0 nova_compute[244014]: 2026-02-25 13:50:58.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:50:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:50:59 compute-0 ceph-mon[76335]: pgmap v3932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:00 compute-0 nova_compute[244014]: 2026-02-25 13:51:00.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:01 compute-0 ceph-mon[76335]: pgmap v3933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:03 compute-0 nova_compute[244014]: 2026-02-25 13:51:03.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:03 compute-0 ceph-mon[76335]: pgmap v3934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:04 compute-0 nova_compute[244014]: 2026-02-25 13:51:04.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:05 compute-0 nova_compute[244014]: 2026-02-25 13:51:05.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:05 compute-0 ceph-mon[76335]: pgmap v3935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:07 compute-0 ceph-mon[76335]: pgmap v3936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:08 compute-0 nova_compute[244014]: 2026-02-25 13:51:08.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:09 compute-0 ceph-mon[76335]: pgmap v3937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:10 compute-0 nova_compute[244014]: 2026-02-25 13:51:10.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:11 compute-0 ceph-mon[76335]: pgmap v3938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:13 compute-0 nova_compute[244014]: 2026-02-25 13:51:13.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:13 compute-0 ceph-mon[76335]: pgmap v3939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:15 compute-0 nova_compute[244014]: 2026-02-25 13:51:15.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:15 compute-0 ceph-mon[76335]: pgmap v3940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:15 compute-0 podman[418784]: 2026-02-25 13:51:15.735996175 +0000 UTC m=+0.075563410 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:51:15 compute-0 podman[418785]: 2026-02-25 13:51:15.808206268 +0000 UTC m=+0.137027385 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:51:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:17 compute-0 ceph-mon[76335]: pgmap v3941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:18 compute-0 nova_compute[244014]: 2026-02-25 13:51:18.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:19 compute-0 ceph-mon[76335]: pgmap v3942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:19 compute-0 nova_compute[244014]: 2026-02-25 13:51:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:20 compute-0 nova_compute[244014]: 2026-02-25 13:51:20.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:21 compute-0 ceph-mon[76335]: pgmap v3943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:23 compute-0 ceph-mon[76335]: pgmap v3944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:51:23 compute-0 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:51:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:51:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053274885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.448 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
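[annotation] The half-second subprocess above is nova's RBD image backend asking the cluster for capacity: it shells out to ceph df as client.openstack, and the mon audit lines show the matching dispatch. A sketch of consuming that output, assuming the usual `ceph df --format=json` schema (a top-level "stats" object with total/avail byte counters); verify the keys against your release:

    import json
    import subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        stats = json.loads(out)["stats"]
        return stats["total_avail_bytes"] / 2**30

    print(f"{ceph_free_gib():.0f} GiB free")  # ~59 GiB on this cluster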
Feb 25 13:51:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.590 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.591 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.591 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.592 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:51:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2053274885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.802 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.902 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.903 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.919 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.943 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:51:24 compute-0 nova_compute[244014]: 2026-02-25 13:51:24.960 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:51:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:51:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077855961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.514 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.520 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.541 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.542 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:51:25 compute-0 nova_compute[244014]: 2026-02-25 13:51:25.542 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
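[annotation] The inventory dict the resource tracker pushes to placement becomes schedulable capacity as (total - reserved) * allocation_ratio per resource class, so the values logged above yield 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. Worked out with the logged numbers:

    # Effective placement capacity from the inventory logged above:
    # (total - reserved) * allocation_ratio for each resource class.
    INVENTORY = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in INVENTORY.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2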
Feb 25 13:51:25 compute-0 ceph-mon[76335]: pgmap v3945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 13:51:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1077855961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:51:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:51:26 compute-0 nova_compute[244014]: 2026-02-25 13:51:26.542 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:26 compute-0 nova_compute[244014]: 2026-02-25 13:51:26.543 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:51:26 compute-0 nova_compute[244014]: 2026-02-25 13:51:26.543 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:51:26 compute-0 nova_compute[244014]: 2026-02-25 13:51:26.558 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:51:27 compute-0 ceph-mon[76335]: pgmap v3946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 13:51:28 compute-0 nova_compute[244014]: 2026-02-25 13:51:28.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:29 compute-0 ceph-mon[76335]: pgmap v3947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:29 compute-0 nova_compute[244014]: 2026-02-25 13:51:29.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:30 compute-0 nova_compute[244014]: 2026-02-25 13:51:30.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:51:31
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:51:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:51:31 compute-0 ceph-mon[76335]: pgmap v3948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:51:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:51:33 compute-0 nova_compute[244014]: 2026-02-25 13:51:33.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:33 compute-0 ceph-mon[76335]: pgmap v3949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 13:51:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:51:34 compute-0 nova_compute[244014]: 2026-02-25 13:51:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:34 compute-0 nova_compute[244014]: 2026-02-25 13:51:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:51:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:35 compute-0 nova_compute[244014]: 2026-02-25 13:51:35.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:35 compute-0 ceph-mon[76335]: pgmap v3950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:51:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:51:37 compute-0 ceph-mon[76335]: pgmap v3951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 13:51:38 compute-0 nova_compute[244014]: 2026-02-25 13:51:38.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 13:51:38 compute-0 nova_compute[244014]: 2026-02-25 13:51:38.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:39 compute-0 ceph-mon[76335]: pgmap v3952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 13:51:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:40 compute-0 nova_compute[244014]: 2026-02-25 13:51:40.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:40 compute-0 nova_compute[244014]: 2026-02-25 13:51:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:41 compute-0 ceph-mon[76335]: pgmap v3953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:41 compute-0 nova_compute[244014]: 2026-02-25 13:51:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:42 compute-0 sudo[418873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:51:42 compute-0 sudo[418873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:42 compute-0 sudo[418873]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:42 compute-0 sudo[418898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:51:42 compute-0 sudo[418898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:42 compute-0 sudo[418898]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:42 compute-0 sudo[418954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:51:42 compute-0 sudo[418954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:42 compute-0 sudo[418954]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:43 compute-0 sudo[418979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 25 13:51:43 compute-0 sudo[418979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:43 compute-0 sudo[418979]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:51:43 compute-0 nova_compute[244014]: 2026-02-25 13:51:43.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:51:43 compute-0 sudo[419022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:51:43 compute-0 sudo[419022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:43 compute-0 sudo[419022]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:43 compute-0 sudo[419047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:51:43 compute-0 sudo[419047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.729401311 +0000 UTC m=+0.038575039 container create 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:51:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:51:43 compute-0 systemd[1]: Started libpod-conmon-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope.
Feb 25 13:51:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.80706577 +0000 UTC m=+0.116239518 container init 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.712291639 +0000 UTC m=+0.021465367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.816326426 +0000 UTC m=+0.125500154 container start 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.820145476 +0000 UTC m=+0.129319224 container attach 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:51:43 compute-0 priceless_lovelace[419101]: 167 167
Feb 25 13:51:43 compute-0 systemd[1]: libpod-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope: Deactivated successfully.
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.824240003 +0000 UTC m=+0.133413741 container died 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:51:43 compute-0 ceph-mon[76335]: pgmap v3954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:51:43 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e55a14074da5eee28cf4baec77129003b82a0695e87049d0512a5e5dc270201-merged.mount: Deactivated successfully.
Feb 25 13:51:43 compute-0 podman[419084]: 2026-02-25 13:51:43.866811125 +0000 UTC m=+0.175984853 container remove 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:51:43 compute-0 systemd[1]: libpod-conmon-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope: Deactivated successfully.
Feb 25 13:51:43 compute-0 podman[419126]: 2026-02-25 13:51:43.998335631 +0000 UTC m=+0.041135823 container create f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:51:44 compute-0 systemd[1]: Started libpod-conmon-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope.
Feb 25 13:51:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:43.982083134 +0000 UTC m=+0.024883346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:44.081609561 +0000 UTC m=+0.124409773 container init f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:44.087201542 +0000 UTC m=+0.130001734 container start f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:44.091164736 +0000 UTC m=+0.133964928 container attach f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 13:51:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:44 compute-0 hardcore_yonath[419143]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:51:44 compute-0 hardcore_yonath[419143]: --> All data devices are unavailable
Feb 25 13:51:44 compute-0 systemd[1]: libpod-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope: Deactivated successfully.
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:44.617467725 +0000 UTC m=+0.660267927 container died f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b-merged.mount: Deactivated successfully.
Feb 25 13:51:44 compute-0 podman[419126]: 2026-02-25 13:51:44.671528497 +0000 UTC m=+0.714328689 container remove f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:51:44 compute-0 systemd[1]: libpod-conmon-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope: Deactivated successfully.
Feb 25 13:51:44 compute-0 sudo[419047]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:44 compute-0 sudo[419173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:51:44 compute-0 sudo[419173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:44 compute-0 sudo[419173]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:44 compute-0 sudo[419198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:51:44 compute-0 sudo[419198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.149918672 +0000 UTC m=+0.064619476 container create fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:51:45 compute-0 systemd[1]: Started libpod-conmon-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope.
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.122894236 +0000 UTC m=+0.037595130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.241017067 +0000 UTC m=+0.155717891 container init fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.251633662 +0000 UTC m=+0.166334486 container start fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.256158532 +0000 UTC m=+0.170859356 container attach fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:51:45 compute-0 relaxed_proskuriakova[419251]: 167 167
Feb 25 13:51:45 compute-0 systemd[1]: libpod-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope: Deactivated successfully.
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.258747856 +0000 UTC m=+0.173448690 container died fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:51:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-befe559a2a5510f2ea65676191490b5a17dc8545a63fbfb6756f3ebbcd77c092-merged.mount: Deactivated successfully.
Feb 25 13:51:45 compute-0 podman[419235]: 2026-02-25 13:51:45.306022223 +0000 UTC m=+0.220723027 container remove fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 13:51:45 compute-0 systemd[1]: libpod-conmon-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope: Deactivated successfully.
Feb 25 13:51:45 compute-0 nova_compute[244014]: 2026-02-25 13:51:45.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.469921249 +0000 UTC m=+0.045437326 container create 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:51:45 compute-0 systemd[1]: Started libpod-conmon-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope.
Feb 25 13:51:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.546488997 +0000 UTC m=+0.122005094 container init 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.449956346 +0000 UTC m=+0.025472453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.553115047 +0000 UTC m=+0.128631124 container start 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.556813503 +0000 UTC m=+0.132329590 container attach 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:51:45 compute-0 charming_tharp[419294]: {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     "0": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "devices": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "/dev/loop3"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             ],
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_name": "ceph_lv0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_size": "21470642176",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "name": "ceph_lv0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "tags": {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_name": "ceph",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.crush_device_class": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.encrypted": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.objectstore": "bluestore",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_id": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.vdo": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.with_tpm": "0"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             },
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "vg_name": "ceph_vg0"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         }
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     ],
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     "1": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "devices": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "/dev/loop4"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             ],
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_name": "ceph_lv1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_size": "21470642176",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "name": "ceph_lv1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "tags": {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_name": "ceph",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.crush_device_class": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.encrypted": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.objectstore": "bluestore",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_id": "1",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.vdo": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.with_tpm": "0"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             },
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "vg_name": "ceph_vg1"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         }
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     ],
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     "2": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "devices": [
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "/dev/loop5"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             ],
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_name": "ceph_lv2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_size": "21470642176",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "name": "ceph_lv2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "tags": {
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.cluster_name": "ceph",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.crush_device_class": "",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.encrypted": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.objectstore": "bluestore",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osd_id": "2",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.vdo": "0",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:                 "ceph.with_tpm": "0"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             },
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "type": "block",
Feb 25 13:51:45 compute-0 charming_tharp[419294]:             "vg_name": "ceph_vg2"
Feb 25 13:51:45 compute-0 charming_tharp[419294]:         }
Feb 25 13:51:45 compute-0 charming_tharp[419294]:     ]
Feb 25 13:51:45 compute-0 charming_tharp[419294]: }
Feb 25 13:51:45 compute-0 ceph-mon[76335]: pgmap v3955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:45 compute-0 systemd[1]: libpod-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope: Deactivated successfully.
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.861232933 +0000 UTC m=+0.436749040 container died 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96-merged.mount: Deactivated successfully.
Feb 25 13:51:45 compute-0 podman[419277]: 2026-02-25 13:51:45.912774243 +0000 UTC m=+0.488290310 container remove 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:51:45 compute-0 systemd[1]: libpod-conmon-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope: Deactivated successfully.
Feb 25 13:51:45 compute-0 podman[419304]: 2026-02-25 13:51:45.969940814 +0000 UTC m=+0.074623843 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 13:51:45 compute-0 sudo[419198]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:45 compute-0 podman[419312]: 2026-02-25 13:51:45.996065444 +0000 UTC m=+0.100776204 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:51:46 compute-0 sudo[419359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:51:46 compute-0 sudo[419359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:46 compute-0 sudo[419359]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:46 compute-0 sudo[419384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:51:46 compute-0 sudo[419384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.408803674 +0000 UTC m=+0.050478521 container create 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 13:51:46 compute-0 systemd[1]: Started libpod-conmon-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope.
Feb 25 13:51:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.392939528 +0000 UTC m=+0.034614395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.496598804 +0000 UTC m=+0.138273671 container init 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.50166349 +0000 UTC m=+0.143338347 container start 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.505110459 +0000 UTC m=+0.146785306 container attach 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:51:46 compute-0 angry_diffie[419438]: 167 167
Feb 25 13:51:46 compute-0 systemd[1]: libpod-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope: Deactivated successfully.
Feb 25 13:51:46 compute-0 conmon[419438]: conmon 9ad0dcfad8afc1cc4dad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope/container/memory.events
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.508151946 +0000 UTC m=+0.149826803 container died 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-00df8332409d441124d67236865c589afac612e46a048fca8d3dfe35d1957edb-merged.mount: Deactivated successfully.
Feb 25 13:51:46 compute-0 podman[419422]: 2026-02-25 13:51:46.552485979 +0000 UTC m=+0.194160866 container remove 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:51:46 compute-0 systemd[1]: libpod-conmon-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope: Deactivated successfully.
Feb 25 13:51:46 compute-0 podman[419462]: 2026-02-25 13:51:46.687923787 +0000 UTC m=+0.035672575 container create e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:51:46 compute-0 systemd[1]: Started libpod-conmon-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope.
Feb 25 13:51:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:51:46 compute-0 podman[419462]: 2026-02-25 13:51:46.672674759 +0000 UTC m=+0.020423567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:51:46 compute-0 podman[419462]: 2026-02-25 13:51:46.781784002 +0000 UTC m=+0.129532810 container init e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:51:46 compute-0 podman[419462]: 2026-02-25 13:51:46.788489044 +0000 UTC m=+0.136237852 container start e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:51:46 compute-0 podman[419462]: 2026-02-25 13:51:46.791658945 +0000 UTC m=+0.139407733 container attach e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:51:46 compute-0 nova_compute[244014]: 2026-02-25 13:51:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:51:47 compute-0 lvm[419556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:51:47 compute-0 lvm[419556]: VG ceph_vg0 finished
Feb 25 13:51:47 compute-0 lvm[419557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:51:47 compute-0 lvm[419557]: VG ceph_vg1 finished
Feb 25 13:51:47 compute-0 lvm[419559]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:51:47 compute-0 lvm[419559]: VG ceph_vg2 finished
Feb 25 13:51:47 compute-0 boring_newton[419478]: {}
Feb 25 13:51:47 compute-0 systemd[1]: libpod-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Deactivated successfully.
Feb 25 13:51:47 compute-0 systemd[1]: libpod-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Consumed 1.161s CPU time.
Feb 25 13:51:47 compute-0 podman[419462]: 2026-02-25 13:51:47.558376106 +0000 UTC m=+0.906124904 container died e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:51:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661-merged.mount: Deactivated successfully.
Feb 25 13:51:47 compute-0 podman[419462]: 2026-02-25 13:51:47.601644198 +0000 UTC m=+0.949392996 container remove e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:51:47 compute-0 systemd[1]: libpod-conmon-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Deactivated successfully.
Feb 25 13:51:47 compute-0 sudo[419384]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:51:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:47 compute-0 sudo[419573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:51:47 compute-0 sudo[419573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:51:47 compute-0 sudo[419573]: pam_unix(sudo:session): session closed for user root
Feb 25 13:51:47 compute-0 ceph-mon[76335]: pgmap v3956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:51:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:51:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:51:48 compute-0 nova_compute[244014]: 2026-02-25 13:51:48.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:49 compute-0 ceph-mon[76335]: pgmap v3957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:50 compute-0 nova_compute[244014]: 2026-02-25 13:51:50.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:51 compute-0 ceph-mon[76335]: pgmap v3958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:52 compute-0 ceph-mon[76335]: pgmap v3959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:53 compute-0 nova_compute[244014]: 2026-02-25 13:51:53.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:51:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:51:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:51:55 compute-0 nova_compute[244014]: 2026-02-25 13:51:55.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:55 compute-0 ceph-mon[76335]: pgmap v3960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:57 compute-0 ceph-mon[76335]: pgmap v3961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:58 compute-0 nova_compute[244014]: 2026-02-25 13:51:58.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:51:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:51:59 compute-0 ceph-mon[76335]: pgmap v3962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:00 compute-0 nova_compute[244014]: 2026-02-25 13:52:00.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:01 compute-0 ceph-mon[76335]: pgmap v3963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:03 compute-0 nova_compute[244014]: 2026-02-25 13:52:03.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:03 compute-0 ceph-mon[76335]: pgmap v3964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:05 compute-0 nova_compute[244014]: 2026-02-25 13:52:05.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:05 compute-0 ceph-mon[76335]: pgmap v3965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:07 compute-0 ceph-mon[76335]: pgmap v3966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:08 compute-0 nova_compute[244014]: 2026-02-25 13:52:08.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:09 compute-0 ceph-mon[76335]: pgmap v3967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:10 compute-0 nova_compute[244014]: 2026-02-25 13:52:10.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:11 compute-0 ceph-mon[76335]: pgmap v3968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:13 compute-0 nova_compute[244014]: 2026-02-25 13:52:13.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:13 compute-0 ceph-mon[76335]: pgmap v3969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:15 compute-0 nova_compute[244014]: 2026-02-25 13:52:15.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:15 compute-0 ceph-mon[76335]: pgmap v3970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:16 compute-0 podman[419598]: 2026-02-25 13:52:16.743181496 +0000 UTC m=+0.080373853 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 13:52:16 compute-0 podman[419599]: 2026-02-25 13:52:16.787061076 +0000 UTC m=+0.124341436 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:52:17 compute-0 ceph-mon[76335]: pgmap v3971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:18 compute-0 nova_compute[244014]: 2026-02-25 13:52:18.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:19 compute-0 ceph-mon[76335]: pgmap v3972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:20 compute-0 nova_compute[244014]: 2026-02-25 13:52:20.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:20 compute-0 nova_compute[244014]: 2026-02-25 13:52:20.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:21 compute-0 ceph-mon[76335]: pgmap v3973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:23 compute-0 nova_compute[244014]: 2026-02-25 13:52:23.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:23 compute-0 ceph-mon[76335]: pgmap v3974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:52:24 compute-0 nova_compute[244014]: 2026-02-25 13:52:24.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:52:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:52:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263051956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.610 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.761 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.851 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.852 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:52:25 compute-0 nova_compute[244014]: 2026-02-25 13:52:25.872 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:52:25 compute-0 ceph-mon[76335]: pgmap v3975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3263051956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:52:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:52:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878544754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:52:26 compute-0 nova_compute[244014]: 2026-02-25 13:52:26.483 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:52:26 compute-0 nova_compute[244014]: 2026-02-25 13:52:26.490 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:52:26 compute-0 nova_compute[244014]: 2026-02-25 13:52:26.513 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:52:26 compute-0 nova_compute[244014]: 2026-02-25 13:52:26.515 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:52:26 compute-0 nova_compute[244014]: 2026-02-25 13:52:26.515 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:52:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2878544754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:52:26 compute-0 ceph-mon[76335]: pgmap v3976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:28 compute-0 nova_compute[244014]: 2026-02-25 13:52:28.516 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:28 compute-0 nova_compute[244014]: 2026-02-25 13:52:28.517 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:52:28 compute-0 nova_compute[244014]: 2026-02-25 13:52:28.518 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:52:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:28 compute-0 nova_compute[244014]: 2026-02-25 13:52:28.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:28 compute-0 nova_compute[244014]: 2026-02-25 13:52:28.585 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:52:29 compute-0 ceph-mon[76335]: pgmap v3977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:29 compute-0 nova_compute[244014]: 2026-02-25 13:52:29.941 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:30 compute-0 nova_compute[244014]: 2026-02-25 13:52:30.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:52:31
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes', '.rgw.root']
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:52:31 compute-0 ceph-mon[76335]: pgmap v3978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:52:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:52:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:52:33 compute-0 nova_compute[244014]: 2026-02-25 13:52:33.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:33 compute-0 ceph-mon[76335]: pgmap v3979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:34 compute-0 nova_compute[244014]: 2026-02-25 13:52:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:34 compute-0 nova_compute[244014]: 2026-02-25 13:52:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:52:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:35 compute-0 nova_compute[244014]: 2026-02-25 13:52:35.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:35 compute-0 ceph-mon[76335]: pgmap v3980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:37 compute-0 ceph-mon[76335]: pgmap v3981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:38 compute-0 nova_compute[244014]: 2026-02-25 13:52:38.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:39 compute-0 ceph-mon[76335]: pgmap v3982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:39 compute-0 nova_compute[244014]: 2026-02-25 13:52:39.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:40 compute-0 nova_compute[244014]: 2026-02-25 13:52:40.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:41 compute-0 ceph-mon[76335]: pgmap v3983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:41 compute-0 nova_compute[244014]: 2026-02-25 13:52:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:43 compute-0 nova_compute[244014]: 2026-02-25 13:52:43.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:43 compute-0 ceph-mon[76335]: pgmap v3984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:52:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
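[annotation] The pg_autoscaler lines above are one `_maybe_adjust` pass: per pool, the raw PG target is capacity ratio x bias x (OSD count x mon_target_pg_per_osd), then quantized to the nearest power of two with a per-pool floor (pg_num_min: 32 by default, 16 on the CephFS metadata pool, 1 on '.mgr'). A minimal sketch of that arithmetic, assuming the default mon_target_pg_per_osd=100 and the three OSDs this host provisions; an illustration of what the log prints, not the module's code:

    import math

    PG_NUM_MIN = 32  # default floor unless the pool sets pg_num_min itself

    def quantized_pg_target(capacity_ratio, bias, osd_count=3,
                            target_pg_per_osd=100, pg_num_min=PG_NUM_MIN):
        raw = capacity_ratio * bias * osd_count * target_pg_per_osd
        # raw == 0.0 (empty pools like 'volumes') falls through to the floor
        nearest_pow2 = 2 ** round(math.log2(raw)) if raw > 0 else 0
        return max(pg_num_min, nearest_pow2)

    # 'images' line: 0.0006714637... * 1.0 * 300 ~= 0.2014 -> 32
    print(quantized_pg_target(0.0006714637386478266, 1.0))
    # 'cephfs.cephfs.meta' line: bias 4.0, floor 16 -> 16
    print(quantized_pg_target(1.3916366864300228e-06, 4.0, pg_num_min=16))
    # '.mgr' line: floor 1 -> 1
    print(quantized_pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))

Every quantized value matches the pool's current pg_num, so this pass changes nothing.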
Feb 25 13:52:43 compute-0 nova_compute[244014]: 2026-02-25 13:52:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:45 compute-0 nova_compute[244014]: 2026-02-25 13:52:45.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:45 compute-0 ceph-mon[76335]: pgmap v3985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:52:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:52:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:52:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:52:47 compute-0 ceph-mon[76335]: pgmap v3986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:52:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
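[annotation] The two audited mon_commands are a librados client (entity 'client.openstack', likely an OpenStack storage service polling pool capacity and quota) issuing `df` and `osd pool get-quota`. A sketch of the same calls through the python3-rados binding, assuming /etc/ceph/ceph.conf and a client.openstack keyring are readable on this host; illustration only:

    import json
    import rados  # python3-rados, shipped alongside ceph-common

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON command string and an input buffer,
            # returning (retcode, output bytes, status string)
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], "->", json.loads(out or b'{}'))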
Feb 25 13:52:47 compute-0 podman[419687]: 2026-02-25 13:52:47.730530164 +0000 UTC m=+0.065314158 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 13:52:47 compute-0 podman[419688]: 2026-02-25 13:52:47.794881023 +0000 UTC m=+0.124733837 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:52:47 compute-0 sudo[419731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:52:47 compute-0 sudo[419731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:47 compute-0 sudo[419731]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:47 compute-0 nova_compute[244014]: 2026-02-25 13:52:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:52:47 compute-0 sudo[419756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:52:47 compute-0 sudo[419756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:48 compute-0 sudo[419756]: pam_unix(sudo:session): session closed for user root
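[annotation] The cephadm `gather-facts` call sudo'd above emits a JSON document of host facts that the mgr/cephadm orchestrator caches. A minimal way to run and trim it by hand, assuming a `cephadm` binary on PATH (the log invokes its extracted copy under /var/lib/ceph/<fsid>/ instead); the key names follow cephadm's fact gatherer and may vary by release:

    import json
    import subprocess

    facts = json.loads(subprocess.run(
        ["cephadm", "gather-facts"],
        capture_output=True, text=True, check=True).stdout)
    for key in ("hostname", "operating_system", "cpu_count",
                "memory_total_kb"):
        print(key, facts.get(key))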
Feb 25 13:52:48 compute-0 sudo[419812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:52:48 compute-0 sudo[419812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:48 compute-0 sudo[419812]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:48 compute-0 nova_compute[244014]: 2026-02-25 13:52:48.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:48 compute-0 sudo[419837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- inventory --format=json-pretty --filter-for-batch
Feb 25 13:52:48 compute-0 sudo[419837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.847180549 +0000 UTC m=+0.042700348 container create 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:52:48 compute-0 systemd[1]: Started libpod-conmon-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope.
Feb 25 13:52:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.915857861 +0000 UTC m=+0.111377680 container init 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.828585654 +0000 UTC m=+0.024105463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.925581106 +0000 UTC m=+0.121100875 container start 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.928517539 +0000 UTC m=+0.124037358 container attach 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 13:52:48 compute-0 elated_shirley[419892]: 167 167
Feb 25 13:52:48 compute-0 systemd[1]: libpod-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope: Deactivated successfully.
Feb 25 13:52:48 compute-0 conmon[419892]: conmon 0c170368ae4abf749a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope/container/memory.events
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.933346075 +0000 UTC m=+0.128865844 container died 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-01ef9fc6236db1d9890f774fd7a3fab50fd25f6bbc271b89575251500f4fd0a0-merged.mount: Deactivated successfully.
Feb 25 13:52:48 compute-0 podman[419876]: 2026-02-25 13:52:48.975714753 +0000 UTC m=+0.171234542 container remove 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:52:48 compute-0 systemd[1]: libpod-conmon-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope: Deactivated successfully.
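[annotation] The create -> init -> start -> attach -> died -> remove sequence above, with the single output line "167 167", is the footprint of one short-lived `podman run --rm`: cephadm probing which uid/gid owns the Ceph data directories inside the image (167 is the ceph user in the upstream containers). A hand-run equivalent under that assumption, with the image digest copied from the log:

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # stat the data dir inside the image; prints "167 167" (uid gid)
    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.strip())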
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.144905426 +0000 UTC m=+0.045754644 container create 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:52:49 compute-0 systemd[1]: Started libpod-conmon-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope.
Feb 25 13:52:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.125048205 +0000 UTC m=+0.025897413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.235807036 +0000 UTC m=+0.136656234 container init 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.244615915 +0000 UTC m=+0.145465103 container start 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.248023661 +0000 UTC m=+0.148872859 container attach 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: pgmap v3987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]: [
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:     {
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "available": false,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "being_replaced": false,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "ceph_device_lvm": false,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "lsm_data": {},
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "lvs": [],
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "path": "/dev/sr0",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "rejected_reasons": [
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "Insufficient space (<5GB)",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "Has a FileSystem"
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         ],
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         "sys_api": {
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "actuators": null,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "device_nodes": [
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:                 "sr0"
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             ],
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "devname": "sr0",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "human_readable_size": "482.00 KB",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "id_bus": "ata",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "model": "QEMU DVD-ROM",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "nr_requests": "2",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "parent": "/dev/sr0",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "partitions": {},
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "path": "/dev/sr0",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "removable": "1",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "rev": "2.5+",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "ro": "0",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "rotational": "1",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "sas_address": "",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "sas_device_handle": "",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "scheduler_mode": "mq-deadline",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "sectors": 0,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "sectorsize": "2048",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "size": 493568.0,
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "support_discard": "2048",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "type": "disk",
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:             "vendor": "QEMU"
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:         }
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]:     }
Feb 25 13:52:49 compute-0 elegant_leavitt[419932]: ]
Feb 25 13:52:49 compute-0 systemd[1]: libpod-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope: Deactivated successfully.
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.753533232 +0000 UTC m=+0.654382410 container died 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca-merged.mount: Deactivated successfully.
Feb 25 13:52:49 compute-0 podman[419916]: 2026-02-25 13:52:49.791774713 +0000 UTC m=+0.692623891 container remove 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:52:49 compute-0 systemd[1]: libpod-conmon-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope: Deactivated successfully.
Feb 25 13:52:49 compute-0 sudo[419837]: pam_unix(sudo:session): session closed for user root
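[annotation] The JSON emitted by elegant_leavitt is `ceph-volume inventory --format=json-pretty --filter-for-batch`: the only device reported is the QEMU DVD drive, and it is rejected both for size and for carrying a filesystem. Filtering that output down to a verdict per device is a short script; the sketch below re-runs the same cephadm invocation seen in the log (fsid copied from it) and reads the fields visible above:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
         "--", "inventory", "--format=json-pretty", "--filter-for-batch"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        verdict = ("usable" if dev["available"]
                   else "; ".join(dev["rejected_reasons"]))
        print(dev["path"], "->", verdict)
    # expected here: /dev/sr0 -> Insufficient space (<5GB); Has a FileSystem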
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:52:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:52:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:52:49 compute-0 sudo[420661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:52:49 compute-0 sudo[420661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:49 compute-0 sudo[420661]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:50 compute-0 sudo[420686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:52:50 compute-0 sudo[420686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.328927328 +0000 UTC m=+0.045501277 container create a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:52:50 compute-0 systemd[1]: Started libpod-conmon-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope.
Feb 25 13:52:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.310762695 +0000 UTC m=+0.027336724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.408490688 +0000 UTC m=+0.125064727 container init a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.417565584 +0000 UTC m=+0.134139523 container start a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.421113454 +0000 UTC m=+0.137687493 container attach a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:52:50 compute-0 distracted_leavitt[420740]: 167 167
Feb 25 13:52:50 compute-0 systemd[1]: libpod-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope: Deactivated successfully.
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.424344006 +0000 UTC m=+0.140917995 container died a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:52:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9c17d532b11a575727f20da86b918c1d9b17c5d802b7096e9fcc3d240e5f8d9-merged.mount: Deactivated successfully.
Feb 25 13:52:50 compute-0 podman[420724]: 2026-02-25 13:52:50.47648915 +0000 UTC m=+0.193063099 container remove a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:52:50 compute-0 systemd[1]: libpod-conmon-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope: Deactivated successfully.
Feb 25 13:52:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:50 compute-0 nova_compute[244014]: 2026-02-25 13:52:50.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:50 compute-0 podman[420765]: 2026-02-25 13:52:50.676537705 +0000 UTC m=+0.046963899 container create 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:52:50 compute-0 systemd[1]: Started libpod-conmon-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope.
Feb 25 13:52:50 compute-0 podman[420765]: 2026-02-25 13:52:50.651470086 +0000 UTC m=+0.021896300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:50 compute-0 podman[420765]: 2026-02-25 13:52:50.774558986 +0000 UTC m=+0.144985160 container init 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:50 compute-0 podman[420765]: 2026-02-25 13:52:50.788729246 +0000 UTC m=+0.159155420 container start 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:50 compute-0 podman[420765]: 2026-02-25 13:52:50.792353189 +0000 UTC m=+0.162779363 container attach 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:52:50 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:52:51 compute-0 strange_galileo[420782]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:52:51 compute-0 strange_galileo[420782]: --> All data devices are unavailable
Feb 25 13:52:51 compute-0 systemd[1]: libpod-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope: Deactivated successfully.
Feb 25 13:52:51 compute-0 podman[420765]: 2026-02-25 13:52:51.240991082 +0000 UTC m=+0.611417306 container died 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b-merged.mount: Deactivated successfully.
Feb 25 13:52:51 compute-0 podman[420765]: 2026-02-25 13:52:51.295106102 +0000 UTC m=+0.665532266 container remove 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:52:51 compute-0 systemd[1]: libpod-conmon-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope: Deactivated successfully.
Feb 25 13:52:51 compute-0 sudo[420686]: pam_unix(sudo:session): session closed for user root
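[annotation] The `lvm batch` attempt (strange_galileo) was handed three pre-built LVs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) and reported "passed data devices: 0 physical, 3 LVM" then "All data devices are unavailable": ceph-volume judged every LV ineligible, typically because they are already prepared OSDs or otherwise in use, which fits a cluster that is already serving 305 active+clean PGs. No OSDs are created, and cephadm next reconciles what exists via the `ceph-volume ... lvm list --format json` call that follows. A minimal parse of that listing, with the output shape (osd id -> list of LV records) assumed from ceph-volume's documented JSON format:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get("lv_path"),
                  lv.get("tags", {}).get("ceph.osd_fsid"))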
Feb 25 13:52:51 compute-0 sudo[420813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:52:51 compute-0 sudo[420813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:51 compute-0 sudo[420813]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:51 compute-0 sudo[420838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:52:51 compute-0 sudo[420838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.820324709 +0000 UTC m=+0.056848008 container create a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:52:51 compute-0 systemd[1]: Started libpod-conmon-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope.
Feb 25 13:52:51 compute-0 ceph-mon[76335]: pgmap v3988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:51 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.797509224 +0000 UTC m=+0.034032583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.899296961 +0000 UTC m=+0.135820280 container init a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.907294547 +0000 UTC m=+0.143817856 container start a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.911116525 +0000 UTC m=+0.147639874 container attach a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 13:52:51 compute-0 condescending_ramanujan[420892]: 167 167
Feb 25 13:52:51 compute-0 systemd[1]: libpod-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope: Deactivated successfully.
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.914251313 +0000 UTC m=+0.150774652 container died a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-89792a4a1a02b8fe218c01e30f0fb40a4f2c51c2f66c1ccbf78c958bc9c39dcf-merged.mount: Deactivated successfully.
Feb 25 13:52:51 compute-0 podman[420875]: 2026-02-25 13:52:51.955430007 +0000 UTC m=+0.191953336 container remove a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:52:51 compute-0 systemd[1]: libpod-conmon-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope: Deactivated successfully.
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.130286161 +0000 UTC m=+0.049544612 container create f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:52:52 compute-0 systemd[1]: Started libpod-conmon-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope.
Feb 25 13:52:52 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.106436326 +0000 UTC m=+0.025694777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.229451464 +0000 UTC m=+0.148709925 container init f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.244914851 +0000 UTC m=+0.164173282 container start f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.249262534 +0000 UTC m=+0.168520975 container attach f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:52:52 compute-0 crazy_golick[420933]: {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     "0": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "devices": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "/dev/loop3"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             ],
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_name": "ceph_lv0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_size": "21470642176",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "name": "ceph_lv0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "tags": {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_name": "ceph",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.crush_device_class": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.encrypted": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.objectstore": "bluestore",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_id": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.vdo": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.with_tpm": "0"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             },
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "vg_name": "ceph_vg0"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         }
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     ],
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     "1": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "devices": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "/dev/loop4"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             ],
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_name": "ceph_lv1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_size": "21470642176",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "name": "ceph_lv1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "tags": {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_name": "ceph",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.crush_device_class": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.encrypted": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.objectstore": "bluestore",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_id": "1",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.vdo": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.with_tpm": "0"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             },
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "vg_name": "ceph_vg1"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         }
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     ],
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     "2": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "devices": [
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "/dev/loop5"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             ],
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_name": "ceph_lv2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_size": "21470642176",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "name": "ceph_lv2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "tags": {
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.cluster_name": "ceph",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.crush_device_class": "",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.encrypted": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.objectstore": "bluestore",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osd_id": "2",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.vdo": "0",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:                 "ceph.with_tpm": "0"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             },
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "type": "block",
Feb 25 13:52:52 compute-0 crazy_golick[420933]:             "vg_name": "ceph_vg2"
Feb 25 13:52:52 compute-0 crazy_golick[420933]:         }
Feb 25 13:52:52 compute-0 crazy_golick[420933]:     ]
Feb 25 13:52:52 compute-0 crazy_golick[420933]: }
Feb 25 13:52:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:52 compute-0 systemd[1]: libpod-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope: Deactivated successfully.
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.557515198 +0000 UTC m=+0.476773659 container died f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 13:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087-merged.mount: Deactivated successfully.
Feb 25 13:52:52 compute-0 podman[420916]: 2026-02-25 13:52:52.603861158 +0000 UTC m=+0.523119579 container remove f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:52:52 compute-0 systemd[1]: libpod-conmon-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope: Deactivated successfully.
Feb 25 13:52:52 compute-0 sudo[420838]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:52 compute-0 sudo[420954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:52:52 compute-0 sudo[420954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:52 compute-0 sudo[420954]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:52 compute-0 sudo[420979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:52:52 compute-0 sudo[420979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:52 compute-0 ceph-mon[76335]: pgmap v3989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.157645743 +0000 UTC m=+0.057227349 container create 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:52:53 compute-0 systemd[1]: Started libpod-conmon-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope.
Feb 25 13:52:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.137141173 +0000 UTC m=+0.036722819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.244279042 +0000 UTC m=+0.143860648 container init 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.252672269 +0000 UTC m=+0.152253875 container start 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.256734844 +0000 UTC m=+0.156316480 container attach 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:52:53 compute-0 dazzling_wozniak[421033]: 167 167
Feb 25 13:52:53 compute-0 systemd[1]: libpod-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope: Deactivated successfully.
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.258544485 +0000 UTC m=+0.158126121 container died 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:52:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b51d7a8a18b1f3ba7cde7455cb1383c2ca8673d693f140716ec43a9ae0c0c685-merged.mount: Deactivated successfully.
Feb 25 13:52:53 compute-0 podman[421016]: 2026-02-25 13:52:53.301950682 +0000 UTC m=+0.201532318 container remove 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:52:53 compute-0 systemd[1]: libpod-conmon-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope: Deactivated successfully.
Feb 25 13:52:53 compute-0 podman[421059]: 2026-02-25 13:52:53.502492122 +0000 UTC m=+0.059336699 container create 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:52:53 compute-0 systemd[1]: Started libpod-conmon-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope.
Feb 25 13:52:53 compute-0 nova_compute[244014]: 2026-02-25 13:52:53.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:53 compute-0 podman[421059]: 2026-02-25 13:52:53.481759135 +0000 UTC m=+0.038603752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:52:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:52:53 compute-0 podman[421059]: 2026-02-25 13:52:53.608619952 +0000 UTC m=+0.165464619 container init 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:52:53 compute-0 podman[421059]: 2026-02-25 13:52:53.624441259 +0000 UTC m=+0.181285856 container start 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:52:53 compute-0 podman[421059]: 2026-02-25 13:52:53.63050675 +0000 UTC m=+0.187351357 container attach 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 13:52:54 compute-0 lvm[421154]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:52:54 compute-0 lvm[421155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:52:54 compute-0 lvm[421155]: VG ceph_vg1 finished
Feb 25 13:52:54 compute-0 lvm[421154]: VG ceph_vg0 finished
Feb 25 13:52:54 compute-0 lvm[421157]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:52:54 compute-0 lvm[421157]: VG ceph_vg2 finished
Feb 25 13:52:54 compute-0 nervous_gauss[421076]: {}
Feb 25 13:52:54 compute-0 systemd[1]: libpod-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Deactivated successfully.
Feb 25 13:52:54 compute-0 podman[421059]: 2026-02-25 13:52:54.403137562 +0000 UTC m=+0.959982169 container died 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:52:54 compute-0 systemd[1]: libpod-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Consumed 1.228s CPU time.
Feb 25 13:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771-merged.mount: Deactivated successfully.
Feb 25 13:52:54 compute-0 podman[421059]: 2026-02-25 13:52:54.452092556 +0000 UTC m=+1.008937153 container remove 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:52:54 compute-0 systemd[1]: libpod-conmon-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Deactivated successfully.
Feb 25 13:52:54 compute-0 sudo[420979]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:52:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:52:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:54 compute-0 sudo[421173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:52:54 compute-0 sudo[421173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:52:54 compute-0 sudo[421173]: pam_unix(sudo:session): session closed for user root
Feb 25 13:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:52:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:52:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:52:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:55 compute-0 ceph-mon[76335]: pgmap v3990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:52:55 compute-0 nova_compute[244014]: 2026-02-25 13:52:55.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:57 compute-0 ceph-mon[76335]: pgmap v3991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:52:58 compute-0 nova_compute[244014]: 2026-02-25 13:52:58.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:52:59 compute-0 ceph-mon[76335]: pgmap v3992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:00 compute-0 nova_compute[244014]: 2026-02-25 13:53:00.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:01 compute-0 ceph-mon[76335]: pgmap v3993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:03 compute-0 nova_compute[244014]: 2026-02-25 13:53:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:03 compute-0 ceph-mon[76335]: pgmap v3994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:05 compute-0 ceph-mon[76335]: pgmap v3995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:05 compute-0 nova_compute[244014]: 2026-02-25 13:53:05.600 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:07 compute-0 ceph-mon[76335]: pgmap v3996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:08 compute-0 nova_compute[244014]: 2026-02-25 13:53:08.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:08 compute-0 nova_compute[244014]: 2026-02-25 13:53:08.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:09 compute-0 ceph-mon[76335]: pgmap v3997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:10 compute-0 nova_compute[244014]: 2026-02-25 13:53:10.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:11 compute-0 ceph-mon[76335]: pgmap v3998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:13 compute-0 ceph-mon[76335]: pgmap v3999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:13 compute-0 nova_compute[244014]: 2026-02-25 13:53:13.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:15 compute-0 nova_compute[244014]: 2026-02-25 13:53:15.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:15 compute-0 ceph-mon[76335]: pgmap v4000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:17 compute-0 ceph-mon[76335]: pgmap v4001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:18 compute-0 nova_compute[244014]: 2026-02-25 13:53:18.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:18 compute-0 podman[421199]: 2026-02-25 13:53:18.748515217 +0000 UTC m=+0.081444373 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:53:18 compute-0 podman[421200]: 2026-02-25 13:53:18.779616587 +0000 UTC m=+0.111304918 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Feb 25 13:53:19 compute-0 ceph-mon[76335]: pgmap v4002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:20 compute-0 nova_compute[244014]: 2026-02-25 13:53:20.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:20 compute-0 nova_compute[244014]: 2026-02-25 13:53:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:21 compute-0 ceph-mon[76335]: pgmap v4003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:23 compute-0 ceph-mon[76335]: pgmap v4004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:23 compute-0 nova_compute[244014]: 2026-02-25 13:53:23.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:25 compute-0 nova_compute[244014]: 2026-02-25 13:53:25.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:25 compute-0 ceph-mon[76335]: pgmap v4005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:53:26 compute-0 nova_compute[244014]: 2026-02-25 13:53:26.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:53:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:53:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132084412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.672 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.673 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3533MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.674 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.674 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:53:27 compute-0 ceph-mon[76335]: pgmap v4006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4132084412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:53:27 compute-0 nova_compute[244014]: 2026-02-25 13:53:27.753 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:53:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:53:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840975524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.302 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.310 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.332 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.335 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.335 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:53:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1840975524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:53:28 compute-0 nova_compute[244014]: 2026-02-25 13:53:28.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:29 compute-0 ceph-mon[76335]: pgmap v4007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.337 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.338 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.338 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.395 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:53:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.610 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:30 compute-0 nova_compute[244014]: 2026-02-25 13:53:30.930 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:53:31
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control']
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:53:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:53:31 compute-0 ceph-mon[76335]: pgmap v4008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:53:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:53:33 compute-0 nova_compute[244014]: 2026-02-25 13:53:33.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:33 compute-0 ceph-mon[76335]: pgmap v4009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:35 compute-0 nova_compute[244014]: 2026-02-25 13:53:35.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:35 compute-0 ceph-mon[76335]: pgmap v4010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:35 compute-0 nova_compute[244014]: 2026-02-25 13:53:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:35 compute-0 nova_compute[244014]: 2026-02-25 13:53:35.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:53:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:37 compute-0 ceph-mon[76335]: pgmap v4011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:38 compute-0 nova_compute[244014]: 2026-02-25 13:53:38.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:39 compute-0 ceph-mon[76335]: pgmap v4012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:40 compute-0 nova_compute[244014]: 2026-02-25 13:53:40.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:40 compute-0 nova_compute[244014]: 2026-02-25 13:53:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:41 compute-0 ceph-mon[76335]: pgmap v4013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:43 compute-0 nova_compute[244014]: 2026-02-25 13:53:43.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:43 compute-0 ceph-mon[76335]: pgmap v4014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:53:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:53:43 compute-0 nova_compute[244014]: 2026-02-25 13:53:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:43 compute-0 nova_compute[244014]: 2026-02-25 13:53:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:45 compute-0 nova_compute[244014]: 2026-02-25 13:53:45.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:45 compute-0 ceph-mon[76335]: pgmap v4015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:53:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:53:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:53:47 compute-0 ceph-mon[76335]: pgmap v4016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:53:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:53:47 compute-0 nova_compute[244014]: 2026-02-25 13:53:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:53:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:48 compute-0 nova_compute[244014]: 2026-02-25 13:53:48.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:49 compute-0 podman[421286]: 2026-02-25 13:53:49.772146955 +0000 UTC m=+0.104908346 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 13:53:49 compute-0 ceph-mon[76335]: pgmap v4017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:49 compute-0 podman[421287]: 2026-02-25 13:53:49.814012299 +0000 UTC m=+0.142813278 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:53:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:50 compute-0 nova_compute[244014]: 2026-02-25 13:53:50.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:51 compute-0 ceph-mon[76335]: pgmap v4018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:53 compute-0 nova_compute[244014]: 2026-02-25 13:53:53.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:53 compute-0 ceph-mon[76335]: pgmap v4019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:54 compute-0 sudo[421330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:53:54 compute-0 sudo[421330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:54 compute-0 sudo[421330]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:54 compute-0 sudo[421355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:53:54 compute-0 sudo[421355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:53:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:53:55 compute-0 sudo[421355]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:53:55 compute-0 sudo[421410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:53:55 compute-0 sudo[421410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:55 compute-0 sudo[421410]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:55 compute-0 sudo[421435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:53:55 compute-0 sudo[421435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:55 compute-0 nova_compute[244014]: 2026-02-25 13:53:55.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.804580966 +0000 UTC m=+0.066593804 container create 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 13:53:55 compute-0 ceph-mon[76335]: pgmap v4020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:53:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:53:55 compute-0 systemd[1]: Started libpod-conmon-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope.
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.785457115 +0000 UTC m=+0.047469963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.902406851 +0000 UTC m=+0.164419719 container init 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.911318603 +0000 UTC m=+0.173331431 container start 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.914617826 +0000 UTC m=+0.176630664 container attach 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:53:55 compute-0 vigilant_cartwright[421488]: 167 167
Feb 25 13:53:55 compute-0 systemd[1]: libpod-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope: Deactivated successfully.
Feb 25 13:53:55 compute-0 conmon[421488]: conmon 138fd61d27243cf9b9c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope/container/memory.events
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.920950145 +0000 UTC m=+0.182963013 container died 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 13:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9fda24b8bd42424b7ef678007043bc43c6db97a07ca4c4421dcaddc27908184-merged.mount: Deactivated successfully.
Feb 25 13:53:55 compute-0 podman[421472]: 2026-02-25 13:53:55.967212723 +0000 UTC m=+0.229225571 container remove 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:53:55 compute-0 systemd[1]: libpod-conmon-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope: Deactivated successfully.
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.099736939 +0000 UTC m=+0.045194708 container create df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:56 compute-0 systemd[1]: Started libpod-conmon-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope.
Feb 25 13:53:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.074919998 +0000 UTC m=+0.020377747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.187200112 +0000 UTC m=+0.132657861 container init df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.196502665 +0000 UTC m=+0.141960434 container start df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.201462665 +0000 UTC m=+0.146920414 container attach df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 13:53:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:56 compute-0 confident_ptolemy[421526]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:53:56 compute-0 confident_ptolemy[421526]: --> All data devices are unavailable
Feb 25 13:53:56 compute-0 systemd[1]: libpod-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope: Deactivated successfully.
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.676083411 +0000 UTC m=+0.621541140 container died df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b-merged.mount: Deactivated successfully.
Feb 25 13:53:56 compute-0 podman[421510]: 2026-02-25 13:53:56.727893006 +0000 UTC m=+0.673350725 container remove df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 13:53:56 compute-0 systemd[1]: libpod-conmon-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope: Deactivated successfully.
Feb 25 13:53:56 compute-0 sudo[421435]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:56 compute-0 sudo[421558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:53:56 compute-0 sudo[421558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:56 compute-0 sudo[421558]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:56 compute-0 sudo[421583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:53:56 compute-0 sudo[421583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.165440975 +0000 UTC m=+0.049311585 container create 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb 25 13:53:57 compute-0 systemd[1]: Started libpod-conmon-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope.
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.142004892 +0000 UTC m=+0.025875442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.25370841 +0000 UTC m=+0.137578930 container init 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.263408764 +0000 UTC m=+0.147279224 container start 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.267008786 +0000 UTC m=+0.150879306 container attach 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:53:57 compute-0 modest_antonelli[421636]: 167 167
Feb 25 13:53:57 compute-0 systemd[1]: libpod-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope: Deactivated successfully.
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.269810295 +0000 UTC m=+0.153680805 container died 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4f254375e483663939aa0d744d7b470c9e62b660bb6fd24fa8f54ac9fb96d70-merged.mount: Deactivated successfully.
Feb 25 13:53:57 compute-0 podman[421620]: 2026-02-25 13:53:57.310070273 +0000 UTC m=+0.193940743 container remove 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:53:57 compute-0 systemd[1]: libpod-conmon-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope: Deactivated successfully.
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.499252151 +0000 UTC m=+0.060715787 container create 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:53:57 compute-0 systemd[1]: Started libpod-conmon-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope.
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.471403314 +0000 UTC m=+0.032867010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.614961852 +0000 UTC m=+0.176425528 container init 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.627187678 +0000 UTC m=+0.188651324 container start 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.631620813 +0000 UTC m=+0.193084519 container attach 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:53:57 compute-0 ceph-mon[76335]: pgmap v4021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:57 compute-0 sad_borg[421677]: {
Feb 25 13:53:57 compute-0 sad_borg[421677]:     "0": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:         {
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "devices": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "/dev/loop3"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             ],
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_name": "ceph_lv0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_size": "21470642176",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "name": "ceph_lv0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "tags": {
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_name": "ceph",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.crush_device_class": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.encrypted": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.objectstore": "bluestore",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_id": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.vdo": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.with_tpm": "0"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             },
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "vg_name": "ceph_vg0"
Feb 25 13:53:57 compute-0 sad_borg[421677]:         }
Feb 25 13:53:57 compute-0 sad_borg[421677]:     ],
Feb 25 13:53:57 compute-0 sad_borg[421677]:     "1": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:         {
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "devices": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "/dev/loop4"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             ],
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_name": "ceph_lv1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_size": "21470642176",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "name": "ceph_lv1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "tags": {
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_name": "ceph",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.crush_device_class": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.encrypted": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.objectstore": "bluestore",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_id": "1",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.vdo": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.with_tpm": "0"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             },
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "vg_name": "ceph_vg1"
Feb 25 13:53:57 compute-0 sad_borg[421677]:         }
Feb 25 13:53:57 compute-0 sad_borg[421677]:     ],
Feb 25 13:53:57 compute-0 sad_borg[421677]:     "2": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:         {
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "devices": [
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "/dev/loop5"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             ],
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_name": "ceph_lv2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_size": "21470642176",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "name": "ceph_lv2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "tags": {
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.cluster_name": "ceph",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.crush_device_class": "",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.encrypted": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.objectstore": "bluestore",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osd_id": "2",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.vdo": "0",
Feb 25 13:53:57 compute-0 sad_borg[421677]:                 "ceph.with_tpm": "0"
Feb 25 13:53:57 compute-0 sad_borg[421677]:             },
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "type": "block",
Feb 25 13:53:57 compute-0 sad_borg[421677]:             "vg_name": "ceph_vg2"
Feb 25 13:53:57 compute-0 sad_borg[421677]:         }
Feb 25 13:53:57 compute-0 sad_borg[421677]:     ]
Feb 25 13:53:57 compute-0 sad_borg[421677]: }
Feb 25 13:53:57 compute-0 systemd[1]: libpod-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope: Deactivated successfully.
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.934742902 +0000 UTC m=+0.496206548 container died 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 13:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de-merged.mount: Deactivated successfully.
Feb 25 13:53:57 compute-0 podman[421660]: 2026-02-25 13:53:57.989095229 +0000 UTC m=+0.550558825 container remove 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:53:58 compute-0 systemd[1]: libpod-conmon-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope: Deactivated successfully.
Feb 25 13:53:58 compute-0 sudo[421583]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:58 compute-0 sudo[421699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:53:58 compute-0 sudo[421699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:58 compute-0 sudo[421699]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:58 compute-0 sudo[421724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:53:58 compute-0 sudo[421724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.434980784 +0000 UTC m=+0.048508943 container create 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:53:58 compute-0 systemd[1]: Started libpod-conmon-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope.
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.407533948 +0000 UTC m=+0.021062177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.521725496 +0000 UTC m=+0.135253675 container init 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.531288026 +0000 UTC m=+0.144816175 container start 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:53:58 compute-0 inspiring_brattain[421777]: 167 167
Feb 25 13:53:58 compute-0 systemd[1]: libpod-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope: Deactivated successfully.
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.537171292 +0000 UTC m=+0.150699441 container attach 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.538422608 +0000 UTC m=+0.151950827 container died 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d65264b54993cb8cae763cba8784890ffc761df8b65d462f11fd8ad8b2f0c26-merged.mount: Deactivated successfully.
Feb 25 13:53:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:58 compute-0 podman[421761]: 2026-02-25 13:53:58.586109356 +0000 UTC m=+0.199637545 container remove 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:53:58 compute-0 systemd[1]: libpod-conmon-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope: Deactivated successfully.
Feb 25 13:53:58 compute-0 podman[421802]: 2026-02-25 13:53:58.726357141 +0000 UTC m=+0.042870043 container create 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:53:58 compute-0 nova_compute[244014]: 2026-02-25 13:53:58.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:53:58 compute-0 systemd[1]: Started libpod-conmon-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope.
Feb 25 13:53:58 compute-0 podman[421802]: 2026-02-25 13:53:58.709500874 +0000 UTC m=+0.026013766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:53:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:53:58 compute-0 podman[421802]: 2026-02-25 13:53:58.845862099 +0000 UTC m=+0.162375001 container init 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:53:58 compute-0 podman[421802]: 2026-02-25 13:53:58.855167462 +0000 UTC m=+0.171680364 container start 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:53:58 compute-0 podman[421802]: 2026-02-25 13:53:58.859765442 +0000 UTC m=+0.176278354 container attach 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:59 compute-0 lvm[421896]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:53:59 compute-0 lvm[421896]: VG ceph_vg0 finished
Feb 25 13:53:59 compute-0 lvm[421899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:53:59 compute-0 lvm[421899]: VG ceph_vg2 finished
Feb 25 13:53:59 compute-0 lvm[421898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:53:59 compute-0 lvm[421898]: VG ceph_vg1 finished
Feb 25 13:53:59 compute-0 pensive_golick[421818]: {}
Feb 25 13:53:59 compute-0 systemd[1]: libpod-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Deactivated successfully.
Feb 25 13:53:59 compute-0 podman[421802]: 2026-02-25 13:53:59.643493597 +0000 UTC m=+0.960006469 container died 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:53:59 compute-0 systemd[1]: libpod-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Consumed 1.155s CPU time.
Feb 25 13:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7-merged.mount: Deactivated successfully.
Feb 25 13:53:59 compute-0 podman[421802]: 2026-02-25 13:53:59.694355295 +0000 UTC m=+1.010868167 container remove 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:53:59 compute-0 systemd[1]: libpod-conmon-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Deactivated successfully.
Feb 25 13:53:59 compute-0 sudo[421724]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:53:59 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:53:59 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.772331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639772380, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2055, "num_deletes": 251, "total_data_size": 3595324, "memory_usage": 3643608, "flush_reason": "Manual Compaction"}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639785874, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3506195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81733, "largest_seqno": 83787, "table_properties": {"data_size": 3496702, "index_size": 6050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18674, "raw_average_key_size": 20, "raw_value_size": 3478028, "raw_average_value_size": 3739, "num_data_blocks": 269, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027410, "oldest_key_time": 1772027410, "file_creation_time": 1772027639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 13608 microseconds, and 5792 cpu microseconds.
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.785938) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3506195 bytes OK
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.785963) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787902) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787933) EVENT_LOG_v1 {"time_micros": 1772027639787926, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3586712, prev total WAL file size 3586712, number of live WAL files 2.
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.788713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3424KB)], [197(10MB)]
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639788755, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 14076465, "oldest_snapshot_seqno": -1}
Feb 25 13:53:59 compute-0 sudo[421913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:53:59 compute-0 sudo[421913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:53:59 compute-0 sudo[421913]: pam_unix(sudo:session): session closed for user root
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 9925 keys, 12318997 bytes, temperature: kUnknown
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639857466, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 12318997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12256357, "index_size": 36735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 259956, "raw_average_key_size": 26, "raw_value_size": 12082653, "raw_average_value_size": 1217, "num_data_blocks": 1420, "num_entries": 9925, "num_filter_entries": 9925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.858477) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12318997 bytes
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.860757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.4 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10439, records dropped: 514 output_compression: NoCompression
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.860776) EVENT_LOG_v1 {"time_micros": 1772027639860767, "job": 124, "event": "compaction_finished", "compaction_time_micros": 69537, "compaction_time_cpu_micros": 24720, "output_level": 6, "num_output_files": 1, "total_output_size": 12318997, "num_input_records": 10439, "num_output_records": 9925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: pgmap v4022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:53:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639861568, "job": 124, "event": "table_file_deletion", "file_number": 199}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639862583, "job": 124, "event": "table_file_deletion", "file_number": 197}
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.788638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:53:59 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:00 compute-0 nova_compute[244014]: 2026-02-25 13:54:00.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:01 compute-0 ceph-mon[76335]: pgmap v4023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:03 compute-0 nova_compute[244014]: 2026-02-25 13:54:03.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:03 compute-0 ceph-mon[76335]: pgmap v4024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:05 compute-0 nova_compute[244014]: 2026-02-25 13:54:05.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:05 compute-0 ceph-mon[76335]: pgmap v4025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:07 compute-0 ceph-mon[76335]: pgmap v4026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:08 compute-0 nova_compute[244014]: 2026-02-25 13:54:08.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:09 compute-0 ceph-mon[76335]: pgmap v4027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:10 compute-0 nova_compute[244014]: 2026-02-25 13:54:10.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:10 compute-0 ceph-mon[76335]: pgmap v4028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:13 compute-0 ceph-mon[76335]: pgmap v4029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:13 compute-0 nova_compute[244014]: 2026-02-25 13:54:13.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:13 compute-0 nova_compute[244014]: 2026-02-25 13:54:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:15 compute-0 nova_compute[244014]: 2026-02-25 13:54:15.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:15 compute-0 ceph-mon[76335]: pgmap v4030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:17 compute-0 ceph-mon[76335]: pgmap v4031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:18 compute-0 nova_compute[244014]: 2026-02-25 13:54:18.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:19 compute-0 ceph-mon[76335]: pgmap v4032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.306244) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660306331, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 256, "total_data_size": 296456, "memory_usage": 305392, "flush_reason": "Manual Compaction"}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660311765, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 294050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83788, "largest_seqno": 84197, "table_properties": {"data_size": 291614, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5715, "raw_average_key_size": 17, "raw_value_size": 286806, "raw_average_value_size": 899, "num_data_blocks": 24, "num_entries": 319, "num_filter_entries": 319, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027639, "oldest_key_time": 1772027639, "file_creation_time": 1772027660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 5580 microseconds, and 2711 cpu microseconds.
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.311829) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 294050 bytes OK
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.311853) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313860) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313882) EVENT_LOG_v1 {"time_micros": 1772027660313875, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 293858, prev total WAL file size 293858, number of live WAL files 2.
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.314473) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303139' seq:0, type:0; will stop at (end)
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(287KB)], [200(11MB)]
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660314512, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 12613047, "oldest_snapshot_seqno": -1}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9725 keys, 12523444 bytes, temperature: kUnknown
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660409572, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 12523444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12461188, "index_size": 36863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 256735, "raw_average_key_size": 26, "raw_value_size": 12289997, "raw_average_value_size": 1263, "num_data_blocks": 1423, "num_entries": 9725, "num_filter_entries": 9725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.410091) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 12523444 bytes
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.411633) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 131.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(85.5) write-amplify(42.6) OK, records in: 10244, records dropped: 519 output_compression: NoCompression
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.411665) EVENT_LOG_v1 {"time_micros": 1772027660411649, "job": 126, "event": "compaction_finished", "compaction_time_micros": 95255, "compaction_time_cpu_micros": 40751, "output_level": 6, "num_output_files": 1, "total_output_size": 12523444, "num_input_records": 10244, "num_output_records": 9725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660411902, "job": 126, "event": "table_file_deletion", "file_number": 202}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660414059, "job": 126, "event": "table_file_deletion", "file_number": 200}
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.314371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:54:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:20 compute-0 nova_compute[244014]: 2026-02-25 13:54:20.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:20 compute-0 podman[421938]: 2026-02-25 13:54:20.729445134 +0000 UTC m=+0.063237648 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:54:20 compute-0 podman[421939]: 2026-02-25 13:54:20.771874614 +0000 UTC m=+0.102453298 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:54:21 compute-0 ceph-mon[76335]: pgmap v4033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:21 compute-0 nova_compute[244014]: 2026-02-25 13:54:21.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:23 compute-0 ceph-mon[76335]: pgmap v4034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:23 compute-0 nova_compute[244014]: 2026-02-25 13:54:23.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:25 compute-0 nova_compute[244014]: 2026-02-25 13:54:25.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:25 compute-0 ceph-mon[76335]: pgmap v4035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:54:26 compute-0 nova_compute[244014]: 2026-02-25 13:54:26.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:54:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:54:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432746270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.441 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.593 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.594 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.595 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.595 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:54:27 compute-0 ceph-mon[76335]: pgmap v4036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/432746270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:54:27 compute-0 nova_compute[244014]: 2026-02-25 13:54:27.682 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:54:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:54:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282885588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.239 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.244 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.261 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.263 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.263 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:54:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/282885588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:54:28 compute-0 nova_compute[244014]: 2026-02-25 13:54:28.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:29 compute-0 ceph-mon[76335]: pgmap v4037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:30 compute-0 nova_compute[244014]: 2026-02-25 13:54:30.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:31 compute-0 nova_compute[244014]: 2026-02-25 13:54:31.264 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:31 compute-0 nova_compute[244014]: 2026-02-25 13:54:31.265 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:54:31 compute-0 nova_compute[244014]: 2026-02-25 13:54:31.265 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:54:31
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:54:31 compute-0 nova_compute[244014]: 2026-02-25 13:54:31.281 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:54:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:54:31 compute-0 ceph-mon[76335]: pgmap v4038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:54:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:54:32 compute-0 nova_compute[244014]: 2026-02-25 13:54:32.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:33 compute-0 ceph-mon[76335]: pgmap v4039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:33 compute-0 nova_compute[244014]: 2026-02-25 13:54:33.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:34 compute-0 nova_compute[244014]: 2026-02-25 13:54:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:34 compute-0 nova_compute[244014]: 2026-02-25 13:54:34.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:54:34 compute-0 nova_compute[244014]: 2026-02-25 13:54:34.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:54:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:35 compute-0 nova_compute[244014]: 2026-02-25 13:54:35.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:35 compute-0 ceph-mon[76335]: pgmap v4040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:35 compute-0 nova_compute[244014]: 2026-02-25 13:54:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:35 compute-0 nova_compute[244014]: 2026-02-25 13:54:35.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:54:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:36 compute-0 nova_compute[244014]: 2026-02-25 13:54:36.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:36 compute-0 nova_compute[244014]: 2026-02-25 13:54:36.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:54:37 compute-0 ceph-mon[76335]: pgmap v4041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:38 compute-0 nova_compute[244014]: 2026-02-25 13:54:38.777 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:39 compute-0 ceph-mon[76335]: pgmap v4042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:40 compute-0 nova_compute[244014]: 2026-02-25 13:54:40.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:40 compute-0 nova_compute[244014]: 2026-02-25 13:54:40.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:41 compute-0 ceph-mon[76335]: pgmap v4043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:43 compute-0 sshd-session[422026]: Received disconnect from 45.148.10.157 port 50244:11:  [preauth]
Feb 25 13:54:43 compute-0 sshd-session[422026]: Disconnected from authenticating user root 45.148.10.157 port 50244 [preauth]
Feb 25 13:54:43 compute-0 ceph-mon[76335]: pgmap v4044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:43 compute-0 nova_compute[244014]: 2026-02-25 13:54:43.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:54:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:54:43 compute-0 nova_compute[244014]: 2026-02-25 13:54:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:44 compute-0 nova_compute[244014]: 2026-02-25 13:54:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:45 compute-0 nova_compute[244014]: 2026-02-25 13:54:45.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:45 compute-0 ceph-mon[76335]: pgmap v4045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:54:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:54:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:54:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:54:47 compute-0 ceph-mon[76335]: pgmap v4046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:54:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:54:47 compute-0 nova_compute[244014]: 2026-02-25 13:54:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:54:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:48 compute-0 nova_compute[244014]: 2026-02-25 13:54:48.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:49 compute-0 ceph-mon[76335]: pgmap v4047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:50 compute-0 nova_compute[244014]: 2026-02-25 13:54:50.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:51 compute-0 podman[422028]: 2026-02-25 13:54:51.724424933 +0000 UTC m=+0.065801789 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:54:51 compute-0 podman[422029]: 2026-02-25 13:54:51.74332413 +0000 UTC m=+0.086752044 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 13:54:51 compute-0 ceph-mon[76335]: pgmap v4048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:53 compute-0 ceph-mon[76335]: pgmap v4049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:53 compute-0 nova_compute[244014]: 2026-02-25 13:54:53.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:54:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:54:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:54:55 compute-0 nova_compute[244014]: 2026-02-25 13:54:55.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:55 compute-0 ceph-mon[76335]: pgmap v4050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:57 compute-0 ceph-mon[76335]: pgmap v4051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:58 compute-0 nova_compute[244014]: 2026-02-25 13:54:58.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:54:59 compute-0 ceph-mon[76335]: pgmap v4052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:54:59 compute-0 sudo[422075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:54:59 compute-0 sudo[422075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:54:59 compute-0 sudo[422075]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:00 compute-0 sudo[422100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 13:55:00 compute-0 sudo[422100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:00 compute-0 sudo[422100]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:55:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:55:00 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:00 compute-0 sudo[422146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:55:00 compute-0 sudo[422146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:00 compute-0 sudo[422146]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:00 compute-0 sudo[422171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:55:00 compute-0 sudo[422171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:00 compute-0 nova_compute[244014]: 2026-02-25 13:55:00.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:01 compute-0 sudo[422171]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:55:01 compute-0 sudo[422227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:55:01 compute-0 sudo[422227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:01 compute-0 sudo[422227]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:01 compute-0 sudo[422252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:55:01 compute-0 sudo[422252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:01 compute-0 ceph-mon[76335]: pgmap v4053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:55:01 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.570312442 +0000 UTC m=+0.047185060 container create 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:55:01 compute-0 systemd[1]: Started libpod-conmon-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope.
Feb 25 13:55:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.546366212 +0000 UTC m=+0.023238840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.651134497 +0000 UTC m=+0.128007125 container init 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.660391549 +0000 UTC m=+0.137264157 container start 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.664584759 +0000 UTC m=+0.141457337 container attach 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 13:55:01 compute-0 practical_booth[422305]: 167 167
Feb 25 13:55:01 compute-0 systemd[1]: libpod-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope: Deactivated successfully.
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.668122329 +0000 UTC m=+0.144994907 container died 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2a857dca0d19a3edb9b4b3fb10fb10fe511462b9dfd29d1593c42f0e59b776a-merged.mount: Deactivated successfully.
Feb 25 13:55:01 compute-0 podman[422289]: 2026-02-25 13:55:01.710805131 +0000 UTC m=+0.187677739 container remove 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:55:01 compute-0 systemd[1]: libpod-conmon-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope: Deactivated successfully.
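The short-lived practical_booth container above prints only "167 167" before exiting. That is consistent with cephadm probing the ceph uid/gid inside the image before running ceph-volume (167:167 is the ceph user and group on RHEL-family builds); the exact entrypoint cephadm used is not visible in the log. A minimal sketch of an equivalent probe, assuming podman access and the image digest shown in the log:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# One-shot probe in the style of the throwaway containers above. The exact
# entrypoint is not shown in the log, so `stat` here is an assumption; it
# reproduces the "167 167" (uid gid) line that practical_booth printed.
probe = subprocess.run(
    ["sudo", "podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
)
print(probe.stdout.strip())  # expected: "167 167"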
Feb 25 13:55:01 compute-0 podman[422327]: 2026-02-25 13:55:01.862323942 +0000 UTC m=+0.056677700 container create b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:55:01 compute-0 systemd[1]: Started libpod-conmon-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope.
Feb 25 13:55:01 compute-0 podman[422327]: 2026-02-25 13:55:01.836652383 +0000 UTC m=+0.031006211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:01 compute-0 podman[422327]: 2026-02-25 13:55:01.974390534 +0000 UTC m=+0.168744352 container init b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 13:55:01 compute-0 podman[422327]: 2026-02-25 13:55:01.988145174 +0000 UTC m=+0.182498942 container start b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:55:01 compute-0 podman[422327]: 2026-02-25 13:55:01.992485647 +0000 UTC m=+0.186839415 container attach b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:55:02 compute-0 affectionate_lichterman[422344]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:55:02 compute-0 affectionate_lichterman[422344]: --> All data devices are unavailable
Feb 25 13:55:02 compute-0 systemd[1]: libpod-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope: Deactivated successfully.
Feb 25 13:55:02 compute-0 podman[422327]: 2026-02-25 13:55:02.437651906 +0000 UTC m=+0.632005644 container died b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b-merged.mount: Deactivated successfully.
Feb 25 13:55:02 compute-0 podman[422327]: 2026-02-25 13:55:02.487073789 +0000 UTC m=+0.681427527 container remove b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 13:55:02 compute-0 systemd[1]: libpod-conmon-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope: Deactivated successfully.
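The affectionate_lichterman run above is cephadm's OSD drive-group evaluation: ceph-volume reports "passed data devices: 0 physical, 3 LVM" and "All data devices are unavailable", meaning the three LVM-backed devices are already consumed by existing OSDs, so no new OSDs get created. A sketch of how to confirm that from the host, assuming `cephadm` is on PATH (the log calls a versioned copy under /var/lib/ceph/<fsid>/ instead) and the standard ceph-volume inventory JSON fields "path", "available" and "rejected_reasons":

import json
import subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"  # cluster fsid from the log

# Mirrors the containerized ceph-volume calls above; cephadm may interleave
# its own progress messages on stderr, which capture_output keeps separate.
out = subprocess.run(
    ["sudo", "cephadm", "ceph-volume", "--fsid", FSID, "--",
     "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for dev in json.loads(out):
    if not dev.get("available", False):
        print(dev["path"], "unavailable:",
              "; ".join(dev.get("rejected_reasons", [])))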
Feb 25 13:55:02 compute-0 sudo[422252]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:02 compute-0 sudo[422375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:55:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:02 compute-0 sudo[422375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:02 compute-0 sudo[422375]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:02 compute-0 sudo[422400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:55:02 compute-0 sudo[422400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.028395367 +0000 UTC m=+0.044735171 container create bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:55:03 compute-0 systemd[1]: Started libpod-conmon-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope.
Feb 25 13:55:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.013651358 +0000 UTC m=+0.029991132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.118162885 +0000 UTC m=+0.134502679 container init bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.127594123 +0000 UTC m=+0.143933927 container start bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.131594756 +0000 UTC m=+0.147934550 container attach bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:55:03 compute-0 cranky_albattani[422454]: 167 167
Feb 25 13:55:03 compute-0 systemd[1]: libpod-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope: Deactivated successfully.
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.133213032 +0000 UTC m=+0.149552836 container died bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-143073bbc8fcef45fabea6976b3a57f91af66fe5d92b57c91d68cf9883264051-merged.mount: Deactivated successfully.
Feb 25 13:55:03 compute-0 podman[422438]: 2026-02-25 13:55:03.182375788 +0000 UTC m=+0.198715552 container remove bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:55:03 compute-0 systemd[1]: libpod-conmon-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope: Deactivated successfully.
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.377670942 +0000 UTC m=+0.057682778 container create 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 13:55:03 compute-0 systemd[1]: Started libpod-conmon-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope.
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.351024996 +0000 UTC m=+0.031036892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.475598263 +0000 UTC m=+0.155610149 container init 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.491222616 +0000 UTC m=+0.171234482 container start 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.497417022 +0000 UTC m=+0.177428848 container attach 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:55:03 compute-0 ceph-mon[76335]: pgmap v4054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:03 compute-0 crazy_tesla[422493]: {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     "0": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "devices": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "/dev/loop3"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             ],
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_name": "ceph_lv0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_size": "21470642176",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "name": "ceph_lv0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "tags": {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_name": "ceph",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.crush_device_class": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.encrypted": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.objectstore": "bluestore",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_id": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.vdo": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.with_tpm": "0"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             },
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "vg_name": "ceph_vg0"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         }
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     ],
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     "1": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "devices": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "/dev/loop4"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             ],
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_name": "ceph_lv1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_size": "21470642176",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "name": "ceph_lv1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "tags": {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_name": "ceph",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.crush_device_class": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.encrypted": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.objectstore": "bluestore",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_id": "1",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.vdo": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.with_tpm": "0"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             },
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "vg_name": "ceph_vg1"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         }
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     ],
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     "2": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "devices": [
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "/dev/loop5"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             ],
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_name": "ceph_lv2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_size": "21470642176",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "name": "ceph_lv2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "tags": {
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.cluster_name": "ceph",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.crush_device_class": "",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.encrypted": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.objectstore": "bluestore",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osd_id": "2",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.vdo": "0",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:                 "ceph.with_tpm": "0"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             },
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "type": "block",
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:             "vg_name": "ceph_vg2"
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:         }
Feb 25 13:55:03 compute-0 crazy_tesla[422493]:     ]
Feb 25 13:55:03 compute-0 crazy_tesla[422493]: }
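The crazy_tesla JSON above is the result of the `ceph-volume lvm list --format json` call dispatched at 13:55:02: a map from OSD id to its logical volumes, with the authoritative metadata carried in the lv_tags. A minimal sketch that reduces that structure to one line per OSD; the file name is a stand-in for the command's captured stdout:

import json

# Stand-in path; in practice, capture the stdout of the `lvm list` call above.
with open("lvm_list.json") as f:
    lvm = json.load(f)

for osd_id in sorted(lvm, key=int):
    for vol in lvm[osd_id]:
        tags = vol.get("tags", {})
        print(f"osd.{osd_id}: lv={vol['lv_path']} "
              f"devices={','.join(vol['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

Against the listing above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, each backed by a 21470642176-byte bluestore LV.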
Feb 25 13:55:03 compute-0 systemd[1]: libpod-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope: Deactivated successfully.
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.787549309 +0000 UTC m=+0.467561095 container died 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad-merged.mount: Deactivated successfully.
Feb 25 13:55:03 compute-0 podman[422477]: 2026-02-25 13:55:03.833258357 +0000 UTC m=+0.513270153 container remove 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:55:03 compute-0 systemd[1]: libpod-conmon-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope: Deactivated successfully.
Feb 25 13:55:03 compute-0 nova_compute[244014]: 2026-02-25 13:55:03.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:03 compute-0 sudo[422400]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:03 compute-0 sudo[422514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:55:03 compute-0 sudo[422514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:03 compute-0 sudo[422514]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:04 compute-0 sudo[422539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:55:04 compute-0 sudo[422539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.35649437 +0000 UTC m=+0.056994459 container create 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 13:55:04 compute-0 systemd[1]: Started libpod-conmon-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope.
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.330680017 +0000 UTC m=+0.031179946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.450569901 +0000 UTC m=+0.151069790 container init 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.458885097 +0000 UTC m=+0.159384986 container start 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.462989543 +0000 UTC m=+0.163489432 container attach 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 13:55:04 compute-0 competent_satoshi[422592]: 167 167
Feb 25 13:55:04 compute-0 systemd[1]: libpod-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope: Deactivated successfully.
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.464323431 +0000 UTC m=+0.164823310 container died 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-312c4613423a61d653fab0fe4575e7a87a8e14167afe487c62af9468f5192378-merged.mount: Deactivated successfully.
Feb 25 13:55:04 compute-0 podman[422576]: 2026-02-25 13:55:04.50794293 +0000 UTC m=+0.208442789 container remove 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:55:04 compute-0 systemd[1]: libpod-conmon-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope: Deactivated successfully.
Feb 25 13:55:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:04 compute-0 podman[422618]: 2026-02-25 13:55:04.698358165 +0000 UTC m=+0.064401529 container create 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:55:04 compute-0 systemd[1]: Started libpod-conmon-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope.
Feb 25 13:55:04 compute-0 podman[422618]: 2026-02-25 13:55:04.671072911 +0000 UTC m=+0.037116345 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:55:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:55:04 compute-0 podman[422618]: 2026-02-25 13:55:04.798948021 +0000 UTC m=+0.164991495 container init 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:55:04 compute-0 podman[422618]: 2026-02-25 13:55:04.812560818 +0000 UTC m=+0.178604192 container start 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 13:55:04 compute-0 podman[422618]: 2026-02-25 13:55:04.816763557 +0000 UTC m=+0.182807041 container attach 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 13:55:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:05 compute-0 lvm[422714]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:55:05 compute-0 lvm[422714]: VG ceph_vg1 finished
Feb 25 13:55:05 compute-0 lvm[422713]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:55:05 compute-0 lvm[422713]: VG ceph_vg0 finished
Feb 25 13:55:05 compute-0 lvm[422716]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:55:05 compute-0 lvm[422716]: VG ceph_vg2 finished
Feb 25 13:55:05 compute-0 nova_compute[244014]: 2026-02-25 13:55:05.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:05 compute-0 ceph-mon[76335]: pgmap v4055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:05 compute-0 focused_brown[422634]: {}
Feb 25 13:55:05 compute-0 systemd[1]: libpod-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Deactivated successfully.
Feb 25 13:55:05 compute-0 systemd[1]: libpod-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Consumed 1.364s CPU time.
Feb 25 13:55:05 compute-0 podman[422618]: 2026-02-25 13:55:05.743376603 +0000 UTC m=+1.109420007 container died 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4-merged.mount: Deactivated successfully.
Feb 25 13:55:05 compute-0 podman[422618]: 2026-02-25 13:55:05.803424488 +0000 UTC m=+1.169467902 container remove 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:55:05 compute-0 systemd[1]: libpod-conmon-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Deactivated successfully.
Feb 25 13:55:05 compute-0 sudo[422539]: pam_unix(sudo:session): session closed for user root
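For completeness: the focused_brown run queried `ceph-volume raw list` and printed `{}` (no raw-mode OSDs), while `lvm list` found three, which is consistent with every OSD on this host being LVM-managed. Both queries can be scripted the same way; a sketch, again assuming a `cephadm` on PATH rather than the versioned copy the log invokes:

import json
import subprocess

FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"

def ceph_volume(*args):
    # Same shape as the sudo invocations in the log (lvm list / raw list).
    out = subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "--fsid", FSID, "--", *args],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

lvm_osds = ceph_volume("lvm", "list", "--format", "json")
raw_osds = ceph_volume("raw", "list", "--format", "json")
print(f"lvm-managed OSDs: {sorted(lvm_osds, key=int)}; "
      f"raw-mode OSDs: {sorted(raw_osds)}")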
Feb 25 13:55:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:55:05 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:55:05 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:05 compute-0 sudo[422729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:55:05 compute-0 sudo[422729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:55:05 compute-0 sudo[422729]: pam_unix(sudo:session): session closed for user root
Feb 25 13:55:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:55:07 compute-0 ceph-mon[76335]: pgmap v4056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:08 compute-0 nova_compute[244014]: 2026-02-25 13:55:08.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:09 compute-0 ceph-mon[76335]: pgmap v4057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.319530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710319617, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 668, "num_deletes": 250, "total_data_size": 825200, "memory_usage": 837408, "flush_reason": "Manual Compaction"}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710325017, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 553653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84198, "largest_seqno": 84865, "table_properties": {"data_size": 550557, "index_size": 1003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8211, "raw_average_key_size": 20, "raw_value_size": 544102, "raw_average_value_size": 1367, "num_data_blocks": 45, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027661, "oldest_key_time": 1772027661, "file_creation_time": 1772027710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 5504 microseconds, and 2164 cpu microseconds.
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.325050) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 553653 bytes OK
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.325067) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326840) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326863) EVENT_LOG_v1 {"time_micros": 1772027710326856, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 821666, prev total WAL file size 821666, number of live WAL files 2.
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.327602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353033' seq:72057594037927935, type:22 .. '6D6772737461740033373534' seq:0, type:0; will stop at (end)
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(540KB)], [203(11MB)]
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710327745, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 13077097, "oldest_snapshot_seqno": -1}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 9630 keys, 9978578 bytes, temperature: kUnknown
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710399383, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 9978578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9921080, "index_size": 32297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24133, "raw_key_size": 254933, "raw_average_key_size": 26, "raw_value_size": 9755735, "raw_average_value_size": 1013, "num_data_blocks": 1235, "num_entries": 9630, "num_filter_entries": 9630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.399935) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 9978578 bytes
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.401775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(41.6) write-amplify(18.0) OK, records in: 10123, records dropped: 493 output_compression: NoCompression
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.401810) EVENT_LOG_v1 {"time_micros": 1772027710401794, "job": 128, "event": "compaction_finished", "compaction_time_micros": 71822, "compaction_time_cpu_micros": 23974, "output_level": 6, "num_output_files": 1, "total_output_size": 9978578, "num_input_records": 10123, "num_output_records": 9630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710402212, "job": 128, "event": "table_file_deletion", "file_number": 205}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710405253, "job": 128, "event": "table_file_deletion", "file_number": 203}
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.327435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:55:10 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
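The compaction summary for JOB 128 carries enough byte counts to reproduce its amplification figures: the logged values are consistent with read-write-amplify = (total input + output) / L0 input and write-amplify = output / L0 input. A quick check, using only numbers taken from the table_file_creation and compaction events above:

    # Byte counts from JOB 128 in the log above.
    l0_input = 553_653        # table #205, the freshly flushed L0 file
    total_input = 13_077_097  # "input_data_size" (L0 #205 + L6 #203)
    output = 9_978_578        # table #206, the compacted L6 output

    read_write_amplify = (total_input + output) / l0_input
    write_amplify = output / l0_input
    print(f"read-write-amplify {read_write_amplify:.1f}")  # -> 41.6, as logged
    print(f"write-amplify {write_amplify:.1f}")            # -> 18.0, as logged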
Feb 25 13:55:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:10 compute-0 nova_compute[244014]: 2026-02-25 13:55:10.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:11 compute-0 ceph-mon[76335]: pgmap v4058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:12 compute-0 nova_compute[244014]: 2026-02-25 13:55:12.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:13 compute-0 ceph-mon[76335]: pgmap v4059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:13 compute-0 nova_compute[244014]: 2026-02-25 13:55:13.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:15 compute-0 nova_compute[244014]: 2026-02-25 13:55:15.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:15 compute-0 ceph-mon[76335]: pgmap v4060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:17 compute-0 ceph-mon[76335]: pgmap v4061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:18 compute-0 nova_compute[244014]: 2026-02-25 13:55:18.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:19 compute-0 ceph-mon[76335]: pgmap v4062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:20 compute-0 nova_compute[244014]: 2026-02-25 13:55:20.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:21 compute-0 ceph-mon[76335]: pgmap v4063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:22 compute-0 podman[422754]: 2026-02-25 13:55:22.767645862 +0000 UTC m=+0.102107810 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 13:55:22 compute-0 podman[422755]: 2026-02-25 13:55:22.820677787 +0000 UTC m=+0.155555578 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
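Both health_status=healthy events come from podman executing the healthcheck configured in config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/<name>). A sketch of triggering and reading the same check by hand; this assumes podman's docker-compatible inspect layout ({{.State.Health.Status}}), which older podman releases expose as {{.State.Healthcheck.Status}} instead:

    import subprocess

    def health(container: str) -> str:
        """Run the container's configured healthcheck once, then read the status."""
        # Executes the configured test command ('/openstack/healthcheck' here);
        # exits non-zero when unhealthy, so don't raise on failure.
        subprocess.run(["podman", "healthcheck", "run", container], check=False)
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, health(name))  # e.g. "healthy", matching health_status= above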
Feb 25 13:55:23 compute-0 ceph-mon[76335]: pgmap v4064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:23 compute-0 nova_compute[244014]: 2026-02-25 13:55:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:23 compute-0 nova_compute[244014]: 2026-02-25 13:55:23.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:25 compute-0 nova_compute[244014]: 2026-02-25 13:55:25.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:25 compute-0 ceph-mon[76335]: pgmap v4065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:55:26 compute-0 nova_compute[244014]: 2026-02-25 13:55:26.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:55:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:55:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806105915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.496 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
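The Running cmd / returned: 0 pair above is nova's resource audit measuring RBD pool capacity by shelling out to the ceph CLI rather than binding librados. The same probe reduced to a sketch; the command line is copied verbatim from the log, and the JSON field names are the usual `ceph df --format=json` layout, worth verifying against your Ceph release:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)

    stats = df["stats"]  # cluster-wide totals, in bytes
    print("total", stats["total_bytes"])
    print("avail", stats["total_avail_bytes"])
    # Per-pool usage lives under df["pools"], one entry per pool.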
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.698 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:55:27 compute-0 nova_compute[244014]: 2026-02-25 13:55:27.785 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:55:27 compute-0 ceph-mon[76335]: pgmap v4066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2806105915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:55:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:55:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762191624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.325 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.331 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.351 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.354 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
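The inventory dict logged at 13:55:28 is what placement uses to size this node: for each resource class, schedulable capacity works out to (total - reserved) * allocation_ratio, which is how the placement service evaluates allocations against inventory. Worked out from the logged values for provider cb4dae98-2ac3-4218-9445-2320139e12ad:

    # Inventory exactly as logged above (irrelevant fields omitted).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2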
Feb 25 13:55:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3762191624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:55:28 compute-0 nova_compute[244014]: 2026-02-25 13:55:28.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:29 compute-0 ceph-mon[76335]: pgmap v4067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:30 compute-0 nova_compute[244014]: 2026-02-25 13:55:30.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:55:31
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
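This balancer pass ran in upmap mode with a 5% misplaced ceiling and staged nothing ("prepared 0/10 upmap changes"; the 10 matches the module's default cap on upmap optimizations per round, since the PGs are already balanced). A sketch of inspecting the same state from the CLI with the standard balancer command; the JSON field names are an assumption and may vary by release, hence the defensive .get():

    import json
    import subprocess

    def ceph_json(*args: str):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    status = ceph_json("balancer", "status")
    print(status.get("mode"))    # expected: "upmap", as in the log
    print(status.get("active"))  # whether the mgr module is running plans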
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:55:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:55:31 compute-0 ceph-mon[76335]: pgmap v4068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:55:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:33 compute-0 nova_compute[244014]: 2026-02-25 13:55:33.356 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:33 compute-0 nova_compute[244014]: 2026-02-25 13:55:33.356 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:55:33 compute-0 nova_compute[244014]: 2026-02-25 13:55:33.357 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:55:33 compute-0 nova_compute[244014]: 2026-02-25 13:55:33.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:55:33 compute-0 ceph-mon[76335]: pgmap v4069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:33 compute-0 nova_compute[244014]: 2026-02-25 13:55:33.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:35 compute-0 nova_compute[244014]: 2026-02-25 13:55:35.016 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:35 compute-0 nova_compute[244014]: 2026-02-25 13:55:35.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:35 compute-0 ceph-mon[76335]: pgmap v4070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:37 compute-0 ceph-mon[76335]: pgmap v4071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:38 compute-0 nova_compute[244014]: 2026-02-25 13:55:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:38 compute-0 nova_compute[244014]: 2026-02-25 13:55:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:55:38 compute-0 ceph-mon[76335]: pgmap v4072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:39 compute-0 nova_compute[244014]: 2026-02-25 13:55:39.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:40 compute-0 nova_compute[244014]: 2026-02-25 13:55:40.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:40 compute-0 nova_compute[244014]: 2026-02-25 13:55:40.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:41 compute-0 ceph-mon[76335]: pgmap v4073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:43 compute-0 ceph-mon[76335]: pgmap v4074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:55:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
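Every pg_autoscaler line above fits one formula: pg target = usage_ratio x bias x pg_budget, after which the result is quantized to a power of two near the current pg_num. The budget that reproduces these numbers exactly is 300, which is an inference here but is consistent with mon_target_pg_per_osd (default 100) across three OSDs (60 GiB raw at 20 GiB each). Reproducing three of the logged lines:

    # usage_ratio and bias copied from the pg_autoscaler lines above.
    PG_BUDGET = 300  # assumption: mon_target_pg_per_osd (100) * 3 OSDs

    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (1.73878357684759e-05,   1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(f"{name}: pg target {usage * bias * PG_BUDGET}")
    # .mgr  -> 0.0021557249951162337   (logged: same, quantized to 1)
    # vms   -> 0.005216350730542769    (logged: same, quantized to 32)
    # meta  -> 0.0016699640237160273   (logged: same, quantized to 16)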
Feb 25 13:55:44 compute-0 nova_compute[244014]: 2026-02-25 13:55:44.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:45 compute-0 nova_compute[244014]: 2026-02-25 13:55:45.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:45 compute-0 ceph-mon[76335]: pgmap v4075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:45 compute-0 nova_compute[244014]: 2026-02-25 13:55:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:45 compute-0 nova_compute[244014]: 2026-02-25 13:55:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:47 compute-0 ceph-mon[76335]: pgmap v4076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:47 compute-0 nova_compute[244014]: 2026-02-25 13:55:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:55:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:49 compute-0 nova_compute[244014]: 2026-02-25 13:55:49.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:49 compute-0 ceph-mon[76335]: pgmap v4077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:50 compute-0 nova_compute[244014]: 2026-02-25 13:55:50.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:51 compute-0 ceph-mon[76335]: pgmap v4078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:53 compute-0 podman[422844]: 2026-02-25 13:55:53.732756243 +0000 UTC m=+0.074963989 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 13:55:53 compute-0 podman[422845]: 2026-02-25 13:55:53.770983408 +0000 UTC m=+0.108383068 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 13:55:53 compute-0 ceph-mon[76335]: pgmap v4079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:54 compute-0 nova_compute[244014]: 2026-02-25 13:55:54.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:55:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:55:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:55:55 compute-0 nova_compute[244014]: 2026-02-25 13:55:55.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:55 compute-0 ceph-mon[76335]: pgmap v4080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:57 compute-0 ceph-mon[76335]: pgmap v4081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:55:59 compute-0 nova_compute[244014]: 2026-02-25 13:55:59.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:55:59 compute-0 ceph-mon[76335]: pgmap v4082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:00 compute-0 nova_compute[244014]: 2026-02-25 13:56:00.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:01 compute-0 ceph-mon[76335]: pgmap v4083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:03 compute-0 ceph-mon[76335]: pgmap v4084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:04 compute-0 nova_compute[244014]: 2026-02-25 13:56:04.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:05 compute-0 nova_compute[244014]: 2026-02-25 13:56:05.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:05 compute-0 ceph-mon[76335]: pgmap v4085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:06 compute-0 sudo[422886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:56:06 compute-0 sudo[422886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:06 compute-0 sudo[422886]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:06 compute-0 sudo[422911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:56:06 compute-0 sudo[422911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:06 compute-0 sudo[422911]: pam_unix(sudo:session): session closed for user root
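The cephadm binary cached under /var/lib/ceph/<fsid>/ is being re-invoked with gather-facts, which prints a JSON blob of host facts (CPU, memory, NICs, OS) that the mgr then stores, as seen in the earlier config-key writes. Running it by hand is just the command from the log; the two field names printed below are assumptions based on recent cephadm releases, hence the .get():

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    out = subprocess.run(["sudo", "python3", CEPHADM, "gather-facts"],
                         check=True, capture_output=True, text=True).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("memory_total_kb"))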
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:06 compute-0 sudo[422967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:56:06 compute-0 sudo[422967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:06 compute-0 sudo[422967]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:06 compute-0 sudo[422992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:56:06 compute-0 sudo[422992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:56:06 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
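The audit lines show mgr.compute-0.jzfame dispatching JSON-formatted mon commands ("config generate-minimal-conf", "auth get", "osd tree"). The same interface is exposed to any authorized client through librados; a minimal sketch with the python-rados bindings, assuming local access to the client.admin keyring:

```python
# Issue one of the mon commands seen in the audit log via librados.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()
try:
    cmd = json.dumps({"prefix": "osd tree",
                      "states": ["destroyed"], "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret == 0:
        # On this cluster the result would list no destroyed OSDs.
        print(json.loads(outbuf))
finally:
    cluster.shutdown()
```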
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.015822662 +0000 UTC m=+0.058477631 container create 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:56:07 compute-0 systemd[1]: Started libpod-conmon-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope.
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:06.989133934 +0000 UTC m=+0.031788953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.112168767 +0000 UTC m=+0.154823746 container init 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.121341298 +0000 UTC m=+0.163996247 container start 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.125233518 +0000 UTC m=+0.167888547 container attach 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:56:07 compute-0 determined_keller[423044]: 167 167
Feb 25 13:56:07 compute-0 systemd[1]: libpod-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope: Deactivated successfully.
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.129033596 +0000 UTC m=+0.171688575 container died 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-04001b8e0de71ca7f96ccb10c6190247130c13cd37b1db66caad495ba3b7973a-merged.mount: Deactivated successfully.
Feb 25 13:56:07 compute-0 podman[423028]: 2026-02-25 13:56:07.180450506 +0000 UTC m=+0.223105475 container remove 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:07 compute-0 systemd[1]: libpod-conmon-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope: Deactivated successfully.
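Each cephadm helper runs in a throwaway container, which is what produces the create/init/start/attach/died/remove sequence above, all within a second. A sketch of the equivalent one-shot invocation; the "167 167" printed by determined_keller is, presumably, the image's ceph UID/GID being probed, and the stat-based probe below is an assumption rather than cephadm's exact command:

```python
# One-shot container matching the lifecycle above: podman removes it on
# exit, so only its stdout (here a uid/gid pair) survives in the log.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = out.split()
print(uid, gid)  # e.g. "167 167" for the ceph user in this image
```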
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.361621969 +0000 UTC m=+0.059841440 container create c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:56:07 compute-0 systemd[1]: Started libpod-conmon-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope.
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.34156378 +0000 UTC m=+0.039783241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
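The xfs warnings fire because these overlay mounts carry 32-bit inode timestamps capped at 0x7fffffff seconds since the epoch, the classic time_t limit. Converting the cap shows why the kernel says "until 2038":

```python
# 0x7fffffff seconds = 2147483647, the signed 32-bit time_t maximum.
from datetime import datetime, timezone

limit = 0x7FFFFFFF
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```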
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.491193367 +0000 UTC m=+0.189412869 container init c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.501208272 +0000 UTC m=+0.199427703 container start c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.506295416 +0000 UTC m=+0.204515227 container attach c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:56:07 compute-0 ceph-mon[76335]: pgmap v4086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:07 compute-0 thirsty_knuth[423086]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:56:07 compute-0 thirsty_knuth[423086]: --> All data devices are unavailable
Feb 25 13:56:07 compute-0 systemd[1]: libpod-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope: Deactivated successfully.
Feb 25 13:56:07 compute-0 podman[423071]: 2026-02-25 13:56:07.984926154 +0000 UTC m=+0.683145625 container died c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9-merged.mount: Deactivated successfully.
Feb 25 13:56:08 compute-0 podman[423071]: 2026-02-25 13:56:08.03233775 +0000 UTC m=+0.730557191 container remove c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:56:08 compute-0 systemd[1]: libpod-conmon-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope: Deactivated successfully.
Feb 25 13:56:08 compute-0 sudo[422992]: pam_unix(sudo:session): session closed for user root
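The lvm batch run above reports "0 physical, 3 LVM" data devices, all unavailable, and exits without creating anything. That is the expected idempotent outcome when the requested LVs already carry BlueStore OSDs, which the lvm list that follows confirms through their ceph.osd_id tags. A hedged sketch of that availability test; the helper is illustrative, not ceph-volume's internals:

```python
# A device is treated as unavailable for (re)deployment once its LV
# tags already describe a BlueStore OSD. The tag format mirrors the
# lv_tags strings printed by "ceph-volume lvm list" below.
def parse_lv_tags(lv_tags: str) -> dict:
    return dict(kv.split("=", 1) for kv in lv_tags.split(",") if "=" in kv)

def is_available(lv_tags: str) -> bool:
    return "ceph.osd_id" not in parse_lv_tags(lv_tags)

example = "ceph.objectstore=bluestore,ceph.osd_id=0,ceph.type=block"
print(is_available(example))  # False: batch skips this LV
```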
Feb 25 13:56:08 compute-0 sudo[423117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:56:08 compute-0 sudo[423117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:08 compute-0 sudo[423117]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:08 compute-0 sudo[423142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:56:08 compute-0 sudo[423142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.521399035 +0000 UTC m=+0.048596401 container create 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:08 compute-0 systemd[1]: Started libpod-conmon-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope.
Feb 25 13:56:08 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.505759801 +0000 UTC m=+0.032957177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.606028497 +0000 UTC m=+0.133225863 container init 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.612610174 +0000 UTC m=+0.139807550 container start 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.616201796 +0000 UTC m=+0.143399182 container attach 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:56:08 compute-0 systemd[1]: libpod-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope: Deactivated successfully.
Feb 25 13:56:08 compute-0 upbeat_euler[423198]: 167 167
Feb 25 13:56:08 compute-0 conmon[423198]: conmon 1d7ac77c916c6392f8b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope/container/memory.events
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.618387618 +0000 UTC m=+0.145585004 container died 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1f1c2f4d2bcdbbe4e92aa6fc1b29b675c5b33ae56b361e88300faad0d43052-merged.mount: Deactivated successfully.
Feb 25 13:56:08 compute-0 podman[423181]: 2026-02-25 13:56:08.656356776 +0000 UTC m=+0.183554142 container remove 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 13:56:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:08 compute-0 systemd[1]: libpod-conmon-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope: Deactivated successfully.
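The conmon <nwarn> above is benign: at teardown conmon reads the container cgroup's memory.events (the cgroup-v2 OOM-kill counters), but this sub-second container's scope was already removed. A sketch of the read it attempted, handling the vanished path gracefully:

```python
# Read a libpod scope's cgroup-v2 memory.events; an already-removed
# scope simply yields an empty result instead of a warning.
from pathlib import Path

def read_memory_events(scope: str) -> dict:
    path = (Path("/sys/fs/cgroup/machine.slice") / scope
            / "container" / "memory.events")
    if not path.exists():  # the short-lived scope is already gone
        return {}
    return {k: int(v) for k, v in
            (line.split() for line in path.read_text().splitlines())}

scope = ("libpod-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec2702"
         "20cb95215405.scope")
print(read_memory_events(scope))  # {} once the container is gone
```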
Feb 25 13:56:08 compute-0 podman[423221]: 2026-02-25 13:56:08.785600374 +0000 UTC m=+0.047619033 container create 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 13:56:08 compute-0 systemd[1]: Started libpod-conmon-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope.
Feb 25 13:56:08 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:08 compute-0 podman[423221]: 2026-02-25 13:56:08.761383867 +0000 UTC m=+0.023402596 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:08 compute-0 podman[423221]: 2026-02-25 13:56:08.868661502 +0000 UTC m=+0.130680201 container init 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:56:08 compute-0 podman[423221]: 2026-02-25 13:56:08.880233371 +0000 UTC m=+0.142252030 container start 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:56:08 compute-0 podman[423221]: 2026-02-25 13:56:08.883764831 +0000 UTC m=+0.145783500 container attach 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 13:56:09 compute-0 nova_compute[244014]: 2026-02-25 13:56:09.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]: {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     "0": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "devices": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "/dev/loop3"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             ],
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_name": "ceph_lv0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_size": "21470642176",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "name": "ceph_lv0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "tags": {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_name": "ceph",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.crush_device_class": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.encrypted": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.objectstore": "bluestore",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_id": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.vdo": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.with_tpm": "0"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             },
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "vg_name": "ceph_vg0"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         }
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     ],
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     "1": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "devices": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "/dev/loop4"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             ],
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_name": "ceph_lv1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_size": "21470642176",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "name": "ceph_lv1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "tags": {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_name": "ceph",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.crush_device_class": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.encrypted": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.objectstore": "bluestore",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_id": "1",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.vdo": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.with_tpm": "0"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             },
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "vg_name": "ceph_vg1"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         }
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     ],
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     "2": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "devices": [
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "/dev/loop5"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             ],
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_name": "ceph_lv2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_size": "21470642176",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "name": "ceph_lv2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "tags": {
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.cluster_name": "ceph",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.crush_device_class": "",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.encrypted": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.objectstore": "bluestore",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osd_id": "2",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.vdo": "0",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:                 "ceph.with_tpm": "0"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             },
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "type": "block",
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:             "vg_name": "ceph_vg2"
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:         }
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]:     ]
Feb 25 13:56:09 compute-0 funny_chatterjee[423238]: }
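The JSON block above is the full "ceph-volume lvm list --format json" payload: a map from OSD id to the logical volumes backing it. A few lines of Python reduce it to an id-to-device summary; the input file name is a hypothetical capture of that output:

```python
# Summarize the lvm-list JSON: OSD id -> LV path, backing device, fsid.
import json

with open("lvm_list.json") as f:   # hypothetical capture of the output
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
# e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid d19afe3c-...)
```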
Feb 25 13:56:09 compute-0 systemd[1]: libpod-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope: Deactivated successfully.
Feb 25 13:56:09 compute-0 podman[423221]: 2026-02-25 13:56:09.558993561 +0000 UTC m=+0.821012190 container died 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:56:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8-merged.mount: Deactivated successfully.
Feb 25 13:56:09 compute-0 podman[423221]: 2026-02-25 13:56:09.598486072 +0000 UTC m=+0.860504701 container remove 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:09 compute-0 systemd[1]: libpod-conmon-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope: Deactivated successfully.
Feb 25 13:56:09 compute-0 sudo[423142]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:09 compute-0 sudo[423261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:56:09 compute-0 sudo[423261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:09 compute-0 sudo[423261]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:09 compute-0 sudo[423286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:56:09 compute-0 sudo[423286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:09 compute-0 ceph-mon[76335]: pgmap v4087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.038156054 +0000 UTC m=+0.048712804 container create 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:10 compute-0 systemd[1]: Started libpod-conmon-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope.
Feb 25 13:56:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.01689009 +0000 UTC m=+0.027446920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.112649299 +0000 UTC m=+0.123206139 container init 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.122383565 +0000 UTC m=+0.132940315 container start 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.12609023 +0000 UTC m=+0.136647070 container attach 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:56:10 compute-0 nostalgic_turing[423339]: 167 167
Feb 25 13:56:10 compute-0 systemd[1]: libpod-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope: Deactivated successfully.
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.127563522 +0000 UTC m=+0.138120272 container died 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 13:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-37cee36b7f991d3d8cd81e41870c1cc9bd6e0cfbe4efeda50c3a6e2bbc284efa-merged.mount: Deactivated successfully.
Feb 25 13:56:10 compute-0 podman[423323]: 2026-02-25 13:56:10.169348678 +0000 UTC m=+0.179905428 container remove 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:56:10 compute-0 systemd[1]: libpod-conmon-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope: Deactivated successfully.
Feb 25 13:56:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:10 compute-0 podman[423363]: 2026-02-25 13:56:10.345249602 +0000 UTC m=+0.056092583 container create f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:56:10 compute-0 systemd[1]: Started libpod-conmon-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope.
Feb 25 13:56:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:56:10 compute-0 podman[423363]: 2026-02-25 13:56:10.322674161 +0000 UTC m=+0.033517202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:56:10 compute-0 podman[423363]: 2026-02-25 13:56:10.440663081 +0000 UTC m=+0.151506062 container init f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 13:56:10 compute-0 podman[423363]: 2026-02-25 13:56:10.454654638 +0000 UTC m=+0.165497629 container start f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:56:10 compute-0 podman[423363]: 2026-02-25 13:56:10.459446154 +0000 UTC m=+0.170289195 container attach f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 13:56:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:10 compute-0 nova_compute[244014]: 2026-02-25 13:56:10.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:11 compute-0 lvm[423456]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:56:11 compute-0 lvm[423456]: VG ceph_vg0 finished
Feb 25 13:56:11 compute-0 lvm[423459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:56:11 compute-0 lvm[423459]: VG ceph_vg1 finished
Feb 25 13:56:11 compute-0 lvm[423461]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:56:11 compute-0 lvm[423461]: VG ceph_vg2 finished
Feb 25 13:56:11 compute-0 lvm[423462]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:56:11 compute-0 lvm[423462]: VG ceph_vg1 finished
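[The lvm[...] pairs above record event-driven autoactivation: as each loop-device PV appears, udev triggers a scan, and once every PV of a VG is online the VG is reported complete; ceph_vg1 showing up twice is just two udev events racing and is harmless. A minimal sketch for confirming the same state after the fact, assuming an LVM2 recent enough to support JSON reporting:]

    # Sketch: confirm the ceph_vg* volume groups are complete, via
    # `vgs --reportformat json` (available in reasonably recent LVM2).
    import json, subprocess
    out = subprocess.run(["vgs", "--reportformat", "json"],
                         capture_output=True, text=True, check=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"])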
Feb 25 13:56:11 compute-0 inspiring_carson[423380]: {}
Feb 25 13:56:11 compute-0 systemd[1]: libpod-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Deactivated successfully.
Feb 25 13:56:11 compute-0 systemd[1]: libpod-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Consumed 1.227s CPU time.
Feb 25 13:56:11 compute-0 podman[423363]: 2026-02-25 13:56:11.262534204 +0000 UTC m=+0.973377195 container died f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 13:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5-merged.mount: Deactivated successfully.
Feb 25 13:56:11 compute-0 podman[423363]: 2026-02-25 13:56:11.317412602 +0000 UTC m=+1.028255583 container remove f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:56:11 compute-0 systemd[1]: libpod-conmon-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Deactivated successfully.
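[The entries above form one complete one-shot container lifecycle (init, start, attach, died, remove) in about a second, with the container printing only "{}" before exiting: an empty JSON result from a cephadm probe. A sketch of the same pattern; the actual command the "inspiring_carson" container ran is not in the log, so the ceph-volume inventory payload below is an assumed, representative stand-in:]

    # Sketch: run a short-lived container and capture its JSON output, the
    # way cephadm drives the pinned ceph image above. The payload
    # (`ceph-volume inventory --format json`) is assumed, not logged.
    import json, subprocess
    image = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out))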
Feb 25 13:56:11 compute-0 sudo[423286]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:56:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:56:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:56:11 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:56:11 compute-0 sudo[423477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:56:11 compute-0 sudo[423477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:56:11 compute-0 sudo[423477]: pam_unix(sudo:session): session closed for user root
Feb 25 13:56:11 compute-0 ceph-mon[76335]: pgmap v4088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:56:11 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
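[The mon_command bursts above are the cephadm mgr module persisting its per-host device inventory under mgr/cephadm/ config-keys; the audit log redacts the value for config-key set, which is why the entity= lines end bare. The stored blob can be read back with the ceph CLI, sketched here assuming a client with admin access on the node:]

    # Sketch: read back the cephadm host inventory the mgr just stored.
    # Requires a client allowed to run `config-key get` (e.g. client.admin).
    import subprocess
    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:200])  # JSON device inventory, truncated for display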
Feb 25 13:56:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:13 compute-0 ceph-mon[76335]: pgmap v4089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:14 compute-0 nova_compute[244014]: 2026-02-25 13:56:14.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
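[The recurring _set_new_cache_sizes line shows the mon dividing its roughly 0.95 GiB cache budget between incremental-osdmap, full-osdmap and key-value (RocksDB) caches. The three allocations account for nearly all of cache_size; quick arithmetic on the logged values:]

    # The three allocations in the _set_new_cache_sizes line nearly
    # exhaust the cache budget; the remainder is left as headroom.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total)               # 1010827264
    print(cache_size - total)  # 9227467 (~9 MiB slack)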
Feb 25 13:56:15 compute-0 nova_compute[244014]: 2026-02-25 13:56:15.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:15 compute-0 ceph-mon[76335]: pgmap v4090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:16 compute-0 ceph-mon[76335]: pgmap v4091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:19 compute-0 nova_compute[244014]: 2026-02-25 13:56:19.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:19 compute-0 ceph-mon[76335]: pgmap v4092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:20 compute-0 nova_compute[244014]: 2026-02-25 13:56:20.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:21 compute-0 ceph-mon[76335]: pgmap v4093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:23 compute-0 ceph-mon[76335]: pgmap v4094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:24 compute-0 nova_compute[244014]: 2026-02-25 13:56:24.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:24 compute-0 podman[423502]: 2026-02-25 13:56:24.736348306 +0000 UTC m=+0.070995197 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:56:24 compute-0 podman[423503]: 2026-02-25 13:56:24.780633433 +0000 UTC m=+0.115272214 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
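[The two podman health_status entries above are periodic healthcheck runs against the long-lived EDPM containers; health_status=healthy with health_failing_streak=0 means the configured test ('/openstack/healthcheck', mounted read-only into each container) keeps passing. The same state can be probed by hand; a sketch, hedging on the inspect field name, which has shifted across podman releases:]

    # Sketch: query a container's health the way the journal entries above
    # report it. `podman healthcheck run` exits 0 when the check passes.
    import json, subprocess
    name = "ovn_controller"
    subprocess.run(["podman", "healthcheck", "run", name], check=True)
    state = json.loads(subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True).stdout)[0]["State"]
    # Field is "Health" on current podman; older releases used "Healthcheck".
    print((state.get("Health") or state.get("Healthcheck"))["Status"])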
Feb 25 13:56:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:25 compute-0 nova_compute[244014]: 2026-02-25 13:56:25.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:25 compute-0 ceph-mon[76335]: pgmap v4095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:25 compute-0 nova_compute[244014]: 2026-02-25 13:56:25.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:56:26 compute-0 nova_compute[244014]: 2026-02-25 13:56:26.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:56:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:56:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903102671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.464 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
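[Here the nova resource tracker shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size its RBD-backed disk pool; each call also surfaces as a client.openstack df dispatch in the mon audit log a few lines earlier. A sketch of consuming that output; the key names match current Ceph releases but should be treated as an assumption:]

    # Sketch: derive free/total GiB from `ceph df --format=json`. The key
    # names ("stats", "total_bytes", "total_avail_bytes") are assumed from
    # current Ceph output.
    import json, subprocess
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB free of",
          stats["total_bytes"] / 2**30, "GiB")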
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.603 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.606 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.606 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.607 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:56:27 compute-0 ceph-mon[76335]: pgmap v4096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2903102671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.862 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.863 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:56:27 compute-0 nova_compute[244014]: 2026-02-25 13:56:27.956 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.054 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.054 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.069 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.092 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.110 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:56:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:56:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917647811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:56:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.682 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.690 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.706 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.709 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:56:28 compute-0 nova_compute[244014]: 2026-02-25 13:56:28.709 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
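[The inventory payload nova pushed to placement above pairs each resource class with a reserved amount and an allocation_ratio; placement's effective capacity is (total - reserved) * allocation_ratio. Plugging in the logged values for provider cb4dae98-2ac3-4218-9445-2320139e12ad:]

    # Effective placement capacity = (total - reserved) * allocation_ratio,
    # using the inventory nova just reported.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2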
Feb 25 13:56:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1917647811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:56:29 compute-0 nova_compute[244014]: 2026-02-25 13:56:29.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:30 compute-0 ceph-mon[76335]: pgmap v4097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:30 compute-0 nova_compute[244014]: 2026-02-25 13:56:30.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:31 compute-0 ceph-mon[76335]: pgmap v4098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:56:31
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:56:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
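[The rbd_support messages above show the mgr reloading its trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); the empty start_after= means each scan starts from the beginning. The schedules themselves can be listed per pool; a sketch using the rbd CLI's schedule subcommands:]

    # Sketch: list the schedules the rbd_support module just reloaded,
    # per pool; pool names are taken from the log lines above.
    import subprocess
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool], check=False)
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool], check=False)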
Feb 25 13:56:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:33 compute-0 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:33 compute-0 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:56:33 compute-0 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:56:33 compute-0 nova_compute[244014]: 2026-02-25 13:56:33.727 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:56:33 compute-0 ceph-mon[76335]: pgmap v4099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:34 compute-0 nova_compute[244014]: 2026-02-25 13:56:34.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:34 compute-0 nova_compute[244014]: 2026-02-25 13:56:34.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:35 compute-0 nova_compute[244014]: 2026-02-25 13:56:35.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:35 compute-0 ceph-mon[76335]: pgmap v4100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:37 compute-0 ceph-mon[76335]: pgmap v4101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:38 compute-0 nova_compute[244014]: 2026-02-25 13:56:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:38 compute-0 nova_compute[244014]: 2026-02-25 13:56:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:56:39 compute-0 nova_compute[244014]: 2026-02-25 13:56:39.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:39 compute-0 ceph-mon[76335]: pgmap v4102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:40 compute-0 nova_compute[244014]: 2026-02-25 13:56:40.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:41 compute-0 ceph-mon[76335]: pgmap v4103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:41 compute-0 nova_compute[244014]: 2026-02-25 13:56:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:43 compute-0 ceph-mon[76335]: pgmap v4104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:56:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
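[Each pg_autoscaler pair above logs a pool's share of raw capacity and its bias; the "pg target" it prints is consistent with usage_ratio * bias * (target PGs per OSD * number of OSDs), which for this cluster works out to a multiplier of 300 (assumed: 3 OSDs at the default mon_target_pg_per_osd of 100). Reproducing three of the logged lines:]

    # The logged "pg target" values match usage * bias * 300 here; the
    # result is then quantized to a power of two with a per-pool floor,
    # hence "quantized to 1" / "quantized to 32" in the log.
    def pg_target(usage_ratio, bias, multiplier=300):
        return usage_ratio * bias * multiplier
    print(pg_target(7.185749983720779e-06, 1.0))   # 0.002155... -> '.mgr'
    print(pg_target(0.0006714637386478266, 1.0))   # 0.201439... -> 'images'
    print(pg_target(1.3916366864300228e-06, 4.0))  # 0.001669... -> 'cephfs.cephfs.meta'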
Feb 25 13:56:44 compute-0 nova_compute[244014]: 2026-02-25 13:56:44.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:45 compute-0 nova_compute[244014]: 2026-02-25 13:56:45.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:45 compute-0 ceph-mon[76335]: pgmap v4105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:45 compute-0 nova_compute[244014]: 2026-02-25 13:56:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:56:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:56:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:56:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:56:47 compute-0 ceph-mon[76335]: pgmap v4106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:56:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
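[This exchange is a remote client (entity=client.openstack, connecting from 192.168.122.10 rather than the local 192.168.122.100) sizing the volumes pool: a df followed by `osd pool get-quota` on volumes, consistent with a periodic capacity poll by another OpenStack service. The same two queries from the CLI, sketched:]

    # Sketch: the two queries the remote client.openstack just issued,
    # via the ceph CLI with JSON output.
    import json, subprocess
    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)
    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(quota)  # quota_max_objects / quota_max_bytes; 0 means no quota set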
Feb 25 13:56:47 compute-0 nova_compute[244014]: 2026-02-25 13:56:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:47 compute-0 nova_compute[244014]: 2026-02-25 13:56:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:56:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:49 compute-0 nova_compute[244014]: 2026-02-25 13:56:49.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:49 compute-0 ceph-mon[76335]: pgmap v4107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:50 compute-0 nova_compute[244014]: 2026-02-25 13:56:50.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:51 compute-0 ceph-mon[76335]: pgmap v4108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:53 compute-0 ceph-mon[76335]: pgmap v4109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:54 compute-0 nova_compute[244014]: 2026-02-25 13:56:54.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:56:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:56:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:56:55 compute-0 nova_compute[244014]: 2026-02-25 13:56:55.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:55 compute-0 podman[423593]: 2026-02-25 13:56:55.734895648 +0000 UTC m=+0.070869193 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:56:55 compute-0 podman[423594]: 2026-02-25 13:56:55.779191725 +0000 UTC m=+0.107148902 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 13:56:55 compute-0 ceph-mon[76335]: pgmap v4110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:57 compute-0 ceph-mon[76335]: pgmap v4111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:56:59 compute-0 nova_compute[244014]: 2026-02-25 13:56:59.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:56:59 compute-0 ceph-mon[76335]: pgmap v4112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:00 compute-0 nova_compute[244014]: 2026-02-25 13:57:00.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:01 compute-0 ceph-mon[76335]: pgmap v4113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:03 compute-0 ceph-mon[76335]: pgmap v4114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:04 compute-0 nova_compute[244014]: 2026-02-25 13:57:04.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:05 compute-0 nova_compute[244014]: 2026-02-25 13:57:05.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:05 compute-0 ceph-mon[76335]: pgmap v4115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:07 compute-0 ceph-mon[76335]: pgmap v4116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:09 compute-0 nova_compute[244014]: 2026-02-25 13:57:09.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:09 compute-0 ceph-mon[76335]: pgmap v4117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:10 compute-0 nova_compute[244014]: 2026-02-25 13:57:10.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:11 compute-0 sudo[423637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:57:11 compute-0 sudo[423637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:11 compute-0 sudo[423637]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:11 compute-0 sudo[423662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:57:11 compute-0 sudo[423662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:11 compute-0 ceph-mon[76335]: pgmap v4118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:12 compute-0 sudo[423662]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:57:12 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:57:12 compute-0 sudo[423717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:57:12 compute-0 sudo[423717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:12 compute-0 sudo[423717]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:12 compute-0 sudo[423742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:57:12 compute-0 sudo[423742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.61792078 +0000 UTC m=+0.057453822 container create 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:12 compute-0 systemd[1]: Started libpod-conmon-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope.
Feb 25 13:57:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.594866545 +0000 UTC m=+0.034399677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.706248477 +0000 UTC m=+0.145781549 container init 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.716872379 +0000 UTC m=+0.156405451 container start 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.720985846 +0000 UTC m=+0.160518888 container attach 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 13:57:12 compute-0 jolly_kapitsa[423796]: 167 167
Feb 25 13:57:12 compute-0 systemd[1]: libpod-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope: Deactivated successfully.
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.724818685 +0000 UTC m=+0.164351767 container died 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ad2ca98cd4476bd32d729b4d3558fd10eab5cf41c162243b524eb580e1d9ae-merged.mount: Deactivated successfully.
Feb 25 13:57:12 compute-0 podman[423780]: 2026-02-25 13:57:12.774769833 +0000 UTC m=+0.214302885 container remove 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:12 compute-0 systemd[1]: libpod-conmon-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope: Deactivated successfully.
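The short-lived jolly_kapitsa container that only printed "167 167" looks like cephadm's uid/gid probe: before running a real payload it starts the OSD image once just to read the ceph user and group ids inside it (167:167 in upstream ceph images), so files it writes on the host get the right ownership. A hedged reconstruction of that probe; the stat invocation is an assumption, not taken from this log:

    # Sketch: probable uid/gid probe behind the "167 167" output above.
    # Assumes podman and the image from the log are available locally.
    import subprocess
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167"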
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:57:12 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
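Taken together, this audit burst is the cephadm mgr module setting up an OSD deployment: drop the per-host osd_memory_target override, regenerate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keyrings, persist its osd_remove_queue, and look for destroyed OSD ids that could be recycled. That last query is a plain mon command; its CLI equivalent, parsed in Python (a sketch assuming a working `ceph` CLI with an admin keyring; the "nodes"/"status" fields match the usual `osd tree` JSON but are stated from memory):

    # Sketch: the "osd tree destroyed" query dispatched by the mgr above.
    import json, subprocess
    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    tree = json.loads(out)
    destroyed = [n["id"] for n in tree.get("nodes", [])
                 if n.get("status") == "destroyed"]
    print("reusable OSD ids:", destroyed)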
Feb 25 13:57:12 compute-0 podman[423821]: 2026-02-25 13:57:12.948193246 +0000 UTC m=+0.044983308 container create ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:12 compute-0 systemd[1]: Started libpod-conmon-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope.
Feb 25 13:57:13 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:12.92755429 +0000 UTC m=+0.024344282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
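The xfs messages are informational, not failures: these overlay mounts sit on an xfs filesystem created without the bigtime feature, so inode timestamps are capped at the 32-bit epoch limit of 0x7fffffff seconds. Where that limit falls, worked out for reference:

    # The 0x7fffffff timestamp ceiling from the kernel messages, as a UTC date.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00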
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:13.048624567 +0000 UTC m=+0.145414619 container init ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:13.061783941 +0000 UTC m=+0.158573903 container start ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:13.065753554 +0000 UTC m=+0.162543596 container attach ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:57:13 compute-0 upbeat_kowalevski[423838]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:57:13 compute-0 upbeat_kowalevski[423838]: --> All data devices are unavailable
Feb 25 13:57:13 compute-0 systemd[1]: libpod-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope: Deactivated successfully.
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:13.574840705 +0000 UTC m=+0.671630717 container died ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 25 13:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9-merged.mount: Deactivated successfully.
Feb 25 13:57:13 compute-0 podman[423821]: 2026-02-25 13:57:13.628969112 +0000 UTC m=+0.725759074 container remove ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 13:57:13 compute-0 systemd[1]: libpod-conmon-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope: Deactivated successfully.
Feb 25 13:57:13 compute-0 sudo[423742]: pam_unix(sudo:session): session closed for user root
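That closes the `lvm batch` attempt: ceph-volume was handed the three pre-built LVs, found each already carrying OSD data ("passed data devices: 0 physical, 3 LVM", then "All data devices are unavailable"), and exited cleanly as a no-op instead of redeploying, which is why cephadm immediately follows up with an inventory pass. ceph-volume can preview such a batch without changing anything via its --report flag; a sketch, assuming ceph-volume is runnable on the host (in this log it only ran inside the container, and --report was not used):

    # Sketch: dry-run preview of the batch call from the log (--report added).
    import subprocess
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--report", "--format", "json"],
        check=True)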
Feb 25 13:57:13 compute-0 sudo[423869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:57:13 compute-0 sudo[423869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:13 compute-0 sudo[423869]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:13 compute-0 sudo[423894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:57:13 compute-0 sudo[423894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:13 compute-0 ceph-mon[76335]: pgmap v4119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.097941636 +0000 UTC m=+0.047310804 container create 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:57:14 compute-0 systemd[1]: Started libpod-conmon-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope.
Feb 25 13:57:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.075686534 +0000 UTC m=+0.025055772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.180673685 +0000 UTC m=+0.130042923 container init 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.188061825 +0000 UTC m=+0.137431003 container start 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:57:14 compute-0 angry_bouman[423947]: 167 167
Feb 25 13:57:14 compute-0 systemd[1]: libpod-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope: Deactivated successfully.
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.194558649 +0000 UTC m=+0.143927907 container attach 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.195535057 +0000 UTC m=+0.144904245 container died 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 13:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-deac54028aa9bd81b4c567e92bc44ba279a3fb2d186c5a2bcadd620e85c8fe73-merged.mount: Deactivated successfully.
Feb 25 13:57:14 compute-0 podman[423930]: 2026-02-25 13:57:14.237246271 +0000 UTC m=+0.186615479 container remove 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:57:14 compute-0 systemd[1]: libpod-conmon-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope: Deactivated successfully.
Feb 25 13:57:14 compute-0 nova_compute[244014]: 2026-02-25 13:57:14.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.421859532 +0000 UTC m=+0.059849630 container create c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 13:57:14 compute-0 systemd[1]: Started libpod-conmon-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope.
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.399178428 +0000 UTC m=+0.037168606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:14 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.520096561 +0000 UTC m=+0.158086719 container init c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.532993687 +0000 UTC m=+0.170983805 container start c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.536907188 +0000 UTC m=+0.174897306 container attach c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:57:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:14 compute-0 gifted_snyder[423988]: {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     "0": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "devices": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "/dev/loop3"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             ],
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_name": "ceph_lv0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_size": "21470642176",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "name": "ceph_lv0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "tags": {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_name": "ceph",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.crush_device_class": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.encrypted": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.objectstore": "bluestore",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_id": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.vdo": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.with_tpm": "0"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             },
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "vg_name": "ceph_vg0"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         }
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     ],
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     "1": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "devices": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "/dev/loop4"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             ],
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_name": "ceph_lv1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_size": "21470642176",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "name": "ceph_lv1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "tags": {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_name": "ceph",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.crush_device_class": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.encrypted": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.objectstore": "bluestore",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_id": "1",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.vdo": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.with_tpm": "0"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             },
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "vg_name": "ceph_vg1"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         }
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     ],
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     "2": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "devices": [
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "/dev/loop5"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             ],
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_name": "ceph_lv2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_size": "21470642176",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "name": "ceph_lv2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "tags": {
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.cluster_name": "ceph",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.crush_device_class": "",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.encrypted": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.objectstore": "bluestore",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osd_id": "2",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.vdo": "0",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:                 "ceph.with_tpm": "0"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             },
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "type": "block",
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:             "vg_name": "ceph_vg2"
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:         }
Feb 25 13:57:14 compute-0 gifted_snyder[423988]:     ]
Feb 25 13:57:14 compute-0 gifted_snyder[423988]: }
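The gifted_snyder output above is `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the authoritative metadata (cluster fsid, osd fsid, objectstore, encryption) carried as LVM tags. A small parser for exactly this shape, assuming the JSON above has been captured to a file named lvm_list.json:

    # Sketch: summarize the `lvm list` JSON printed above.
    import json
    with open("lvm_list.json") as f:   # assumed capture of the output above
        listing = json.load(f)
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"({tags['ceph.objectstore']}, osd_fsid {tags['ceph.osd_fsid']})")
    # e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (bluestore, osd_fsid d19afe3c-...)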
Feb 25 13:57:14 compute-0 systemd[1]: libpod-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope: Deactivated successfully.
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.843107691 +0000 UTC m=+0.481097849 container died c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 13:57:14 compute-0 nova_compute[244014]: 2026-02-25 13:57:14.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad-merged.mount: Deactivated successfully.
Feb 25 13:57:14 compute-0 podman[423970]: 2026-02-25 13:57:14.894233983 +0000 UTC m=+0.532224091 container remove c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:57:14 compute-0 systemd[1]: libpod-conmon-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope: Deactivated successfully.
Feb 25 13:57:14 compute-0 sudo[423894]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:15 compute-0 sudo[424009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:57:15 compute-0 sudo[424009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:15 compute-0 sudo[424009]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:15 compute-0 sudo[424034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:57:15 compute-0 sudo[424034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.386846098 +0000 UTC m=+0.054808887 container create 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 13:57:15 compute-0 systemd[1]: Started libpod-conmon-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope.
Feb 25 13:57:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.366057107 +0000 UTC m=+0.034019986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.472901521 +0000 UTC m=+0.140864390 container init 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.480778774 +0000 UTC m=+0.148741603 container start 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.48486352 +0000 UTC m=+0.152826399 container attach 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:57:15 compute-0 silly_beaver[424087]: 167 167
Feb 25 13:57:15 compute-0 systemd[1]: libpod-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope: Deactivated successfully.
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.485847238 +0000 UTC m=+0.153810057 container died 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56c1dbb8bc5f72a01990905dd91f489ec5eefdab5ad331a9591eae19315ab2c-merged.mount: Deactivated successfully.
Feb 25 13:57:15 compute-0 podman[424071]: 2026-02-25 13:57:15.535470687 +0000 UTC m=+0.203433506 container remove 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:57:15 compute-0 systemd[1]: libpod-conmon-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope: Deactivated successfully.
Feb 25 13:57:15 compute-0 podman[424110]: 2026-02-25 13:57:15.731959335 +0000 UTC m=+0.052369767 container create 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 13:57:15 compute-0 nova_compute[244014]: 2026-02-25 13:57:15.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:15 compute-0 systemd[1]: Started libpod-conmon-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope.
Feb 25 13:57:15 compute-0 podman[424110]: 2026-02-25 13:57:15.714618413 +0000 UTC m=+0.035028865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:57:15 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:57:15 compute-0 podman[424110]: 2026-02-25 13:57:15.832905411 +0000 UTC m=+0.153315863 container init 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:57:15 compute-0 podman[424110]: 2026-02-25 13:57:15.841953978 +0000 UTC m=+0.162364450 container start 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:57:15 compute-0 podman[424110]: 2026-02-25 13:57:15.845977462 +0000 UTC m=+0.166387894 container attach 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:57:15 compute-0 ceph-mon[76335]: pgmap v4120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:16 compute-0 lvm[424205]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:57:16 compute-0 lvm[424204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:57:16 compute-0 lvm[424204]: VG ceph_vg0 finished
Feb 25 13:57:16 compute-0 lvm[424205]: VG ceph_vg1 finished
Feb 25 13:57:16 compute-0 lvm[424207]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:57:16 compute-0 lvm[424207]: VG ceph_vg2 finished
Feb 25 13:57:16 compute-0 awesome_bohr[424126]: {}
Feb 25 13:57:16 compute-0 systemd[1]: libpod-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Deactivated successfully.
Feb 25 13:57:16 compute-0 systemd[1]: libpod-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Consumed 1.167s CPU time.
Feb 25 13:57:16 compute-0 podman[424110]: 2026-02-25 13:57:16.656149313 +0000 UTC m=+0.976559745 container died 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe-merged.mount: Deactivated successfully.
Feb 25 13:57:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:16 compute-0 podman[424110]: 2026-02-25 13:57:16.706002438 +0000 UTC m=+1.026412890 container remove 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:57:16 compute-0 systemd[1]: libpod-conmon-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Deactivated successfully.
Feb 25 13:57:16 compute-0 sudo[424034]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:57:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:57:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:16 compute-0 sudo[424223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:57:16 compute-0 sudo[424223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:57:16 compute-0 sudo[424223]: pam_unix(sudo:session): session closed for user root
Feb 25 13:57:17 compute-0 ceph-mon[76335]: pgmap v4121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:57:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:19 compute-0 nova_compute[244014]: 2026-02-25 13:57:19.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:19 compute-0 ceph-mon[76335]: pgmap v4122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:20 compute-0 nova_compute[244014]: 2026-02-25 13:57:20.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:21 compute-0 ceph-mon[76335]: pgmap v4123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:23 compute-0 ceph-mon[76335]: pgmap v4124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:24 compute-0 nova_compute[244014]: 2026-02-25 13:57:24.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:25 compute-0 nova_compute[244014]: 2026-02-25 13:57:25.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:25 compute-0 ceph-mon[76335]: pgmap v4125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:26 compute-0 podman[424248]: 2026-02-25 13:57:26.74534592 +0000 UTC m=+0.076911025 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 13:57:26 compute-0 podman[424249]: 2026-02-25 13:57:26.824776495 +0000 UTC m=+0.156035641 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:57:26 compute-0 nova_compute[244014]: 2026-02-25 13:57:26.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:57:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:57:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850778559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.475 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.661 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3465MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.728 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:57:27 compute-0 nova_compute[244014]: 2026-02-25 13:57:27.754 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:57:27 compute-0 ceph-mon[76335]: pgmap v4126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2850778559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:57:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:57:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611876398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:57:28 compute-0 nova_compute[244014]: 2026-02-25 13:57:28.347 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:57:28 compute-0 nova_compute[244014]: 2026-02-25 13:57:28.353 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:57:28 compute-0 nova_compute[244014]: 2026-02-25 13:57:28.369 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:57:28 compute-0 nova_compute[244014]: 2026-02-25 13:57:28.371 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:57:28 compute-0 nova_compute[244014]: 2026-02-25 13:57:28.371 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:57:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1611876398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:57:29 compute-0 nova_compute[244014]: 2026-02-25 13:57:29.495 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:29 compute-0 ceph-mon[76335]: pgmap v4127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:30 compute-0 nova_compute[244014]: 2026-02-25 13:57:30.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:57:31
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'vms', 'default.rgw.control', 'backups']
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:57:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:57:31 compute-0 ceph-mon[76335]: pgmap v4128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:57:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:33 compute-0 ceph-mon[76335]: pgmap v4129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:34 compute-0 nova_compute[244014]: 2026-02-25 13:57:34.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:35 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 13:57:35 compute-0 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:35 compute-0 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:57:35 compute-0 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:57:35 compute-0 nova_compute[244014]: 2026-02-25 13:57:35.392 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:57:35 compute-0 nova_compute[244014]: 2026-02-25 13:57:35.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:35 compute-0 ceph-mon[76335]: pgmap v4130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:36 compute-0 nova_compute[244014]: 2026-02-25 13:57:36.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:37 compute-0 ceph-mon[76335]: pgmap v4131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:38 compute-0 nova_compute[244014]: 2026-02-25 13:57:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:38 compute-0 nova_compute[244014]: 2026-02-25 13:57:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:57:39 compute-0 nova_compute[244014]: 2026-02-25 13:57:39.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:39 compute-0 ceph-mon[76335]: pgmap v4132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:40 compute-0 nova_compute[244014]: 2026-02-25 13:57:40.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:41 compute-0 ceph-mon[76335]: pgmap v4133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:42 compute-0 nova_compute[244014]: 2026-02-25 13:57:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:57:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:57:43 compute-0 ceph-mon[76335]: pgmap v4134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:44 compute-0 nova_compute[244014]: 2026-02-25 13:57:44.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:45 compute-0 nova_compute[244014]: 2026-02-25 13:57:45.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:45 compute-0 ceph-mon[76335]: pgmap v4135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:57:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:57:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:57:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:57:47 compute-0 nova_compute[244014]: 2026-02-25 13:57:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:47 compute-0 nova_compute[244014]: 2026-02-25 13:57:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:47 compute-0 nova_compute[244014]: 2026-02-25 13:57:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:57:47 compute-0 ceph-mon[76335]: pgmap v4136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:57:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:57:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.930459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868930514, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1470, "num_deletes": 251, "total_data_size": 2380725, "memory_usage": 2416512, "flush_reason": "Manual Compaction"}
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868941438, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2347175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84866, "largest_seqno": 86335, "table_properties": {"data_size": 2340244, "index_size": 4065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14141, "raw_average_key_size": 19, "raw_value_size": 2326452, "raw_average_value_size": 3272, "num_data_blocks": 182, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027711, "oldest_key_time": 1772027711, "file_creation_time": 1772027868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 11049 microseconds, and 5284 cpu microseconds.
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.941503) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2347175 bytes OK
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.941535) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943562) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943584) EVENT_LOG_v1 {"time_micros": 1772027868943578, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2374288, prev total WAL file size 2374288, number of live WAL files 2.
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.944503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(2292KB)], [206(9744KB)]
Feb 25 13:57:48 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868944564, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 12325753, "oldest_snapshot_seqno": -1}
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 9827 keys, 10551180 bytes, temperature: kUnknown
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869021970, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 10551180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10491854, "index_size": 33677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 259647, "raw_average_key_size": 26, "raw_value_size": 10322396, "raw_average_value_size": 1050, "num_data_blocks": 1289, "num_entries": 9827, "num_filter_entries": 9827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.022331) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10551180 bytes
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.023873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.0 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(9.7) write-amplify(4.5) OK, records in: 10341, records dropped: 514 output_compression: NoCompression
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.023905) EVENT_LOG_v1 {"time_micros": 1772027869023890, "job": 130, "event": "compaction_finished", "compaction_time_micros": 77512, "compaction_time_cpu_micros": 32143, "output_level": 6, "num_output_files": 1, "total_output_size": 10551180, "num_input_records": 10341, "num_output_records": 9827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869024423, "job": 130, "event": "table_file_deletion", "file_number": 208}
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869026711, "job": 130, "event": "table_file_deletion", "file_number": 206}
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.944364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 13:57:49 compute-0 nova_compute[244014]: 2026-02-25 13:57:49.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:49 compute-0 ceph-mon[76335]: pgmap v4137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:50 compute-0 nova_compute[244014]: 2026-02-25 13:57:50.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:51 compute-0 ceph-mon[76335]: pgmap v4138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:53 compute-0 ceph-mon[76335]: pgmap v4139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:54 compute-0 nova_compute[244014]: 2026-02-25 13:57:54.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:57:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:57:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:57:55 compute-0 nova_compute[244014]: 2026-02-25 13:57:55.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:57:55 compute-0 ceph-mon[76335]: pgmap v4140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:57 compute-0 podman[424337]: 2026-02-25 13:57:57.720575059 +0000 UTC m=+0.063090542 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 13:57:57 compute-0 podman[424338]: 2026-02-25 13:57:57.743170571 +0000 UTC m=+0.084720487 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
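Both health_status=healthy events are driven by the healthcheck each container declares in its config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks). The same state can be read back outside the journal; a sketch using standard podman flags, with the container name taken from the log:

```python
# Read a container's health the way these journal events report it.
# Container name from the log above; flags are stock podman CLI.
import json
import subprocess

out = subprocess.run(
    ['podman', 'inspect', '--format', '{{json .State.Health}}',
     'ovn_metadata_agent'],
    capture_output=True, text=True, check=True).stdout
health = json.loads(out)
print(health.get('Status'), health.get('FailingStreak'))
# e.g. "healthy 0", matching health_status/health_failing_streak above
```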
Feb 25 13:57:57 compute-0 ceph-mon[76335]: pgmap v4141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:59 compute-0 ceph-mon[76335]: pgmap v4142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:57:59 compute-0 nova_compute[244014]: 2026-02-25 13:57:59.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
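The recurring _set_new_cache_sizes lines are the mon's periodic cache auto-tuning, logged every 5 s here. Read as bytes, the three per-cache allocations sit just under the ~0.95 GiB overall target, which is why the identical figures repeat while the workload is steady. A quick check of that arithmetic:

```python
# Sanity-check the mon cache figures logged above (values in bytes).
cache_size = 1020054731
inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

total = inc_alloc + full_alloc + kv_alloc
print(total, total <= cache_size)           # 1010827264 True
print(round(cache_size / 2**30, 2), 'GiB')  # 0.95 GiB target
```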
Feb 25 13:58:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:00 compute-0 nova_compute[244014]: 2026-02-25 13:58:00.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:01 compute-0 ceph-mon[76335]: pgmap v4143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:03 compute-0 ceph-mon[76335]: pgmap v4144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:04 compute-0 nova_compute[244014]: 2026-02-25 13:58:04.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:05 compute-0 ceph-mon[76335]: pgmap v4145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:05 compute-0 nova_compute[244014]: 2026-02-25 13:58:05.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:07 compute-0 ceph-mon[76335]: pgmap v4146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:09 compute-0 nova_compute[244014]: 2026-02-25 13:58:09.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:09 compute-0 ceph-mon[76335]: pgmap v4147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:10 compute-0 nova_compute[244014]: 2026-02-25 13:58:10.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:11 compute-0 ceph-mon[76335]: pgmap v4148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:13 compute-0 ceph-mon[76335]: pgmap v4149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:14 compute-0 nova_compute[244014]: 2026-02-25 13:58:14.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:15 compute-0 nova_compute[244014]: 2026-02-25 13:58:15.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:15 compute-0 ceph-mon[76335]: pgmap v4150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:16 compute-0 sudo[424383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:58:16 compute-0 sudo[424383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:16 compute-0 sudo[424383]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:17 compute-0 sudo[424408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 13:58:17 compute-0 sudo[424408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:17 compute-0 podman[424479]: 2026-02-25 13:58:17.56445238 +0000 UTC m=+0.088022849 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:58:17 compute-0 podman[424479]: 2026-02-25 13:58:17.706370479 +0000 UTC m=+0.229940898 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:58:17 compute-0 ceph-mon[76335]: pgmap v4151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:18 compute-0 sudo[424408]: pam_unix(sudo:session): session closed for user root
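The `which python3` probe followed by `python3 cephadm... ls` is the cephadm orchestrator's host refresh: it finds an interpreter, runs the cephadm binary previously copied under /var/lib/ceph/<fsid>/, and that binary podman-execs into the mon container (the exec/exec_died pair above). A hedged re-run of the same inventory step, with the binary path and timeout copied from the log:

```python
# Re-run the daemon inventory the orchestrator performed above.
# Path, fsid hash and timeout copied from the log; `ls` emits a
# JSON array of daemon records.
import json
import subprocess

CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
           'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b')

out = subprocess.run(
    ['sudo', '/bin/python3', CEPHADM, '--timeout', '895', 'ls'],
    capture_output=True, text=True, check=True).stdout
for daemon in json.loads(out):
    print(daemon.get('name'), daemon.get('state'))
```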
Feb 25 13:58:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:58:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:18 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:58:18 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
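The two config-key set commands are the mgr/cephadm module persisting its per-host caches (discovered devices and host metadata) in the mon key-value store. The stored values can be read back with the stock CLI; the key name here is copied from the log:

```python
# Read back one of the cache keys the mgr just wrote.
import subprocess

key = 'mgr/cephadm/host.compute-0.devices.0'
val = subprocess.run(['ceph', 'config-key', 'get', key],
                     capture_output=True, text=True).stdout
print(val[:200])  # JSON blob describing compute-0's devices
```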
Feb 25 13:58:18 compute-0 sudo[424670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:58:18 compute-0 sudo[424670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:18 compute-0 sudo[424670]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:18 compute-0 sudo[424695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:58:18 compute-0 sudo[424695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:19 compute-0 sudo[424695]: pam_unix(sudo:session): session closed for user root
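gather-facts returns a JSON blob of host inventory that feeds the orchestrator's host cache. A hedged fetch of a couple of commonly present fields (the field names are assumptions about cephadm's output, hence the defensive .get):

```python
# Hedged re-run of the gather-facts step logged above.
import json
import subprocess

CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
           'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b')

facts = json.loads(subprocess.run(
    ['sudo', '/bin/python3', CEPHADM, '--timeout', '895', 'gather-facts'],
    capture_output=True, text=True, check=True).stdout)
# Field names are assumed, not confirmed by this log.
print(facts.get('hostname'), facts.get('memory_total_kb'))
```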
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:58:19 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
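This burst of mon commands is cephadm assembling the inputs for OSD work: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and an osd tree scan for destroyed IDs that could be reused. The two config fetches reproduce with the stock CLI:

```python
# The same two fetches the audit lines record, via the ceph CLI.
import subprocess

conf = subprocess.run(['ceph', 'config', 'generate-minimal-conf'],
                      capture_output=True, text=True, check=True).stdout
keyring = subprocess.run(['ceph', 'auth', 'get', 'client.bootstrap-osd'],
                         capture_output=True, text=True, check=True).stdout
print(conf)                     # [global] fsid/mon_host stanza
print(keyring.splitlines()[0])  # [client.bootstrap-osd]
```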
Feb 25 13:58:19 compute-0 sudo[424753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:58:19 compute-0 sudo[424753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:19 compute-0 sudo[424753]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:19 compute-0 sudo[424778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:58:19 compute-0 sudo[424778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
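The command above is the OSD-create step itself: ceph-volume lvm batch over three pre-built LVs, bluestore objectstore, --no-systemd because cephadm manages the units, and the conf plus keyring gathered a moment earlier streamed on stdin via `--config-json -`. A sketch of that stdin contract follows; the config/keyring key names are an assumption about cephadm's payload format, the keyring is redacted, and --report is added so the sketch is a dry run:

```python
# Hedged reconstruction of the logged ceph-volume call as a dry run.
# Payload key names are assumed; substitute a real bootstrap-osd key
# before running, since <redacted> will not authenticate.
import json
import subprocess

CEPHADM = ('/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/'
           'cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b')
FSID = '8ac33163-6221-5d58-9a39-8b6933fe7762'

payload = json.dumps({
    'config': f'[global]\nfsid = {FSID}\n',
    'keyring': '[client.bootstrap-osd]\nkey = <redacted>\n',
})
subprocess.run(
    ['sudo', '/bin/python3', CEPHADM, '--timeout', '895',
     'ceph-volume', '--fsid', FSID, '--config-json', '-', '--',
     'lvm', 'batch', '--no-auto',
     '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
     '/dev/ceph_vg2/ceph_lv2',
     '--objectstore', 'bluestore', '--report'],
    input=payload, text=True, check=True)
```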
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:19 compute-0 ceph-mon[76335]: pgmap v4152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:58:19 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:58:19 compute-0 nova_compute[244014]: 2026-02-25 13:58:19.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.870810657 +0000 UTC m=+0.050172215 container create 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 13:58:19 compute-0 systemd[1]: Started libpod-conmon-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope.
Feb 25 13:58:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.844575802 +0000 UTC m=+0.023937390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.957596051 +0000 UTC m=+0.136957649 container init 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.964482326 +0000 UTC m=+0.143843874 container start 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.96813702 +0000 UTC m=+0.147498568 container attach 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:58:19 compute-0 thirsty_pare[424829]: 167 167
Feb 25 13:58:19 compute-0 systemd[1]: libpod-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope: Deactivated successfully.
Feb 25 13:58:19 compute-0 podman[424814]: 2026-02-25 13:58:19.97166165 +0000 UTC m=+0.151023228 container died 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-936af154b56e88ba4947a48209644ae3961b486eab167fd8113f2f8fc7bc5309-merged.mount: Deactivated successfully.
Feb 25 13:58:20 compute-0 podman[424814]: 2026-02-25 13:58:20.018077308 +0000 UTC m=+0.197438886 container remove 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 13:58:20 compute-0 systemd[1]: libpod-conmon-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope: Deactivated successfully.
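thirsty_pare is the first of several throwaway containers cephadm launches here (create, init, start, attach, died, remove, all inside ~150 ms). Its single line of output, 167 167, is the ceph user's uid/gid inside the image, which cephadm needs in order to chown host-side data directories correctly. A hedged equivalent probe:

```python
# Probe the ceph uid/gid inside the image, as the short-lived
# container above did; image digest copied from the log.
import subprocess

IMAGE = ('quay.io/ceph/ceph@sha256:'
         '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

out = subprocess.run(
    ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g',
     '/var/lib/ceph'],
    capture_output=True, text=True, check=True).stdout
print(out.strip())  # expected "167 167"
```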
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.198975333 +0000 UTC m=+0.045010429 container create aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 13:58:20 compute-0 systemd[1]: Started libpod-conmon-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope.
Feb 25 13:58:20 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.17771044 +0000 UTC m=+0.023745536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.29607554 +0000 UTC m=+0.142110686 container init aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.30910934 +0000 UTC m=+0.155144406 container start aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.313279868 +0000 UTC m=+0.159314974 container attach aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:58:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:20 compute-0 determined_easley[424870]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:58:20 compute-0 determined_easley[424870]: --> All data devices are unavailable
Feb 25 13:58:20 compute-0 systemd[1]: libpod-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope: Deactivated successfully.
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.787636905 +0000 UTC m=+0.633672001 container died aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 13:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f-merged.mount: Deactivated successfully.
Feb 25 13:58:20 compute-0 podman[424853]: 2026-02-25 13:58:20.839857238 +0000 UTC m=+0.685892334 container remove aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:58:20 compute-0 systemd[1]: libpod-conmon-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope: Deactivated successfully.
Feb 25 13:58:20 compute-0 sudo[424778]: pam_unix(sudo:session): session closed for user root
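The batch run itself (determined_easley) exits within half a second: 0 physical and 3 LVM data devices were passed, and "All data devices are unavailable" means every LV already carries an OSD, so the run is an idempotent no-op rather than a failure. That can be confirmed from the LVM tags ceph-volume reads, using the stock lvs JSON report:

```python
# Confirm why batch had nothing to do: each LV already carries
# ceph.osd_id tags. Uses standard lvm2 JSON reporting.
import json
import subprocess

out = subprocess.run(
    ['sudo', 'lvs', '--reportformat', 'json', '-o', 'lv_name,lv_tags'],
    capture_output=True, text=True, check=True).stdout
for lv in json.loads(out)['report'][0]['lv']:
    if 'ceph.osd_id' in lv['lv_tags']:
        print(lv['lv_name'], 'is already an OSD data LV')
```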
Feb 25 13:58:20 compute-0 nova_compute[244014]: 2026-02-25 13:58:20.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:20 compute-0 sudo[424901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:58:20 compute-0 sudo[424901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:20 compute-0 sudo[424901]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:21 compute-0 sudo[424926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:58:21 compute-0 sudo[424926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.367892767 +0000 UTC m=+0.062547866 container create fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 13:58:21 compute-0 systemd[1]: Started libpod-conmon-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope.
Feb 25 13:58:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.343166325 +0000 UTC m=+0.037821464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.451862781 +0000 UTC m=+0.146517910 container init fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.461226047 +0000 UTC m=+0.155881106 container start fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.465175069 +0000 UTC m=+0.159830208 container attach fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 13:58:21 compute-0 mystifying_lamarr[424979]: 167 167
Feb 25 13:58:21 compute-0 systemd[1]: libpod-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope: Deactivated successfully.
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.466738464 +0000 UTC m=+0.161393553 container died fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 13:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad57d49f52d2b6acef6c4472ba6ba1bd42ec9759ba46cdc2f4a7c17f25bb58d-merged.mount: Deactivated successfully.
Feb 25 13:58:21 compute-0 podman[424963]: 2026-02-25 13:58:21.512549634 +0000 UTC m=+0.207204693 container remove fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 13:58:21 compute-0 systemd[1]: libpod-conmon-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope: Deactivated successfully.
Feb 25 13:58:21 compute-0 podman[425001]: 2026-02-25 13:58:21.702792115 +0000 UTC m=+0.059381257 container create d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:58:21 compute-0 systemd[1]: Started libpod-conmon-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope.
Feb 25 13:58:21 compute-0 podman[425001]: 2026-02-25 13:58:21.680450671 +0000 UTC m=+0.037039803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:21 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:21 compute-0 ceph-mon[76335]: pgmap v4153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:21 compute-0 podman[425001]: 2026-02-25 13:58:21.792925014 +0000 UTC m=+0.149514156 container init d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 13:58:21 compute-0 podman[425001]: 2026-02-25 13:58:21.798674027 +0000 UTC m=+0.155263149 container start d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 13:58:21 compute-0 podman[425001]: 2026-02-25 13:58:21.802518596 +0000 UTC m=+0.159107718 container attach d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:58:22 compute-0 sweet_moore[425017]: {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     "0": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "devices": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "/dev/loop3"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             ],
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_name": "ceph_lv0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_size": "21470642176",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "name": "ceph_lv0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "tags": {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_name": "ceph",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.crush_device_class": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.encrypted": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.objectstore": "bluestore",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_id": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.vdo": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.with_tpm": "0"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             },
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "vg_name": "ceph_vg0"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         }
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     ],
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     "1": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "devices": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "/dev/loop4"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             ],
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_name": "ceph_lv1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_size": "21470642176",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "name": "ceph_lv1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "tags": {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_name": "ceph",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.crush_device_class": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.encrypted": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.objectstore": "bluestore",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_id": "1",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.vdo": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.with_tpm": "0"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             },
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "vg_name": "ceph_vg1"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         }
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     ],
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     "2": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "devices": [
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "/dev/loop5"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             ],
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_name": "ceph_lv2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_size": "21470642176",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "name": "ceph_lv2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "tags": {
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.cluster_name": "ceph",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.crush_device_class": "",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.encrypted": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.objectstore": "bluestore",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osd_id": "2",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.vdo": "0",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:                 "ceph.with_tpm": "0"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             },
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "type": "block",
Feb 25 13:58:22 compute-0 sweet_moore[425017]:             "vg_name": "ceph_vg2"
Feb 25 13:58:22 compute-0 sweet_moore[425017]:         }
Feb 25 13:58:22 compute-0 sweet_moore[425017]:     ]
Feb 25 13:58:22 compute-0 sweet_moore[425017]: }
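sweet_moore is the `ceph-volume lvm list --format json` invocation from 13:58:21: a map of osd_id to the logical volumes backing it, with ceph.* tags recording cluster fsid, objectstore, encryption state and OSD fsid. A small consumer of exactly the structure printed above:

```python
# Summarize the lvm-list JSON shown above: osd id -> LV -> device.
import json
import subprocess

out = subprocess.run(
    ['sudo', 'ceph-volume', 'lvm', 'list', '--format', 'json'],
    capture_output=True, text=True, check=True).stdout
report = json.loads(out)
for osd_id, vols in sorted(report.items(), key=lambda kv: int(kv[0])):
    for vol in vols:
        print(f"osd.{osd_id}: {vol['lv_path']} on {vol['devices'][0]} "
              f"(osd_fsid={vol['tags']['ceph.osd_fsid']})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...), and so on
```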
Feb 25 13:58:22 compute-0 systemd[1]: libpod-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope: Deactivated successfully.
Feb 25 13:58:22 compute-0 podman[425001]: 2026-02-25 13:58:22.099875138 +0000 UTC m=+0.456464280 container died d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 13:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0-merged.mount: Deactivated successfully.
Feb 25 13:58:22 compute-0 podman[425001]: 2026-02-25 13:58:22.158260446 +0000 UTC m=+0.514849538 container remove d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:58:22 compute-0 systemd[1]: libpod-conmon-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope: Deactivated successfully.
Feb 25 13:58:22 compute-0 sudo[424926]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:22 compute-0 sudo[425038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:58:22 compute-0 sudo[425038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:22 compute-0 sudo[425038]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:22 compute-0 sudo[425063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:58:22 compute-0 sudo[425063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.637427729 +0000 UTC m=+0.050035851 container create 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:58:22 compute-0 systemd[1]: Started libpod-conmon-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope.
Feb 25 13:58:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.706345156 +0000 UTC m=+0.118953278 container init 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.712645385 +0000 UTC m=+0.125253487 container start 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.617661368 +0000 UTC m=+0.030269490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.716266197 +0000 UTC m=+0.128874329 container attach 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:58:22 compute-0 vigorous_mahavira[425116]: 167 167
Feb 25 13:58:22 compute-0 systemd[1]: libpod-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope: Deactivated successfully.
Feb 25 13:58:22 compute-0 conmon[425116]: conmon 228f92849189026bc14a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope/container/memory.events
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.718142081 +0000 UTC m=+0.130750213 container died 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:58:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-635062f2352dcaa241a69601e8dbfc66ba97491a7ce94001e3487841be36af70-merged.mount: Deactivated successfully.
Feb 25 13:58:22 compute-0 podman[425100]: 2026-02-25 13:58:22.759938377 +0000 UTC m=+0.172546489 container remove 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:58:22 compute-0 systemd[1]: libpod-conmon-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope: Deactivated successfully.
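[The sudo line at 13:58:22 shows how the cephadm mgr module drives this host: it copies the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<hash> and invokes it with a pinned image digest, and each such call runs as a disposable podman container, the create/init/start/attach/died/remove sequence logged above, alive for well under a second. The `167 167` printed by vigorous_mahavira is plausibly the uid/gid probe cephadm performs before the real command (167 is the ceph user and group in Red Hat/CentOS ceph images); that reading is an inference from the output, not stated in the log. One way to replay these transient lifecycles after the fact is podman's event log; the time window below is illustrative:

    import subprocess

    # Dump the recorded container lifecycle events for the pinned ceph image
    # in the window covered by this log excerpt. --stream=false exits after
    # printing instead of following new events.
    subprocess.run(
        ["podman", "events", "--stream=false",
         "--since", "2026-02-25T13:58:00", "--until", "2026-02-25T13:59:00",
         "--filter",
         "image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"],
        check=True,
    )
]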
Feb 25 13:58:22 compute-0 podman[425139]: 2026-02-25 13:58:22.897129762 +0000 UTC m=+0.038348610 container create bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:58:22 compute-0 systemd[1]: Started libpod-conmon-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope.
Feb 25 13:58:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:58:22 compute-0 podman[425139]: 2026-02-25 13:58:22.880180501 +0000 UTC m=+0.021399399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:58:22 compute-0 podman[425139]: 2026-02-25 13:58:22.982681851 +0000 UTC m=+0.123900709 container init bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 13:58:22 compute-0 podman[425139]: 2026-02-25 13:58:22.98934245 +0000 UTC m=+0.130561318 container start bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:58:22 compute-0 podman[425139]: 2026-02-25 13:58:22.992686675 +0000 UTC m=+0.133905543 container attach bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:58:23 compute-0 lvm[425233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:58:23 compute-0 lvm[425233]: VG ceph_vg0 finished
Feb 25 13:58:23 compute-0 lvm[425236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:58:23 compute-0 lvm[425236]: VG ceph_vg2 finished
Feb 25 13:58:23 compute-0 lvm[425234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:58:23 compute-0 lvm[425234]: VG ceph_vg1 finished
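[These paired lvm[] lines are event-driven autoactivation: as udev brings each PV online, pvscan checks whether that PV's VG now has every PV it expects and, once complete, activates its LVs; with one loop-device PV per ceph_vgN, online and complete coincide. A quick completeness check through the lvm2 reporting interface; the JSON report format and the vg_attr "partial" flag are standard lvm2 features, but treat the field position as an assumption worth verifying with `vgs -o help`:

    import json
    import subprocess

    # List VGs with their PV counts; vg_attr[3] == 'p' marks a "partial" VG,
    # i.e. one with missing PVs, which pvscan would not report as complete.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,vg_attr"],
        check=True, capture_output=True, text=True,
    ).stdout

    for vg in json.loads(out)["report"][0]["vg"]:
        partial = vg["vg_attr"][3] == "p"
        print(vg["vg_name"], f"{vg['pv_count']} PV(s)",
              "incomplete" if partial else "complete")
]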
Feb 25 13:58:23 compute-0 optimistic_swartz[425155]: {}
Feb 25 13:58:23 compute-0 systemd[1]: libpod-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Deactivated successfully.
Feb 25 13:58:23 compute-0 systemd[1]: libpod-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Consumed 1.165s CPU time.
Feb 25 13:58:23 compute-0 podman[425139]: 2026-02-25 13:58:23.705620275 +0000 UTC m=+0.846839133 container died bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23-merged.mount: Deactivated successfully.
Feb 25 13:58:23 compute-0 podman[425139]: 2026-02-25 13:58:23.751334773 +0000 UTC m=+0.892553621 container remove bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:58:23 compute-0 systemd[1]: libpod-conmon-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Deactivated successfully.
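[The `{}` printed by optimistic_swartz at 13:58:23 is the entire result of the `ceph-volume ... raw list --format json` call from the earlier sudo line: `raw list` only reports OSDs prepared directly on raw block devices, and this host has none, its three OSDs live on LVM (ceph_vg0..2 over the loop devices), which is why the orchestrator pairs the call with the `lvm list` inventory seen above. A sketch contrasting the two views, reusing the cephadm shell pattern; the helper is illustrative:

    import json
    import subprocess

    def ceph_volume(*args):
        # Illustrative helper: run a ceph-volume query inside the cephadm shell.
        out = subprocess.run(
            ["cephadm", "shell", "--", "ceph-volume", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    print("raw OSDs:", ceph_volume("raw", "list"))          # {} on this host
    print("lvm OSDs:", sorted(ceph_volume("lvm", "list")))  # ['0', '1', '2']
]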
Feb 25 13:58:23 compute-0 ceph-mon[76335]: pgmap v4154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:23 compute-0 sudo[425063]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:58:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:58:23 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:23 compute-0 sudo[425250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:58:23 compute-0 sudo[425250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:58:23 compute-0 sudo[425250]: pam_unix(sudo:session): session closed for user root
Feb 25 13:58:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:24 compute-0 nova_compute[244014]: 2026-02-25 13:58:24.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:58:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:25 compute-0 ceph-mon[76335]: pgmap v4155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:25 compute-0 nova_compute[244014]: 2026-02-25 13:58:25.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:27 compute-0 ceph-mon[76335]: pgmap v4156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:27 compute-0 nova_compute[244014]: 2026-02-25 13:58:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:28 compute-0 podman[425275]: 2026-02-25 13:58:28.75311647 +0000 UTC m=+0.085091126 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 13:58:28 compute-0 podman[425276]: 2026-02-25 13:58:28.795269037 +0000 UTC m=+0.127243613 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
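[The two health_status=healthy events above are podman's healthcheck timers firing for ovn_metadata_agent and ovn_controller; per the embedded config_data, the check simply executes /openstack/healthcheck inside each container, and health_failing_streak=0 means no recent failures. The same check can be run on demand; `podman healthcheck run` exits 0 for healthy and non-zero otherwise:

    import subprocess

    # Trigger each container's configured healthcheck immediately instead of
    # waiting for the timer; the exit code carries the result.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
]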
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:58:28 compute-0 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:58:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:58:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071338508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.660 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.743 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:58:29 compute-0 nova_compute[244014]: 2026-02-25 13:58:29.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:29 compute-0 ceph-mon[76335]: pgmap v4157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3071338508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:58:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:58:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756672815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.286 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.292 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.311 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.313 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.314 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
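[The inventory dict nova just reported to placement fixes this node's schedulable capacity: placement admits new allocations while used + requested stays within (total - reserved) * allocation_ratio for each resource class. Working the numbers from the log:

    # Capacity implied by the inventory reported above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
]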
Feb 25 13:58:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/756672815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:58:30 compute-0 nova_compute[244014]: 2026-02-25 13:58:30.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:58:31
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', '.mgr']
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:58:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:58:31 compute-0 ceph-mon[76335]: pgmap v4158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:58:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:33 compute-0 ceph-mon[76335]: pgmap v4159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:34 compute-0 nova_compute[244014]: 2026-02-25 13:58:34.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:35 compute-0 ceph-mon[76335]: pgmap v4160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:35 compute-0 nova_compute[244014]: 2026-02-25 13:58:35.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:36 compute-0 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:36 compute-0 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:58:36 compute-0 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:58:36 compute-0 nova_compute[244014]: 2026-02-25 13:58:36.339 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:58:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:37 compute-0 nova_compute[244014]: 2026-02-25 13:58:37.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:37 compute-0 ceph-mon[76335]: pgmap v4161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:39 compute-0 nova_compute[244014]: 2026-02-25 13:58:39.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:39 compute-0 ceph-mon[76335]: pgmap v4162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:40 compute-0 nova_compute[244014]: 2026-02-25 13:58:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:40 compute-0 nova_compute[244014]: 2026-02-25 13:58:40.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:58:40 compute-0 nova_compute[244014]: 2026-02-25 13:58:40.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:41 compute-0 ceph-mon[76335]: pgmap v4163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:42 compute-0 nova_compute[244014]: 2026-02-25 13:58:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:58:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
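[Each pg_autoscaler pool line above applies the same capacity-based sizing rule: the fraction of raw space a pool occupies, times its bias, times the cluster's PG budget. From the 'images' line, 0.20143912... / 0.00067146... = 300 exactly, so the budget here is 300 PGs, which decomposes plausibly (an assumption, not in the log) as 3 OSDs times the default mon_target_pg_per_osd of 100. The result is then "quantized" to a power of two, and the autoscaler leaves pg_num alone unless the ideal and current counts diverge widely, which is why every pool reports current == quantized:

    import math

    # Reproduce the 'images' pool computation (numbers copied from the log;
    # the 300-PG budget is inferred, assumed to be 3 OSDs x 100 target PGs/OSD).
    usage_ratio = 0.0006714637386478266   # pool bytes / cluster raw capacity
    bias = 1.0                            # per-pool pg_autoscale_bias
    pg_budget = 300

    ideal = usage_ratio * bias * pg_budget
    print(ideal)          # ~0.20143912159..., the logged "pg target"

    # Quantization rounds an ideal >= 1 to a power of two; this ideal is far
    # below 1, yet the pool is left at its current 32 PGs rather than shrunk.
    quantized = 2 ** round(math.log2(ideal)) if ideal >= 1 else 1
    print(quantized, "(current stays 32)")
]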
Feb 25 13:58:43 compute-0 ceph-mon[76335]: pgmap v4164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:44 compute-0 nova_compute[244014]: 2026-02-25 13:58:44.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:45 compute-0 ceph-mon[76335]: pgmap v4165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:45 compute-0 nova_compute[244014]: 2026-02-25 13:58:45.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:58:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:58:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:58:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:58:47 compute-0 ceph-mon[76335]: pgmap v4166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:58:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:58:47 compute-0 nova_compute[244014]: 2026-02-25 13:58:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:48 compute-0 nova_compute[244014]: 2026-02-25 13:58:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:49 compute-0 nova_compute[244014]: 2026-02-25 13:58:49.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:49 compute-0 ceph-mon[76335]: pgmap v4167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:49 compute-0 nova_compute[244014]: 2026-02-25 13:58:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:58:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:50 compute-0 nova_compute[244014]: 2026-02-25 13:58:50.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:51 compute-0 ceph-mon[76335]: pgmap v4168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:53 compute-0 ceph-mon[76335]: pgmap v4169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:54 compute-0 nova_compute[244014]: 2026-02-25 13:58:54.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:58:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:58:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:58:55 compute-0 ceph-mon[76335]: pgmap v4170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:56 compute-0 nova_compute[244014]: 2026-02-25 13:58:56.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:57 compute-0 ceph-mon[76335]: pgmap v4171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:58:59 compute-0 podman[425365]: 2026-02-25 13:58:59.710664489 +0000 UTC m=+0.052927624 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 13:58:59 compute-0 podman[425366]: 2026-02-25 13:58:59.756515341 +0000 UTC m=+0.091403346 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0)
Feb 25 13:58:59 compute-0 nova_compute[244014]: 2026-02-25 13:58:59.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:58:59 compute-0 ceph-mon[76335]: pgmap v4172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:01 compute-0 nova_compute[244014]: 2026-02-25 13:59:01.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:01 compute-0 ceph-mon[76335]: pgmap v4173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:03 compute-0 ceph-mon[76335]: pgmap v4174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:04 compute-0 nova_compute[244014]: 2026-02-25 13:59:04.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:05 compute-0 ceph-mon[76335]: pgmap v4175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:06 compute-0 nova_compute[244014]: 2026-02-25 13:59:06.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:07 compute-0 ceph-mon[76335]: pgmap v4176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:09 compute-0 nova_compute[244014]: 2026-02-25 13:59:09.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:09 compute-0 ceph-mon[76335]: pgmap v4177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:11 compute-0 nova_compute[244014]: 2026-02-25 13:59:11.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:11 compute-0 ceph-mon[76335]: pgmap v4178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 13:59:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 86K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1314 writes, 5964 keys, 1314 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
                                           Interval WAL: 1314 writes, 1314 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.6      2.03              0.30        65    0.031       0      0       0.0       0.0
                                             L6      1/0   10.06 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    102.6     88.0      6.63              1.73        64    0.104    476K    33K       0.0       0.0
                                            Sum      1/0   10.06 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     78.5     79.7      8.66              2.03       129    0.067    476K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    142.1    144.8      0.43              0.17        10    0.043     51K   2557       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    102.6     88.0      6.63              1.73        64    0.104    476K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.7      2.03              0.30        64    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.104, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.67 GB write, 0.09 MB/s write, 0.66 GB read, 0.09 MB/s read, 8.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 77.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000986 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4800,74.21 MB,24.4115%) FilterBlock(130,1.28 MB,0.419732%) IndexBlock(130,2.02 MB,0.66567%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 25 13:59:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:13 compute-0 ceph-mon[76335]: pgmap v4179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:14 compute-0 nova_compute[244014]: 2026-02-25 13:59:14.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:15 compute-0 ceph-mon[76335]: pgmap v4180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:16 compute-0 nova_compute[244014]: 2026-02-25 13:59:16.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:16 compute-0 nova_compute[244014]: 2026-02-25 13:59:16.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:17 compute-0 ceph-mon[76335]: pgmap v4181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:19 compute-0 ceph-mon[76335]: pgmap v4182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:19 compute-0 nova_compute[244014]: 2026-02-25 13:59:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:19 compute-0 nova_compute[244014]: 2026-02-25 13:59:19.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:21 compute-0 nova_compute[244014]: 2026-02-25 13:59:21.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:21 compute-0 ceph-mon[76335]: pgmap v4183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:23 compute-0 ceph-mon[76335]: pgmap v4184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:23 compute-0 sudo[425409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:59:23 compute-0 sudo[425409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:23 compute-0 sudo[425409]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:24 compute-0 sudo[425434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 13:59:24 compute-0 sudo[425434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:24 compute-0 sudo[425434]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 13:59:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:59:24 compute-0 sudo[425490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:59:24 compute-0 sudo[425490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:24 compute-0 sudo[425490]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:24 compute-0 sudo[425515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 13:59:24 compute-0 sudo[425515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 13:59:24 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 13:59:24 compute-0 nova_compute[244014]: 2026-02-25 13:59:24.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.020769339 +0000 UTC m=+0.048163798 container create 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:59:25 compute-0 systemd[1]: Started libpod-conmon-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope.
Feb 25 13:59:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.000080162 +0000 UTC m=+0.027474591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.108968573 +0000 UTC m=+0.136363022 container init 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.116852427 +0000 UTC m=+0.144246846 container start 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.120038018 +0000 UTC m=+0.147432467 container attach 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:59:25 compute-0 crazy_bartik[425569]: 167 167
Feb 25 13:59:25 compute-0 systemd[1]: libpod-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope: Deactivated successfully.
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.124408062 +0000 UTC m=+0.151802491 container died 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b118b3a1e382eff305e0fa1fd9ae1986ed19ffb12d8e37776f1a490fdad2d86-merged.mount: Deactivated successfully.
Feb 25 13:59:25 compute-0 podman[425552]: 2026-02-25 13:59:25.169324457 +0000 UTC m=+0.196718906 container remove 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 13:59:25 compute-0 systemd[1]: libpod-conmon-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope: Deactivated successfully.
Feb 25 13:59:25 compute-0 podman[425592]: 2026-02-25 13:59:25.33956988 +0000 UTC m=+0.052580674 container create 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:59:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:25 compute-0 systemd[1]: Started libpod-conmon-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope.
Feb 25 13:59:25 compute-0 podman[425592]: 2026-02-25 13:59:25.314620082 +0000 UTC m=+0.027630926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:25 compute-0 podman[425592]: 2026-02-25 13:59:25.467121101 +0000 UTC m=+0.180131905 container init 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:59:25 compute-0 podman[425592]: 2026-02-25 13:59:25.480443039 +0000 UTC m=+0.193453803 container start 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 13:59:25 compute-0 podman[425592]: 2026-02-25 13:59:25.484845584 +0000 UTC m=+0.197856448 container attach 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 13:59:25 compute-0 ceph-mon[76335]: pgmap v4185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:25 compute-0 sleepy_maxwell[425610]: --> passed data devices: 0 physical, 3 LVM
Feb 25 13:59:25 compute-0 sleepy_maxwell[425610]: --> All data devices are unavailable
Feb 25 13:59:25 compute-0 systemd[1]: libpod-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope: Deactivated successfully.
Feb 25 13:59:26 compute-0 podman[425630]: 2026-02-25 13:59:26.02154057 +0000 UTC m=+0.029468798 container died 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 13:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32-merged.mount: Deactivated successfully.
Feb 25 13:59:26 compute-0 podman[425630]: 2026-02-25 13:59:26.067182386 +0000 UTC m=+0.075110524 container remove 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 13:59:26 compute-0 systemd[1]: libpod-conmon-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope: Deactivated successfully.
Feb 25 13:59:26 compute-0 sudo[425515]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:26 compute-0 nova_compute[244014]: 2026-02-25 13:59:26.166 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:26 compute-0 sudo[425645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:59:26 compute-0 sudo[425645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:26 compute-0 sudo[425645]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:26 compute-0 sudo[425670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 13:59:26 compute-0 sudo[425670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.613079654 +0000 UTC m=+0.060801907 container create f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 13:59:26 compute-0 systemd[1]: Started libpod-conmon-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope.
Feb 25 13:59:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.588121185 +0000 UTC m=+0.035843498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.689420291 +0000 UTC m=+0.137142534 container init f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.699770515 +0000 UTC m=+0.147492738 container start f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.703548002 +0000 UTC m=+0.151270245 container attach f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 13:59:26 compute-0 sharp_edison[425723]: 167 167
Feb 25 13:59:26 compute-0 systemd[1]: libpod-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope: Deactivated successfully.
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.704742456 +0000 UTC m=+0.152464689 container died f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 13:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ee4223907eb74623d16c8fe3db032b5c92f30b8275b80c8de77e57e9e69976f-merged.mount: Deactivated successfully.
Feb 25 13:59:26 compute-0 podman[425706]: 2026-02-25 13:59:26.746900463 +0000 UTC m=+0.194622726 container remove f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 13:59:26 compute-0 systemd[1]: libpod-conmon-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope: Deactivated successfully.
Feb 25 13:59:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:26 compute-0 podman[425745]: 2026-02-25 13:59:26.93138537 +0000 UTC m=+0.052836011 container create b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 13:59:26 compute-0 systemd[1]: Started libpod-conmon-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope.
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:26.912169215 +0000 UTC m=+0.033619846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:27 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:27.048407373 +0000 UTC m=+0.169858014 container init b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:27.062084201 +0000 UTC m=+0.183534832 container start b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:27.066516507 +0000 UTC m=+0.187967198 container attach b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:59:27 compute-0 funny_bouman[425762]: {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     "0": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "devices": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "/dev/loop3"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             ],
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_name": "ceph_lv0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_size": "21470642176",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "name": "ceph_lv0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "tags": {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_name": "ceph",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.crush_device_class": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.encrypted": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.objectstore": "bluestore",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_id": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.vdo": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.with_tpm": "0"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             },
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "vg_name": "ceph_vg0"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         }
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     ],
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     "1": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "devices": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "/dev/loop4"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             ],
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_name": "ceph_lv1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_size": "21470642176",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "name": "ceph_lv1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "tags": {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_name": "ceph",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.crush_device_class": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.encrypted": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.objectstore": "bluestore",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_id": "1",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.vdo": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.with_tpm": "0"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             },
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "vg_name": "ceph_vg1"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         }
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     ],
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     "2": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "devices": [
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "/dev/loop5"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             ],
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_name": "ceph_lv2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_size": "21470642176",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "name": "ceph_lv2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "tags": {
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.cluster_name": "ceph",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.crush_device_class": "",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.encrypted": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.objectstore": "bluestore",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osd_id": "2",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.vdo": "0",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:                 "ceph.with_tpm": "0"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             },
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "type": "block",
Feb 25 13:59:27 compute-0 funny_bouman[425762]:             "vg_name": "ceph_vg2"
Feb 25 13:59:27 compute-0 funny_bouman[425762]:         }
Feb 25 13:59:27 compute-0 funny_bouman[425762]:     ]
Feb 25 13:59:27 compute-0 funny_bouman[425762]: }
Feb 25 13:59:27 compute-0 systemd[1]: libpod-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope: Deactivated successfully.
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:27.428563635 +0000 UTC m=+0.550014246 container died b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 13:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0-merged.mount: Deactivated successfully.
Feb 25 13:59:27 compute-0 podman[425745]: 2026-02-25 13:59:27.478017549 +0000 UTC m=+0.599468180 container remove b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 13:59:27 compute-0 systemd[1]: libpod-conmon-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope: Deactivated successfully.
Feb 25 13:59:27 compute-0 sudo[425670]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:27 compute-0 sudo[425784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 13:59:27 compute-0 sudo[425784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:27 compute-0 sudo[425784]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:27 compute-0 sudo[425809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 13:59:27 compute-0 sudo[425809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:27 compute-0 ceph-mon[76335]: pgmap v4186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:27 compute-0 nova_compute[244014]: 2026-02-25 13:59:27.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:27 compute-0 podman[425846]: 2026-02-25 13:59:27.956498913 +0000 UTC m=+0.047325765 container create fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 13:59:27 compute-0 systemd[1]: Started libpod-conmon-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope.
Feb 25 13:59:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:27.933988144 +0000 UTC m=+0.024815066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:28.044051519 +0000 UTC m=+0.134878401 container init fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:28.053452605 +0000 UTC m=+0.144279457 container start fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:28.057759378 +0000 UTC m=+0.148586240 container attach fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 13:59:28 compute-0 systemd[1]: libpod-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope: Deactivated successfully.
Feb 25 13:59:28 compute-0 nervous_goldstine[425862]: 167 167
Feb 25 13:59:28 compute-0 conmon[425862]: conmon fa4529f3a9d110326288 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope/container/memory.events
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:28.060258419 +0000 UTC m=+0.151085271 container died fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 13:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76baa7b2ff7eafe30c828fae8aa89e5c0cc93e2c52b8758551b559288775000-merged.mount: Deactivated successfully.
Feb 25 13:59:28 compute-0 podman[425846]: 2026-02-25 13:59:28.10433981 +0000 UTC m=+0.195166662 container remove fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 13:59:28 compute-0 systemd[1]: libpod-conmon-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope: Deactivated successfully.
Feb 25 13:59:28 compute-0 podman[425887]: 2026-02-25 13:59:28.272528365 +0000 UTC m=+0.050338130 container create 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 13:59:28 compute-0 systemd[1]: Started libpod-conmon-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope.
Feb 25 13:59:28 compute-0 podman[425887]: 2026-02-25 13:59:28.24700522 +0000 UTC m=+0.024815015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 13:59:28 compute-0 systemd[1]: Started libcrun container.
Feb 25 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 13:59:28 compute-0 podman[425887]: 2026-02-25 13:59:28.377591518 +0000 UTC m=+0.155401303 container init 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 13:59:28 compute-0 podman[425887]: 2026-02-25 13:59:28.38823609 +0000 UTC m=+0.166045845 container start 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 13:59:28 compute-0 podman[425887]: 2026-02-25 13:59:28.417525131 +0000 UTC m=+0.195334946 container attach 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 13:59:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:29 compute-0 lvm[425980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 13:59:29 compute-0 lvm[425980]: VG ceph_vg0 finished
Feb 25 13:59:29 compute-0 lvm[425983]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 13:59:29 compute-0 lvm[425983]: VG ceph_vg1 finished
Feb 25 13:59:29 compute-0 lvm[425985]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 13:59:29 compute-0 lvm[425985]: VG ceph_vg2 finished
Feb 25 13:59:29 compute-0 epic_easley[425904]: {}
Feb 25 13:59:29 compute-0 systemd[1]: libpod-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Deactivated successfully.
Feb 25 13:59:29 compute-0 systemd[1]: libpod-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Consumed 1.050s CPU time.
Feb 25 13:59:29 compute-0 podman[425887]: 2026-02-25 13:59:29.176896049 +0000 UTC m=+0.954705784 container died 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 13:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e-merged.mount: Deactivated successfully.
Feb 25 13:59:29 compute-0 podman[425887]: 2026-02-25 13:59:29.226133457 +0000 UTC m=+1.003943192 container remove 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 13:59:29 compute-0 systemd[1]: libpod-conmon-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Deactivated successfully.
Feb 25 13:59:29 compute-0 sudo[425809]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 13:59:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 13:59:29 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:29 compute-0 sudo[426001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 13:59:29 compute-0 sudo[426001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 13:59:29 compute-0 sudo[426001]: pam_unix(sudo:session): session closed for user root
Feb 25 13:59:29 compute-0 ceph-mon[76335]: pgmap v4187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 13:59:29 compute-0 nova_compute[244014]: 2026-02-25 13:59:29.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:30 compute-0 podman[426026]: 2026-02-25 13:59:30.74441887 +0000 UTC m=+0.084245202 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 13:59:30 compute-0 podman[426027]: 2026-02-25 13:59:30.766812956 +0000 UTC m=+0.102800829 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 25 13:59:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 13:59:30 compute-0 nova_compute[244014]: 2026-02-25 13:59:30.916 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.208 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:59:31
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'volumes']
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 13:59:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:59:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.504 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.658 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.659 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 13:59:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.721 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 13:59:31 compute-0 nova_compute[244014]: 2026-02-25 13:59:31.735 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 13:59:31 compute-0 ceph-mon[76335]: pgmap v4188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1947648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:59:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 13:59:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316481371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:59:32 compute-0 nova_compute[244014]: 2026-02-25 13:59:32.368 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 13:59:32 compute-0 nova_compute[244014]: 2026-02-25 13:59:32.375 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 13:59:32 compute-0 nova_compute[244014]: 2026-02-25 13:59:32.398 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 13:59:32 compute-0 nova_compute[244014]: 2026-02-25 13:59:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 13:59:32 compute-0 nova_compute[244014]: 2026-02-25 13:59:32.400 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 13:59:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3316481371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 13:59:33 compute-0 ceph-mon[76335]: pgmap v4189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:34 compute-0 nova_compute[244014]: 2026-02-25 13:59:34.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:35 compute-0 ceph-mon[76335]: pgmap v4190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:36 compute-0 nova_compute[244014]: 2026-02-25 13:59:36.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:37 compute-0 ceph-mon[76335]: pgmap v4191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:38 compute-0 nova_compute[244014]: 2026-02-25 13:59:38.400 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:38 compute-0 nova_compute[244014]: 2026-02-25 13:59:38.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 13:59:38 compute-0 nova_compute[244014]: 2026-02-25 13:59:38.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 13:59:38 compute-0 nova_compute[244014]: 2026-02-25 13:59:38.418 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 13:59:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:38 compute-0 nova_compute[244014]: 2026-02-25 13:59:38.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:39 compute-0 ceph-mon[76335]: pgmap v4192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:40 compute-0 nova_compute[244014]: 2026-02-25 13:59:40.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:40 compute-0 nova_compute[244014]: 2026-02-25 13:59:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:40 compute-0 nova_compute[244014]: 2026-02-25 13:59:40.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 13:59:41 compute-0 nova_compute[244014]: 2026-02-25 13:59:41.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:41 compute-0 ceph-mon[76335]: pgmap v4193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:42 compute-0 nova_compute[244014]: 2026-02-25 13:59:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 13:59:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 13:59:43 compute-0 ceph-mon[76335]: pgmap v4194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:44 compute-0 nova_compute[244014]: 2026-02-25 13:59:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:44 compute-0 nova_compute[244014]: 2026-02-25 13:59:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 13:59:44 compute-0 nova_compute[244014]: 2026-02-25 13:59:44.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 13:59:44 compute-0 nova_compute[244014]: 2026-02-25 13:59:44.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:44 compute-0 nova_compute[244014]: 2026-02-25 13:59:44.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 13:59:45 compute-0 nova_compute[244014]: 2026-02-25 13:59:45.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:45 compute-0 ceph-mon[76335]: pgmap v4195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:46 compute-0 nova_compute[244014]: 2026-02-25 13:59:46.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 13:59:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:59:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 13:59:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:59:47 compute-0 ceph-mon[76335]: pgmap v4196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 13:59:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 13:59:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:48 compute-0 nova_compute[244014]: 2026-02-25 13:59:48.923 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:49 compute-0 ceph-mon[76335]: pgmap v4197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:50 compute-0 nova_compute[244014]: 2026-02-25 13:59:50.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:50 compute-0 nova_compute[244014]: 2026-02-25 13:59:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:51 compute-0 ceph-mon[76335]: pgmap v4198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:51 compute-0 nova_compute[244014]: 2026-02-25 13:59:51.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:51 compute-0 nova_compute[244014]: 2026-02-25 13:59:51.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:53 compute-0 nova_compute[244014]: 2026-02-25 13:59:53.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 13:59:53 compute-0 ceph-mon[76335]: pgmap v4199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:55 compute-0 nova_compute[244014]: 2026-02-25 13:59:55.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 13:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 13:59:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 13:59:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 13:59:55 compute-0 ceph-mon[76335]: pgmap v4200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:56 compute-0 nova_compute[244014]: 2026-02-25 13:59:56.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 13:59:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:57 compute-0 ceph-mon[76335]: pgmap v4201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 13:59:59 compute-0 ceph-mon[76335]: pgmap v4202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:00 compute-0 nova_compute[244014]: 2026-02-25 14:00:00.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:01 compute-0 nova_compute[244014]: 2026-02-25 14:00:01.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:01 compute-0 podman[426111]: 2026-02-25 14:00:01.733637957 +0000 UTC m=+0.066228251 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 14:00:01 compute-0 podman[426112]: 2026-02-25 14:00:01.764621277 +0000 UTC m=+0.092667622 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
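The two health_status=healthy events above are podman periodically executing each container's configured healthcheck, i.e. the '/openstack/healthcheck' script named in the 'healthcheck' section of config_data. The same check can be triggered on demand; a sketch via subprocess, with the container name taken from the log (the podman CLI usage is standard, the wrapper itself is illustrative):

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command and exits 0 when the container is healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")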
Feb 25 14:00:01 compute-0 ceph-mon[76335]: pgmap v4203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:03 compute-0 ceph-mon[76335]: pgmap v4204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:05 compute-0 nova_compute[244014]: 2026-02-25 14:00:05.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:05 compute-0 ceph-mon[76335]: pgmap v4205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:06 compute-0 nova_compute[244014]: 2026-02-25 14:00:06.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:07 compute-0 ceph-mon[76335]: pgmap v4206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:09 compute-0 ceph-mon[76335]: pgmap v4207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:10 compute-0 nova_compute[244014]: 2026-02-25 14:00:10.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:11 compute-0 nova_compute[244014]: 2026-02-25 14:00:11.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:11 compute-0 ceph-mon[76335]: pgmap v4208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:13 compute-0 ceph-mon[76335]: pgmap v4209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:15 compute-0 nova_compute[244014]: 2026-02-25 14:00:15.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:00:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 450 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 150 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
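The interval counters in the dump above are internally consistent: 300 writes over the 600.0 s interval is 0.5 writes/s, and 0.15 MB of ingest over 600 s is roughly 0.00025 MB/s, which rocksdb rounds to the 0.00 MB/s shown. A small parsing sketch for pulling those figures out of such a dump line (field layout assumed from the text above, not from rocksdb source):

    import re

    def interval_write_rate(line, interval_secs=600.0):
        # Returns (writes_per_second, ingest_mb) from an
        # "Interval writes: ..." stats line like the one above.
        m = re.search(r'Interval writes: (\d+) writes.*ingest: ([\d.]+) MB',
                      line)
        return int(m.group(1)) / interval_secs, float(m.group(2))

    line = ("Interval writes: 300 writes, 450 keys, 300 commit groups, "
            "1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s")
    print(interval_write_rate(line))  # -> (0.5, 0.15)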
Feb 25 14:00:15 compute-0 ceph-mon[76335]: pgmap v4210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:16 compute-0 nova_compute[244014]: 2026-02-25 14:00:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:17 compute-0 ceph-mon[76335]: pgmap v4211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:00:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:00:19 compute-0 ceph-mon[76335]: pgmap v4212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:20 compute-0 nova_compute[244014]: 2026-02-25 14:00:20.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:21 compute-0 nova_compute[244014]: 2026-02-25 14:00:21.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:21 compute-0 ceph-mon[76335]: pgmap v4213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:23 compute-0 ceph-mon[76335]: pgmap v4214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:00:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.75 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:00:25 compute-0 nova_compute[244014]: 2026-02-25 14:00:25.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:25 compute-0 ceph-mon[76335]: pgmap v4215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:26 compute-0 nova_compute[244014]: 2026-02-25 14:00:26.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 14:00:27 compute-0 nova_compute[244014]: 2026-02-25 14:00:27.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:27 compute-0 ceph-mon[76335]: pgmap v4216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:28 compute-0 sshd-session[426158]: Received disconnect from 91.224.92.78 port 41954:11:  [preauth]
Feb 25 14:00:28 compute-0 sshd-session[426158]: Disconnected from authenticating user root 91.224.92.78 port 41954 [preauth]
Feb 25 14:00:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:29 compute-0 sudo[426160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:00:29 compute-0 sudo[426160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:29 compute-0 sudo[426160]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:29 compute-0 sudo[426185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:00:29 compute-0 sudo[426185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: pgmap v4217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:30 compute-0 sudo[426185]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:30 compute-0 sudo[426240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:00:30 compute-0 sudo[426240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:30 compute-0 sudo[426240]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:30 compute-0 sudo[426265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:00:30 compute-0 sudo[426265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.539865 +0000 UTC m=+0.044473144 container create 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:00:30 compute-0 systemd[1]: Started libpod-conmon-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope.
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.517356601 +0000 UTC m=+0.021964745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.640026123 +0000 UTC m=+0.144634257 container init 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.649631046 +0000 UTC m=+0.154239170 container start 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.653184827 +0000 UTC m=+0.157792951 container attach 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 14:00:30 compute-0 eager_yonath[426318]: 167 167
Feb 25 14:00:30 compute-0 systemd[1]: libpod-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope: Deactivated successfully.
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.657354965 +0000 UTC m=+0.161963099 container died 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e68da61d705fb6fb0a6a2382beca5679f6d650cd7bf0c8e0ba376870d95facaf-merged.mount: Deactivated successfully.
Feb 25 14:00:30 compute-0 podman[426302]: 2026-02-25 14:00:30.702042784 +0000 UTC m=+0.206650948 container remove 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 14:00:30 compute-0 systemd[1]: libpod-conmon-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope: Deactivated successfully.
Feb 25 14:00:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:30 compute-0 podman[426343]: 2026-02-25 14:00:30.878223096 +0000 UTC m=+0.053905742 container create 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:00:30 compute-0 nova_compute[244014]: 2026-02-25 14:00:30.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:00:30 compute-0 systemd[1]: Started libpod-conmon-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope.
Feb 25 14:00:30 compute-0 podman[426343]: 2026-02-25 14:00:30.858048303 +0000 UTC m=+0.033731039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:30 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:30 compute-0 podman[426343]: 2026-02-25 14:00:30.978990717 +0000 UTC m=+0.154673363 container init 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 14:00:30 compute-0 podman[426343]: 2026-02-25 14:00:30.993029295 +0000 UTC m=+0.168711981 container start 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:00:30 compute-0 podman[426343]: 2026-02-25 14:00:30.999375225 +0000 UTC m=+0.175057871 container attach 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:00:31 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:00:31
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:00:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430821585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.477 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:00:31 compute-0 competent_roentgen[426360]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:00:31 compute-0 competent_roentgen[426360]: --> All data devices are unavailable
Feb 25 14:00:31 compute-0 systemd[1]: libpod-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope: Deactivated successfully.
Feb 25 14:00:31 compute-0 podman[426343]: 2026-02-25 14:00:31.52296166 +0000 UTC m=+0.698644376 container died 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 14:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf-merged.mount: Deactivated successfully.
Feb 25 14:00:31 compute-0 podman[426343]: 2026-02-25 14:00:31.575040458 +0000 UTC m=+0.750723114 container remove 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 14:00:31 compute-0 systemd[1]: libpod-conmon-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope: Deactivated successfully.
Feb 25 14:00:31 compute-0 sudo[426265]: pam_unix(sudo:session): session closed for user root
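The "All data devices are unavailable" result above is most likely benign rather than an error: cephadm re-ran its scheduled "lvm batch" against /dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2, and ceph-volume rejects logical volumes that already carry OSD data, which matches the three ceph-osd daemons (86953, 88012, 89088) already running on this host. The "lvm list --format json" call dispatched at 14:00:31 below is, on that reading, cephadm re-reading the existing LV-to-OSD mapping.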
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.679 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:00:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:00:31 compute-0 sudo[426414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:00:31 compute-0 sudo[426414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:31 compute-0 sudo[426414]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:00:31 compute-0 nova_compute[244014]: 2026-02-25 14:00:31.758 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:00:31 compute-0 sudo[426439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:00:31 compute-0 sudo[426439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:31 compute-0 podman[426464]: 2026-02-25 14:00:31.925978581 +0000 UTC m=+0.100041881 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 25 14:00:31 compute-0 podman[426465]: 2026-02-25 14:00:31.9611375 +0000 UTC m=+0.135655463 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 14:00:32 compute-0 ceph-mon[76335]: pgmap v4218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1430821585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.113639459 +0000 UTC m=+0.035038586 container create 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:00:32 compute-0 systemd[1]: Started libpod-conmon-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope.
Feb 25 14:00:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.097522302 +0000 UTC m=+0.018921459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.204816968 +0000 UTC m=+0.126216185 container init 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.214368819 +0000 UTC m=+0.135767936 container start 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.220278407 +0000 UTC m=+0.141677614 container attach 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:00:32 compute-0 practical_fermi[426557]: 167 167
Feb 25 14:00:32 compute-0 systemd[1]: libpod-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope: Deactivated successfully.
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.222030696 +0000 UTC m=+0.143429833 container died 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-21af9834b512be85e2a07c21fe560acdd23edb4c1678b1d83c1f4bf9109f0b74-merged.mount: Deactivated successfully.
Feb 25 14:00:32 compute-0 podman[426539]: 2026-02-25 14:00:32.266030245 +0000 UTC m=+0.187429382 container remove 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 14:00:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:00:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841765983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:00:32 compute-0 systemd[1]: libpod-conmon-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope: Deactivated successfully.
Feb 25 14:00:32 compute-0 nova_compute[244014]: 2026-02-25 14:00:32.304 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:00:32 compute-0 nova_compute[244014]: 2026-02-25 14:00:32.313 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:00:32 compute-0 nova_compute[244014]: 2026-02-25 14:00:32.330 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:00:32 compute-0 nova_compute[244014]: 2026-02-25 14:00:32.332 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:00:32 compute-0 nova_compute[244014]: 2026-02-25 14:00:32.333 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.435929319 +0000 UTC m=+0.041765247 container create 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:00:32 compute-0 systemd[1]: Started libpod-conmon-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope.
Feb 25 14:00:32 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.418959907 +0000 UTC m=+0.024795825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.531990796 +0000 UTC m=+0.137826764 container init 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.539369276 +0000 UTC m=+0.145205214 container start 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.54374504 +0000 UTC m=+0.149580978 container attach 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:00:32 compute-0 sshd-session[426555]: Invalid user admin from 2.57.121.112 port 44843
Feb 25 14:00:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:32 compute-0 magical_gagarin[426599]: {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     "0": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "devices": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "/dev/loop3"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             ],
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_name": "ceph_lv0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_size": "21470642176",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "name": "ceph_lv0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "tags": {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.crush_device_class": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.encrypted": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_id": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.vdo": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.with_tpm": "0"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             },
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "vg_name": "ceph_vg0"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         }
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     ],
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     "1": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "devices": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "/dev/loop4"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             ],
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_name": "ceph_lv1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_size": "21470642176",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "name": "ceph_lv1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "tags": {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.crush_device_class": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.encrypted": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_id": "1",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.vdo": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.with_tpm": "0"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             },
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "vg_name": "ceph_vg1"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         }
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     ],
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     "2": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "devices": [
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "/dev/loop5"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             ],
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_name": "ceph_lv2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_size": "21470642176",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "name": "ceph_lv2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "tags": {
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.crush_device_class": "",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.encrypted": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osd_id": "2",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.vdo": "0",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:                 "ceph.with_tpm": "0"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             },
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "type": "block",
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:             "vg_name": "ceph_vg2"
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:         }
Feb 25 14:00:32 compute-0 magical_gagarin[426599]:     ]
Feb 25 14:00:32 compute-0 magical_gagarin[426599]: }
Feb 25 14:00:32 compute-0 systemd[1]: libpod-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope: Deactivated successfully.
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.867292285 +0000 UTC m=+0.473128243 container died 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604-merged.mount: Deactivated successfully.
Feb 25 14:00:32 compute-0 podman[426583]: 2026-02-25 14:00:32.922473042 +0000 UTC m=+0.528308960 container remove 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:00:32 compute-0 systemd[1]: libpod-conmon-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope: Deactivated successfully.
Feb 25 14:00:32 compute-0 sudo[426439]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/841765983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:00:33 compute-0 ceph-mon[76335]: pgmap v4219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:33 compute-0 sudo[426620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:00:33 compute-0 sudo[426620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:33 compute-0 sudo[426620]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:33 compute-0 sudo[426645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:00:33 compute-0 sudo[426645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:33 compute-0 sshd-session[426555]: Received disconnect from 2.57.121.112 port 44843:11: Bye [preauth]
Feb 25 14:00:33 compute-0 sshd-session[426555]: Disconnected from invalid user admin 2.57.121.112 port 44843 [preauth]
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.456725459 +0000 UTC m=+0.062694951 container create d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 14:00:33 compute-0 systemd[1]: Started libpod-conmon-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope.
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.43209623 +0000 UTC m=+0.038065772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.560599678 +0000 UTC m=+0.166569230 container init d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.567492664 +0000 UTC m=+0.173462126 container start d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 14:00:33 compute-0 agitated_goldstine[426698]: 167 167
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.574407089 +0000 UTC m=+0.180376641 container attach d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 14:00:33 compute-0 systemd[1]: libpod-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope: Deactivated successfully.
Feb 25 14:00:33 compute-0 conmon[426698]: conmon d9107f47a1bfb4f7ad03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope/container/memory.events
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.576053146 +0000 UTC m=+0.182022658 container died d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 14:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f6c4010014db9fc74c381358d771332c7272fb04e61cba09545e4b1593e412-merged.mount: Deactivated successfully.
Feb 25 14:00:33 compute-0 podman[426682]: 2026-02-25 14:00:33.621515136 +0000 UTC m=+0.227484598 container remove d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:00:33 compute-0 systemd[1]: libpod-conmon-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope: Deactivated successfully.
Feb 25 14:00:33 compute-0 podman[426723]: 2026-02-25 14:00:33.80657235 +0000 UTC m=+0.040909422 container create 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 14:00:33 compute-0 systemd[1]: Started libpod-conmon-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope.
Feb 25 14:00:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:33 compute-0 podman[426723]: 2026-02-25 14:00:33.78789773 +0000 UTC m=+0.022234872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:00:33 compute-0 podman[426723]: 2026-02-25 14:00:33.902162364 +0000 UTC m=+0.136499466 container init 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 14:00:33 compute-0 podman[426723]: 2026-02-25 14:00:33.911471408 +0000 UTC m=+0.145808480 container start 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:00:33 compute-0 podman[426723]: 2026-02-25 14:00:33.91539864 +0000 UTC m=+0.149735712 container attach 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:00:34 compute-0 lvm[426817]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:00:34 compute-0 lvm[426817]: VG ceph_vg1 finished
Feb 25 14:00:34 compute-0 lvm[426818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:00:34 compute-0 lvm[426818]: VG ceph_vg0 finished
Feb 25 14:00:34 compute-0 lvm[426820]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:00:34 compute-0 lvm[426820]: VG ceph_vg2 finished
Feb 25 14:00:34 compute-0 reverent_bouman[426739]: {}
Feb 25 14:00:34 compute-0 systemd[1]: libpod-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Deactivated successfully.
Feb 25 14:00:34 compute-0 systemd[1]: libpod-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Consumed 1.174s CPU time.
Feb 25 14:00:34 compute-0 podman[426723]: 2026-02-25 14:00:34.724921542 +0000 UTC m=+0.959258644 container died 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade-merged.mount: Deactivated successfully.
Feb 25 14:00:34 compute-0 podman[426723]: 2026-02-25 14:00:34.777162985 +0000 UTC m=+1.011500057 container remove 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:00:34 compute-0 systemd[1]: libpod-conmon-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Deactivated successfully.
Feb 25 14:00:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:34 compute-0 sudo[426645]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:00:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:00:34 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:34 compute-0 sudo[426835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:00:34 compute-0 sudo[426835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:00:34 compute-0 sudo[426835]: pam_unix(sudo:session): session closed for user root
Feb 25 14:00:35 compute-0 nova_compute[244014]: 2026-02-25 14:00:35.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:35 compute-0 ceph-mon[76335]: pgmap v4220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:35 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:00:36 compute-0 nova_compute[244014]: 2026-02-25 14:00:36.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:37 compute-0 ceph-mon[76335]: pgmap v4221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:38 compute-0 nova_compute[244014]: 2026-02-25 14:00:38.334 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:38 compute-0 nova_compute[244014]: 2026-02-25 14:00:38.335 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:00:38 compute-0 nova_compute[244014]: 2026-02-25 14:00:38.335 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:00:38 compute-0 nova_compute[244014]: 2026-02-25 14:00:38.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:00:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:39 compute-0 ceph-mon[76335]: pgmap v4222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:39 compute-0 nova_compute[244014]: 2026-02-25 14:00:39.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:40 compute-0 nova_compute[244014]: 2026-02-25 14:00:40.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:41 compute-0 nova_compute[244014]: 2026-02-25 14:00:41.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:41 compute-0 ceph-mon[76335]: pgmap v4223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:42 compute-0 nova_compute[244014]: 2026-02-25 14:00:42.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:42 compute-0 nova_compute[244014]: 2026-02-25 14:00:42.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:00:43 compute-0 nova_compute[244014]: 2026-02-25 14:00:43.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:43 compute-0 ceph-mon[76335]: pgmap v4224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:00:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:00:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:45 compute-0 nova_compute[244014]: 2026-02-25 14:00:45.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:45 compute-0 ceph-mon[76335]: pgmap v4225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:46 compute-0 nova_compute[244014]: 2026-02-25 14:00:46.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:00:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:00:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:00:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:00:47 compute-0 ceph-mon[76335]: pgmap v4226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:00:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:00:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:49 compute-0 nova_compute[244014]: 2026-02-25 14:00:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:49 compute-0 ceph-mon[76335]: pgmap v4227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:50 compute-0 nova_compute[244014]: 2026-02-25 14:00:50.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:50 compute-0 nova_compute[244014]: 2026-02-25 14:00:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:51 compute-0 nova_compute[244014]: 2026-02-25 14:00:51.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:51 compute-0 ceph-mon[76335]: pgmap v4228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:52 compute-0 nova_compute[244014]: 2026-02-25 14:00:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:00:53 compute-0 ceph-mon[76335]: pgmap v4229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.105 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:00:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.105 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:00:55 compute-0 nova_compute[244014]: 2026-02-25 14:00:55.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:00:55 compute-0 ceph-mon[76335]: pgmap v4230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:56 compute-0 nova_compute[244014]: 2026-02-25 14:00:56.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:00:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:57 compute-0 ceph-mon[76335]: pgmap v4231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:00:59 compute-0 ceph-mon[76335]: pgmap v4232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:00 compute-0 nova_compute[244014]: 2026-02-25 14:01:00.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:01 compute-0 nova_compute[244014]: 2026-02-25 14:01:01.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:01 compute-0 CROND[426862]: (root) CMD (run-parts /etc/cron.hourly)
Feb 25 14:01:01 compute-0 run-parts[426865]: (/etc/cron.hourly) starting 0anacron
Feb 25 14:01:01 compute-0 run-parts[426871]: (/etc/cron.hourly) finished 0anacron
Feb 25 14:01:01 compute-0 CROND[426861]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 25 14:01:01 compute-0 ceph-mon[76335]: pgmap v4233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:02 compute-0 podman[426872]: 2026-02-25 14:01:02.719611924 +0000 UTC m=+0.058590404 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 14:01:02 compute-0 podman[426873]: 2026-02-25 14:01:02.798681189 +0000 UTC m=+0.133621765 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 14:01:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:03 compute-0 ceph-mon[76335]: pgmap v4234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:05 compute-0 nova_compute[244014]: 2026-02-25 14:01:05.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:06 compute-0 ceph-mon[76335]: pgmap v4235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:06 compute-0 nova_compute[244014]: 2026-02-25 14:01:06.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:08 compute-0 ceph-mon[76335]: pgmap v4236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:10 compute-0 ceph-mon[76335]: pgmap v4237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:10 compute-0 nova_compute[244014]: 2026-02-25 14:01:10.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:11 compute-0 ceph-mon[76335]: pgmap v4238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:11 compute-0 nova_compute[244014]: 2026-02-25 14:01:11.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:13 compute-0 ceph-mon[76335]: pgmap v4239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:15 compute-0 nova_compute[244014]: 2026-02-25 14:01:15.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:15 compute-0 ceph-mon[76335]: pgmap v4240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:16 compute-0 nova_compute[244014]: 2026-02-25 14:01:16.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:17 compute-0 ceph-mon[76335]: pgmap v4241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:18 compute-0 nova_compute[244014]: 2026-02-25 14:01:18.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:19 compute-0 ceph-mon[76335]: pgmap v4242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:20 compute-0 nova_compute[244014]: 2026-02-25 14:01:20.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:21 compute-0 nova_compute[244014]: 2026-02-25 14:01:21.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:21 compute-0 ceph-mon[76335]: pgmap v4243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:01:23 compute-0 ceph-mon[76335]: pgmap v4244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:01:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:01:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:25 compute-0 nova_compute[244014]: 2026-02-25 14:01:25.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:25 compute-0 ceph-mon[76335]: pgmap v4245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:01:26 compute-0 nova_compute[244014]: 2026-02-25 14:01:26.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 14:01:27 compute-0 nova_compute[244014]: 2026-02-25 14:01:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:27 compute-0 ceph-mon[76335]: pgmap v4246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 14:01:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:29 compute-0 ceph-mon[76335]: pgmap v4247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:01:30 compute-0 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:01:31
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:01:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:01:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282600901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.505 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.631 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:01:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.883 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.951 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.951 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 14:01:31 compute-0 ceph-mon[76335]: pgmap v4248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1282600901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.983 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 14:01:31 compute-0 nova_compute[244014]: 2026-02-25 14:01:31.998 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:01:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:01:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439463346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:01:32 compute-0 nova_compute[244014]: 2026-02-25 14:01:32.591 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:01:32 compute-0 nova_compute[244014]: 2026-02-25 14:01:32.596 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:01:32 compute-0 nova_compute[244014]: 2026-02-25 14:01:32.612 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:01:32 compute-0 nova_compute[244014]: 2026-02-25 14:01:32.614 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:01:32 compute-0 nova_compute[244014]: 2026-02-25 14:01:32.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:01:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/439463346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:01:33 compute-0 podman[426961]: 2026-02-25 14:01:33.708225955 +0000 UTC m=+0.056580357 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 14:01:33 compute-0 podman[426962]: 2026-02-25 14:01:33.762824855 +0000 UTC m=+0.102786129 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 14:01:33 compute-0 ceph-mon[76335]: pgmap v4249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 14:01:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 14:01:35 compute-0 sudo[427005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:01:35 compute-0 sudo[427005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:35 compute-0 sudo[427005]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:35 compute-0 sudo[427030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:01:35 compute-0 sudo[427030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:35 compute-0 nova_compute[244014]: 2026-02-25 14:01:35.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:35 compute-0 sudo[427030]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:01:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:01:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:01:35 compute-0 sudo[427087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:01:35 compute-0 sudo[427087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:35 compute-0 sudo[427087]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:35 compute-0 sudo[427112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:01:35 compute-0 sudo[427112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:36 compute-0 ceph-mon[76335]: pgmap v4250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:01:36 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.209624078 +0000 UTC m=+0.078559682 container create becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 14:01:36 compute-0 systemd[1]: Started libpod-conmon-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope.
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.178147134 +0000 UTC m=+0.047082778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.320677061 +0000 UTC m=+0.189612705 container init becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.331286762 +0000 UTC m=+0.200222366 container start becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 14:01:36 compute-0 determined_wilbur[427166]: 167 167
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.339469254 +0000 UTC m=+0.208404898 container attach becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:01:36 compute-0 systemd[1]: libpod-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope: Deactivated successfully.
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.340522594 +0000 UTC m=+0.209458188 container died becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 14:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab5c5e99eab395552b95a2294b8ceaf4d1624f8329c5131c76fd57542ada10cc-merged.mount: Deactivated successfully.
Feb 25 14:01:36 compute-0 podman[427150]: 2026-02-25 14:01:36.392840629 +0000 UTC m=+0.261776233 container remove becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:01:36 compute-0 systemd[1]: libpod-conmon-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope: Deactivated successfully.
Feb 25 14:01:36 compute-0 podman[427191]: 2026-02-25 14:01:36.575804034 +0000 UTC m=+0.051406421 container create 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:01:36 compute-0 systemd[1]: Started libpod-conmon-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope.
Feb 25 14:01:36 compute-0 podman[427191]: 2026-02-25 14:01:36.548666613 +0000 UTC m=+0.024269040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:36 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:36 compute-0 podman[427191]: 2026-02-25 14:01:36.674704781 +0000 UTC m=+0.150307178 container init 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:01:36 compute-0 podman[427191]: 2026-02-25 14:01:36.687372511 +0000 UTC m=+0.162974878 container start 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 14:01:36 compute-0 podman[427191]: 2026-02-25 14:01:36.691725434 +0000 UTC m=+0.167327791 container attach 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 14:01:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 14:01:36 compute-0 nova_compute[244014]: 2026-02-25 14:01:36.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:37 compute-0 wizardly_thompson[427208]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:01:37 compute-0 wizardly_thompson[427208]: --> All data devices are unavailable
Feb 25 14:01:37 compute-0 systemd[1]: libpod-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope: Deactivated successfully.
Feb 25 14:01:37 compute-0 podman[427191]: 2026-02-25 14:01:37.193210241 +0000 UTC m=+0.668812668 container died 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 14:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663-merged.mount: Deactivated successfully.
Feb 25 14:01:37 compute-0 podman[427191]: 2026-02-25 14:01:37.244180318 +0000 UTC m=+0.719782715 container remove 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:01:37 compute-0 systemd[1]: libpod-conmon-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope: Deactivated successfully.
Feb 25 14:01:37 compute-0 sudo[427112]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:37 compute-0 sudo[427242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:01:37 compute-0 sudo[427242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:37 compute-0 sudo[427242]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:37 compute-0 sudo[427267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:01:37 compute-0 sudo[427267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.735215869 +0000 UTC m=+0.054832748 container create 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:01:37 compute-0 systemd[1]: Started libpod-conmon-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope.
Feb 25 14:01:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.717345362 +0000 UTC m=+0.036962271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.822334432 +0000 UTC m=+0.141951401 container init 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.832987324 +0000 UTC m=+0.152604243 container start 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.837218115 +0000 UTC m=+0.156835084 container attach 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:01:37 compute-0 ecstatic_gagarin[427321]: 167 167
Feb 25 14:01:37 compute-0 systemd[1]: libpod-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope: Deactivated successfully.
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.839213031 +0000 UTC m=+0.158829970 container died 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 14:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7148729377a2342d6e994b4c5383aa57aa329346188abdcdbeccd8d90cdb11ce-merged.mount: Deactivated successfully.
Feb 25 14:01:37 compute-0 podman[427305]: 2026-02-25 14:01:37.884439115 +0000 UTC m=+0.204056024 container remove 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 14:01:37 compute-0 systemd[1]: libpod-conmon-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope: Deactivated successfully.
Feb 25 14:01:38 compute-0 ceph-mon[76335]: pgmap v4251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.09419429 +0000 UTC m=+0.065979375 container create 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:01:38 compute-0 systemd[1]: Started libpod-conmon-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope.
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.068236283 +0000 UTC m=+0.040021418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.209506733 +0000 UTC m=+0.181291858 container init 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.217249483 +0000 UTC m=+0.189034568 container start 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.220897177 +0000 UTC m=+0.192682262 container attach 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]: {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     "0": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "devices": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "/dev/loop3"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             ],
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_name": "ceph_lv0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_size": "21470642176",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "name": "ceph_lv0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "tags": {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_name": "ceph",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.crush_device_class": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.encrypted": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.objectstore": "bluestore",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_id": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.vdo": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.with_tpm": "0"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             },
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "vg_name": "ceph_vg0"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         }
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     ],
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     "1": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "devices": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "/dev/loop4"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             ],
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_name": "ceph_lv1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_size": "21470642176",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "name": "ceph_lv1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "tags": {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_name": "ceph",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.crush_device_class": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.encrypted": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.objectstore": "bluestore",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_id": "1",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.vdo": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.with_tpm": "0"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             },
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "vg_name": "ceph_vg1"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         }
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     ],
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     "2": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "devices": [
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "/dev/loop5"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             ],
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_name": "ceph_lv2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_size": "21470642176",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "name": "ceph_lv2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "tags": {
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.cluster_name": "ceph",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.crush_device_class": "",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.encrypted": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.objectstore": "bluestore",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osd_id": "2",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.vdo": "0",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:                 "ceph.with_tpm": "0"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             },
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "type": "block",
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:             "vg_name": "ceph_vg2"
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:         }
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]:     ]
Feb 25 14:01:38 compute-0 lucid_proskuriakova[427361]: }
Feb 25 14:01:38 compute-0 systemd[1]: libpod-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope: Deactivated successfully.
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.573736494 +0000 UTC m=+0.545521579 container died 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 14:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32-merged.mount: Deactivated successfully.
Feb 25 14:01:38 compute-0 nova_compute[244014]: 2026-02-25 14:01:38.614 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:38 compute-0 nova_compute[244014]: 2026-02-25 14:01:38.617 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:01:38 compute-0 nova_compute[244014]: 2026-02-25 14:01:38.617 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:01:38 compute-0 podman[427345]: 2026-02-25 14:01:38.628239101 +0000 UTC m=+0.600024186 container remove 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 14:01:38 compute-0 systemd[1]: libpod-conmon-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope: Deactivated successfully.
Feb 25 14:01:38 compute-0 nova_compute[244014]: 2026-02-25 14:01:38.637 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:01:38 compute-0 sudo[427267]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:38 compute-0 sudo[427382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:01:38 compute-0 sudo[427382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:38 compute-0 sudo[427382]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:38 compute-0 sudo[427407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:01:38 compute-0 sudo[427407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.187816597 +0000 UTC m=+0.060262982 container create 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:01:39 compute-0 systemd[1]: Started libpod-conmon-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope.
Feb 25 14:01:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.162991993 +0000 UTC m=+0.035438468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.265794451 +0000 UTC m=+0.138240906 container init 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.27701993 +0000 UTC m=+0.149466325 container start 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.280763766 +0000 UTC m=+0.153210201 container attach 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:01:39 compute-0 inspiring_poitras[427462]: 167 167
Feb 25 14:01:39 compute-0 systemd[1]: libpod-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope: Deactivated successfully.
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.282789153 +0000 UTC m=+0.155235548 container died 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e74982fc4a0e6d31e185721786ce4dcb3164325580a5a07e90428dc12e2dbe-merged.mount: Deactivated successfully.
Feb 25 14:01:39 compute-0 podman[427446]: 2026-02-25 14:01:39.326884265 +0000 UTC m=+0.199330650 container remove 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:01:39 compute-0 systemd[1]: libpod-conmon-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope: Deactivated successfully.
Feb 25 14:01:39 compute-0 podman[427487]: 2026-02-25 14:01:39.45912294 +0000 UTC m=+0.043602219 container create 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 14:01:39 compute-0 systemd[1]: Started libpod-conmon-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope.
Feb 25 14:01:39 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:01:39 compute-0 podman[427487]: 2026-02-25 14:01:39.438947027 +0000 UTC m=+0.023426246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:01:39 compute-0 podman[427487]: 2026-02-25 14:01:39.55143042 +0000 UTC m=+0.135909669 container init 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 14:01:39 compute-0 podman[427487]: 2026-02-25 14:01:39.556783202 +0000 UTC m=+0.141262421 container start 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:01:39 compute-0 podman[427487]: 2026-02-25 14:01:39.561030573 +0000 UTC m=+0.145509792 container attach 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 14:01:40 compute-0 ceph-mon[76335]: pgmap v4252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 14:01:40 compute-0 lvm[427581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:01:40 compute-0 lvm[427582]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:01:40 compute-0 lvm[427582]: VG ceph_vg1 finished
Feb 25 14:01:40 compute-0 lvm[427581]: VG ceph_vg0 finished
Feb 25 14:01:40 compute-0 lvm[427584]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:01:40 compute-0 lvm[427584]: VG ceph_vg2 finished
Feb 25 14:01:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:40 compute-0 heuristic_herschel[427503]: {}
Feb 25 14:01:40 compute-0 systemd[1]: libpod-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Deactivated successfully.
Feb 25 14:01:40 compute-0 systemd[1]: libpod-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Consumed 1.276s CPU time.
Feb 25 14:01:40 compute-0 podman[427587]: 2026-02-25 14:01:40.494619797 +0000 UTC m=+0.032232306 container died 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51-merged.mount: Deactivated successfully.
Feb 25 14:01:40 compute-0 podman[427587]: 2026-02-25 14:01:40.543612648 +0000 UTC m=+0.081225097 container remove 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 14:01:40 compute-0 systemd[1]: libpod-conmon-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Deactivated successfully.
Feb 25 14:01:40 compute-0 sudo[427407]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:01:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:40 compute-0 nova_compute[244014]: 2026-02-25 14:01:40.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.644317) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100644385, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3487904, "memory_usage": 3551088, "flush_reason": "Manual Compaction"}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100670057, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3420288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86336, "largest_seqno": 88387, "table_properties": {"data_size": 3410846, "index_size": 5999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18655, "raw_average_key_size": 20, "raw_value_size": 3392214, "raw_average_value_size": 3651, "num_data_blocks": 266, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027869, "oldest_key_time": 1772027869, "file_creation_time": 1772028100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 25798 microseconds, and 8725 cpu microseconds.
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.670120) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3420288 bytes OK
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.670148) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672042) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672063) EVENT_LOG_v1 {"time_micros": 1772028100672056, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672088) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3479313, prev total WAL file size 3521005, number of live WAL files 2.
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.673071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3340KB)], [209(10MB)]
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100673162, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 13971468, "oldest_snapshot_seqno": -1}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:40 compute-0 sudo[427602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:01:40 compute-0 sudo[427602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:01:40 compute-0 sudo[427602]: pam_unix(sudo:session): session closed for user root
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 10242 keys, 12154253 bytes, temperature: kUnknown
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100768569, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 12154253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12090510, "index_size": 37006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 268875, "raw_average_key_size": 26, "raw_value_size": 11912164, "raw_average_value_size": 1163, "num_data_blocks": 1426, "num_entries": 10242, "num_filter_entries": 10242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.769539) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 12154253 bytes
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.771259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.3 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 10756, records dropped: 514 output_compression: NoCompression
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.771289) EVENT_LOG_v1 {"time_micros": 1772028100771275, "job": 132, "event": "compaction_finished", "compaction_time_micros": 96149, "compaction_time_cpu_micros": 48679, "output_level": 6, "num_output_files": 1, "total_output_size": 12154253, "num_input_records": 10756, "num_output_records": 10242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100771934, "job": 132, "event": "table_file_deletion", "file_number": 211}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100773518, "job": 132, "event": "table_file_deletion", "file_number": 209}
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:01:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:01:41 compute-0 ceph-mon[76335]: pgmap v4253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:41 compute-0 nova_compute[244014]: 2026-02-25 14:01:41.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:41 compute-0 nova_compute[244014]: 2026-02-25 14:01:41.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:43 compute-0 ceph-mon[76335]: pgmap v4254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:01:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:01:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:44 compute-0 nova_compute[244014]: 2026-02-25 14:01:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:44 compute-0 nova_compute[244014]: 2026-02-25 14:01:44.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:44 compute-0 nova_compute[244014]: 2026-02-25 14:01:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:01:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:45 compute-0 nova_compute[244014]: 2026-02-25 14:01:45.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:45 compute-0 ceph-mon[76335]: pgmap v4255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:46 compute-0 nova_compute[244014]: 2026-02-25 14:01:46.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:01:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:01:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:01:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:01:47 compute-0 ceph-mon[76335]: pgmap v4256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:01:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:01:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:49 compute-0 nova_compute[244014]: 2026-02-25 14:01:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:49 compute-0 ceph-mon[76335]: pgmap v4257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:50 compute-0 nova_compute[244014]: 2026-02-25 14:01:50.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:50 compute-0 nova_compute[244014]: 2026-02-25 14:01:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:51 compute-0 nova_compute[244014]: 2026-02-25 14:01:51.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:51 compute-0 ceph-mon[76335]: pgmap v4258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:53 compute-0 ceph-mon[76335]: pgmap v4259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:54 compute-0 nova_compute[244014]: 2026-02-25 14:01:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.106 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.106 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:01:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.107 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:01:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:01:55 compute-0 nova_compute[244014]: 2026-02-25 14:01:55.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:55 compute-0 ceph-mon[76335]: pgmap v4260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:56 compute-0 nova_compute[244014]: 2026-02-25 14:01:56.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:01:57 compute-0 ceph-mon[76335]: pgmap v4261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:01:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:00 compute-0 ceph-mon[76335]: pgmap v4262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:00 compute-0 nova_compute[244014]: 2026-02-25 14:02:00.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:01 compute-0 nova_compute[244014]: 2026-02-25 14:02:01.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:02 compute-0 ceph-mon[76335]: pgmap v4263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:04 compute-0 ceph-mon[76335]: pgmap v4264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:04 compute-0 podman[427627]: 2026-02-25 14:02:04.73253724 +0000 UTC m=+0.069772032 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 14:02:04 compute-0 podman[427628]: 2026-02-25 14:02:04.777645521 +0000 UTC m=+0.114551473 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:02:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:05 compute-0 nova_compute[244014]: 2026-02-25 14:02:05.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:06 compute-0 ceph-mon[76335]: pgmap v4265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:06 compute-0 nova_compute[244014]: 2026-02-25 14:02:06.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:07 compute-0 ceph-mon[76335]: pgmap v4266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:09 compute-0 ceph-mon[76335]: pgmap v4267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:10 compute-0 nova_compute[244014]: 2026-02-25 14:02:10.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:11 compute-0 nova_compute[244014]: 2026-02-25 14:02:11.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:11 compute-0 ceph-mon[76335]: pgmap v4268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:13 compute-0 ceph-mon[76335]: pgmap v4269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.416912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135416971, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 562281, "memory_usage": 573272, "flush_reason": "Manual Compaction"}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135422014, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 558024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88388, "largest_seqno": 88903, "table_properties": {"data_size": 555104, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6546, "raw_average_key_size": 18, "raw_value_size": 549293, "raw_average_value_size": 1534, "num_data_blocks": 44, "num_entries": 358, "num_filter_entries": 358, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028100, "oldest_key_time": 1772028100, "file_creation_time": 1772028135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 5188 microseconds, and 2765 cpu microseconds.
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.422097) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 558024 bytes OK
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.422122) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423918) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423938) EVENT_LOG_v1 {"time_micros": 1772028135423931, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 559302, prev total WAL file size 559302, number of live WAL files 2.
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.424494) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303138' seq:72057594037927935, type:22 .. '6C6F676D0034323639' seq:0, type:0; will stop at (end)
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(544KB)], [212(11MB)]
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135424561, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 12712277, "oldest_snapshot_seqno": -1}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 10081 keys, 12612880 bytes, temperature: kUnknown
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135551582, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12612880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12549062, "index_size": 37529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25221, "raw_key_size": 266445, "raw_average_key_size": 26, "raw_value_size": 12372196, "raw_average_value_size": 1227, "num_data_blocks": 1447, "num_entries": 10081, "num_filter_entries": 10081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.552103) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12612880 bytes
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.558263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.1 rd, 99.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(45.4) write-amplify(22.6) OK, records in: 10600, records dropped: 519 output_compression: NoCompression
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.558300) EVENT_LOG_v1 {"time_micros": 1772028135558284, "job": 134, "event": "compaction_finished", "compaction_time_micros": 126988, "compaction_time_cpu_micros": 42246, "output_level": 6, "num_output_files": 1, "total_output_size": 12612880, "num_input_records": 10600, "num_output_records": 10081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135558609, "job": 134, "event": "table_file_deletion", "file_number": 214}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135561015, "job": 134, "event": "table_file_deletion", "file_number": 212}
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.424391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:02:15 compute-0 nova_compute[244014]: 2026-02-25 14:02:15.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:15 compute-0 ceph-mon[76335]: pgmap v4270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:16 compute-0 nova_compute[244014]: 2026-02-25 14:02:16.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:17 compute-0 ceph-mon[76335]: pgmap v4271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:19 compute-0 ceph-mon[76335]: pgmap v4272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:20 compute-0 nova_compute[244014]: 2026-02-25 14:02:20.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:21 compute-0 nova_compute[244014]: 2026-02-25 14:02:21.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:21 compute-0 ceph-mon[76335]: pgmap v4273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:23 compute-0 ceph-mon[76335]: pgmap v4274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:25 compute-0 nova_compute[244014]: 2026-02-25 14:02:25.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:25 compute-0 ceph-mon[76335]: pgmap v4275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:26 compute-0 nova_compute[244014]: 2026-02-25 14:02:26.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:27 compute-0 ceph-mon[76335]: pgmap v4276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:28 compute-0 nova_compute[244014]: 2026-02-25 14:02:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:30 compute-0 ceph-mon[76335]: pgmap v4277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:30 compute-0 nova_compute[244014]: 2026-02-25 14:02:30.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:02:31
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms']
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:02:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:02:31 compute-0 nova_compute[244014]: 2026-02-25 14:02:31.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:32 compute-0 ceph-mon[76335]: pgmap v4278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:02:32 compute-0 nova_compute[244014]: 2026-02-25 14:02:32.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:02:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:02:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077948250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.509 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.691 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.693 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3539MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.693 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.694 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.769 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.770 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:02:33 compute-0 nova_compute[244014]: 2026-02-25 14:02:33.798 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:02:34 compute-0 ceph-mon[76335]: pgmap v4279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2077948250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:02:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:02:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525895893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:02:34 compute-0 nova_compute[244014]: 2026-02-25 14:02:34.352 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:02:34 compute-0 nova_compute[244014]: 2026-02-25 14:02:34.357 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:02:34 compute-0 nova_compute[244014]: 2026-02-25 14:02:34.378 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:02:34 compute-0 nova_compute[244014]: 2026-02-25 14:02:34.379 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:02:34 compute-0 nova_compute[244014]: 2026-02-25 14:02:34.379 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:02:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1525895893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:02:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:35 compute-0 nova_compute[244014]: 2026-02-25 14:02:35.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:35 compute-0 podman[427715]: 2026-02-25 14:02:35.754535498 +0000 UTC m=+0.092351613 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 14:02:35 compute-0 podman[427716]: 2026-02-25 14:02:35.759100098 +0000 UTC m=+0.093137215 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:02:36 compute-0 ceph-mon[76335]: pgmap v4280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:36 compute-0 nova_compute[244014]: 2026-02-25 14:02:36.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:37 compute-0 ceph-mon[76335]: pgmap v4281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:39 compute-0 nova_compute[244014]: 2026-02-25 14:02:39.379 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:39 compute-0 nova_compute[244014]: 2026-02-25 14:02:39.380 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:02:39 compute-0 nova_compute[244014]: 2026-02-25 14:02:39.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:02:39 compute-0 nova_compute[244014]: 2026-02-25 14:02:39.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:02:39 compute-0 ceph-mon[76335]: pgmap v4282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:40 compute-0 nova_compute[244014]: 2026-02-25 14:02:40.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:40 compute-0 sudo[427760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:02:40 compute-0 sudo[427760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:40 compute-0 sudo[427760]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:40 compute-0 sudo[427785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:02:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:40 compute-0 sudo[427785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:41 compute-0 sudo[427785]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:02:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:02:41 compute-0 sudo[427841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:02:41 compute-0 sudo[427841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:41 compute-0 sudo[427841]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:41 compute-0 sudo[427866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:02:41 compute-0 sudo[427866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.875135699 +0000 UTC m=+0.059906931 container create a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 25 14:02:41 compute-0 nova_compute[244014]: 2026-02-25 14:02:41.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:41 compute-0 systemd[1]: Started libpod-conmon-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope.
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.849005078 +0000 UTC m=+0.033776320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:41 compute-0 ceph-mon[76335]: pgmap v4283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:02:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:02:41 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.969936831 +0000 UTC m=+0.154708113 container init a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.97766984 +0000 UTC m=+0.162441072 container start a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.981916711 +0000 UTC m=+0.166688013 container attach a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:02:41 compute-0 vibrant_noether[427921]: 167 167
Feb 25 14:02:41 compute-0 systemd[1]: libpod-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope: Deactivated successfully.
Feb 25 14:02:41 compute-0 podman[427904]: 2026-02-25 14:02:41.985394529 +0000 UTC m=+0.170165731 container died a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 14:02:42 compute-0 nova_compute[244014]: 2026-02-25 14:02:42.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-70179f65dde04122ca07c7dea08c3ff71101190d05847825cc07008007242d2a-merged.mount: Deactivated successfully.
Feb 25 14:02:42 compute-0 podman[427904]: 2026-02-25 14:02:42.063885258 +0000 UTC m=+0.248656470 container remove a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:42 compute-0 systemd[1]: libpod-conmon-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope: Deactivated successfully.
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.263523206 +0000 UTC m=+0.063142254 container create 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:02:42 compute-0 systemd[1]: Started libpod-conmon-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope.
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.240040729 +0000 UTC m=+0.039659817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.373446766 +0000 UTC m=+0.173065844 container init 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.387926637 +0000 UTC m=+0.187545675 container start 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.393145305 +0000 UTC m=+0.192764323 container attach 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 14:02:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:42 compute-0 stoic_turing[427960]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:02:42 compute-0 stoic_turing[427960]: --> All data devices are unavailable
Feb 25 14:02:42 compute-0 systemd[1]: libpod-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope: Deactivated successfully.
Feb 25 14:02:42 compute-0 podman[427944]: 2026-02-25 14:02:42.957554288 +0000 UTC m=+0.757173336 container died 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 14:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3-merged.mount: Deactivated successfully.
Feb 25 14:02:43 compute-0 podman[427944]: 2026-02-25 14:02:43.014795723 +0000 UTC m=+0.814414771 container remove 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 14:02:43 compute-0 systemd[1]: libpod-conmon-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope: Deactivated successfully.
Feb 25 14:02:43 compute-0 sudo[427866]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:43 compute-0 sudo[427995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:02:43 compute-0 sudo[427995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:43 compute-0 sudo[427995]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:43 compute-0 sudo[428020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:02:43 compute-0 sudo[428020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.525604405 +0000 UTC m=+0.056743212 container create 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.495610523 +0000 UTC m=+0.026749380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:43 compute-0 systemd[1]: Started libpod-conmon-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope.
Feb 25 14:02:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.662503711 +0000 UTC m=+0.193642558 container init 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.672446023 +0000 UTC m=+0.203584820 container start 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.676932851 +0000 UTC m=+0.208071658 container attach 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:02:43 compute-0 fervent_wescoff[428071]: 167 167
Feb 25 14:02:43 compute-0 systemd[1]: libpod-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope: Deactivated successfully.
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.679274437 +0000 UTC m=+0.210413244 container died 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-34566f80873d5f1129e5382cb9f91ba1f83c4232846b2de163f381078ef9f9da-merged.mount: Deactivated successfully.
Feb 25 14:02:43 compute-0 podman[428055]: 2026-02-25 14:02:43.723999057 +0000 UTC m=+0.255137864 container remove 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 14:02:43 compute-0 systemd[1]: libpod-conmon-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope: Deactivated successfully.
Feb 25 14:02:43 compute-0 podman[428095]: 2026-02-25 14:02:43.887772516 +0000 UTC m=+0.040261454 container create c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 14:02:43 compute-0 systemd[1]: Started libpod-conmon-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope.
Feb 25 14:02:43 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:43 compute-0 ceph-mon[76335]: pgmap v4284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:43 compute-0 podman[428095]: 2026-02-25 14:02:43.871451873 +0000 UTC m=+0.023940831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:02:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:02:43 compute-0 podman[428095]: 2026-02-25 14:02:43.983187845 +0000 UTC m=+0.135676803 container init c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:02:43 compute-0 podman[428095]: 2026-02-25 14:02:43.992101878 +0000 UTC m=+0.144590846 container start c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 14:02:43 compute-0 podman[428095]: 2026-02-25 14:02:43.996514494 +0000 UTC m=+0.149003442 container attach c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 14:02:44 compute-0 funny_maxwell[428111]: {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     "0": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "devices": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "/dev/loop3"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             ],
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_name": "ceph_lv0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_size": "21470642176",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "name": "ceph_lv0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "tags": {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_name": "ceph",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.crush_device_class": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.encrypted": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.objectstore": "bluestore",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_id": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.vdo": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.with_tpm": "0"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             },
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "vg_name": "ceph_vg0"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         }
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     ],
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     "1": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "devices": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "/dev/loop4"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             ],
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_name": "ceph_lv1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_size": "21470642176",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "name": "ceph_lv1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "tags": {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_name": "ceph",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.crush_device_class": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.encrypted": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.objectstore": "bluestore",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_id": "1",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.vdo": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.with_tpm": "0"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             },
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "vg_name": "ceph_vg1"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         }
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     ],
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     "2": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "devices": [
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "/dev/loop5"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             ],
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_name": "ceph_lv2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_size": "21470642176",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "name": "ceph_lv2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "tags": {
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.cluster_name": "ceph",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.crush_device_class": "",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.encrypted": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.objectstore": "bluestore",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osd_id": "2",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.vdo": "0",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:                 "ceph.with_tpm": "0"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             },
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "type": "block",
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:             "vg_name": "ceph_vg2"
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:         }
Feb 25 14:02:44 compute-0 funny_maxwell[428111]:     ]
Feb 25 14:02:44 compute-0 funny_maxwell[428111]: }
Feb 25 14:02:44 compute-0 systemd[1]: libpod-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope: Deactivated successfully.
Feb 25 14:02:44 compute-0 podman[428095]: 2026-02-25 14:02:44.30566779 +0000 UTC m=+0.458156728 container died c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20-merged.mount: Deactivated successfully.
Feb 25 14:02:44 compute-0 podman[428095]: 2026-02-25 14:02:44.35038763 +0000 UTC m=+0.502876598 container remove c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:44 compute-0 systemd[1]: libpod-conmon-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope: Deactivated successfully.
Feb 25 14:02:44 compute-0 sudo[428020]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:44 compute-0 sudo[428132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:02:44 compute-0 sudo[428132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:44 compute-0 sudo[428132]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:44 compute-0 sudo[428157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:02:44 compute-0 sudo[428157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.860647706 +0000 UTC m=+0.066743606 container create cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:02:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:44 compute-0 systemd[1]: Started libpod-conmon-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope.
Feb 25 14:02:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.833589458 +0000 UTC m=+0.039685388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.930309874 +0000 UTC m=+0.136405784 container init cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.938420764 +0000 UTC m=+0.144516664 container start cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.942303674 +0000 UTC m=+0.148399584 container attach cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 14:02:44 compute-0 great_brown[428210]: 167 167
Feb 25 14:02:44 compute-0 systemd[1]: libpod-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope: Deactivated successfully.
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.945864205 +0000 UTC m=+0.151960105 container died cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 14:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-877e8e827b29b56a690ad15144456b05cc766b37a04bc6701460efe948a43148-merged.mount: Deactivated successfully.
Feb 25 14:02:44 compute-0 podman[428194]: 2026-02-25 14:02:44.989788273 +0000 UTC m=+0.195884133 container remove cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:02:45 compute-0 systemd[1]: libpod-conmon-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope: Deactivated successfully.
Feb 25 14:02:45 compute-0 podman[428234]: 2026-02-25 14:02:45.164865443 +0000 UTC m=+0.063642648 container create be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 14:02:45 compute-0 systemd[1]: Started libpod-conmon-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope.
Feb 25 14:02:45 compute-0 podman[428234]: 2026-02-25 14:02:45.134564493 +0000 UTC m=+0.033341738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:02:45 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
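The four xfs messages above are the kernel noting that these overlay bind mounts carry classic 32-bit inode timestamps (most likely the filesystem was formatted without the xfs bigtime feature), so they are only good until the signed-32-bit epoch limit the kernel prints. A quick standard-library check of what 0x7fffffff means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit UNIX timestamp, the limit the
    # kernel reports for these mounts; render it as a UTC date.
    limit = 0x7FFFFFFF
    print(hex(limit), '->', datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00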
Feb 25 14:02:45 compute-0 podman[428234]: 2026-02-25 14:02:45.265379366 +0000 UTC m=+0.164156551 container init be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:02:45 compute-0 podman[428234]: 2026-02-25 14:02:45.277236513 +0000 UTC m=+0.176013708 container start be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:45 compute-0 podman[428234]: 2026-02-25 14:02:45.281847714 +0000 UTC m=+0.180624879 container attach be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 14:02:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:45 compute-0 nova_compute[244014]: 2026-02-25 14:02:45.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:45 compute-0 nova_compute[244014]: 2026-02-25 14:02:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:45 compute-0 ceph-mon[76335]: pgmap v4285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:45 compute-0 lvm[428329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:02:45 compute-0 lvm[428329]: VG ceph_vg0 finished
Feb 25 14:02:46 compute-0 lvm[428330]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:02:46 compute-0 lvm[428330]: VG ceph_vg1 finished
Feb 25 14:02:46 compute-0 lvm[428332]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:02:46 compute-0 lvm[428332]: VG ceph_vg2 finished
Feb 25 14:02:46 compute-0 amazing_banzai[428251]: {}
Feb 25 14:02:46 compute-0 systemd[1]: libpod-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Deactivated successfully.
Feb 25 14:02:46 compute-0 systemd[1]: libpod-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Consumed 1.214s CPU time.
Feb 25 14:02:46 compute-0 podman[428234]: 2026-02-25 14:02:46.198896288 +0000 UTC m=+1.097673483 container died be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f-merged.mount: Deactivated successfully.
Feb 25 14:02:46 compute-0 podman[428234]: 2026-02-25 14:02:46.253844578 +0000 UTC m=+1.152621773 container remove be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 14:02:46 compute-0 systemd[1]: libpod-conmon-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Deactivated successfully.
Feb 25 14:02:46 compute-0 sudo[428157]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:02:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:02:46 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:46 compute-0 sudo[428349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:02:46 compute-0 sudo[428349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:02:46 compute-0 sudo[428349]: pam_unix(sudo:session): session closed for user root
Feb 25 14:02:46 compute-0 nova_compute[244014]: 2026-02-25 14:02:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:46 compute-0 nova_compute[244014]: 2026-02-25 14:02:46.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:02:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:47 compute-0 nova_compute[244014]: 2026-02-25 14:02:47.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:47 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:02:47 compute-0 ceph-mon[76335]: pgmap v4286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:02:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:02:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:02:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:02:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:02:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
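The two audit dispatches above (df and osd pool get-quota from entity='client.openstack') are the monitor-side trace of JSON monitor commands issued over librados, the pattern OpenStack's Ceph clients use for periodic pool-usage checks. A minimal sketch of issuing the identical commands through the python3-rados bindings; the conffile path, client name, and pool name are taken from the log, the rest is plain librados API:

    import json
    import rados

    # Connect with the same identity the audit channel shows.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command sends the JSON to the monitor quorum and returns
            # (return code, output buffer, status string).
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
            if ret == 0:
                print(cmd['prefix'], '->', json.loads(outbuf))
            else:
                print(cmd['prefix'], 'failed:', ret, outs)
    finally:
        cluster.shutdown()

Each such call produces exactly one handle_command/dispatch pair in the mon log, which is why the df and get-quota lines arrive together here.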
Feb 25 14:02:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:49 compute-0 ceph-mon[76335]: pgmap v4287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:50 compute-0 nova_compute[244014]: 2026-02-25 14:02:50.825 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:50 compute-0 nova_compute[244014]: 2026-02-25 14:02:50.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:51 compute-0 ceph-mon[76335]: pgmap v4288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:52 compute-0 nova_compute[244014]: 2026-02-25 14:02:52.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:52 compute-0 nova_compute[244014]: 2026-02-25 14:02:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:53 compute-0 ceph-mon[76335]: pgmap v4289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:02:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
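The Acquiring/acquired/released triple above is the standard debug trace oslo.concurrency emits around a named in-process lock, here guarding ProcessMonitor._check_child_processes; "held 0.000s" means the critical section was effectively instantaneous. A minimal sketch of the pattern that generates this trace, assuming only the oslo.concurrency library and reusing the lock name from the log:

    from oslo_concurrency import lockutils

    # Entering the decorated function logs "Acquiring lock ..." and
    # "Lock ... acquired"; returning from it logs "... released".
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # runs while no other thread holds the same named lock

    check_child_processes()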
Feb 25 14:02:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:02:55 compute-0 nova_compute[244014]: 2026-02-25 14:02:55.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:55 compute-0 nova_compute[244014]: 2026-02-25 14:02:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:02:55 compute-0 ceph-mon[76335]: pgmap v4290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:57 compute-0 nova_compute[244014]: 2026-02-25 14:02:57.053 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:02:57 compute-0 ceph-mon[76335]: pgmap v4291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:02:59 compute-0 ceph-mon[76335]: pgmap v4292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:00 compute-0 nova_compute[244014]: 2026-02-25 14:03:00.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:01 compute-0 ceph-mon[76335]: pgmap v4293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:02 compute-0 nova_compute[244014]: 2026-02-25 14:03:02.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:04 compute-0 ceph-mon[76335]: pgmap v4294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:05 compute-0 nova_compute[244014]: 2026-02-25 14:03:05.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:06 compute-0 ceph-mon[76335]: pgmap v4295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:06 compute-0 sshd-session[428374]: Connection closed by 46.101.242.142 port 37148
Feb 25 14:03:06 compute-0 podman[428375]: 2026-02-25 14:03:06.761683126 +0000 UTC m=+0.080825106 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 14:03:06 compute-0 podman[428376]: 2026-02-25 14:03:06.807879657 +0000 UTC m=+0.126517243 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
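The two health_status=healthy events above are podman's healthcheck timer executing the 'test' command recorded in each container's config_data ('/openstack/healthcheck', bind-mounted read-only from /var/lib/openstack/healthchecks/<name>). The same check can be triggered on demand; a small sketch shelling out to podman, assuming the container names from the log exist on this host:

    import subprocess

    # "podman healthcheck run <ctr>" executes the container's configured
    # test command and exits 0 if the check passes.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        result = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if result.returncode == 0 else 'unhealthy')

health_failing_streak=0 in the events above is podman's counter of consecutive failed checks; it resets to zero on every passing run.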
Feb 25 14:03:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:07 compute-0 nova_compute[244014]: 2026-02-25 14:03:07.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:08 compute-0 ceph-mon[76335]: pgmap v4296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:09 compute-0 ceph-mon[76335]: pgmap v4297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:10 compute-0 nova_compute[244014]: 2026-02-25 14:03:10.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:11 compute-0 ceph-mon[76335]: pgmap v4298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:12 compute-0 nova_compute[244014]: 2026-02-25 14:03:12.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:13 compute-0 ceph-mon[76335]: pgmap v4299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:15 compute-0 ceph-mon[76335]: pgmap v4300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:15 compute-0 nova_compute[244014]: 2026-02-25 14:03:15.987 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:17 compute-0 nova_compute[244014]: 2026-02-25 14:03:17.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:17 compute-0 ceph-mon[76335]: pgmap v4301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:19 compute-0 ceph-mon[76335]: pgmap v4302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:20 compute-0 nova_compute[244014]: 2026-02-25 14:03:20.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:20 compute-0 nova_compute[244014]: 2026-02-25 14:03:20.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:22 compute-0 ceph-mon[76335]: pgmap v4303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:22 compute-0 nova_compute[244014]: 2026-02-25 14:03:22.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:24 compute-0 ceph-mon[76335]: pgmap v4304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:26 compute-0 ceph-mon[76335]: pgmap v4305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:26 compute-0 nova_compute[244014]: 2026-02-25 14:03:26.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:27 compute-0 nova_compute[244014]: 2026-02-25 14:03:27.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:28 compute-0 ceph-mon[76335]: pgmap v4306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:30 compute-0 ceph-mon[76335]: pgmap v4307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.436766) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210436824, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 827, "num_deletes": 250, "total_data_size": 1115255, "memory_usage": 1139920, "flush_reason": "Manual Compaction"}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210444155, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 692333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88904, "largest_seqno": 89730, "table_properties": {"data_size": 688888, "index_size": 1224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9141, "raw_average_key_size": 20, "raw_value_size": 681594, "raw_average_value_size": 1538, "num_data_blocks": 55, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028135, "oldest_key_time": 1772028135, "file_creation_time": 1772028210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 7457 microseconds, and 3577 cpu microseconds.
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.444219) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 692333 bytes OK
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.444246) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446471) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446493) EVENT_LOG_v1 {"time_micros": 1772028210446486, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1111160, prev total WAL file size 1111160, number of live WAL files 2.
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.447106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373533' seq:72057594037927935, type:22 .. '6D6772737461740034303034' seq:0, type:0; will stop at (end)
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(676KB)], [215(12MB)]
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210447145, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 13305213, "oldest_snapshot_seqno": -1}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 10043 keys, 10377031 bytes, temperature: kUnknown
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210539452, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 10377031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10317302, "index_size": 33504, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 265813, "raw_average_key_size": 26, "raw_value_size": 10144952, "raw_average_value_size": 1010, "num_data_blocks": 1281, "num_entries": 10043, "num_filter_entries": 10043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.540224) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 10377031 bytes
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.542188) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.4 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(34.2) write-amplify(15.0) OK, records in: 10524, records dropped: 481 output_compression: NoCompression
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.542224) EVENT_LOG_v1 {"time_micros": 1772028210542207, "job": 136, "event": "compaction_finished", "compaction_time_micros": 92761, "compaction_time_cpu_micros": 39140, "output_level": 6, "num_output_files": 1, "total_output_size": 10377031, "num_input_records": 10524, "num_output_records": 10043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210543053, "job": 136, "event": "table_file_deletion", "file_number": 217}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210545952, "job": 136, "event": "table_file_deletion", "file_number": 215}
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.447028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:30 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
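The rocksdb block above (flush job 135 followed by manual compaction job 136 inside the mon's store.db) logs its machine-readable stats on EVENT_LOG_v1 lines: a fixed prefix followed by a JSON object, which makes the journal easy to mine. A small parsing sketch using field names visible in the entries above:

    import json

    def parse_event_log(line):
        """Return the JSON payload of a 'rocksdb: EVENT_LOG_v1 {...}' line."""
        marker = 'EVENT_LOG_v1 '
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210542207, '
              '"job": 136, "event": "compaction_finished", '
              '"compaction_time_micros": 92761, "total_output_size": 10377031}')
    ev = parse_event_log(sample)
    if ev and ev['event'] == 'compaction_finished':
        print('job', ev['job'], 'wrote', ev['total_output_size'],
              'bytes in', ev['compaction_time_micros'] / 1e6, 's')

For job 136 that works out to roughly 10.4 MB written in 0.093 s, consistent with the 111.9 MB/s write rate in the compaction summary line.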
Feb 25 14:03:30 compute-0 nova_compute[244014]: 2026-02-25 14:03:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:31 compute-0 nova_compute[244014]: 2026-02-25 14:03:31.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:03:31
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.log']
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:03:31 compute-0 ceph-mon[76335]: pgmap v4308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:03:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:03:32 compute-0 nova_compute[244014]: 2026-02-25 14:03:32.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:03:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:03:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220057394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:03:33 compute-0 nova_compute[244014]: 2026-02-25 14:03:33.524 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
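The 0.605 s `ceph df` round trip above is how the resource tracker learns the RBD-backed disk numbers that appear in the next few lines (59.98... GB free). A minimal sketch of the same probe, reusing the exact command from the log and reading the cluster totals out of the JSON; which field the libvirt RBD driver ultimately consumes is internal to Nova, so the sketch just prints the obvious candidates:

    import json
    import subprocess

    # Identical command to the one processutils logs above.
    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    df = json.loads(subprocess.check_output(cmd))

    stats = df['stats']
    print('cluster: total=%(total_bytes)d used=%(total_used_bytes)d '
          'avail=%(total_avail_bytes)d' % stats)
    # Per-pool view; 'vms' is the Nova pool named elsewhere in this log.
    for pool in df['pools']:
        if pool['name'] == 'vms':
            print('vms max_avail:', pool['stats']['max_avail'])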
Feb 25 14:03:33 compute-0 nova_compute[244014]: 2026-02-25 14:03:33.712 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:03:33 compute-0 nova_compute[244014]: 2026-02-25 14:03:33.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:03:33 compute-0 nova_compute[244014]: 2026-02-25 14:03:33.714 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:03:33 compute-0 nova_compute[244014]: 2026-02-25 14:03:33.714 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:03:33 compute-0 ceph-mon[76335]: pgmap v4309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4220057394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:03:34 compute-0 nova_compute[244014]: 2026-02-25 14:03:34.419 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:03:34 compute-0 nova_compute[244014]: 2026-02-25 14:03:34.420 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:03:34 compute-0 nova_compute[244014]: 2026-02-25 14:03:34.746 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:03:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:03:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329039876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:03:35 compute-0 nova_compute[244014]: 2026-02-25 14:03:35.302 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:03:35 compute-0 nova_compute[244014]: 2026-02-25 14:03:35.308 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:03:35 compute-0 nova_compute[244014]: 2026-02-25 14:03:35.326 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
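The inventory dictionary above pins down what Placement will actually schedule against: for each resource class the usable capacity is (total - reserved) * allocation_ratio. A worked check with the values from the log entry:

    # Values copied from the inventory data logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, '->', (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU -> 32.0, MEMORY_MB -> 7167.0, DISK_GB -> 52.2

So this otherwise idle host advertises 32 schedulable vCPUs against 8 physical, while disk is deliberately undercommitted at a 0.9 ratio.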
Feb 25 14:03:35 compute-0 nova_compute[244014]: 2026-02-25 14:03:35.328 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:03:35 compute-0 nova_compute[244014]: 2026-02-25 14:03:35.328 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:03:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:35 compute-0 ceph-mon[76335]: pgmap v4310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2329039876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:03:36 compute-0 nova_compute[244014]: 2026-02-25 14:03:36.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:37 compute-0 nova_compute[244014]: 2026-02-25 14:03:37.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:37 compute-0 podman[428463]: 2026-02-25 14:03:37.716052311 +0000 UTC m=+0.059206021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 14:03:37 compute-0 podman[428464]: 2026-02-25 14:03:37.8008753 +0000 UTC m=+0.142051964 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
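The two container health_status events above are podman's healthcheck timer executing the configured test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/...) inside each agent container. The same check can be run and inspected by hand; a sketch using the container name from the log:

    # Trigger a container's configured healthcheck and read back the status.
    # On older podman the inspect field may be .State.Healthcheck rather
    # than .State.Health.
    import json
    import subprocess

    name = "ovn_controller"
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["Status"])  # "healthy", as in health_status= above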
Feb 25 14:03:37 compute-0 ceph-mon[76335]: pgmap v4311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:40 compute-0 ceph-mon[76335]: pgmap v4312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.328 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.328 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.329 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.343 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:03:41 compute-0 nova_compute[244014]: 2026-02-25 14:03:41.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
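The "Running periodic task ComputeManager._*" lines come from oslo.service's periodic-task machinery, which nova's ComputeManager builds on. A self-contained sketch of the pattern (class name and spacing are illustrative, not nova's source):

    # Sketch of the oslo.service periodic-task pattern behind the
    # "Running periodic task ..." DEBUG lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh per-instance network info caches (no-op here).
            pass

    # A service loop invokes this on an interval; each task is logged as it runs.
    Manager().run_periodic_tasks(None)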
Feb 25 14:03:42 compute-0 ceph-mon[76335]: pgmap v4313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:42 compute-0 nova_compute[244014]: 2026-02-25 14:03:42.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:43 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
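Each pg_autoscaler pass above computes, per pool, pg target = capacity ratio x bias x an overall PG budget; the logged numbers reproduce exactly if the budget is 300, i.e. 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference from the values, not a quote of the autoscaler source). The result is then quantized to a power of two, and pg_num is left alone unless the change is significant, hence "quantized to 32 (current 32)". A worked check:

    # Reproduce the pg_autoscaler "pg target" values from the log.
    # Assumed reading: target = capacity_ratio * bias * budget, budget = 300
    # (3 OSDs * default mon_target_pg_per_osd of 100).
    BUDGET = 300
    pools = {
        # name: (capacity ratio, bias, logged pg target)
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "images":             (0.0006714637386478266, 1.0, 0.20143912159434796),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0, 0.0016699640237160273),
    }
    for name, (ratio, bias, logged) in pools.items():
        target = ratio * bias * BUDGET
        assert abs(target - logged) < 1e-12, name
        print(f"{name}: pg target {target:.12g}")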
Feb 25 14:03:44 compute-0 ceph-mon[76335]: pgmap v4314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:45 compute-0 nova_compute[244014]: 2026-02-25 14:03:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:46 compute-0 ceph-mon[76335]: pgmap v4315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:46 compute-0 nova_compute[244014]: 2026-02-25 14:03:46.094 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:46 compute-0 sudo[428505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:03:46 compute-0 sudo[428505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:46 compute-0 sudo[428505]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:46 compute-0 sudo[428530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:03:46 compute-0 sudo[428530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:46 compute-0 nova_compute[244014]: 2026-02-25 14:03:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:46 compute-0 nova_compute[244014]: 2026-02-25 14:03:46.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:03:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:47 compute-0 sudo[428530]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:03:47 compute-0 nova_compute[244014]: 2026-02-25 14:03:47.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:47 compute-0 sudo[428585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:03:47 compute-0 sudo[428585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:47 compute-0 sudo[428585]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:47 compute-0 sudo[428610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:03:47 compute-0 sudo[428610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:03:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:03:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
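The client.openstack entries are librados clients (the RBD drivers behind Cinder/Nova) polling pool capacity with "df" and "osd pool get-quota" mon commands. The same calls can be issued from the python-rados binding; the conf path and client name below are assumptions for illustration:

    # Issue the mon commands seen in the audit log via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
        ):
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            assert ret == 0, outs
            print(cmd["prefix"], "->", len(outbuf), "bytes of JSON")
    finally:
        cluster.shutdown()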
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.617720535 +0000 UTC m=+0.062560957 container create 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 14:03:47 compute-0 systemd[1]: Started libpod-conmon-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope.
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.590184013 +0000 UTC m=+0.035024445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:47 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.719317519 +0000 UTC m=+0.164157941 container init 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.730322061 +0000 UTC m=+0.175162453 container start 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.73414029 +0000 UTC m=+0.178980772 container attach 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:03:47 compute-0 eloquent_spence[428666]: 167 167
Feb 25 14:03:47 compute-0 systemd[1]: libpod-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope: Deactivated successfully.
Feb 25 14:03:47 compute-0 conmon[428666]: conmon 06208ddcf37e0b13187d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope/container/memory.events
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.74294374 +0000 UTC m=+0.187784162 container died 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-84b8c6019347b653f6c3cfa0ab5844572f9408f6e26dc43ca3af642b21103121-merged.mount: Deactivated successfully.
Feb 25 14:03:47 compute-0 podman[428649]: 2026-02-25 14:03:47.7936747 +0000 UTC m=+0.238515082 container remove 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:03:47 compute-0 systemd[1]: libpod-conmon-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope: Deactivated successfully.
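The create/init/start/attach/died/remove sequence for eloquent_spence is cephadm running a throwaway probe container; its only output, "167 167", matches the ceph uid/gid baked into the image. The whole lifecycle is what a single podman run --rm performs; a sketch with the image digest from the log and an assumed probe command:

    # Equivalent of the short-lived helper container above: run, capture
    # output, auto-remove. The stat probe is an assumption for illustration.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # "167 167", as logged by eloquent_spence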
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.008784717 +0000 UTC m=+0.070032519 container create e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:03:48 compute-0 ceph-mon[76335]: pgmap v4316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:03:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:03:48 compute-0 systemd[1]: Started libpod-conmon-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope.
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:47.980415701 +0000 UTC m=+0.041663573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.118836211 +0000 UTC m=+0.180084073 container init e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.135370021 +0000 UTC m=+0.196617803 container start e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.139880849 +0000 UTC m=+0.201128701 container attach e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:03:48 compute-0 flamboyant_booth[428704]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:03:48 compute-0 flamboyant_booth[428704]: --> All data devices are unavailable
Feb 25 14:03:48 compute-0 systemd[1]: libpod-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope: Deactivated successfully.
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.595469213 +0000 UTC m=+0.656717025 container died e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 14:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4-merged.mount: Deactivated successfully.
Feb 25 14:03:48 compute-0 podman[428688]: 2026-02-25 14:03:48.649166247 +0000 UTC m=+0.710414059 container remove e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 14:03:48 compute-0 systemd[1]: libpod-conmon-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope: Deactivated successfully.
Feb 25 14:03:48 compute-0 sudo[428610]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:48 compute-0 sudo[428734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:03:48 compute-0 sudo[428734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:48 compute-0 sudo[428734]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:48 compute-0 sudo[428759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:03:48 compute-0 sudo[428759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.129957717 +0000 UTC m=+0.051999498 container create 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:03:49 compute-0 systemd[1]: Started libpod-conmon-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope.
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.109862766 +0000 UTC m=+0.031904637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.221043613 +0000 UTC m=+0.143085414 container init 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.232283862 +0000 UTC m=+0.154325643 container start 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.235315218 +0000 UTC m=+0.157357389 container attach 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 14:03:49 compute-0 serene_dirac[428813]: 167 167
Feb 25 14:03:49 compute-0 systemd[1]: libpod-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope: Deactivated successfully.
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.238333233 +0000 UTC m=+0.160375004 container died 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7824f98200bb6cf76888445bc71c03a8bcec646e1c7b6b571520a3ec41cf9fd-merged.mount: Deactivated successfully.
Feb 25 14:03:49 compute-0 podman[428797]: 2026-02-25 14:03:49.275749296 +0000 UTC m=+0.197791077 container remove 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 14:03:49 compute-0 systemd[1]: libpod-conmon-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope: Deactivated successfully.
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.445280759 +0000 UTC m=+0.052417410 container create b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:03:49 compute-0 systemd[1]: Started libpod-conmon-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope.
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.424715895 +0000 UTC m=+0.031852536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:49 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.549623331 +0000 UTC m=+0.156759992 container init b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.560018216 +0000 UTC m=+0.167154877 container start b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.564222535 +0000 UTC m=+0.171359246 container attach b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]: {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     "0": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "devices": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "/dev/loop3"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             ],
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_name": "ceph_lv0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_size": "21470642176",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "name": "ceph_lv0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "tags": {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_name": "ceph",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.crush_device_class": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.encrypted": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.objectstore": "bluestore",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_id": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.vdo": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.with_tpm": "0"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             },
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "vg_name": "ceph_vg0"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         }
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     ],
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     "1": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "devices": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "/dev/loop4"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             ],
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_name": "ceph_lv1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_size": "21470642176",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "name": "ceph_lv1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "tags": {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_name": "ceph",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.crush_device_class": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.encrypted": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.objectstore": "bluestore",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_id": "1",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.vdo": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.with_tpm": "0"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             },
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "vg_name": "ceph_vg1"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         }
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     ],
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     "2": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "devices": [
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "/dev/loop5"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             ],
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_name": "ceph_lv2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_size": "21470642176",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "name": "ceph_lv2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "tags": {
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.cluster_name": "ceph",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.crush_device_class": "",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.encrypted": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.objectstore": "bluestore",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osd_id": "2",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.vdo": "0",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:                 "ceph.with_tpm": "0"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             },
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "type": "block",
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:             "vg_name": "ceph_vg2"
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:         }
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]:     ]
Feb 25 14:03:49 compute-0 mystifying_tharp[428853]: }
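This lvm list --format json output also explains the earlier lvm batch result "All data devices are unavailable": all three LVs already carry prepared bluestore OSDs (osd_id 0-2, same cluster fsid), so batch has nothing left to consume. Each lv_size of 21470642176 bytes is ~20 GiB, and three of them account for the 60 GiB total the pgmap lines report. A sketch that pulls those facts straight from the JSON:

    # Parse `ceph-volume lvm list --format json` (the JSON emitted above)
    # and show which LVs already belong to an OSD.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']}, {size_gib:.0f} GiB)")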
Feb 25 14:03:49 compute-0 systemd[1]: libpod-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope: Deactivated successfully.
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.872803816 +0000 UTC m=+0.479940457 container died b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5-merged.mount: Deactivated successfully.
Feb 25 14:03:49 compute-0 podman[428836]: 2026-02-25 14:03:49.918627497 +0000 UTC m=+0.525764128 container remove b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 14:03:49 compute-0 systemd[1]: libpod-conmon-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope: Deactivated successfully.
Feb 25 14:03:49 compute-0 sudo[428759]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:50 compute-0 sudo[428873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:03:50 compute-0 sudo[428873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:50 compute-0 ceph-mon[76335]: pgmap v4317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:50 compute-0 sudo[428873]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:50 compute-0 sudo[428898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:03:50 compute-0 sudo[428898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.495164133 +0000 UTC m=+0.065187010 container create 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 14:03:50 compute-0 systemd[1]: Started libpod-conmon-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope.
Feb 25 14:03:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.46934003 +0000 UTC m=+0.039362967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.575896395 +0000 UTC m=+0.145919332 container init 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.584103728 +0000 UTC m=+0.154126605 container start 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:03:50 compute-0 boring_tharp[428951]: 167 167
Feb 25 14:03:50 compute-0 systemd[1]: libpod-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope: Deactivated successfully.
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.589298816 +0000 UTC m=+0.159321663 container attach 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.590266823 +0000 UTC m=+0.160289700 container died 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6afb1cf93247c7e8f2fd4c76b9930dfa31f664a6883d9d688d94e6f3d42162f3-merged.mount: Deactivated successfully.
Feb 25 14:03:50 compute-0 podman[428935]: 2026-02-25 14:03:50.632571464 +0000 UTC m=+0.202594311 container remove 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:03:50 compute-0 systemd[1]: libpod-conmon-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope: Deactivated successfully.
Feb 25 14:03:50 compute-0 podman[428973]: 2026-02-25 14:03:50.828753974 +0000 UTC m=+0.053306084 container create 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 14:03:50 compute-0 systemd[1]: Started libpod-conmon-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope.
Feb 25 14:03:50 compute-0 nova_compute[244014]: 2026-02-25 14:03:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:50 compute-0 podman[428973]: 2026-02-25 14:03:50.805442032 +0000 UTC m=+0.029994112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:03:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:03:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:50 compute-0 podman[428973]: 2026-02-25 14:03:50.940793325 +0000 UTC m=+0.165345365 container init 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 14:03:50 compute-0 podman[428973]: 2026-02-25 14:03:50.945801727 +0000 UTC m=+0.170353737 container start 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 14:03:50 compute-0 podman[428973]: 2026-02-25 14:03:50.949346378 +0000 UTC m=+0.173898388 container attach 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 14:03:51 compute-0 nova_compute[244014]: 2026-02-25 14:03:51.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:51 compute-0 lvm[429067]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:03:51 compute-0 lvm[429068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:03:51 compute-0 lvm[429067]: VG ceph_vg1 finished
Feb 25 14:03:51 compute-0 lvm[429068]: VG ceph_vg0 finished
Feb 25 14:03:51 compute-0 lvm[429070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:03:51 compute-0 lvm[429070]: VG ceph_vg2 finished
Feb 25 14:03:51 compute-0 dazzling_galois[428989]: {}
Feb 25 14:03:51 compute-0 systemd[1]: libpod-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Deactivated successfully.
Feb 25 14:03:51 compute-0 podman[428973]: 2026-02-25 14:03:51.825498411 +0000 UTC m=+1.050050441 container died 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 14:03:51 compute-0 systemd[1]: libpod-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Consumed 1.258s CPU time.
Feb 25 14:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f-merged.mount: Deactivated successfully.
Feb 25 14:03:51 compute-0 podman[428973]: 2026-02-25 14:03:51.881042018 +0000 UTC m=+1.105594018 container remove 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 14:03:51 compute-0 systemd[1]: libpod-conmon-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Deactivated successfully.
Feb 25 14:03:51 compute-0 sudo[428898]: pam_unix(sudo:session): session closed for user root
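
For contrast with the lvm listing earlier, the `ceph-volume ... raw list --format json` probe that dazzling_galois ran printed an empty object (`{}`): every OSD on this host is LVM-backed, so an empty raw-mode inventory is expected rather than an error. A tiny sketch of how a caller might read that reply; treating an empty dict as "no raw-mode OSDs" is an assumption of this sketch, not something the log states:

# Sketch: interpret the "{}" that the raw list container printed above.
import json

stdout = "{}"                  # captured container output from the log
raw_osds = json.loads(stdout)
if not raw_osds:
    print("no raw-mode OSDs on this host; see the lvm list output earlier")
else:
    for osd_fsid, info in raw_osds.items():
        print(osd_fsid, info)
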
Feb 25 14:03:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:03:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:03:51 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:52 compute-0 sudo[429087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:03:52 compute-0 sudo[429087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:03:52 compute-0 sudo[429087]: pam_unix(sudo:session): session closed for user root
Feb 25 14:03:52 compute-0 ceph-mon[76335]: pgmap v4318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:52 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:03:52 compute-0 nova_compute[244014]: 2026-02-25 14:03:52.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:53 compute-0 nova_compute[244014]: 2026-02-25 14:03:53.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:54 compute-0 ceph-mon[76335]: pgmap v4319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:03:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:03:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.445624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235445656, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 463, "num_deletes": 250, "total_data_size": 433774, "memory_usage": 442376, "flush_reason": "Manual Compaction"}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235450521, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 431269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89731, "largest_seqno": 90193, "table_properties": {"data_size": 428514, "index_size": 790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 5564, "raw_average_key_size": 16, "raw_value_size": 423138, "raw_average_value_size": 1237, "num_data_blocks": 35, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028211, "oldest_key_time": 1772028211, "file_creation_time": 1772028235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 4952 microseconds, and 1633 cpu microseconds.
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.450570) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 431269 bytes OK
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.450594) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452555) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452577) EVENT_LOG_v1 {"time_micros": 1772028235452570, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 430979, prev total WAL file size 430979, number of live WAL files 2.
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(421KB)], [218(10133KB)]
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235453039, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 10808300, "oldest_snapshot_seqno": -1}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 9874 keys, 9747816 bytes, temperature: kUnknown
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235514268, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 9747816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9689559, "index_size": 32480, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24709, "raw_key_size": 264013, "raw_average_key_size": 26, "raw_value_size": 9520220, "raw_average_value_size": 964, "num_data_blocks": 1224, "num_entries": 9874, "num_filter_entries": 9874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.514601) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 9747816 bytes
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.515921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 158.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(47.7) write-amplify(22.6) OK, records in: 10385, records dropped: 511 output_compression: NoCompression
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.515951) EVENT_LOG_v1 {"time_micros": 1772028235515936, "job": 138, "event": "compaction_finished", "compaction_time_micros": 61333, "compaction_time_cpu_micros": 32136, "output_level": 6, "num_output_files": 1, "total_output_size": 9747816, "num_input_records": 10385, "num_output_records": 9874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235516157, "job": 138, "event": "table_file_deletion", "file_number": 220}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235517658, "job": 138, "event": "table_file_deletion", "file_number": 218}
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:03:55 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
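
The amplification figures RocksDB printed for JOB 138 can be re-derived from the byte counts in the same lines. A worked check, using the usual RocksDB definitions (stated here as an assumption, not quoted from this log): write amplification is output bytes over the L0 input, and read-write amplification counts all bytes read plus all bytes written over the same L0 input.

# Rough check of the amplification figures RocksDB logged for JOB 138 above.
l0_input    = 431_269      # table #220, the freshly flushed L0 file
total_input = 10_808_300   # "input_data_size": L0 file #220 + L6 file #218
output      = 9_747_816    # new L6 table #221

write_amp      = output / l0_input                  # ~22.6, matches "write-amplify(22.6)"
read_write_amp = (total_input + output) / l0_input  # ~47.7, matches "read-write-amplify(47.7)"
print(f"{write_amp:.1f} {read_write_amp:.1f}")
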
Feb 25 14:03:55 compute-0 nova_compute[244014]: 2026-02-25 14:03:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:03:56 compute-0 ceph-mon[76335]: pgmap v4320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:56 compute-0 nova_compute[244014]: 2026-02-25 14:03:56.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:56 compute-0 sshd-session[429112]: Connection closed by authenticating user root 46.101.242.142 port 38118 [preauth]
Feb 25 14:03:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:57 compute-0 nova_compute[244014]: 2026-02-25 14:03:57.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:03:58 compute-0 ceph-mon[76335]: pgmap v4321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:03:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:00 compute-0 ceph-mon[76335]: pgmap v4322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:01 compute-0 nova_compute[244014]: 2026-02-25 14:04:01.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:01 compute-0 ceph-mon[76335]: pgmap v4323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:02 compute-0 nova_compute[244014]: 2026-02-25 14:04:02.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:03 compute-0 ceph-mon[76335]: pgmap v4324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:05 compute-0 ceph-mon[76335]: pgmap v4325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:06 compute-0 nova_compute[244014]: 2026-02-25 14:04:06.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:07 compute-0 nova_compute[244014]: 2026-02-25 14:04:07.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:08 compute-0 ceph-mon[76335]: pgmap v4326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:08 compute-0 podman[429114]: 2026-02-25 14:04:08.729675581 +0000 UTC m=+0.058829721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 14:04:08 compute-0 podman[429115]: 2026-02-25 14:04:08.757780619 +0000 UTC m=+0.088457862 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 14:04:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:10 compute-0 ceph-mon[76335]: pgmap v4327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:11 compute-0 nova_compute[244014]: 2026-02-25 14:04:11.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:12 compute-0 ceph-mon[76335]: pgmap v4328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:12 compute-0 nova_compute[244014]: 2026-02-25 14:04:12.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:14 compute-0 ceph-mon[76335]: pgmap v4329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:16 compute-0 ceph-mon[76335]: pgmap v4330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:16 compute-0 nova_compute[244014]: 2026-02-25 14:04:16.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:17 compute-0 nova_compute[244014]: 2026-02-25 14:04:17.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:18 compute-0 ceph-mon[76335]: pgmap v4331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:20 compute-0 ceph-mon[76335]: pgmap v4332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:20 compute-0 nova_compute[244014]: 2026-02-25 14:04:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:21 compute-0 nova_compute[244014]: 2026-02-25 14:04:21.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:22 compute-0 ceph-mon[76335]: pgmap v4333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:22 compute-0 nova_compute[244014]: 2026-02-25 14:04:22.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:24 compute-0 ceph-mon[76335]: pgmap v4334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:26 compute-0 ceph-mon[76335]: pgmap v4335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:26 compute-0 nova_compute[244014]: 2026-02-25 14:04:26.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:27 compute-0 nova_compute[244014]: 2026-02-25 14:04:27.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:28 compute-0 ceph-mon[76335]: pgmap v4336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:30 compute-0 ceph-mon[76335]: pgmap v4337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:31 compute-0 ceph-mon[76335]: pgmap v4338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:31 compute-0 nova_compute[244014]: 2026-02-25 14:04:31.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:04:31
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta']
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:04:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:04:32 compute-0 nova_compute[244014]: 2026-02-25 14:04:32.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:04:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:32 compute-0 nova_compute[244014]: 2026-02-25 14:04:32.970 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:34 compute-0 ceph-mon[76335]: pgmap v4339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:04:34 compute-0 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:04:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:04:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502407981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:04:35 compute-0 nova_compute[244014]: 2026-02-25 14:04:35.477 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:04:35 compute-0 nova_compute[244014]: 2026-02-25 14:04:35.673 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:04:35 compute-0 nova_compute[244014]: 2026-02-25 14:04:35.675 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:04:35 compute-0 nova_compute[244014]: 2026-02-25 14:04:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:04:35 compute-0 nova_compute[244014]: 2026-02-25 14:04:35.676 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:04:36 compute-0 ceph-mon[76335]: pgmap v4340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2502407981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:04:36 compute-0 nova_compute[244014]: 2026-02-25 14:04:36.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:36 compute-0 nova_compute[244014]: 2026-02-25 14:04:36.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:04:36 compute-0 nova_compute[244014]: 2026-02-25 14:04:36.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:04:36 compute-0 nova_compute[244014]: 2026-02-25 14:04:36.403 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:04:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:04:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349764953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.014 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:04:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2349764953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.021 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.039 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.041 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.041 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:04:37 compute-0 nova_compute[244014]: 2026-02-25 14:04:37.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:38 compute-0 ceph-mon[76335]: pgmap v4341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:39 compute-0 podman[429202]: 2026-02-25 14:04:39.733989927 +0000 UTC m=+0.074509486 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 14:04:39 compute-0 podman[429203]: 2026-02-25 14:04:39.826428072 +0000 UTC m=+0.159809858 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 14:04:40 compute-0 ceph-mon[76335]: pgmap v4342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:41 compute-0 nova_compute[244014]: 2026-02-25 14:04:41.043 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:41 compute-0 nova_compute[244014]: 2026-02-25 14:04:41.043 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:04:41 compute-0 nova_compute[244014]: 2026-02-25 14:04:41.044 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:04:41 compute-0 nova_compute[244014]: 2026-02-25 14:04:41.059 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:04:41 compute-0 nova_compute[244014]: 2026-02-25 14:04:41.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:42 compute-0 sshd-session[429247]: Connection closed by authenticating user root 46.101.242.142 port 56662 [preauth]
Feb 25 14:04:42 compute-0 ceph-mon[76335]: pgmap v4343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:42 compute-0 nova_compute[244014]: 2026-02-25 14:04:42.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:43 compute-0 nova_compute[244014]: 2026-02-25 14:04:43.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:04:44 compute-0 ceph-mon[76335]: pgmap v4344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:45 compute-0 nova_compute[244014]: 2026-02-25 14:04:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:45 compute-0 nova_compute[244014]: 2026-02-25 14:04:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:45 compute-0 nova_compute[244014]: 2026-02-25 14:04:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 14:04:45 compute-0 nova_compute[244014]: 2026-02-25 14:04:45.989 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 14:04:46 compute-0 ceph-mon[76335]: pgmap v4345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:46 compute-0 nova_compute[244014]: 2026-02-25 14:04:46.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:46 compute-0 nova_compute[244014]: 2026-02-25 14:04:46.989 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:46 compute-0 nova_compute[244014]: 2026-02-25 14:04:46.989 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:04:47 compute-0 nova_compute[244014]: 2026-02-25 14:04:47.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:04:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:04:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:04:48 compute-0 ceph-mon[76335]: pgmap v4346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:04:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:04:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:50 compute-0 ceph-mon[76335]: pgmap v4347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:50 compute-0 nova_compute[244014]: 2026-02-25 14:04:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:51 compute-0 nova_compute[244014]: 2026-02-25 14:04:51.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:52 compute-0 sudo[429249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:04:52 compute-0 sudo[429249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:52 compute-0 sudo[429249]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:52 compute-0 ceph-mon[76335]: pgmap v4348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:52 compute-0 sudo[429274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:04:52 compute-0 sudo[429274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:52 compute-0 nova_compute[244014]: 2026-02-25 14:04:52.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:52 compute-0 sudo[429274]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:04:52 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:04:52 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:04:52 compute-0 sudo[429329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:04:52 compute-0 sudo[429329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:52 compute-0 sudo[429329]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:52 compute-0 sudo[429354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:04:52 compute-0 sudo[429354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:04:53 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.145026662 +0000 UTC m=+0.060517959 container create 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:04:53 compute-0 systemd[1]: Started libpod-conmon-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope.
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.118375715 +0000 UTC m=+0.033867032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.26754979 +0000 UTC m=+0.183041157 container init 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.275600049 +0000 UTC m=+0.191091336 container start 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.279651014 +0000 UTC m=+0.195142311 container attach 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:04:53 compute-0 friendly_hawking[429408]: 167 167
Feb 25 14:04:53 compute-0 systemd[1]: libpod-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope: Deactivated successfully.
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.285477539 +0000 UTC m=+0.200968806 container died 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 14:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1531002e2ad4c3024c3481cbfaff746cada4cf59dc52f877d96a9426161f9f5a-merged.mount: Deactivated successfully.
Feb 25 14:04:53 compute-0 podman[429391]: 2026-02-25 14:04:53.329391216 +0000 UTC m=+0.244882473 container remove 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 14:04:53 compute-0 systemd[1]: libpod-conmon-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope: Deactivated successfully.
Feb 25 14:04:53 compute-0 podman[429430]: 2026-02-25 14:04:53.499835845 +0000 UTC m=+0.051050390 container create 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 14:04:53 compute-0 systemd[1]: Started libpod-conmon-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope.
Feb 25 14:04:53 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:53 compute-0 podman[429430]: 2026-02-25 14:04:53.478964573 +0000 UTC m=+0.030179038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:53 compute-0 podman[429430]: 2026-02-25 14:04:53.582536803 +0000 UTC m=+0.133751218 container init 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 14:04:53 compute-0 podman[429430]: 2026-02-25 14:04:53.593868235 +0000 UTC m=+0.145082640 container start 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 14:04:53 compute-0 podman[429430]: 2026-02-25 14:04:53.597960991 +0000 UTC m=+0.149175446 container attach 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:04:53 compute-0 nova_compute[244014]: 2026-02-25 14:04:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:54 compute-0 amazing_curran[429447]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:04:54 compute-0 amazing_curran[429447]: --> All data devices are unavailable
Feb 25 14:04:54 compute-0 systemd[1]: libpod-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope: Deactivated successfully.
Feb 25 14:04:54 compute-0 podman[429430]: 2026-02-25 14:04:54.044529739 +0000 UTC m=+0.595744174 container died 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 14:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7-merged.mount: Deactivated successfully.
Feb 25 14:04:54 compute-0 podman[429430]: 2026-02-25 14:04:54.09424396 +0000 UTC m=+0.645458405 container remove 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:04:54 compute-0 systemd[1]: libpod-conmon-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope: Deactivated successfully.
Feb 25 14:04:54 compute-0 ceph-mon[76335]: pgmap v4349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:54 compute-0 sudo[429354]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:54 compute-0 sudo[429480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:04:54 compute-0 sudo[429480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:54 compute-0 sudo[429480]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:54 compute-0 sudo[429505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:04:54 compute-0 sudo[429505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.57962209 +0000 UTC m=+0.058610465 container create 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:04:54 compute-0 systemd[1]: Started libpod-conmon-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope.
Feb 25 14:04:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.636634048 +0000 UTC m=+0.115622393 container init 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.645526811 +0000 UTC m=+0.124515156 container start 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.649246386 +0000 UTC m=+0.128234761 container attach 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.556404291 +0000 UTC m=+0.035392716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:54 compute-0 quirky_feistel[429558]: 167 167
Feb 25 14:04:54 compute-0 systemd[1]: libpod-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope: Deactivated successfully.
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.653199039 +0000 UTC m=+0.132187384 container died 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 14:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-382c237cd02fd1a1ac5dc4ace2f135105285b3079f7dbc9c88a78122f41bf12e-merged.mount: Deactivated successfully.
Feb 25 14:04:54 compute-0 podman[429542]: 2026-02-25 14:04:54.694029558 +0000 UTC m=+0.173017903 container remove 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 14:04:54 compute-0 systemd[1]: libpod-conmon-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope: Deactivated successfully.
Feb 25 14:04:54 compute-0 podman[429583]: 2026-02-25 14:04:54.82620531 +0000 UTC m=+0.039016928 container create 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:04:54 compute-0 systemd[1]: Started libpod-conmon-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope.
Feb 25 14:04:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:54 compute-0 podman[429583]: 2026-02-25 14:04:54.900883859 +0000 UTC m=+0.113695497 container init 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 14:04:54 compute-0 podman[429583]: 2026-02-25 14:04:54.809485256 +0000 UTC m=+0.022296904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:54 compute-0 podman[429583]: 2026-02-25 14:04:54.906774667 +0000 UTC m=+0.119586275 container start 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 14:04:54 compute-0 podman[429583]: 2026-02-25 14:04:54.909847744 +0000 UTC m=+0.122659392 container attach 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:04:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:04:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:04:55 compute-0 ceph-mon[76335]: pgmap v4350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:55 compute-0 trusting_merkle[429599]: {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     "0": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "devices": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "/dev/loop3"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             ],
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_name": "ceph_lv0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_size": "21470642176",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "name": "ceph_lv0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "tags": {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.crush_device_class": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.encrypted": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_id": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.vdo": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.with_tpm": "0"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             },
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "vg_name": "ceph_vg0"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         }
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     ],
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     "1": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "devices": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "/dev/loop4"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             ],
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_name": "ceph_lv1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_size": "21470642176",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "name": "ceph_lv1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "tags": {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.crush_device_class": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.encrypted": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_id": "1",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.vdo": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.with_tpm": "0"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             },
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "vg_name": "ceph_vg1"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         }
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     ],
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     "2": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "devices": [
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "/dev/loop5"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             ],
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_name": "ceph_lv2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_size": "21470642176",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "name": "ceph_lv2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "tags": {
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.cluster_name": "ceph",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.crush_device_class": "",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.encrypted": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.objectstore": "bluestore",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osd_id": "2",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.vdo": "0",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:                 "ceph.with_tpm": "0"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             },
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "type": "block",
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:             "vg_name": "ceph_vg2"
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:         }
Feb 25 14:04:55 compute-0 trusting_merkle[429599]:     ]
Feb 25 14:04:55 compute-0 trusting_merkle[429599]: }
Feb 25 14:04:55 compute-0 systemd[1]: libpod-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope: Deactivated successfully.
Feb 25 14:04:55 compute-0 podman[429583]: 2026-02-25 14:04:55.217888939 +0000 UTC m=+0.430700577 container died 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742-merged.mount: Deactivated successfully.
Feb 25 14:04:55 compute-0 podman[429583]: 2026-02-25 14:04:55.255589719 +0000 UTC m=+0.468401337 container remove 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 14:04:55 compute-0 systemd[1]: libpod-conmon-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope: Deactivated successfully.
Feb 25 14:04:55 compute-0 sudo[429505]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:55 compute-0 sudo[429618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:04:55 compute-0 sudo[429618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:55 compute-0 sudo[429618]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:55 compute-0 sudo[429643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:04:55 compute-0 sudo[429643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.686229875 +0000 UTC m=+0.045797421 container create bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:55 compute-0 systemd[1]: Started libpod-conmon-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope.
Feb 25 14:04:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.668014258 +0000 UTC m=+0.027581814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.772950927 +0000 UTC m=+0.132518543 container init bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.777783584 +0000 UTC m=+0.137351140 container start bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.782418126 +0000 UTC m=+0.141985682 container attach bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 14:04:55 compute-0 friendly_einstein[429694]: 167 167
Feb 25 14:04:55 compute-0 systemd[1]: libpod-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope: Deactivated successfully.
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.783881257 +0000 UTC m=+0.143448843 container died bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 25 14:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd7054ee16dd5dddbf611744ef1163e15793ff3fb6f412589f5694d62e69845-merged.mount: Deactivated successfully.
Feb 25 14:04:55 compute-0 podman[429678]: 2026-02-25 14:04:55.826509828 +0000 UTC m=+0.186077374 container remove bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 14:04:55 compute-0 systemd[1]: libpod-conmon-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope: Deactivated successfully.
Feb 25 14:04:55 compute-0 nova_compute[244014]: 2026-02-25 14:04:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:55 compute-0 podman[429720]: 2026-02-25 14:04:55.962861229 +0000 UTC m=+0.047630024 container create 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:04:56 compute-0 systemd[1]: Started libpod-conmon-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope.
Feb 25 14:04:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:55.946148784 +0000 UTC m=+0.030917619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:56.044097875 +0000 UTC m=+0.128866700 container init 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:56.049801017 +0000 UTC m=+0.134569822 container start 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:56.052875384 +0000 UTC m=+0.137644179 container attach 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:04:56 compute-0 nova_compute[244014]: 2026-02-25 14:04:56.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:56 compute-0 lvm[429816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:04:56 compute-0 lvm[429816]: VG ceph_vg1 finished
Feb 25 14:04:56 compute-0 lvm[429815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:04:56 compute-0 lvm[429815]: VG ceph_vg0 finished
Feb 25 14:04:56 compute-0 lvm[429818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:04:56 compute-0 lvm[429818]: VG ceph_vg2 finished
Feb 25 14:04:56 compute-0 lucid_wing[429736]: {}
Feb 25 14:04:56 compute-0 systemd[1]: libpod-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Deactivated successfully.
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:56.872430871 +0000 UTC m=+0.957199716 container died 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 14:04:56 compute-0 systemd[1]: libpod-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Consumed 1.105s CPU time.
Feb 25 14:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836-merged.mount: Deactivated successfully.
Feb 25 14:04:56 compute-0 podman[429720]: 2026-02-25 14:04:56.92522845 +0000 UTC m=+1.009997255 container remove 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 14:04:56 compute-0 systemd[1]: libpod-conmon-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Deactivated successfully.
Feb 25 14:04:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:56 compute-0 sudo[429643]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:04:56 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:04:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:57 compute-0 sudo[429833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:04:57 compute-0 sudo[429833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:04:57 compute-0 sudo[429833]: pam_unix(sudo:session): session closed for user root
Feb 25 14:04:57 compute-0 nova_compute[244014]: 2026-02-25 14:04:57.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:04:57 compute-0 nova_compute[244014]: 2026-02-25 14:04:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:04:57 compute-0 nova_compute[244014]: 2026-02-25 14:04:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 14:04:57 compute-0 ceph-mon[76335]: pgmap v4351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:04:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:57 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:04:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:00 compute-0 ceph-mon[76335]: pgmap v4352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:01 compute-0 nova_compute[244014]: 2026-02-25 14:05:01.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:02 compute-0 ceph-mon[76335]: pgmap v4353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:02 compute-0 nova_compute[244014]: 2026-02-25 14:05:02.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:02 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:04 compute-0 ceph-mon[76335]: pgmap v4354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:04 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:06 compute-0 ceph-mon[76335]: pgmap v4355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:06 compute-0 nova_compute[244014]: 2026-02-25 14:05:06.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:06 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:07 compute-0 nova_compute[244014]: 2026-02-25 14:05:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:08 compute-0 ceph-mon[76335]: pgmap v4356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:08 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:10 compute-0 ceph-mon[76335]: pgmap v4357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:10 compute-0 podman[429858]: 2026-02-25 14:05:10.78293632 +0000 UTC m=+0.108369408 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 14:05:10 compute-0 podman[429859]: 2026-02-25 14:05:10.863091636 +0000 UTC m=+0.188837783 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:05:10 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:11 compute-0 nova_compute[244014]: 2026-02-25 14:05:11.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:12 compute-0 ceph-mon[76335]: pgmap v4358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:12 compute-0 nova_compute[244014]: 2026-02-25 14:05:12.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:12 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:14 compute-0 ceph-mon[76335]: pgmap v4359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:14 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:16 compute-0 ceph-mon[76335]: pgmap v4360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:16 compute-0 nova_compute[244014]: 2026-02-25 14:05:16.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:16 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:17 compute-0 ceph-mon[76335]: pgmap v4361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:17 compute-0 nova_compute[244014]: 2026-02-25 14:05:17.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:18 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:20 compute-0 ceph-mon[76335]: pgmap v4362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:20 compute-0 nova_compute[244014]: 2026-02-25 14:05:20.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:20 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:21 compute-0 nova_compute[244014]: 2026-02-25 14:05:21.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:22 compute-0 ceph-mon[76335]: pgmap v4363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:22 compute-0 nova_compute[244014]: 2026-02-25 14:05:22.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:22 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:24 compute-0 ceph-mon[76335]: pgmap v4364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:24 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:26 compute-0 ceph-mon[76335]: pgmap v4365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:26 compute-0 nova_compute[244014]: 2026-02-25 14:05:26.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:26 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:27 compute-0 nova_compute[244014]: 2026-02-25 14:05:27.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:28 compute-0 ceph-mon[76335]: pgmap v4366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:28 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:30 compute-0 ceph-mon[76335]: pgmap v4367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:30 compute-0 sshd-session[429903]: Connection closed by authenticating user root 46.101.242.142 port 37268 [preauth]
Feb 25 14:05:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:30 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:05:31
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'backups']
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:05:31 compute-0 nova_compute[244014]: 2026-02-25 14:05:31.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:05:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:05:32 compute-0 ceph-mon[76335]: pgmap v4368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:32 compute-0 nova_compute[244014]: 2026-02-25 14:05:32.397 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:05:32 compute-0 nova_compute[244014]: 2026-02-25 14:05:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:32 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.101637) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333101721, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 989, "num_deletes": 251, "total_data_size": 1449757, "memory_usage": 1477856, "flush_reason": "Manual Compaction"}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333110834, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 1436383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90194, "largest_seqno": 91182, "table_properties": {"data_size": 1431474, "index_size": 2496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10437, "raw_average_key_size": 19, "raw_value_size": 1421698, "raw_average_value_size": 2672, "num_data_blocks": 112, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028236, "oldest_key_time": 1772028236, "file_creation_time": 1772028333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 9260 microseconds, and 4588 cpu microseconds.
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.110894) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 1436383 bytes OK
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.110920) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112750) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112770) EVENT_LOG_v1 {"time_micros": 1772028333112764, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 1445079, prev total WAL file size 1445079, number of live WAL files 2.
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.113270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(1402KB)], [221(9519KB)]
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333113303, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 11184199, "oldest_snapshot_seqno": -1}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 9892 keys, 9410383 bytes, temperature: kUnknown
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333163679, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 9410383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9352300, "index_size": 32242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24773, "raw_key_size": 265023, "raw_average_key_size": 26, "raw_value_size": 9182896, "raw_average_value_size": 928, "num_data_blocks": 1207, "num_entries": 9892, "num_filter_entries": 9892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.164012) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 9410383 bytes
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.165217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.4 rd, 186.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(14.3) write-amplify(6.6) OK, records in: 10406, records dropped: 514 output_compression: NoCompression
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.165239) EVENT_LOG_v1 {"time_micros": 1772028333165228, "job": 140, "event": "compaction_finished", "compaction_time_micros": 50508, "compaction_time_cpu_micros": 32401, "output_level": 6, "num_output_files": 1, "total_output_size": 9410383, "num_input_records": 10406, "num_output_records": 9892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333165543, "job": 140, "event": "table_file_deletion", "file_number": 223}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333167140, "job": 140, "event": "table_file_deletion", "file_number": 221}
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.113214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:33 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:05:34 compute-0 ceph-mon[76335]: pgmap v4369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:05:34 compute-0 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:05:34 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:05:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697179958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.524 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.697 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.699 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3518MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.763 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:05:35 compute-0 nova_compute[244014]: 2026-02-25 14:05:35.783 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:05:36 compute-0 ceph-mon[76335]: pgmap v4370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1697179958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:05:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:05:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016862414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.386 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.395 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.573 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.575 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:05:36 compute-0 nova_compute[244014]: 2026-02-25 14:05:36.576 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:05:36 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1016862414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:05:37 compute-0 ceph-mon[76335]: pgmap v4371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:37 compute-0 nova_compute[244014]: 2026-02-25 14:05:37.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:38 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:40 compute-0 ceph-mon[76335]: pgmap v4372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:40 compute-0 nova_compute[244014]: 2026-02-25 14:05:40.576 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:40 compute-0 nova_compute[244014]: 2026-02-25 14:05:40.577 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:05:40 compute-0 nova_compute[244014]: 2026-02-25 14:05:40.577 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:05:40 compute-0 nova_compute[244014]: 2026-02-25 14:05:40.598 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:05:40 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:41 compute-0 nova_compute[244014]: 2026-02-25 14:05:41.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:41 compute-0 podman[429949]: 2026-02-25 14:05:41.737575717 +0000 UTC m=+0.073264161 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 14:05:41 compute-0 podman[429950]: 2026-02-25 14:05:41.767080704 +0000 UTC m=+0.100448462 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 14:05:42 compute-0 ceph-mon[76335]: pgmap v4373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:42 compute-0 nova_compute[244014]: 2026-02-25 14:05:42.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:42 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:43 compute-0 nova_compute[244014]: 2026-02-25 14:05:43.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:05:44 compute-0 ceph-mon[76335]: pgmap v4374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:44 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:45 compute-0 nova_compute[244014]: 2026-02-25 14:05:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:46 compute-0 ceph-mon[76335]: pgmap v4375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:46 compute-0 nova_compute[244014]: 2026-02-25 14:05:46.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:46 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:47 compute-0 nova_compute[244014]: 2026-02-25 14:05:47.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:05:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:05:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:05:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:05:47 compute-0 nova_compute[244014]: 2026-02-25 14:05:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:47 compute-0 nova_compute[244014]: 2026-02-25 14:05:47.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:05:48 compute-0 ceph-mon[76335]: pgmap v4376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:05:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:05:48 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:50 compute-0 ceph-mon[76335]: pgmap v4377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:50 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:51 compute-0 nova_compute[244014]: 2026-02-25 14:05:51.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:52 compute-0 ceph-mon[76335]: pgmap v4378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:52 compute-0 nova_compute[244014]: 2026-02-25 14:05:52.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:52 compute-0 nova_compute[244014]: 2026-02-25 14:05:52.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:52 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:53 compute-0 ceph-mon[76335]: pgmap v4379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:53 compute-0 nova_compute[244014]: 2026-02-25 14:05:53.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:54 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:05:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:05:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:05:56 compute-0 ceph-mon[76335]: pgmap v4380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:56 compute-0 nova_compute[244014]: 2026-02-25 14:05:56.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:56 compute-0 nova_compute[244014]: 2026-02-25 14:05:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:05:56 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:57 compute-0 sudo[429994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:05:57 compute-0 sudo[429994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:57 compute-0 sudo[429994]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:57 compute-0 sudo[430019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 25 14:05:57 compute-0 sudo[430019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:57 compute-0 nova_compute[244014]: 2026-02-25 14:05:57.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:05:57 compute-0 sudo[430019]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:05:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:57 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:05:57 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:57 compute-0 sudo[430066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:05:57 compute-0 sudo[430066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:57 compute-0 sudo[430066]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:57 compute-0 sudo[430091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:05:57 compute-0 sudo[430091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: pgmap v4381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:58 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:58 compute-0 sudo[430091]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:05:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:05:58 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:05:58 compute-0 sudo[430149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:05:58 compute-0 sudo[430149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:58 compute-0 sudo[430149]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:58 compute-0 sudo[430174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:05:58 compute-0 sudo[430174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.667055355 +0000 UTC m=+0.052178303 container create ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:05:58 compute-0 systemd[1]: Started libpod-conmon-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope.
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.642219279 +0000 UTC m=+0.027342307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:05:58 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.755942488 +0000 UTC m=+0.141065466 container init ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.764197672 +0000 UTC m=+0.149320650 container start ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.767813315 +0000 UTC m=+0.152936283 container attach ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:05:58 compute-0 vibrant_liskov[430226]: 167 167
Feb 25 14:05:58 compute-0 systemd[1]: libpod-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope: Deactivated successfully.
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.771132219 +0000 UTC m=+0.156255177 container died ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff239f5552e38cbd93c47335107bfd1089bf311a94cc2de1a4b241177417a4c-merged.mount: Deactivated successfully.
Feb 25 14:05:58 compute-0 podman[430210]: 2026-02-25 14:05:58.813274546 +0000 UTC m=+0.198397504 container remove ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:05:58 compute-0 systemd[1]: libpod-conmon-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope: Deactivated successfully.
Feb 25 14:05:58 compute-0 podman[430251]: 2026-02-25 14:05:58.947648711 +0000 UTC m=+0.040010477 container create 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:05:58 compute-0 systemd[1]: Started libpod-conmon-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope.
Feb 25 14:05:58 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:58.929892826 +0000 UTC m=+0.022254613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:59.035818974 +0000 UTC m=+0.128180760 container init 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:59.046501607 +0000 UTC m=+0.138863373 container start 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:59.04942454 +0000 UTC m=+0.141786306 container attach 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:05:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:05:59 compute-0 youthful_golick[430267]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:05:59 compute-0 youthful_golick[430267]: --> All data devices are unavailable
Feb 25 14:05:59 compute-0 systemd[1]: libpod-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope: Deactivated successfully.
Feb 25 14:05:59 compute-0 conmon[430267]: conmon 3b26f35051be32820c64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope/container/memory.events
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:59.492778866 +0000 UTC m=+0.585140652 container died 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147-merged.mount: Deactivated successfully.
Feb 25 14:05:59 compute-0 podman[430251]: 2026-02-25 14:05:59.549368832 +0000 UTC m=+0.641730628 container remove 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 14:05:59 compute-0 systemd[1]: libpod-conmon-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope: Deactivated successfully.
Feb 25 14:05:59 compute-0 sudo[430174]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:59 compute-0 sudo[430300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:05:59 compute-0 sudo[430300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:05:59 compute-0 sudo[430300]: pam_unix(sudo:session): session closed for user root
Feb 25 14:05:59 compute-0 sudo[430325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:05:59 compute-0 sudo[430325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.027686812 +0000 UTC m=+0.051788962 container create 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 14:06:00 compute-0 systemd[1]: Started libpod-conmon-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope.
Feb 25 14:06:00 compute-0 ceph-mon[76335]: pgmap v4382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.101206569 +0000 UTC m=+0.125308709 container init 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.010485993 +0000 UTC m=+0.034588133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.107430615 +0000 UTC m=+0.131532775 container start 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.111266574 +0000 UTC m=+0.135368724 container attach 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:06:00 compute-0 magical_khorana[430378]: 167 167
Feb 25 14:06:00 compute-0 systemd[1]: libpod-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope: Deactivated successfully.
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.114006722 +0000 UTC m=+0.138108852 container died 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 14:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab39b5e1e7d37acb0759d3172b229497cbf60dbf2b68d2dc56e3a637d74cf424-merged.mount: Deactivated successfully.
Feb 25 14:06:00 compute-0 podman[430362]: 2026-02-25 14:06:00.157252 +0000 UTC m=+0.181354120 container remove 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 14:06:00 compute-0 systemd[1]: libpod-conmon-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope: Deactivated successfully.
Feb 25 14:06:00 compute-0 podman[430401]: 2026-02-25 14:06:00.287463917 +0000 UTC m=+0.036357514 container create e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 14:06:00 compute-0 systemd[1]: Started libpod-conmon-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope.
Feb 25 14:06:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:00 compute-0 podman[430401]: 2026-02-25 14:06:00.270913387 +0000 UTC m=+0.019807024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:06:00 compute-0 podman[430401]: 2026-02-25 14:06:00.371830812 +0000 UTC m=+0.120724409 container init e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 14:06:00 compute-0 podman[430401]: 2026-02-25 14:06:00.376635638 +0000 UTC m=+0.125529245 container start e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:06:00 compute-0 podman[430401]: 2026-02-25 14:06:00.384012028 +0000 UTC m=+0.132905645 container attach e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 14:06:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]: {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     "0": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "devices": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "/dev/loop3"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             ],
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_name": "ceph_lv0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_size": "21470642176",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "name": "ceph_lv0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "tags": {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_name": "ceph",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.crush_device_class": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.encrypted": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.objectstore": "bluestore",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_id": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.vdo": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.with_tpm": "0"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             },
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "vg_name": "ceph_vg0"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         }
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     ],
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     "1": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "devices": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "/dev/loop4"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             ],
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_name": "ceph_lv1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_size": "21470642176",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "name": "ceph_lv1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "tags": {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_name": "ceph",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.crush_device_class": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.encrypted": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.objectstore": "bluestore",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_id": "1",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.vdo": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.with_tpm": "0"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             },
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "vg_name": "ceph_vg1"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         }
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     ],
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     "2": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "devices": [
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "/dev/loop5"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             ],
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_name": "ceph_lv2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_size": "21470642176",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "name": "ceph_lv2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "tags": {
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.cluster_name": "ceph",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.crush_device_class": "",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.encrypted": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.objectstore": "bluestore",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osd_id": "2",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.vdo": "0",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:                 "ceph.with_tpm": "0"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             },
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "type": "block",
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:             "vg_name": "ceph_vg2"
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:         }
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]:     ]
Feb 25 14:06:00 compute-0 blissful_bhabha[430418]: }
Feb 25 14:06:00 compute-0 systemd[1]: libpod-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope: Deactivated successfully.
Feb 25 14:06:00 compute-0 podman[430427]: 2026-02-25 14:06:00.748950358 +0000 UTC m=+0.021303286 container died e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 14:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb-merged.mount: Deactivated successfully.
Feb 25 14:06:00 compute-0 podman[430427]: 2026-02-25 14:06:00.791545247 +0000 UTC m=+0.063898165 container remove e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:06:00 compute-0 systemd[1]: libpod-conmon-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope: Deactivated successfully.
Feb 25 14:06:00 compute-0 sudo[430325]: pam_unix(sudo:session): session closed for user root
Feb 25 14:06:00 compute-0 sudo[430442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:06:00 compute-0 sudo[430442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:06:00 compute-0 sudo[430442]: pam_unix(sudo:session): session closed for user root
Feb 25 14:06:00 compute-0 sudo[430467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:06:00 compute-0 sudo[430467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:06:00 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.268596311 +0000 UTC m=+0.056670110 container create f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:06:01 compute-0 systemd[1]: Started libpod-conmon-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope.
Feb 25 14:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.245115164 +0000 UTC m=+0.033189013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.354948942 +0000 UTC m=+0.143022801 container init f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.365412259 +0000 UTC m=+0.153486068 container start f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.370652288 +0000 UTC m=+0.158726107 container attach f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 14:06:01 compute-0 serene_newton[430520]: 167 167
Feb 25 14:06:01 compute-0 systemd[1]: libpod-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope: Deactivated successfully.
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.372318745 +0000 UTC m=+0.160392544 container died f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 14:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-745609be49af38f1397ff652c7323d563dedc1d2325f528db5b5451f374e38ec-merged.mount: Deactivated successfully.
Feb 25 14:06:01 compute-0 podman[430504]: 2026-02-25 14:06:01.417196059 +0000 UTC m=+0.205269858 container remove f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 14:06:01 compute-0 systemd[1]: libpod-conmon-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope: Deactivated successfully.
Feb 25 14:06:01 compute-0 nova_compute[244014]: 2026-02-25 14:06:01.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:01 compute-0 podman[430543]: 2026-02-25 14:06:01.619265076 +0000 UTC m=+0.052602354 container create 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 14:06:01 compute-0 systemd[1]: Started libpod-conmon-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope.
Feb 25 14:06:01 compute-0 podman[430543]: 2026-02-25 14:06:01.590398046 +0000 UTC m=+0.023735374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:06:01 compute-0 podman[430543]: 2026-02-25 14:06:01.732351566 +0000 UTC m=+0.165688854 container init 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 14:06:01 compute-0 podman[430543]: 2026-02-25 14:06:01.742358421 +0000 UTC m=+0.175695699 container start 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 14:06:01 compute-0 podman[430543]: 2026-02-25 14:06:01.745974243 +0000 UTC m=+0.179311531 container attach 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:06:02 compute-0 ceph-mon[76335]: pgmap v4383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:02 compute-0 nova_compute[244014]: 2026-02-25 14:06:02.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:02 compute-0 lvm[430636]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:06:02 compute-0 lvm[430636]: VG ceph_vg0 finished
Feb 25 14:06:02 compute-0 lvm[430639]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:06:02 compute-0 lvm[430639]: VG ceph_vg1 finished
Feb 25 14:06:02 compute-0 lvm[430641]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:06:02 compute-0 lvm[430641]: VG ceph_vg2 finished
Feb 25 14:06:02 compute-0 angry_ellis[430560]: {}
Feb 25 14:06:02 compute-0 systemd[1]: libpod-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Deactivated successfully.
Feb 25 14:06:02 compute-0 systemd[1]: libpod-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Consumed 1.393s CPU time.
Feb 25 14:06:02 compute-0 podman[430543]: 2026-02-25 14:06:02.690412626 +0000 UTC m=+1.123749884 container died 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 14:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f-merged.mount: Deactivated successfully.
Feb 25 14:06:02 compute-0 podman[430543]: 2026-02-25 14:06:02.802412675 +0000 UTC m=+1.235749913 container remove 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 14:06:02 compute-0 systemd[1]: libpod-conmon-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Deactivated successfully.
Feb 25 14:06:02 compute-0 sudo[430467]: pam_unix(sudo:session): session closed for user root
Feb 25 14:06:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:06:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:06:02 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:06:02 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:06:02 compute-0 sudo[430655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:06:02 compute-0 sudo[430655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:06:02 compute-0 sudo[430655]: pam_unix(sudo:session): session closed for user root
Feb 25 14:06:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:06:03 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:06:03 compute-0 ceph-mon[76335]: pgmap v4384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:06 compute-0 ceph-mon[76335]: pgmap v4385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:06 compute-0 nova_compute[244014]: 2026-02-25 14:06:06.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:07 compute-0 nova_compute[244014]: 2026-02-25 14:06:07.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:08 compute-0 ceph-mon[76335]: pgmap v4386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:08 compute-0 sshd-session[430680]: Received disconnect from 91.224.92.54 port 18812:11:  [preauth]
Feb 25 14:06:08 compute-0 sshd-session[430680]: Disconnected from authenticating user root 91.224.92.54 port 18812 [preauth]
Feb 25 14:06:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:10 compute-0 ceph-mon[76335]: pgmap v4387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:11 compute-0 nova_compute[244014]: 2026-02-25 14:06:11.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:12 compute-0 ceph-mon[76335]: pgmap v4388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:12 compute-0 nova_compute[244014]: 2026-02-25 14:06:12.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:12 compute-0 podman[430682]: 2026-02-25 14:06:12.737456597 +0000 UTC m=+0.070291397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 14:06:12 compute-0 podman[430683]: 2026-02-25 14:06:12.762805226 +0000 UTC m=+0.094848583 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 14:06:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:14 compute-0 ceph-mon[76335]: pgmap v4389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:16 compute-0 ceph-mon[76335]: pgmap v4390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:16 compute-0 nova_compute[244014]: 2026-02-25 14:06:16.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:17 compute-0 nova_compute[244014]: 2026-02-25 14:06:17.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:18 compute-0 ceph-mon[76335]: pgmap v4391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:18 compute-0 sshd-session[430727]: Connection closed by authenticating user root 46.101.242.142 port 46320 [preauth]
Feb 25 14:06:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:19 compute-0 ceph-mon[76335]: pgmap v4392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:21 compute-0 nova_compute[244014]: 2026-02-25 14:06:21.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:22 compute-0 ceph-mon[76335]: pgmap v4393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:22 compute-0 nova_compute[244014]: 2026-02-25 14:06:22.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:06:24 compute-0 ceph-mon[76335]: pgmap v4394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:06:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:06:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:26 compute-0 ceph-mon[76335]: pgmap v4395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:06:26 compute-0 nova_compute[244014]: 2026-02-25 14:06:26.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:27 compute-0 nova_compute[244014]: 2026-02-25 14:06:27.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:28 compute-0 ceph-mon[76335]: pgmap v4396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:30 compute-0 ceph-mon[76335]: pgmap v4397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:06:31
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.log']
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:06:31 compute-0 nova_compute[244014]: 2026-02-25 14:06:31.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:06:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:06:32 compute-0 ceph-mon[76335]: pgmap v4398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:32 compute-0 nova_compute[244014]: 2026-02-25 14:06:32.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:06:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:06:32 compute-0 nova_compute[244014]: 2026-02-25 14:06:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:34 compute-0 ceph-mon[76335]: pgmap v4399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.914 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:06:34 compute-0 nova_compute[244014]: 2026-02-25 14:06:34.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:06:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 14:06:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:06:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446813007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:06:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.480 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
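During the resource audit nova shells out to ceph df and parses the JSON to size the RBD backend, as the subprocess lines above show. The probe can be reproduced by hand with the same client id and conf path from the log (field names as emitted by current Ceph releases):

```python
# Reproduce the capacity probe nova runs above (stdlib only).
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
# cluster-wide byte counters; free disk is derived from these
print(stats["total_avail_bytes"], "/", stats["total_bytes"])
```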
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.622 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.624 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.624 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.625 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.698 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.719 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.739 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.740 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.755 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.776 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 14:06:35 compute-0 nova_compute[244014]: 2026-02-25 14:06:35.790 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:06:36 compute-0 ceph-mon[76335]: pgmap v4400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 14:06:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1446813007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:06:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:06:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857693278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.374 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.383 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.409 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
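Placement traffic is skipped when nothing changed: the report client compares the freshly generated inventory dict against its cached copy and only PUTs on a difference. A toy illustration of that decision (names hypothetical):

```python
# Toy version of the "Inventory has not changed" check above: sync to
# Placement only when the generated inventory differs from the cache.
def sync_if_changed(cached_inv, new_inv, put_to_placement):
    if new_inv == cached_inv:        # plain dict equality
        return False                 # -> "Inventory has not changed"
    put_to_placement(new_inv)        # otherwise update and re-cache
    return True

inv = {"VCPU": {"total": 8, "reserved": 0, "min_unit": 1, "max_unit": 8,
                "step_size": 1, "allocation_ratio": 4.0}}
assert sync_if_changed(inv, dict(inv), print) is False
```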
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.412 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.413 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:06:36 compute-0 nova_compute[244014]: 2026-02-25 14:06:36.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 14:06:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3857693278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:06:37 compute-0 nova_compute[244014]: 2026-02-25 14:06:37.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:38 compute-0 ceph-mon[76335]: pgmap v4401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 14:06:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:39 compute-0 ceph-mon[76335]: pgmap v4402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:41 compute-0 nova_compute[244014]: 2026-02-25 14:06:41.414 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:41 compute-0 nova_compute[244014]: 2026-02-25 14:06:41.415 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:06:41 compute-0 nova_compute[244014]: 2026-02-25 14:06:41.415 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:06:41 compute-0 nova_compute[244014]: 2026-02-25 14:06:41.433 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:06:41 compute-0 nova_compute[244014]: 2026-02-25 14:06:41.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:42 compute-0 ceph-mon[76335]: pgmap v4403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:42 compute-0 nova_compute[244014]: 2026-02-25 14:06:42.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:43 compute-0 podman[430773]: 2026-02-25 14:06:43.730308207 +0000 UTC m=+0.071968284 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 25 14:06:43 compute-0 podman[430774]: 2026-02-25 14:06:43.763998063 +0000 UTC m=+0.102545953 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
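Both health_status=healthy events above come from podman's scheduled healthcheck, which runs the /openstack/healthcheck script mounted into each container. The same check can be triggered on demand; podman healthcheck run exits 0 when the container is healthy:

```python
# Trigger the same container healthchecks podman logs above, on demand.
import subprocess

for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else "unhealthy")
```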
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:06:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
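Each pg_autoscaler line above is the same arithmetic: the pool's share of cluster space, times its bias, times the cluster PG budget. With 3 OSDs and the default mon_target_pg_per_osd=100 (both assumptions, but they reproduce the logged numbers) the budget is 300 PGs, so the 'images' pool gets 0.0006714637 x 1.0 x 300 ≈ 0.2014, exactly as logged. A simplified reconstruction:

```python
# Simplified reconstruction of the pg_autoscaler arithmetic above.
# Assumes 3 OSDs and mon_target_pg_per_osd=100 (a 300-PG budget); the
# real module also honors pool min/max pg_num and only changes pg_num
# when the recommendation is off by more than its 3x threshold, which
# is why pools stay at "quantized to 32 (current 32)" here.
def pg_target(usage_fraction, bias, osds=3, target_pg_per_osd=100):
    raw = usage_fraction * bias * osds * target_pg_per_osd
    q = 1                          # round down to a power of two
    while q * 2 <= max(raw, 1):    # (Ceph rounds to the nearest)
        q *= 2
    return raw, q

print(pg_target(0.0006714637386478266, 1.0))   # 'images': ~0.2014
print(pg_target(1.3916366864300228e-06, 4.0))  # cephfs meta: ~0.00167
```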
Feb 25 14:06:44 compute-0 ceph-mon[76335]: pgmap v4404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:45 compute-0 nova_compute[244014]: 2026-02-25 14:06:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:45 compute-0 nova_compute[244014]: 2026-02-25 14:06:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:46 compute-0 ceph-mon[76335]: pgmap v4405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:46 compute-0 nova_compute[244014]: 2026-02-25 14:06:46.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:47 compute-0 nova_compute[244014]: 2026-02-25 14:06:47.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:06:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:06:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:06:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:06:48 compute-0 ceph-mon[76335]: pgmap v4406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:06:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:06:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:49 compute-0 nova_compute[244014]: 2026-02-25 14:06:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:49 compute-0 nova_compute[244014]: 2026-02-25 14:06:49.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:06:50 compute-0 ceph-mon[76335]: pgmap v4407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:51 compute-0 nova_compute[244014]: 2026-02-25 14:06:51.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:52 compute-0 ceph-mon[76335]: pgmap v4408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:52 compute-0 nova_compute[244014]: 2026-02-25 14:06:52.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:52 compute-0 nova_compute[244014]: 2026-02-25 14:06:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:54 compute-0 ceph-mon[76335]: pgmap v4409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:06:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:06:55 compute-0 ceph-mon[76335]: pgmap v4410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:06:55 compute-0 nova_compute[244014]: 2026-02-25 14:06:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:56 compute-0 nova_compute[244014]: 2026-02-25 14:06:56.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:57 compute-0 nova_compute[244014]: 2026-02-25 14:06:57.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:06:57 compute-0 nova_compute[244014]: 2026-02-25 14:06:57.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:06:58 compute-0 ceph-mon[76335]: pgmap v4411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:06:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:00 compute-0 ceph-mon[76335]: pgmap v4412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:01 compute-0 nova_compute[244014]: 2026-02-25 14:07:01.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:02 compute-0 ceph-mon[76335]: pgmap v4413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:02 compute-0 nova_compute[244014]: 2026-02-25 14:07:02.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:03 compute-0 sudo[430817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:07:03 compute-0 sudo[430817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:03 compute-0 sudo[430817]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:03 compute-0 sudo[430842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:07:03 compute-0 sudo[430842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:03 compute-0 sudo[430842]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:07:03 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:07:03 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:07:03 compute-0 sudo[430898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:07:03 compute-0 sudo[430898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:03 compute-0 sudo[430898]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:03 compute-0 sudo[430923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:07:03 compute-0 sudo[430923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.077123491 +0000 UTC m=+0.051809152 container create 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 14:07:04 compute-0 ceph-mon[76335]: pgmap v4414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:07:04 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:07:04 compute-0 systemd[1]: Started libpod-conmon-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope.
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.057792842 +0000 UTC m=+0.032478523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.173042744 +0000 UTC m=+0.147728465 container init 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.180072494 +0000 UTC m=+0.154758175 container start 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.183878232 +0000 UTC m=+0.158563923 container attach 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:07:04 compute-0 relaxed_edison[430976]: 167 167
Feb 25 14:07:04 compute-0 systemd[1]: libpod-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope: Deactivated successfully.
Feb 25 14:07:04 compute-0 conmon[430976]: conmon 1f51e79367c877e50386 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope/container/memory.events
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.187038931 +0000 UTC m=+0.161724632 container died 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-54f401c572b62d0715e0a041ccc522cb4472f51a5a38ef5587feba1152431ce3-merged.mount: Deactivated successfully.
Feb 25 14:07:04 compute-0 podman[430960]: 2026-02-25 14:07:04.235767985 +0000 UTC m=+0.210453666 container remove 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 14:07:04 compute-0 systemd[1]: libpod-conmon-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope: Deactivated successfully.
Feb 25 14:07:04 compute-0 podman[430999]: 2026-02-25 14:07:04.416592338 +0000 UTC m=+0.058184352 container create 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 14:07:04 compute-0 systemd[1]: Started libpod-conmon-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope.
Feb 25 14:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:04 compute-0 podman[430999]: 2026-02-25 14:07:04.394136361 +0000 UTC m=+0.035728435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:04 compute-0 podman[430999]: 2026-02-25 14:07:04.520663583 +0000 UTC m=+0.162255587 container init 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:07:04 compute-0 podman[430999]: 2026-02-25 14:07:04.537070549 +0000 UTC m=+0.178662523 container start 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:07:04 compute-0 podman[430999]: 2026-02-25 14:07:04.541228317 +0000 UTC m=+0.182820291 container attach 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:07:05 compute-0 sharp_satoshi[431016]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:07:05 compute-0 sharp_satoshi[431016]: --> All data devices are unavailable
Feb 25 14:07:05 compute-0 systemd[1]: libpod-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope: Deactivated successfully.
Feb 25 14:07:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:05 compute-0 podman[431037]: 2026-02-25 14:07:05.093948028 +0000 UTC m=+0.040303855 container died 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 14:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a-merged.mount: Deactivated successfully.
Feb 25 14:07:05 compute-0 podman[431037]: 2026-02-25 14:07:05.142655221 +0000 UTC m=+0.089011038 container remove 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:07:05 compute-0 systemd[1]: libpod-conmon-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope: Deactivated successfully.
Feb 25 14:07:05 compute-0 sudo[430923]: pam_unix(sudo:session): session closed for user root
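The batch run above ended with "passed data devices: 0 physical, 3 LVM" and "All data devices are unavailable": ceph-volume examined the three LVs and declined all of them, which typically means they are already prepared as OSDs or otherwise claimed. One way to dig in, assuming cephadm is on PATH and reusing the fsid from the log, is to ask ceph-volume what it already knows about those devices:

```python
# Inspect why ceph-volume skipped the LVs: dump its view of existing
# LVM-backed OSDs and of the host's devices. The fsid is from the log.
import subprocess

fsid = "8ac33163-6221-5d58-9a39-8b6933fe7762"
for args in (["lvm", "list", "--format", "json"],
             ["inventory", "--format", "json"]):
    subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--"] + args,
        check=False)
```

The orchestrator itself runs the lvm list variant a moment later, in the sudo line just below.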
Feb 25 14:07:05 compute-0 sudo[431053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:07:05 compute-0 sudo[431053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:05 compute-0 sudo[431053]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:05 compute-0 sudo[431079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:07:05 compute-0 sudo[431079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.612908801 +0000 UTC m=+0.050411592 container create 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 14:07:05 compute-0 systemd[1]: Started libpod-conmon-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope.
Feb 25 14:07:05 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.59524251 +0000 UTC m=+0.032745321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.703876804 +0000 UTC m=+0.141379655 container init 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.71007557 +0000 UTC m=+0.147578391 container start 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.713173288 +0000 UTC m=+0.150676119 container attach 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:07:05 compute-0 lucid_bassi[431133]: 167 167
Feb 25 14:07:05 compute-0 systemd[1]: libpod-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope: Deactivated successfully.
Feb 25 14:07:05 compute-0 conmon[431133]: conmon 194699b71c0cc3b638e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope/container/memory.events
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.717953634 +0000 UTC m=+0.155456415 container died 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c22a6ed0143a36fb2734edc3146cfd2299715d9a2bd126e2a1434de9ff7034-merged.mount: Deactivated successfully.
Feb 25 14:07:05 compute-0 podman[431117]: 2026-02-25 14:07:05.751333411 +0000 UTC m=+0.188836202 container remove 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:07:05 compute-0 systemd[1]: libpod-conmon-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope: Deactivated successfully.
Feb 25 14:07:05 compute-0 podman[431156]: 2026-02-25 14:07:05.93901701 +0000 UTC m=+0.066218861 container create e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:07:05 compute-0 systemd[1]: Started libpod-conmon-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope.
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:05.912512137 +0000 UTC m=+0.039714038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:06.065974424 +0000 UTC m=+0.193176265 container init e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:06.073293722 +0000 UTC m=+0.200495533 container start e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:06.077221353 +0000 UTC m=+0.204423194 container attach e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:07:06 compute-0 ceph-mon[76335]: pgmap v4415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]: {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     "0": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "devices": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "/dev/loop3"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             ],
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_name": "ceph_lv0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_size": "21470642176",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "name": "ceph_lv0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "tags": {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_name": "ceph",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.crush_device_class": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.encrypted": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.objectstore": "bluestore",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_id": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.vdo": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.with_tpm": "0"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             },
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "vg_name": "ceph_vg0"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         }
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     ],
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     "1": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "devices": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "/dev/loop4"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             ],
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_name": "ceph_lv1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_size": "21470642176",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "name": "ceph_lv1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "tags": {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_name": "ceph",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.crush_device_class": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.encrypted": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.objectstore": "bluestore",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_id": "1",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.vdo": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.with_tpm": "0"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             },
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "vg_name": "ceph_vg1"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         }
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     ],
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     "2": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "devices": [
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "/dev/loop5"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             ],
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_name": "ceph_lv2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_size": "21470642176",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "name": "ceph_lv2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "tags": {
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.cluster_name": "ceph",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.crush_device_class": "",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.encrypted": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.objectstore": "bluestore",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osd_id": "2",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.vdo": "0",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:                 "ceph.with_tpm": "0"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             },
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "type": "block",
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:             "vg_name": "ceph_vg2"
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:         }
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]:     ]
Feb 25 14:07:06 compute-0 optimistic_johnson[431172]: }
Feb 25 14:07:06 compute-0 systemd[1]: libpod-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope: Deactivated successfully.
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:06.375865001 +0000 UTC m=+0.503066822 container died e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 14:07:06 compute-0 sshd-session[431033]: Connection closed by authenticating user root 46.101.242.142 port 59388 [preauth]
Feb 25 14:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf-merged.mount: Deactivated successfully.
Feb 25 14:07:06 compute-0 podman[431156]: 2026-02-25 14:07:06.43006398 +0000 UTC m=+0.557265831 container remove e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:07:06 compute-0 systemd[1]: libpod-conmon-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope: Deactivated successfully.
Feb 25 14:07:06 compute-0 sudo[431079]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:06 compute-0 sudo[431194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:07:06 compute-0 sudo[431194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:06 compute-0 sudo[431194]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:06 compute-0 sudo[431219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:07:06 compute-0 sudo[431219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:06 compute-0 nova_compute[244014]: 2026-02-25 14:07:06.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.900492615 +0000 UTC m=+0.037939627 container create 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 14:07:06 compute-0 systemd[1]: Started libpod-conmon-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope.
Feb 25 14:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.974379623 +0000 UTC m=+0.111826675 container init 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.885385087 +0000 UTC m=+0.022832109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.981785723 +0000 UTC m=+0.119232715 container start 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.985972502 +0000 UTC m=+0.123419514 container attach 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 14:07:06 compute-0 unruffled_volhard[431273]: 167 167
Feb 25 14:07:06 compute-0 systemd[1]: libpod-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope: Deactivated successfully.
Feb 25 14:07:06 compute-0 podman[431257]: 2026-02-25 14:07:06.987133085 +0000 UTC m=+0.124580097 container died 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 14:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-096cf9cc635943354750782f759b23e23449f3bb0463be5806f91d157b9b3b07-merged.mount: Deactivated successfully.
Feb 25 14:07:07 compute-0 podman[431257]: 2026-02-25 14:07:07.029127758 +0000 UTC m=+0.166574770 container remove 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:07:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:07 compute-0 systemd[1]: libpod-conmon-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope: Deactivated successfully.
Feb 25 14:07:07 compute-0 podman[431299]: 2026-02-25 14:07:07.208546121 +0000 UTC m=+0.067364323 container create 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:07:07 compute-0 systemd[1]: Started libpod-conmon-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope.
Feb 25 14:07:07 compute-0 podman[431299]: 2026-02-25 14:07:07.179772764 +0000 UTC m=+0.038590966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:07:07 compute-0 podman[431299]: 2026-02-25 14:07:07.313584612 +0000 UTC m=+0.172402894 container init 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:07:07 compute-0 podman[431299]: 2026-02-25 14:07:07.322982939 +0000 UTC m=+0.181801151 container start 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 14:07:07 compute-0 podman[431299]: 2026-02-25 14:07:07.3268898 +0000 UTC m=+0.185707972 container attach 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:07:07 compute-0 nova_compute[244014]: 2026-02-25 14:07:07.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:07 compute-0 lvm[431394]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:07:07 compute-0 lvm[431391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:07:07 compute-0 lvm[431394]: VG ceph_vg1 finished
Feb 25 14:07:07 compute-0 lvm[431391]: VG ceph_vg0 finished
Feb 25 14:07:07 compute-0 lvm[431396]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:07:07 compute-0 lvm[431396]: VG ceph_vg2 finished
Feb 25 14:07:08 compute-0 funny_ramanujan[431315]: {}
Feb 25 14:07:08 compute-0 systemd[1]: libpod-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Deactivated successfully.
Feb 25 14:07:08 compute-0 systemd[1]: libpod-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Consumed 1.105s CPU time.
Feb 25 14:07:08 compute-0 podman[431299]: 2026-02-25 14:07:08.072085206 +0000 UTC m=+0.930903418 container died 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373-merged.mount: Deactivated successfully.
Feb 25 14:07:08 compute-0 podman[431299]: 2026-02-25 14:07:08.125985016 +0000 UTC m=+0.984803228 container remove 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 14:07:08 compute-0 systemd[1]: libpod-conmon-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Deactivated successfully.
Feb 25 14:07:08 compute-0 ceph-mon[76335]: pgmap v4416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:08 compute-0 sudo[431219]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:07:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:07:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:08 compute-0 sudo[431413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:07:08 compute-0 sudo[431413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:07:08 compute-0 sudo[431413]: pam_unix(sudo:session): session closed for user root
Feb 25 14:07:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:07:09 compute-0 ceph-mon[76335]: pgmap v4417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:11 compute-0 nova_compute[244014]: 2026-02-25 14:07:11.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:12 compute-0 ceph-mon[76335]: pgmap v4418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:12 compute-0 nova_compute[244014]: 2026-02-25 14:07:12.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:14 compute-0 ceph-mon[76335]: pgmap v4419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:14 compute-0 podman[431438]: 2026-02-25 14:07:14.720619224 +0000 UTC m=+0.055579799 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 14:07:14 compute-0 podman[431439]: 2026-02-25 14:07:14.788642335 +0000 UTC m=+0.122970242 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 14:07:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:16 compute-0 ceph-mon[76335]: pgmap v4420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:16 compute-0 nova_compute[244014]: 2026-02-25 14:07:16.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:17 compute-0 nova_compute[244014]: 2026-02-25 14:07:17.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:18 compute-0 ceph-mon[76335]: pgmap v4421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:20 compute-0 ceph-mon[76335]: pgmap v4422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:21 compute-0 ceph-mon[76335]: pgmap v4423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:21 compute-0 nova_compute[244014]: 2026-02-25 14:07:21.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:22 compute-0 nova_compute[244014]: 2026-02-25 14:07:22.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:22 compute-0 nova_compute[244014]: 2026-02-25 14:07:22.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:24 compute-0 ceph-mon[76335]: pgmap v4424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:26 compute-0 ceph-mon[76335]: pgmap v4425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:26 compute-0 nova_compute[244014]: 2026-02-25 14:07:26.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:27 compute-0 nova_compute[244014]: 2026-02-25 14:07:27.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:28 compute-0 ceph-mon[76335]: pgmap v4426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:30 compute-0 ceph-mon[76335]: pgmap v4427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:07:31
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes']
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:07:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:07:31 compute-0 nova_compute[244014]: 2026-02-25 14:07:31.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:32 compute-0 ceph-mon[76335]: pgmap v4428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:32 compute-0 nova_compute[244014]: 2026-02-25 14:07:32.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:07:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:07:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:34 compute-0 ceph-mon[76335]: pgmap v4429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:34 compute-0 nova_compute[244014]: 2026-02-25 14:07:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
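
The Acquiring/acquired/released triple above is the standard oslo.concurrency pattern: the resource tracker wraps its critical sections in a named lock, and the library logs wait and hold times at DEBUG. A minimal sketch of the same pattern (the lock name is copied from the log; the function body is illustrative, the real method lives in nova.compute.resource_tracker.ResourceTracker):

    from oslo_concurrency import lockutils

    # synchronized() serializes callers on the named lock and, with
    # debug logging enabled, produces the "Acquiring lock ... /
    # Lock ... acquired / released" lines seen above, including the
    # waited/held timings.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # hypothetical body

    clean_compute_node_cache()
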
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:07:35 compute-0 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:07:36 compute-0 ceph-mon[76335]: pgmap v4430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:07:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575696368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.478 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
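
Before reporting resources, the tracker shells out to exactly the command logged above to size the RBD-backed disk pool. A standalone sketch of the same probe, using the flags from the log (requires the client.openstack keyring; the 'stats' keys are the documented ceph df JSON layout):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True,
                                   capture_output=True, text=True).stdout)
    stats = df["stats"]
    print("total:", stats["total_bytes"],
          "avail:", stats["total_avail_bytes"])
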
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.709 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3511MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.711 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.781 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.782 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.799 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:07:36 compute-0 nova_compute[244014]: 2026-02-25 14:07:36.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:37 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1575696368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:07:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:07:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253551020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.300 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.307 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.333 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
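
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back more than 8 guest vCPUs on this node. Checking the logged numbers:

    # Values copied from the inventory line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
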
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.337 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.338 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:07:37 compute-0 nova_compute[244014]: 2026-02-25 14:07:37.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:38 compute-0 ceph-mon[76335]: pgmap v4431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2253551020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:07:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:39 compute-0 ceph-mon[76335]: pgmap v4432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:41 compute-0 nova_compute[244014]: 2026-02-25 14:07:41.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:42 compute-0 ceph-mon[76335]: pgmap v4433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:42 compute-0 nova_compute[244014]: 2026-02-25 14:07:42.339 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:42 compute-0 nova_compute[244014]: 2026-02-25 14:07:42.340 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:07:42 compute-0 nova_compute[244014]: 2026-02-25 14:07:42.340 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:07:42 compute-0 nova_compute[244014]: 2026-02-25 14:07:42.353 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:07:42 compute-0 nova_compute[244014]: 2026-02-25 14:07:42.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:07:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
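
The raw pg target in each autoscaler line is reproducible from the logged inputs: pg_target = capacity_ratio * bias * N, where N = 300 is inferred from the ratios above (consistent with the default mon_target_pg_per_osd = 100 on a 3-OSD cluster, though N itself is not logged); the result is then quantized to a power of two and left at the current pg_num when the change is too small to act on. Verifying two of the lines:

    # capacity_ratio and bias are copied from the log; 300 is inferred.
    cases = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0),
    ]
    for pool, ratio, bias in cases:
        print(pool, ratio * bias * 300)
    # -> ~0.0021557249951162337 and ~0.0016699640237160273,
    #    matching the logged pg targets up to float rounding
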
Feb 25 14:07:44 compute-0 ceph-mon[76335]: pgmap v4434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:45 compute-0 podman[431529]: 2026-02-25 14:07:45.744805984 +0000 UTC m=+0.068870357 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 14:07:45 compute-0 podman[431530]: 2026-02-25 14:07:45.76861149 +0000 UTC m=+0.093463655 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
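
Both health_status=healthy events come from podman's healthcheck timer: each container's config_data mounts a healthcheck directory from the host and runs /openstack/healthcheck inside the container. A trivial sketch pulling those two fields out of a config_data dict shaped like the ones logged above (trimmed to the fields used here):

    config_data = {
        "healthcheck": {
            "mount": "/var/lib/openstack/healthchecks/ovn_controller",
            "test": "/openstack/healthcheck",
        },
        "image": "quay.io/podified-antelope-centos9/"
                 "openstack-ovn-controller:current-podified",
    }
    hc = config_data["healthcheck"]
    print(f"--health-cmd {hc['test']}  (host dir: {hc['mount']})")
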
Feb 25 14:07:46 compute-0 ceph-mon[76335]: pgmap v4435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:46 compute-0 nova_compute[244014]: 2026-02-25 14:07:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:46 compute-0 nova_compute[244014]: 2026-02-25 14:07:46.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:47 compute-0 nova_compute[244014]: 2026-02-25 14:07:47.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:07:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:07:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:07:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:07:47 compute-0 nova_compute[244014]: 2026-02-25 14:07:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:48 compute-0 ceph-mon[76335]: pgmap v4436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:07:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:07:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:50 compute-0 ceph-mon[76335]: pgmap v4437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:51 compute-0 nova_compute[244014]: 2026-02-25 14:07:51.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:51 compute-0 nova_compute[244014]: 2026-02-25 14:07:51.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:07:51 compute-0 nova_compute[244014]: 2026-02-25 14:07:51.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:52 compute-0 ceph-mon[76335]: pgmap v4438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:52 compute-0 nova_compute[244014]: 2026-02-25 14:07:52.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:54 compute-0 ceph-mon[76335]: pgmap v4439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:54 compute-0 sshd-session[431577]: Connection closed by authenticating user root 46.101.242.142 port 54778 [preauth]
Feb 25 14:07:54 compute-0 nova_compute[244014]: 2026-02-25 14:07:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:07:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:07:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:07:55 compute-0 nova_compute[244014]: 2026-02-25 14:07:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:56 compute-0 ceph-mon[76335]: pgmap v4440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:56 compute-0 nova_compute[244014]: 2026-02-25 14:07:56.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:57 compute-0 ceph-mon[76335]: pgmap v4441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:07:57 compute-0 nova_compute[244014]: 2026-02-25 14:07:57.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:07:57 compute-0 nova_compute[244014]: 2026-02-25 14:07:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:57 compute-0 nova_compute[244014]: 2026-02-25 14:07:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:07:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:00 compute-0 ceph-mon[76335]: pgmap v4442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:01 compute-0 nova_compute[244014]: 2026-02-25 14:08:01.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:02 compute-0 ceph-mon[76335]: pgmap v4443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:02 compute-0 nova_compute[244014]: 2026-02-25 14:08:02.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:04 compute-0 ceph-mon[76335]: pgmap v4444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:06 compute-0 ceph-mon[76335]: pgmap v4445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:06 compute-0 nova_compute[244014]: 2026-02-25 14:08:06.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:07 compute-0 nova_compute[244014]: 2026-02-25 14:08:07.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:08 compute-0 ceph-mon[76335]: pgmap v4446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:08 compute-0 sudo[431579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:08:08 compute-0 sudo[431579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:08 compute-0 sudo[431579]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:08 compute-0 sudo[431604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:08:08 compute-0 sudo[431604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:08 compute-0 sudo[431604]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:08:08 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:08:08 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:08:09 compute-0 sudo[431660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:08:09 compute-0 sudo[431660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:09 compute-0 sudo[431660]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:09 compute-0 sudo[431685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:08:09 compute-0 sudo[431685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:08:09 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.012212121 +0000 UTC m=+0.046671856 container create e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 14:08:10 compute-0 systemd[1]: Started libpod-conmon-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope.
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:09.990670979 +0000 UTC m=+0.025130614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.11716239 +0000 UTC m=+0.151622055 container init e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.12385221 +0000 UTC m=+0.158311855 container start e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.127325659 +0000 UTC m=+0.161785304 container attach e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:08:10 compute-0 kind_haslett[431741]: 167 167
Feb 25 14:08:10 compute-0 systemd[1]: libpod-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope: Deactivated successfully.
Feb 25 14:08:10 compute-0 conmon[431741]: conmon e2f2ac05192112e64bb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope/container/memory.events
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.133747891 +0000 UTC m=+0.168207516 container died e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-262f7b25066251311fef566f9293576e3d0f290841a7e329a94d376af84968c5-merged.mount: Deactivated successfully.
Feb 25 14:08:10 compute-0 podman[431724]: 2026-02-25 14:08:10.177195184 +0000 UTC m=+0.211654789 container remove e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:08:10 compute-0 systemd[1]: libpod-conmon-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope: Deactivated successfully.
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.400269237 +0000 UTC m=+0.068897887 container create 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 14:08:10 compute-0 systemd[1]: Started libpod-conmon-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope.
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.369336689 +0000 UTC m=+0.037965419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.498307991 +0000 UTC m=+0.166936641 container init 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.508448619 +0000 UTC m=+0.177077249 container start 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.512546135 +0000 UTC m=+0.181174985 container attach 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 14:08:10 compute-0 reverent_joliot[431781]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:08:10 compute-0 reverent_joliot[431781]: --> All data devices are unavailable
Feb 25 14:08:10 compute-0 systemd[1]: libpod-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope: Deactivated successfully.
Feb 25 14:08:10 compute-0 podman[431765]: 2026-02-25 14:08:10.95653443 +0000 UTC m=+0.625163090 container died 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:08:10 compute-0 ceph-mon[76335]: pgmap v4447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab-merged.mount: Deactivated successfully.
Feb 25 14:08:11 compute-0 podman[431765]: 2026-02-25 14:08:11.002869495 +0000 UTC m=+0.671498135 container remove 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:08:11 compute-0 systemd[1]: libpod-conmon-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope: Deactivated successfully.
Feb 25 14:08:11 compute-0 sudo[431685]: pam_unix(sudo:session): session closed for user root
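
The `lvm batch --no-auto ... --yes --no-systemd` run above passed 3 LVM data devices ("0 physical, 3 LVM") and immediately reported all of them unavailable, which on a node like this usually means the LVs already carry OSDs; cephadm then falls back to `ceph-volume lvm list --format json` (the next cephadm invocation below) to inventory what is there. A sketch of parsing that listing, assuming the usual JSON shape keyed by OSD id (the 'lv_path'/'tags' record layout is an assumption from upstream ceph-volume docs):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv.get('lv_path')} "
                  f"(fsid {lv.get('tags', {}).get('ceph.osd_fsid', '?')})")
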
Feb 25 14:08:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:11 compute-0 sudo[431815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:08:11 compute-0 sudo[431815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:11 compute-0 sudo[431815]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:11 compute-0 sudo[431840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:08:11 compute-0 sudo[431840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.383617004 +0000 UTC m=+0.043421924 container create 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 14:08:11 compute-0 systemd[1]: Started libpod-conmon-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope.
Feb 25 14:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.443984048 +0000 UTC m=+0.103788988 container init 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.448342182 +0000 UTC m=+0.108147102 container start 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.451063669 +0000 UTC m=+0.110868579 container attach 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:08:11 compute-0 systemd[1]: libpod-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope: Deactivated successfully.
Feb 25 14:08:11 compute-0 condescending_kilby[431894]: 167 167
Feb 25 14:08:11 compute-0 conmon[431894]: conmon 92fa7ca0b19fcb38fe08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope/container/memory.events
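
Two details in the helper-container cycle above: the bare "167 167" is most likely the uid and gid of the ceph user inside the image (cephadm runs a throwaway container like this to discover them before fixing ownership of host paths; 167 is the conventional ceph uid/gid), and the conmon warning about memory.events appears because the sub-second container exited before conmon could open its cgroup files. Both are expected noise for these short-lived invocations.
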
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.454249779 +0000 UTC m=+0.114054699 container died 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.369068231 +0000 UTC m=+0.028873171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf0a0479cbf361e13a8b2d9e2c71638f2929f58e4f376afdb068bd9ff4868f3-merged.mount: Deactivated successfully.
Feb 25 14:08:11 compute-0 podman[431877]: 2026-02-25 14:08:11.485755054 +0000 UTC m=+0.145559974 container remove 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:08:11 compute-0 systemd[1]: libpod-conmon-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope: Deactivated successfully.
Feb 25 14:08:11 compute-0 podman[431918]: 2026-02-25 14:08:11.659789545 +0000 UTC m=+0.053568872 container create ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 14:08:11 compute-0 systemd[1]: Started libpod-conmon-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope.
Feb 25 14:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
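
The xfs "supports timestamps until 2038" lines are informational: the kernel is noting that these bind-mounted paths sit on an xfs filesystem created without the bigtime feature, so inode timestamps saturate at the 32-bit 2038 limit. Whether a given xfs filesystem has the extended range can be checked with `xfs_info <mountpoint>` (look for bigtime=1).
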
Feb 25 14:08:11 compute-0 podman[431918]: 2026-02-25 14:08:11.641158556 +0000 UTC m=+0.034937893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:11 compute-0 podman[431918]: 2026-02-25 14:08:11.741854443 +0000 UTC m=+0.135633790 container init ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:08:11 compute-0 podman[431918]: 2026-02-25 14:08:11.749956654 +0000 UTC m=+0.143735981 container start ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:08:11 compute-0 podman[431918]: 2026-02-25 14:08:11.753238127 +0000 UTC m=+0.147017684 container attach ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 14:08:11 compute-0 ceph-mon[76335]: pgmap v4448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:12 compute-0 nova_compute[244014]: 2026-02-25 14:08:12.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]: {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     "0": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "devices": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "/dev/loop3"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             ],
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_name": "ceph_lv0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_size": "21470642176",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "name": "ceph_lv0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "tags": {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_name": "ceph",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.crush_device_class": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.encrypted": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.objectstore": "bluestore",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_id": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.vdo": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.with_tpm": "0"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             },
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "vg_name": "ceph_vg0"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         }
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     ],
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     "1": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "devices": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "/dev/loop4"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             ],
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_name": "ceph_lv1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_size": "21470642176",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "name": "ceph_lv1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "tags": {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_name": "ceph",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.crush_device_class": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.encrypted": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.objectstore": "bluestore",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_id": "1",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.vdo": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.with_tpm": "0"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             },
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "vg_name": "ceph_vg1"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         }
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     ],
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     "2": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "devices": [
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "/dev/loop5"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             ],
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_name": "ceph_lv2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_size": "21470642176",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "name": "ceph_lv2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "tags": {
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.cluster_name": "ceph",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.crush_device_class": "",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.encrypted": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.objectstore": "bluestore",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osd_id": "2",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.vdo": "0",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:                 "ceph.with_tpm": "0"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             },
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "type": "block",
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:             "vg_name": "ceph_vg2"
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:         }
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]:     ]
Feb 25 14:08:12 compute-0 strange_elbakyan[431936]: }
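
The JSON block above is the `ceph-volume lvm list --format json` result: a map of OSD id to LVM metadata for osd.0-2, each backed by a loop device. A minimal sketch that condenses this output into a per-OSD summary (hypothetical helper, assuming the JSON block was captured to lvm_list.json; not part of cephadm):

    import json

    # Parse the `ceph-volume lvm list --format json` output shown above.
    with open("lvm_list.json") as f:  # hypothetical capture of the JSON block
        lvm = json.load(f)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for entry in entries:
            tags = entry.get("tags", {})
            print(f"osd.{osd_id}: lv={entry['lv_path']} "
                  f"devices={','.join(entry['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

For the data above this prints one line per OSD, e.g. osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441.
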
Feb 25 14:08:12 compute-0 systemd[1]: libpod-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope: Deactivated successfully.
Feb 25 14:08:12 compute-0 podman[431918]: 2026-02-25 14:08:12.070049571 +0000 UTC m=+0.463828928 container died ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595-merged.mount: Deactivated successfully.
Feb 25 14:08:12 compute-0 podman[431918]: 2026-02-25 14:08:12.117591621 +0000 UTC m=+0.511370988 container remove ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 14:08:12 compute-0 systemd[1]: libpod-conmon-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope: Deactivated successfully.
Feb 25 14:08:12 compute-0 sudo[431840]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:12 compute-0 sudo[431957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:08:12 compute-0 sudo[431957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:12 compute-0 sudo[431957]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:12 compute-0 sudo[431982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:08:12 compute-0 sudo[431982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:12 compute-0 nova_compute[244014]: 2026-02-25 14:08:12.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.588812448 +0000 UTC m=+0.043902467 container create 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Feb 25 14:08:12 compute-0 systemd[1]: Started libpod-conmon-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope.
Feb 25 14:08:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.572966319 +0000 UTC m=+0.028056388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.676962741 +0000 UTC m=+0.132052800 container init 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.685446112 +0000 UTC m=+0.140536141 container start 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.688717975 +0000 UTC m=+0.143808014 container attach 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:08:12 compute-0 pedantic_shaw[432035]: 167 167
Feb 25 14:08:12 compute-0 systemd[1]: libpod-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope: Deactivated successfully.
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.691298258 +0000 UTC m=+0.146388307 container died 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-994838c1f0e3b4ebfe24c83bc80f49af308180993b7135b4d14603b0ee669cee-merged.mount: Deactivated successfully.
Feb 25 14:08:12 compute-0 podman[432018]: 2026-02-25 14:08:12.731080887 +0000 UTC m=+0.186170906 container remove 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 14:08:12 compute-0 systemd[1]: libpod-conmon-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope: Deactivated successfully.
Feb 25 14:08:12 compute-0 podman[432061]: 2026-02-25 14:08:12.909257976 +0000 UTC m=+0.053890631 container create 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 14:08:12 compute-0 systemd[1]: Started libpod-conmon-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope.
Feb 25 14:08:12 compute-0 podman[432061]: 2026-02-25 14:08:12.882071944 +0000 UTC m=+0.026704639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:08:12 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:08:13 compute-0 podman[432061]: 2026-02-25 14:08:13.013099114 +0000 UTC m=+0.157731749 container init 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 14:08:13 compute-0 podman[432061]: 2026-02-25 14:08:13.020103213 +0000 UTC m=+0.164735828 container start 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 14:08:13 compute-0 podman[432061]: 2026-02-25 14:08:13.023750746 +0000 UTC m=+0.168383381 container attach 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:08:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:13 compute-0 lvm[432156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:08:13 compute-0 lvm[432156]: VG ceph_vg0 finished
Feb 25 14:08:13 compute-0 lvm[432155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:08:13 compute-0 lvm[432155]: VG ceph_vg1 finished
Feb 25 14:08:13 compute-0 lvm[432158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:08:13 compute-0 lvm[432158]: VG ceph_vg2 finished
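
The lvm pvscan lines are LVM's event-driven autoactivation, likely reacting to the device scans the ceph-volume containers just performed: as each loop-device PV is observed online, LVM confirms its volume group has every PV present ("is complete") and finishes processing it. `vgs` on the host should show ceph_vg0-2 with all PVs accounted for.
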
Feb 25 14:08:13 compute-0 optimistic_leakey[432077]: {}
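
The `{}` is the output of the second inventory call (`ceph-volume raw list --format json`, issued at 14:08:12): an empty object, meaning no raw-mode (non-LVM) OSDs exist on this host. All three OSDs found by the lvm list are LVM-backed, so cephadm's device refresh, recorded in the config-key writes below, relies on the lvm inventory alone.
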
Feb 25 14:08:13 compute-0 systemd[1]: libpod-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Deactivated successfully.
Feb 25 14:08:13 compute-0 systemd[1]: libpod-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Consumed 1.162s CPU time.
Feb 25 14:08:13 compute-0 podman[432061]: 2026-02-25 14:08:13.796761642 +0000 UTC m=+0.941394277 container died 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 14:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027-merged.mount: Deactivated successfully.
Feb 25 14:08:13 compute-0 podman[432061]: 2026-02-25 14:08:13.838897498 +0000 UTC m=+0.983530113 container remove 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 14:08:13 compute-0 systemd[1]: libpod-conmon-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Deactivated successfully.
Feb 25 14:08:13 compute-0 sudo[431982]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:08:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:13 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:08:13 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:13 compute-0 sudo[432173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:08:13 compute-0 sudo[432173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:08:13 compute-0 sudo[432173]: pam_unix(sudo:session): session closed for user root
Feb 25 14:08:14 compute-0 ceph-mon[76335]: pgmap v4449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:14 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:08:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
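
_set_new_cache_sizes is the monitor's periodic cache autotuner re-splitting its memory budget (cache_size, roughly 0.95 GiB here) between the incremental-osdmap, full-osdmap, and RocksDB key/value caches; the identical lines every ~5 s just mean the budget is stable. The target is presumably governed by mon_memory_target, viewable with `ceph config get mon mon_memory_target`.
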
Feb 25 14:08:16 compute-0 ceph-mon[76335]: pgmap v4450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:16 compute-0 podman[432198]: 2026-02-25 14:08:16.7358383 +0000 UTC m=+0.065421468 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 25 14:08:16 compute-0 podman[432199]: 2026-02-25 14:08:16.781109706 +0000 UTC m=+0.110746225 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
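
These two health_status events are podman's timer-driven healthchecks for the EDPM containers: each runs the configured test command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/...) inside the container and records health_status=healthy with a failing streak of 0. The same check can be triggered on demand with `podman healthcheck run ovn_controller`.
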
Feb 25 14:08:17 compute-0 nova_compute[244014]: 2026-02-25 14:08:17.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:17 compute-0 nova_compute[244014]: 2026-02-25 14:08:17.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:18 compute-0 ceph-mon[76335]: pgmap v4451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:20 compute-0 ceph-mon[76335]: pgmap v4452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:22 compute-0 nova_compute[244014]: 2026-02-25 14:08:22.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:22 compute-0 ceph-mon[76335]: pgmap v4453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:22 compute-0 nova_compute[244014]: 2026-02-25 14:08:22.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:24 compute-0 ceph-mon[76335]: pgmap v4454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:26 compute-0 ceph-mon[76335]: pgmap v4455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:27 compute-0 nova_compute[244014]: 2026-02-25 14:08:27.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:27 compute-0 ceph-mon[76335]: pgmap v4456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:27 compute-0 nova_compute[244014]: 2026-02-25 14:08:27.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:30 compute-0 ceph-mon[76335]: pgmap v4457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:08:31
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta']
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
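
One full balancer cycle in upmap mode: with the misplaced ceiling at 5% (0.050000), it scanned the listed pools and prepared 0 of the up-to-10 upmap changes allowed per cycle, i.e. the 305 PGs are already evenly placed and no remapping is needed. `ceph balancer status` reports the same mode and last-optimize result interactively.
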
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:08:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:08:32 compute-0 nova_compute[244014]: 2026-02-25 14:08:32.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:32 compute-0 ceph-mon[76335]: pgmap v4458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:32 compute-0 nova_compute[244014]: 2026-02-25 14:08:32.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:08:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
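
The rbd_support module is reloading its TrashPurgeSchedule and MirrorSnapshotSchedule handlers across the RBD pools (vms, volumes, backups, images); each load_schedules pass re-reads the per-pool schedule metadata, and the empty start_after= simply marks pagination starting from the beginning. Any configured purge schedules could be listed with `rbd trash purge schedule ls -R` (assuming the rbd CLI's schedule subcommands, present since Octopus).
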
Feb 25 14:08:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:34 compute-0 ceph-mon[76335]: pgmap v4459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:35 compute-0 nova_compute[244014]: 2026-02-25 14:08:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:36 compute-0 ceph-mon[76335]: pgmap v4460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:08:36 compute-0 nova_compute[244014]: 2026-02-25 14:08:36.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:08:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:08:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211967623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.479 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
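
nova-compute shells out to `ceph df --format=json` (the mon audit lines above show the dispatch) to build the storage view it logs next: free_disk≈59.99 GB matches the 59 GiB avail / 60 GiB total in the pgmap lines. A minimal sketch of the same arithmetic (hypothetical standalone script; assumes the stats.total_bytes / stats.total_avail_bytes fields of current ceph df JSON output):

    import json
    import subprocess

    # Same invocation nova-compute logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    GIB = 1024 ** 3
    print(f"total: {stats['total_bytes'] / GIB:.2f} GiB, "
          f"avail: {stats['total_avail_bytes'] / GIB:.2f} GiB")
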
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.651 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.652 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3479MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.735 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:08:37 compute-0 nova_compute[244014]: 2026-02-25 14:08:37.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:08:38 compute-0 ceph-mon[76335]: pgmap v4461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1211967623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:08:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:08:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087689625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:08:38 compute-0 nova_compute[244014]: 2026-02-25 14:08:38.459 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:08:38 compute-0 nova_compute[244014]: 2026-02-25 14:08:38.466 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:08:38 compute-0 nova_compute[244014]: 2026-02-25 14:08:38.488 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
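The inventory record above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the logged numbers (illustrative arithmetic only):

    # Worked check of the inventory logged above:
    # schedulable capacity = (total - reserved) * allocation_ratio.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2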
Feb 25 14:08:38 compute-0 nova_compute[244014]: 2026-02-25 14:08:38.491 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:08:38 compute-0 nova_compute[244014]: 2026-02-25 14:08:38.492 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:08:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1087689625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:08:40 compute-0 ceph-mon[76335]: pgmap v4462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:41 compute-0 ceph-mon[76335]: pgmap v4463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:42 compute-0 nova_compute[244014]: 2026-02-25 14:08:42.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:42 compute-0 sshd-session[432287]: Connection closed by authenticating user root 46.101.242.142 port 42576 [preauth]
Feb 25 14:08:42 compute-0 nova_compute[244014]: 2026-02-25 14:08:42.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:43 compute-0 nova_compute[244014]: 2026-02-25 14:08:43.493 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:43 compute-0 nova_compute[244014]: 2026-02-25 14:08:43.494 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:08:43 compute-0 nova_compute[244014]: 2026-02-25 14:08:43.494 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:08:43 compute-0 nova_compute[244014]: 2026-02-25 14:08:43.520 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:08:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
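The autoscaler lines above follow a simple rule: each pool's raw pg target is its share of raw space times its bias times the cluster PG budget, then quantized to a power of two and clamped to the pool minimum. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs on this host (a 300-PG budget), the logged targets reproduce exactly:

    # Reproduce the pg_autoscaler "pg target" values logged above, assuming
    # budget = mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    budget = 100 * 3
    pools = [('.mgr',               7.185749983720779e-06,  1.0),
             ('images',             0.0006714637386478266,  1.0),
             ('cephfs.cephfs.meta', 1.3916366864300228e-06, 4.0)]
    for name, ratio, bias in pools:
        print(name, ratio * bias * budget)
    # Matches the logged pg targets (up to float rounding) before
    # power-of-two quantization to 1, 32, 16 respectively.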
Feb 25 14:08:44 compute-0 ceph-mon[76335]: pgmap v4464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:46 compute-0 ceph-mon[76335]: pgmap v4465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:46 compute-0 nova_compute[244014]: 2026-02-25 14:08:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:47 compute-0 nova_compute[244014]: 2026-02-25 14:08:47.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:47 compute-0 nova_compute[244014]: 2026-02-25 14:08:47.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:08:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:08:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
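The `df` plus `osd pool get-quota` pair dispatched above is the usual capacity probe for an RBD-backed volume service: a byte quota on the pool, if set, caps the usable space reported by df. An illustrative check, assuming the standard get-quota JSON field (quota_max_bytes, 0 meaning unlimited):

    # Illustrative companion to the two mon commands dispatched above.
    import json, subprocess

    def pool_quota_bytes(pool):
        out = subprocess.check_output(
            ['ceph', 'osd', 'pool', 'get-quota', pool, '--format=json',
             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
        return json.loads(out).get('quota_max_bytes', 0)

    print(pool_quota_bytes('volumes'))  # 0 here: the pool is not quota-limited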
Feb 25 14:08:47 compute-0 podman[432290]: 2026-02-25 14:08:47.738183101 +0000 UTC m=+0.072325964 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 14:08:47 compute-0 podman[432291]: 2026-02-25 14:08:47.777756165 +0000 UTC m=+0.112108694 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
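The two podman records above are periodic health probes: the configured test ('/openstack/healthcheck', mounted into each container) is executed and the resulting health_status is logged. The same probe can be driven by hand with `podman healthcheck run`, which exits 0 when the check passes; a small sketch, container names taken from the log:

    # Manual version of the probe behind the health_status events above.
    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')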
Feb 25 14:08:48 compute-0 ceph-mon[76335]: pgmap v4466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:08:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:08:48 compute-0 nova_compute[244014]: 2026-02-25 14:08:48.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:50 compute-0 ceph-mon[76335]: pgmap v4467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:52 compute-0 nova_compute[244014]: 2026-02-25 14:08:52.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:52 compute-0 ceph-mon[76335]: pgmap v4468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:52 compute-0 nova_compute[244014]: 2026-02-25 14:08:52.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:52 compute-0 nova_compute[244014]: 2026-02-25 14:08:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:52 compute-0 nova_compute[244014]: 2026-02-25 14:08:52.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:08:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:54 compute-0 ceph-mon[76335]: pgmap v4469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:08:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:08:55 compute-0 ceph-mon[76335]: pgmap v4470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:08:55 compute-0 nova_compute[244014]: 2026-02-25 14:08:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:57 compute-0 nova_compute[244014]: 2026-02-25 14:08:57.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:57 compute-0 nova_compute[244014]: 2026-02-25 14:08:57.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:08:57 compute-0 nova_compute[244014]: 2026-02-25 14:08:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:57 compute-0 nova_compute[244014]: 2026-02-25 14:08:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:08:58 compute-0 ceph-mon[76335]: pgmap v4471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:08:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:00 compute-0 ceph-mon[76335]: pgmap v4472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:02 compute-0 nova_compute[244014]: 2026-02-25 14:09:02.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:02 compute-0 ceph-mon[76335]: pgmap v4473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:02 compute-0 nova_compute[244014]: 2026-02-25 14:09:02.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:04 compute-0 ceph-mon[76335]: pgmap v4474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:06 compute-0 ceph-mon[76335]: pgmap v4475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:07 compute-0 nova_compute[244014]: 2026-02-25 14:09:07.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:07 compute-0 ceph-mon[76335]: pgmap v4476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:07 compute-0 nova_compute[244014]: 2026-02-25 14:09:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:10 compute-0 ceph-mon[76335]: pgmap v4477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:12 compute-0 nova_compute[244014]: 2026-02-25 14:09:12.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:12 compute-0 ceph-mon[76335]: pgmap v4478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:12 compute-0 nova_compute[244014]: 2026-02-25 14:09:12.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:09:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 92K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1294 writes, 5890 keys, 1294 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
                                           Interval WAL: 1294 writes, 1294 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.2      2.08              0.32        70    0.030       0      0       0.0       0.0
                                             L6      1/0    8.97 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.6    104.7     90.0      7.05              1.93        69    0.102    529K    36K       0.0       0.0
                                            Sum      1/0    8.97 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.6     80.8     81.8      9.14              2.25       139    0.066    529K    36K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.3    123.0    120.8      0.48              0.22        10    0.048     52K   2539       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    104.7     90.0      7.05              1.93        69    0.102    529K    36K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.3      2.08              0.32        69    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.110, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.73 GB write, 0.09 MB/s write, 0.72 GB read, 0.09 MB/s read, 9.1 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 83.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.001113 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5126,79.39 MB,26.1151%) FilterBlock(140,1.41 MB,0.463882%) IndexBlock(140,2.22 MB,0.728763%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
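The DB Stats block above is easy to sanity-check: the reported MB/s figures are just ingest divided by the measurement window. A worked check using the numbers from the dump:

    # Sanity-check the RocksDB rates above: rate = ingest / window.
    interval_ingest_mb = 8.65    # "Interval writes: ... ingest: 8.65 MB"
    interval_secs = 600.0        # "Uptime(secs): ... 600.0 interval"
    cumulative_ingest_gb = 0.13  # "Cumulative writes: ... ingest: 0.13 GB"
    total_secs = 8400.0

    print(round(interval_ingest_mb / interval_secs, 2))        # 0.01 MB/s
    print(round(cumulative_ingest_gb * 1024 / total_secs, 2))  # 0.02 MB/s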
Feb 25 14:09:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:14 compute-0 sudo[432336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:09:14 compute-0 sudo[432336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:14 compute-0 sudo[432336]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:14 compute-0 sudo[432361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 25 14:09:14 compute-0 sudo[432361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:14 compute-0 ceph-mon[76335]: pgmap v4479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:14 compute-0 podman[432431]: 2026-02-25 14:09:14.573182881 +0000 UTC m=+0.063325439 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:09:14 compute-0 podman[432431]: 2026-02-25 14:09:14.690672087 +0000 UTC m=+0.180814605 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:09:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:15 compute-0 sudo[432361]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:09:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:09:15 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:15 compute-0 sudo[432620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:09:15 compute-0 sudo[432620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:15 compute-0 sudo[432620]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:15 compute-0 sudo[432645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:09:15 compute-0 sudo[432645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: pgmap v4480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:16 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:16 compute-0 sudo[432645]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:09:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:09:16 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:09:16 compute-0 sudo[432703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:09:16 compute-0 sudo[432703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:16 compute-0 sudo[432703]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:16 compute-0 sudo[432728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:09:16 compute-0 sudo[432728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.755466684 +0000 UTC m=+0.057222895 container create deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:09:16 compute-0 systemd[1]: Started libpod-conmon-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope.
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.732014189 +0000 UTC m=+0.033770420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:16 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.85218445 +0000 UTC m=+0.153940721 container init deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.861605188 +0000 UTC m=+0.163361369 container start deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.865544379 +0000 UTC m=+0.167300590 container attach deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:09:16 compute-0 naughty_wright[432780]: 167 167
Feb 25 14:09:16 compute-0 systemd[1]: libpod-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope: Deactivated successfully.
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.868113952 +0000 UTC m=+0.169870173 container died deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 14:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2481b1efda1f898444d9203320be42b15621b664bd14ce3bd38236189348d047-merged.mount: Deactivated successfully.
Feb 25 14:09:16 compute-0 podman[432765]: 2026-02-25 14:09:16.918210145 +0000 UTC m=+0.219966356 container remove deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 14:09:16 compute-0 systemd[1]: libpod-conmon-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope: Deactivated successfully.
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.101633762 +0000 UTC m=+0.051540504 container create ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:09:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:17 compute-0 systemd[1]: Started libpod-conmon-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope.
Feb 25 14:09:17 compute-0 nova_compute[244014]: 2026-02-25 14:09:17.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.077350993 +0000 UTC m=+0.027257765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:17 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
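The kernel lines above are the standard warning for XFS filesystems created without the bigtime feature, whose inode timestamps overflow in 2038. Whether a given mount has the feature can be read from xfs_info output; a small hedged wrapper (the path is an assumption about this host's layout):

    # Check for the XFS bigtime feature, whose absence triggers the
    # "supports timestamps until 2038" messages above.
    import subprocess

    def xfs_has_bigtime(mountpoint):
        out = subprocess.check_output(['xfs_info', mountpoint], text=True)
        return 'bigtime=1' in out

    print(xfs_has_bigtime('/var/lib/containers'))  # likely False on this host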
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:09:17 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.212679394 +0000 UTC m=+0.162586116 container init ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.223253395 +0000 UTC m=+0.173160117 container start ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.226147897 +0000 UTC m=+0.176054619 container attach ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:09:17 compute-0 nova_compute[244014]: 2026-02-25 14:09:17.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:17 compute-0 boring_cerf[432821]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:09:17 compute-0 boring_cerf[432821]: --> All data devices are unavailable
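
boring_cerf is ceph-volume evaluating the drive group: the spec matched 0 physical disks and 3 LVM devices, and "All data devices are unavailable" means all three are already consumed by existing OSDs, so there is nothing new to prepare. A hedged way to see the same verdict is the orchestrator's device inventory (sketch; field names follow recent cephadm JSON and may differ by release):

    # List device availability as the orchestrator sees it.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for host in json.loads(out):
        for dev in host.get("devices", []):
            verdict = ("available" if dev.get("available")
                       else f"unavailable: {dev.get('rejected_reasons')}")
            print(host.get("addr"), dev.get("path"), verdict)
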
Feb 25 14:09:17 compute-0 systemd[1]: libpod-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope: Deactivated successfully.
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.717760924 +0000 UTC m=+0.667667656 container died ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9-merged.mount: Deactivated successfully.
Feb 25 14:09:17 compute-0 podman[432804]: 2026-02-25 14:09:17.759611892 +0000 UTC m=+0.709518654 container remove ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 14:09:17 compute-0 systemd[1]: libpod-conmon-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope: Deactivated successfully.
Feb 25 14:09:17 compute-0 sudo[432728]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:17 compute-0 podman[432853]: 2026-02-25 14:09:17.876073938 +0000 UTC m=+0.067551019 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
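
The health_status entry is podman's scheduled healthcheck for ovn_metadata_agent: the configured test (`/openstack/healthcheck`, mounted read-only into the container) ran, returned zero, and the failing streak stayed at 0. The recorded state can be read back directly; a short sketch, using podman's inspect field for healthchecked containers:

    # Query the health state summarized by the journal line above.
    import subprocess

    def container_health(name: str) -> str:
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True).stdout.strip()

    print(container_health("ovn_metadata_agent"))  # e.g. "healthy"
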
Feb 25 14:09:17 compute-0 sudo[432872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:09:17 compute-0 sudo[432872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:17 compute-0 sudo[432872]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:17 compute-0 podman[432855]: 2026-02-25 14:09:17.900677337 +0000 UTC m=+0.092174988 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 14:09:17 compute-0 sudo[432925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:09:17 compute-0 sudo[432925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:18 compute-0 ceph-mon[76335]: pgmap v4481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.240041461 +0000 UTC m=+0.041970982 container create 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:09:18 compute-0 systemd[1]: Started libpod-conmon-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope.
Feb 25 14:09:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.224521891 +0000 UTC m=+0.026451442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.331358954 +0000 UTC m=+0.133288495 container init 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.338080425 +0000 UTC m=+0.140009946 container start 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.34144573 +0000 UTC m=+0.143375271 container attach 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:09:18 compute-0 pedantic_shtern[432978]: 167 167
Feb 25 14:09:18 compute-0 systemd[1]: libpod-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope: Deactivated successfully.
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.347869512 +0000 UTC m=+0.149799063 container died 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 14:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-43378bbf4b0b88aaeecb84b56418ff4d7a0d64641cc455c41db76a0fd4dbcd73-merged.mount: Deactivated successfully.
Feb 25 14:09:18 compute-0 podman[432961]: 2026-02-25 14:09:18.387107876 +0000 UTC m=+0.189037427 container remove 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 14:09:18 compute-0 systemd[1]: libpod-conmon-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope: Deactivated successfully.
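
These rapid create/init/start/attach/died/remove bursts (boring_cerf, pedantic_shtern, and the containers that follow) are cephadm's throwaway helpers. The bare "167 167" printed by pedantic_shtern is its uid/gid probe: cephadm stats a ceph-owned path inside the image to learn which ids host directories must be chowned to, and 167:167 is the ceph user/group in Red Hat-family Ceph images. A hedged reproduction (the exact probe path is an assumption; check your cephadm version):

    # Reproduce the uid/gid probe with a throwaway container.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    probe = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],  # assumed probe path
        check=True, capture_output=True, text=True)
    print(probe.stdout.split())  # ['167', '167'] for this image
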
Feb 25 14:09:18 compute-0 podman[433001]: 2026-02-25 14:09:18.587286599 +0000 UTC m=+0.053880890 container create f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:09:18 compute-0 systemd[1]: Started libpod-conmon-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope.
Feb 25 14:09:18 compute-0 podman[433001]: 2026-02-25 14:09:18.568033093 +0000 UTC m=+0.034627424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:18 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:18 compute-0 podman[433001]: 2026-02-25 14:09:18.697435167 +0000 UTC m=+0.164029528 container init f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 25 14:09:18 compute-0 podman[433001]: 2026-02-25 14:09:18.709997053 +0000 UTC m=+0.176591374 container start f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 14:09:18 compute-0 podman[433001]: 2026-02-25 14:09:18.713782321 +0000 UTC m=+0.180376702 container attach f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 25 14:09:19 compute-0 sharp_johnson[433017]: {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     "0": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "devices": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "/dev/loop3"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             ],
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_name": "ceph_lv0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_size": "21470642176",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "name": "ceph_lv0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "tags": {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_name": "ceph",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.crush_device_class": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.encrypted": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.objectstore": "bluestore",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_id": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.vdo": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.with_tpm": "0"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             },
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "vg_name": "ceph_vg0"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         }
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     ],
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     "1": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "devices": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "/dev/loop4"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             ],
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_name": "ceph_lv1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_size": "21470642176",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "name": "ceph_lv1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "tags": {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_name": "ceph",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.crush_device_class": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.encrypted": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.objectstore": "bluestore",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_id": "1",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.vdo": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.with_tpm": "0"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             },
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "vg_name": "ceph_vg1"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         }
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     ],
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     "2": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "devices": [
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "/dev/loop5"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             ],
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_name": "ceph_lv2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_size": "21470642176",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "name": "ceph_lv2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "tags": {
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.cluster_name": "ceph",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.crush_device_class": "",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.encrypted": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.objectstore": "bluestore",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osd_id": "2",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.vdo": "0",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:                 "ceph.with_tpm": "0"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             },
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "type": "block",
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:             "vg_name": "ceph_vg2"
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:         }
Feb 25 14:09:19 compute-0 sharp_johnson[433017]:     ]
Feb 25 14:09:19 compute-0 sharp_johnson[433017]: }
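
sharp_johnson's JSON is the payload of the `ceph-volume lvm list --format json` call issued at 14:09:17: top-level keys are OSD ids, each mapping to the logical volumes backing that OSD, with the authoritative metadata carried in the ceph.* LV tags (cluster_fsid, osd_fsid, objectstore, encryption flag). A small parser sketch, with file and helper names of my own choosing:

    # Reduce the `lvm list` JSON above to an osd_id -> backing-device map.
    import json

    def summarize(lvm_list: dict) -> dict:
        summary = {}
        for osd_id, lvs in lvm_list.items():
            for lv in lvs:
                tags = lv["tags"]
                summary[int(osd_id)] = {
                    "lv_path": lv["lv_path"],   # e.g. /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],   # backing PVs, e.g. /dev/loop3
                    "osd_fsid": tags["ceph.osd_fsid"],
                    "encrypted": tags["ceph.encrypted"] == "1",
                }
        return summary

    with open("lvm_list.json") as f:  # a saved copy of the output above
        for osd, info in sorted(summarize(json.load(f)).items()):
            print(osd, info["lv_path"], info["devices"], info["osd_fsid"])
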
Feb 25 14:09:19 compute-0 systemd[1]: libpod-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope: Deactivated successfully.
Feb 25 14:09:19 compute-0 podman[433001]: 2026-02-25 14:09:19.04347435 +0000 UTC m=+0.510068671 container died f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 14:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6-merged.mount: Deactivated successfully.
Feb 25 14:09:19 compute-0 podman[433001]: 2026-02-25 14:09:19.08748079 +0000 UTC m=+0.554075061 container remove f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 25 14:09:19 compute-0 systemd[1]: libpod-conmon-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope: Deactivated successfully.
Feb 25 14:09:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:19 compute-0 sudo[432925]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:19 compute-0 sudo[433038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:09:19 compute-0 sudo[433038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:19 compute-0 sudo[433038]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:19 compute-0 sudo[433063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:09:19 compute-0 sudo[433063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:19 compute-0 ceph-mon[76335]: pgmap v4482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.538262397 +0000 UTC m=+0.042910809 container create bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 14:09:19 compute-0 systemd[1]: Started libpod-conmon-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope.
Feb 25 14:09:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.518857486 +0000 UTC m=+0.023505938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.630501546 +0000 UTC m=+0.135150028 container init bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.639747318 +0000 UTC m=+0.144395730 container start bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.643066653 +0000 UTC m=+0.147715145 container attach bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 14:09:19 compute-0 romantic_dewdney[433117]: 167 167
Feb 25 14:09:19 compute-0 systemd[1]: libpod-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope: Deactivated successfully.
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.644667398 +0000 UTC m=+0.149315800 container died bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 14:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14d3e8d78f45e0a6f4a7dc40c431ad972e7541040d6e9781d37248826d5490b-merged.mount: Deactivated successfully.
Feb 25 14:09:19 compute-0 podman[433101]: 2026-02-25 14:09:19.682639596 +0000 UTC m=+0.187287988 container remove bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 14:09:19 compute-0 systemd[1]: libpod-conmon-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope: Deactivated successfully.
Feb 25 14:09:19 compute-0 podman[433142]: 2026-02-25 14:09:19.847863906 +0000 UTC m=+0.050800073 container create 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 14:09:19 compute-0 systemd[1]: Started libpod-conmon-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope.
Feb 25 14:09:19 compute-0 podman[433142]: 2026-02-25 14:09:19.823068732 +0000 UTC m=+0.026004959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:09:19 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:09:19 compute-0 podman[433142]: 2026-02-25 14:09:19.959040542 +0000 UTC m=+0.161976679 container init 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 14:09:19 compute-0 podman[433142]: 2026-02-25 14:09:19.973653687 +0000 UTC m=+0.176589844 container start 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:09:19 compute-0 podman[433142]: 2026-02-25 14:09:19.977211618 +0000 UTC m=+0.180147775 container attach 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:09:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:20 compute-0 lvm[433234]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:09:20 compute-0 lvm[433234]: VG ceph_vg0 finished
Feb 25 14:09:20 compute-0 lvm[433237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:09:20 compute-0 lvm[433237]: VG ceph_vg1 finished
Feb 25 14:09:20 compute-0 lvm[433239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:09:20 compute-0 lvm[433239]: VG ceph_vg2 finished
Feb 25 14:09:20 compute-0 lvm[433240]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:09:20 compute-0 lvm[433240]: VG ceph_vg1 finished
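
The lvm[...] pairs are event-driven autoactivation: udev reports a PV online, and once every PV in a VG has appeared the VG is declared complete (ceph_vg1 is reported twice because two udev events raced, which is harmless). The completed VGs and their ceph tags can be confirmed with LVM's JSON reporting (sketch):

    # Show the ceph VGs/LVs the pvscan events just completed.
    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "-o", "vg_name,lv_name,lv_size,lv_tags",
         "--reportformat", "json"],
        check=True, capture_output=True, text=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph." in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"], lv["lv_size"])
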
Feb 25 14:09:20 compute-0 stupefied_faraday[433158]: {}
Feb 25 14:09:20 compute-0 systemd[1]: libpod-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Deactivated successfully.
Feb 25 14:09:20 compute-0 systemd[1]: libpod-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Consumed 1.214s CPU time.
Feb 25 14:09:20 compute-0 podman[433142]: 2026-02-25 14:09:20.828085134 +0000 UTC m=+1.031021301 container died 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 25 14:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed-merged.mount: Deactivated successfully.
Feb 25 14:09:20 compute-0 podman[433142]: 2026-02-25 14:09:20.888324654 +0000 UTC m=+1.091260831 container remove 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 14:09:20 compute-0 systemd[1]: libpod-conmon-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Deactivated successfully.
Feb 25 14:09:20 compute-0 sudo[433063]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:09:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:09:20 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
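
With the scan finished, the mgr persists the host's device inventory in the monitor's config-key store; the two audit entries above are the writes to mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. The cached blob can be read back for inspection (sketch; the key name is copied verbatim from the audit line, and the value is assumed to be JSON as cephadm stores it):

    # Peek at the device-inventory cache cephadm just wrote for this host.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(raw), indent=2)[:400])  # first few entries
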
Feb 25 14:09:21 compute-0 sudo[433255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:09:21 compute-0 sudo[433255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:09:21 compute-0 sudo[433255]: pam_unix(sudo:session): session closed for user root
Feb 25 14:09:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:21 compute-0 nova_compute[244014]: 2026-02-25 14:09:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:21 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:09:21 compute-0 ceph-mon[76335]: pgmap v4483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.977885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561977931, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3541043, "memory_usage": 3595528, "flush_reason": "Manual Compaction"}
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561989719, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 3451806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91183, "largest_seqno": 93239, "table_properties": {"data_size": 3442357, "index_size": 6006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18819, "raw_average_key_size": 20, "raw_value_size": 3423662, "raw_average_value_size": 3665, "num_data_blocks": 266, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028334, "oldest_key_time": 1772028334, "file_creation_time": 1772028561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 11894 microseconds, and 5008 cpu microseconds.
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.989776) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 3451806 bytes OK
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.989802) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991647) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991664) EVENT_LOG_v1 {"time_micros": 1772028561991658, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991706) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 3532417, prev total WAL file size 3559236, number of live WAL files 2.
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.992566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(3370KB)], [224(9189KB)]
Feb 25 14:09:21 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561992626, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 12862189, "oldest_snapshot_seqno": -1}
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 10312 keys, 11098701 bytes, temperature: kUnknown
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562062037, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 11098701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11036234, "index_size": 35602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 274415, "raw_average_key_size": 26, "raw_value_size": 10857837, "raw_average_value_size": 1052, "num_data_blocks": 1346, "num_entries": 10312, "num_filter_entries": 10312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.062475) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 11098701 bytes
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.064110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.0 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10826, records dropped: 514 output_compression: NoCompression
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.064142) EVENT_LOG_v1 {"time_micros": 1772028562064127, "job": 142, "event": "compaction_finished", "compaction_time_micros": 69512, "compaction_time_cpu_micros": 38900, "output_level": 6, "num_output_files": 1, "total_output_size": 11098701, "num_input_records": 10826, "num_output_records": 10312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562064919, "job": 142, "event": "table_file_deletion", "file_number": 226}
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562066356, "job": 142, "event": "table_file_deletion", "file_number": 224}
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.992458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:09:22 compute-0 nova_compute[244014]: 2026-02-25 14:09:22.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:22 compute-0 nova_compute[244014]: 2026-02-25 14:09:22.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:24 compute-0 ceph-mon[76335]: pgmap v4484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:24 compute-0 nova_compute[244014]: 2026-02-25 14:09:24.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:26 compute-0 ceph-mon[76335]: pgmap v4485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:27 compute-0 nova_compute[244014]: 2026-02-25 14:09:27.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:27 compute-0 nova_compute[244014]: 2026-02-25 14:09:27.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:28 compute-0 ceph-mon[76335]: pgmap v4486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:29 compute-0 sshd-session[433280]: Connection closed by authenticating user root 46.101.242.142 port 55650 [preauth]
Feb 25 14:09:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:30 compute-0 ceph-mon[76335]: pgmap v4487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:31 compute-0 ceph-mon[76335]: pgmap v4488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:09:31
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control']
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:09:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:09:32 compute-0 nova_compute[244014]: 2026-02-25 14:09:32.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:32 compute-0 nova_compute[244014]: 2026-02-25 14:09:32.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:09:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:09:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:34 compute-0 ceph-mon[76335]: pgmap v4489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:36 compute-0 ceph-mon[76335]: pgmap v4490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:37 compute-0 nova_compute[244014]: 2026-02-25 14:09:37.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:37 compute-0 nova_compute[244014]: 2026-02-25 14:09:37.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:37 compute-0 nova_compute[244014]: 2026-02-25 14:09:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:38 compute-0 ceph-mon[76335]: pgmap v4491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:09:38 compute-0 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:09:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:09:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200555220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.481 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.675 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.677 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.744 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.745 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:09:39 compute-0 nova_compute[244014]: 2026-02-25 14:09:39.878 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:09:40 compute-0 ceph-mon[76335]: pgmap v4492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/200555220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:09:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:09:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503161338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:09:40 compute-0 nova_compute[244014]: 2026-02-25 14:09:40.411 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:09:40 compute-0 nova_compute[244014]: 2026-02-25 14:09:40.418 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:09:40 compute-0 nova_compute[244014]: 2026-02-25 14:09:40.434 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:09:40 compute-0 nova_compute[244014]: 2026-02-25 14:09:40.437 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:09:40 compute-0 nova_compute[244014]: 2026-02-25 14:09:40.437 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:09:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2503161338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:09:42 compute-0 nova_compute[244014]: 2026-02-25 14:09:42.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:42 compute-0 ceph-mon[76335]: pgmap v4493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:42 compute-0 nova_compute[244014]: 2026-02-25 14:09:42.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:43 compute-0 ceph-mon[76335]: pgmap v4494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:43 compute-0 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:43 compute-0 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:09:43 compute-0 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:09:43 compute-0 nova_compute[244014]: 2026-02-25 14:09:43.479 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:09:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:09:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:46 compute-0 ceph-mon[76335]: pgmap v4495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:46 compute-0 nova_compute[244014]: 2026-02-25 14:09:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:47 compute-0 nova_compute[244014]: 2026-02-25 14:09:47.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:09:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:09:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:09:47 compute-0 nova_compute[244014]: 2026-02-25 14:09:47.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:48 compute-0 ceph-mon[76335]: pgmap v4496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:09:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:09:48 compute-0 podman[433327]: 2026-02-25 14:09:48.74286099 +0000 UTC m=+0.077057538 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 14:09:48 compute-0 podman[433328]: 2026-02-25 14:09:48.781555159 +0000 UTC m=+0.113811782 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller)
Feb 25 14:09:48 compute-0 nova_compute[244014]: 2026-02-25 14:09:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:48 compute-0 nova_compute[244014]: 2026-02-25 14:09:48.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 14:09:48 compute-0 nova_compute[244014]: 2026-02-25 14:09:48.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 14:09:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:50 compute-0 ceph-mon[76335]: pgmap v4497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:50 compute-0 nova_compute[244014]: 2026-02-25 14:09:50.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:52 compute-0 nova_compute[244014]: 2026-02-25 14:09:52.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:52 compute-0 ceph-mon[76335]: pgmap v4498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:52 compute-0 nova_compute[244014]: 2026-02-25 14:09:52.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:53 compute-0 ceph-mon[76335]: pgmap v4499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:54 compute-0 nova_compute[244014]: 2026-02-25 14:09:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:54 compute-0 nova_compute[244014]: 2026-02-25 14:09:54.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:09:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:09:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:09:55 compute-0 nova_compute[244014]: 2026-02-25 14:09:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:56 compute-0 ceph-mon[76335]: pgmap v4500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:57 compute-0 nova_compute[244014]: 2026-02-25 14:09:57.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:57 compute-0 nova_compute[244014]: 2026-02-25 14:09:57.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:09:57 compute-0 nova_compute[244014]: 2026-02-25 14:09:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:58 compute-0 ceph-mon[76335]: pgmap v4501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:09:58 compute-0 nova_compute[244014]: 2026-02-25 14:09:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:09:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:00 compute-0 ceph-mon[76335]: pgmap v4502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:00 compute-0 nova_compute[244014]: 2026-02-25 14:10:00.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:02 compute-0 ceph-mon[76335]: pgmap v4503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.879 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.906 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.924 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.924 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 25 14:10:02 compute-0 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 25 14:10:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:03 compute-0 ceph-mon[76335]: pgmap v4504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:05 compute-0 nova_compute[244014]: 2026-02-25 14:10:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:05 compute-0 nova_compute[244014]: 2026-02-25 14:10:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 14:10:06 compute-0 ceph-mon[76335]: pgmap v4505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:07 compute-0 nova_compute[244014]: 2026-02-25 14:10:07.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:07 compute-0 nova_compute[244014]: 2026-02-25 14:10:07.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:08 compute-0 ceph-mon[76335]: pgmap v4506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:09 compute-0 ceph-mon[76335]: pgmap v4507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:12 compute-0 ceph-mon[76335]: pgmap v4508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:12 compute-0 nova_compute[244014]: 2026-02-25 14:10:12.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:12 compute-0 nova_compute[244014]: 2026-02-25 14:10:12.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:14 compute-0 ceph-mon[76335]: pgmap v4509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:15 compute-0 sshd-session[433371]: Connection closed by authenticating user root 46.101.242.142 port 50140 [preauth]
Feb 25 14:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:10:15 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:10:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #228. Immutable memtables: 0.
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.529219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 228
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615529264, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 651, "num_deletes": 256, "total_data_size": 803272, "memory_usage": 815960, "flush_reason": "Manual Compaction"}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #229: started
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615534858, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 229, "file_size": 796407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93240, "largest_seqno": 93890, "table_properties": {"data_size": 792914, "index_size": 1400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7550, "raw_average_key_size": 18, "raw_value_size": 786033, "raw_average_value_size": 1936, "num_data_blocks": 62, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028561, "oldest_key_time": 1772028561, "file_creation_time": 1772028615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 229, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 5674 microseconds, and 2372 cpu microseconds.
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.534890) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #229: 796407 bytes OK
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.534913) [db/memtable_list.cc:519] [default] Level-0 commit table #229 started
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536464) [db/memtable_list.cc:722] [default] Level-0 commit table #229: memtable #1 done
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536478) EVENT_LOG_v1 {"time_micros": 1772028615536473, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536500) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 799803, prev total WAL file size 799803, number of live WAL files 2.
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000225.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323638' seq:72057594037927935, type:22 .. '6C6F676D0034353230' seq:0, type:0; will stop at (end)
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [229(777KB)], [227(10MB)]
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615537011, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [229], "files_L6": [227], "score": -1, "input_data_size": 11895108, "oldest_snapshot_seqno": -1}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #230: 10198 keys, 11804138 bytes, temperature: kUnknown
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615597294, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 230, "file_size": 11804138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11740880, "index_size": 36649, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273005, "raw_average_key_size": 26, "raw_value_size": 11562880, "raw_average_value_size": 1133, "num_data_blocks": 1388, "num_entries": 10198, "num_filter_entries": 10198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.597623) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 11804138 bytes
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.599704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.0 rd, 195.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.6 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(29.8) write-amplify(14.8) OK, records in: 10718, records dropped: 520 output_compression: NoCompression
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.599725) EVENT_LOG_v1 {"time_micros": 1772028615599714, "job": 144, "event": "compaction_finished", "compaction_time_micros": 60376, "compaction_time_cpu_micros": 37327, "output_level": 6, "num_output_files": 1, "total_output_size": 11804138, "num_input_records": 10718, "num_output_records": 10198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000229.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615599958, "job": 144, "event": "table_file_deletion", "file_number": 229}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615601528, "job": 144, "event": "table_file_deletion", "file_number": 227}
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:15 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:10:16 compute-0 ceph-mon[76335]: pgmap v4510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:17 compute-0 nova_compute[244014]: 2026-02-25 14:10:17.224 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:17 compute-0 ceph-mon[76335]: pgmap v4511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:17 compute-0 nova_compute[244014]: 2026-02-25 14:10:17.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:10:19 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:10:19 compute-0 podman[433374]: 2026-02-25 14:10:19.727445158 +0000 UTC m=+0.071104660 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:10:19 compute-0 podman[433373]: 2026-02-25 14:10:19.734476117 +0000 UTC m=+0.075907546 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 14:10:20 compute-0 ceph-mon[76335]: pgmap v4512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:21 compute-0 sudo[433419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:10:21 compute-0 sudo[433419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:21 compute-0 sudo[433419]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:21 compute-0 sudo[433444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:10:21 compute-0 sudo[433444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: pgmap v4513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:21 compute-0 sudo[433444]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:10:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:10:21 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:10:21 compute-0 sudo[433500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:10:21 compute-0 sudo[433500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:21 compute-0 sudo[433500]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:21 compute-0 sudo[433525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:10:21 compute-0 sudo[433525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.040478833 +0000 UTC m=+0.038062941 container create b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 14:10:22 compute-0 systemd[1]: Started libpod-conmon-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope.
Feb 25 14:10:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.106536229 +0000 UTC m=+0.104120357 container init b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.111552741 +0000 UTC m=+0.109136849 container start b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.114550376 +0000 UTC m=+0.112134504 container attach b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:10:22 compute-0 charming_ardinghelli[433577]: 167 167
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.116554923 +0000 UTC m=+0.114139031 container died b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 14:10:22 compute-0 systemd[1]: libpod-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope: Deactivated successfully.
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.020182337 +0000 UTC m=+0.017766465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad2624f4a816977d96f56b6135bba39aedaf4ef83906dd907d4c6b34e88088d0-merged.mount: Deactivated successfully.
Feb 25 14:10:22 compute-0 podman[433561]: 2026-02-25 14:10:22.184314587 +0000 UTC m=+0.181898705 container remove b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:10:22 compute-0 systemd[1]: libpod-conmon-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope: Deactivated successfully.
Feb 25 14:10:22 compute-0 nova_compute[244014]: 2026-02-25 14:10:22.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:10:22 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.348168969 +0000 UTC m=+0.067746325 container create 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 14:10:22 compute-0 systemd[1]: Started libpod-conmon-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope.
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.303990584 +0000 UTC m=+0.023567960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:22 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.462471054 +0000 UTC m=+0.182048430 container init 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.468408822 +0000 UTC m=+0.187986178 container start 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.509171479 +0000 UTC m=+0.228748835 container attach 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:10:22 compute-0 nova_compute[244014]: 2026-02-25 14:10:22.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:22 compute-0 wizardly_lewin[433619]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:10:22 compute-0 wizardly_lewin[433619]: --> All data devices are unavailable
Feb 25 14:10:22 compute-0 systemd[1]: libpod-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope: Deactivated successfully.
Feb 25 14:10:22 compute-0 podman[433602]: 2026-02-25 14:10:22.902720732 +0000 UTC m=+0.622298088 container died 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 14:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d-merged.mount: Deactivated successfully.
Feb 25 14:10:23 compute-0 podman[433602]: 2026-02-25 14:10:23.126085674 +0000 UTC m=+0.845663030 container remove 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:10:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:23 compute-0 sudo[433525]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:23 compute-0 systemd[1]: libpod-conmon-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope: Deactivated successfully.
Feb 25 14:10:23 compute-0 sudo[433651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:10:23 compute-0 sudo[433651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:23 compute-0 sudo[433651]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:23 compute-0 sudo[433676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:10:23 compute-0 sudo[433676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:23 compute-0 ceph-mon[76335]: pgmap v4514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.61379794 +0000 UTC m=+0.087848855 container create 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.548786204 +0000 UTC m=+0.022837129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:23 compute-0 systemd[1]: Started libpod-conmon-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope.
Feb 25 14:10:23 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.802195568 +0000 UTC m=+0.276246503 container init 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.80858457 +0000 UTC m=+0.282635505 container start 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:10:23 compute-0 tender_golick[433726]: 167 167
Feb 25 14:10:23 compute-0 systemd[1]: libpod-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope: Deactivated successfully.
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.853159285 +0000 UTC m=+0.327210220 container attach 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 14:10:23 compute-0 podman[433713]: 2026-02-25 14:10:23.853976468 +0000 UTC m=+0.328027373 container died 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 14:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf540d425dfee06a72ba1b6ede0a9d9ecbfd4f37cbe0c9cf1ca5cbede055dc46-merged.mount: Deactivated successfully.
Feb 25 14:10:24 compute-0 podman[433713]: 2026-02-25 14:10:24.194666769 +0000 UTC m=+0.668717704 container remove 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 14:10:24 compute-0 systemd[1]: libpod-conmon-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope: Deactivated successfully.
Feb 25 14:10:24 compute-0 podman[433753]: 2026-02-25 14:10:24.375966556 +0000 UTC m=+0.083981905 container create 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:10:24 compute-0 podman[433753]: 2026-02-25 14:10:24.320309046 +0000 UTC m=+0.028324445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:24 compute-0 systemd[1]: Started libpod-conmon-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope.
Feb 25 14:10:24 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:24 compute-0 podman[433753]: 2026-02-25 14:10:24.487133652 +0000 UTC m=+0.195148981 container init 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 14:10:24 compute-0 podman[433753]: 2026-02-25 14:10:24.495318784 +0000 UTC m=+0.203334103 container start 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:10:24 compute-0 podman[433753]: 2026-02-25 14:10:24.502128768 +0000 UTC m=+0.210144107 container attach 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:10:24 compute-0 kind_blackburn[433770]: {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     "0": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "devices": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "/dev/loop3"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             ],
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_name": "ceph_lv0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_size": "21470642176",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "name": "ceph_lv0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "tags": {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_name": "ceph",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.crush_device_class": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.encrypted": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.objectstore": "bluestore",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_id": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.vdo": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.with_tpm": "0"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             },
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "vg_name": "ceph_vg0"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         }
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     ],
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     "1": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "devices": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "/dev/loop4"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             ],
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_name": "ceph_lv1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_size": "21470642176",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "name": "ceph_lv1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "tags": {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_name": "ceph",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.crush_device_class": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.encrypted": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.objectstore": "bluestore",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_id": "1",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.vdo": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.with_tpm": "0"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             },
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "vg_name": "ceph_vg1"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         }
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     ],
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     "2": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "devices": [
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "/dev/loop5"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             ],
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_name": "ceph_lv2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_size": "21470642176",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "name": "ceph_lv2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "tags": {
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.cluster_name": "ceph",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.crush_device_class": "",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.encrypted": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.objectstore": "bluestore",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osd_id": "2",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.vdo": "0",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:                 "ceph.with_tpm": "0"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             },
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "type": "block",
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:             "vg_name": "ceph_vg2"
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:         }
Feb 25 14:10:24 compute-0 kind_blackburn[433770]:     ]
Feb 25 14:10:24 compute-0 kind_blackburn[433770]: }
Feb 25 14:10:24 compute-0 systemd[1]: libpod-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope: Deactivated successfully.
Feb 25 14:10:24 compute-0 podman[433779]: 2026-02-25 14:10:24.824724606 +0000 UTC m=+0.026181114 container died 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f-merged.mount: Deactivated successfully.
Feb 25 14:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:10:24 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.74 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:10:25 compute-0 podman[433779]: 2026-02-25 14:10:25.006993601 +0000 UTC m=+0.208450109 container remove 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:10:25 compute-0 systemd[1]: libpod-conmon-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope: Deactivated successfully.
Feb 25 14:10:25 compute-0 sudo[433676]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:25 compute-0 sudo[433795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:10:25 compute-0 sudo[433795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:25 compute-0 sudo[433795]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:25 compute-0 sudo[433820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:10:25 compute-0 sudo[433820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:25 compute-0 ceph-mon[76335]: pgmap v4515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.587765359 +0000 UTC m=+0.078310465 container create a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.538757727 +0000 UTC m=+0.029302823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:25 compute-0 systemd[1]: Started libpod-conmon-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope.
Feb 25 14:10:25 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.727505856 +0000 UTC m=+0.218050922 container init a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.735187354 +0000 UTC m=+0.225732450 container start a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:10:25 compute-0 systemd[1]: libpod-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope: Deactivated successfully.
Feb 25 14:10:25 compute-0 agitated_taussig[433875]: 167 167
Feb 25 14:10:25 compute-0 conmon[433875]: conmon a3ecc79e3cb7cbdaad3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope/container/memory.events
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.786328646 +0000 UTC m=+0.276873742 container attach a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:10:25 compute-0 podman[433859]: 2026-02-25 14:10:25.787435007 +0000 UTC m=+0.277980073 container died a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ed6dc0e4392184d5989e72288e3b533196108b1fae13107c4e1203ff0f3d0a6-merged.mount: Deactivated successfully.
Feb 25 14:10:26 compute-0 podman[433859]: 2026-02-25 14:10:26.041027996 +0000 UTC m=+0.531573062 container remove a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 14:10:26 compute-0 systemd[1]: libpod-conmon-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope: Deactivated successfully.
Feb 25 14:10:26 compute-0 podman[433900]: 2026-02-25 14:10:26.247532119 +0000 UTC m=+0.069070162 container create b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 14:10:26 compute-0 podman[433900]: 2026-02-25 14:10:26.215288494 +0000 UTC m=+0.036826557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:10:26 compute-0 systemd[1]: Started libpod-conmon-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope.
Feb 25 14:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:10:26 compute-0 podman[433900]: 2026-02-25 14:10:26.383277123 +0000 UTC m=+0.204815186 container init b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:10:26 compute-0 podman[433900]: 2026-02-25 14:10:26.391553998 +0000 UTC m=+0.213092041 container start b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:10:26 compute-0 podman[433900]: 2026-02-25 14:10:26.408482718 +0000 UTC m=+0.230020781 container attach b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 14:10:27 compute-0 lvm[433996]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:10:27 compute-0 lvm[433996]: VG ceph_vg1 finished
Feb 25 14:10:27 compute-0 lvm[433994]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:10:27 compute-0 lvm[433997]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:10:27 compute-0 lvm[433997]: VG ceph_vg2 finished
Feb 25 14:10:27 compute-0 lvm[433994]: VG ceph_vg0 finished
Feb 25 14:10:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:27 compute-0 xenodochial_swanson[433916]: {}
Feb 25 14:10:27 compute-0 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 14:10:27 compute-0 nova_compute[244014]: 2026-02-25 14:10:27.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:27 compute-0 systemd[1]: libpod-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Deactivated successfully.
Feb 25 14:10:27 compute-0 systemd[1]: libpod-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Consumed 1.255s CPU time.
Feb 25 14:10:27 compute-0 podman[433900]: 2026-02-25 14:10:27.265418576 +0000 UTC m=+1.086956619 container died b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 14:10:27 compute-0 ceph-mon[76335]: pgmap v4516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346-merged.mount: Deactivated successfully.
Feb 25 14:10:27 compute-0 podman[433900]: 2026-02-25 14:10:27.456221203 +0000 UTC m=+1.277759276 container remove b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:10:27 compute-0 systemd[1]: libpod-conmon-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Deactivated successfully.
Feb 25 14:10:27 compute-0 sudo[433820]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:10:27 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:10:27 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:27 compute-0 nova_compute[244014]: 2026-02-25 14:10:27.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:27 compute-0 sudo[434014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:10:27 compute-0 sudo[434014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:10:27 compute-0 sudo[434014]: pam_unix(sudo:session): session closed for user root
Feb 25 14:10:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:10:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:29 compute-0 ceph-mon[76335]: pgmap v4517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:10:31
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups']
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:10:31 compute-0 ceph-mon[76335]: pgmap v4518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:10:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:10:32 compute-0 nova_compute[244014]: 2026-02-25 14:10:32.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:32 compute-0 nova_compute[244014]: 2026-02-25 14:10:32.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:10:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:10:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:33 compute-0 ceph-mon[76335]: pgmap v4519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:35 compute-0 ceph-mon[76335]: pgmap v4520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:37 compute-0 nova_compute[244014]: 2026-02-25 14:10:37.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:37 compute-0 ceph-mon[76335]: pgmap v4521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:37 compute-0 nova_compute[244014]: 2026-02-25 14:10:37.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:37 compute-0 nova_compute[244014]: 2026-02-25 14:10:37.760 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:10:38 compute-0 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:10:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:39 compute-0 ceph-mon[76335]: pgmap v4522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:10:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560870428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.586 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.754 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.755 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.756 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.756 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.834 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.834 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:10:39 compute-0 nova_compute[244014]: 2026-02-25 14:10:39.952 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:10:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:10:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534849972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:10:40 compute-0 nova_compute[244014]: 2026-02-25 14:10:40.493 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:10:40 compute-0 nova_compute[244014]: 2026-02-25 14:10:40.500 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:10:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2560870428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:10:40 compute-0 nova_compute[244014]: 2026-02-25 14:10:40.525 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:10:40 compute-0 nova_compute[244014]: 2026-02-25 14:10:40.527 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:10:40 compute-0 nova_compute[244014]: 2026-02-25 14:10:40.528 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:10:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3534849972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:10:41 compute-0 ceph-mon[76335]: pgmap v4523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:42 compute-0 nova_compute[244014]: 2026-02-25 14:10:42.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:42 compute-0 nova_compute[244014]: 2026-02-25 14:10:42.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:43 compute-0 ceph-mon[76335]: pgmap v4524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:43 compute-0 nova_compute[244014]: 2026-02-25 14:10:43.526 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:43 compute-0 nova_compute[244014]: 2026-02-25 14:10:43.526 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:10:43 compute-0 nova_compute[244014]: 2026-02-25 14:10:43.527 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:10:43 compute-0 nova_compute[244014]: 2026-02-25 14:10:43.544 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:10:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:10:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:45 compute-0 ceph-mon[76335]: pgmap v4525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:46 compute-0 nova_compute[244014]: 2026-02-25 14:10:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:47 compute-0 nova_compute[244014]: 2026-02-25 14:10:47.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:10:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:10:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:10:47 compute-0 nova_compute[244014]: 2026-02-25 14:10:47.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:48 compute-0 ceph-mon[76335]: pgmap v4526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:10:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:10:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:49 compute-0 ceph-mon[76335]: pgmap v4527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:50 compute-0 podman[434083]: 2026-02-25 14:10:50.717447438 +0000 UTC m=+0.059763408 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 14:10:50 compute-0 podman[434084]: 2026-02-25 14:10:50.749505598 +0000 UTC m=+0.091995103 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, container_name=ovn_controller)
Feb 25 14:10:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:51 compute-0 ceph-mon[76335]: pgmap v4528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:51 compute-0 nova_compute[244014]: 2026-02-25 14:10:51.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:52 compute-0 nova_compute[244014]: 2026-02-25 14:10:52.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:52 compute-0 nova_compute[244014]: 2026-02-25 14:10:52.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:53 compute-0 ceph-mon[76335]: pgmap v4529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:10:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:10:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:55 compute-0 ceph-mon[76335]: pgmap v4530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:10:55 compute-0 nova_compute[244014]: 2026-02-25 14:10:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:55 compute-0 nova_compute[244014]: 2026-02-25 14:10:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:10:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:57 compute-0 nova_compute[244014]: 2026-02-25 14:10:57.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:57 compute-0 ceph-mon[76335]: pgmap v4531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:57 compute-0 nova_compute[244014]: 2026-02-25 14:10:57.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:10:57 compute-0 nova_compute[244014]: 2026-02-25 14:10:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:58 compute-0 nova_compute[244014]: 2026-02-25 14:10:58.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:10:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:10:59 compute-0 ceph-mon[76335]: pgmap v4532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:00 compute-0 nova_compute[244014]: 2026-02-25 14:11:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:01 compute-0 ceph-mon[76335]: pgmap v4533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:01 compute-0 sshd-session[434127]: Connection closed by authenticating user root 46.101.242.142 port 43028 [preauth]
Feb 25 14:11:02 compute-0 nova_compute[244014]: 2026-02-25 14:11:02.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:02 compute-0 nova_compute[244014]: 2026-02-25 14:11:02.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:03 compute-0 ceph-mon[76335]: pgmap v4534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:05 compute-0 ceph-mon[76335]: pgmap v4535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:07 compute-0 nova_compute[244014]: 2026-02-25 14:11:07.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:07 compute-0 ceph-mon[76335]: pgmap v4536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:07 compute-0 nova_compute[244014]: 2026-02-25 14:11:07.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:09 compute-0 ceph-mon[76335]: pgmap v4537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:11 compute-0 ceph-mon[76335]: pgmap v4538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:12 compute-0 nova_compute[244014]: 2026-02-25 14:11:12.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:12 compute-0 nova_compute[244014]: 2026-02-25 14:11:12.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:13 compute-0 ceph-mon[76335]: pgmap v4539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:15 compute-0 ceph-mon[76335]: pgmap v4540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:17 compute-0 nova_compute[244014]: 2026-02-25 14:11:17.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:17 compute-0 ceph-mon[76335]: pgmap v4541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:17 compute-0 nova_compute[244014]: 2026-02-25 14:11:17.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:19 compute-0 ceph-mon[76335]: pgmap v4542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:21 compute-0 ceph-mon[76335]: pgmap v4543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:21 compute-0 podman[434129]: 2026-02-25 14:11:21.726511987 +0000 UTC m=+0.062469714 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 14:11:21 compute-0 podman[434130]: 2026-02-25 14:11:21.762763306 +0000 UTC m=+0.097436057 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 14:11:22 compute-0 nova_compute[244014]: 2026-02-25 14:11:22.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:22 compute-0 nova_compute[244014]: 2026-02-25 14:11:22.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:11:23 compute-0 ceph-mon[76335]: pgmap v4544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:11:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:11:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:25 compute-0 ceph-mon[76335]: pgmap v4545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 14:11:26 compute-0 nova_compute[244014]: 2026-02-25 14:11:26.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 14:11:27 compute-0 ceph-mon[76335]: pgmap v4546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 14:11:27 compute-0 nova_compute[244014]: 2026-02-25 14:11:27.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:27 compute-0 nova_compute[244014]: 2026-02-25 14:11:27.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:27 compute-0 sudo[434168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:11:27 compute-0 sudo[434168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:27 compute-0 sudo[434168]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:27 compute-0 sudo[434193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:11:27 compute-0 sudo[434193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:28 compute-0 sudo[434193]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:11:28 compute-0 sudo[434249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:11:28 compute-0 sudo[434249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:28 compute-0 sudo[434249]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:28 compute-0 sudo[434274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:11:28 compute-0 sudo[434274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:11:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:11:28 compute-0 podman[434311]: 2026-02-25 14:11:28.74862881 +0000 UTC m=+0.023193579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:28 compute-0 podman[434311]: 2026-02-25 14:11:28.923963688 +0000 UTC m=+0.198528437 container create 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 14:11:29 compute-0 systemd[1]: Started libpod-conmon-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope.
Feb 25 14:11:29 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 14:11:29 compute-0 podman[434311]: 2026-02-25 14:11:29.33823814 +0000 UTC m=+0.612802939 container init 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 14:11:29 compute-0 podman[434311]: 2026-02-25 14:11:29.349761237 +0000 UTC m=+0.624326006 container start 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 14:11:29 compute-0 bold_varahamihira[434327]: 167 167
Feb 25 14:11:29 compute-0 systemd[1]: libpod-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope: Deactivated successfully.
Feb 25 14:11:29 compute-0 podman[434311]: 2026-02-25 14:11:29.687888116 +0000 UTC m=+0.962452945 container attach 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 14:11:29 compute-0 podman[434311]: 2026-02-25 14:11:29.689492262 +0000 UTC m=+0.964057041 container died 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 14:11:29 compute-0 ceph-mon[76335]: pgmap v4547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 14:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-23272e85800ee83bf246449e45ac3a91c460b8287478101f37aed0542473a3f6-merged.mount: Deactivated successfully.
Feb 25 14:11:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:30 compute-0 podman[434311]: 2026-02-25 14:11:30.997782064 +0000 UTC m=+2.272346843 container remove 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:11:31 compute-0 systemd[1]: libpod-conmon-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope: Deactivated successfully.
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 14:11:31 compute-0 podman[434353]: 2026-02-25 14:11:31.104820742 +0000 UTC m=+0.023151768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:11:31
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:11:31 compute-0 podman[434353]: 2026-02-25 14:11:31.432856835 +0000 UTC m=+0.351187811 container create 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 14:11:31 compute-0 systemd[1]: Started libpod-conmon-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope.
Feb 25 14:11:31 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:31 compute-0 ceph-mon[76335]: pgmap v4548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:11:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:11:31 compute-0 podman[434353]: 2026-02-25 14:11:31.809124857 +0000 UTC m=+0.727455853 container init 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 14:11:31 compute-0 podman[434353]: 2026-02-25 14:11:31.820439988 +0000 UTC m=+0.738770944 container start 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 14:11:31 compute-0 podman[434353]: 2026-02-25 14:11:31.849967837 +0000 UTC m=+0.768298803 container attach 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 14:11:32 compute-0 awesome_dirac[434370]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:11:32 compute-0 awesome_dirac[434370]: --> All data devices are unavailable
Feb 25 14:11:32 compute-0 systemd[1]: libpod-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope: Deactivated successfully.
Feb 25 14:11:32 compute-0 podman[434353]: 2026-02-25 14:11:32.360302414 +0000 UTC m=+1.278633370 container died 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 14:11:32 compute-0 nova_compute[244014]: 2026-02-25 14:11:32.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:32 compute-0 nova_compute[244014]: 2026-02-25 14:11:32.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:11:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef-merged.mount: Deactivated successfully.
Feb 25 14:11:33 compute-0 podman[434353]: 2026-02-25 14:11:33.173309795 +0000 UTC m=+2.091640761 container remove 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:11:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Feb 25 14:11:33 compute-0 sudo[434274]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:33 compute-0 systemd[1]: libpod-conmon-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope: Deactivated successfully.
Feb 25 14:11:33 compute-0 sudo[434401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:11:33 compute-0 sudo[434401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:33 compute-0 sudo[434401]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:33 compute-0 sudo[434426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:11:33 compute-0 sudo[434426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:33 compute-0 ceph-mon[76335]: pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Feb 25 14:11:33 compute-0 podman[434462]: 2026-02-25 14:11:33.712987947 +0000 UTC m=+0.043344452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:33 compute-0 podman[434462]: 2026-02-25 14:11:33.784410905 +0000 UTC m=+0.114767320 container create e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:11:33 compute-0 systemd[1]: Started libpod-conmon-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope.
Feb 25 14:11:33 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:34 compute-0 podman[434462]: 2026-02-25 14:11:34.137117038 +0000 UTC m=+0.467473543 container init e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:11:34 compute-0 podman[434462]: 2026-02-25 14:11:34.148512021 +0000 UTC m=+0.478868446 container start e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 14:11:34 compute-0 vigorous_bartik[434480]: 167 167
Feb 25 14:11:34 compute-0 systemd[1]: libpod-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope: Deactivated successfully.
Feb 25 14:11:34 compute-0 podman[434462]: 2026-02-25 14:11:34.406471374 +0000 UTC m=+0.736827829 container attach e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:11:34 compute-0 podman[434462]: 2026-02-25 14:11:34.407227376 +0000 UTC m=+0.737583801 container died e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 14:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6377fc07c539762fe06ee0239e855357399bb3b5b5bbcf954d7072bdd4c50412-merged.mount: Deactivated successfully.
Feb 25 14:11:35 compute-0 podman[434462]: 2026-02-25 14:11:35.094913159 +0000 UTC m=+1.425269614 container remove e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:11:35 compute-0 systemd[1]: libpod-conmon-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope: Deactivated successfully.
Feb 25 14:11:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Feb 25 14:11:35 compute-0 podman[434507]: 2026-02-25 14:11:35.275369592 +0000 UTC m=+0.039120221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:35 compute-0 podman[434507]: 2026-02-25 14:11:35.393211148 +0000 UTC m=+0.156961737 container create 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 14:11:35 compute-0 ceph-mon[76335]: pgmap v4550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Feb 25 14:11:35 compute-0 systemd[1]: Started libpod-conmon-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope.
Feb 25 14:11:35 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:35 compute-0 podman[434507]: 2026-02-25 14:11:35.939390763 +0000 UTC m=+0.703141402 container init 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:11:35 compute-0 podman[434507]: 2026-02-25 14:11:35.950507619 +0000 UTC m=+0.714258208 container start 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:11:36 compute-0 podman[434507]: 2026-02-25 14:11:36.244647239 +0000 UTC m=+1.008397788 container attach 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 14:11:36 compute-0 confident_curran[434524]: {
Feb 25 14:11:36 compute-0 confident_curran[434524]:     "0": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:         {
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "devices": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "/dev/loop3"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             ],
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_name": "ceph_lv0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_size": "21470642176",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "name": "ceph_lv0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "tags": {
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_name": "ceph",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.crush_device_class": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.encrypted": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.objectstore": "bluestore",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_id": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.vdo": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.with_tpm": "0"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             },
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "vg_name": "ceph_vg0"
Feb 25 14:11:36 compute-0 confident_curran[434524]:         }
Feb 25 14:11:36 compute-0 confident_curran[434524]:     ],
Feb 25 14:11:36 compute-0 confident_curran[434524]:     "1": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:         {
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "devices": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "/dev/loop4"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             ],
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_name": "ceph_lv1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_size": "21470642176",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "name": "ceph_lv1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "tags": {
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_name": "ceph",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.crush_device_class": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.encrypted": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.objectstore": "bluestore",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_id": "1",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.vdo": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.with_tpm": "0"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             },
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "vg_name": "ceph_vg1"
Feb 25 14:11:36 compute-0 confident_curran[434524]:         }
Feb 25 14:11:36 compute-0 confident_curran[434524]:     ],
Feb 25 14:11:36 compute-0 confident_curran[434524]:     "2": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:         {
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "devices": [
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "/dev/loop5"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             ],
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_name": "ceph_lv2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_size": "21470642176",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "name": "ceph_lv2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "tags": {
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.cluster_name": "ceph",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.crush_device_class": "",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.encrypted": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.objectstore": "bluestore",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osd_id": "2",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.vdo": "0",
Feb 25 14:11:36 compute-0 confident_curran[434524]:                 "ceph.with_tpm": "0"
Feb 25 14:11:36 compute-0 confident_curran[434524]:             },
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "type": "block",
Feb 25 14:11:36 compute-0 confident_curran[434524]:             "vg_name": "ceph_vg2"
Feb 25 14:11:36 compute-0 confident_curran[434524]:         }
Feb 25 14:11:36 compute-0 confident_curran[434524]:     ]
Feb 25 14:11:36 compute-0 confident_curran[434524]: }
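The JSON block that confident_curran just finished printing is evidently the output of a `ceph-volume lvm list --format json` run inside the container: top-level keys are OSD ids, each mapping to the logical volume backing that OSD plus its `ceph.*` LV tags. A minimal sketch of pulling the useful fields out of it (the file name is hypothetical; the data is the blob above):

```python
# Summarize `ceph-volume lvm list --format json` output.
# Assumes the JSON printed above was captured to lvm_list.json (hypothetical path).
import json

with open("lvm_list.json") as f:
    lvm_list = json.load(f)  # top-level keys are OSD ids: "0", "1", "2"

for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} "
              f"objectstore={tags['ceph.objectstore']}")
```

Against the blob above this prints one line per OSD, e.g. `osd.1: lv=/dev/ceph_vg1/ceph_lv1 devices=/dev/loop4 osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350 objectstore=bluestore`.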
Feb 25 14:11:36 compute-0 systemd[1]: libpod-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope: Deactivated successfully.
Feb 25 14:11:36 compute-0 conmon[434524]: conmon 522654a94a2242549d90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope/container/memory.events
Feb 25 14:11:36 compute-0 podman[434507]: 2026-02-25 14:11:36.319746891 +0000 UTC m=+1.083497460 container died 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 14:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33-merged.mount: Deactivated successfully.
Feb 25 14:11:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 14:11:37 compute-0 podman[434507]: 2026-02-25 14:11:37.4056353 +0000 UTC m=+2.169385849 container remove 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:11:37 compute-0 ceph-mon[76335]: pgmap v4551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 14:11:37 compute-0 sudo[434426]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:37 compute-0 systemd[1]: libpod-conmon-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope: Deactivated successfully.
Feb 25 14:11:37 compute-0 sudo[434546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:11:37 compute-0 sudo[434546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:37 compute-0 sudo[434546]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:37 compute-0 sudo[434571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:11:37 compute-0 sudo[434571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:37 compute-0 nova_compute[244014]: 2026-02-25 14:11:37.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:37 compute-0 nova_compute[244014]: 2026-02-25 14:11:37.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:37 compute-0 podman[434609]: 2026-02-25 14:11:37.906293154 +0000 UTC m=+0.053501330 container create b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:11:37 compute-0 systemd[1]: Started libpod-conmon-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope.
Feb 25 14:11:37 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:37 compute-0 podman[434609]: 2026-02-25 14:11:37.884402093 +0000 UTC m=+0.031610279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:37 compute-0 podman[434609]: 2026-02-25 14:11:37.98820793 +0000 UTC m=+0.135416086 container init b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 14:11:37 compute-0 podman[434609]: 2026-02-25 14:11:37.995898738 +0000 UTC m=+0.143106914 container start b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:11:38 compute-0 podman[434609]: 2026-02-25 14:11:38.000010355 +0000 UTC m=+0.147218551 container attach b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 14:11:38 compute-0 bold_haibt[434625]: 167 167
Feb 25 14:11:38 compute-0 systemd[1]: libpod-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope: Deactivated successfully.
Feb 25 14:11:38 compute-0 conmon[434625]: conmon b1ac0970db818c87adae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope/container/memory.events
Feb 25 14:11:38 compute-0 podman[434609]: 2026-02-25 14:11:38.00299726 +0000 UTC m=+0.150205436 container died b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-65250cc31478f6d64f26c40c7ed460a8039f49926454c1013618f40cc1a981c2-merged.mount: Deactivated successfully.
Feb 25 14:11:38 compute-0 podman[434609]: 2026-02-25 14:11:38.052611128 +0000 UTC m=+0.199819304 container remove b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 14:11:38 compute-0 systemd[1]: libpod-conmon-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope: Deactivated successfully.
Feb 25 14:11:38 compute-0 podman[434651]: 2026-02-25 14:11:38.252685528 +0000 UTC m=+0.060380625 container create 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:11:38 compute-0 systemd[1]: Started libpod-conmon-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope.
Feb 25 14:11:38 compute-0 podman[434651]: 2026-02-25 14:11:38.226487644 +0000 UTC m=+0.034182751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:11:38 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:11:38 compute-0 podman[434651]: 2026-02-25 14:11:38.364244695 +0000 UTC m=+0.171939772 container init 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 14:11:38 compute-0 podman[434651]: 2026-02-25 14:11:38.372961583 +0000 UTC m=+0.180656680 container start 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 14:11:38 compute-0 podman[434651]: 2026-02-25 14:11:38.376996567 +0000 UTC m=+0.184691774 container attach 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:11:39 compute-0 lvm[434745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:11:39 compute-0 lvm[434746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:11:39 compute-0 lvm[434746]: VG ceph_vg1 finished
Feb 25 14:11:39 compute-0 lvm[434745]: VG ceph_vg0 finished
Feb 25 14:11:39 compute-0 lvm[434748]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:11:39 compute-0 lvm[434748]: VG ceph_vg2 finished
Feb 25 14:11:39 compute-0 clever_vaughan[434667]: {}
Feb 25 14:11:39 compute-0 systemd[1]: libpod-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Deactivated successfully.
Feb 25 14:11:39 compute-0 podman[434651]: 2026-02-25 14:11:39.18443656 +0000 UTC m=+0.992131657 container died 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Feb 25 14:11:39 compute-0 systemd[1]: libpod-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Consumed 1.206s CPU time.
Feb 25 14:11:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 14:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9-merged.mount: Deactivated successfully.
Feb 25 14:11:39 compute-0 podman[434651]: 2026-02-25 14:11:39.261293802 +0000 UTC m=+1.068988909 container remove 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 14:11:39 compute-0 systemd[1]: libpod-conmon-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Deactivated successfully.
Feb 25 14:11:39 compute-0 ceph-mon[76335]: pgmap v4552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 14:11:39 compute-0 sudo[434571]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:11:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:11:39 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:39 compute-0 sudo[434766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:11:39 compute-0 sudo[434766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:11:39 compute-0 sudo[434766]: pam_unix(sudo:session): session closed for user root
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.914 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:11:39 compute-0 nova_compute[244014]: 2026-02-25 14:11:39.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:11:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:40 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:11:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:11:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566983337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.513 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
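nova-compute's resource audit shells out to `ceph df --format=json` (the exact command in the two log lines above) to size its RBD-backed disk pool. A sketch of making the same call and reading the cluster totals; the `stats` key names are standard `ceph df` JSON fields and are an assumption here, since the log only records the command and its exit status:

```python
# Run the command nova_compute logged above and read cluster totals.
# The "stats" field names are assumed from standard `ceph df` JSON output;
# they do not appear anywhere in this log.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]
print(f"{stats['total_used_bytes'] / 2**30:.1f} GiB used, "
      f"{stats['total_avail_bytes'] / 2**30:.1f} GiB avail")
# cf. the pgmap lines: "1.0 GiB used, 59 GiB / 60 GiB avail"
```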
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.737 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3471MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.820 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.849 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.894 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.895 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 14:11:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.913 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.938 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 14:11:40 compute-0 nova_compute[244014]: 2026-02-25 14:11:40.958 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:11:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 14:11:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2566983337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:11:41 compute-0 ceph-mon[76335]: pgmap v4553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 14:11:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:11:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165919734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:11:41 compute-0 nova_compute[244014]: 2026-02-25 14:11:41.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:11:41 compute-0 nova_compute[244014]: 2026-02-25 14:11:41.542 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:11:41 compute-0 nova_compute[244014]: 2026-02-25 14:11:41.558 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:11:41 compute-0 nova_compute[244014]: 2026-02-25 14:11:41.560 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:11:41 compute-0 nova_compute[244014]: 2026-02-25 14:11:41.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
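The inventory dict repeated through this audit cycle is what Placement uses to cap allocations against provider cb4dae98-2ac3-4218-9445-2320139e12ad; the effective capacity per resource class is `(total - reserved) * allocation_ratio` (standard Placement behavior, assumed here rather than spelled out in the log). Worked through with the exact numbers above:

```python
# Allocatable capacity implied by the inventory nova_compute reported above.
# Placement's rule (assumed, not logged): (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} allocatable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```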
Feb 25 14:11:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3165919734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.563 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.563 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.564 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.584 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.585 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:42 compute-0 nova_compute[244014]: 2026-02-25 14:11:42.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Feb 25 14:11:43 compute-0 ceph-mon[76335]: pgmap v4554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:11:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
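Every pool line in this `_maybe_adjust` pass follows one formula: raw pg target = usage_ratio × bias × PG budget, which is then quantized (the 1 / 16 / 32 values in the "quantized to" column). The logged ratios are consistent with a budget of 300 PGs, i.e. the default 100 PGs per OSD times this cluster's 3 OSDs; that factor is inferred from the numbers, not stated in the log. A sketch reproducing two of the lines:

```python
# Reproduce the pg_autoscaler arithmetic from the log lines above.
# PG_BUDGET = 100 * 3 (target PGs per OSD x OSD count) is inferred from
# the logged ratios, not stated in the log itself.
PG_BUDGET = 100 * 3

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
}
for name, (usage_ratio, bias) in pools.items():
    print(f"{name}: pg target {usage_ratio * bias * PG_BUDGET}")
# .mgr:               ~0.0021557249951162337 (log: quantized to 1)
# cephfs.cephfs.meta: ~0.0016699640237160273 (log: quantized to 16)
```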
Feb 25 14:11:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 14:11:45 compute-0 ceph-mon[76335]: pgmap v4555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 14:11:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:46 compute-0 nova_compute[244014]: 2026-02-25 14:11:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 14:11:47 compute-0 ceph-mon[76335]: pgmap v4556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 14:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:11:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:11:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:47 compute-0 nova_compute[244014]: 2026-02-25 14:11:47.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:11:48 compute-0 sshd-session[434835]: Connection closed by authenticating user root 46.101.242.142 port 35606 [preauth]
Feb 25 14:11:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:11:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
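Note that this client.openstack connection comes from 192.168.122.10 (a different host than the nova audit above) and pairs `df` with `osd pool get-quota` on the `volumes` pool, the usual pattern for computing usable capacity on a possibly quota-limited pool. A sketch of issuing the same command pair from Python; the `quota_max_bytes` key is standard `get-quota` JSON output and is an assumption, since the log records only the dispatched commands:

```python
# The command pair dispatched by client.openstack in the audit lines above.
# The get-quota JSON key (quota_max_bytes, where 0 means unlimited) is
# assumed from standard Ceph output, not shown in this log.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

df = ceph_json("df")
quota = ceph_json("osd", "pool", "get-quota", "volumes")
print("volumes quota_max_bytes:", quota.get("quota_max_bytes", 0) or "unlimited")
```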
Feb 25 14:11:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Feb 25 14:11:49 compute-0 ceph-mon[76335]: pgmap v4557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Feb 25 14:11:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #231. Immutable memtables: 0.
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.048405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 231
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711048497, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 1030, "num_deletes": 250, "total_data_size": 1458948, "memory_usage": 1486432, "flush_reason": "Manual Compaction"}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #232: started
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711160310, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 232, "file_size": 887714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93891, "largest_seqno": 94920, "table_properties": {"data_size": 883748, "index_size": 1617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10665, "raw_average_key_size": 20, "raw_value_size": 875137, "raw_average_value_size": 1712, "num_data_blocks": 71, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028616, "oldest_key_time": 1772028616, "file_creation_time": 1772028711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 232, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 111961 microseconds, and 4111 cpu microseconds.
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:11:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.160391) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #232: 887714 bytes OK
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.160430) [db/memtable_list.cc:519] [default] Level-0 commit table #232 started
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.333928) [db/memtable_list.cc:722] [default] Level-0 commit table #232: memtable #1 done
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.333990) EVENT_LOG_v1 {"time_micros": 1772028711333978, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334026) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 1454104, prev total WAL file size 1454104, number of live WAL files 2.
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000228.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334941) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303033' seq:72057594037927935, type:22 .. '6D6772737461740034323534' seq:0, type:0; will stop at (end)
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [232(866KB)], [230(11MB)]
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711335029, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [232], "files_L6": [230], "score": -1, "input_data_size": 12691852, "oldest_snapshot_seqno": -1}
Feb 25 14:11:51 compute-0 sshd-session[434837]: Received disconnect from 45.148.10.151 port 40486:11:  [preauth]
Feb 25 14:11:51 compute-0 sshd-session[434837]: Disconnected from authenticating user root 45.148.10.151 port 40486 [preauth]
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #233: 10238 keys, 9888267 bytes, temperature: kUnknown
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711487753, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 233, "file_size": 9888267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9828332, "index_size": 33262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 274097, "raw_average_key_size": 26, "raw_value_size": 9653141, "raw_average_value_size": 942, "num_data_blocks": 1248, "num_entries": 10238, "num_filter_entries": 10238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.488271) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 9888267 bytes
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.498420) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.0 rd, 64.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.3 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(25.4) write-amplify(11.1) OK, records in: 10709, records dropped: 471 output_compression: NoCompression
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.498454) EVENT_LOG_v1 {"time_micros": 1772028711498438, "job": 146, "event": "compaction_finished", "compaction_time_micros": 152849, "compaction_time_cpu_micros": 39740, "output_level": 6, "num_output_files": 1, "total_output_size": 9888267, "num_input_records": 10709, "num_output_records": 10238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
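The read-write-amplify(25.4) and write-amplify(11.1) figures in the JOB 146 "compacted to:" summary directly above can be reproduced from byte counts that appear verbatim in the surrounding events: the 887714-byte L0 table #232 flushed by JOB 145, the 12691852-byte "input_data_size" from compaction_started, and the 9888267-byte output table #233. A minimal Python check of that arithmetic (numbers copied from the log; the formulas are RocksDB's usual definitions, stated here as an assumption rather than read from this log):

    # Reproduce RocksDB's JOB 146 amplification figures from the
    # byte counts logged in the flush/compaction events above.
    l0_input = 887_714        # table #232 (JOB 145 flush output)
    total_input = 12_691_852  # "input_data_size" in compaction_started
    output = 9_888_267        # table #233 ("total_output_size")

    read_write_amplify = (total_input + output) / l0_input  # bytes moved per new L0 byte
    write_amplify = output / l0_input                       # bytes written per new L0 byte

    print(f"read-write-amplify({read_write_amplify:.1f})")  # -> 25.4, as logged
    print(f"write-amplify({write_amplify:.1f})")            # -> 11.1, as logged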
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000232.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711498806, "job": 146, "event": "table_file_deletion", "file_number": 232}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711501075, "job": 146, "event": "table_file_deletion", "file_number": 230}
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:51 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:11:52 compute-0 ceph-mon[76335]: pgmap v4558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:11:52 compute-0 podman[434839]: 2026-02-25 14:11:52.800659859 +0000 UTC m=+0.125707179 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:11:52 compute-0 podman[434840]: 2026-02-25 14:11:52.839235444 +0000 UTC m=+0.164052198 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 14:11:52 compute-0 nova_compute[244014]: 2026-02-25 14:11:52.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:11:53 compute-0 ceph-mon[76335]: pgmap v4559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 14:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.117 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:11:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.117 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
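The Acquiring/acquired/released triple above is the standard trace left by oslo.concurrency's named-lock wrapper around ProcessMonitor._check_child_processes. A minimal sketch of the same pattern (generic oslo usage, not neutron's actual code; the function body here is illustrative):

    # Named-lock pattern behind the lockutils DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # runs with the named lock held; the lockutils wrapper emits the
        # "Acquiring" / "acquired" / "released" lines seen in the journal
        pass

    check_child_processes()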
Feb 25 14:11:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:55 compute-0 ceph-mon[76335]: pgmap v4560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:55 compute-0 nova_compute[244014]: 2026-02-25 14:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:55 compute-0 nova_compute[244014]: 2026-02-25 14:11:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:11:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:11:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:57 compute-0 systemd[1]: Starting dnf makecache...
Feb 25 14:11:57 compute-0 ceph-mon[76335]: pgmap v4561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:11:57 compute-0 nova_compute[244014]: 2026-02-25 14:11:57.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
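The six ovsdbapp lines above (and the matching group at 14:11:52) are one full keepalive cycle of the OVS reconnect state machine: after roughly 5 s with no traffic on the OVSDB connection the client sends an inactivity probe and drops to IDLE, then any inbound data (the POLLIN wakeup) returns it to ACTIVE. A toy Python model of that cycle, offered as a simplification of the behavior in ovs/reconnect.py rather than its real API:

    # Simplified model of the probe cycle in the ovsdbapp lines above.
    import time

    PROBE_INTERVAL = 5.0  # seconds idle before probing ("idle 5003 ms")

    class Conn:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def tick(self, send_probe):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                send_probe()         # "sending inactivity probe"
                self.state = "IDLE"  # "entering IDLE"

        def on_data(self):
            # any inbound traffic ("[POLLIN] on fd 29") counts as activity
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"    # "entering ACTIVE"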
Feb 25 14:11:58 compute-0 dnf[434880]: Metadata cache refreshed recently.
Feb 25 14:11:58 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 25 14:11:58 compute-0 systemd[1]: Finished dnf makecache.
Feb 25 14:11:58 compute-0 nova_compute[244014]: 2026-02-25 14:11:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:58 compute-0 nova_compute[244014]: 2026-02-25 14:11:58.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:11:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:11:59 compute-0 ceph-mon[76335]: pgmap v4562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:01 compute-0 ceph-mon[76335]: pgmap v4563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:02 compute-0 nova_compute[244014]: 2026-02-25 14:12:02.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:02 compute-0 nova_compute[244014]: 2026-02-25 14:12:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:03 compute-0 ceph-mon[76335]: pgmap v4564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:05 compute-0 ceph-mon[76335]: pgmap v4565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:07 compute-0 ceph-mon[76335]: pgmap v4566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:07 compute-0 nova_compute[244014]: 2026-02-25 14:12:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:10 compute-0 ceph-mon[76335]: pgmap v4567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:12 compute-0 nova_compute[244014]: 2026-02-25 14:12:12.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:12 compute-0 ceph-mon[76335]: pgmap v4568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:15 compute-0 ceph-mon[76335]: pgmap v4569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:17 compute-0 ceph-mon[76335]: pgmap v4570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:17 compute-0 nova_compute[244014]: 2026-02-25 14:12:17.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:19 compute-0 ceph-mon[76335]: pgmap v4571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:21 compute-0 ceph-mon[76335]: pgmap v4572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:22 compute-0 ceph-mon[76335]: pgmap v4573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:22 compute-0 nova_compute[244014]: 2026-02-25 14:12:22.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:23 compute-0 podman[434881]: 2026-02-25 14:12:23.701777907 +0000 UTC m=+0.050430180 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 14:12:23 compute-0 podman[434882]: 2026-02-25 14:12:23.772447359 +0000 UTC m=+0.113833177 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 14:12:24 compute-0 ceph-mon[76335]: pgmap v4574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:27 compute-0 ceph-mon[76335]: pgmap v4575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:27 compute-0 nova_compute[244014]: 2026-02-25 14:12:27.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:29 compute-0 ceph-mon[76335]: pgmap v4576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:31 compute-0 ceph-mon[76335]: pgmap v4577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:12:31
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:12:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:12:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:12:32 compute-0 nova_compute[244014]: 2026-02-25 14:12:32.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:32 compute-0 nova_compute[244014]: 2026-02-25 14:12:32.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:33 compute-0 ceph-mon[76335]: pgmap v4578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:34 compute-0 ceph-mon[76335]: pgmap v4579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:36 compute-0 sshd-session[434927]: Connection closed by authenticating user root 46.101.242.142 port 51940 [preauth]
Feb 25 14:12:36 compute-0 ceph-mon[76335]: pgmap v4580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:37 compute-0 nova_compute[244014]: 2026-02-25 14:12:37.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:38 compute-0 ceph-mon[76335]: pgmap v4581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:39 compute-0 sudo[434929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:12:39 compute-0 sudo[434929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:39 compute-0 sudo[434929]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:39 compute-0 sudo[434954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:12:39 compute-0 sudo[434954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:40 compute-0 sudo[434954]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:12:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
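The handle_command/audit pairs above are cephadm's mgr module driving the mon over the ordinary mon-command interface; each {"prefix": ...} payload is the same command the CLI would send. One of them can be reproduced from any host with client credentials (standard ceph CLI; the only assumption is that a usable keyring and conf are present where this runs):

    # Issue one of the mon commands audited above via the ceph CLI.
    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True
    )
    print(minimal_conf)  # minimal [global] section: fsid and mon_host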
Feb 25 14:12:40 compute-0 sudo[435010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:12:40 compute-0 sudo[435010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:40 compute-0 sudo[435010]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:40 compute-0 sudo[435035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:12:40 compute-0 sudo[435035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.58814902 +0000 UTC m=+0.027914282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.712510574 +0000 UTC m=+0.152275816 container create 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:12:40 compute-0 systemd[1]: Started libpod-conmon-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope.
Feb 25 14:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.909040763 +0000 UTC m=+0.348806085 container init 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.920534188 +0000 UTC m=+0.360299430 container start 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:12:40 compute-0 lucid_visvesvaraya[435088]: 167 167
Feb 25 14:12:40 compute-0 systemd[1]: libpod-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope: Deactivated successfully.
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.928 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:12:40 compute-0 nova_compute[244014]: 2026-02-25 14:12:40.929 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:12:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.991400757 +0000 UTC m=+0.431166039 container attach 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 14:12:40 compute-0 podman[435071]: 2026-02-25 14:12:40.992347563 +0000 UTC m=+0.432112835 container died 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:12:41 compute-0 ceph-mon[76335]: pgmap v4582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:12:41 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fa7c5b6ff63f662091617c47609408a42e65e570deeb59b442e0e807923e451-merged.mount: Deactivated successfully.
Feb 25 14:12:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:12:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488260971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:12:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.516 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
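The Running cmd / CMD returned pair shows how nova's resource audit sizes RBD-backed storage: it shells out to the exact command in the log and reads cluster-wide totals from the JSON. A sketch of the same query (the command line is copied from the log; the "stats"/"total_avail_bytes" key names are an assumption based on recent Ceph releases, not something this log shows):

    # Re-run the "ceph df" query from the nova audit above.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]      # assumed key layout
    avail_gib = stats["total_avail_bytes"] / 1024**3
    print(f"avail {avail_gib:.2f} GiB")   # compare "59 GiB / 60 GiB avail"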
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.693 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.695 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.696 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.748 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:12:41 compute-0 nova_compute[244014]: 2026-02-25 14:12:41.764 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:12:41 compute-0 podman[435071]: 2026-02-25 14:12:41.996510867 +0000 UTC m=+1.436276149 container remove 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 14:12:42 compute-0 systemd[1]: libpod-conmon-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope: Deactivated successfully.
Feb 25 14:12:42 compute-0 podman[435155]: 2026-02-25 14:12:42.184581126 +0000 UTC m=+0.038199344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3488260971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:12:42 compute-0 podman[435155]: 2026-02-25 14:12:42.336905572 +0000 UTC m=+0.190523800 container create 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:12:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:12:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145661046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.457 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.466 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.482 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
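
The inventory dict above is what placement schedules against: for each resource class the effective capacity is (total - reserved) * allocation_ratio. A worked check against the logged values (numbers copied verbatim):

    # Effective schedulable capacity per resource class, as placement
    # derives it from the inventory reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2

So with used_vcpus=0 this host can still accept up to 32 vCPUs of instances despite having 8 physical ones, courtesy of the 4.0 overcommit ratio.
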
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.484 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.485 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:12:42 compute-0 systemd[1]: Started libpod-conmon-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope.
Feb 25 14:12:42 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
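
These kernel messages are the XFS year-2038 notice, emitted each time a bind mount lands on an XFS filesystem whose inodes lack the bigtime feature: timestamps are capped at 0x7fffffff seconds after the epoch. The cap in human terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t; XFS without the
    # bigtime feature cannot represent inode timestamps past this instant.
    cap = datetime.fromtimestamp(0x7fffffff, tz=timezone.utc)
    print(cap.isoformat())   # 2038-01-19T03:14:07+00:00
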
Feb 25 14:12:42 compute-0 podman[435155]: 2026-02-25 14:12:42.594581574 +0000 UTC m=+0.448199792 container init 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:12:42 compute-0 podman[435155]: 2026-02-25 14:12:42.604123364 +0000 UTC m=+0.457741592 container start 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:12:42 compute-0 podman[435155]: 2026-02-25 14:12:42.658766612 +0000 UTC m=+0.512384840 container attach 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:42 compute-0 nova_compute[244014]: 2026-02-25 14:12:42.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:43 compute-0 pedantic_dijkstra[435176]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:12:43 compute-0 pedantic_dijkstra[435176]: --> All data devices are unavailable
Feb 25 14:12:43 compute-0 systemd[1]: libpod-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope: Deactivated successfully.
Feb 25 14:12:43 compute-0 podman[435155]: 2026-02-25 14:12:43.065754235 +0000 UTC m=+0.919372433 container died 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd-merged.mount: Deactivated successfully.
Feb 25 14:12:43 compute-0 nova_compute[244014]: 2026-02-25 14:12:43.487 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:43 compute-0 ceph-mon[76335]: pgmap v4583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
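
The pgmap lines that the mgr publishes (and the mon echoes one version behind) repeat on every tick and are the cheapest health signal in this log. Their format is fixed enough to parse mechanically; a small regex fitted to the lines above:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v4583: 305 pgs: 305 active+clean; "
            "41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail")
    print(PGMAP.search(line).groupdict())
    # {'ver': '4583', 'pgs': '305', 'states': '305 active+clean', ...}
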
Feb 25 14:12:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1145661046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:12:43 compute-0 podman[435155]: 2026-02-25 14:12:43.854305848 +0000 UTC m=+1.707924076 container remove 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:12:43 compute-0 nova_compute[244014]: 2026-02-25 14:12:43.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:43 compute-0 nova_compute[244014]: 2026-02-25 14:12:43.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:12:43 compute-0 nova_compute[244014]: 2026-02-25 14:12:43.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:12:43 compute-0 systemd[1]: libpod-conmon-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope: Deactivated successfully.
Feb 25 14:12:43 compute-0 nova_compute[244014]: 2026-02-25 14:12:43.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:12:43 compute-0 sudo[435035]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:43 compute-0 sudo[435206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:12:43 compute-0 sudo[435206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:43 compute-0 sudo[435206]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:44 compute-0 sudo[435231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:12:44 compute-0 sudo[435231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
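
The two sudo entries show cephadm's standard remote-exec pattern: first resolve python3, then run the cephadm binary that the orchestrator copied under /var/lib/ceph/<fsid>/, which in turn wraps ceph-volume in the short-lived containers seen around it. A sketch reproducing the logged call from Python (path, fsid, and timeout copied verbatim from the entry above; the --image argument is omitted for brevity on the assumption cephadm can infer it; needs root and podman):

    import json
    import subprocess

    FSID = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    cmd = ["sudo", "python3", CEPHADM, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"]
    osds = json.loads(subprocess.run(cmd, capture_output=True,
                                     text=True, check=True).stdout)
    print(sorted(osds))   # OSD ids on this host; the JSON itself appears below
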
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:12:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
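
Each _maybe_adjust pass applies the autoscaler's sizing rule: pg_target = capacity_ratio * bias * (num_OSDs * mon_target_pg_per_osd), after which the figure is quantized to a power of two and clamped by per-pool minimums and don't-shrink hysteresis (which is why tiny ratios still land on the current 1, 16, or 32). The logged numbers reproduce exactly under the assumption, inferred from this deployment rather than stated in the log, of 3 OSDs and the default mon_target_pg_per_osd = 100:

    # Reproduce three of the pg_autoscaler targets logged above.
    # Assumed (not in the log): 3 OSDs, mon_target_pg_per_osd = 100.
    pg_budget = 3 * 100
    pools = {                      # name: (capacity_ratio, bias) from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * pg_budget)
    # .mgr               0.00215572...  (logged pg target, quantized to 1)
    # images             0.20143912...  (quantized to 32)
    # cephfs.cephfs.meta 0.00166996...  (bias 4.0, quantized to 16)
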
Feb 25 14:12:44 compute-0 podman[435269]: 2026-02-25 14:12:44.407371009 +0000 UTC m=+0.104647316 container create 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 14:12:44 compute-0 podman[435269]: 2026-02-25 14:12:44.335971806 +0000 UTC m=+0.033248093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:44 compute-0 systemd[1]: Started libpod-conmon-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope.
Feb 25 14:12:44 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:44 compute-0 podman[435269]: 2026-02-25 14:12:44.847015776 +0000 UTC m=+0.544292053 container init 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 14:12:44 compute-0 podman[435269]: 2026-02-25 14:12:44.852385398 +0000 UTC m=+0.549661635 container start 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:12:44 compute-0 charming_germain[435285]: 167 167
Feb 25 14:12:44 compute-0 systemd[1]: libpod-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope: Deactivated successfully.
Feb 25 14:12:44 compute-0 conmon[435285]: conmon 65c276f927596a44bc1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope/container/memory.events
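
This conmon <nwarn> is benign: memory.events exists only under the unified cgroup v2 hierarchy, and for a container as short-lived as this one the scope can already be torn down by the time conmon tries to open it. A quick check of which hierarchy the host runs:

    from pathlib import Path

    # cgroup.controllers at the cgroup root exists only on a unified
    # (cgroup v2) host; its absence would also explain conmon failing
    # to open .../memory.events above.
    unified = Path("/sys/fs/cgroup/cgroup.controllers").exists()
    print("cgroup v2 (unified)" if unified else "cgroup v1 (legacy)")
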
Feb 25 14:12:45 compute-0 ceph-mon[76335]: pgmap v4584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:45 compute-0 podman[435269]: 2026-02-25 14:12:45.026408259 +0000 UTC m=+0.723684556 container attach 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:12:45 compute-0 podman[435269]: 2026-02-25 14:12:45.027463459 +0000 UTC m=+0.724739666 container died 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 14:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dea3d2335efafee3d211fb84197ba7c7f0db74407fc6ad208bbf960568fc303-merged.mount: Deactivated successfully.
Feb 25 14:12:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:45 compute-0 podman[435269]: 2026-02-25 14:12:45.688424028 +0000 UTC m=+1.385700255 container remove 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:12:45 compute-0 systemd[1]: libpod-conmon-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope: Deactivated successfully.
Feb 25 14:12:45 compute-0 podman[435309]: 2026-02-25 14:12:45.867907204 +0000 UTC m=+0.035576209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:46 compute-0 podman[435309]: 2026-02-25 14:12:46.042856591 +0000 UTC m=+0.210525586 container create 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 25 14:12:46 compute-0 systemd[1]: Started libpod-conmon-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope.
Feb 25 14:12:46 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:46 compute-0 podman[435309]: 2026-02-25 14:12:46.578271002 +0000 UTC m=+0.745939997 container init 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:12:46 compute-0 podman[435309]: 2026-02-25 14:12:46.585547919 +0000 UTC m=+0.753216894 container start 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:12:46 compute-0 podman[435309]: 2026-02-25 14:12:46.698008755 +0000 UTC m=+0.865677780 container attach 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:12:46 compute-0 admiring_shamir[435326]: {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     "0": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "devices": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "/dev/loop3"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             ],
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_name": "ceph_lv0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_size": "21470642176",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "name": "ceph_lv0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "tags": {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_name": "ceph",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.crush_device_class": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.encrypted": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.objectstore": "bluestore",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_id": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.vdo": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.with_tpm": "0"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             },
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "vg_name": "ceph_vg0"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         }
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     ],
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     "1": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "devices": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "/dev/loop4"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             ],
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_name": "ceph_lv1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_size": "21470642176",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "name": "ceph_lv1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "tags": {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_name": "ceph",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.crush_device_class": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.encrypted": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.objectstore": "bluestore",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_id": "1",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.vdo": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.with_tpm": "0"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             },
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "vg_name": "ceph_vg1"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         }
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     ],
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     "2": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "devices": [
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "/dev/loop5"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             ],
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_name": "ceph_lv2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_size": "21470642176",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "name": "ceph_lv2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "tags": {
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.cluster_name": "ceph",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.crush_device_class": "",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.encrypted": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.objectstore": "bluestore",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osd_id": "2",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.vdo": "0",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:                 "ceph.with_tpm": "0"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             },
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "type": "block",
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:             "vg_name": "ceph_vg2"
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:         }
Feb 25 14:12:46 compute-0 admiring_shamir[435326]:     ]
Feb 25 14:12:46 compute-0 admiring_shamir[435326]: }
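
The admiring_shamir output above is a single JSON document that journald split across lines: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags inlined. Once the journal prefixes are stripped it parses directly; a sketch recovering the id-to-device mapping (field names exactly as logged, values abbreviated to osd.0):

    import json

    # Stand-in for the concatenated admiring_shamir message bodies above,
    # journal prefixes stripped; only osd.0 is reproduced here.
    raw = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}}]}
    """
    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(f"osd.{osd_id} -> {lv['lv_path']}"
                  f" (fsid {lv['tags']['ceph.osd_fsid']})")
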
Feb 25 14:12:46 compute-0 nova_compute[244014]: 2026-02-25 14:12:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:46 compute-0 systemd[1]: libpod-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope: Deactivated successfully.
Feb 25 14:12:46 compute-0 podman[435309]: 2026-02-25 14:12:46.90957669 +0000 UTC m=+1.077245695 container died 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:12:47 compute-0 ceph-mon[76335]: pgmap v4585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381-merged.mount: Deactivated successfully.
Feb 25 14:12:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:12:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:12:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
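
Right after df, the same client.openstack (from 192.168.122.10, a different host than the earlier nova queries) asks for the volumes pool quota; this looks like the periodic stats pass of an RBD-backed Cinder service, though the log only shows the mon side. The equivalent standalone check, pool name and command copied from the dispatch line above:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    quota = json.loads(out.stdout)
    # 0 for either field means "no quota set" on the pool.
    print(quota["quota_max_bytes"], quota["quota_max_objects"])
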
Feb 25 14:12:47 compute-0 podman[435309]: 2026-02-25 14:12:47.835458365 +0000 UTC m=+2.003127340 container remove 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:47 compute-0 nova_compute[244014]: 2026-02-25 14:12:47.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
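
This five-line cluster is one full pass of the OVS reconnect state machine: after roughly 5 s without traffic on the ovsdb-server socket the IDL sends an inactivity probe and drops to IDLE, then the reply arriving on fd 29 promotes it back to ACTIVE. A toy model of that timing, purely to illustrate the 5000 ms default visible above (this is not the ovs library API):

    # Toy model of the inactivity-probe cycle traced above.
    PROBE_INTERVAL_MS = 5000

    def next_action(idle_ms: int, probe_outstanding: bool) -> str:
        if probe_outstanding:
            return "reply received -> enter ACTIVE"   # "[POLLIN] on fd 29"
        if idle_ms >= PROBE_INTERVAL_MS:
            return "send probe -> enter IDLE"         # "idle 5002 ms, sending..."
        return f"sleep up to {PROBE_INTERVAL_MS - idle_ms} ms"

    print(next_action(5002, False))
    print(next_action(0, True))
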
Feb 25 14:12:47 compute-0 systemd[1]: libpod-conmon-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope: Deactivated successfully.
Feb 25 14:12:47 compute-0 sudo[435231]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:48 compute-0 sudo[435347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:12:48 compute-0 sudo[435347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:48 compute-0 sudo[435347]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:48 compute-0 sudo[435372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:12:48 compute-0 sudo[435372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:12:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:12:48 compute-0 podman[435408]: 2026-02-25 14:12:48.33888874 +0000 UTC m=+0.030301119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:48 compute-0 podman[435408]: 2026-02-25 14:12:48.608069887 +0000 UTC m=+0.299482266 container create 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 14:12:48 compute-0 systemd[1]: Started libpod-conmon-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope.
Feb 25 14:12:48 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:49 compute-0 podman[435408]: 2026-02-25 14:12:49.061400603 +0000 UTC m=+0.752813042 container init 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 14:12:49 compute-0 podman[435408]: 2026-02-25 14:12:49.068233287 +0000 UTC m=+0.759645676 container start 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:12:49 compute-0 elated_montalcini[435423]: 167 167
Feb 25 14:12:49 compute-0 systemd[1]: libpod-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope: Deactivated successfully.
Feb 25 14:12:49 compute-0 podman[435408]: 2026-02-25 14:12:49.367885737 +0000 UTC m=+1.059298136 container attach 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:12:49 compute-0 podman[435408]: 2026-02-25 14:12:49.368553226 +0000 UTC m=+1.059965615 container died 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 14:12:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:49 compute-0 ceph-mon[76335]: pgmap v4586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f10e33953e6a2128404ece1cbd8cc211baef386d71515ccd1ed29ac358330fab-merged.mount: Deactivated successfully.
Feb 25 14:12:50 compute-0 podman[435408]: 2026-02-25 14:12:50.317351091 +0000 UTC m=+2.008763470 container remove 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:12:50 compute-0 systemd[1]: libpod-conmon-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope: Deactivated successfully.
Feb 25 14:12:50 compute-0 podman[435448]: 2026-02-25 14:12:50.491283579 +0000 UTC m=+0.044532743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:12:50 compute-0 podman[435448]: 2026-02-25 14:12:50.671786134 +0000 UTC m=+0.225035248 container create 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:12:50 compute-0 systemd[1]: Started libpod-conmon-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope.
Feb 25 14:12:50 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:12:50 compute-0 ceph-mon[76335]: pgmap v4587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:50 compute-0 podman[435448]: 2026-02-25 14:12:50.93601497 +0000 UTC m=+0.489264134 container init 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:12:50 compute-0 podman[435448]: 2026-02-25 14:12:50.945227001 +0000 UTC m=+0.498476125 container start 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 14:12:50 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:51 compute-0 podman[435448]: 2026-02-25 14:12:51.082091509 +0000 UTC m=+0.635340683 container attach 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 14:12:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:51 compute-0 lvm[435545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:12:51 compute-0 lvm[435545]: VG ceph_vg1 finished
Feb 25 14:12:51 compute-0 lvm[435544]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:12:51 compute-0 lvm[435544]: VG ceph_vg0 finished
Feb 25 14:12:51 compute-0 lvm[435547]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:12:51 compute-0 lvm[435547]: VG ceph_vg2 finished
Feb 25 14:12:51 compute-0 hopeful_robinson[435465]: {}
Feb 25 14:12:51 compute-0 systemd[1]: libpod-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Deactivated successfully.
Feb 25 14:12:51 compute-0 systemd[1]: libpod-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Consumed 1.287s CPU time.
Feb 25 14:12:51 compute-0 podman[435448]: 2026-02-25 14:12:51.958320637 +0000 UTC m=+1.511569761 container died 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 14:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672-merged.mount: Deactivated successfully.
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:52 compute-0 nova_compute[244014]: 2026-02-25 14:12:52.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:53 compute-0 podman[435448]: 2026-02-25 14:12:53.143924261 +0000 UTC m=+2.697173345 container remove 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 14:12:53 compute-0 sudo[435372]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:12:53 compute-0 systemd[1]: libpod-conmon-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Deactivated successfully.
Feb 25 14:12:53 compute-0 ceph-mon[76335]: pgmap v4588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:53 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:12:53 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:53 compute-0 sudo[435562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:12:53 compute-0 sudo[435562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:12:53 compute-0 sudo[435562]: pam_unix(sudo:session): session closed for user root
Feb 25 14:12:54 compute-0 podman[435587]: 2026-02-25 14:12:54.747582221 +0000 UTC m=+0.079590416 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 14:12:54 compute-0 ceph-mon[76335]: pgmap v4589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:54 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:12:54 compute-0 podman[435588]: 2026-02-25 14:12:54.794003066 +0000 UTC m=+0.125716133 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 14:12:54 compute-0 nova_compute[244014]: 2026-02-25 14:12:54.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:12:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:12:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:55 compute-0 nova_compute[244014]: 2026-02-25 14:12:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:12:55 compute-0 nova_compute[244014]: 2026-02-25 14:12:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:12:55 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:12:57 compute-0 ceph-mon[76335]: pgmap v4590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:12:57 compute-0 nova_compute[244014]: 2026-02-25 14:12:57.907 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:12:59 compute-0 ceph-mon[76335]: pgmap v4591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:12:59 compute-0 nova_compute[244014]: 2026-02-25 14:12:59.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:00 compute-0 nova_compute[244014]: 2026-02-25 14:13:00.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:00 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:01 compute-0 ceph-mon[76335]: pgmap v4592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:02 compute-0 nova_compute[244014]: 2026-02-25 14:13:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:02 compute-0 nova_compute[244014]: 2026-02-25 14:13:02.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:03 compute-0 ceph-mon[76335]: pgmap v4593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:04 compute-0 ceph-mon[76335]: pgmap v4594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:05 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:06 compute-0 ceph-mon[76335]: pgmap v4595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:07 compute-0 nova_compute[244014]: 2026-02-25 14:13:07.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:09 compute-0 ceph-mon[76335]: pgmap v4596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:10 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:11 compute-0 ceph-mon[76335]: pgmap v4597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:11 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #234. Immutable memtables: 0.
Feb 25 14:13:11 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:11.726881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 14:13:11 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 234
Feb 25 14:13:11 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028791726946, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 863, "num_deletes": 251, "total_data_size": 1197496, "memory_usage": 1224064, "flush_reason": "Manual Compaction"}
Feb 25 14:13:11 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #235: started
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792000215, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 235, "file_size": 1186533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94921, "largest_seqno": 95783, "table_properties": {"data_size": 1182076, "index_size": 2108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9605, "raw_average_key_size": 19, "raw_value_size": 1173293, "raw_average_value_size": 2409, "num_data_blocks": 92, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028712, "oldest_key_time": 1772028712, "file_creation_time": 1772028791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 235, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 273419 microseconds, and 6215 cpu microseconds.
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.000293) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #235: 1186533 bytes OK
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.000329) [db/memtable_list.cc:519] [default] Level-0 commit table #235 started
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100476) [db/memtable_list.cc:722] [default] Level-0 commit table #235: memtable #1 done
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100558) EVENT_LOG_v1 {"time_micros": 1772028792100541, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100603) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 1193268, prev total WAL file size 1220087, number of live WAL files 2.
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000231.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.101743) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [235(1158KB)], [233(9656KB)]
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792101805, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [235], "files_L6": [233], "score": -1, "input_data_size": 11074800, "oldest_snapshot_seqno": -1}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #236: 10211 keys, 9270018 bytes, temperature: kUnknown
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792379133, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 236, "file_size": 9270018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9210830, "index_size": 32579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 274275, "raw_average_key_size": 26, "raw_value_size": 9036549, "raw_average_value_size": 884, "num_data_blocks": 1210, "num_entries": 10211, "num_filter_entries": 10211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.380195) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 9270018 bytes
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.480434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.9 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(17.1) write-amplify(7.8) OK, records in: 10725, records dropped: 514 output_compression: NoCompression
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.480488) EVENT_LOG_v1 {"time_micros": 1772028792480466, "job": 148, "event": "compaction_finished", "compaction_time_micros": 277416, "compaction_time_cpu_micros": 46278, "output_level": 6, "num_output_files": 1, "total_output_size": 9270018, "num_input_records": 10725, "num_output_records": 10211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000235.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792480975, "job": 148, "event": "table_file_deletion", "file_number": 235}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792483455, "job": 148, "event": "table_file_deletion", "file_number": 233}
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.101594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5041 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:12 compute-0 nova_compute[244014]: 2026-02-25 14:13:12.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:13 compute-0 ceph-mon[76335]: pgmap v4598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:15 compute-0 ceph-mon[76335]: pgmap v4599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:15 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:17 compute-0 ceph-mon[76335]: pgmap v4600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:17 compute-0 nova_compute[244014]: 2026-02-25 14:13:17.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:19 compute-0 ceph-mon[76335]: pgmap v4601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:20 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:21 compute-0 ceph-mon[76335]: pgmap v4602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:22 compute-0 nova_compute[244014]: 2026-02-25 14:13:22.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:23 compute-0 sshd-session[435631]: Connection closed by authenticating user root 46.101.242.142 port 34912 [preauth]
Feb 25 14:13:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:23 compute-0 ceph-mon[76335]: pgmap v4603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:24 compute-0 ceph-mon[76335]: pgmap v4604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:25 compute-0 podman[435633]: 2026-02-25 14:13:25.738490823 +0000 UTC m=+0.074806770 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 14:13:25 compute-0 podman[435634]: 2026-02-25 14:13:25.770751938 +0000 UTC m=+0.102922048 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_controller)
Feb 25 14:13:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:26 compute-0 ceph-mon[76335]: pgmap v4605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:27 compute-0 nova_compute[244014]: 2026-02-25 14:13:27.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:29 compute-0 ceph-mon[76335]: pgmap v4606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:13:31
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr']
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:13:31 compute-0 ceph-mon[76335]: pgmap v4607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:13:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:13:31 compute-0 nova_compute[244014]: 2026-02-25 14:13:31.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:13:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:13:32 compute-0 nova_compute[244014]: 2026-02-25 14:13:32.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:33 compute-0 ceph-mon[76335]: pgmap v4608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:34 compute-0 ceph-mon[76335]: pgmap v4609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:36 compute-0 ceph-mon[76335]: pgmap v4610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:37 compute-0 nova_compute[244014]: 2026-02-25 14:13:37.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:37 compute-0 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:37 compute-0 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:13:37 compute-0 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:38 compute-0 nova_compute[244014]: 2026-02-25 14:13:38.004 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:38 compute-0 nova_compute[244014]: 2026-02-25 14:13:38.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:39 compute-0 ceph-mon[76335]: pgmap v4611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:13:40 compute-0 nova_compute[244014]: 2026-02-25 14:13:40.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:13:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:41 compute-0 ceph-mon[76335]: pgmap v4612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:13:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511290746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.455 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:13:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.635 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.708 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.709 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:13:41 compute-0 nova_compute[244014]: 2026-02-25 14:13:41.726 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:13:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:13:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220818318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:13:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1511290746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:13:42 compute-0 nova_compute[244014]: 2026-02-25 14:13:42.237 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:13:42 compute-0 nova_compute[244014]: 2026-02-25 14:13:42.244 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:13:42 compute-0 nova_compute[244014]: 2026-02-25 14:13:42.264 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:13:42 compute-0 nova_compute[244014]: 2026-02-25 14:13:42.266 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:13:42 compute-0 nova_compute[244014]: 2026-02-25 14:13:42.267 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:13:43 compute-0 nova_compute[244014]: 2026-02-25 14:13:43.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:43 compute-0 ceph-mon[76335]: pgmap v4613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:43 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/220818318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:13:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:13:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:13:44 compute-0 nova_compute[244014]: 2026-02-25 14:13:44.268 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:44 compute-0 nova_compute[244014]: 2026-02-25 14:13:44.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:44 compute-0 nova_compute[244014]: 2026-02-25 14:13:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:13:44 compute-0 nova_compute[244014]: 2026-02-25 14:13:44.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:13:44 compute-0 nova_compute[244014]: 2026-02-25 14:13:44.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:13:45 compute-0 ceph-mon[76335]: pgmap v4614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:46 compute-0 nova_compute[244014]: 2026-02-25 14:13:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:47 compute-0 ceph-mon[76335]: pgmap v4615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:13:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:13:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:13:48 compute-0 nova_compute[244014]: 2026-02-25 14:13:48.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:48 compute-0 nova_compute[244014]: 2026-02-25 14:13:48.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:13:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:13:49 compute-0 ceph-mon[76335]: pgmap v4616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:50 compute-0 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000093s
Feb 25 14:13:50 compute-0 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.004726 took=0.000069s
Feb 25 14:13:50 compute-0 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004789 took=0.000056s
Feb 25 14:13:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:51 compute-0 ceph-mon[76335]: pgmap v4617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:53 compute-0 nova_compute[244014]: 2026-02-25 14:13:53.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:13:53 compute-0 ceph-mon[76335]: pgmap v4618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:53 compute-0 sudo[435724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:13:53 compute-0 sudo[435724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:53 compute-0 sudo[435724]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:53 compute-0 sudo[435749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:13:53 compute-0 sudo[435749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:54 compute-0 sudo[435749]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:13:54 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:13:54 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:13:54 compute-0 sudo[435805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:13:54 compute-0 sudo[435805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:54 compute-0 sudo[435805]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:54 compute-0 sudo[435830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:13:54 compute-0 sudo[435830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.725888445 +0000 UTC m=+0.066020662 container create b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:13:54 compute-0 systemd[1]: Started libpod-conmon-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope.
Feb 25 14:13:54 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.695131204 +0000 UTC m=+0.035263531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.797907976 +0000 UTC m=+0.138040213 container init b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.806306334 +0000 UTC m=+0.146438551 container start b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 14:13:54 compute-0 youthful_wing[435884]: 167 167
Feb 25 14:13:54 compute-0 systemd[1]: libpod-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope: Deactivated successfully.
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.811568763 +0000 UTC m=+0.151701000 container attach b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.812372286 +0000 UTC m=+0.152504503 container died b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-012abd3ad92828d5ef3c8e5158e263f3cedf4c8123e29b2e8536ac11b86187c3-merged.mount: Deactivated successfully.
Feb 25 14:13:54 compute-0 podman[435868]: 2026-02-25 14:13:54.85417492 +0000 UTC m=+0.194307137 container remove b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:13:54 compute-0 systemd[1]: libpod-conmon-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope: Deactivated successfully.
Feb 25 14:13:54 compute-0 podman[435908]: 2026-02-25 14:13:54.979128911 +0000 UTC m=+0.038085270 container create 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:13:55 compute-0 systemd[1]: Started libpod-conmon-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope.
Feb 25 14:13:55 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:55 compute-0 podman[435908]: 2026-02-25 14:13:54.963896829 +0000 UTC m=+0.022853198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:55 compute-0 podman[435908]: 2026-02-25 14:13:55.07190187 +0000 UTC m=+0.130858269 container init 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:13:55 compute-0 podman[435908]: 2026-02-25 14:13:55.079115034 +0000 UTC m=+0.138071393 container start 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 25 14:13:55 compute-0 podman[435908]: 2026-02-25 14:13:55.084473736 +0000 UTC m=+0.143430095 container attach 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 14:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.120 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.123 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:13:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.124 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:13:55 compute-0 ceph-mon[76335]: pgmap v4619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:13:55 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:13:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:55 compute-0 jovial_ritchie[435924]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:13:55 compute-0 jovial_ritchie[435924]: --> All data devices are unavailable
Feb 25 14:13:55 compute-0 systemd[1]: libpod-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope: Deactivated successfully.
Feb 25 14:13:55 compute-0 podman[435944]: 2026-02-25 14:13:55.63908549 +0000 UTC m=+0.025610417 container died 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 14:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee-merged.mount: Deactivated successfully.
Feb 25 14:13:55 compute-0 podman[435944]: 2026-02-25 14:13:55.677222381 +0000 UTC m=+0.063747308 container remove 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 14:13:55 compute-0 systemd[1]: libpod-conmon-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope: Deactivated successfully.
Feb 25 14:13:55 compute-0 sudo[435830]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:55 compute-0 sudo[435959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:13:55 compute-0 sudo[435959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:55 compute-0 sudo[435959]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:55 compute-0 sudo[435992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:13:55 compute-0 nova_compute[244014]: 2026-02-25 14:13:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:55 compute-0 nova_compute[244014]: 2026-02-25 14:13:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 14:13:55 compute-0 sudo[435992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:55 compute-0 podman[435983]: 2026-02-25 14:13:55.923786757 +0000 UTC m=+0.104866262 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 14:13:55 compute-0 podman[435984]: 2026-02-25 14:13:55.923971792 +0000 UTC m=+0.106339984 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 14:13:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.208631948 +0000 UTC m=+0.066562657 container create ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:13:56 compute-0 systemd[1]: Started libpod-conmon-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope.
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.181919931 +0000 UTC m=+0.039850700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.2997454 +0000 UTC m=+0.157676089 container init ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.307786128 +0000 UTC m=+0.165716797 container start ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.310638389 +0000 UTC m=+0.168569088 container attach ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:13:56 compute-0 stoic_wiles[436079]: 167 167
Feb 25 14:13:56 compute-0 systemd[1]: libpod-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope: Deactivated successfully.
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.31634502 +0000 UTC m=+0.174275719 container died ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-39b768e91db09a002aab99065c9c4844bfc92ccbeeef6896586932ee31e1b9bb-merged.mount: Deactivated successfully.
Feb 25 14:13:56 compute-0 podman[436063]: 2026-02-25 14:13:56.355644944 +0000 UTC m=+0.213575613 container remove ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:13:56 compute-0 systemd[1]: libpod-conmon-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope: Deactivated successfully.
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.506311623 +0000 UTC m=+0.039483500 container create f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 14:13:56 compute-0 systemd[1]: Started libpod-conmon-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope.
Feb 25 14:13:56 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.489745384 +0000 UTC m=+0.022917301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.603018693 +0000 UTC m=+0.136190650 container init f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.612151372 +0000 UTC m=+0.145323259 container start f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.616796964 +0000 UTC m=+0.149968831 container attach f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:56 compute-0 nova_compute[244014]: 2026-02-25 14:13:56.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]: {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     "0": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "devices": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "/dev/loop3"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             ],
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_name": "ceph_lv0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_size": "21470642176",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "name": "ceph_lv0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "tags": {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_name": "ceph",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.crush_device_class": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.encrypted": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.objectstore": "bluestore",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_id": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.vdo": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.with_tpm": "0"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             },
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "vg_name": "ceph_vg0"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         }
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     ],
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     "1": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "devices": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "/dev/loop4"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             ],
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_name": "ceph_lv1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_size": "21470642176",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "name": "ceph_lv1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "tags": {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_name": "ceph",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.crush_device_class": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.encrypted": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.objectstore": "bluestore",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_id": "1",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.vdo": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.with_tpm": "0"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             },
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "vg_name": "ceph_vg1"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         }
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     ],
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     "2": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "devices": [
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "/dev/loop5"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             ],
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_name": "ceph_lv2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_size": "21470642176",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "name": "ceph_lv2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "tags": {
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.cluster_name": "ceph",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.crush_device_class": "",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.encrypted": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.objectstore": "bluestore",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osd_id": "2",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.vdo": "0",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:                 "ceph.with_tpm": "0"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             },
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "type": "block",
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:             "vg_name": "ceph_vg2"
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:         }
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]:     ]
Feb 25 14:13:56 compute-0 intelligent_jemison[436120]: }
Feb 25 14:13:56 compute-0 systemd[1]: libpod-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope: Deactivated successfully.
Feb 25 14:13:56 compute-0 podman[436103]: 2026-02-25 14:13:56.952013682 +0000 UTC m=+0.485185579 container died f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387-merged.mount: Deactivated successfully.
Feb 25 14:13:57 compute-0 podman[436103]: 2026-02-25 14:13:57.001532165 +0000 UTC m=+0.534704032 container remove f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:13:57 compute-0 systemd[1]: libpod-conmon-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope: Deactivated successfully.
Feb 25 14:13:57 compute-0 sudo[435992]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:57 compute-0 sudo[436141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:13:57 compute-0 sudo[436141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:57 compute-0 sudo[436141]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:57 compute-0 sudo[436166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:13:57 compute-0 sudo[436166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:57 compute-0 ceph-mon[76335]: pgmap v4620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.489810651 +0000 UTC m=+0.045772968 container create d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:13:57 compute-0 systemd[1]: Started libpod-conmon-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope.
Feb 25 14:13:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.562334566 +0000 UTC m=+0.118296953 container init d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.476097293 +0000 UTC m=+0.032059630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.571935098 +0000 UTC m=+0.127897455 container start d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.575601662 +0000 UTC m=+0.131564069 container attach d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:13:57 compute-0 brave_mayer[436219]: 167 167
Feb 25 14:13:57 compute-0 systemd[1]: libpod-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope: Deactivated successfully.
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.578825143 +0000 UTC m=+0.134787500 container died d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f0340b08b862b97346ac964a3d6b5fb6fc8d5da19bd7ca90cabd16ebaf10e7-merged.mount: Deactivated successfully.
Feb 25 14:13:57 compute-0 podman[436203]: 2026-02-25 14:13:57.621519733 +0000 UTC m=+0.177482080 container remove d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:57 compute-0 systemd[1]: libpod-conmon-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope: Deactivated successfully.
Feb 25 14:13:57 compute-0 podman[436245]: 2026-02-25 14:13:57.788570467 +0000 UTC m=+0.041423605 container create 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:13:57 compute-0 systemd[1]: Started libpod-conmon-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope.
Feb 25 14:13:57 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:13:57 compute-0 podman[436245]: 2026-02-25 14:13:57.770310069 +0000 UTC m=+0.023163207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:13:57 compute-0 podman[436245]: 2026-02-25 14:13:57.873797162 +0000 UTC m=+0.126650280 container init 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 14:13:57 compute-0 podman[436245]: 2026-02-25 14:13:57.881347086 +0000 UTC m=+0.134200204 container start 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:13:57 compute-0 podman[436245]: 2026-02-25 14:13:57.884814794 +0000 UTC m=+0.137667972 container attach 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:13:58 compute-0 nova_compute[244014]: 2026-02-25 14:13:58.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:13:58 compute-0 lvm[436340]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:13:58 compute-0 lvm[436337]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:13:58 compute-0 lvm[436337]: VG ceph_vg0 finished
Feb 25 14:13:58 compute-0 lvm[436340]: VG ceph_vg1 finished
Feb 25 14:13:58 compute-0 lvm[436342]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:13:58 compute-0 lvm[436342]: VG ceph_vg2 finished
Feb 25 14:13:58 compute-0 strange_bose[436261]: {}
Feb 25 14:13:58 compute-0 podman[436245]: 2026-02-25 14:13:58.63497873 +0000 UTC m=+0.887831888 container died 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:13:58 compute-0 systemd[1]: libpod-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Deactivated successfully.
Feb 25 14:13:58 compute-0 systemd[1]: libpod-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Consumed 1.003s CPU time.
Feb 25 14:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d-merged.mount: Deactivated successfully.
Feb 25 14:13:58 compute-0 podman[436245]: 2026-02-25 14:13:58.678486922 +0000 UTC m=+0.931340040 container remove 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 14:13:58 compute-0 systemd[1]: libpod-conmon-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Deactivated successfully.
Feb 25 14:13:58 compute-0 sudo[436166]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:13:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:58 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:13:58 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:58 compute-0 sudo[436355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:13:58 compute-0 sudo[436355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:13:58 compute-0 sudo[436355]: pam_unix(sudo:session): session closed for user root
Feb 25 14:13:59 compute-0 ceph-mon[76335]: pgmap v4621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:13:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:13:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:00 compute-0 nova_compute[244014]: 2026-02-25 14:14:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:01 compute-0 ceph-mon[76335]: pgmap v4622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:02 compute-0 nova_compute[244014]: 2026-02-25 14:14:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:03 compute-0 nova_compute[244014]: 2026-02-25 14:14:03.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:03 compute-0 ceph-mon[76335]: pgmap v4623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:03 compute-0 nova_compute[244014]: 2026-02-25 14:14:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:05 compute-0 ceph-mon[76335]: pgmap v4624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:07 compute-0 ceph-mon[76335]: pgmap v4625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:08 compute-0 nova_compute[244014]: 2026-02-25 14:14:08.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:09 compute-0 ceph-mon[76335]: pgmap v4626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:11 compute-0 sshd-session[436380]: Connection closed by authenticating user root 46.101.242.142 port 37916 [preauth]
Feb 25 14:14:11 compute-0 ceph-mon[76335]: pgmap v4627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:13 compute-0 nova_compute[244014]: 2026-02-25 14:14:13.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:13 compute-0 ceph-mon[76335]: pgmap v4628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:15 compute-0 ceph-mon[76335]: pgmap v4629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:17 compute-0 ceph-mon[76335]: pgmap v4630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.063 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:18 compute-0 nova_compute[244014]: 2026-02-25 14:14:18.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:19 compute-0 ceph-mon[76335]: pgmap v4631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:21 compute-0 ceph-mon[76335]: pgmap v4632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:23 compute-0 nova_compute[244014]: 2026-02-25 14:14:23.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:23 compute-0 ceph-mon[76335]: pgmap v4633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:24 compute-0 ceph-mon[76335]: pgmap v4634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:26 compute-0 ceph-mon[76335]: pgmap v4635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:26 compute-0 podman[436382]: 2026-02-25 14:14:26.769884957 +0000 UTC m=+0.108525366 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 25 14:14:26 compute-0 podman[436383]: 2026-02-25 14:14:26.773442767 +0000 UTC m=+0.107136676 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 14:14:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:28 compute-0 nova_compute[244014]: 2026-02-25 14:14:28.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:28 compute-0 ceph-mon[76335]: pgmap v4636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:30 compute-0 ceph-mon[76335]: pgmap v4637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:14:31
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.log']
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:14:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:14:32 compute-0 ceph-mon[76335]: pgmap v4638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:14:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.176 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:33 compute-0 nova_compute[244014]: 2026-02-25 14:14:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:34 compute-0 ceph-mon[76335]: pgmap v4639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:36 compute-0 ceph-mon[76335]: pgmap v4640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:38 compute-0 nova_compute[244014]: 2026-02-25 14:14:38.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:38 compute-0 ceph-mon[76335]: pgmap v4641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:40 compute-0 ceph-mon[76335]: pgmap v4642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:14:40 compute-0 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:14:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730122595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.447 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.572 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:14:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:14:41 compute-0 nova_compute[244014]: 2026-02-25 14:14:41.741 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:14:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/730122595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:14:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:14:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176816103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:14:42 compute-0 nova_compute[244014]: 2026-02-25 14:14:42.248 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:14:42 compute-0 nova_compute[244014]: 2026-02-25 14:14:42.255 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:14:42 compute-0 nova_compute[244014]: 2026-02-25 14:14:42.272 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:14:42 compute-0 nova_compute[244014]: 2026-02-25 14:14:42.276 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:14:42 compute-0 nova_compute[244014]: 2026-02-25 14:14:42.276 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
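[annotation] The inventory just pushed to Placement determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. A worked sketch using the exact values from the 14:14:42 inventory line (Placement itself floors the result to an integer):

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} allocatable units")

# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 -- CPU is overcommitted 4x
# while disk is deliberately under-committed (ratio 0.9).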
Feb 25 14:14:42 compute-0 ceph-mon[76335]: pgmap v4643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4176816103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:43 compute-0 nova_compute[244014]: 2026-02-25 14:14:43.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
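[annotation] These six ovsdbapp lines are one pass of the python-ovs inactivity-probe cycle: after roughly 5 s of silence the client sends a probe and drops to IDLE, and the POLLIN reply promotes it back to ACTIVE. A toy model of that state machine, simplified from the behavior shown here (the real logic lives in ovs/reconnect.py):

PROBE_INTERVAL_MS = 5000  # default ovsdb client probe interval

class InactivityProbe:
    def __init__(self):
        self.state, self.last_activity_ms = "ACTIVE", 0

    def tick(self, now_ms, received_data=False):
        if received_data:
            # Any inbound data (the POLLIN above) counts as liveness.
            self.state, self.last_activity_ms = "ACTIVE", now_ms
        elif now_ms - self.last_activity_ms >= PROBE_INTERVAL_MS:
            if self.state == "ACTIVE":
                self.state = "IDLE"   # probe sent, now waiting for the echo
                self.last_activity_ms = now_ms
            else:
                self.state = "DEAD"   # no reply in time: force a reconnect
        return self.state

p = InactivityProbe()
print(p.tick(5004))                      # IDLE  (matches "idle 5004 ms")
print(p.tick(5053, received_data=True))  # ACTIVE (matches the POLLIN reply)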
Feb 25 14:14:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:14:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
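[annotation] Each pg_autoscaler line above is the product usage_ratio * bias * PG budget, where the budget on this 3-OSD cluster is 3 * mon_target_pg_per_osd (assumed at its default of 100, which reproduces the logged numbers exactly). A sketch that checks two of them; the quantize/threshold step is simplified:

TARGET_PG_PER_OSD, N_OSDS = 100, 3

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * TARGET_PG_PER_OSD * N_OSDS

# 'images' pool, bias 1.0:
assert abs(pg_target(0.0006714637386478266, 1.0) - 0.20143912159434796) < 1e-12
# 'cephfs.cephfs.meta' carries bias 4.0 so metadata PGs are favored:
assert abs(pg_target(1.3916366864300228e-06, 4.0) - 0.0016699640237160273) < 1e-12

# The raw target is then rounded to a power of two and only applied when it
# differs from the current pg_num by more than the change threshold (3x by
# default), which is why every pool stays at its current value here.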
Feb 25 14:14:44 compute-0 nova_compute[244014]: 2026-02-25 14:14:44.264 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:44 compute-0 ceph-mon[76335]: pgmap v4644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:45 compute-0 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:45 compute-0 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:14:45 compute-0 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:14:45 compute-0 nova_compute[244014]: 2026-02-25 14:14:45.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:14:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:46 compute-0 ceph-mon[76335]: pgmap v4645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:46 compute-0 nova_compute[244014]: 2026-02-25 14:14:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:14:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:14:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:14:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:14:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:14:48 compute-0 nova_compute[244014]: 2026-02-25 14:14:48.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:48 compute-0 nova_compute[244014]: 2026-02-25 14:14:48.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:48 compute-0 ceph-mon[76335]: pgmap v4646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:49 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:50 compute-0 ceph-mon[76335]: pgmap v4647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:51 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:51 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:52 compute-0 ceph-mon[76335]: pgmap v4648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:14:53 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 14:14:53 compute-0 nova_compute[244014]: 2026-02-25 14:14:53.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 14:14:54 compute-0 ceph-mon[76335]: pgmap v4649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.121 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.122 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:14:55 compute-0 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.122 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:14:55 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:56 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:14:56 compute-0 ceph-mon[76335]: pgmap v4650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:57 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:57 compute-0 podman[436472]: 2026-02-25 14:14:57.754528683 +0000 UTC m=+0.086769679 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 14:14:57 compute-0 podman[436473]: 2026-02-25 14:14:57.790429921 +0000 UTC m=+0.121596377 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
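[annotation] Both health_status=healthy events come from the containers' configured test (/openstack/healthcheck, per the config_data above). A hedged sketch of polling the same state by hand; note the inspect key may be .State.Healthcheck.Status on older podman releases:

import subprocess

def container_health(name):
    # `podman healthcheck run` executes the configured test and exits 0
    # when the container is healthy.
    healthy = subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    ).stdout.strip()
    return healthy, status

print(container_health("ovn_metadata_agent"))
print(container_health("ovn_controller"))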
Feb 25 14:14:57 compute-0 nova_compute[244014]: 2026-02-25 14:14:57.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:57 compute-0 nova_compute[244014]: 2026-02-25 14:14:57.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
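[annotation] The "skipping" message is the soft-delete guard: deleted instances are only queued for later reclamation when reclaim_instance_interval is raised above its default of 0 in nova.conf. A sketch of the guard with oslo.config (simplified from the ComputeManager behavior logged here):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.IntOpt("reclaim_instance_interval", default=0))

# With the default of 0, instance deletion is immediate and this periodic
# task has nothing to do, producing the debug line above.
if CONF.reclaim_instance_interval <= 0:
    print("CONF.reclaim_instance_interval <= 0, skipping...")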
Feb 25 14:14:58 compute-0 nova_compute[244014]: 2026-02-25 14:14:58.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:14:58 compute-0 nova_compute[244014]: 2026-02-25 14:14:58.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:14:58 compute-0 sudo[436513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:14:58 compute-0 sudo[436513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:14:58 compute-0 sudo[436513]: pam_unix(sudo:session): session closed for user root
Feb 25 14:14:58 compute-0 ceph-mon[76335]: pgmap v4651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:58 compute-0 sudo[436538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 25 14:14:58 compute-0 sudo[436538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:14:59 compute-0 sudo[436538]: pam_unix(sudo:session): session closed for user root
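[annotation] This sudo pair is the fingerprint of the mgr's cephadm module: it connects as ceph-admin, locates python3, then runs the copied cephadm binary. A sketch of the same gather-facts call made locally (path copied verbatim from the sudo line; the output being JSON with a "hostname" key is an assumption to verify):

import json
import subprocess

CEPHADM = ("/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/"
           "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

facts = json.loads(subprocess.run(
    ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True,
).stdout)
print(facts.get("hostname"))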
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:14:59 compute-0 sudo[436595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:14:59 compute-0 sudo[436595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:14:59 compute-0 sudo[436595]: pam_unix(sudo:session): session closed for user root
Feb 25 14:14:59 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:14:59 compute-0 sudo[436620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 25 14:14:59 compute-0 sudo[436620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:14:59 compute-0 podman[436658]: 2026-02-25 14:14:59.950601158 +0000 UTC m=+0.068769559 container create ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 14:14:59 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:14:59 compute-0 systemd[1]: Started libpod-conmon-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope.
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:14:59.917542771 +0000 UTC m=+0.035711212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:15:00.058151386 +0000 UTC m=+0.176319797 container init ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:15:00.065480563 +0000 UTC m=+0.183648944 container start ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:15:00 compute-0 festive_morse[436675]: 167 167
Feb 25 14:15:00 compute-0 systemd[1]: libpod-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope: Deactivated successfully.
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:15:00.207368894 +0000 UTC m=+0.325537295 container attach ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:15:00.209083102 +0000 UTC m=+0.327251543 container died ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a607218c823626690fb9373365c5dc8ec19f069dd923076690906ff935d470b9-merged.mount: Deactivated successfully.
Feb 25 14:15:00 compute-0 podman[436658]: 2026-02-25 14:15:00.295441819 +0000 UTC m=+0.413610180 container remove ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:15:00 compute-0 systemd[1]: libpod-conmon-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope: Deactivated successfully.
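[annotation] The create/init/start/attach/died/remove sequence for festive_morse is the journald signature of a one-shot, auto-removed container; cephadm uses such containers as short-lived helpers. A sketch of an equivalent invocation (image digest copied from the log; the `true` entrypoint is illustrative, since the lifecycle events do not record the real command):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# --rm yields exactly the container create/init/start/attach/died/remove
# event sequence captured above.
subprocess.run(["podman", "run", "--rm", IMAGE, "true"], check=True)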
Feb 25 14:15:00 compute-0 sshd-session[436576]: Connection closed by authenticating user root 46.101.242.142 port 45790 [preauth]
Feb 25 14:15:00 compute-0 podman[436700]: 2026-02-25 14:15:00.44545759 +0000 UTC m=+0.024393662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:00 compute-0 podman[436700]: 2026-02-25 14:15:00.604481966 +0000 UTC m=+0.183417998 container create 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:15:00 compute-0 systemd[1]: Started libpod-conmon-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope.
Feb 25 14:15:00 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:00 compute-0 podman[436700]: 2026-02-25 14:15:00.720753631 +0000 UTC m=+0.299689683 container init 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:15:00 compute-0 podman[436700]: 2026-02-25 14:15:00.729726615 +0000 UTC m=+0.308662637 container start 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:15:00 compute-0 podman[436700]: 2026-02-25 14:15:00.733725388 +0000 UTC m=+0.312661500 container attach 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Feb 25 14:15:00 compute-0 nova_compute[244014]: 2026-02-25 14:15:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:00 compute-0 ceph-mon[76335]: pgmap v4652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:01 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:01 compute-0 serene_kepler[436717]: --> passed data devices: 0 physical, 3 LVM
Feb 25 14:15:01 compute-0 serene_kepler[436717]: --> All data devices are unavailable
Feb 25 14:15:01 compute-0 systemd[1]: libpod-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope: Deactivated successfully.
Feb 25 14:15:01 compute-0 podman[436700]: 2026-02-25 14:15:01.15164021 +0000 UTC m=+0.730576282 container died 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 14:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071-merged.mount: Deactivated successfully.
Feb 25 14:15:01 compute-0 podman[436700]: 2026-02-25 14:15:01.206190446 +0000 UTC m=+0.785126518 container remove 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 14:15:01 compute-0 systemd[1]: libpod-conmon-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope: Deactivated successfully.
Feb 25 14:15:01 compute-0 sudo[436620]: pam_unix(sudo:session): session closed for user root
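[annotation] "All data devices are unavailable" is benign here: the batch call skips LVs whose lv_tags already mark them as prepared OSDs, which the lvm list run that follows confirms. A sketch that makes the same check explicit, assuming the JSON shape shown in the zealous_turing output below (top-level OSD ids mapping to LV records); run as root on a cephadm host:

import json
import subprocess

out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        # An LV whose tags already carry ceph.osd_id is "unavailable" to
        # `lvm batch`, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 here.
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")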
Feb 25 14:15:01 compute-0 sudo[436751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:15:01 compute-0 sudo[436751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:15:01 compute-0 sudo[436751]: pam_unix(sudo:session): session closed for user root
Feb 25 14:15:01 compute-0 sudo[436776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- lvm list --format json
Feb 25 14:15:01 compute-0 sudo[436776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.687790132 +0000 UTC m=+0.061640317 container create f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:01 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:01 compute-0 systemd[1]: Started libpod-conmon-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope.
Feb 25 14:15:01 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.660370795 +0000 UTC m=+0.034221020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.766543634 +0000 UTC m=+0.140393829 container init f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.772455531 +0000 UTC m=+0.146305676 container start f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 14:15:01 compute-0 cool_cerf[436830]: 167 167
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.776735892 +0000 UTC m=+0.150586147 container attach f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:15:01 compute-0 systemd[1]: libpod-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope: Deactivated successfully.
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.778083981 +0000 UTC m=+0.151934156 container died f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-839454d28033263655fa5d431d7b6b38331cad0e539b894b58eb250230d8d9d9-merged.mount: Deactivated successfully.
Feb 25 14:15:01 compute-0 podman[436813]: 2026-02-25 14:15:01.832211144 +0000 UTC m=+0.206061299 container remove f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 14:15:01 compute-0 systemd[1]: libpod-conmon-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope: Deactivated successfully.
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.030320138 +0000 UTC m=+0.062095421 container create 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:15:02 compute-0 systemd[1]: Started libpod-conmon-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope.
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:01.996975403 +0000 UTC m=+0.028750606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:02 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.133490621 +0000 UTC m=+0.165265804 container init 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.142877697 +0000 UTC m=+0.174652880 container start 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.147663793 +0000 UTC m=+0.179438986 container attach 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 14:15:02 compute-0 zealous_turing[436870]: {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     "0": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "devices": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "/dev/loop3"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             ],
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_name": "ceph_lv0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_size": "21470642176",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "name": "ceph_lv0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "tags": {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_name": "ceph",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.crush_device_class": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.encrypted": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.objectstore": "bluestore",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_id": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.vdo": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.with_tpm": "0"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             },
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "vg_name": "ceph_vg0"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         }
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     ],
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     "1": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "devices": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "/dev/loop4"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             ],
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_name": "ceph_lv1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_size": "21470642176",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "name": "ceph_lv1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "tags": {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_name": "ceph",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.crush_device_class": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.encrypted": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.objectstore": "bluestore",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_id": "1",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.vdo": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.with_tpm": "0"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             },
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "vg_name": "ceph_vg1"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         }
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     ],
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     "2": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "devices": [
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "/dev/loop5"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             ],
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_name": "ceph_lv2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_size": "21470642176",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "name": "ceph_lv2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "tags": {
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cephx_lockbox_secret": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.cluster_name": "ceph",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.crush_device_class": "",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.encrypted": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.objectstore": "bluestore",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osd_id": "2",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.vdo": "0",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:                 "ceph.with_tpm": "0"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             },
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "type": "block",
Feb 25 14:15:02 compute-0 zealous_turing[436870]:             "vg_name": "ceph_vg2"
Feb 25 14:15:02 compute-0 zealous_turing[436870]:         }
Feb 25 14:15:02 compute-0 zealous_turing[436870]:     ]
Feb 25 14:15:02 compute-0 zealous_turing[436870]: }
Feb 25 14:15:02 compute-0 systemd[1]: libpod-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope: Deactivated successfully.
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.457208114 +0000 UTC m=+0.488983317 container died 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a-merged.mount: Deactivated successfully.
Feb 25 14:15:02 compute-0 podman[436854]: 2026-02-25 14:15:02.570313369 +0000 UTC m=+0.602088512 container remove 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 14:15:02 compute-0 systemd[1]: libpod-conmon-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope: Deactivated successfully.
Feb 25 14:15:02 compute-0 sudo[436776]: pam_unix(sudo:session): session closed for user root
Feb 25 14:15:02 compute-0 sudo[436893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 25 14:15:02 compute-0 sudo[436893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:15:02 compute-0 sudo[436893]: pam_unix(sudo:session): session closed for user root
Feb 25 14:15:02 compute-0 sudo[436918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -- raw list --format json
Feb 25 14:15:02 compute-0 sudo[436918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:15:02 compute-0 ceph-mon[76335]: pgmap v4653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.066020605 +0000 UTC m=+0.064340274 container create e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:15:03 compute-0 systemd[1]: Started libpod-conmon-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope.
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.036512879 +0000 UTC m=+0.034832608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.166451351 +0000 UTC m=+0.164771060 container init e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.177203055 +0000 UTC m=+0.175522744 container start e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.181686183 +0000 UTC m=+0.180005892 container attach e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 14:15:03 compute-0 wizardly_tharp[436972]: 167 167
Feb 25 14:15:03 compute-0 systemd[1]: libpod-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope: Deactivated successfully.
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.183942516 +0000 UTC m=+0.182262205 container died e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 14:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff7a87ad0a3daa8b3ea0f4f60f08500fa0fccf4c71b668e736ce2904b1931b6e-merged.mount: Deactivated successfully.
Feb 25 14:15:03 compute-0 podman[436956]: 2026-02-25 14:15:03.270969932 +0000 UTC m=+0.269289621 container remove e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 14:15:03 compute-0 systemd[1]: libpod-conmon-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope: Deactivated successfully.
Feb 25 14:15:03 compute-0 nova_compute[244014]: 2026-02-25 14:15:03.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:03 compute-0 podman[436996]: 2026-02-25 14:15:03.417278797 +0000 UTC m=+0.042423023 container create 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 25 14:15:03 compute-0 systemd[1]: Started libpod-conmon-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope.
Feb 25 14:15:03 compute-0 systemd[1]: Started libcrun container.
Feb 25 14:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 14:15:03 compute-0 podman[436996]: 2026-02-25 14:15:03.398768303 +0000 UTC m=+0.023912539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 14:15:03 compute-0 podman[436996]: 2026-02-25 14:15:03.503087599 +0000 UTC m=+0.128231855 container init 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 14:15:03 compute-0 podman[436996]: 2026-02-25 14:15:03.511902738 +0000 UTC m=+0.137046944 container start 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 14:15:03 compute-0 podman[436996]: 2026-02-25 14:15:03.515890681 +0000 UTC m=+0.141034977 container attach 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 14:15:03 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:04 compute-0 lvm[437090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:15:04 compute-0 lvm[437090]: VG ceph_vg0 finished
Feb 25 14:15:04 compute-0 lvm[437091]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:15:04 compute-0 lvm[437091]: VG ceph_vg1 finished
Feb 25 14:15:04 compute-0 lvm[437093]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:15:04 compute-0 lvm[437093]: VG ceph_vg2 finished
Feb 25 14:15:04 compute-0 elated_beaver[437012]: {}
Feb 25 14:15:04 compute-0 systemd[1]: libpod-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Deactivated successfully.
Feb 25 14:15:04 compute-0 systemd[1]: libpod-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Consumed 1.099s CPU time.
Feb 25 14:15:04 compute-0 podman[436996]: 2026-02-25 14:15:04.333454398 +0000 UTC m=+0.958598644 container died 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 14:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee-merged.mount: Deactivated successfully.
Feb 25 14:15:04 compute-0 podman[436996]: 2026-02-25 14:15:04.400352983 +0000 UTC m=+1.025497189 container remove 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 14:15:04 compute-0 systemd[1]: libpod-conmon-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Deactivated successfully.
Feb 25 14:15:04 compute-0 sudo[436918]: pam_unix(sudo:session): session closed for user root
Feb 25 14:15:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 14:15:04 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:15:04 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 14:15:04 compute-0 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:15:04 compute-0 sudo[437110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 25 14:15:04 compute-0 sudo[437110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 25 14:15:04 compute-0 sudo[437110]: pam_unix(sudo:session): session closed for user root
Feb 25 14:15:04 compute-0 nova_compute[244014]: 2026-02-25 14:15:04.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:04 compute-0 nova_compute[244014]: 2026-02-25 14:15:04.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:05 compute-0 ceph-mon[76335]: pgmap v4654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:15:05 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 14:15:05 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:05 compute-0 nova_compute[244014]: 2026-02-25 14:15:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:05 compute-0 nova_compute[244014]: 2026-02-25 14:15:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 14:15:06 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:07 compute-0 ceph-mon[76335]: pgmap v4655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:07 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:08 compute-0 nova_compute[244014]: 2026-02-25 14:15:08.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:09 compute-0 ceph-mon[76335]: pgmap v4656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:09 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:11 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:11 compute-0 ceph-mon[76335]: pgmap v4657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:11 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:13 compute-0 nova_compute[244014]: 2026-02-25 14:15:13.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:13 compute-0 ceph-mon[76335]: pgmap v4658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:13 compute-0 sshd-session[437135]: Accepted publickey for zuul from 192.168.122.10 port 36348 ssh2: ECDSA SHA256:Lt+nyLvEHFdh9YsbJVkTJEEj74N9LTMmF+RpURDsZbk
Feb 25 14:15:13 compute-0 systemd-logind[811]: New session 56 of user zuul.
Feb 25 14:15:13 compute-0 systemd[1]: Started Session 56 of User zuul.
Feb 25 14:15:13 compute-0 sshd-session[437135]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 25 14:15:13 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:13 compute-0 sudo[437139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 25 14:15:13 compute-0 sudo[437139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 25 14:15:15 compute-0 ceph-mon[76335]: pgmap v4659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:15 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:16 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:16 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:16 compute-0 ceph-mon[76335]: pgmap v4660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:16 compute-0 ceph-mon[76335]: from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:16 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23364 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:17 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 25 14:15:17 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827492314' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 25 14:15:17 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:17 compute-0 ceph-mon[76335]: from='client.23364 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:17 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1827492314' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 25 14:15:18 compute-0 nova_compute[244014]: 2026-02-25 14:15:18.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:18 compute-0 ceph-mon[76335]: pgmap v4661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:19 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:20 compute-0 ovs-vsctl[437419]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 25 14:15:20 compute-0 ceph-mon[76335]: pgmap v4662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:21 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:21 compute-0 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 25 14:15:21 compute-0 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 25 14:15:21 compute-0 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 25 14:15:21 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:21 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: cache status {prefix=cache status} (starting...)
Feb 25 14:15:21 compute-0 lvm[437741]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 14:15:21 compute-0 lvm[437741]: VG ceph_vg2 finished
Feb 25 14:15:22 compute-0 lvm[437747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 14:15:22 compute-0 lvm[437747]: VG ceph_vg0 finished
Feb 25 14:15:22 compute-0 lvm[437767]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 14:15:22 compute-0 lvm[437767]: VG ceph_vg1 finished
Feb 25 14:15:22 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: client ls {prefix=client ls} (starting...)
Feb 25 14:15:22 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:22 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: damage ls {prefix=damage ls} (starting...)
Feb 25 14:15:22 compute-0 ceph-mon[76335]: pgmap v4663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:22 compute-0 ceph-mon[76335]: from='client.23368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:22 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23370 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:22 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump loads {prefix=dump loads} (starting...)
Feb 25 14:15:22 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 25 14:15:22 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 25 14:15:23 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 25 14:15:23 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23374 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 25 14:15:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2431788982' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 25 14:15:23 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 14:15:23 compute-0 nova_compute[244014]: 2026-02-25 14:15:23.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:23 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 25 14:15:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 14:15:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700290619' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:15:23 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:23 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 25 14:15:23 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23378 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:23 compute-0 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 14:15:23 compute-0 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:23.664+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 14:15:23 compute-0 ceph-mon[76335]: from='client.23370 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:23 compute-0 ceph-mon[76335]: from='client.23374 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2431788982' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 25 14:15:23 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/700290619' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 14:15:23 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: ops {prefix=ops} (starting...)
Feb 25 14:15:23 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 25 14:15:23 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033706135' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 25 14:15:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479067776' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: session ls {prefix=session ls} (starting...)
Feb 25 14:15:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 25 14:15:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375952924' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: status {prefix=status} (starting...)
Feb 25 14:15:24 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 25 14:15:24 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66788207' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: pgmap v4664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:24 compute-0 ceph-mon[76335]: from='client.23378 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4033706135' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3479067776' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2375952924' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 25 14:15:24 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/66788207' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 14:15:25 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 25 14:15:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114139203' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 14:15:25 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:25 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:25 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 25 14:15:25 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638786567' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 14:15:25 compute-0 ceph-mon[76335]: from='client.23388 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3114139203' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 14:15:25 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1638786567' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 25 14:15:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806312356' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 14:15:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241723639' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 25 14:15:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045208824' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 25 14:15:26 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198604801' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: from='client.23392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: pgmap v4665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3806312356' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3241723639' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4045208824' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 25 14:15:26 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3198604801' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 25 14:15:27 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23404 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:27 compute-0 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:27.105+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 25 14:15:27 compute-0 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 25 14:15:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 25 14:15:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180581000' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 25 14:15:27 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:27 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 25 14:15:27 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919515912' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 25 14:15:27 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23410 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:27 compute-0 ceph-mon[76335]: from='client.23404 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2180581000' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 25 14:15:27 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1919515912' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 25 14:15:28 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23414 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:15.335227+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:16.335404+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:17.335576+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:18.335823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:19.335983+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:20.336179+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:21.336360+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:22.336594+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:23.336784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:24.336992+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:25.337143+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:26.337323+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:27.337492+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:28.337634+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:29.337790+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:30.337995+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:31.338155+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:32.338418+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:33.338603+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:34.338788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:35.338995+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:36.339219+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:37.339448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:38.339644+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:39.339836+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:40.340007+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:41.340197+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:42.340374+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:43.340647+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:44.340874+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:45.341093+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:46.341298+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:47.341536+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:48.341801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:49.341965+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:50.342142+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:51.342379+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:52.342633+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:53.342815+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:54.342976+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:55.343135+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:56.343255+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:57.343440+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:58.343937+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:59.344106+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:00.344242+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:01.344430+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:02.344641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:03.344796+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:04.344974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:05.345097+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:06.345286+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:07.345486+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:08.345668+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:09.345817+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:10.345948+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:11.346142+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:12.346328+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:13.346514+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 64569344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:14.346788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:15.346968+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:16.347163+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:17.347340+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:18.347557+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:19.347809+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:20.347975+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:21.348171+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:22.348473+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:23.348844+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:24.349020+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:25.349264+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:26.349454+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:27.349634+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:28.349784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:29.350015+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:30.350200+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:31.350405+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:32.350615+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:33.350889+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:34.351058+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:35.351220+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:36.351392+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:37.351620+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:38.351913+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:39.361824+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:40.362048+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:41.362249+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:42.362465+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:43.362779+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:44.362985+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:45.363166+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:46.363294+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:47.363450+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:48.363622+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:49.363811+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:50.363982+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:51.364224+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:52.364466+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:53.364626+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:54.364945+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:55.365171+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:56.365366+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:57.365550+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:58.365783+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:59.366012+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:00.366205+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:01.366457+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:02.366715+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:03.366871+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 64536576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:04.367067+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:05.367237+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:06.367462+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:07.367673+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:08.367914+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:09.368143+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:10.368346+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:11.368514+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:12.368788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:13.369099+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:14.369283+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:15.369518+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:16.369749+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:17.369958+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:18.370175+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:19.370326+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:20.370486+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:21.370641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:22.370873+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:23.371079+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:24.371275+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:25.371410+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:26.371635+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:27.371814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:28.371950+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:29.372081+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:30.372315+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:31.372509+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:32.372784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:33.373002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:34.373172+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:35.373340+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:36.373492+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:37.373760+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:38.373975+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:39.374149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:40.374360+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:41.374620+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:42.374872+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:43.375011+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:44.375212+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:45.375409+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:46.375620+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:47.375786+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:48.375959+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:49.376174+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:50.376342+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:51.376528+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:52.376886+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:53.377068+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:54.377286+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:55.377466+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:56.377666+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 64495616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:57.377921+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:58.378088+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:59.378282+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:00.378480+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:01.378638+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:02.378917+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:03.379199+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:04.380493+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:05.380662+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:06.380897+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:07.381095+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:08.381269+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:09.381434+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:10.381594+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:11.381793+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:12.382102+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:13.382323+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:14.382516+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:15.382817+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:16.383070+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:17.383275+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:18.383468+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:19.383751+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:20.384003+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:21.384221+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:22.384460+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:23.384641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:24.384824+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:25.385022+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:26.385206+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:27.385370+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:28.385534+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:29.385674+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:30.385874+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:31.386070+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:32.386217+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:33.386418+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:34.386569+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:35.386780+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:36.386992+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:37.387137+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:38.387392+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:39.387579+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:40.387851+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:41.388164+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:42.388429+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:43.388615+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:44.388813+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:45.388973+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:46.389153+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:47.389301+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:48.389494+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:49.389670+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:50.389839+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:51.390015+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:52.390223+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:53.390423+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303005696 unmapped: 64454656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:54.390574+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 64446464 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:55.390721+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 64446464 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:56.390871+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 64446464 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:57.391021+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 64446464 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:58.391669+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 64446464 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:59.391852+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:00.392218+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:01.392420+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:02.392616+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:03.392815+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:04.392967+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:05.393149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:06.393285+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:07.393454+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:08.393619+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:09.393836+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:10.394011+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 64438272 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:11.394154+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 64430080 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:12.394398+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 64430080 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:13.394613+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 64430080 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:14.394804+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 64430080 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:15.395053+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:16.395203+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:17.395337+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:18.395542+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:19.395777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:20.395962+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:21.396158+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:22.396423+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:23.396599+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:24.396759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:25.396917+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:26.397116+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:27.397311+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:28.397480+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:29.397613+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64421888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:30.397750+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64413696 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:31.397941+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64413696 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:32.398271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64413696 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:33.398473+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64413696 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:34.398644+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64413696 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:35.398808+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64405504 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:36.398982+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64405504 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:37.399365+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:38.399577+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:39.399787+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:40.399994+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:41.400162+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:42.400354+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:43.400503+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:44.400734+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:45.400974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:46.401124+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:47.401317+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:48.401463+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:49.401632+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:50.401783+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64397312 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:51.401963+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:52.402152+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:53.402331+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:54.402504+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:55.402837+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:56.403032+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:57.403263+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:58.403477+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:59.403644+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:00.403814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:01.403999+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:02.404184+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64389120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:03.404369+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64372736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:04.404537+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64372736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:05.404763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64372736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:06.404910+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64372736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:07.405123+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:08.405314+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:09.405502+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:10.405816+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:11.406087+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:12.406541+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:13.406756+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:14.406969+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64364544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:15.407192+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:16.407358+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:17.407610+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:18.407781+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:19.407958+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:20.408237+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:21.408423+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:22.408602+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 64356352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:23.408799+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:24.408981+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:25.409217+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:26.409445+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:27.409652+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:28.409873+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:29.410092+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:30.410336+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:31.410585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:32.410824+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:33.411027+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:34.411214+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:35.411388+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:36.411569+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:37.411750+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:38.411942+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 64348160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:39.412098+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:40.412271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:41.412453+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:42.412659+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:43.412852+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:44.413000+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:45.413202+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:46.413386+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:47.413572+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:48.413768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:49.413922+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:50.414076+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:51.414242+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:52.414439+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:53.414601+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:54.414765+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:55.414929+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:56.415108+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:57.415263+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:58.415378+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:59.415494+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:00.415622+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:01.415774+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:02.416023+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 64339968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:03.416185+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 64331776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:04.416336+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 64331776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:05.416554+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:06.416828+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:07.417199+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:08.417436+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:09.417646+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:10.417832+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:11.418011+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:12.418290+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:13.418523+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:14.418781+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:15.419057+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64323584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:16.419367+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:17.419648+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:18.419803+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:19.419965+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:20.420098+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:21.420235+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:22.420459+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:23.420646+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:24.420900+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64315392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:25.421109+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 64307200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:26.421308+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 64307200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:27.421501+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:28.421743+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:29.421896+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:30.422031+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:31.422229+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:32.422452+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:33.422637+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:34.422772+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:35.422947+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:36.423112+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:37.423283+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 64299008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:38.423474+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303169536 unmapped: 64290816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:39.423802+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:40.423980+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:41.424156+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:42.424349+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:43.424517+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:44.424748+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:45.424905+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:46.425080+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:47.425332+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:48.425495+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:49.425601+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:50.425751+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:51.425900+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:52.426133+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:53.426300+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:54.426438+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:55.426608+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:56.426814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 64258048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:57.427039+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 64258048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:58.427183+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:59.427403+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:00.427636+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:01.427837+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:02.428104+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:03.428264+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:04.428448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:05.428628+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:06.428814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:07.428946+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:08.429112+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:09.429573+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:10.429758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:11.430058+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:12.430456+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:13.430677+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:14.431098+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:15.431465+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:16.432174+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:17.432363+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:18.432611+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:19.432825+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:20.432972+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:21.433188+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:22.433585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:23.433845+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:24.434048+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:25.434242+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:26.434482+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:27.434817+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:28.435044+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:29.435243+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:30.435448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:31.435650+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:32.435921+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:33.436132+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:34.436307+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:35.436510+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:36.436726+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:37.436973+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:38.437149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:39.437330+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:40.437487+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:41.437646+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:42.437909+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:43.438153+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:44.438458+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:45.438741+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:46.438920+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:47.439125+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:48.439308+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:49.439499+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:50.439726+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:51.440025+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 64184320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:52.440249+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:53.440477+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:54.440664+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:55.440884+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:56.441098+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:57.441397+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:58.441581+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:59.441845+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:00.442054+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:01.442336+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:02.442553+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:03.442825+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
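[Annotation] The two rocksdb lines above are the cache manager recomputing its high-priority pool ratios each cycle; 0.285714 and 0.0555556 are consistent with 2/7 and 1/18. The _resize_shards line then shows how the 2845415832-byte budget from tune_memory is carved up: each *_alloc is a plain byte count landing on a whole-MiB boundary, and the tiny *_used values again say the OSD is idle. Decoded (byte-count reading only; pool purposes beyond the names are not asserted):

    MiB = 1 << 20
    cache_size = 2845415832
    allocs = {"kv_alloc": 1207959552, "kv_onode_alloc": 234881024,
              "meta_alloc": 1140850688, "data_alloc": 218103808}

    for name, b in allocs.items():
        print(f"{name:15s} {b // MiB:5d} MiB  ({b / cache_size:5.1%} of cache_size)")
    print(f"{'total':15s} {sum(allocs.values()) // MiB:5d} MiB of {cache_size / MiB:.1f} MiB")

The four pools total 2672 MiB against a ~2713.6 MiB budget; the remainder is slack the tuner has not committed to any pool.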
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:04.443038+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:05.443183+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:06.443315+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:07.443595+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:08.443773+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:09.444016+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:10.444256+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-02-25T13:49:11.444432+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _finish_auth 0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:11.445328+0000)
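[Annotation] First departure from the steady state: _check_auth_rotating finds the rotating service keys at their expiry cutoff, sends a renewal to mon.compute-0 over msgr2 (port 3300), and _finish_auth 0 reports success; the very next check again sees up-to-date secrets. The embedded times (13:49:11.444432 to 13:49:11.445328) suggest the round trip to the monitor took under a millisecond, plausible for a mon on the same segment. A sketch for pulling such renewal events out of a journal export (regexes keyed to this log's wording; hypothetical helper, not an official parser):

    import re

    RENEW = re.compile(r"renewing rotating keys \(they expired before ([^)]+)\)")
    FRESH = re.compile(r"have uptodate secrets \(they expire after ([^)]+)\)")

    def rotations(lines):
        """Yield (old cutoff, renewed horizon) pairs, in log order."""
        pending = None
        for ln in lines:
            if m := RENEW.search(ln):
                pending = m.group(1)
            elif (m := FRESH.search(ln)) and pending:
                yield pending, m.group(1)
                pending = None

Fed the five monclient lines above, it yields the single pair of timestamps shown there.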
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:12.444667+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:13.444953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:14.445176+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:15.445444+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:16.445593+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:17.445769+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:18.445939+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:19.446103+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:20.446300+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:21.446486+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:22.446676+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:23.446950+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:24.447085+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:25.447263+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:26.447491+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:27.447733+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:28.447970+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:29.448226+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:30.448385+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:31.448565+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:32.448740+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:33.448908+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:34.449058+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:35.449211+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:36.449366+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:37.449545+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:38.449754+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:39.449905+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:40.450142+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:41.450352+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:42.450572+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:43.450801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:44.450966+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:45.451196+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:46.451386+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:47.451564+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:48.451728+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:49.451899+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:50.452073+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:51.452260+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:52.452517+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:53.452671+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:54.452843+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
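[Annotation] Every 600 s RocksDB dumps these counters (uptime 7200.6 s, so this OSD's DB has been up about two hours). The derived figures follow from the raw ones: 0.15 GB ingested over 7200.6 s is the printed 0.02 MB/s, and the interval's 180 WAL writes over 90 syncs give exactly 2.00 writes per sync. The cumulative 2.76 writes per sync, however, comes from unrounded internal counters; redoing it from the displayed "38K"/"13K" gives ~2.92, so the K-suffixed values should be read as rounded. Zero stall time in both windows means the write path never backpressured. The arithmetic, spelled out:

    # Recompute the dump's derived figures from its raw counters.
    uptime_s, ingest_gb = 7200.6, 0.15
    interval_writes, interval_syncs = 180, 90

    print(f"cumulative ingest: {ingest_gb * 1024 / uptime_s:.2f} MB/s")        # 0.02, as printed
    print(f"interval writes per sync: {interval_writes / interval_syncs:.2f}") # 2.00, as printed
    # "38K writes" / "13K syncs" ~= 2.92, not the printed 2.76: both counters
    # are rounded for display, so derive ratios from exact counters only.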
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:55.453027+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:56.453174+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:57.453339+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:58.453573+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:59.453713+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:00.453886+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:01.454161+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc ms_handle_reset ms_handle_reset con 0x55f8d50ee800
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: get_auth_request con 0x55f8d4dc8800 auth_method 0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc handle_mgr_configure stats_period=5
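[Annotation] The one connection-level event in this stretch: the mgr client's connection is reset (ms_handle_reset on con 0x55f8d50ee800, consistent with the active mgr restarting or failing over), so the OSD terminates its v2 session to 192.168.122.100:6800 and immediately starts a new one against the advertised [v2,v1] address pair; after re-authentication, the mgr's handle_mgr_configure sets stats_period=5, i.e. the OSD will report its stats every 5 seconds. A small scanner for confirming that such resets actually recovered (patterns keyed to this log's wording; hypothetical helper, not a Ceph CLI):

    import re

    RESET = re.compile(r"mgrc ms_handle_reset .* con (0x[0-9a-f]+)")
    CONFIG = re.compile(r"mgrc handle_mgr_configure stats_period=(\d+)")

    def mgr_reconnect_ok(lines):
        con = None
        for ln in lines:
            if m := RESET.search(ln):
                con = m.group(1)
            elif (m := CONFIG.search(ln)) and con:
                return f"session after reset of {con} reconfigured: stats every {m.group(1)}s"
        return "reset without reconfigure" if con else "no reset seen"

Run over the five mgrc/monclient lines above, it reports the reset of 0x55f8d50ee800 followed by a 5-second stats period, i.e. a clean reconnect.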
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:02.454405+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:03.454585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:04.454772+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:05.454974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:06.455284+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:07.455522+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:08.455785+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:09.456002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:10.456183+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:11.456328+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:12.456906+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:13.457166+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:14.457376+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:15.457585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:16.457753+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:17.458330+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:18.458577+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:19.459263+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:20.459897+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:21.460503+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:22.460750+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:23.461087+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:24.461770+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:25.462061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:26.462488+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:27.462995+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:28.463372+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:29.463811+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:30.464176+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:31.464393+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:32.464662+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:33.464974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:34.465229+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:35.465464+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:36.465765+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:37.465948+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:38.466166+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:39.466304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:40.466489+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:41.466678+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:42.466965+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:43.467241+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:44.467547+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:45.467809+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:46.468017+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 598.555297852s of 598.695800781s, submitted: 90
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:47.468184+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:48.468414+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:49.468631+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:50.468825+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:51.468989+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 64045056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:52.469255+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 64045056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:53.469422+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:54.469555+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:55.469744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:56.469930+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:57.470100+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:58.470299+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:59.470475+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:00.470723+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:01.470925+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:02.471155+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:03.471410+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:04.471591+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:05.471758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:06.472051+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:07.472271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:08.472484+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:09.472675+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:10.472968+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:11.473131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:12.473343+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:13.473601+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:14.473784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:15.473944+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:16.474101+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:17.474252+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:18.474395+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:19.474544+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:20.474747+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:21.474914+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:22.475216+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:23.475389+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:24.475578+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:25.475777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:26.475947+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:27.476206+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:28.476404+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:29.476612+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:30.476790+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:31.476962+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:32.477138+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:33.477328+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:34.477491+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:35.477707+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:36.477968+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:37.478186+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:38.478395+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:39.478573+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:40.478816+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:41.479003+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:42.479242+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:43.479386+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:44.479564+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:45.479918+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:46.480089+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:47.480288+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:48.480447+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:49.480585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:50.480767+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:51.480957+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:52.481199+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:53.481418+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:54.481777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:55.481962+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:56.482111+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:57.482351+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:58.482563+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:59.482782+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:00.483001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:01.483218+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:02.483468+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:03.483624+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:04.483807+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:05.484008+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:06.484217+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:07.484425+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:08.484619+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:09.484840+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:10.485026+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:11.485171+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:12.485441+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:13.485645+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:14.485816+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:15.486004+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:16.486201+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:17.486351+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:18.486513+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:19.486657+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:20.486768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:21.486930+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:22.487292+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:23.487472+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:24.487656+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:25.487882+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:26.489136+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:27.489867+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:28.490489+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:29.490984+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:30.491308+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:31.491670+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:32.491931+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:33.492121+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 63979520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:34.492317+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:35.492533+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:36.492759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:37.493300+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:38.493796+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:39.494138+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:40.494436+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:41.495115+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:42.495857+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:43.496320+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:44.496914+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:45.497163+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:46.497399+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:47.497613+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:48.497864+0000)
Feb 25 14:15:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 25 14:15:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999877447' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:49.498104+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:50.498356+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:51.498649+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:52.498934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:53.499265+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:54.499586+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:55.499848+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:56.500011+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:57.500157+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:58.500380+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:59.500626+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:00.500888+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:01.501126+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:02.501373+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:03.501677+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:04.502009+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:05.502212+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:06.502444+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:07.502657+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:08.502924+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:09.503085+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:10.503322+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:11.503568+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:12.503972+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:13.504293+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:14.504581+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:15.504767+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:16.504990+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:17.505185+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:18.505367+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:19.505535+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:20.505787+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:21.506016+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:22.506248+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:23.506491+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:24.506744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:25.506932+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:26.507183+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:27.509235+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:28.509385+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:29.509550+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:30.509751+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:31.509935+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:32.510167+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:33.510555+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:34.510868+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:35.511156+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:36.511426+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:37.511663+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:38.511859+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:39.512061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:40.512189+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:41.512390+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:42.512605+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:43.512840+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:44.513054+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:45.513249+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:46.513411+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:47.513598+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:48.513769+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:49.513932+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:50.514085+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:51.514280+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:52.514495+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:53.514733+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:54.514934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:55.515141+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:56.515372+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:57.515614+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:58.515839+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:59.516005+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:00.516173+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:01.516359+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:02.516640+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:03.516794+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:04.516947+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:05.517130+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:06.517284+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:07.517485+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:08.517766+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:09.517960+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:10.518145+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:11.518302+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:12.518527+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:13.518741+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:14.518941+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:15.519143+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:16.519339+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:17.519533+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:18.519778+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:19.520061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:20.520243+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:21.520466+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:22.520669+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:23.520937+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:24.521116+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:25.521367+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:26.521553+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:27.521744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:28.521914+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:29.522111+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:30.522293+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:31.522463+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:32.522676+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:33.522852+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:34.523021+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:35.523222+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:36.523443+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:37.523656+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:38.523873+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:39.524049+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:40.524220+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:41.524523+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:42.524819+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:43.525026+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:44.525145+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:45.525251+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:46.525365+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:47.525473+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:48.525610+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:49.525800+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:50.525980+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:51.526135+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:52.526341+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:53.526551+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:54.526739+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:55.526917+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:56.527076+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:57.527229+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:58.527417+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:59.527548+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:00.527771+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:01.527943+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:02.528184+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:03.528374+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:04.528640+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:05.528840+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:06.528990+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:07.529134+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:08.529272+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:09.529375+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:10.529487+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:11.529593+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:12.529736+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:13.529866+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:14.529976+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:15.530101+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:16.530230+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:17.530396+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:18.530815+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:19.530954+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:20.531176+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:21.531345+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:22.531595+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 63823872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:23.531776+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 63823872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:24.531947+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:25.532155+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:26.532344+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:27.532540+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:28.532719+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:29.532904+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:30.533069+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:31.533252+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:32.533538+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:33.533784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:34.533975+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:35.534175+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:36.534313+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:37.534483+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:38.534729+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:39.534887+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 63799296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:40.535071+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 63799296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:41.535253+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:42.535469+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:43.535658+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:44.535877+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:45.535967+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:46.536092+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:47.536232+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:48.536398+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:49.536588+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:50.536750+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:51.536924+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:52.537152+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:53.537268+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:54.537446+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:55.537634+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:56.537814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:57.537963+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:58.538102+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:59.538243+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:00.538393+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:01.538515+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:02.538748+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:03.538877+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:04.539066+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:05.539284+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:06.539537+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:07.539777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:08.539959+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:09.540128+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:10.540265+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:11.540514+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:12.540833+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:13.541028+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:14.541209+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:15.541422+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:16.541574+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:17.541815+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:18.542002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:19.542207+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:20.542363+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:21.542632+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:22.542948+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:23.543159+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:24.543396+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:25.543587+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:26.543781+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:27.543953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:28.544134+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:29.544302+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:30.544482+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:31.544658+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:32.544971+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:33.545128+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:34.545281+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:35.545473+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:36.545785+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:37.545981+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:38.546164+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:39.546355+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:40.546530+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:41.546714+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:42.546921+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:43.547135+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:44.547312+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:45.547487+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:46.547641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:47.547820+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:48.548038+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:49.548212+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:50.548449+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:51.548643+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:52.548924+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:53.549162+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:54.549405+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:55.549743+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:56.549932+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:57.550095+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:58.550318+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:59.550532+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:00.550788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:01.550938+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:02.551144+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:03.551348+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:04.551542+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:05.551777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:06.551972+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:07.552115+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:08.552290+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:09.552538+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:10.552842+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:11.553105+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:12.553387+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:13.553607+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:14.553763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:15.554001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 63692800 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:16.554200+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 63692800 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:17.554360+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:18.554528+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:19.554826+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:20.555040+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:21.555218+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:22.555429+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:23.555632+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:24.555802+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:25.555978+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:26.556176+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:27.556358+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:28.556494+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:29.556809+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:30.556957+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:31.557118+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:32.557319+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:33.557500+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:34.557763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:35.557996+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:36.558145+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:37.558384+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:38.558597+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:39.558841+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:40.559062+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:41.559258+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:42.559450+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:43.559649+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:44.559892+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:45.560110+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:46.560351+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:47.560533+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:48.560718+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:49.561078+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:50.561271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:51.561582+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:52.561791+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:53.562037+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:54.562615+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:55.562805+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:56.563220+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:57.563438+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:58.563629+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303808512 unmapped: 63651840 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:59.563788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303808512 unmapped: 63651840 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:00.563967+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303816704 unmapped: 63643648 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:01.564151+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 63635456 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:02.564408+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 63635456 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:03.564603+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:04.564748+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:05.564904+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:06.565073+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:07.565264+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:08.565461+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:09.565645+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:10.565814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:11.565996+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:12.566198+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:13.566365+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:14.566500+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:15.566620+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:16.566758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:17.566930+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:18.567085+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:19.567234+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:20.567431+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:21.567604+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:22.567801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:23.568000+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:24.568201+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:25.568391+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:26.568558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:27.568747+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:28.568927+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:29.569123+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:30.569287+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:31.569502+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:32.569742+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:33.569928+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:34.570066+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:35.570292+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:36.570607+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 63602688 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:37.570816+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 63602688 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:38.570971+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:39.571113+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:40.571280+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:41.571448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:42.571758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:43.571938+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:44.572148+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:45.572348+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:46.572541+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:47.572792+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:48.572977+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:49.573180+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:50.573373+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303874048 unmapped: 63586304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:51.573552+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 63569920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:52.573798+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 63569920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:53.573989+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:54.574146+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:55.574377+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:56.574567+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:57.574802+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:58.574971+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:59.575113+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:00.575279+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:01.575421+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:02.575647+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:03.575807+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:04.575971+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:05.576138+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:06.576304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:07.576501+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:08.576791+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:09.576957+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:10.577182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:11.577345+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:12.577558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:13.577790+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:14.577987+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:15.578141+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:16.578307+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:17.578483+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:18.578641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:19.578823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:20.578944+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:21.579071+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:22.579299+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:23.579531+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:24.579786+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:25.579992+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:26.580173+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:27.580304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:28.580563+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:29.580839+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:30.581040+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:31.581244+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:32.581429+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:33.581657+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:34.581889+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:35.582097+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:36.582289+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:37.582431+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:38.582589+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:39.582762+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 63520768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:40.582915+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 63520768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:41.583105+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:42.583337+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:43.583506+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:44.583759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:45.583957+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:46.584105+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:47.584304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:48.584495+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:49.584681+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:50.584967+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:51.585141+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:52.585350+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:53.585499+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:54.585638+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.75 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:55.585778+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:56.585930+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:57.586130+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:58.586433+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303972352 unmapped: 63488000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:59.586617+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:00.586905+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:01.587060+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:02.587241+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:03.587404+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:04.587537+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:05.587809+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:06.587970+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:07.588143+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:08.588307+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:09.588475+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:10.588730+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:11.588893+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:12.589093+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:13.589318+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:14.589542+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:15.589783+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:16.589930+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:17.590075+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:18.590270+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:19.590468+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:20.590619+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:21.590870+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:22.591110+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:23.591248+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:24.591413+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:25.591574+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:26.591778+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:27.591934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:28.592102+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:29.592277+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:30.592441+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:31.596376+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:32.597916+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:33.598441+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:34.598876+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:35.599546+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:36.599841+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:37.600155+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:38.600301+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:39.600529+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 63447040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:40.600788+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 63438848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:41.601059+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:42.601322+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:43.601486+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:44.601684+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:45.601956+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:46.602150+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:47.602371+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:48.602538+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 601.275024414s of 601.484924316s, submitted: 132
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:49.602754+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:50.602989+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:51.603206+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:52.603565+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:53.603810+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304062464 unmapped: 63397888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:54.603970+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:55.604131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:56.604270+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:57.604448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:58.604565+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:59.604686+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:00.604885+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:01.605022+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:02.605186+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:03.605327+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:04.605497+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:05.605781+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:06.605947+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:07.606133+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:08.606305+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:09.606463+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:10.606657+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:11.606850+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:12.607018+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:13.607167+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:14.607356+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:15.607525+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:16.607749+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:17.607912+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:18.608148+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:19.608331+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:20.608506+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:21.608746+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:22.608940+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:23.609111+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:24.609262+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:25.609449+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:26.609664+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:27.609882+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:28.610027+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:29.610240+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:30.610405+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:31.610558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:32.610761+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:33.610974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:34.611164+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:35.611389+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:36.611602+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:37.611776+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:38.611910+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:39.612104+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:40.612303+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:41.612466+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:42.612715+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:43.612908+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:44.613116+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:45.613294+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:46.613628+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:47.613827+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:48.614044+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:49.614287+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:50.614426+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:51.614623+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:52.615079+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:53.615225+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:54.615436+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:55.615576+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:56.615722+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:57.615896+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:58.616062+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:59.616239+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:00.616416+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:01.616606+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:02.616823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:03.616954+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:04.617123+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:05.617304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:06.617509+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:07.617811+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:08.618031+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:09.618200+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:10.618340+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:11.618583+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:12.618969+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:13.619131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:14.619276+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:15.619544+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:16.619823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:17.620065+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:18.620271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:19.620457+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:20.620624+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:21.620810+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:22.621001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:23.621234+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:24.621374+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:25.621560+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:26.621759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:27.621979+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:28.622257+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:29.622426+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:30.622581+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:31.622763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:32.623025+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:33.623225+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:34.623391+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:35.623552+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:36.623770+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:37.623997+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:38.624206+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:39.624384+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:40.624607+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:41.624853+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:42.625102+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:43.625371+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:44.625553+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:45.625774+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:46.625996+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 63324160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:47.626237+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 63324160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:48.626456+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:49.626664+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:50.626877+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:51.627111+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:52.627351+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:53.627525+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:54.627757+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:55.627884+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:56.628002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:57.628154+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:58.628318+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:59.628641+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:00.628863+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:01.629101+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:02.629377+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:03.629558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:04.629771+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:05.629936+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:06.630072+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:07.630204+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:08.630458+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:09.630736+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:10.630926+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:11.631102+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:12.631317+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:13.631504+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:14.631731+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:15.631916+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:16.632058+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:17.632245+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:18.632416+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:19.632627+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:20.632801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:21.632959+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:22.633172+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:23.633309+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:24.633476+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:25.633624+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:26.633773+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:27.633953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:28.634121+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:29.634373+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 63283200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:30.634548+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 63283200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:31.634790+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 63283200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:32.634984+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 63283200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:33.635131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 63275008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:34.635265+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 63275008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:35.635395+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:36.635532+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:37.635677+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:38.635875+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:39.636070+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:40.636249+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:41.636440+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:42.636670+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:43.636896+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:44.637044+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:45.637194+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:46.637343+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:47.637549+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:48.637728+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:49.637924+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:50.638099+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:51.638258+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:52.638476+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:53.638768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:54.638931+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:55.639095+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:56.639280+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:57.639453+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:58.639679+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:59.639903+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:00.640061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:01.640213+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:02.640413+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:03.640591+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:04.640807+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:05.640953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:06.641085+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:07.641228+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:08.641431+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:09.641621+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:10.641813+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:11.642007+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:12.642309+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:13.642467+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:14.642612+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:15.642780+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:16.642940+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:17.643156+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:18.643382+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:19.643576+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:20.643801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:21.643988+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:22.644185+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:23.644374+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:24.644520+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:25.644677+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:26.644811+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:27.644960+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:28.645113+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:29.645301+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:30.645522+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:31.645798+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:32.646033+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:33.646187+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:34.646364+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:35.646548+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:36.646814+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:37.647001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:38.647170+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:39.647363+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:40.647542+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:41.647794+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304242688 unmapped: 63217664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:42.648093+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304242688 unmapped: 63217664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:43.648297+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:44.648460+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:45.648744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:46.648912+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:47.649074+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:48.649249+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:49.649447+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:50.649615+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:51.649759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:52.649952+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:53.650131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:54.650393+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets getting new tickets!
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:55.650665+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _finish_auth 0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:55.651514+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:56.650913+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:57.651099+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:58.651262+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:59.651447+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:00.651600+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:01.651796+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc ms_handle_reset ms_handle_reset con 0x55f8d4dc8800
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: get_auth_request con 0x55f8d539c000 auth_method 0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: mgrc handle_mgr_configure stats_period=5
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:02.652033+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:03.652244+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:04.652399+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:05.652648+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:06.652899+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:07.653082+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:08.653208+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:09.653373+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:10.653569+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:11.653783+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:12.654061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:13.654259+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:14.654449+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:15.654636+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:16.654817+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:17.655000+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:18.655168+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:19.655434+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:20.655570+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:21.655797+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:22.656033+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:23.656190+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:24.656339+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:25.656496+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63168512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:26.656738+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:27.656888+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:28.657033+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:29.657181+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:30.657321+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:31.657494+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:32.658988+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:33.659190+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:34.659347+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:35.659514+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:36.659763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:37.659932+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:38.660125+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:39.660294+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:40.660442+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:41.660613+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:42.661898+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:43.662099+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:44.662277+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:45.662453+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:46.662649+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:47.662805+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:48.662940+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:49.663167+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:50.663343+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63143936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 302.284851074s of 302.442108154s, submitted: 90
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:51.663482+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:52.663775+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:53.663987+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:54.664181+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:55.664452+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:56.664648+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:57.664855+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:58.665054+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:59.665220+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:00.665390+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:01.665555+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:02.665775+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:03.665945+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:04.666092+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:05.666205+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:06.666345+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:07.666567+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:08.666739+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:09.666963+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:10.667125+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:11.667326+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:12.667530+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:13.667679+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:14.667904+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:15.668113+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:16.668297+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:17.668469+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:18.668650+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:19.668888+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:20.669064+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:21.669239+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:22.669543+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:23.669777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:24.669937+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:25.670149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:26.670311+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:27.670490+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:28.670664+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:29.670899+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:30.671177+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:31.671460+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:32.671746+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:33.671942+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:34.672115+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:35.672276+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:36.672489+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:37.672647+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63102976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:38.672839+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63102976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:39.673024+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63102976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:40.673185+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63102976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:41.673384+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:42.673607+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:43.673841+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:44.674050+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:45.674259+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:46.674494+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:47.674811+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:48.675020+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:49.675224+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:50.675402+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:51.675613+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:52.675886+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:53.676077+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:54.676233+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:55.676422+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:56.676614+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:57.676778+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:58.676992+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:59.677197+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:00.677427+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:01.677637+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:02.677925+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:03.678122+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:04.678302+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:05.678583+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:06.678866+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:07.679080+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:08.679274+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:09.679488+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:10.679810+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:11.680034+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:12.680260+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:13.680387+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:14.680526+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63078400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:15.680669+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:16.680884+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:17.681054+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:18.681241+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:19.681365+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:20.681493+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:21.681651+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:22.681948+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:23.682137+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:24.682296+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:25.682411+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63062016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:26.682532+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 63053824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:27.682649+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 63053824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:28.682799+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:29.683024+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 63053824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:30.683961+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:31.684203+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:32.686448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:33.686725+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:34.687553+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:35.687796+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:36.688246+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:37.688483+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:38.688633+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:39.948757+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:40.949044+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:41.949170+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:42.949338+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:43.949484+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:44.949884+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:45.950056+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:46.950401+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:47.950567+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:48.950928+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:49.951055+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:50.951356+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:51.951551+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:52.951775+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:53.951983+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:54.952306+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:55.952532+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:56.952744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:57.952912+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:58.953182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:59.953384+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:00.953598+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:01.953763+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:02.954011+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:03.954197+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:04.954376+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:05.954543+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:06.954758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:07.954924+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:08.955111+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:09.955312+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:10.955505+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:11.955666+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:12.955931+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:13.956125+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:14.956370+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:15.956576+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:16.956799+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:17.956956+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:18.957061+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:19.957197+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:20.957414+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:21.957590+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63004672 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:22.958035+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63004672 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:23.958234+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:24.958434+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:25.958657+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:26.958785+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:27.958925+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:28.959122+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:29.959306+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:30.959448+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:31.959621+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:32.959746+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:33.959953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:34.960128+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:35.960305+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:36.960624+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:37.960802+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:38.960965+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:39.961142+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:40.961356+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:41.961585+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:42.961855+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:43.962022+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:44.962175+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:45.962343+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:46.962541+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:47.962706+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:48.962856+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:49.962983+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:50.963134+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:51.963312+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:52.963543+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:53.963759+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:54.963934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:55.964131+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:56.964302+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:57.964464+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:58.964650+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:59.964844+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:00.965018+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:01.965187+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:02.965391+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:03.965558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:04.965768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:05.965984+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:06.966141+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:07.966352+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:08.966489+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:09.966730+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:10.966923+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:11.967082+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:12.967304+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:13.967484+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:14.967744+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:15.967872+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:16.968134+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:17.968309+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:18.968547+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:19.968792+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:20.969048+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:21.969267+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:22.969507+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:23.969734+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:24.969944+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:25.970100+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:26.970329+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:27.970544+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:28.970761+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:29.970919+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:30.971099+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:31.971290+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:32.971586+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:33.971784+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:34.971968+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:35.972149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:36.972390+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:37.972600+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:38.972823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:39.972974+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:40.973149+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:41.973396+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:42.973661+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:43.973953+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:44.974122+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:45.974276+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:46.974441+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:47.974620+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:48.974746+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:49.974873+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:50.975043+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:51.975172+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:52.975347+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:53.975513+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.6 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.74 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:54.975845+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:55.976106+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:56.976295+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:57.976493+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:58.976649+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:59.976832+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:00.977030+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:01.977246+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:02.977456+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:03.977837+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:04.978044+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:05.978281+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:06.978507+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:07.978654+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:08.978992+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:09.979212+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:10.979415+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:11.979614+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:12.979822+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:13.980001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:14.980247+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:15.980431+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:16.980572+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:17.980768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:18.980919+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:19.981110+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:20.981357+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:21.981547+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:22.981758+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:23.981938+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:24.982122+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:25.982322+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:26.982527+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:27.982809+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:28.982987+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:29.983157+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:30.983356+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:31.983647+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:32.983963+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:33.984120+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:34.984320+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:35.984503+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:36.984717+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:37.984941+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:38.985136+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:39.985425+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:40.985747+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:41.985980+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:42.986286+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:43.986523+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:44.986727+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 62930944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:45.987001+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 62930944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:46.987182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:47.987377+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 297.608093262s of 297.648681641s, submitted: 24
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:48.987632+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:49.987860+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:50.988050+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:51.988198+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:52.988382+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:53.988757+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 62914560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:54.988891+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 62906368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:55.989070+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 62898176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:56.989221+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 62898176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:57.989379+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 62898176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:58.989526+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 62889984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.786643982s of 10.974362373s, submitted: 38
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:59.989710+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 62881792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:00.989934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 62881792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:01.990126+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:02.990289+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 nova_compute[244014]: 2026-02-25 14:15:28.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:03.990417+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:04.990599+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:05.990796+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:06.990964+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:07.991138+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:08.991363+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:09.991568+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:10.991786+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:11.991949+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:12.992170+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:13.992374+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:14.992542+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:15.992764+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:16.992988+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:17.993174+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:18.993339+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:19.993516+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:20.993777+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:21.994025+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:22.994256+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:23.994471+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:24.994728+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:25.994972+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:26.995158+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:27.995338+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:28.995476+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:29.995588+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:30.995724+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:31.995917+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:32.996118+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:33.996217+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:34.996342+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:35.996478+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:36.996622+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:37.996786+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:38.997013+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:39.997226+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:40.997446+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:41.997638+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:42.997885+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:43.998064+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:44.998182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:45.998444+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:46.998653+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:47.998903+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:48.999108+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:49.999295+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:50.999504+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:51.999648+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:52.999871+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:54.000002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:55.000377+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:56.000570+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:57.000770+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:58.000985+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:59.001166+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:00.001469+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:01.001658+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:02.001823+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:03.002002+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:04.002166+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:05.002328+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:06.002505+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:07.002678+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:08.002917+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:09.003137+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:10.003312+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:11.003490+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:12.003662+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:13.003912+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:14.004067+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:15.004221+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:16.004372+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:17.004507+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:18.004662+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:19.004849+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:20.005043+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:21.005231+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:22.005413+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:23.005728+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:24.005964+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:25.006235+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:26.006476+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:27.006625+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:28.006807+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:29.006978+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:30.007133+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:31.007271+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:32.007393+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:33.007578+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:34.007757+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:35.007991+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:36.008164+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:37.008333+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:38.008488+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:39.008622+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:40.008807+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:41.008997+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:42.009200+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:43.009379+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:44.009544+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:45.009752+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:46.009911+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:47.010100+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:48.010236+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:49.010410+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:50.010574+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:51.010791+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:52.010951+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:53.011112+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:54.011262+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:55.011418+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:56.011562+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:57.011789+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:58.011901+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:59.012050+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:00.012172+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:01.012308+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:02.012465+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:03.012655+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:04.012801+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:05.012999+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:06.013195+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:07.013392+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:08.013546+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:09.013810+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:10.014000+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:11.014126+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:12.014268+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:13.014437+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:14.014577+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:15.014766+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:16.014929+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:17.015056+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:18.015204+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:19.015344+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:20.015477+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000093s
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:21.015631+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:22.015821+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:23.016007+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:24.016126+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:25.016269+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:26.016396+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:27.016554+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:28.016713+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:29.016900+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:30.017063+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:31.017203+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:32.017429+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:33.017601+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:34.017737+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:35.017876+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:36.018059+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:37.018248+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:38.018418+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:39.018566+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:40.018767+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:41.018915+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:42.019087+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:43.019269+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:44.019454+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:45.019608+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:46.019778+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:47.019906+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:48.020023+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:49.020211+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:50.020421+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:51.020566+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:52.020750+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:53.020934+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:54.021069+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:55.021239+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:56.021423+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:57.021627+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:58.021826+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:59.022037+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:00.022241+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:01.022449+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:02.022629+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:03.022843+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:04.022995+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:05.023182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:06.023337+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:07.023556+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:08.023723+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:09.023848+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:10.023991+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:11.024141+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:12.024291+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:13.024428+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:14.024578+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:15.024780+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:16.024963+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:17.025133+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:18.025361+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:19.025571+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:20.025713+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:21.025861+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:22.026039+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:23.026224+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:24.026404+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:25.026538+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:26.026684+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:27.026903+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:28.027059+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:29.027182+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:30.027342+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:31.027492+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:32.027647+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:33.027935+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:34.028056+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:35.028261+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:36.028509+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:37.028636+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:38.028768+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:39.028896+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:40.029086+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:41.029291+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:42.029503+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:43.029735+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:44.029848+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:45.029962+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:46.030084+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:47.030221+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:48.030371+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:49.030558+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:50.030727+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:51.030933+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:52.031086+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:53.031266+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:54.031388+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:55.031584+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'config diff' '{prefix=config diff}'
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 62586880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:28 compute-0 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:28 compute-0 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'config show' '{prefix=config show}'
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'counter dump' '{prefix=counter dump}'
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:56.031755+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'counter schema' '{prefix=counter schema}'
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
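
The do_command entries here and just below ('config diff', 'config show', 'counter dump', 'counter schema', 'log dump') are admin-socket requests against the OSD; "result is 0 bytes" only describes the reply payload size recorded at this debug level, not a failure. The same commands can be driven from the standard ceph daemon CLI; a small wrapper sketch (the daemon name is an example, not taken from this log):

    import json
    import subprocess

    def daemon_cmd(daemon, *cmd):
        """Run an admin-socket command via `ceph daemon` and parse the JSON reply."""
        out = subprocess.run(["ceph", "daemon", daemon, *cmd],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else None

    diff = daemon_cmd("osd.2", "config", "diff")  # mirrors the do_command above
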
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 305004544 unmapped: 62455808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:57.031898+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 62537728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: tick
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_tickets
Feb 25 14:15:28 compute-0 ceph-osd[89088]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:58.032024+0000)
Feb 25 14:15:28 compute-0 ceph-osd[89088]: do_command 'log dump' '{prefix=log dump}'
Feb 25 14:15:28 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 14:15:28 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 14:15:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:28 compute-0 podman[438524]: 2026-02-25 14:15:28.75332572 +0000 UTC m=+0.095168428 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 25 14:15:28 compute-0 podman[438525]: 2026-02-25 14:15:28.764754414 +0000 UTC m=+0.104796891 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
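
The two podman entries above are periodic healthcheck events for the ovn_metadata_agent and ovn_controller containers: each runs the mounted /openstack/healthcheck script and reports health_status=healthy with health_failing_streak=0. The same check can be triggered on demand; a sketch (container name taken from the log; exit code 0 means healthy):

    import subprocess

    def container_healthy(name):
        """Re-run a container's configured healthcheck via podman."""
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(container_healthy("ovn_metadata_agent"))
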
Feb 25 14:15:28 compute-0 ceph-mon[76335]: pgmap v4666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:28 compute-0 ceph-mon[76335]: from='client.23410 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:28 compute-0 ceph-mon[76335]: from='client.23414 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:28 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3999877447' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 25 14:15:28 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:28 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 25 14:15:28 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276674330' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23420 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 14:15:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 25 14:15:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117185116' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23424 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='client.23416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/276674330' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='client.23420 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1117185116' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: from='client.23424 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23428 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:29 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 25 14:15:29 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092125088' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 14:15:30 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:30 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 14:15:30 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038890009' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 14:15:30 compute-0 ceph-mon[76335]: pgmap v4667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:30 compute-0 ceph-mon[76335]: from='client.23428 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4092125088' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 14:15:30 compute-0 ceph-mon[76335]: from='client.23430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:30 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1038890009' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 14:15:30 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23434 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
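
The monitor applies the same style of cache budgeting as the OSDs: _set_new_cache_sizes above splits a roughly 1 GB cache_size across what appear to be incremental-map, full-map and kv allocations (an interpretation of the field names, not stated in this log). Checking the arithmetic of that line:

    # mon _set_new_cache_sizes: the three allocations sit just under cache_size.
    cache_size = 1020054731
    parts = (343932928, 348127232, 318767104)  # inc_alloc, full_alloc, kv_alloc
    print(sum(parts), f"= {sum(parts) / cache_size:.1%} of cache_size")
    # -> 1010827264 = 99.1% of cache_size
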
Feb 25 14:15:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 25 14:15:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941910060' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23438 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:15:31
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
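
This balancer pass summarizes as: mode upmap, a 0.05 ceiling on the misplaced-PG ratio, eleven pools scanned, and 0 of at most 10 upmap changes prepared, i.e. placement is already even. Against the 305 PGs reported by the pgmap lines above, the 5% cap translates to roughly 15 PGs allowed to be misplaced at any one time:

    # Headroom implied by "max misplaced 0.050000" over 305 PGs (pgmap above).
    total_pgs, max_misplaced = 305, 0.05
    print(int(total_pgs * max_misplaced))  # -> 15
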
Feb 25 14:15:31 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 25 14:15:31 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454671536' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 14:15:31 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23442 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:31 compute-0 ceph-mon[76335]: from='client.23434 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2941910060' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 25 14:15:31 compute-0 ceph-mon[76335]: from='client.23438 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:31 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3454671536' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23446 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 14:15:32 compute-0 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:32.258+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
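
The "(95) Operation not supported" reply above is expected: 'healthcheck history ls' is served by the prometheus mgr module, which is not loaded on this cluster, and the reply spells out the remedy itself. Scripted, that fix is simply:

    import subprocess

    # The command suggested by the mgr reply above; needs client.admin privileges.
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
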
Feb 25 14:15:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 25 14:15:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337511007' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:18.287548+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:19.287862+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:20.288123+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:21.288309+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:22.288509+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:23.288780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:24.289048+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:25.289268+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:26.289489+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:27.289766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:28.289961+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:29.290200+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:30.290405+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:31.290668+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:32.290971+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:33.291222+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:34.291410+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:35.291652+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:36.291919+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:37.292101+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:38.292270+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:39.292449+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:40.292827+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:41.293107+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:42.293383+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:43.293591+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:44.293858+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:45.294041+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:46.294321+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:47.294533+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:48.294748+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:49.294929+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:50.295100+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:51.295367+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:52.295565+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:53.295800+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:54.296021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:55.296236+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:56.296477+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:57.296625+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:58.297061+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:59.297219+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:00.297401+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:01.297562+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:02.297838+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:03.298120+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:04.298242+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:05.298329+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:06.298523+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:07.298675+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:08.298925+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:09.299058+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:10.299220+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:11.299370+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:12.299580+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:13.299772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:14.300030+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:15.300252+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:16.300507+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:17.300740+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:18.301001+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:19.301300+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:20.301551+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:21.301779+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:22.301972+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:23.302193+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:24.302418+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:25.302560+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:26.302770+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:27.302995+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:28.303193+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:29.303420+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:30.303657+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:31.303972+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:32.304168+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:33.304356+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:34.304504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 73695232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:35.304663+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:36.304909+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:37.305064+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:38.305265+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:39.305460+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:40.305645+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:41.305848+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:42.306017+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:43.306185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:44.306314+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:45.306462+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:46.306639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:47.306796+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:48.306935+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:49.307064+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:50.307208+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:51.307338+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:52.307536+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:53.307904+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:54.308124+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:55.308338+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:56.308517+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:57.308781+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:58.309014+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:59.309231+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:00.309457+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:01.309589+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:02.309779+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:03.310047+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:04.310338+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:05.310522+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:06.310762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:07.311061+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:08.311323+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:09.311576+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:10.311757+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:11.311918+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:12.312125+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:13.312304+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:14.312504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:15.312764+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:16.313187+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:17.313379+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:18.313599+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:19.313742+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:20.313909+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:21.314106+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:22.314269+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:23.314457+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:24.314607+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:25.314761+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:26.315018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:27.315296+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 73637888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:28.315442+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:29.315597+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:30.315775+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:31.315912+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:32.316185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:33.316418+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:34.316631+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:35.316802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:36.316992+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:37.317227+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:38.317469+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:39.317755+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:40.318048+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:41.318374+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:42.318660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:43.319105+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 73613312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:44.319396+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 73613312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:45.319591+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:46.319785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:47.320066+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:48.320418+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:49.320671+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:50.321091+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:51.321387+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:52.321584+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:53.321851+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:54.322100+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:55.322346+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:56.322608+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:57.322823+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:58.323088+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:59.323239+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:00.323412+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:01.323594+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:02.323780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:03.323980+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:04.324180+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:05.324345+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:06.324595+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:07.324774+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:08.324940+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:09.325164+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:10.325357+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:11.325554+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:12.325733+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:13.325893+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:14.326088+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:15.326266+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:16.326609+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:17.326825+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:18.327006+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:19.327171+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:20.327330+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:21.327482+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:22.327830+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:23.328057+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:24.328329+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:25.328580+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:26.328839+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:27.329016+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:28.329276+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:29.329437+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:30.329661+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:31.329958+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365969408 unmapped: 73547776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:32.330136+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365969408 unmapped: 73547776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:33.330279+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365969408 unmapped: 73547776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:34.330443+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:35.330639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:36.330989+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:37.331186+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:38.331420+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:39.331625+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:40.331816+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:41.332009+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:42.332225+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:43.332447+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:44.332646+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:45.332982+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:46.333231+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:47.333372+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:48.333514+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:49.333686+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:50.333903+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:51.334049+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:52.334286+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:53.334475+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:54.334772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:55.335030+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:56.335307+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:57.335562+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:58.335801+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:59.336018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:00.336185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:01.336375+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:02.336571+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:03.336765+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:04.336926+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:05.337182+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:06.337401+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:07.337619+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:08.337831+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:09.338120+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:10.338349+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:11.338526+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:12.338737+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:13.338910+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:14.339319+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:15.339511+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:16.339847+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:17.340086+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:18.340277+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:19.340469+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:20.340676+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:21.341033+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:22.341226+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:23.341394+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:24.341593+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:25.341792+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:26.342093+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:27.342271+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:28.342534+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:29.351468+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:30.351742+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:31.351938+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:32.352143+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:33.352389+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:34.352590+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 73465856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:35.352822+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:36.353031+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:37.353193+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:38.353380+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:39.353602+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:40.353967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:41.354124+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:42.354308+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:43.354496+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:44.354682+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:45.355081+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:46.355357+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:47.355485+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:48.355657+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:49.355831+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:50.356030+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:51.356261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:52.356579+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:53.356768+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:54.356931+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:55.357173+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:56.357408+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:57.357662+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:58.358039+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:59.358194+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:00.358377+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:01.358560+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:02.358851+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:03.359087+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:04.359255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:05.359498+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:06.359741+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:07.359950+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:08.360177+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:09.360344+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:10.360532+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:11.360747+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:12.360935+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:13.361142+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:14.361386+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:15.361669+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:16.361971+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:17.362183+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:18.362357+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:19.362524+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:20.362723+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:21.362946+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:22.363145+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:23.363389+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 73392128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:24.363612+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:25.363812+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:26.364043+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:27.364240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:28.364519+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:29.364769+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:30.364968+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:31.365178+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:32.365386+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:33.365597+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:34.365781+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:35.366002+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:36.366214+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:37.366490+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:38.366823+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:39.367009+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:40.367213+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:41.367384+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:42.367566+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:43.367763+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:44.367920+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:45.368115+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:46.368342+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:47.368568+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:48.368764+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:49.368933+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:50.369124+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:51.369334+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:52.369492+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:53.369680+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:54.369933+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:55.370397+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:56.370857+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:57.371056+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:58.371219+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:59.371361+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:00.371528+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:01.371736+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:02.371920+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:03.372080+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:04.372247+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:05.372459+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:06.372779+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:07.372995+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:08.373210+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:09.373415+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:10.373659+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:11.373983+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:12.374187+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:13.374434+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:14.374621+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:15.374849+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:16.375028+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:17.375257+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:18.375428+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:19.375610+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:20.375778+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:21.375923+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:22.376088+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:23.376255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:24.376431+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:25.376621+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:26.376950+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:27.377138+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:28.377330+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:29.377467+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:30.377622+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:31.377761+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:32.377920+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:33.378094+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:34.378240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:35.378424+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:36.378648+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:37.378789+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:38.378976+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:39.379166+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:40.379344+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:41.379478+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:42.379631+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:43.379779+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:44.379940+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:45.380062+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:46.380241+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:47.380454+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:48.380641+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:49.380814+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:50.380953+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:51.381080+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:52.381252+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:53.381459+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:54.381654+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:55.381883+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:56.382716+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:57.382882+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:58.383068+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:59.383222+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:00.383369+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:01.383540+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:02.383812+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:03.383982+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:04.384133+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:05.384300+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:06.384511+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:07.384892+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:08.385078+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:09.385614+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:10.385839+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:11.386066+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 73220096 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:12.386239+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 73220096 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:13.386400+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:14.386574+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:15.386805+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:16.387097+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:17.387448+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:18.387780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:19.388033+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:20.388187+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:21.388435+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:22.388643+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:23.388875+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:24.389084+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:25.389401+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:26.389638+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:27.389819+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:28.390019+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:29.390174+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:30.390314+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:31.390523+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:32.390742+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:33.390910+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:34.391111+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:35.391357+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:36.391646+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:37.391905+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:38.392111+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:39.392303+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 73170944 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:40.392499+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:41.392754+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:42.392986+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:43.393224+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:44.393468+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:45.393628+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:46.393897+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:47.394092+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:48.394329+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:49.394508+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:50.394660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:51.394877+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:52.395072+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:53.395352+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:54.395631+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:55.395807+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:56.396016+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:57.396248+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:58.396484+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:59.396766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:00.396959+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:01.397241+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:02.397446+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:03.397673+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:04.397892+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:05.398038+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:06.398225+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:07.398546+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:08.398780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:09.399007+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 73129984 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:10.399217+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-02-25T13:49:11.399391+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _finish_auth 0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:11.400448+0000)
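This is the only state change in the section: at the 13:49:11 cutoff the tick finds the rotating keys stale ("they expired before ..."), sends a single request to mon.compute-0 over msgr v2 (192.168.122.100:3300), and the successful reply ("_finish_auth 0") installs fresh secrets, so the very next check passes again. A toy model of that round trip (not the MonClient code):

```python
# Toy model (not Ceph source) of the renewal round trip in the five lines
# above: stale keys trigger exactly one monitor request, and the success
# reply restores a future expiry for subsequent ticks.
import time

def tick(expiry: float, cutoff: float, request_new_keys) -> float:
    """One monclient tick: renew the rotating keys only when stale."""
    if expiry <= cutoff:             # "renewing rotating keys (they expired before ...)"
        expiry = request_new_keys()  # "_send_mon_message ..." / "_finish_auth 0"
    return expiry

now = time.time()
print(tick(now - 1.0, now, lambda: now + 3600.0) > now)  # True: keys renewed
```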
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:12.399665+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:13.399890+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:14.400108+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:15.400367+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:16.400725+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:17.400958+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:18.401116+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:19.401342+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:20.401523+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:21.401788+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:22.402001+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:23.402201+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:24.402432+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:25.402822+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:26.403194+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:27.403410+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:28.403590+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:29.403786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:30.403982+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:31.404197+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:32.404402+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:33.404661+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:34.404952+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:35.405217+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:36.405463+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:37.405637+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:38.405802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:39.430470+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:40.430661+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:41.430820+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:42.431008+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:43.431156+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:44.431323+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:45.431496+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:46.431710+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:47.431893+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:48.432119+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
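[editor's note] RocksDB emits this DB Stats dump once per stats interval (600.0 s here, against 7200.2 s of uptime, i.e. the default stats_dump_period_sec). The derived columns are just ratios of the raw counters; the interval WAL figures, for instance, reproduce directly:

    interval_secs = 600.0
    wal_writes, wal_syncs = 180, 90   # "Interval WAL" counters
    ingest_mb = 0.09                  # "Interval writes ... ingest"

    print(f"{wal_writes / wal_syncs:.2f} writes per sync")   # 2.00, as logged
    print(f"{ingest_mb / interval_secs:.5f} MB/s")           # 0.00015, logged as 0.00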
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:49.432308+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:50.432551+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:51.432745+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:52.432938+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:53.433159+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:54.433355+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:55.433540+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3671c1800 session 0x55a370d56a80
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a368f00c00
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc ms_handle_reset ms_handle_reset con 0x55a36cb89800
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: get_auth_request con 0x55a369b55800 auth_method 0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc handle_mgr_configure stats_period=5
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:56.433864+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b57400 session 0x55a370d53500
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a369b4b800
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3686f5400 session 0x55a370d56540
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a369b51800
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:57.434061+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74113024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:58.434314+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74113024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:59.434504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:00.434729+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:01.434942+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:02.435232+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:03.435429+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:04.435634+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:05.435826+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:06.436090+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:07.436233+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:08.436423+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:09.436568+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:10.436733+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:11.436883+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:12.437060+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:13.437239+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:14.437423+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:15.437579+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:16.437816+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:17.438150+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:18.438355+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:19.438677+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:20.439111+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:21.439385+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:22.439614+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:23.439817+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:24.440033+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:25.440217+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:26.440563+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:27.440808+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:28.441217+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:29.441487+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:30.441774+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:31.441933+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:32.442093+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:33.442514+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:34.442757+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:35.443089+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:36.443396+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:37.443609+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:38.443883+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b56c00 session 0x55a36b5fae00
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a3686f5400
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:39.444054+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:40.444249+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:41.444463+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:42.444650+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:43.444928+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:44.445148+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:45.445475+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:46.445794+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 598.568664551s of 598.707702637s, submitted: 90
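[editor's note] The _kv_sync_thread utilization line says the BlueStore kv-sync thread slept for nearly the entire ~600 s reporting window while committing 90 batches. Working that out from the logged figures:

    idle, total, submitted = 598.568664551, 598.707702637, 90
    busy = total - idle
    print(f"busy {busy:.3f} s ({busy / total:.4%} of the window)")
    print(f"{submitted / total:.3f} commits/s, one every {total / submitted:.2f} s")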
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:47.445948+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:48.446120+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:49.446289+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:50.446464+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: message repeated 10 times: [ <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)]
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:51.446607+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 74014720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:52.446852+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 74014720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:53.447046+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 73957376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:54.447201+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:55.447360+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:56.447585+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:57.447787+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:58.447928+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:59.448119+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:00.448360+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:01.448552+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:02.448789+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:03.448970+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:04.449157+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:05.449368+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:06.449646+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:07.449839+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:08.450054+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:09.450235+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:10.450529+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:11.450736+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:12.450927+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:13.451109+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:14.451268+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:15.451481+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:16.451821+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:17.451985+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:18.452198+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:19.452385+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:20.452628+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:21.452854+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:22.453046+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:23.453278+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:24.453547+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:25.453762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:26.454031+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:27.454228+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:28.454469+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:29.454762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:30.455046+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:31.455273+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:32.455482+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:33.455846+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:34.456033+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:35.456259+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:36.456521+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:37.456773+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:38.456987+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:39.457122+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:40.457394+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:41.457674+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:42.457924+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:43.458114+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:44.458298+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:45.458504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:46.458792+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:47.459008+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:48.459262+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:49.459493+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:50.459747+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:51.459988+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:52.460195+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:53.460382+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:54.460569+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:55.460760+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:56.461058+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:57.461252+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:58.461513+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:59.461802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:00.461992+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:01.462190+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:02.462405+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:03.462592+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:04.462747+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:05.462961+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:06.463229+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:07.463422+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:08.463629+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:09.463855+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:10.464120+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:11.464291+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:12.464498+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:13.464783+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:14.464992+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:15.465255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:16.465530+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:17.465831+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:18.466043+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:19.466228+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:20.466430+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:21.466637+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:22.466845+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 73908224 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:23.467004+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 73908224 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:24.467157+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 73908224 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:25.467371+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:26.468754+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:27.470071+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:28.470814+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:29.471113+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:30.471968+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:31.472812+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:32.473316+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:33.473534+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:34.474249+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:35.474946+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:36.475271+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:37.475605+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:38.475885+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73891840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:39.476059+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73891840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:40.476539+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:41.476830+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:42.477045+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:43.477240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:44.477565+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:45.477776+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:46.478121+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:47.478538+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:48.478876+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:49.479098+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:50.479355+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:51.479598+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:52.479797+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:53.479970+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:54.480261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:55.480497+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:56.480807+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:57.481001+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:58.481216+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:59.481385+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:00.481576+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:01.481785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:02.481989+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:03.482176+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:04.482378+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:05.482592+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:06.482871+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:07.483122+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:08.483347+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:09.483559+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:10.483785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:11.483946+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:12.484125+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:13.484295+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:14.484448+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:15.484648+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:16.484939+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:17.485122+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:18.485320+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:19.485547+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:20.485768+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:21.485958+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:22.486147+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 73859072 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:23.486332+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:24.486504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:25.486647+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:26.486914+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:27.487084+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:28.487786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:29.487990+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:30.488180+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:31.488356+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:32.488930+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:33.489398+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:34.489736+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:35.489938+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:36.490205+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:37.490482+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:38.490760+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:39.490970+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:40.491289+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:41.491569+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:42.491847+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:43.492114+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:44.492298+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:45.492471+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365682688 unmapped: 73834496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:46.492759+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365682688 unmapped: 73834496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:47.492942+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:48.493126+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:49.493325+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:50.493538+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:51.493787+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:52.494025+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:53.494318+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:54.494611+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:55.494793+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:56.494991+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:57.495336+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:58.495528+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:59.495726+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:00.495938+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:01.496203+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:02.496491+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:03.496651+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:04.496924+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:05.497153+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:06.497403+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:07.497649+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:08.497878+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:09.498088+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:10.498320+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:11.498517+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:12.498821+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:13.499009+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:14.499247+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:15.499461+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:16.499670+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:17.499967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:18.500198+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365723648 unmapped: 73793536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:19.500402+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:20.500681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:21.500966+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:22.501244+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:23.501499+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:24.501773+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:25.502096+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:26.502345+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:27.502529+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:28.502862+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:29.503060+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:30.503326+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365756416 unmapped: 73760768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:31.503486+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:32.503708+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:33.503888+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:34.504068+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:35.504220+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:36.504440+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:37.504646+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:38.504843+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:39.505094+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:40.505307+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:41.505510+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:42.505732+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:43.505947+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:44.506153+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:45.506377+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:46.506563+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:47.506785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:48.506906+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:49.507058+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:50.507215+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:51.507407+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:52.507602+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:53.507799+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:54.507969+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:55.508266+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:56.508478+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:57.508668+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:58.508871+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:59.509014+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:00.509182+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:01.509390+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:02.509589+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:03.509767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:04.509941+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:05.510099+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:06.510337+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:07.510500+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:08.510667+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:09.510883+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:10.511061+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:11.511257+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:12.511425+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:13.511608+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:14.511786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:15.511986+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:16.512185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:17.512358+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:18.512531+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:19.512648+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:20.512847+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:21.513021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:22.513208+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:23.513361+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:24.513568+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:25.513810+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:26.514037+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:27.514258+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:28.514436+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:29.514656+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:30.514934+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:31.515115+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:32.515364+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:33.515660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:34.515872+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:35.516060+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:36.516243+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:37.516401+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
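
The _send_mon_message line above is the one departure from the repeating pattern: the MonClient keeps its monitor session alive by sending to mon.compute-0 at v2:192.168.122.100:3300/0, i.e. over the msgr2 protocol on its standard port 3300. Everything else in the burst is one of a handful of recurring message kinds, so a tally is often more informative than the raw lines; a hedged sketch (the match substrings are copied verbatim from this log, while the pipeline in the comment is only an example):

    # Count the recurring ceph-osd message kinds on stdin, e.g.
    #   grep 'ceph-osd\[88012\]' /var/log/messages | python3 tally.py
    # (the log path is an assumption; feed it whatever holds this journal).
    import sys
    from collections import Counter

    KINDS = ('monclient: tick', 'monclient: _check_auth_tickets',
             'monclient: _check_auth_rotating', 'monclient: _send_mon_message',
             'prioritycache tune_memory', 'rocksdb: commit_cache_size',
             'bluestore.MempoolThread _resize_shards', 'heartbeat osd_stat')

    counts = Counter()
    for line in sys.stdin:
        for kind in KINDS:
            if kind in line:
                counts[kind] += 1
                break

    for kind, n in counts.most_common():
        print(f'{n:6d}  {kind}')
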
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:38.516591+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:39.516749+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:40.516921+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:41.517090+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:42.517304+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:43.517498+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:44.517683+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:45.517960+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:46.518228+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:47.518438+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:48.518629+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:49.518813+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:50.519005+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:51.519165+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:52.519390+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:53.519577+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
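
The heartbeat lines (two of them back to back just above) show osd.1, at what appears to be osdmap epoch 299, heartbeating with its peers osd.0 and osd.2 and carrying a store_statfs snapshot whose fields never change across the burst, consistent with an idle OSD. Decoding the hex values copied from the line above (the labels follow BlueStore's usual statfs ordering of available/reserved/total and data stored/allocated, and should be read as an assumption):

    # Worked decoding of the store_statfs fields from the heartbeat line
    # above; the field labels are assumptions, the values are verbatim.
    GiB, MiB = 2 ** 30, 2 ** 20

    avail, reserved, total = 0x4e908a000, 0x0, 0x4ffc00000
    stored, allocated = 0x125aa1a, 0x1402000
    omap, meta = 0x77124, 0x156f8edc

    print(f'device : {avail / GiB:.2f} GiB free of {total / GiB:.2f} GiB')   # ~19.64 of ~20.00
    print(f'data   : {stored / MiB:.1f} MiB stored in {allocated / MiB:.1f} MiB allocated')
    print(f'omap   : {omap / MiB:.2f} MiB, meta: {meta / MiB:.1f} MiB')

So the roughly 20 GiB OSD holds only about 18 MiB of object data plus about 343 MiB of internal metadata, and the empty "op hist []" is consistent with no recent client ops.
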
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:54.519765+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:55.519936+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:56.520178+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:57.520354+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:58.520562+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:59.520785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 73637888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:00.520935+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:01.521090+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:02.521261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:03.521390+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:04.521546+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:05.521767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:06.522033+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:07.522230+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:08.522379+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:09.522528+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:10.522778+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:11.522971+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:12.523163+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:13.523382+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:14.523577+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:15.523739+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:16.523934+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:17.524236+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:18.524408+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:19.524567+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:20.524765+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:21.524972+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:22.525137+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 73613312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:23.525292+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:24.525474+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:25.525623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:26.525858+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:27.526067+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:28.526278+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:29.526438+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:30.526601+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:31.526753+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:32.526956+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:33.527178+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:34.527337+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:35.527507+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:36.527769+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:37.527967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:38.528201+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:39.528405+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:40.528566+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:41.528738+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:42.529001+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:43.529257+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:44.529425+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:45.529584+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:46.529763+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:47.529974+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:48.530168+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:49.530354+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:50.530578+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:51.530765+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:52.530944+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:53.531198+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:54.531414+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:55.531641+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:56.531993+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:57.532197+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:58.532405+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:59.532623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:00.532910+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:01.533103+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:02.533304+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:03.533506+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:04.533754+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:05.533926+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:06.534252+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:07.534537+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:08.534837+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:09.535058+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:10.535308+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:11.535593+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:12.535785+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:13.535976+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:14.536236+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:15.536504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:16.536783+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:17.536976+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:18.537123+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:19.537325+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:20.537542+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:21.537774+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:22.538083+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:23.538335+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:24.538541+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:25.538770+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:26.539048+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:27.539235+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:28.539425+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:29.539660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:30.539879+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:31.540034+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:32.540236+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:33.540476+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:34.540814+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:35.540994+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:36.541231+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:37.541407+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:38.541594+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:39.541845+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:40.542082+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:41.542267+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:42.542499+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:43.542810+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:44.543060+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [1])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:45.543298+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:46.543601+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:47.543768+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:48.543924+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:49.544171+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:50.544459+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:51.544592+0000)
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:52.544764+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:53.544981+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:54.545324+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:55.545491+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:56.545764+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:57.545955+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:58.546178+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:59.546363+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:00.546537+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:01.546771+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:02.547000+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:03.547182+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:04.547351+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:05.547544+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:06.547898+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:07.548136+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:08.548411+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:09.548801+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:10.548978+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:11.549255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:12.549640+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:13.549843+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:14.550011+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:15.550145+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:16.550342+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:17.550521+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:18.550868+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:19.551027+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 73465856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:20.551180+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 73457664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:21.551374+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 73457664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:22.551547+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:23.551681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:24.551911+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:25.552157+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:26.552371+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:27.552589+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:28.552863+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:29.553060+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:30.553275+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:31.553542+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:32.553772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:33.554010+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:34.554164+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:35.554382+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:36.554648+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:37.554810+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:38.554981+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:39.555176+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:40.555363+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:41.555600+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:42.555772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:43.555977+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:44.556147+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:45.556357+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:46.556636+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:47.556817+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:48.556985+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:49.557128+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:50.557316+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:51.557640+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:52.557838+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:53.558045+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:54.558223+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:55.558432+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:56.558661+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:57.558854+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:58.559018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:59.559248+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:00.559460+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:01.559668+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:02.559860+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:03.560021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:04.560238+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:05.560448+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:06.560681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:07.560914+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:08.561128+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:09.561326+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:10.561546+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:11.561779+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:12.561953+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:13.562134+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:14.562314+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:15.562495+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:16.562780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:17.563017+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:18.563207+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:19.563370+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:20.563639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:21.563837+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:22.564070+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:23.564321+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:24.564504+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:25.564680+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:26.564922+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:27.577228+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:28.577373+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:29.577559+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:30.577823+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:31.577992+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:32.578169+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:33.578347+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:34.578554+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:35.578766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:36.579046+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:37.579294+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:38.579451+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:39.579675+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:40.579899+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:41.580100+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:42.580280+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:43.580457+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:44.580672+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:45.580885+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:46.581089+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:47.581281+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:48.581495+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:49.581763+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:50.581932+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:51.582094+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:52.582267+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:53.582403+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:54.582613+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:55.582791+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:56.583082+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:57.583284+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:58.583533+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:59.583792+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:00.583951+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:01.584110+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:02.584274+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:03.584434+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:04.584576+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:05.584782+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:06.585007+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:07.585195+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:08.585358+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:09.585522+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:10.585991+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:11.586175+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:12.586314+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:13.586485+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 25 14:15:32 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224906319' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:14.586794+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:15.586981+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:16.587204+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:17.587376+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:18.587570+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:19.587769+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:20.588042+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:21.588262+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:22.588468+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:23.588653+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:24.588802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:25.588996+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:26.589215+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:27.589383+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:28.589532+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:29.589679+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:30.590164+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:31.590563+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:32.590971+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:33.591191+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:34.591780+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:35.591975+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:36.592181+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:37.592387+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 73252864 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:38.592744+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 73252864 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:39.592935+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:40.593464+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:41.593650+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:42.593946+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:43.594108+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:44.594378+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:45.594564+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:46.595022+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:47.595240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:48.595428+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 601.250732422s of 601.472778320s, submitted: 132
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:49.595623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:50.595784+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:51.596018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:52.596282+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:53.596539+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:54.596733+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:55.596908+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:56.597115+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:57.597283+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:58.597448+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:59.597600+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:00.597813+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:01.597947+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:02.598094+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:03.598321+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:04.598548+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:05.598730+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:06.598901+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:07.599071+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:08.599224+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:09.599408+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:10.599588+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:11.599787+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:12.599957+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:13.600108+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:14.600285+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:15.600445+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:16.600744+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:17.600907+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:18.601064+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:19.601240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:20.601517+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:21.601749+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
[... 82 further one-second monclient cycles omitted (tick / _check_auth_tickets / _check_auth_rotating, with the auth-ticket expiry advancing one second per cycle from 2026-02-25T14:01:22 to 14:02:43); the interleaved prioritycache tune_memory, osd.1 heartbeat, rocksdb commit_cache_size, and bluestore.MempoolThread _resize_shards lines recur unchanged, except mapped creeping from 366370816 to 366436352 and unmapped correspondingly falling from 73146368 to 73080832 ...]
Feb 25 14:15:32 compute-0 ceph-mon[76335]: pgmap v4668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:32 compute-0 ceph-mon[76335]: from='client.23442 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:32 compute-0 ceph-mon[76335]: from='client.23446 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/337511007' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 25 14:15:32 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/224906319' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
[... 15 further one-second monclient cycles omitted (auth-ticket expiry advancing from 2026-02-25T14:02:44 to 14:02:58); the prioritycache tune_memory (mapped 366436352, unmapped 73080832), osd.1 heartbeat, rocksdb commit_cache_size, and bluestore.MempoolThread _resize_shards lines recur unchanged ...]
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:59.622650+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:00.622832+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:01.623104+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:02.623303+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:03.623455+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:04.623615+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:05.623837+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 73064448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:06.624030+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 73064448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:07.624187+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 73064448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:08.624431+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:09.624618+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:10.624820+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:11.625034+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:12.625251+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:13.625535+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:14.625766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:15.625993+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:16.626204+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:17.626391+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:18.626550+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:19.626712+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:20.626868+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:21.626996+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:22.627185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:23.627313+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:24.627520+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:25.627796+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:26.627995+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:27.628133+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:28.628790+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:29.628980+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:30.629162+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:31.629332+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:32.629501+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:33.629643+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:34.629800+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:35.629939+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:36.630147+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:37.630319+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:38.630457+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:39.630618+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 73023488 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:40.630811+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 73023488 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:41.631035+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 73023488 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:42.631243+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:43.631667+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:44.631889+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:45.632104+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:46.632325+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:47.632510+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:48.632718+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:49.632880+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:50.633086+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:51.633294+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:52.633489+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:53.633675+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:54.633833+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:55.633990+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:56.634225+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:57.634402+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 73007104 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:58.634585+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 73007104 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:59.634778+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:00.634935+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:01.635109+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:02.635321+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:03.635485+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:04.635660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:05.635866+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:06.636069+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:07.636232+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:08.636423+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:09.636599+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:10.636804+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:11.636961+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:12.637144+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:13.637336+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:14.637566+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:15.637764+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:16.637967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:17.638127+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:18.638253+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:19.638419+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:20.638639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:21.638847+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:22.639043+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:23.639207+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:24.639360+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:25.639530+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:26.639705+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:27.639905+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:28.640106+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:29.640370+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:30.640519+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:31.640755+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:32.640980+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:33.641220+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:34.641403+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:35.641571+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:36.641790+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:37.641953+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:38.642142+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:39.642341+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:40.642503+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:41.642754+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:42.643044+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:43.643197+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:44.643348+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:45.643535+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:46.643794+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:47.643967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:48.644206+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:49.644398+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:50.644621+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets getting new tickets!
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:51.645002+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _finish_auth 0
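This is the one cycle in the section where renewal actually happens: a ticket is close enough to expiry that the client requests fresh tickets from mon.compute-0, and _finish_auth 0 reports success. A toy model of that round trip (names illustrative, not Ceph's API):

def renew_tickets(send_to_mon):
    request = {"op": "get_tickets"}   # "getting new tickets!"
    reply = send_to_mon(request)      # "_send_mon_message to mon.compute-0 ..."
    return reply.get("result", -1)    # "_finish_auth 0" means success

assert renew_tickets(lambda req: {"result": 0}) == 0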
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:51.646079+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:52.645251+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:53.645451+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:54.645628+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:55.645868+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a368f00c00 session 0x55a368b41340
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a369b56c00
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc ms_handle_reset ms_handle_reset con 0x55a369b55800
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: get_auth_request con 0x55a3686f9c00 auth_method 0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: mgrc handle_mgr_configure stats_period=5
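Around here the OSD's mgr session drops and recovers: ms_handle_reset tears the old session down, the client redials the mgr's advertised address pair (msgr v2 preferred, v1 as fallback), and on reconnect the mgr pushes its config, including stats_period=5, i.e. report stats every 5 seconds. A hedged sketch of that dial-preference logic, illustrative only:

MGR_ADDRS = ["v2:192.168.122.100:6800", "v1:192.168.122.100:6801"]

def reconnect(dial):
    # Try the modern v2 protocol first, fall back to legacy v1.
    for addr in MGR_ADDRS:
        session = dial(addr)
        if session is not None:
            return session
    return None

# Simulate a mgr that only answers on the v2 endpoint.
print(reconnect(lambda a: {"addr": a} if a.startswith("v2:") else None))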
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:56.646067+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b4b800 session 0x55a36f53fdc0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a366d05400
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b51800 session 0x55a36de4c380
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a36637cc00
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 72908800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:57.646267+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 72900608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:58.646480+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 72900608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:59.646793+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:00.646982+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:01.647138+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:02.647326+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:03.647511+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:04.647735+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:05.647939+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:06.648136+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:07.648310+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:08.648491+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:09.648676+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:10.648884+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:11.649084+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:12.649335+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:13.649522+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:14.649748+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:15.649940+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:16.650233+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:17.650416+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:18.650616+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:19.650997+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:20.651220+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:21.651496+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:22.651681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:23.651972+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:24.652200+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:25.652344+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:26.652580+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:27.652766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:28.652969+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:29.653165+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:30.653378+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:31.653551+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:32.653806+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:33.654038+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:34.654212+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:35.654383+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:36.654623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:37.654860+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:38.655004+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3686f5400 session 0x55a368b40c40
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: handle_auth_request added challenge on 0x55a369b51800
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:39.655206+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:40.655420+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:41.655582+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:42.655822+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:43.656047+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366657536 unmapped: 72859648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:44.656272+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366657536 unmapped: 72859648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:45.656450+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:46.656664+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:47.656848+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:48.657034+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:49.657209+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366673920 unmapped: 72843264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:50.657417+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366673920 unmapped: 72843264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 302.287567139s of 302.444122314s, submitted: 90
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:51.657570+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:52.657826+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:53.658019+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:54.658270+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:55.658517+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:56.658789+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:57.658955+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:58.659130+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:59.659313+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:00.659533+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:01.659752+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:02.660011+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:03.660241+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:04.660409+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:05.660557+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:06.660800+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:07.660994+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:08.661204+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:09.661355+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:10.661526+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:11.661761+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:12.662211+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:13.662395+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:14.662599+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:15.662761+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:16.662951+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:17.663254+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:18.663500+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:19.663791+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:20.663963+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:21.664150+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:22.664398+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:23.664583+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:24.664847+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:25.665117+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:26.665464+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:27.665786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:28.666022+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:29.666237+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:30.666540+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:31.666898+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:32.667142+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 72794112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:33.667378+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 72794112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:34.667600+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:35.667767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:36.667954+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:37.668077+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:38.668300+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:39.668494+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:40.668643+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:41.668850+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:42.669045+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:43.669310+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:44.669494+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:45.669769+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:46.670079+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:47.670308+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:48.670547+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:49.670739+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:50.670963+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:51.671136+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:52.671395+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:53.671561+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:54.671762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:55.671942+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:56.672182+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:57.672340+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:58.672524+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:59.672772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:00.673021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:01.673214+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:02.673418+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:03.673629+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:04.673838+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:05.674037+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:06.674238+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:07.674406+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:08.674623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:09.674812+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:10.674999+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:11.675200+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:12.675389+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:13.675551+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:14.675766+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:15.675899+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:16.676102+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:17.676242+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:18.676451+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:19.676657+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:20.676892+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:21.677062+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:22.677198+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:23.677380+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:24.677549+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:25.677718+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:26.677904+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:27.678090+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:28.678758+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:29.679262+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:30.679551+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:31.680009+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:32.680259+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:33.680427+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:34.681324+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:35.682092+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:36.682530+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:37.682860+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:38.683118+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:39.683307+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:40.683963+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:41.684132+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:42.684317+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:43.684540+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:44.684827+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:45.685108+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:46.685311+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:47.685518+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:48.685680+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:49.685897+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:50.686253+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:51.686610+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:52.686889+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:53.687151+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:54.687395+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:55.687669+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:56.687994+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:57.688299+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:58.688541+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:59.688797+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:00.688997+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:01.689185+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:02.689367+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:03.689562+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:04.689804+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:05.690025+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:06.690255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:07.690445+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:08.690646+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:09.690853+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:10.691061+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:11.691261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:12.691484+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:13.691675+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:14.691899+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:15.692093+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:16.692276+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:17.692451+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:18.692671+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:19.692859+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:20.693084+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:21.693290+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:22.693531+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:23.693769+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:24.694017+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:25.694226+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:26.694467+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:27.694672+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:28.694944+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:29.695112+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:30.695280+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:31.695455+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 72704000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:32.695798+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:33.695952+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:34.696173+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:35.696370+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:36.696607+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:37.696831+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:38.697021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:39.697277+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:40.697553+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:41.697827+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:42.698083+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:43.698278+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:44.698447+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:45.698623+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:46.698852+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:47.699065+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:48.699233+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:49.699396+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:50.699566+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:51.699763+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:52.699958+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:53.700135+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:54.700324+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:55.700501+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:56.700786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:57.700938+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:58.701101+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:59.701304+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:00.701493+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:01.701642+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:02.701797+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:03.701988+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:04.702153+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:05.702337+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:06.702564+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:07.702783+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:08.702963+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:09.703147+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:10.703351+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:11.703553+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:12.703744+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:13.703942+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:14.704187+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:15.704449+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:16.704773+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:17.705049+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:18.705328+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:19.705606+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:20.705964+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:21.706247+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:22.706432+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:23.706806+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:24.707171+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:25.707531+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 72646656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:26.707842+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 72646656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:27.708104+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 72646656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:28.708424+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 72646656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:29.708660+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:30.708904+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:31.709101+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:32.709403+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:33.709683+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:34.709960+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:35.710261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:36.710613+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:37.710925+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:38.711276+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:39.711467+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:40.711888+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:41.712310+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:42.712628+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:43.712959+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:44.713191+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72622080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:45.713536+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:46.714010+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:47.714261+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:48.714639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:49.714805+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:50.715066+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:51.715307+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:52.715498+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:53.715675+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:54.715927+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:55.716138+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:56.716603+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:57.716790+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:58.717023+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:59.717286+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:00.717625+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:01.717816+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:02.718110+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:03.718425+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:04.718665+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:05.718891+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:06.719197+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:07.719459+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:08.719797+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:09.719960+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:10.720144+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:11.720372+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:12.720569+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:13.720760+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:14.720916+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:15.721090+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:16.721341+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:17.721477+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:18.721681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:19.721966+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:20.722120+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:21.722317+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:22.722983+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:23.723147+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:24.723344+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 72581120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:25.723492+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 72581120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:26.723753+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 72581120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:27.723974+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 72581120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:28.724128+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:29.724286+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:30.724506+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:31.724786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:32.724969+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:33.725135+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:34.725320+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:35.725500+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:36.725742+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:37.725882+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:38.726045+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:39.726186+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:40.726290+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:41.726557+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:42.726756+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 72564736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:43.726948+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 72564736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:44.727113+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:45.727402+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:46.727767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:47.728018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:48.728234+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 297.736419678s of 297.771453857s, submitted: 22
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:49.728455+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:50.728825+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:51.728963+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:52.729123+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:53.729379+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:54.729614+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:55.729754+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:56.729970+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:57.730127+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:58.730263+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.380105019s of 10.057199478s, submitted: 38
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:59.730422+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688239 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:00.730659+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:01.730874+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72523776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:02.731051+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 72515584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:03.731208+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 72507392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:04.731374+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:05.731576+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:06.731774+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:07.731988+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:08.732151+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:09.732398+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:10.732602+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:11.732808+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:12.732991+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:13.733234+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:14.733396+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:15.733582+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:16.733768+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:17.733931+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:18.734109+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:19.734273+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:20.734449+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:21.734647+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:22.734838+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:23.735059+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:24.735163+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:25.735329+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:26.735561+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:27.735716+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:28.735860+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:29.736044+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:30.736239+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:31.736444+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:32.736603+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:33.736829+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:34.736937+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:35.737071+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:36.737260+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:37.737438+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:38.737615+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:39.737762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:40.737940+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:41.738083+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:42.738234+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:43.738418+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:44.738681+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:45.738895+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 72482816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:46.739162+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 72482816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:47.739426+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 72482816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:48.739661+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 72482816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:49.739892+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367042560 unmapped: 72474624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:50.740083+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:51.740255+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:52.740462+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:53.740603+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:54.740802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:55.740970+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:56.741221+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:57.741453+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:58.741720+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:59.741895+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:00.742093+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:01.742295+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:02.742529+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:03.742761+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:04.742948+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:05.743162+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:06.743410+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:07.743584+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:08.743796+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:09.744639+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:10.744877+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:11.745059+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:12.745263+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:13.745516+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:14.745739+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:15.745954+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:16.746154+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:17.746304+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:18.746467+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:19.746636+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:20.746795+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:21.746941+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:22.747293+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:23.747478+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:24.747671+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:25.747881+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:26.748232+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:27.748440+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:28.748655+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:29.748829+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:30.749023+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:31.749201+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:32.749351+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:33.749564+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:34.749802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:35.749962+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:36.750161+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 72441856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:37.750363+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:38.750571+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:39.750767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:40.750949+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:41.751124+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:42.751318+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:43.751506+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:44.751742+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:45.752021+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:46.752295+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:47.752627+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:48.752924+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:49.753387+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:50.753797+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:51.753918+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:52.754102+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:53.754275+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:54.754430+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:55.754597+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:56.754767+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:57.754925+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:58.755063+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:59.755175+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:00.755396+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:01.755613+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:02.755793+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:03.755967+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:04.756129+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:05.756268+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:06.756474+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:07.756668+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:08.756921+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:09.757118+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:10.757301+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:11.757428+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:12.757564+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:13.757726+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:14.757892+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367099904 unmapped: 72417280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:15.758051+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:16.758319+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:17.758496+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:18.758765+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:19.758924+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004789 took=0.000056s
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:20.759156+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:21.759325+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:22.759528+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:23.759696+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:24.759874+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:25.760050+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:26.760338+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:27.760505+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:28.760711+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:29.760904+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:30.761101+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:31.761277+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:32.761469+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:33.761828+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:34.762099+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:35.762338+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:36.762772+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:37.763018+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:38.763240+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:39.763407+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:40.763544+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:41.763786+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:42.763959+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:43.764130+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:44.764333+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:45.764567+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:46.764821+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:47.765012+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:48.765192+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:49.765439+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:50.765619+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:51.765846+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:52.766063+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:53.766335+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:54.766518+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:55.766837+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:56.767007+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:57.767382+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:58.767745+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:59.768073+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:00.768329+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:01.768571+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:02.768756+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:03.768927+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:04.769141+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:05.769340+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:06.769606+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:07.769773+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:08.769945+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:09.770127+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:10.770328+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:11.770470+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:12.770626+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:13.770802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:14.770974+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:15.771148+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:16.771347+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:17.771564+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:18.771762+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:19.771987+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:20.772217+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:21.772431+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:22.772571+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:23.772811+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:24.773010+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:25.773189+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:26.773365+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:27.773600+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:28.773802+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:29.773984+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:30.774122+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:31.774283+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:32.774493+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:33.774640+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:34.774768+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:35.774932+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:36.775158+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:37.775326+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:38.775462+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:39.775619+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:40.775790+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:41.775940+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:42.776068+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:43.776200+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72335360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:44.776330+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:45.776467+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:46.776686+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:47.776865+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:48.777055+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:49.777251+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:50.777411+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:51.777569+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:52.777718+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:53.777861+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:54.778006+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:55.778138+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72327168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:56.778308+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:57.778459+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 72318976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:58.778597+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 72310784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:59.778734+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 72245248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'config diff' '{prefix=config diff}'
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'config show' '{prefix=config show}'
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'counter dump' '{prefix=counter dump}'
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'counter schema' '{prefix=counter schema}'
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:00.778852+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:32 compute-0 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:32 compute-0 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: tick
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_tickets
Feb 25 14:15:32 compute-0 ceph-osd[88012]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:01.778989+0000)
Feb 25 14:15:32 compute-0 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 72515584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:32 compute-0 ceph-osd[88012]: do_command 'log dump' '{prefix=log dump}'
Feb 25 14:15:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 25 14:15:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165039023' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 25 14:15:33 compute-0 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 14:15:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 25 14:15:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752465770' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 25 14:15:33 compute-0 nova_compute[244014]: 2026-02-25 14:15:33.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:33 compute-0 rsyslogd[1020]: imjournal from <np0005629333:ceph-osd>: begin to drop messages due to rate-limiting
Feb 25 14:15:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 25 14:15:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571814693' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 25 14:15:33 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:33 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 25 14:15:33 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008146766' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 25 14:15:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4165039023' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 25 14:15:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3752465770' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 25 14:15:33 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3571814693' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 25 14:15:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767682960' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 25 14:15:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319128598' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 25 14:15:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565955713' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: pgmap v4669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4008146766' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1767682960' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/319128598' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2565955713' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 25 14:15:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502810165' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 25 14:15:34 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 25 14:15:34 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390873853' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 25 14:15:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078509921' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 25 14:15:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936110302' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 25 14:15:35 compute-0 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Feb 25 14:15:35 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1502810165' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1390873853' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3078509921' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2936110302' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 25 14:15:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403328968' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 25 14:15:35 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 25 14:15:35 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139897385' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 25 14:15:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:36 compute-0 crontab[439666]: (root) LIST (root)
Feb 25 14:15:36 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 25 14:15:36 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963933858' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 25 14:15:36 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23480 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:36 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23484 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:36 compute-0 nova_compute[244014]: 2026-02-25 14:15:36.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:36 compute-0 ceph-mon[76335]: pgmap v4670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3403328968' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 25 14:15:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2139897385' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 25 14:15:36 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/963933858' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 25 14:15:36 compute-0 ceph-mon[76335]: from='client.23480 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:36 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23482 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 14:15:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:25.483421+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:26.483575+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:27.483775+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:28.484029+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:29.484206+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:30.491610+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:31.491859+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:32.492084+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:33.492297+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:34.492554+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:35.492734+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:36.493497+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:37.493641+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:38.493782+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:39.494041+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:40.494260+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:41.494428+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:42.494636+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:43.494814+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:44.495006+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:45.495181+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:46.495369+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:47.495546+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:48.495825+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:49.496013+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:50.496278+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:51.496515+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:52.496764+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:53.496943+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:54.497127+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:55.497361+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:56.497609+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:57.497824+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:58.498069+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:41:59.498316+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:00.498565+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:01.498836+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:02.499086+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:03.499316+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:04.499583+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:05.499790+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:06.500080+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:07.500286+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:08.500496+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:09.500655+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:10.500891+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:11.501046+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:12.501262+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:13.501594+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:14.501802+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:15.502071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:16.502291+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:17.502554+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:18.502833+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:19.503053+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:20.503296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:21.503578+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:22.503913+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:23.504160+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:24.504443+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:25.504683+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:26.504961+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:27.505236+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:28.505496+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:29.505803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:30.506213+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:31.506484+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:32.506847+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:33.507077+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 78635008 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:34.507294+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 78635008 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:35.507508+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 78635008 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:36.507840+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 78635008 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:37.508072+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 78635008 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:38.508399+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:39.508769+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:40.509032+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:41.509359+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:42.509658+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:43.509907+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:44.510168+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:45.510395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:46.510605+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:47.510772+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:48.510943+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 78626816 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:49.511158+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:50.511372+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:51.511557+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:52.511769+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:53.521661+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 78618624 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:54.521900+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:55.522095+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:56.522263+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:57.522414+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:58.522599+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:42:59.522781+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:00.522966+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:01.523172+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:02.523372+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:03.523643+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:04.523911+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:05.524095+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:06.524279+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:07.524488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:08.524884+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:09.525112+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 78610432 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:10.525356+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:11.525630+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:12.525839+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:13.526031+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 78602240 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:14.526280+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:15.526428+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:16.526587+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:17.526812+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:18.526991+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:19.527185+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:20.527313+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:21.527518+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:22.527867+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:23.528048+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:24.528193+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:25.528374+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 78594048 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:26.528563+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:27.528763+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:28.528924+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:29.529113+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 78585856 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:30.529272+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363134976 unmapped: 78577664 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:31.529456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:32.529686+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:33.529910+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:34.530198+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:35.530355+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:36.530512+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:37.530733+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:38.530971+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:39.531149+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:40.531405+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:41.531639+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:42.531949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:43.532171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:44.532418+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:45.532554+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:46.532728+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:47.532881+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:48.533098+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:49.533360+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:50.533586+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:51.533783+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:52.534022+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:53.534280+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:54.534469+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:55.534843+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:56.535145+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:57.535456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:58.535680+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:43:59.535954+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:00.536133+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:01.536318+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:02.536492+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:03.536629+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:04.536806+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:05.537011+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:06.537192+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:07.537342+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:08.537539+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:09.537749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:10.537934+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 78512128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:11.538143+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 78512128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:12.538354+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 78512128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:13.538537+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 78512128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:14.538745+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:15.538895+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:16.539141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:17.539319+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:18.539613+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:19.539826+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:20.540070+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:21.540300+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:22.540675+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:23.541024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:24.541171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:25.541389+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:26.541764+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:27.541978+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:28.542141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:29.542395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:30.542797+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:31.543078+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:32.543319+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:33.543488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:34.543661+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:35.543825+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:36.543990+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:37.544171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:38.544376+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:39.544546+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:40.544741+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:41.544908+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:42.545133+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:43.545325+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:44.545472+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:45.545667+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:46.545859+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:47.546096+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:48.546371+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:49.546621+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:50.546774+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:51.546938+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:52.547188+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:53.547357+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:54.547612+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:55.547774+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:56.547963+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:57.548127+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:58.548334+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:44:59.548497+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:00.548665+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:01.548887+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:02.549108+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:03.549257+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:04.549432+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:05.549573+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:06.549749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:07.549998+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:08.550166+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:09.550315+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:10.550482+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:11.550662+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:12.550906+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:13.551072+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:14.551215+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:15.551337+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:16.551456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:17.551601+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:18.551785+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:19.551930+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:20.552123+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:21.552295+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:22.552488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:23.552666+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:24.552830+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:25.553060+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
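
This _send_mon_message line is the one event in this stretch that is not part of the per-second housekeeping loop: the monclient pushes a message to its monitor, mon.compute-0, over the msgr2 endpoint at 192.168.122.100:3300. When skimming logs this noisy, a tally by message type makes rare events like this stand out; a hypothetical triage helper (the regex and the log path are illustrative, not from this document):

```python
# Hypothetical triage helper: tally ceph-osd debug messages in an excerpt like
# this one so rare events (e.g. _send_mon_message) stand out from the
# tick/tune_memory/heartbeat noise.
import re
from collections import Counter

MSG = re.compile(r"ceph-osd\[\d+\]: (\S+(?: \S+)?)")

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = MSG.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common()

# Example (path is an assumption):
# for key, n in summarize(open("/var/log/messages")):
#     print(n, key)
```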
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:26.553249+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:27.553472+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:28.553658+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:29.553815+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:30.554003+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:31.554243+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:32.554581+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:33.554767+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:34.554956+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:35.555128+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:36.555341+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:37.555538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:38.555836+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:39.556108+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:40.556283+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:41.556438+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:42.556672+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:43.556913+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:44.557118+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:45.557301+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:46.557489+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:47.557679+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:48.558026+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:49.558187+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:50.558361+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:51.558569+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:52.558789+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:53.559081+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:54.559318+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:55.559484+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:56.559909+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:57.560117+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:58.560301+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:45:59.560517+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:00.560793+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:01.560990+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:02.561209+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:03.561436+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:04.561612+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:05.561790+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 78348288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:06.562004+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:07.562171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:08.562341+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:09.562525+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:10.562720+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:11.562920+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:12.563116+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:13.563294+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:14.563491+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:15.564662+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:16.565396+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:17.565598+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:18.565815+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:19.566030+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:20.566230+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 78315520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:21.566398+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 78315520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:22.566624+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:23.566823+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:24.567084+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:25.567310+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:26.567550+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:27.567799+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:28.568000+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:29.568219+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:30.568418+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:31.568758+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:32.568982+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:33.569191+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:34.569392+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:35.569646+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:36.569881+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:37.570132+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:38.570352+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:39.570543+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:40.570751+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:41.570935+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:42.571196+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:43.571429+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:44.571622+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:45.571818+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:46.572014+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:47.572186+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:48.572367+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:49.572527+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:50.572700+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:51.572898+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:52.573141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:53.573383+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:54.573568+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:55.573783+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:56.574048+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:57.574287+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:58.574499+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:46:59.574719+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:00.574927+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:01.575146+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:02.575426+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:03.575640+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:04.575845+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:05.576013+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:06.576220+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:07.576432+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:08.576678+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:09.576878+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:10.577117+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:11.577271+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:12.577501+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:13.577737+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:14.577938+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:15.578139+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:16.578345+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:17.578512+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:18.578679+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:19.578921+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:20.579110+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:21.579408+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:22.579671+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:23.579933+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:24.580137+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:25.580342+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:26.580588+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:27.580888+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:28.581081+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:29.581258+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:30.581468+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:31.581679+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:32.582016+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:33.582217+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:34.582406+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:35.582636+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:36.582840+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363495424 unmapped: 78217216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:37.583018+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:38.583213+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:39.583367+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:40.583530+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:41.583674+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:42.583939+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:43.584168+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:44.584395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:45.584650+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:46.584921+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:47.585153+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:48.585412+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:49.585648+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:50.585882+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:51.586115+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:52.586578+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:53.586816+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:54.587020+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:55.587251+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:56.587408+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:57.587643+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:58.587809+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:47:59.587968+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:00.588106+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:01.588296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:02.588504+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:03.588678+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:04.588929+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:05.589143+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:06.589334+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:07.589508+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:08.589769+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:09.590188+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:10.592650+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:11.592810+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:12.593047+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:13.593203+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:14.593589+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:15.593770+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:16.595956+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:17.596177+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:18.597288+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:19.597520+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:20.598470+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:21.598680+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:22.599053+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:23.599215+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363569152 unmapped: 78143488 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:24.600042+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363569152 unmapped: 78143488 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:25.600242+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363569152 unmapped: 78143488 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:26.600844+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:27.601059+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:28.601256+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:29.601456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:30.601893+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:31.602106+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:32.602489+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:33.602764+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:34.603130+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:35.603354+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:36.603593+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:37.603817+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:38.604010+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:39.604194+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:40.604462+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:41.604761+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:42.605031+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:43.605198+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:44.605496+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:45.605653+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:46.605866+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:47.606071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:48.606274+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:49.606498+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:50.606757+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:51.606914+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:52.607176+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:53.607355+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:54.607546+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:55.607819+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:56.608033+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:57.608249+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:58.608441+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:48:59.608736+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:00.608977+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:01.609190+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:02.609395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:03.609592+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:04.609859+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 78094336 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:05.610022+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 78094336 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:06.610207+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 78094336 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:07.610366+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:08.610646+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:09.610860+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-02-25T13:49:10.611071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _finish_auth 0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:10.612196+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:11.611383+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:12.611600+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:13.611810+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:14.611999+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:15.612217+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:16.612378+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:17.612564+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:18.612771+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:19.612976+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:20.613193+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:21.613340+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:22.613552+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:23.614081+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:24.614283+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:25.614481+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:26.614737+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:27.614988+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:28.615168+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:29.615445+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:30.615758+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:31.615947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:32.616157+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:33.616332+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:34.616567+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 78061568 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:35.616848+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 78061568 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:36.617085+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:37.617274+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:38.617460+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:39.617653+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:40.617824+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:41.618030+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:42.618289+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:43.618466+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:44.618732+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:45.619038+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:46.619323+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:47.619516+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:48.619733+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 78962688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:49.619971+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 78962688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:50.620171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 78962688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:51.620364+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: mgrc ms_handle_reset ms_handle_reset con 0x55651dc45000
Feb 25 14:15:37 compute-0 ceph-osd[86953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 14:15:37 compute-0 ceph-osd[86953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: get_auth_request con 0x556522347c00 auth_method 0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: mgrc handle_mgr_configure stats_period=5
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 78946304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:52.620610+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 78946304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:53.620774+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 78946304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:54.620947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 78946304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:55.621158+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 78946304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:56.621358+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 ms_handle_reset con 0x556518db8000 session 0x55651da87180
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: handle_auth_request added challenge on 0x55651a655c00
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:57.621574+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:58.621832+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:49:59.622043+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:00.622252+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:01.622439+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:02.622650+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:03.622801+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:04.622987+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:05.623149+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:06.623352+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:07.623534+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:08.623759+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:09.623898+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:10.624077+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:11.624221+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:12.624510+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:13.624759+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 78815232 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:14.625039+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:15.625195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:16.625438+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:17.626369+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:18.626941+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:19.627672+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:20.628279+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:21.628522+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:22.628853+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:23.629269+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:24.629721+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:25.630219+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:26.630474+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:27.630847+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:28.631158+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:29.631394+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:30.631553+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:31.631825+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:32.632142+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:33.632373+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:34.632654+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:35.633071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:36.633296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:37.633522+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:38.633812+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 78807040 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 ms_handle_reset con 0x55651a619400 session 0x55651da86fc0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: handle_auth_request added challenge on 0x55651e1a6000
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:39.634048+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 78675968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:40.634222+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 78675968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:41.634511+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 78675968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:42.634853+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 78675968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:43.635060+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 78667776 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:44.635324+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 78667776 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:45.635524+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 78667776 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 180889
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:46.635755+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 78667776 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 598.538696289s of 598.696411133s, submitted: 106
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:47.635930+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 78659584 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:48.636123+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 78659584 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:49.636367+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 78659584 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:50.636575+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 78659584 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:51.636787+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:52.637078+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 78643200 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:53.637246+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:54.637386+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:55.637561+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:56.637776+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:57.637953+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:58.638141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:50:59.638303+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:00.638444+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:01.638620+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:02.638841+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:03.639050+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:04.639230+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:05.639423+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:06.639629+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:07.639822+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:08.640017+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:09.640234+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:10.640426+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:11.640623+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:12.640905+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:13.641106+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:14.641282+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:15.641507+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:16.641672+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:17.641853+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:18.642051+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:19.642266+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:20.642503+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:21.642748+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:22.643034+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:23.643242+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 78569472 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:24.643495+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:25.643728+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:26.643947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:27.644145+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:28.644411+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:29.644793+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:30.645165+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:31.645371+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:32.645634+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:33.645863+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:34.646068+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:35.646264+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:36.646440+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:37.646614+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:38.646794+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:39.646998+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:40.647201+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:41.647455+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:42.647772+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:43.648004+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:44.648174+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:45.648387+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:46.648897+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:47.649094+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:48.649258+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:49.649443+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:50.649627+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:51.649855+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:52.650113+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:53.650281+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:54.650461+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:55.650606+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:56.650771+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:57.650954+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:58.651166+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:51:59.651394+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:00.651614+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:01.651795+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:02.652019+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:03.652203+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:04.652348+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:05.652520+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:06.652817+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:07.652990+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:08.653133+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:09.653307+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:10.653478+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:11.653629+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:12.653816+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:13.653976+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:14.654141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:15.654328+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:16.654488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:17.654666+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:18.654880+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:19.655018+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:20.655172+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:21.655336+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:22.655535+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:23.655740+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:24.655912+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 78561280 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:25.656085+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:26.656560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:27.656776+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:28.657649+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:29.657860+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:30.658175+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:31.658410+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:32.658649+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:33.658803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:34.659263+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:35.659612+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:36.659807+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:37.660025+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 78553088 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:38.660575+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:39.660889+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:40.661391+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:41.661887+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:42.662230+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:43.662454+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:44.662903+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23488 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:45.663080+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:46.663447+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:47.663775+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:48.685574+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:49.685834+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:50.686160+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:51.686403+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:52.686632+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:53.686860+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:54.687046+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:55.687354+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:56.687600+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 78544896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:57.687786+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:58.688122+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:52:59.688312+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:00.688514+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:01.688666+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:02.688880+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:03.689106+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:04.689318+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:05.689591+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:06.689931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:07.690130+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:08.690297+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:09.690502+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:10.690748+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:11.691009+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:12.691259+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:13.691531+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:14.691745+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:15.691924+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:16.692211+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:17.692416+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:18.692603+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:19.692806+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:20.693015+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:21.693281+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:22.693538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 78536704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:23.693809+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:24.694042+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:25.694234+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:26.694411+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:27.694595+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:28.694768+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:29.694897+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:30.695111+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:31.695279+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:32.695589+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:33.696270+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:34.696539+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:35.696806+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:36.697074+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:37.697320+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:38.697773+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:39.698082+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:40.698267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:41.698581+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:42.698947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 78528512 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:43.699210+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:44.699473+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:45.699766+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:46.699931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:47.700067+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:48.700369+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:49.700617+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:50.700824+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:51.701065+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:52.701287+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:53.701486+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:54.701729+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363192320 unmapped: 78520320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:55.701924+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:56.702145+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:57.702325+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:58.702465+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:53:59.702640+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:00.702832+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:01.702986+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:02.703227+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:03.703460+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:04.703770+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:05.703990+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:06.704170+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:07.704356+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:08.704567+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:09.704787+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:10.705017+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:11.705246+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:12.705623+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:13.705809+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:14.706018+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 78503936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:15.706200+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:16.706393+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:17.706682+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:18.707088+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:19.707287+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:20.707557+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:21.707768+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:22.708127+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:23.708368+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:24.708633+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:25.708911+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:26.709140+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 78495744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:27.709379+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:28.709627+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:29.709808+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:30.709984+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:31.710162+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:32.710401+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:33.710560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:34.710814+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:35.711013+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:36.711315+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:37.711475+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:38.711790+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:39.711976+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:40.712180+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:41.712361+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:42.712600+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:43.712904+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:44.713194+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:45.713497+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 78487552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:46.713804+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:47.714025+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:48.714234+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:49.714489+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:50.714787+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:51.715038+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:52.715338+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:53.715576+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:54.715836+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:55.716056+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:56.716277+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:57.716534+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:58.716833+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:54:59.717033+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:00.717244+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:01.717427+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363233280 unmapped: 78479360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:02.717748+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:03.718065+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:04.718405+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:05.718662+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:06.719048+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:07.719240+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:08.719493+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:09.719852+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:10.720126+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:11.720329+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:12.720578+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:13.720749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:14.720859+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:15.721028+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:16.721202+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:17.721355+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:18.721553+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 78471168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:19.721729+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:20.721920+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:21.722147+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:22.722353+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 78462976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:23.722527+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:24.722741+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:25.722934+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 78454784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:26.723096+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:27.723282+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
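This is the one line in the stretch where the OSD actually sends something to a monitor — presumably the stats report queued just above — addressed to mon.compute-0 at v2:192.168.122.100:3300/0, i.e. the msgr2 protocol on its standard port 3300 (legacy v1 would use 6789). The address notation is type:ip:port/nonce; a tiny illustrative parser (IPv4 only, names mine):

    def parse_entity_addr(addr: str) -> dict:
        # "v2:192.168.122.100:3300/0" -> protocol, ip, port, connection nonce
        proto, ip, rest = addr.split(":", 2)
        port, nonce = rest.split("/")
        return {"proto": proto, "ip": ip, "port": int(port), "nonce": int(nonce)}

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))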
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:28.723603+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:29.723816+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:30.724002+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:31.724210+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:32.724445+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:33.724583+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 78446592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:34.724781+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:35.725004+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:36.725146+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:37.725296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:38.725648+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:39.725944+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:40.726081+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:41.726221+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363274240 unmapped: 78438400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:42.726410+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:43.726648+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:44.726829+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:45.727020+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:46.727254+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:47.727511+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:48.727737+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 78430208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:49.727929+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:50.728142+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:51.728370+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:52.728629+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:53.728868+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:54.729034+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:55.729209+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:56.729462+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:57.729674+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 78422016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:58.729877+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:55:59.730050+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:00.730228+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:01.730364+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:02.730538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:03.730635+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:04.730794+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:05.730967+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:06.731191+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:07.731385+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:08.731601+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:09.731811+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:10.732024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:11.732228+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:12.732503+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:13.732827+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 78413824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:14.733108+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:15.733291+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:16.733519+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:17.733670+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:18.733879+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 78397440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:19.734076+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:20.734239+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:21.734411+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:22.734771+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:23.735017+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:24.735205+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:25.735396+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:26.735609+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:27.735792+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:28.735924+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:29.736087+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 78389248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:30.736276+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:31.736456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:32.736661+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:33.736865+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:34.737043+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 78381056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:35.737193+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:36.737340+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:37.737517+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:38.737660+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 78372864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:39.737800+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:40.737976+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:41.738095+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:42.738315+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:43.738478+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:44.738643+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:45.738805+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:46.739031+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 78364672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:47.739225+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:48.739419+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:49.739582+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:50.739792+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:51.739986+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:52.740194+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:53.740340+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 78356480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:54.740494+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:55.740719+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:56.740925+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:57.741140+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:58.741308+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:56:59.741506+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:00.741769+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:01.741944+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:02.742152+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:03.742298+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:04.742508+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:05.742730+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:06.742894+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:07.743041+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:08.743271+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 78340096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:09.743458+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:10.743765+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:11.743993+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:12.744211+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:13.744377+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:14.744576+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:15.744775+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:16.744981+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:17.745195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 78331904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:18.745318+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:19.745440+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:20.745600+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:21.745786+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 78323712 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:22.746019+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 78315520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:23.746267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 78315520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:24.746461+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363397120 unmapped: 78315520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:25.746672+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:26.746887+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:27.747023+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:28.747220+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:29.747459+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:30.747622+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:31.747786+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:32.747974+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:33.748213+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:34.748444+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:35.748667+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:36.748972+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:37.749172+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:38.749342+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:39.749562+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:40.749798+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 78307328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:41.749969+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:42.750145+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:43.750361+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 78299136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:44.750542+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:45.750782+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:46.750945+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:47.751086+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 78290944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:48.751275+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:49.751501+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:50.751675+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:51.751851+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:52.752411+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:53.752583+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:54.752755+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:55.753110+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:56.753293+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:57.753484+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:58.754191+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 78282752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:57:59.754414+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:00.754622+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:01.754854+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:02.755339+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:03.755621+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:04.755772+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:05.755992+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:06.756173+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:07.756351+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:08.756538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:09.756762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:10.756945+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:11.757158+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:12.757397+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:13.757580+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:14.757784+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:15.757949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 78258176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:16.758114+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:17.758264+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:18.758428+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:19.758582+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:20.758768+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:21.758936+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:22.759151+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 78249984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:23.759456+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 78241792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:24.759667+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:25.759858+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:26.760045+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:27.760276+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:28.760669+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:29.760922+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 78233600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:30.761118+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:31.761274+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:32.761482+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:33.761782+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:34.761974+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:35.762164+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:36.762439+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:37.762759+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:38.762928+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:39.763111+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:40.763305+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:41.763489+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:42.763793+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:43.764000+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 78225408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:44.764195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:45.764382+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:46.764580+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:47.764808+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:48.764996+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:49.765214+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:50.765459+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:51.765657+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:52.768749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:53.768996+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 78209024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:54.769195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 78200832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:55.769403+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:56.769565+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:57.769830+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:58.769985+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 78192640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:58:59.770194+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:00.770345+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:01.770519+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:02.770782+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:03.770927+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:04.771087+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:05.771334+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 78184448 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:06.771526+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:07.771677+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:08.771890+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:09.772049+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:10.772244+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:11.772434+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:12.772766+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:13.772923+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363536384 unmapped: 78176256 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:14.773092+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:15.773265+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:16.773394+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:17.773628+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 78168064 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:18.773842+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:19.774015+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:20.774231+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:21.774410+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:22.774604+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:23.774778+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:24.775010+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:25.775227+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 78159872 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:26.775442+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:27.775661+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:28.775961+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:29.776222+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:30.776444+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:31.776605+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:32.776911+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:33.777200+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 78151680 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:34.777427+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:35.777770+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:36.777976+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:37.778176+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:38.778365+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:39.778544+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:40.778765+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:41.778931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:42.779141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:43.779323+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:44.779551+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 450 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 150 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:45.779808+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:46.780085+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:47.780283+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:48.780485+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:49.780774+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 78135296 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:50.780980+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:51.781199+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:52.781451+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363585536 unmapped: 78127104 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:53.781641+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:54.781823+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:55.781988+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:56.782136+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:57.782291+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:58.782471+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T13:59:59.782654+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:00.782829+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:01.782999+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:02.783227+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:03.783438+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 78118912 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:04.783575+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:05.783757+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:06.784040+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 78110720 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:07.784271+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:08.784471+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:09.784672+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:10.784892+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:11.785144+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:12.785443+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:13.785812+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:14.786048+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 78094336 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:15.786324+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:16.786542+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:17.786791+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:18.787024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 78086144 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:19.787205+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:20.787445+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:21.787611+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:22.787843+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:23.788022+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:24.788235+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:25.788405+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:26.788582+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:27.788784+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:28.788979+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:29.789177+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:30.789480+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:31.789763+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:32.789939+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:33.790200+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:34.790367+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:35.790525+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:36.790731+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:37.791172+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:38.791551+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 78077952 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:39.792143+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:40.792530+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:41.793005+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:42.793276+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:43.793495+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:44.793780+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:45.794118+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:46.794437+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:47.794656+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:48.794949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 78069760 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 601.242309570s of 601.485046387s, submitted: 150
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:49.795265+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:50.795569+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:51.795809+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:52.796091+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:53.796365+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 78053376 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:54.796642+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:55.796861+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:56.797037+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:57.797324+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:58.797584+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:00:59.797848+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 78004224 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:00.798060+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:01.798238+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:02.798464+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:03.798736+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:04.799008+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:05.799211+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:06.799426+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:07.799618+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:08.799801+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:09.799948+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:10.800151+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:11.800359+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:12.800577+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:13.800776+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:14.800975+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:15.801181+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:16.801344+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:17.801536+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:18.801758+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:19.801938+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:20.802102+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:21.802281+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:22.802449+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:23.802617+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:24.802800+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363716608 unmapped: 77996032 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:25.803009+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:26.803178+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:27.803395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:28.803551+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:29.803771+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:30.803947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:31.804122+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:32.804344+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:33.804601+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:34.804795+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:35.805035+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:36.805267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:37.805480+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:38.805727+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:39.805921+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:40.806117+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:41.806306+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:42.806614+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:43.806870+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:44.807071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:45.807255+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:46.807444+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:47.807622+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:48.807814+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 77987840 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:49.808116+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:50.808336+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:51.808519+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:52.808805+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:53.808974+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:54.809174+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:55.809433+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:56.809597+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:57.809824+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363732992 unmapped: 77979648 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:58.809996+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:01:59.810186+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:00.810386+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:01.810560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:02.810864+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:03.811078+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:04.811265+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:05.811431+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:06.811630+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:07.811846+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:08.812124+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:09.812352+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:10.812528+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:11.812760+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:12.813006+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:13.813207+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 77971456 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:14.813382+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 77963264 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:15.820971+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 77963264 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:16.821186+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 77963264 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:17.821485+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 77963264 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:18.821819+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:19.822122+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:20.822444+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:21.822730+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:22.823090+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:23.823374+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:24.823603+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:25.823777+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 77938688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:26.824045+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 77938688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:27.824291+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 77938688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:28.824478+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 77938688 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:29.824740+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:30.824996+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:31.825238+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:32.825509+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:33.825762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:34.826011+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:35.826175+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:36.826329+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:37.826515+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:38.826740+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:39.826897+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:40.827073+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:41.827288+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 77930496 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:42.827537+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:43.827777+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:44.827986+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:45.828155+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:46.828399+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:47.828591+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:48.828781+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:49.828966+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:50.829200+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:51.829357+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:52.829642+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:53.829834+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:54.830041+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:55.830245+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:56.830528+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 77922304 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:57.830899+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 77914112 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:58.831141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 77914112 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:02:59.831361+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:00.831639+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:01.831946+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:02.832162+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:03.832345+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:04.832507+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:05.832676+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:06.833009+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363806720 unmapped: 77905920 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:07.833174+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:08.833354+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:09.833524+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:10.833789+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:11.834003+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:12.834328+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:13.834525+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:14.834779+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:15.834974+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:16.835137+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:17.835319+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:18.835509+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 77897728 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:19.835895+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:20.836043+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:21.837932+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:22.838289+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:23.838530+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:24.838810+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:25.839855+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 77889536 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:26.840055+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:27.840246+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:28.840445+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:29.840632+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:30.840907+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:31.841087+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:32.841330+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:33.841524+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:34.841794+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:35.841985+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:36.842171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:37.842330+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:38.842538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:39.842727+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:40.842944+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:41.843103+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:42.843310+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:43.843607+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 77881344 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:44.843755+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:45.843981+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:46.844156+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:47.844325+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:48.844564+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:49.844761+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:50.844931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:51.845096+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:52.845307+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:53.845492+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 77873152 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:54.845635+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:55.845835+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:56.846036+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:57.846215+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:58.846411+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:03:59.846573+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:00.846771+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:01.847024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:02.847267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:03.847408+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:04.847615+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:05.847847+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:06.848000+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:07.848224+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:08.848409+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363847680 unmapped: 77864960 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:09.848585+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363855872 unmapped: 77856768 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:10.848765+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:11.848931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:12.849329+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:13.849606+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:14.849858+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:15.850126+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363864064 unmapped: 77848576 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:16.850303+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:17.850567+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:18.850836+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:19.851024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:20.851204+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:21.851442+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:22.851656+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:23.851833+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:24.852023+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:25.852175+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:26.852389+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:27.852559+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:28.852838+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363872256 unmapped: 77840384 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:29.853029+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 77832192 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:30.853161+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 77832192 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:31.853344+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:32.853603+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:33.853803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:34.853995+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:35.854267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:36.854425+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:37.854580+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:38.854760+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:39.854950+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:40.855136+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:41.855351+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:42.855555+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:43.855788+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363888640 unmapped: 77824000 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:44.855949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363896832 unmapped: 77815808 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:45.856114+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363896832 unmapped: 77815808 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets getting new tickets!
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:46.856452+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _finish_auth 0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:46.857495+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:47.856609+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:48.856803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:49.856963+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:50.857138+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:51.857321+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:52.857560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:53.857743+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:54.857890+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:55.858069+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363905024 unmapped: 77807616 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:56.858236+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 ms_handle_reset con 0x55651a655c00 session 0x55651d894380
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: handle_auth_request added challenge on 0x556518d90c00
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:57.858401+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:58.858566+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:04:59.858770+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:00.858960+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:01.859125+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:02.859335+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:03.859480+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:04.859684+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:05.859881+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:06.860056+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:07.860250+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:08.860391+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:09.860539+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:10.860681+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364036096 unmapped: 77676544 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:11.860869+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364044288 unmapped: 77668352 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:12.861047+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:13.861202+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:14.861412+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:15.861608+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:16.861772+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:17.861963+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:18.862142+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:19.862296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:20.862477+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364052480 unmapped: 77660160 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:21.862735+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:22.863039+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:23.863206+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:24.863414+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:25.863654+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:26.863821+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:27.863993+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:28.864147+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:29.864335+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:30.864507+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:31.864675+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:32.864931+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:33.865100+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:34.865267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:35.865473+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:36.865649+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:37.865850+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364060672 unmapped: 77651968 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:38.865993+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 ms_handle_reset con 0x55651e1a6000 session 0x55651bb756c0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: handle_auth_request added challenge on 0x556525f76000
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:39.866116+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:40.866281+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:41.866479+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:42.866665+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:43.866852+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364191744 unmapped: 77520896 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:44.867079+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:45.867275+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:46.867457+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:47.867679+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:48.867877+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:49.868022+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:50.868212+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364199936 unmapped: 77512704 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 302.307678223s of 302.485839844s, submitted: 106
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:51.868378+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 77496320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:52.868609+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364216320 unmapped: 77496320 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:53.868853+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:54.869057+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:55.869266+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:56.869390+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:57.869611+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:58.869757+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:05:59.869937+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:00.870091+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:01.870262+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:02.870418+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:03.870560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:04.870780+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:05.871024+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:06.871193+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:07.871362+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:08.871525+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:09.871670+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:10.871933+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:11.872072+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:12.872271+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:13.872435+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:14.872614+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:15.872776+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:16.872941+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:17.873134+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:18.873313+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:19.873522+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:20.873838+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:21.874072+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:22.874290+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:23.874434+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:24.874653+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:25.874868+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:26.875101+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:27.875268+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364224512 unmapped: 77488128 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:28.875505+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:29.875705+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:30.875881+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:31.876072+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:32.876316+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:33.876469+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364232704 unmapped: 77479936 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:34.876647+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:35.876785+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:36.876928+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:37.877103+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:38.877264+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:39.877417+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:40.877579+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:41.877835+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:42.878083+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:43.878278+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:44.878509+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:45.878850+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:46.879007+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:47.879165+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:48.879372+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:49.879529+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:50.879751+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:51.879888+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:52.880058+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:53.880209+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:54.880366+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:55.880506+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:56.880737+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:57.880982+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:58.881146+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:06:59.881344+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:00.881528+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:01.881726+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:02.881944+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:03.882174+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364240896 unmapped: 77471744 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:04.882375+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364249088 unmapped: 77463552 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:05.882580+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:06.882762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:07.882956+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:08.883137+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:09.883316+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:10.883499+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:11.883681+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:12.883907+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364257280 unmapped: 77455360 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:13.884129+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:14.884305+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:15.884477+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:16.884627+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:17.884784+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:18.884949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:19.885134+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:20.885282+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:21.885474+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:22.885765+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:23.885920+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:24.886069+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:25.886239+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:26.886402+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:27.886597+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:28.889946+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:29.893937+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:30.894423+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:31.896054+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:32.896521+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:33.898903+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 77447168 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:34.900981+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:35.901415+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:36.901755+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:37.902199+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:38.902803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:39.902965+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:40.903236+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:41.903439+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:42.903802+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:43.903989+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:44.904359+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:45.904660+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:46.905069+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:47.905301+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:48.905654+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:49.905980+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:50.906163+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:51.906431+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:52.906745+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:53.906916+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:54.907199+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
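Each bluestore.MempoolThread _resize_shards line reports how the priority-cache tuner split its cache_size budget across the kv, kv_onode, meta, and data caches; the target/mapped/unmapped/heap figures on the neighbouring "prioritycache tune_memory" lines are, presumably, the 4 GiB memory target (4294967296 bytes) and allocator heap statistics it tunes against. A minimal sketch that re-derives the split from the line directly above (the field names are parsed verbatim from the log; nothing else is assumed):

import re

# Sum the per-cache allocations from one "_resize_shards" line and compare
# them with the cache_size budget reported on the same line.
line = ("bluestore.MempoolThread _resize_shards cache_size: 2845415832 "
        "kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 "
        "kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 "
        "data_alloc: 218103808 data_used: 181109")

fields = dict(re.findall(r"(\w+): (\d+)", line))
allocs = {k: int(v) for k, v in fields.items() if k.endswith("_alloc")}
cache_size = int(fields["cache_size"])

MiB = 1 << 20
for name, nbytes in allocs.items():
    print(f"{name:>14}: {nbytes / MiB:8.1f} MiB")
print(f"{'sum':>14}: {sum(allocs.values()) / MiB:8.1f} MiB "
      f"of a {cache_size / MiB:.1f} MiB budget")

The four allocations sum to about 2672 MiB against a 2713.6 MiB budget, and the *_used counters show the caches are nearly empty, which is consistent with "old mem" and "new mem" never moving anywhere in this stretch.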
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:55.907429+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:56.907788+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:57.907961+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:58.908274+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:07:59.908556+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:00.908778+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:01.909001+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:02.909233+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:03.909427+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:04.909663+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:05.909895+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:06.910142+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:07.910374+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:08.910565+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:09.910951+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:10.911140+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:11.911406+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:12.911649+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:13.911849+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 77438976 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:14.912065+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:15.912240+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:16.912427+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:17.912584+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:18.912723+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:19.912885+0000)
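Every syslog timestamp in this stretch reads 14:15:37, yet the expiry times embedded in successive _check_auth_rotating lines advance by almost exactly one second (the two lines directly above step from 14:08:18 to 14:08:19). A minimal sketch confirming that cadence from the embedded timestamps; the implied reading, that the monclient ticks once per second and these lines reached the journal in one delayed burst, is my inference, not something the log states:

from datetime import datetime

# Expiry times copied from the two "_check_auth_rotating" lines above.
expiries = [
    "2026-02-25T14:08:18.912723+0000",
    "2026-02-25T14:08:19.912885+0000",
]
t0, t1 = (datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in expiries)
print(t1 - t0)  # ~1 s between ticks, despite identical journal timestamps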
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:20.913074+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:21.913288+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:22.913540+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 77430784 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:23.913773+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 77422592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:24.913957+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 77422592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:25.914131+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 77422592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:26.914302+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 77422592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:27.914480+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 77422592 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:28.914673+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:29.914899+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:30.915071+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:31.915249+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:32.915615+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:33.916085+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:34.916636+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:35.917073+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:36.917600+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:37.917826+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:38.918025+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 77414400 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:39.918204+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:40.918432+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:41.918648+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:42.918945+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:43.919192+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:44.919351+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:45.919567+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:46.919740+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:47.919897+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:48.920067+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:49.920257+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:50.920461+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:51.920797+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:52.921082+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 77406208 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:53.921273+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:54.921506+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:55.921726+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:56.921950+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:57.922130+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:58.922391+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 77398016 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:08:59.922640+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:00.922812+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:01.923026+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:02.923283+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:03.923468+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:04.923642+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:05.923856+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:06.924092+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:07.924333+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:08.924557+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:09.924775+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:10.924986+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:11.925233+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:12.925533+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364322816 unmapped: 77389824 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:13.925788+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:14.926008+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:15.926208+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:16.926598+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:17.927902+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:18.928145+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:19.928333+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:20.928550+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:21.928749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:22.929013+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:23.929342+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:24.929593+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:25.929862+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:26.930100+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:27.930326+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:28.930513+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:29.930750+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:30.930968+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:31.931217+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:32.931489+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:33.931742+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:34.931939+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:35.932126+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364331008 unmapped: 77381632 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:36.932327+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:37.932512+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:38.932684+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:39.932882+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:40.933126+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:41.933323+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:42.933577+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:43.933783+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:44.933949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:45.934167+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364339200 unmapped: 77373440 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:46.934354+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 77365248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:47.934545+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 77365248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:48.934735+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 77365248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:49.934890+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364347392 unmapped: 77365248 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:50.935070+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 77357056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:51.935271+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 77357056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:52.935473+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364355584 unmapped: 77357056 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:53.935675+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:54.936023+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:55.936221+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:56.936384+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:57.936566+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:58.936810+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:09:59.937033+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:00.937213+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:01.937572+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:02.937857+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:03.938195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:04.938439+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:05.938816+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:06.939188+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:07.939424+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:08.939788+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:09.940112+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:10.940287+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:11.940491+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364363776 unmapped: 77348864 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:12.940992+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 77340672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:13.941183+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364371968 unmapped: 77340672 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:14.941406+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:15.941670+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:16.941891+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:17.942107+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:18.942315+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:19.942577+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:20.942846+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:21.943009+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:22.943496+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:23.943738+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:24.943891+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:25.944190+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:26.944432+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:27.944635+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:28.944966+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:29.945137+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:30.945326+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:31.945598+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 77332480 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:32.946008+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364388352 unmapped: 77324288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:33.946231+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364388352 unmapped: 77324288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:34.946471+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364388352 unmapped: 77324288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:35.946637+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364388352 unmapped: 77324288 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:36.946903+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:37.947182+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:38.947469+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:39.947664+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:40.947926+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:41.948195+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:42.948438+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:43.948852+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:44.949107+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:45.949317+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364396544 unmapped: 77316096 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:46.949617+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:47.949838+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:48.950060+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 297.750335693s of 297.777130127s, submitted: 18
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:49.950252+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:50.950526+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:51.950762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:52.950960+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:53.951148+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:54.951306+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:55.951482+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364404736 unmapped: 77307904 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:56.951652+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364421120 unmapped: 77291520 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:57.951835+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364429312 unmapped: 77283328 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:58.951996+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.465785980s of 10.004442215s, submitted: 42
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364437504 unmapped: 77275136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:10:59.952146+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364437504 unmapped: 77275136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:00.952355+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646327 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364437504 unmapped: 77275136 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:01.952538+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 77266944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:02.952765+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364445696 unmapped: 77266944 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:03.952900+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364453888 unmapped: 77258752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:04.953098+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [0,0,0,1])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364453888 unmapped: 77258752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:05.953277+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364453888 unmapped: 77258752 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:06.953508+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:07.953680+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:08.953935+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:09.954163+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:10.954369+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:11.954551+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:12.954759+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:13.954961+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:14.955191+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:15.955384+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:16.955615+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:17.955795+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:18.955990+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:19.956130+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:20.956313+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364470272 unmapped: 77242368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:21.956488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:22.956808+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:23.957043+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:24.957244+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:25.957384+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:26.957528+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:27.957741+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:28.957905+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:29.958049+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:30.958224+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:31.958459+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:32.958722+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:33.958884+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:34.959056+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:35.959246+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:36.959397+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:37.959563+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:38.959801+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:39.960042+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:40.960253+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:41.960505+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:42.960745+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:43.960942+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:44.961037+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:45.961199+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:46.961488+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:47.961747+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:48.962085+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:49.962339+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:50.962629+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:51.962891+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:52.963216+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:53.963393+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364478464 unmapped: 77234176 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:54.963636+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:55.963896+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:56.964129+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:57.964370+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:58.964596+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:11:59.964786+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:00.965009+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:01.965208+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:02.965461+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:03.965762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:04.965971+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:05.966138+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:06.966336+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:07.966529+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:08.966749+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:09.966917+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:10.967125+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:11.967341+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364486656 unmapped: 77225984 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:12.967568+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 77217792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:13.967741+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 77217792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:14.967924+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 77217792 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:15.968097+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:16.968269+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:17.968475+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:18.968638+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:19.968797+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:20.969035+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:21.969205+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:22.969454+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364503040 unmapped: 77209600 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:23.969622+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:24.969889+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:25.970106+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:26.970254+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:27.970421+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:28.970652+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:29.970803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:30.970947+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:31.971102+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:32.971329+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:33.971591+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:34.971914+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:35.972100+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:36.972286+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:37.972493+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:38.972739+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:39.972908+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364511232 unmapped: 77201408 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:40.973133+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:41.973304+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:42.973511+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:43.973666+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:44.973884+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:45.974050+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364519424 unmapped: 77193216 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:46.974206+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:47.974419+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:48.974651+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:49.974861+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:50.975115+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:51.975296+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:52.975518+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:53.975762+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:54.975932+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:55.976051+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:56.976181+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:57.976317+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:58.976477+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:12:59.976608+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:00.976797+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:01.976943+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:02.977121+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:03.977267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:04.977455+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:05.977639+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:06.977820+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:08.003384+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:09.003596+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [1])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:10.003784+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364527616 unmapped: 77185024 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:11.003944+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:12.004141+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:13.004375+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:14.004519+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:15.004778+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:16.004992+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:17.005193+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364535808 unmapped: 77176832 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:18.005351+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:19.005506+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:20.005658+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.004726 took=0.000069s
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:21.005793+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:22.005964+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:23.006226+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:24.006372+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:25.006536+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:26.006704+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:27.006821+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:28.006970+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:29.007255+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:30.007469+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:31.007655+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:32.007844+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:33.008092+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:34.008273+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:35.008437+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:36.008595+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 364544000 unmapped: 77168640 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:37.008814+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:38.008999+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:39.009134+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:40.009298+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:41.009608+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:42.009879+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:43.010153+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:44.010369+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:45.010624+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:46.010806+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:47.010957+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:48.011134+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:49.011366+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:50.011557+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:51.011730+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:52.011949+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:53.012196+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:54.012376+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:55.012573+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:56.012830+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:57.013175+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:58.013461+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:13:59.013730+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:00.014006+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:01.014177+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:02.014443+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:03.014782+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:04.015022+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:05.015217+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:06.015375+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:07.015602+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:08.015910+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:09.016171+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:10.016393+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:11.016616+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:12.016824+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:13.017281+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:14.017502+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:15.017783+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:16.018004+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:17.018395+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:18.018597+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:19.018908+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:20.019077+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:21.019267+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:22.019492+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:23.019876+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:24.020093+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:25.020320+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:26.020498+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:27.020810+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:28.020983+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:29.021150+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:30.021342+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:31.021483+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:32.021642+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:33.021891+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:34.022246+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:35.022608+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:36.022766+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:37.022955+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:38.023153+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:39.023375+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:40.023795+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:41.023963+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:42.024177+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:43.024427+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:44.024575+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:45.024750+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:46.024955+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:47.025109+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:48.025318+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:49.025560+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:50.025803+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:51.026078+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:52.026289+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:53.026485+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:54.026643+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:55.026788+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:56.027025+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:57.027259+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:58.027414+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:14:59.027578+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:00.027802+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:01.027973+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:02.028108+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:03.028305+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 78045184 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:04.028449+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 77946880 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'config diff' '{prefix=config diff}'
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'config show' '{prefix=config show}'
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'counter dump' '{prefix=counter dump}'
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
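
Annotation: the do_command pairs mark requests arriving on the OSD's admin socket. Each command ('config diff', 'config show', 'counter dump', and below 'counter schema' and 'log dump') is logged once on dispatch and once with the size of the result it returned, the usual footprint of a metrics or support collector walking the socket. The same commands can be replayed from the host through the ceph CLI; a sketch, assuming a reachable local osd.0 and that 'config diff' returns JSON as in recent releases:

    import json
    import subprocess

    def asok(daemon: str, *cmd: str) -> str:
        """Run an admin-socket command against a local Ceph daemon via the CLI."""
        return subprocess.run(
            ["ceph", "daemon", daemon, *cmd],
            check=True, capture_output=True, text=True,
        ).stdout

    # Same request the OSD just logged: options that differ from defaults.
    diff = json.loads(asok("osd.0", "config", "diff"))
    print(json.dumps(diff, indent=2)[:400])  # preview without assuming structure
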
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:05.028593+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'counter schema' '{prefix=counter schema}'
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 78266368 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:06.028738+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e875f000/0x0/0x4ffc00000, data 0x9e9a61/0xb8d000, compress 0x0/0x0/0x0, omap 0x7016b, meta 0x1689fe95), peers [1,2] op hist [])
Feb 25 14:15:37 compute-0 ceph-osd[86953]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 78102528 heap: 441712640 old mem: 2845415832 new mem: 2845415832
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: tick
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_tickets
Feb 25 14:15:37 compute-0 ceph-osd[86953]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-25T14:15:07.028884+0000)
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 14:15:37 compute-0 ceph-osd[86953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 14:15:37 compute-0 ceph-osd[86953]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646255 data_alloc: 218103808 data_used: 181109
Feb 25 14:15:37 compute-0 ceph-osd[86953]: do_command 'log dump' '{prefix=log dump}'
Feb 25 14:15:37 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 14:15:37 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='client.23484 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='client.23482 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='client.23486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='client.23488 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:37 compute-0 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 14:15:38 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23494 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:38 compute-0 nova_compute[244014]: 2026-02-25 14:15:38.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:38 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 25 14:15:38 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283975422' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 25 14:15:38 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23498 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:38 compute-0 ceph-mon[76335]: pgmap v4671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:38 compute-0 ceph-mon[76335]: from='client.23490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:38 compute-0 ceph-mon[76335]: from='client.23494 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:38 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2283975422' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 25 14:15:39 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23502 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Feb 25 14:15:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099582821' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 25 14:15:39 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:39 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 25 14:15:39 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033680424' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 25 14:15:39 compute-0 ceph-mon[76335]: from='client.23498 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:39 compute-0 ceph-mon[76335]: from='client.23502 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 14:15:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3099582821' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 25 14:15:39 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3033680424' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
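
Annotation: over these two seconds the mon and mgr audit channels record a burst of read-only queries from client.admin: orch ls (plain and exported), orch ps, orch status, orch upgrade status, telemetry channel/collection ls, quorum_status, versions, and health detail. The short-lived client IDs (client.23482 through client.23502) and json-pretty formatting are the signature of a status-collection script, not an interactive operator. A sketch of a collector that would leave exactly this audit trail; the CLI spellings for --detail and --export are guessed from the {"detail": true} / {"export": true} fields in the audited JSON:

    import subprocess

    # Queries copied from the dispatch lines above.
    QUERIES = [
        ["ceph", "orch", "ls", "--format", "json-pretty"],
        ["ceph", "orch", "ls", "--export", "--format", "json-pretty"],
        ["ceph", "orch", "ps", "--format", "json-pretty"],
        ["ceph", "orch", "status", "--detail", "--format", "json-pretty"],
        ["ceph", "orch", "upgrade", "status", "--format", "json-pretty"],
        ["ceph", "telemetry", "channel", "ls"],
        ["ceph", "telemetry", "collection", "ls"],
        ["ceph", "quorum_status"],
        ["ceph", "versions"],
        ["ceph", "health", "detail", "--format", "json-pretty"],
    ]

    for q in QUERIES:
        out = subprocess.run(q, capture_output=True, text=True)
        print(" ".join(q[1:]), "->", out.returncode, f"{len(out.stdout)} bytes")
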
Feb 25 14:15:39 compute-0 systemd[1]: Starting Hostname Service...
Feb 25 14:15:40 compute-0 systemd[1]: Started Hostname Service.
Feb 25 14:15:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1893019342' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 14:15:40 compute-0 nova_compute[244014]: 2026-02-25 14:15:40.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
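
Annotation: here nova-compute's update_available_resource periodic task starts its audit, and because this node's instance storage is RBD-backed it measures free disk by shelling out to the exact command shown, ceph df run as the dedicated 'openstack' client. A sketch of what that probe reduces to; the JSON field names ("stats", "total_bytes", "total_avail_bytes") match recent Ceph releases but are assumptions as far as this log goes:

    import json
    import subprocess

    # The same probe the resource tracker logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]          # cluster-wide totals
    free_gb = stats["total_avail_bytes"] / 1024**3
    total_gb = stats["total_bytes"] / 1024**3
    print(f"pool capacity seen by nova: ~{free_gb:.2f} of {total_gb:.2f} GiB")

The command returns 0 in 0.582 s a few lines below, and the free_disk=59.987... GB in the subsequent resource view comes from the same pool statistics as the roughly 60 GiB cluster in the pgmap lines.
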
Feb 25 14:15:40 compute-0 ceph-mon[76335]: pgmap v4672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:40 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1893019342' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 25 14:15:40 compute-0 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 25 14:15:40 compute-0 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 25 14:15:40 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 25 14:15:40 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192140541' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 25 14:15:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:41 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23520 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:41 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:15:41 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620031450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.487 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:15:41 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.624 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.625 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3428MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.625 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.625 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.763 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 14:15:41 compute-0 nova_compute[244014]: 2026-02-25 14:15:41.789 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 14:15:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/192140541' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 25 14:15:41 compute-0 ceph-mon[76335]: from='client.23520 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:41 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/620031450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:15:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 25 14:15:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143190716' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 25 14:15:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 14:15:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394462388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:15:42 compute-0 nova_compute[244014]: 2026-02-25 14:15:42.360 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 14:15:42 compute-0 nova_compute[244014]: 2026-02-25 14:15:42.366 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 14:15:42 compute-0 nova_compute[244014]: 2026-02-25 14:15:42.386 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 14:15:42 compute-0 nova_compute[244014]: 2026-02-25 14:15:42.388 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 14:15:42 compute-0 nova_compute[244014]: 2026-02-25 14:15:42.388 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
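[annotation] The inventory reported to provider cb4dae98-2ac3-4218-9445-2320139e12ad above is what the placement service schedules against: for each resource class, usable capacity is (total - reserved) * allocation_ratio. Plugging in the logged values (helper name is ours, not nova code):

    def usable_capacity(inventory):
        # placement's capacity rule: (total - reserved) * allocation_ratio
        return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inventory.items()}

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    print(usable_capacity(inventory))
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2

So this otherwise idle 8-vCPU node can hold up to 32 vCPUs of instances at the configured 4.0 CPU overcommit, while disk is undercommitted at 0.9.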
Feb 25 14:15:42 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Feb 25 14:15:42 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2792650226' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 25 14:15:42 compute-0 ceph-mon[76335]: pgmap v4673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3143190716' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 25 14:15:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/1394462388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 14:15:42 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2792650226' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 25 14:15:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 25 14:15:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078018889' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 25 14:15:43 compute-0 nova_compute[244014]: 2026-02-25 14:15:43.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:43 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:43 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 25 14:15:43 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043839112' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 25 14:15:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2078018889' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 25 14:15:44 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4043839112' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23532 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:44 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
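[annotation] The pg_autoscaler block above is reproducible arithmetic. Each `effective_target_ratio` line ends with the subtree capacity (64411926528 bytes, the cluster's ~60 GiB), and each pool's raw pg target is usage_ratio * bias * mon_target_pg_per_osd * OSD count — 100 * 3 = 300 here, assuming the three OSDs typical of a cluster this size (the `osd tree` output is not shown in this log). The raw value is then quantized to a power of two and floored at the pool's pg_num_min, 32 by default, which is why the near-empty pools all stay at 32 while '.mgr' and the CephFS metadata pool (lower minimums) sit at 1 and 16. A simplified sketch with an illustrative function name, ignoring target_size_ratio/target_size_bytes since none are set here:

    import math

    def pg_target(usage_ratio, bias, n_osds,
                  pg_num_min=32, target_pg_per_osd=100):
        # Raw target exactly as printed in the autoscaler lines above.
        raw = usage_ratio * bias * target_pg_per_osd * n_osds
        # Quantize to a power of two, never below the pool's pg_num_min.
        quantized = 2 ** round(math.log2(raw)) if raw >= 1 else 1
        return raw, max(pg_num_min, quantized)

    # 'images' pool, bias 1.0, 3 OSDs:
    print(pg_target(0.0006714637386478266, 1.0, 3))
    # -> (~0.2014391215943..., 32), matching the logged
    #    "pg target 0.20143912159434796 quantized to 32"
    # '.mgr' (pg_num_min 1) and 'cephfs.cephfs.meta' (pg_num_min 16,
    # bias 4.0) reproduce their logged 1 and 16 the same way.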
Feb 25 14:15:44 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 25 14:15:44 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529668635' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 25 14:15:45 compute-0 ceph-mon[76335]: pgmap v4674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:45 compute-0 ceph-mon[76335]: from='client.23532 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:45 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/2529668635' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 25 14:15:45 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb 25 14:15:45 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388960921' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 25 14:15:45 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:45 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23538 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 14:15:46 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/388960921' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 25 14:15:46 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Feb 25 14:15:46 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889567173' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 25 14:15:46 compute-0 nova_compute[244014]: 2026-02-25 14:15:46.388 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:46 compute-0 nova_compute[244014]: 2026-02-25 14:15:46.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 14:15:46 compute-0 nova_compute[244014]: 2026-02-25 14:15:46.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 14:15:46 compute-0 nova_compute[244014]: 2026-02-25 14:15:46.436 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 14:15:46 compute-0 nova_compute[244014]: 2026-02-25 14:15:46.437 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
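[annotation] The "Running periodic task ComputeManager._heal_instance_info_cache" / "_instance_usage_audit" lines above come from oslo.service's periodic-task machinery: methods on the manager are marked with a decorator and a dispatcher invokes each one on its own interval. A minimal sketch of that pattern (class and spacing are illustrative; nova's actual intervals are config-driven):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        """Shape of the periodic tasks logged above."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # nova refreshes one instance's network info cache per pass;
            # with no instances it logs "Didn't find any instances for
            # network info cache update." as seen above.
            pass

The run_periodic_tasks frames in the log are the dispatcher walking this registry, one DEBUG line per task per cycle.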
Feb 25 14:15:46 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23542 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:47 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23544 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: pgmap v4675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:47 compute-0 ceph-mon[76335]: from='client.23538 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/889567173' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: from='client.23542 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Feb 25 14:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034781191' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 25 14:15:47 compute-0 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 14:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312368934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 14:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312368934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:15:47 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Feb 25 14:15:47 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685298725' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Feb 25 14:15:48 compute-0 ceph-mon[76335]: from='client.23544 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/4034781191' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 25 14:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/312368934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 14:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.10:0/312368934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 14:15:48 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3685298725' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
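[annotation] The back-to-back 'df' and 'osd pool get-quota' dispatches at 14:15:47 come from 192.168.122.10 — a different client host than the 192.168.122.100 commands around them — as `entity='client.openstack'`, with the compact JSON of a librados mon_command. That pairing is consistent with the Cinder RBD driver's periodic capacity refresh for the 'volumes' pool: free space from 'df', capped by any pool quota. A hedged sketch of the quota half via the CLI (function name ours; field names follow the stock get-quota JSON, where 0 means no quota set):

    import json
    import subprocess

    def pool_quota(pool="volumes", user="openstack",
                   conf="/etc/ceph/ceph.conf"):
        # CLI equivalent of the get-quota mon_command dispatched above.
        out = subprocess.check_output(
            ["ceph", "osd", "pool", "get-quota", pool, "--format=json",
             "--id", user, "--conf", conf])
        q = json.loads(out)
        # Both fields read 0 when no quota is set on the pool.
        return q["quota_max_bytes"], q["quota_max_objects"]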
Feb 25 14:15:48 compute-0 nova_compute[244014]: 2026-02-25 14:15:48.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:48 compute-0 nova_compute[244014]: 2026-02-25 14:15:48.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23554 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23556 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 14:15:48 compute-0 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 14:15:48 compute-0 nova_compute[244014]: 2026-02-25 14:15:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 14:15:48 compute-0 sshd-session[440975]: Connection closed by authenticating user root 46.101.242.142 port 38230 [preauth]
Feb 25 14:15:49 compute-0 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Feb 25 14:15:49 compute-0 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3130480793' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Feb 25 14:15:49 compute-0 ceph-mon[76335]: pgmap v4676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 14:15:49 compute-0 ceph-mon[76335]: from='client.23554 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 14:15:49 compute-0 ceph-mon[76335]: from='client.? 192.168.122.100:0/3130480793' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
